Update requirements.txt
requirements.txt CHANGED (+2 -21)
@@ -1,21 +1,2 @@
-
-
-A simple Gradio app that uses Hugging Face models (for example: Qwen2.5-Coder-32B) via the Hugging Face Inference API.
-It provides an "Analyze → Rewrite" mode where the model first reviews the input and then generates an optimized rewrite.
-
-## Features
-- Analyze input code or prompt (bugs, edge-cases, security, performance).
-- Produce an optimized rewritten version ready to run or feed into other models.
-- Full user control over model name, max_new_tokens, temperature, top_p, top_k.
-- Ready for deployment on Hugging Face Spaces (Gradio).
-
-## Files
-- `app.py` : main Gradio application
-- `requirements.txt` : Python dependencies
-- `README.md` : this file
-
-## Run locally (using Hugging Face Inference API)
-1. Create a Hugging Face API token at https://huggingface.co/settings/tokens
-2. Export token as an environment variable or provide it in the UI:
-```bash
-export HF_API_TOKEN="your_token_here"
+gradio>=3.50
+requests>=2.31
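The two dependencies this commit pins (`gradio` and `requests`) are enough for the app the removed README describes. As a rough sketch only — the actual `app.py` is not part of this commit, and the endpoint, model ID, prompt wording, and default sampling values below are all assumptions — the "Analyze → Rewrite" flow against the Hugging Face Inference API could look like:

```python
import os
import requests

# Assumed endpoint; the README names Qwen2.5-Coder-32B only as an example model.
API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"

def build_payload(code, max_new_tokens=512, temperature=0.2, top_p=0.9, top_k=50):
    # Pack the two-step "Analyze -> Rewrite" instruction plus the user-tunable
    # sampling controls the README lists into one Inference API request body.
    prompt = (
        "First analyze the following input for bugs, edge-cases, security and "
        "performance issues, then produce an optimized rewrite:\n\n" + code
    )
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "temperature": temperature,
            "top_p": top_p,
            "top_k": top_k,
        },
    }

def analyze_rewrite(code: str) -> str:
    # HF_API_TOKEN comes from the environment, as the README's run instructions show.
    headers = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}
    resp = requests.post(API_URL, headers=headers, json=build_payload(code), timeout=120)
    resp.raise_for_status()
    return resp.json()[0]["generated_text"]

def launch_ui():
    import gradio as gr  # lazy import: only needed when serving the web UI
    gr.Interface(
        fn=analyze_rewrite,
        inputs=gr.Textbox(lines=12, label="Code or prompt"),
        outputs=gr.Textbox(lines=12, label="Optimized rewrite"),
        title="Analyze → Rewrite",
    ).launch()

if __name__ == "__main__":
    launch_ui()
```

`build_payload` is kept separate from the HTTP call so the prompt and sampling parameters can be inspected or tested without a network round-trip.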