README.md: add explanation of local/HF/demo modes

The `install_agrilens.py` script:
- Creates the virtual environment if needed
- Installs all dependencies (`requirements.txt`)
- Checks for the model in `models/`
- Shows launch instructions
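The four steps above could be sketched as follows. This is a minimal illustration of what such an installer typically does, not the actual contents of `install_agrilens.py`; the function name and the exact checks are assumptions.

```python
import os
import subprocess
import sys

def install(venv_dir="venv"):
    """Hypothetical sketch of the steps install_agrilens.py automates."""
    # Create the virtual environment if it does not exist yet.
    if not os.path.isdir(venv_dir):
        subprocess.run([sys.executable, "-m", "venv", venv_dir], check=True)
    # Install all dependencies with the venv's own pip.
    pip = os.path.join(venv_dir, "Scripts" if os.name == "nt" else "bin", "pip")
    subprocess.run([pip, "install", "-r", "requirements.txt"], check=True)
    # Check for the model in models/.
    if not (os.path.isdir("models") and os.listdir("models")):
        print("No model found in models/ (offline mode unavailable)")
    # Show launch instructions.
    print("Launch with: streamlit run src/streamlit_app.py")
```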
---
## 🇫🇷 Operating modes (local vs Hugging Face)

| Platform | Model used | Real inference | Internet required | Notes |
|----------|-----------|----------------|-------------------|-------|
| Local (offline) | Downloaded model (`models/` folder) | Yes | No | Fast, 100% offline |
| Hugging Face (HF token) | google/gemma-3n-E2B-it (HF API) | Yes | Yes | GPU Space recommended, token required |
| Hugging Face (public) | None (demo mode) | No | Yes | Mock answer, UI test only |

### Instructions

- **Local (offline)**:
  - Place the downloaded model in the `models/` folder
  - Launch the application normally (`streamlit run src/streamlit_app.py`)
  - No Internet access required
- **Hugging Face (real inference)**:
  - Add the `HF_TOKEN` environment variable in the Space settings
  - Accept the model's terms of use on [the model page](https://huggingface.co/google/gemma-3n-E2B-it)
  - Use a GPU Space for better performance
- **Hugging Face (demo mode)**:
  - If no token is present, the application stays in demo mode (no real inference, mock answer)
## 🇬🇧 Modes of operation (local vs Hugging Face)

| Platform | Model used | Real inference | Internet required | Notes |
|----------|-----------|----------------|-------------------|-------|
| Local (offline) | Downloaded model (`models/` folder) | Yes | No | Fast, 100% offline |
| Hugging Face (HF token) | google/gemma-3n-E2B-it (HF API) | Yes | Yes | GPU Space recommended, token required |
| Hugging Face (public) | None (demo mode) | No | Yes | Mock answer, UI test only |

### Instructions

- **Local (offline)**:
  - Put the downloaded model in the `models/` folder
  - Launch the app normally (`streamlit run src/streamlit_app.py`)
  - No Internet access required
- **Hugging Face (real inference)**:
  - Add the `HF_TOKEN` environment variable in the Space settings
  - Accept the model terms on [the model page](https://huggingface.co/google/gemma-3n-E2B-it)
  - Use a GPU Space for best performance
- **Hugging Face (demo mode)**:
  - If no token is present, the app stays in demo mode (no real inference, only a mock answer)
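The mode selection described in the table could be sketched like this. It is a minimal illustration under the stated rules (local model wins, then the HF token, then demo); the function name and its exact logic are assumptions, not the actual code in `src/streamlit_app.py` — only `HF_TOKEN` and the `models/` folder come from the README.

```python
import os

def pick_mode(models_dir="models", env=None):
    """Return the operating mode: 'local', 'hf_api', or 'demo'."""
    env = os.environ if env is None else env
    # Local model present: real inference, fully offline.
    if os.path.isdir(models_dir) and os.listdir(models_dir):
        return "local"
    # HF token configured: real inference through the Hugging Face API.
    if env.get("HF_TOKEN"):
        return "hf_api"
    # Neither: demo mode with a mock answer (UI test only).
    return "demo"
```

Passing `env` explicitly makes the rule easy to test without touching the real environment.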