---
title: 'Multilingual Sentiment Analyzer'
emoji: 📉
colorFrom: gray
colorTo: pink
sdk: docker
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# 🌍 Multilingual Sentiment Analyzer

A comprehensive AI-powered sentiment analysis application with explainable AI features (SHAP & LIME) supporting multiple languages.

## 🚀 Features

- **Multi-language Support**: Auto-detect or manually select from English, Chinese, Spanish, French, German, and Swedish
- **Advanced Analysis**: SHAP and LIME explainable AI integration
- **Batch Processing**: Analyze multiple texts simultaneously with parallel processing
- **Interactive Visualizations**: Real-time charts and dashboards with Plotly
- **History Management**: Track and export analysis history
- **Multiple Themes**: Customizable color themes for visualizations
- **File Upload**: Support for CSV and TXT file uploads
- **Export Functionality**: Export results in CSV and JSON formats

## 📁 Project Structure

```
sentiment_analyzer/
├── config.py           # Configuration settings
├── models.py           # Model management & sentiment engine
├── analysis.py         # SHAP/LIME explainable AI
├── visualization.py    # Plotly visualizations
├── data_utils.py       # Data processing & history management
├── app.py              # Application logic
├── main.py             # Gradio interface & startup
├── requirements.txt    # Python dependencies
├── Dockerfile          # Docker configuration
└── README.md           # This file
```

## 🛠 Installation

### Option 1: Local Installation

```bash
# Clone the repository
git clone <repository-url>
cd sentiment_analyzer

# Install dependencies
pip install -r requirements.txt

# Download NLTK data
python -c "import nltk; nltk.download('stopwords'); nltk.download('punkt')"

# Run the application
python main.py
```

### Option 2: Docker

```bash
# Build the Docker image
docker build -t sentiment-analyzer .

# Run the container
docker run -p 7860:7860 sentiment-analyzer
```

### Option 3: Hugging Face Spaces

1. Create a new Space on Hugging Face
2. Select "Docker" as the SDK
3. Upload all project files
4. The app will automatically deploy

## 🎯 Usage

### Single Text Analysis

1. Navigate to the "Single Analysis" tab
2. Enter your text in any supported language
3. Select a language (or use Auto Detect)
4. Choose a visualization theme
5. Configure preprocessing options
6. Click "Analyze"

### Advanced Analysis (SHAP/LIME)

1. Go to the "Advanced Analysis" tab
2. Enter text for explainable AI analysis
3. Adjust the number of samples (50-300)
4. Click "SHAP Analysis" or "LIME Analysis"
5. View the feature importance visualizations
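For orientation, the sketch below shows roughly what a LIME text explanation looks like outside the UI, using one of the CardiffNLP models listed under "Supported Models". It is a minimal sketch, not the app's actual code: the checkpoint name, label set, and `predict_proba` wrapper are illustrative assumptions, and the real wiring lives in `models.py` and `analysis.py`.

```python
# Minimal sketch (assumptions noted below): explain one sentiment prediction with LIME.
# The model checkpoint and label names are assumed, not taken from config.py.
import numpy as np
from lime.lime_text import LimeTextExplainer
from transformers import pipeline

# Assumed multilingual checkpoint; the project's config.py defines the real one.
clf = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
    top_k=None,  # return scores for every class, not just the top label
)

# Label names depend on the checkpoint; adjust if your model uses different ones.
LABELS = ["negative", "neutral", "positive"]

def predict_proba(texts):
    """Return an (n_samples, n_classes) probability matrix, as LIME expects."""
    results = clf(list(texts), truncation=True)
    probs = []
    for scores in results:
        by_label = {s["label"].lower(): s["score"] for s in scores}
        probs.append([by_label.get(label, 0.0) for label in LABELS])
    return np.array(probs)

explainer = LimeTextExplainer(class_names=LABELS)
explanation = explainer.explain_instance(
    "The service was slow but the food was amazing.",
    predict_proba,
    num_features=10,
    num_samples=100,  # comparable to the app's 50-300 sample slider
)
print(explanation.as_list())  # word/weight pairs for the explained class
```

SHAP works on the same principle: it perturbs the input text and attributes the prediction back to individual tokens, which is why lowering the sample count speeds up both analyses.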
### Batch Processing

1. Switch to the "Batch Analysis" tab
2. Upload a file or enter multiple texts (one per line)
3. Configure analysis settings
4. Click "Analyze Batch"
5. View summary statistics and detailed results

### History & Analytics

1. Open the "History & Analytics" tab
2. View the comprehensive analysis dashboard
3. Export data in CSV or JSON format
4. Clear history when needed

## 🔧 Configuration

Edit `config.py` to customize:

- **Model Settings**: Change supported models and languages
- **Processing Limits**: Adjust batch sizes and memory limits
- **UI Themes**: Modify color schemes
- **Cache Settings**: Configure caching parameters

## 📊 Supported Models

- **English**: CardiffNLP Twitter RoBERTa
- **Multilingual**: CardiffNLP XLM-RoBERTa
- **Chinese**: UER RoBERTa Chinese

## 🎨 Themes

- **Default**: Green/Red/Orange
- **Ocean**: Blue/Orange/Cyan
- **Dark**: Darker variants
- **Rainbow**: Purple/Pink/Red

## ⚡ Performance Features

- **Model Caching**: LRU cache for efficient model management
- **Parallel Processing**: Multi-threaded batch analysis
- **Memory Optimization**: Automatic cleanup and GPU management
- **Lazy Loading**: Models loaded on demand

## 🐛 Troubleshooting

### Common Issues

- **SHAP/LIME Errors**: Reduce the sample size or text length
- **Memory Issues**: Lower the batch size or enable text cleaning
- **Model Loading**: Check your internet connection for model downloads
- **Port Conflicts**: Change the port in `main.py` if 7860 is occupied

### Performance Tips

- Use a GPU if available for faster processing
- Enable text cleaning for better preprocessing
- Reduce sample sizes for faster explainable AI analysis
- Clear history periodically to save memory

## 📄 License

This project is open source and available under the MIT License.

## 🤝 Contributing

Contributions are welcome! Please feel free to submit pull requests or open issues for bugs and feature requests.

## 📞 Support

For support and questions, please open an issue in the repository.

Built with ❤️ using Gradio, Transformers, SHAP, LIME, and Plotly