A comprehensive and scalable Streamlit Chatbot Application that integrates multiple language models through the Ollama API, featuring sophisticated model management, interactive chat interfaces, and RAG (Retrieval-Augmented Generation) capabilities for document analysis.
- Multi-Model Support: Seamlessly interact with various state-of-the-art Ollama language models, including Llama, Mistral, Gemma, and over 125 others.
- Model Management Interface: Easy-to-use interface for downloading, managing, and switching between different language models.
- Real-time Chat Interface: Clean interface with model-specific chat history and streamed responses.
- RAG-Powered Document Analysis: Advanced document processing system supporting PDF analysis with multiple embedding models for context-aware document querying and intelligent responses.
- Responsive Design: Modern, responsive UI with animated components and intuitive navigation.
- Python 3.10 or higher
- Ollama API (latest version)
- Streamlit
- 8GB+ RAM (varies based on model size)
Important Note: The demo version cannot run the Ollama API; run the app locally to use all features.
- Clone the Repository
git clone https://github.com/TsLu1s/talknexus.git
cd talknexus
- Set Up Conda Environment
First, ensure you have Conda installed. Then create and activate a new environment with Python 3.10:
# Create new environment
conda create -n ollama_env python=3.10
# Activate the environment
conda activate ollama_env
- Install Dependencies
pip install -r requirements.txt
- Install Ollama
Visit the Ollama website and follow the installation instructions for your operating system (a PC restart may be required).
- Start the Application
streamlit run navegation.py
- Explore the Ollama model ecosystem with detailed model cards
- View comprehensive information about model capabilities and specializations:
- Language Models, Specialized Models, Task-Specific Models, Domain-Specific Models...
- Access quick reference for hardware requirements
- Find links to essential documentation and resources
- Navigate to the "Language Models Management" section
- Select and download desired models from the available list
- Monitor installation progress and system requirements
- Manage installed models through the interface
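The management page builds on Ollama's local REST API. As a minimal sketch of how installed models might be listed, assuming Ollama's default endpoint at `http://localhost:11434/api/tags` (the helper names below are illustrative, not taken from this repository):

```python
import json
import urllib.request

# Ollama's default local endpoint for listing installed models (assumed default port).
OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"

def parse_installed_models(tags_response: dict) -> list[str]:
    """Extract model names from the JSON payload returned by /api/tags."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_installed_models(url: str = OLLAMA_TAGS_URL) -> list[str]:
    """Query the local Ollama server for its installed models."""
    with urllib.request.urlopen(url) as resp:
        return parse_installed_models(json.load(resp))
```

`parse_installed_models` is kept separate from the network call so the parsing logic can be exercised without a running server.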
- Select a model from the dropdown menu
- Enter your message in the chat input
- View real-time responses in the chat window
- Switch between models as needed
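Because chat history is model-specific, switching models does not mix conversations. A minimal sketch of that bookkeeping (in the app this state would live in Streamlit's `st.session_state`; the function below is an illustrative stand-in, not code from this repository):

```python
def append_message(histories: dict, model: str, role: str, content: str) -> dict:
    """Record one chat turn under the history bucket for the given model."""
    histories.setdefault(model, []).append({"role": role, "content": content})
    return histories

# Each model keeps its own transcript, so switching models back and forth is lossless.
histories = {}
append_message(histories, "llama3", "user", "Hello!")
append_message(histories, "llama3", "assistant", "Hi there!")
append_message(histories, "mistral", "user", "A fresh conversation.")
```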
- Upload PDF documents for analysis
- Select embedding model and language model
- Ask questions about your documents
- Receive context-aware responses based on document content
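The RAG flow boils down to: split the document into chunks, embed the chunks and the question, and answer from the most similar chunks. The sketch below illustrates only that retrieval step, with a toy bag-of-words "embedding" standing in for the real embedding models the app uses; all function names here are illustrative:

```python
import math
from collections import Counter

def chunk_text(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks (real splitters usually overlap chunks)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words vector; the app would call a real embedding model instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

The retrieved chunks would then be prepended to the prompt sent to the selected language model, which is what makes the responses context-aware.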
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See LICENSE for more information.
Luis Santos - LinkedIn