
VT.ai

Minimal multimodal AI chat app with dynamic conversation routing

Overview

VT.ai is a minimal multimodal AI chat app that provides a seamless chat interface for interacting with various Large Language Models (LLMs). It supports both cloud-based providers and local model execution through Ollama.

Key Features 🚀

  • Multi-modal Interactions

    • Text and image processing capabilities
    • Real-time streaming responses
    • [Beta] Advanced Assistant features via OpenAI's Assistant API
  • Flexible Model Support

    • OpenAI, Anthropic, and Google integration
    • Local model execution via Ollama
    • Dynamic parameter adjustment (temperature, top-p)
  • Modern Architecture

    • Built on Chainlit for responsive UI
    • SemanticRouter for intelligent conversation routing
    • Real-time response streaming
    • Customizable model settings
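
The "dynamic parameter adjustment" feature above boils down to merging user-supplied sampling settings with sane defaults before each LLM call. A minimal sketch of that idea (the names `build_llm_settings` and `DEFAULTS` are illustrative, not from the VT.ai codebase):

```python
# Hypothetical sketch: merge user overrides with defaults and clamp
# temperature/top-p to valid ranges before passing them to an LLM call.
DEFAULTS = {"temperature": 0.7, "top_p": 1.0}

def build_llm_settings(temperature=None, top_p=None):
    """Return a settings dict, clamping overrides to valid ranges."""
    settings = dict(DEFAULTS)
    if temperature is not None:
        # OpenAI-style APIs accept temperatures in [0.0, 2.0]
        settings["temperature"] = min(max(temperature, 0.0), 2.0)
    if top_p is not None:
        # top-p is a probability mass, so it must lie in [0.0, 1.0]
        settings["top_p"] = min(max(top_p, 0.0), 1.0)
    return settings

print(build_llm_settings(temperature=3.5, top_p=0.9))
# → {'temperature': 2.0, 'top_p': 0.9}
```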

Screenshots

(Screenshots: multi-LLM provider selection; a multi-modal conversation.)

Quick Start Guide

Prerequisites

  • Python 3.7+
  • (Recommended) rye for dependency management
  • For local models: Ollama installed and running (see below)

Installation

  1. Clone the repository
  2. Copy .env.example to .env and configure your API keys
  3. Set up Python environment:
    python3 -m venv .venv
    source .venv/bin/activate
    pip3 install -r requirements.txt
  4. Optional: Train semantic router
    python3 src/router/trainer.py
  5. Launch the application:
    chainlit run src/app.py -w
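
Step 4 trains the semantic router, which classifies each incoming message into a route (e.g. image generation vs. plain chat) by comparing it against example utterances. The real app uses the SemanticRouter with learned embeddings; the bag-of-words version below only sketches the idea, and all names in it are hypothetical:

```python
# Toy semantic routing: pick the route whose example utterances are most
# similar (cosine similarity over word counts) to the user message.
from collections import Counter
import math

ROUTES = {
    "image-generation": ["draw a picture", "generate an image", "create artwork"],
    "text-chat": ["answer a question", "explain this topic", "summarize the text"],
}

def _vec(text):
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(message):
    """Return the name of the best-matching route for a message."""
    scores = {
        name: max(_cosine(_vec(message), _vec(u)) for u in utterances)
        for name, utterances in ROUTES.items()
    }
    return max(scores, key=scores.get)

print(route("please generate an image of a cat"))
# → image-generation
```

Training a real router replaces the word-count vectors with embedding vectors, but the routing step stays the same: nearest route wins.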

Using Local Models with Ollama

# Download model
ollama pull llama3

# Start Ollama server
ollama serve
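
Once `ollama serve` is running, it exposes a local REST API (by default at `http://localhost:11434`); a chat client posts JSON to `/api/chat`. A sketch of constructing such a request, assuming Ollama's documented payload shape (`build_chat_request` is a hypothetical helper, not part of VT.ai):

```python
# Build the JSON body a client would POST to Ollama's /api/chat endpoint.
import json

OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model, prompt, stream=True):
    """Construct the JSON request body for Ollama's chat API."""
    return json.dumps({
        "model": model,                                       # e.g. "llama3"
        "messages": [{"role": "user", "content": prompt}],    # chat history
        "stream": stream,                                     # stream tokens as generated
    })

body = build_chat_request("llama3", "Hello!")
print(json.loads(body)["model"])
# → llama3
```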

Technical Stack

  • Python with Chainlit for the chat UI
  • SemanticRouter for conversation routing
  • Ollama for local model execution

Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit changes: git commit -m 'Add amazing feature'
  4. Push to branch: git push origin feature/amazing-feature
  5. Open a Pull Request

License

This project is licensed under the MIT License. See LICENSE for details.

Connect

  • Email: [email protected]
