# thom.chat

Open-source, self-hostable T3 Chat.

## ✨ About

thom.chat is an open-source alternative/clone of T3 Chat, built for Theo's cloneathon.

It is self-hostable: you deploy your own instance of the app along with your own [Convex](https://www.convex.dev/) backend.

While thom.chat is a clone, its feature set is not identical to T3 Chat's.
## 🎯 Key Features

### 🤖 AI & Models
- Multiple AI Providers - OpenAI, Anthropic, Google Gemini, Mistral, Cohere, OpenRouter
- 600+ AI Models across all providers
- Bring Your Own API Keys - Users must provide their own API keys (see the sketch after this list)
- No Usage Limits - Use any model without restrictions when you have the API key
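
Bringing your own keys means the backend stores each user's provider keys rather than relying on a single shared server key. Below is a minimal, hypothetical sketch of that idea as a Convex mutation; the `provider_keys` table name and argument shape are illustrative assumptions, not the project's actual schema.

```ts
// Hypothetical sketch of bring-your-own-keys storage in Convex.
// Table name ("provider_keys") and fields are assumptions for illustration.
import { mutation } from './_generated/server';
import { v } from 'convex/values';

export const setProviderKey = mutation({
  args: {
    provider: v.string(), // e.g. "openai", "anthropic", "openrouter"
    apiKey: v.string(),
  },
  handler: async (ctx, { provider, apiKey }) => {
    const identity = await ctx.auth.getUserIdentity();
    if (!identity) throw new Error('Not signed in');

    // Keys are scoped to the signed-in user, so model calls are billed to
    // that user's own provider account instead of a shared one.
    await ctx.db.insert('provider_keys', {
      userId: identity.subject,
      provider,
      apiKey,
    });
  },
});
```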
### 💬 Chat Experience
- Real-time streaming responses
- Streaming on the server for interruption-free chats, even when reloading (see the sketch after this list)
- Chat branching for exploring different conversation paths
- Full-text search across your entire chat history
- Privacy mode for secure screen sharing
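
Because the model response is streamed and persisted on the server as it arrives, a client that reloads mid-response only has to re-subscribe to the stored message. The sketch below illustrates that pattern with a Convex mutation and query; the `messages` table and its fields are illustrative assumptions, not the project's actual code.

```ts
// Hypothetical sketch of server-side streaming persisted in Convex.
// The "messages" table and its "content" field are assumptions.
import { mutation, query } from './_generated/server';
import { v } from 'convex/values';

// Called by the server-side streaming loop for each chunk received from the
// model provider, so progress survives client disconnects and reloads.
export const appendChunk = mutation({
  args: { messageId: v.id('messages'), chunk: v.string() },
  handler: async (ctx, { messageId, chunk }) => {
    const message = await ctx.db.get(messageId);
    if (!message) throw new Error('Message not found');
    await ctx.db.patch(messageId, { content: message.content + chunk });
  },
});

// The client subscribes to this query; reloading simply re-subscribes and
// picks up the content accumulated so far.
export const getMessage = query({
  args: { messageId: v.id('messages') },
  handler: (ctx, { messageId }) => ctx.db.get(messageId),
});
```

In a setup like this, the loop that reads from the provider and calls `appendChunk` would typically run in a server-side action or route; the sketch only shows the persistence half of the pattern.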
### 📁 Rich Content
- File uploads with image support
- Web search integration for real-time information
- Markdown rendering with syntax highlighting
- Chat sharing with public links
### ⚡ Productivity
- Cursor-like rules for consistent AI behavior (see the sketch after this list)
- Keyboard shortcuts for power users
- Enhance prompt button for better prompts
- Message regeneration capabilities
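
"Rules" here are user-written instructions applied to every conversation, similar to Cursor's rules files. A minimal sketch of one way to do that, assuming a hypothetical `Rule` shape, is to fold the enabled rules into the system prompt:

```ts
// Hypothetical sketch: fold user-defined rules into the system prompt.
// The Rule shape and prompt wording are illustrative assumptions.
type Rule = { name: string; content: string; enabled: boolean };

function buildSystemPrompt(basePrompt: string, rules: Rule[]): string {
  const active = rules.filter((rule) => rule.enabled);
  if (active.length === 0) return basePrompt;

  const ruleSection = active
    .map((rule) => `- ${rule.name}: ${rule.content}`)
    .join('\n');

  return `${basePrompt}\n\nAlways follow these user-defined rules:\n${ruleSection}`;
}
```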
## 🛠️ Tech Stack

| Frontend | Backend |
| --- | --- |
| SvelteKit | Convex |
## 🚀 Quick Start

### Prerequisites

- Node.js 18+
- pnpm (recommended)
- At least one AI provider API key (OpenAI, Anthropic, Gemini, etc.)
### Installation

1. Clone the repository

   ```bash
   git clone https://github.com/tglide/thom-chat.git
   cd thom-chat
   ```

2. Install dependencies

   ```bash
   pnpm install
   ```

3. Environment setup

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

4. Start the development server

   ```bash
   pnpm dev
   ```

5. Open your browser at http://localhost:5173
## 🎮 Usage

### Getting Started

1. Sign up for a free account
2. Add API Keys - Go to Settings and add API keys for the providers you want to use:
   - OpenAI - GPT models, DALL-E, Whisper
   - Anthropic - Claude models
   - Google Gemini - Gemini models and vision
   - Mistral - Mistral models and embeddings
   - Cohere - Command models and embeddings
   - OpenRouter - Access to 300+ models
3. Start Chatting - Select any model from your enabled providers
### Supported Providers
| Provider | Models | Streaming | Tools | Vision | Embeddings |
|---|---|---|---|---|---|
| OpenAI | GPT-4, o3-mini, DALL-E, TTS | ✅ | ✅ | ✅ | ✅ |
| Anthropic | Claude 4, Claude 3.5 Sonnet | ✅ | ✅ | ✅ | ❌ |
| Google Gemini | Gemini 2.5 Pro, Imagen | ✅ | ✅ | ✅ | ✅ |
| Mistral | Mistral Large, Mistral Embed | ✅ | ✅ | ❌ | ✅ |
| Cohere | Command A, Command R+ | ✅ | ✅ | ❌ | ✅ |
| OpenRouter | 300+ models | ✅ | ✅ | ✅ | ❌ |
## 🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 🙏 Acknowledgments
- Inspired by T3 Chat
- Built with SvelteKit
- Powered by Kepler AI SDK
- Database by Convex
Made with ❤️ by Thomas G. Lopes and Aidan Blesar
🌐 Live Demo • 📖 Documentation • 🐛 Report Bug • 💡 Request Feature