duohub is the generative graph RAG company. Follow us on GitHub.
Duohub provides blazing fast graph RAG services specifically designed for voice AI and other low-latency applications, delivering context in under 50ms.
```python
from duohub import Duohub

# Initialise the client with your API key
client = Duohub(api_key="your_api_key")

# Query a memory graph for context relevant to your question
response = client.query(query="Your question here", memoryID="your_memory_id")
```
## Why Duohub? ⭐
- 🚄 **Lightning-Fast**: Delivers query responses in under 50ms, making it ideal for real-time voice AI applications
- 🎯 **High Precision**: Graph-based memory ensures accurate and contextually relevant responses
- 🔌 **Easy Integration**: Get started with just 3 lines of code - no complex setup or infrastructure needed
- 🌍 **Global Ready**: Data replicated across 3 locations by default for consistent low-latency performance
- 🎛️ **Flexible Options**: Choose between vector or graph RAG based on your needs
- 🛠️ **Built-in Processing**: Includes coreference resolution, fact extraction, and entity resolution out of the box
- 🏢 **Enterprise Grade**: Supports on-premise deployment, custom ontologies, and dedicated support
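A common pattern is to feed the retrieved context into an LLM prompt. The sketch below shows one way to do that; `fetch_context` is a hypothetical stand-in for `client.query(...)`, since the exact shape of the SDK's response object is not shown above — treat both the function and the response format as assumptions.

```python
# Minimal sketch: wiring graph RAG context into an LLM prompt.
# `fetch_context` is a placeholder for client.query(query=..., memoryID=...);
# the real duohub response shape may differ.

PROMPT_TEMPLATE = """Answer using only the context below.

Context:
{context}

Question: {question}
Answer:"""


def fetch_context(question: str, memory_id: str) -> str:
    # Placeholder for: client.query(query=question, memoryID=memory_id)
    # Assumed here to return the retrieved context as plain text.
    return "duohub delivers graph RAG context in under 50ms."


def build_prompt(question: str, memory_id: str) -> str:
    # Combine retrieved context and the user's question into one prompt
    context = fetch_context(question, memory_id)
    return PROMPT_TEMPLATE.format(context=context, question=question)


prompt = build_prompt("How fast is duohub?", "your_memory_id")
print(prompt)
```

Because retrieval returns in under 50ms, this prompt-assembly step adds negligible latency before the LLM call, which is what makes the pattern workable for real-time voice agents.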