---
title: ChatFed Generator
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
license: mit
---
# ChatFed Generator - MCP Server
A language model-based generation service designed for ChatFed RAG (Retrieval-Augmented Generation) pipelines. This module serves as an MCP (Model Context Protocol) server that generates contextual responses using configurable LLM providers with support for retrieval result processing.
## MCP Endpoint

The main MCP function is `generate`, which provides context-aware text generation using configurable LLM providers when the service is configured with valid API credentials.
**Parameters:**

- `query` (str, required): The question or query to be answered
- `context` (str | list, required): Context for answering; either plain text or a list of retrieval result dictionaries

**Returns:** A string containing the generated answer based on the provided context and query.
**Example usage:**

```python
from gradio_client import Client

client = Client("ENTER CONTAINER URL / SPACE ID")
result = client.predict(
    query="What are the key findings?",
    context="Your relevant documents or context here...",
    api_name="/generate"
)
print(result)
```
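Since `context` also accepts a list of retrieval result dictionaries, here is a hedged sketch of preparing such a list. The field names (`content`, `metadata`) and the sample data are assumptions for illustration, not the schema this service mandates; check what your retriever actually emits.

```python
# Hypothetical retrieval results; the "content"/"metadata" keys are
# assumed for illustration and may differ from your retriever's output.
retrieval_results = [
    {"content": "Global temperatures rose about 1.1 C since 1900.",
     "metadata": {"source": "report.pdf", "page": 3}},
    {"content": "Sea levels rose roughly 20 cm over the same period.",
     "metadata": {"source": "report.pdf", "page": 7}},
]

def flatten_context(results):
    """Join retrieval result dicts into one plain-text context string."""
    return "\n\n".join(r["content"] for r in results)

# Either pass retrieval_results directly as `context`, or flatten it
# to plain text first, depending on which form your pipeline prefers.
context_text = flatten_context(retrieval_results)
print(context_text)
```

Because the endpoint accepts both forms, you could pass `retrieval_results` straight to `client.predict(..., context=retrieval_results, api_name="/generate")` instead of flattening it yourself.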
## Configuration

### LLM Provider Configuration
- Set your preferred inference provider in `params.cfg`
- Configure the model and generation parameters
- Set the required API key environment variable
- [Optional] Adjust `temperature` and `max_tokens` settings
- Run the app:

```bash
docker build -t chatfed-generator .
docker run -p 7860:7860 chatfed-generator
```
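The source does not show the contents of `params.cfg`, so the following is only a hedged sketch of what the steps above might look like; every section and key name here is an assumption, not the file's actual schema.

```ini
; Hypothetical params.cfg sketch -- actual section and key names may differ.
[generator]
PROVIDER = openai
MODEL = gpt-4o-mini
TEMPERATURE = 0.2
MAX_TOKENS = 512
```

The API key itself would then be supplied via the environment at runtime rather than stored in the file, e.g. `docker run -e OPENAI_API_KEY=... -p 7860:7860 chatfed-generator` (the variable name depends on the provider you configured).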