# docs/QUICKSTART.md
````markdown
# Quickstart
This guide will get you up and running with **AnyCoder** in minutes.
## 1. Clone the Repository
```bash
git clone https://github.com/your-org/anycoder.git
cd anycoder
```
## 2. Install Dependencies
Make sure you have Python 3.9+ installed.
```bash
pip install --upgrade pip
pip install -r requirements.txt
```
## 3. Set Environment Variables
```bash
export HF_TOKEN=<YOUR_HUGGINGFACE_TOKEN>
export OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
export GEMINI_API_KEY=<YOUR_GEMINI_API_KEY>
```
## 4. Run the App Locally
```bash
python app.py
```
Open [http://localhost:7860](http://localhost:7860) in your browser to access the UI.
## 5. Explore Features
* **Model selector**: Choose from Groq, OpenAI, Gemini, Fireworks, and HF models.
* **Input**: Enter prompts, or upload files and images for additional context.
* **Generate**: View the generated code, a live preview, and the conversation history.
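If you would rather script against AnyCoder than use the UI, the inference layer can be called directly. A minimal sketch, assuming the `chat_completion` helper described in `docs/API_REFERENCE.md` and that the relevant provider key is set in your environment (the model ID below is illustrative):

```python
# Minimal scripted use of AnyCoder's inference layer.
from inference import chat_completion

reply = chat_completion(
    model_id="gpt-4o",  # illustrative; pick any ID from AVAILABLE_MODELS in models.py
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)
print(reply)
```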
````
---
# docs/API_REFERENCE.md
````markdown
# API Reference
This document describes the public Python modules and functions available in AnyCoder.
## `models.py`
### `ModelInfo` dataclass
```python
from dataclasses import dataclass

@dataclass
class ModelInfo:
    name: str
    id: str
    description: str
    default_provider: str = "auto"
```
### `AVAILABLE_MODELS: List[ModelInfo]`
A list of supported models with metadata.
### `find_model(identifier: str) -> Optional[ModelInfo]`
Look up a model by display name or ID; returns `None` if no match is found.
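For example, to browse the registry and resolve one entry (a sketch; the identifier below is hypothetical and depends on what `AVAILABLE_MODELS` ships with):

```python
from models import AVAILABLE_MODELS, find_model

# List every registered model with its default routing provider.
for m in AVAILABLE_MODELS:
    print(f"{m.name} ({m.id}) -> provider: {m.default_provider}")

# Resolve a model by name or ID; None means it is not registered.
model = find_model("gpt-4o")  # hypothetical identifier
if model is None:
    raise ValueError("Unknown model")
```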
---
## `inference.py`
### `chat_completion(model_id: str, messages: List[Dict[str, str]], provider: Optional[str] = None, max_tokens: int = 4096) -> str`
Send a single, non-streaming chat completion request and return the full response text.
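A sketch of a one-shot call (the model ID is illustrative):

```python
from inference import chat_completion

# Blocks until the full response is available, then returns it as a string.
answer = chat_completion(
    model_id="gpt-4o",  # illustrative; use any entry from AVAILABLE_MODELS
    messages=[{"role": "user", "content": "Explain Python list comprehensions in one sentence."}],
    max_tokens=256,
)
print(answer)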
```
### `stream_chat_completion(model_id: str, messages: List[Dict[str, str]], provider: Optional[str] = None, max_tokens: int = 4096) -> Generator[str, None, None]`
Stream partial generation results as they are produced.
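A sketch of consuming the stream (again with an illustrative model ID):

```python
from inference import stream_chat_completion

# Iterate over the generator to render tokens incrementally.
for chunk in stream_chat_completion(
    model_id="gpt-4o",  # illustrative ID
    messages=[{"role": "user", "content": "Write a haiku about refactoring."}],
):
    print(chunk, end="", flush=True)
print()
```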
---
## `hf_client.py`
### `get_inference_client(model_id: str, provider: str="auto") -> InferenceClient`
Create a Hugging Face `InferenceClient` for the given model, applying provider routing logic.
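A sketch of using the returned client directly, assuming it is the standard `huggingface_hub.InferenceClient` (as the signature suggests) and that `HF_TOKEN` is set; the model ID is illustrative:

```python
from hf_client import get_inference_client

client = get_inference_client("meta-llama/Llama-3.1-8B-Instruct", provider="auto")  # illustrative ID
result = client.chat_completion(
    messages=[{"role": "user", "content": "Say hello in one word."}],
    max_tokens=16,
)
print(result.choices[0].message.content)
```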
````
---
# docs/ARCHITECTURE.md
````markdown
# Architecture Overview
Below is a high-level diagram of AnyCoder's components and data flow:
```
                          +------------+
                          |    User    |
                          +------+-----+
                                 |
                                 v
                      +----------+---------+
                      | Gradio UI (app.py) |
                      +----------+---------+
                                 |
        +------------------------+------------------------+
        |                        |                        |
        v                        v                        v
    models.py              inference.py              plugins.py
(model registry)    (routing & chat_completion)  (extension points)
        |                        |                        |
        +------------+-----------+                        |
                     |                                    |
                     v                                    v
               hf_client.py                           deploy.py
      (HF/OpenAI/Gemini/etc routing)           (HF Spaces integration)
```
- **UI Layer** (`app.py` + Gradio): handles inputs, outputs, and state.
- **Model Registry** (`models.py`): metadata-driven list of supported models.
- **Inference Layer** (`inference.py`, `hf_client.py`): abstracts provider selection and API calls.
- **Extensions** (`plugins.py`): plugin architecture for community or custom integrations.
- **Deployment** (`deploy.py`): helpers to preview in an iframe or push to Hugging Face Spaces.
This separation ensures modularity, testability, and easy extensibility.
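To make the data flow concrete, here is a hedged sketch of one request passing through the layers. The function names match the API reference, but the glue code shown is illustrative, not the actual `app.py` implementation:

```python
# Illustrative request path: UI -> registry -> inference -> provider client.
from models import find_model
from inference import chat_completion

def handle_user_prompt(prompt: str, model_name: str) -> str:
    # 1. The UI layer receives the prompt and the selected model name.
    # 2. The registry (models.py) resolves metadata for that model.
    model = find_model(model_name)
    if model is None:
        raise ValueError(f"Unknown model: {model_name}")
    # 3. The inference layer (inference.py -> hf_client.py) routes the
    #    call to the right provider and returns the generated text.
    return chat_completion(
        model_id=model.id,
        messages=[{"role": "user", "content": prompt}],
        provider=model.default_provider,
    )
```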
````