# KaLLaM - Motivational-Therapeutic Advisor
KaLLaM is a bilingual (Thai/English) multi-agent assistant designed for physical and mental-health conversations. It orchestrates specialized agents (Supervisor, Doctor, Psychologist, Translator, Summarizer), persists state in SQLite, and exposes Gradio front-ends alongside data and evaluation tooling for benchmarking models' psychological skills.

Finalist in PAN-SEA AI DEVELOPER CHALLENGE 2025 Round 2: Develop Deployable Solutions & Pitch.

---
title: KaLLaM Demo
emoji: 🐠
license: apache-2.0
short_description: 'PAN-SEA AI DEVELOPER CHALLENGE 2025 Round 2: Develop Deploya'
---
## Features
- Multi-agent orchestration that routes requests to domain specialists.
- Thai/English support backed by SEA-Lion translation services.
- Conversation persistence with export utilities for downstream analysis.
- Ready-to-run Gradio demo and developer interfaces.
- Evaluation scripts for MISC/BiMISC-style coding pipelines.
## Requirements
- Python 3.10 or newer (3.11+ recommended; Docker/App Runner images use 3.11).
- pip, virtualenv (or equivalent), and Git for local development.
- Access tokens for the external models you plan to call (SEA-Lion, Google Gemini, optional OpenAI or AWS Bedrock).
## Quick Start (Local)
1. Clone the repository and switch into it.
2. Create and activate a virtual environment:

   ```powershell
   python -m venv .venv
   .venv\Scripts\Activate.ps1
   ```

   ```bash
   python -m venv .venv
   source .venv/bin/activate
   ```

3. Install dependencies (editable mode keeps imports pointing at `src/`):

   ```bash
   python -m pip install --upgrade pip setuptools wheel
   pip install -e .[dev]
   ```

4. Create a `.env` file at the project root (see the next section) and populate the keys you have access to.
5. Launch one of the Gradio apps:

   ```bash
   python gui/chatbot_demo.py      # bilingual demo UI
   python gui/chatbot_dev_app.py   # Thai-first developer UI
   ```
The Gradio server binds to http://127.0.0.1:7860 by default; override via `GRADIO_SERVER_NAME` and `GRADIO_SERVER_PORT`.
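A minimal sketch (not the repo's actual entry script) of how an app can honour those variables when calling `launch()`; the placeholder `demo` interface below is illustrative only:

```python
# Minimal illustrative sketch: honour GRADIO_SERVER_NAME / GRADIO_SERVER_PORT,
# falling back to Gradio's defaults when the variables are unset.
import os

import gradio as gr

# Placeholder UI; the real apps build their interfaces in gui/chatbot_demo.py
# and gui/chatbot_dev_app.py.
demo = gr.Interface(fn=lambda message: message, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch(
        server_name=os.getenv("GRADIO_SERVER_NAME", "127.0.0.1"),
        server_port=int(os.getenv("GRADIO_SERVER_PORT", "7860")),
    )
```

Recent Gradio versions also read these variables natively, so exporting them before running `python gui/chatbot_demo.py` is usually enough.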
## Environment Configuration
Configuration is loaded with `python-dotenv`, so any variables in `.env` are available at runtime. Define only the secrets relevant to the agents you intend to use.
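As a minimal sketch of that pattern (the exact call sites inside `src/kallam/` may differ), configuration is read once at startup and agents pull what they need from the process environment:

```python
# Minimal sketch of the python-dotenv pattern; the exact call sites inside
# src/kallam/ may differ. load_dotenv() merges .env values into os.environ
# without overriding variables that are already set.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root

sea_lion_key = os.getenv("SEA_LION_API_KEY")
gemini_key = os.getenv("GEMINI_API_KEY")

if not (sea_lion_key or os.getenv("SEA_LION_GATEWAY_TOKEN")):
    raise RuntimeError("No SEA-Lion credentials found; check your .env file")
```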
**Core**

- `SEA_LION_API_KEY` *or* (`SEA_LION_GATEWAY_URL` + `SEA_LION_GATEWAY_TOKEN`) for SEA-Lion access.
- `SEA_LION_BASE_URL` (optional; defaults to `https://api.sea-lion.ai/v1`).
- `SEA_LION_MODEL_ID` to override the default SEA-Lion model.
- `GEMINI_API_KEY` for Doctor/Psychologist English responses.

**Optional integrations**

- `OPENAI_API_KEY` if you enable any OpenAI-backed tooling via `strands-agents`.
- `AWS_REGION` (and optionally `AWS_DEFAULT_REGION`) plus temporary credentials (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`) when running Bedrock-backed flows.
- `AWS_ROLE_ARN` if you assume roles for Bedrock access.
- `NGROK_AUTHTOKEN` when tunnelling Gradio externally.
- `TAVILY_API_KEY` if you wire in search or retrieval plugins.

Example scaffold:

```env
SEA_LION_API_KEY=your-sea-lion-token
SEA_LION_MODEL_ID=aisingapore/Gemma-SEA-LION-v4-27B-IT
GEMINI_API_KEY=your-gemini-key
OPENAI_API_KEY=sk-your-openai-key
AWS_REGION=ap-southeast-2
# AWS_ACCESS_KEY_ID=...
# AWS_SECRET_ACCESS_KEY=...
# AWS_SESSION_TOKEN=...
```
Keep `.env` out of version control and rotate credentials regularly. You can validate temporary AWS credentials with `python test_credentials.py`.
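`test_credentials.py` is not reproduced here, but a minimal check in the same spirit calls STS `get_caller_identity`, which fails immediately when keys or session tokens are missing, malformed, or expired:

```python
# Hypothetical stand-in for test_credentials.py (the repo's script may differ):
# STS get_caller_identity() raises if the temporary credentials are invalid.
import boto3
from botocore.exceptions import BotoCoreError, ClientError


def check_aws_credentials() -> bool:
    try:
        identity = boto3.client("sts").get_caller_identity()
    except (BotoCoreError, ClientError) as exc:
        print(f"AWS credentials invalid: {exc}")
        return False
    print(f"Authenticated as {identity['Arn']}")
    return True


if __name__ == "__main__":
    check_aws_credentials()
```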
## Running and Persistence
- Conversations, summaries, and metadata persist to `chatbot_data.db` (SQLite). The schema is created automatically on first run.
- Export session transcripts with `ChatbotManager.export_session_json()`; JSON files land in `exported_sessions/` (see the usage sketch after this list).
- Logs are emitted per agent into `logs/` (daily files) and to stdout.
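A rough usage sketch of the export flow; only `export_session_json()` and the `exported_sessions/` output directory come from the notes above, while the import path, constructor, and session-id argument are assumptions:

```python
# Rough usage sketch; the import path, constructor, and session_id argument
# are assumptions and may not match the actual ChatbotManager API.
from kallam.app import ChatbotManager  # assumed import path (src/kallam/app/)

manager = ChatbotManager()                           # assumed no-arg construction
manager.export_session_json(session_id="demo-001")  # JSON lands in exported_sessions/
```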
## Docker
Build and run the containerised Gradio app:

```bash
docker build -t kallam .
docker run --rm -p 8080:8080 --env-file .env kallam
```

Environment variables are read at runtime; use `--env-file` or `-e` flags to provide the required keys. Override the entry script with `APP_FILE`, for example `-e APP_FILE=gui/chatbot_dev_app.py`.
## AWS App Runner
The repo ships with `apprunner.yaml` for AWS App Runner's managed Python 3.11 runtime.

1. Push the code to a connected repository (GitHub or CodeCommit) or supply an archive.
2. In the App Runner console choose **Source code** -> **Managed runtime** and upload/select `apprunner.yaml`.
3. Configure AWS Secrets Manager references for the environment variables listed under `run.env` (SEA-Lion, Gemini, OpenAI, Ngrok, etc.).
4. Deploy. App Runner exposes the Gradio UI on the service URL and honours the `$PORT` variable (defaults to 8080).

For fully containerised deployments on App Runner, ECS, or EKS, build the Docker image and supply the same environment variables.
## Project Layout
```
project-root/
|-- src/kallam/
|   |-- app/             # ChatbotManager facade
|   |-- domain/agents/   # Supervisor, Doctor, Psychologist, Translator, Summarizer, Orchestrator
|   |-- infra/           # SQLite stores, exporter, token counter
|   `-- infrastructure/  # Shared SEA-Lion configuration helpers
|-- gui/                 # Gradio demo and developer apps
|-- scripts/             # Data prep and evaluation utilities
|-- data/                # Sample datasets (gemini, human, orchestrated, SEA-Lion)
|-- exported_sessions/   # JSON exports created at runtime
|-- logs/                # Runtime logs (generated)
|-- Dockerfile
|-- apprunner.yaml
|-- test_credentials.py
`-- README.md
```
## Development Tooling
- Run tests: `pytest -q`
- Lint: `ruff check src`
- Type-check: `mypy src`
- Token usage: see `src/kallam/infra/token_counter.py`
- Supervisor/translator fallbacks log warnings if credentials are missing.
## Scripts and Evaluation
The `scripts/` directory includes:

- `eng_silver_misc_coder.py` and `thai_silver_misc_coder.py` for SEA-Lion powered coding pipelines.
- `model_evaluator.py` plus preprocessing and visualisation helpers (`ex_data_preprocessor.py`, `in_data_preprocessor.py`, `visualizer.ipynb`).
## Notes

### Proposal

Refer to `KaLLaM Proporsal.pdf` for more information about the project.

### Citation

See `Citation.md` for references and datasets.

### License

Apache License 2.0. Refer to `LICENSE` for full terms.