---
title: Turing-Test-Prompt-Competition
app_file: eval.py
sdk: gradio
sdk_version: 4.44.0
---
# Turing-Test-Prompt-Competition
This project implements a chatbot that uses vLLM for inference, Streamlit for the chat user interface, and Gradio for the evaluation interface.
## Setup and Deployment

### vLLM Deployment

To deploy vLLM:

1. Download the LLaMA model:

   ```bash
   modal run download_llama.py
   ```

2. Deploy the vLLM inference service:

   ```bash
   modal deploy vllm_inference.py
   ```
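The contents of `download_llama.py` are not shown here, but a minimal sketch of such a script might look like the following, assuming the newer `modal.App` API and a Modal Volume for persisting the weights. The model ID, volume name, and secret name below are placeholders, not the project's actual values.

```python
# download_llama.py (hypothetical sketch, not the project's actual script)
import modal

MODEL_DIR = "/models"
MODEL_ID = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder model ID

# Persistent volume to hold the downloaded weights (placeholder name).
volume = modal.Volume.from_name("llama-weights", create_if_missing=True)
image = modal.Image.debian_slim().pip_install("huggingface_hub")
app = modal.App("download-llama", image=image)


@app.function(
    volumes={MODEL_DIR: volume},
    secrets=[modal.Secret.from_name("huggingface")],  # placeholder secret name
    timeout=60 * 60,
)
def download_model(model_id: str = MODEL_ID) -> None:
    from huggingface_hub import snapshot_download

    # Pull the model weights from the Hugging Face Hub into the volume.
    snapshot_download(model_id, local_dir=f"{MODEL_DIR}/{model_id}")
    volume.commit()


@app.local_entrypoint()
def main():
    # `modal run download_llama.py` executes this entrypoint, which triggers
    # the remote download on Modal's infrastructure.
    download_model.remote()
```

The deployed `vllm_inference.py` service can then mount the same volume and load the weights from `MODEL_DIR` instead of re-downloading them on every cold start.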
### Running the Chatbot

To run the chatbot locally, start the Streamlit app:

```bash
streamlit run chatbot.py
```

To make the chatbot accessible over the internet, use ngrok:

```bash
ngrok http 8501
```
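For reference, here is a hypothetical sketch of how a Streamlit chatbot like `chatbot.py` might call the deployed inference service, assuming the Modal deployment exposes an OpenAI-compatible `/v1/chat/completions` route (a common vLLM setup). The endpoint URL and model name are placeholders, not the project's actual values.

```python
# chatbot.py (hypothetical sketch, not the project's actual script)
import requests
import streamlit as st

# Placeholder endpoint and model name; replace with the deployed service's URL.
VLLM_URL = "https://<your-workspace>--vllm-inference.modal.run/v1/chat/completions"
MODEL = "meta-llama/Meta-Llama-3-8B-Instruct"

st.title("Turing Test Chatbot")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)

    # Forward the whole conversation to the vLLM server.
    resp = requests.post(
        VLLM_URL,
        json={"model": MODEL, "messages": st.session_state.messages},
        timeout=60,
    )
    reply = resp.json()["choices"][0]["message"]["content"]

    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.write(reply)
```

Streamlit serves on port 8501 by default, which is why the ngrok command above tunnels that port.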
### Running the Evaluation Interface

To run the evaluation interface locally, start the Gradio app:

```bash
gradio eval.py
```

To deploy it to a Hugging Face Space, run:

```bash
gradio deploy
```
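The actual contents of `eval.py` are not reproduced here, but a minimal Gradio evaluation interface could look like the sketch below, assuming a simple "human or AI?" verdict flow; the real app's components and vote handling may differ.

```python
# eval.py (hypothetical sketch, not the project's actual script)
import gradio as gr


def record_vote(conversation: str, verdict: str) -> str:
    # A real implementation would persist the vote (e.g. to a dataset or
    # database); this sketch only echoes it back.
    return f"Recorded verdict: {verdict}"


with gr.Blocks(title="Turing Test Evaluation") as demo:
    gr.Markdown("## Turing Test Evaluation")
    conversation = gr.Textbox(label="Conversation transcript", lines=10)
    verdict = gr.Radio(["Human", "AI"], label="Who wrote the responses?")
    submit = gr.Button("Submit verdict")
    status = gr.Textbox(label="Status", interactive=False)
    submit.click(record_vote, inputs=[conversation, verdict], outputs=status)

if __name__ == "__main__":
    demo.launch()
```

Naming the top-level `gr.Blocks` object `demo` is what lets the `gradio eval.py` reload command pick it up automatically.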
## Project Structure

- `download_llama.py`: Script to download the LLaMA model
- `vllm_inference.py`: vLLM inference service
- `chatbot.py`: Streamlit-based chatbot interface
- `eval.py`: Gradio-based evaluation interface
## License
This project is licensed under the GNU Affero General Public License v3.0. See the LICENSE file for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Support
If you encounter any problems or have any questions, please open an issue in this repository.