Yaofu3 committed on
Commit
0c8b247
1 Parent(s): a549d9d

Update contribution guide in README (#13)


- add contribution guide (688b71ba6c4fbff5f2fd4ad6526cb53cf087b14a)

Files changed (1)
  1. README.md +58 -7
README.md CHANGED
@@ -15,20 +15,71 @@ tags:
 
 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
- ## Local development
 
- Create a virtual environment and install the dependencies:
 
 ```bash
- conda create -n <env_name> python=3.10
- conda activate <env_name>
 pip install -r requirements.txt
 ```
 
- **Follow the instructions in Dockerfile to install other necessary dependencies.**
 
- Start the backend server in debug mode:
 
 ```bash
 python backend-cli.py --debug
- ```
 
 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
+ # Contributing to Open-MOE-LLM-Leaderboard
+
+ Thank you for your interest in contributing to the Open-MOE-LLM-Leaderboard project! We welcome contributions from everyone. Below you'll find guidance on how to set up your development environment, understand our architecture, and contribute effectively. If you have any questions or wish to discuss your contributions, please reach out to Yao Fu via email at [Y.Fu@ed.ac.uk](mailto:y.fu@ed.ac.uk).
+
+ ## What We're Looking For in Contributions
+
+ We are looking for contributions in several key areas to enhance the Open-MOE-LLM-Leaderboard project:
+
+ 1. **General Bug Fixes/Reports**: We welcome reports of any bugs found in the frontend interface or backend, as well as fixes for these issues.
+ 2. **Adding New Tasks (Benchmark Datasets)**: If you have ideas for new benchmark datasets that could be added, your contributions would be greatly appreciated.
+ 3. **Supporting New Inference Frameworks**: Expanding the project to support new inference frameworks is crucial for its growth. If you can contribute in this area, please reach out.
+ 4. **Testing More Models**: To make the leaderboard as comprehensive as possible, we need to test a wide range of models. Contributions in this area are highly valuable.
+
+ Documentation is currently a lower priority, but if you have thoughts or suggestions, please feel free to raise them.
+
+ Your contributions are crucial to the success and improvement of the Open-MOE-LLM-Leaderboard project. We look forward to collaborating with you.
+
+ ## Development Setup
+
+ To start contributing, set up your development environment as follows:
 
 ```bash
+ conda create -n leaderboard python=3.10
+ conda activate leaderboard
 pip install -r requirements.txt
+ pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ moe-infinity
+ pip install pydantic==2.6.4  # Resolves a dependency conflict with moe-infinity
+ python -m spacy download en  # Required for selfcheckgpt
+ ```
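A quick, optional sanity check after installation (this assumes the `leaderboard` environment is active; it is not part of the project's required steps):

```shell
# Optional sanity check: the shell should resolve to the environment's
# Python 3.10 interpreter before you proceed.
python --version
```

If the reported version is not 3.10.x, re-run `conda activate leaderboard` before installing anything else.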
+ ## Architecture Overview
+
+ The Open-MOE-LLM-Leaderboard project uses the following architecture:
+
+ - **User Interface (Gradio)** ->upload-> **HuggingFace Dataset (Request)** ->download-> **Backend GPU Server** ->upload-> **HuggingFace Dataset (Result)** ->download-> **User Interface (Gradio)**
+
+ In brief:
+
+ 1. Users submit model benchmarking requests through the Gradio interface ([app.py](./app.py)). These requests are then recorded in a HuggingFace dataset ([sparse-generative-ai/requests](https://huggingface.co/datasets/sparse-generative-ai/requests)).
+ 2. The backend ([backend-cli.py](./backend-cli.py)), running on a GPU server, processes these requests, performs the benchmarking tasks, and uploads the results to another HuggingFace dataset ([sparse-generative-ai/results](https://huggingface.co/datasets/sparse-generative-ai/results)).
+ 3. Finally, the Gradio interface retrieves and displays these results to the users.
+
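The round trip above can be sketched with plain JSON files standing in for the two HuggingFace datasets. Note that the field names, file paths, and score below are illustrative assumptions for the sketch, not the project's actual request schema:

```python
import json
from pathlib import Path

# Illustrative request record (hypothetical fields, not the real schema).
# In the actual pipeline, app.py uploads such a record to the
# sparse-generative-ai/requests dataset, and backend-cli.py later uploads
# results to sparse-generative-ai/results.
request = {
    "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "status": "PENDING",
    "inference_framework": "moe-infinity",
}

# Step 1: the frontend records the request (a local JSON file stands in
# for the HuggingFace dataset upload here).
requests_dir = Path("eval-queue")
requests_dir.mkdir(exist_ok=True)
request_file = requests_dir / "mixtral-8x7b.json"
request_file.write_text(json.dumps(request, indent=2))

# Step 2: the backend picks up pending requests, runs the benchmark,
# and marks them finished with a result payload (placeholder score).
record = json.loads(request_file.read_text())
if record["status"] == "PENDING":
    record["status"] = "FINISHED"
    record["results"] = {"gsm8k": 0.61}
    request_file.write_text(json.dumps(record, indent=2))

# Step 3: the frontend reads the finished record back for display.
print(json.loads(request_file.read_text())["status"])  # FINISHED
```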
+ ## Running the Gradio Interface
+
+ To launch the Gradio interface, execute:
+
+ ```bash
+ python app.py
 ```
 
+ Then, open your browser and navigate to http://127.0.0.1:7860.
 
+ ## Running the Backend
+
+ To start the backend process, use:
 
 ```bash
 python backend-cli.py --debug
+ ```
+
+ For additional details, please consult the [backend-cli.py](./backend-cli.py) script.
+
+ ---
+
+ We look forward to your contributions and are here to help guide you through the process. Thank you for supporting the Open-MOE-LLM-Leaderboard project!