cArlIcon committed on
Commit 213e4c2
1 Parent(s): f77c5ff

update README

Files changed (1)
  1. README.md +0 -79
README.md CHANGED
@@ -40,85 +40,6 @@ While benchmarking open-source models, we have observed a disparity between the
 
  To extensively evaluate the model's capabilities, we adopted the methodology outlined in Llama 2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SQuAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted in a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due to technical constraints, we did not test Falcon-180B on QuAC and OBQA; its score is derived by averaging the scores on the remaining tasks. Since the scores for these two tasks are generally lower than the average, we believe that Falcon-180B's performance was not underestimated.
 
- ## Usage
-
- Feel free to [create an issue](https://github.com/01-ai/Yi/issues/new) if you encounter any problems when using the Yi series models.
-
- ### 1. Run with Docker
-
- The recommended way to try out our models is through Docker. We provide the following Docker images:
-
- - `ghcr.io/01-ai/yi:latest`
- - `ml-a100-cn-beijing.cr.volces.com/ci/01-ai/yi:latest`
-
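If you prefer to fetch an image before running anything, a plain `docker pull` should work; this just pre-downloads the GHCR image listed above:

```bash
# Pre-fetch the image so the first `docker run` does not have to download it.
docker pull ghcr.io/01-ai/yi:latest
```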
- Note that the `latest` tag always points to the latest code in the `main` branch. To test a stable version, please replace it with a specific [tag](https://github.com/01-ai/Yi/tags).
-
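For example, the text-generation demo shown in section 1.1 below could be run against a pinned tag like this (`<tag>` is a placeholder, not a known release name):

```bash
# Replace <tag> with an actual release tag from https://github.com/01-ai/Yi/tags.
docker run -it ghcr.io/01-ai/yi:<tag> python demo/text_generation.py
```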
- #### 1.1 Try out the base model:
-
- ```bash
- docker run -it ghcr.io/01-ai/yi:latest python demo/text_generation.py
- ```
-
- To reuse the models downloaded in the previous step, you can mount them into the container:
-
- ```bash
- docker run -it \
-     -v /path/to/model:/model \
-     ghcr.io/01-ai/yi:latest \
-     python demo/text_generation.py \
-     --model /model
- ```
-
- For more advanced usage, please refer to the [doc](./demo/README.md).
-
- #### 1.2 Finetuning from the base model:
-
- ```bash
- docker run -it \
-     -v /path/to/base/model:/base_model \
-     -v /path/to/save/finetuned/model:/finetuned_model \
-     ghcr.io/01-ai/yi:latest \
-     bash finetune/scripts/run_sft_Yi_6b.sh
- ```
-
- Once finished, you can compare the finetuned model and the base model with the following command:
-
- ```bash
- docker run -it \
-     -v /path/to/save/finetuned/model/:/finetuned_model \
-     -v /path/to/base/model/:/base_model \
-     ghcr.io/01-ai/yi:latest \
-     bash finetune/scripts/run_eval.sh
- ```
-
- For more advanced usage, such as fine-tuning on your custom data, please refer to the [doc](./finetune/README.md); a rough sketch of mounting custom data into the container follows below.
-
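As a rough sketch only: custom data would typically be mounted into the container the same way as the models. `/path/to/your/data` and the `/data` mount point below are placeholders, and the location and format the finetuning script actually expects are documented in the linked doc:

```bash
# Sketch only: mount a local dataset directory alongside the models.
# /data is a placeholder mount point; see finetune/README.md for where the
# finetuning script actually expects custom data and in what format.
docker run -it \
    -v /path/to/base/model:/base_model \
    -v /path/to/save/finetuned/model:/finetuned_model \
    -v /path/to/your/data:/data \
    ghcr.io/01-ai/yi:latest \
    bash finetune/scripts/run_sft_Yi_6b.sh
```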
- #### 1.3 Quantization
-
- ```bash
- docker run -it \
-     -v /path/to/base/model:/base_model \
-     -v /path/to/save/quantization/model:/quantized_model \
-     ghcr.io/01-ai/yi:latest \
-     python quantization/gptq/quant_autogptq.py \
-     --model /base_model \
-     --output_dir /quantized_model \
-     --trust_remote_code
- ```
-
- Once finished, you can evaluate the resulting model as follows:
-
- ```bash
- docker run -it \
-     -v /path/to/save/quantization/model:/quantized_model \
-     ghcr.io/01-ai/yi:latest \
-     python quantization/gptq/eval_quantized_model.py \
-     --model /quantized_model \
-     --trust_remote_code
- ```
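One practical note: the commands above do not request GPU access explicitly. If the container needs to see the host GPUs, Docker normally requires a `--gpus` flag, which assumes the NVIDIA Container Toolkit is installed on the host. The evaluation command with that flag added would look like:

```bash
# Same evaluation command, with host GPUs exposed to the container.
# Assumes the NVIDIA Container Toolkit is installed on the host.
docker run -it --gpus all \
    -v /path/to/save/quantization/model:/quantized_model \
    ghcr.io/01-ai/yi:latest \
    python quantization/gptq/eval_quantized_model.py \
    --model /quantized_model \
    --trust_remote_code
```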
-
- For a more detailed explanation, please read the [doc](./quantization/gptq/README.md).
-
  ## Disclaimer
 
  Although we use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability, due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable outputs in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. We will not be responsible for any risks or issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns.
 