huangzixian committed
Commit: e5b8673
1 Parent(s): c10d85a

update readme

Files changed (1)
  1. README.md +9 -7
README.md CHANGED
@@ -10,6 +10,15 @@

 🔥 Compared with fine-tuning Llama-2 under the same setting, LLaMAX-7B-X-CSQA improves the average accuracy by up to 4.2% on the X-CSQA dataset.

+
+### Experiments
+
+
+| X-CSQA           | Avg. | Sw   | Ur   | Hi   | Ar   | Vi   | Ja   | Pl   | Zh   | Nl   | Ru   | It   | De   | Pt   | Fr   | Es   | En   |
+|------------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
+| Llama2-7B-X-CSQA | 50.9 | 23.2 | 24.7 | 32.9 | 32.4 | 51.0 | 50.0 | 51.5 | 55.6 | 56.9 | 55.8 | 58.8 | 59.9 | 60.4 | 61.8 | 61.9 | 78.1 |
+| LLaMAX-7B-X-CSQA | 55.1 | 43.5 | 39.0 | 44.1 | 45.1 | 54.0 | 49.9 | 54.6 | 58.2 | 58.9 | 57.1 | 59.1 | 59.0 | 60.9 | 61.6 | 62.7 | 74.0 |
+
 ### Model Usage

 Code Example:
@@ -27,13 +36,6 @@ generate_ids = model.generate(inputs.input_ids, max_length=30)
 tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
 # => E
 ```
-### Experiments
-
-
-| X-CSQA           | Avg. | Sw   | Ur   | Hi   | Ar   | Vi   | Ja   | Pl   | Zh   | Nl   | Ru   | It   | De   | Pt   | Fr   | Es   | En   |
-|------------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
-| Llama2-7B-X-CSQA | 50.9 | 23.2 | 24.7 | 32.9 | 32.4 | 51.0 | 50.0 | 51.5 | 55.6 | 56.9 | 55.8 | 58.8 | 59.9 | 60.4 | 61.8 | 61.9 | 78.1 |
-| LLaMAX-7B-X-CSQA | 55.1 | 43.5 | 39.0 | 44.1 | 45.1 | 54.0 | 49.9 | 54.6 | 58.2 | 58.9 | 57.1 | 59.1 | 59.0 | 60.9 | 61.6 | 62.7 | 74.0 |

 ### Citation
 If our model helps your work, please cite this paper:
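Note: the diff elides the body of the README's code example between the two hunks. A minimal sketch of what the full snippet presumably looks like, assuming the standard Hugging Face transformers API; the repo ID `LLaMAX/LLaMAX-7B-X-CSQA` and the prompt text are illustrative assumptions, while the `generate`/`batch_decode` lines are taken verbatim from the diff context above:

```python
# Minimal sketch of the README's elided usage example.
# The repo ID and prompt are assumptions; only the last three lines
# appear verbatim as context in the diff above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLaMAX/LLaMAX-7B-X-CSQA"  # assumed Hub repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# An X-CSQA-style multiple-choice prompt (placeholder; the real prompt is not shown in this diff).
query = "Question: ...\nA. ...\nB. ...\nC. ...\nD. ...\nE. ...\nAnswer:"
inputs = tokenizer(query, return_tensors="pt")

generate_ids = model.generate(inputs.input_ids, max_length=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# => E
```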