mrp committed on
Commit 67bc146
1 Parent(s): ddf888a

Update README.md

Files changed (1):
  1. README.md (+5 −6)
README.md CHANGED

@@ -71,7 +71,7 @@ print(tokenizer.decode(output[0], skip_special_tokens=True))
 
 # Training Details
 ## Training Data
- Finetuning datasets are sourced from [LAION OIG chip2 and infill_dbpedia (Apache-2.0)](https://huggingface.co/datasets/laion/OIG), [DataBricks Dolly v2 (Apache-2.0)](https://github.com/databrickslabs/dolly), [OpenAI TL;DR (MIT)](https://github.com/openai/summarize-from-feedback), [Hello-SimpleAI HC3 (CC-BY SA)](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [dolphin](https://huggingface.co/datasets/ehartford/dolphin), [iapp_wiki_qa_squad](https://huggingface.co/datasets/iapp_wiki_qa_squad), [thaisum](https://huggingface.co/datasets/thaisum), [xlsum](https://huggingface.co/datasets/csebuetnlp/xlsum), [scb_mt_enth_2020](https://huggingface.co/datasets/scb_mt_enth_2020), han dataset, [xp3x](https://huggingface.co/datasets/Muennighoff/xP3x) and [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
+ Finetuning datasets are sourced from [LAION OIG chip2 and infill_dbpedia (Apache-2.0)](https://huggingface.co/datasets/laion/OIG), [DataBricks Dolly v2 (Apache-2.0)](https://github.com/databrickslabs/dolly), [OpenAI TL;DR (MIT)](https://github.com/openai/summarize-from-feedback), [Hello-SimpleAI HC3 (CC-BY SA)](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [dolphin](https://huggingface.co/datasets/ehartford/dolphin), [iapp_wiki_qa_squad](https://huggingface.co/datasets/iapp_wiki_qa_squad), [thaisum](https://huggingface.co/datasets/thaisum), [xlsum](https://huggingface.co/datasets/csebuetnlp/xlsum), [scb_mt_enth_2020](https://huggingface.co/datasets/scb_mt_enth_2020), [han dataset](https://huggingface.co/datasets/pythainlp/han-instruct-dataset-v1.0), [xp3x](https://huggingface.co/datasets/Muennighoff/xP3x) and [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
 ## Training regime
 - QLoRA with 4 A100 (40GB)
 
@@ -81,17 +81,16 @@ We performed human and machine evaluations on XQuAD zero-shot and one-shot setti
 ## XQuAD
 | Model | Exact Match (Zero-shot) | F1 (Zero-shot) | Exact Match (One-shot) | F1 (One-shot) |
 |:--------------:|:-----------------------:|:--------------:|:----------------------:|:-------------:|
- | openthaigpt7B | 18.5714 | 28.4002 | 30.4202 | 39.7556 |
- | SeaLLM7B | - | - | - | 44.43 |
- | Typhoon-7b | 23.8655 | 36.27 | **46.7227** | **57.898** |
- | WangchanLion7B | **37.563** | **49.8432** | 39.2437 | 51.0627 |
+ | openthaigpt7B | 18.57 | 28.40 | 30.42 | 39.76 |
+ | SeaLLM7B | - | - | - | 44.43 |
+ | Typhoon-7b | - | - | 34.46 | **54.03** |
+ | WangchanLion7B | **37.56** | 49.8432 | **39.2437** | 51.0627 |
 
 ## iAPP Wiki QA
 | Model | Exact Match (Zero-shot) | F1 (Zero-shot) | Exact Match (One-shot) | F1 (One-shot) |
 |:--------------:|:-----------------------:|:--------------:|:----------------------:|:-------------:|
 | openthaigpt7B | 22.0568 | 40.0696 | 31.3938 | 47.9775 |
 | SeaLLM7B | 8.2544 | 34.4038 | 40.0541 | 58.2673 |
- | Typhoon-7b | 27.3342 | 46.2938 | 43.3018 | 59.9434 |
 | WangchanLion7B | **55.4804** | **67.9262** | **56.4276** | **68.8471** |
 
 # What WangchanLion offers:
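The tables in this diff report SQuAD-style Exact Match and F1. For reference, a minimal sketch of those two metrics in Python, assuming standard SQuAD answer normalization (lowercasing, punctuation and article stripping) and whitespace tokenization; a Thai benchmark would normally substitute a Thai word tokenizer, and this is not necessarily the exact script behind the numbers above:

```python
import re
import string
from collections import Counter


def normalize(text: str) -> str:
    """SQuAD-style answer normalization: lowercase, drop punctuation,
    remove English articles, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Token-overlap F1 between prediction and gold answer.
    Whitespace tokenization is an assumption; swap in a Thai
    tokenizer for Thai-language answers."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Corpus-level scores like those in the tables are then the mean of per-question EM and F1 (taking the max over gold answers when a question has several), scaled to percentages.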