This repo contains the **Law Knowledge Probing dataset** used in our **ICLR 2024** [paper](https://huggingface.co/papers/2309.09530).
We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
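The gist of that transformation can be sketched in a few lines of Python. The helper below is purely illustrative: the function name, the example document, and the hand-written QA pair are ours, not the paper's pipeline, which mines such comprehension tasks from the corpus automatically.

```python
# Illustrative sketch only: turn a raw pre-training document into a
# "reading comprehension" training text by appending QA-style tasks
# about the document itself.
def to_reading_comprehension(raw_text, tasks):
    """tasks: list of (question, answer) pairs derived from raw_text."""
    qa_section = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in tasks)
    return f"{raw_text}\n\n{qa_section}"

doc = "The plaintiff must establish duty, breach, causation, and damages."
# Hand-written pair for illustration; the actual method derives such
# tasks from the corpus rather than relying on manual annotation.
tasks = [("What must the plaintiff establish?",
          "Duty, breach, causation, and damages.")]
print(to_reading_comprehension(doc, tasks))
```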
### 🤗 [2024/6/21] We release the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain), effective for both general pre-training from scratch and domain-adaptive continual pre-training!!! 🤗
**************************** **Updates** ****************************
* 2024/6/21: 👏🏻 Released the 2nd version of AdaptLLM at [Instruction-Pretrain](https://huggingface.co/instruction-pretrain) 👏🏻
* 2024/4/14: Released the knowledge probing datasets at [med_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/med_knowledge_prob) and [law_knowledge_prob](https://huggingface.co/datasets/AdaptLLM/law_knowledge_prob) (see the loading sketch after this list)
* 2024/4/2: Released the raw data splits (train and test) of all the evaluation datasets
* 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024!!! 🎉
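For quick access, the probing data can be pulled straight from the Hub with the `datasets` library. A minimal sketch, assuming a `test` split (the split name is our assumption; check the dataset card for the actual configuration):

```python
from datasets import load_dataset

# Split name is an assumption; see the dataset card for actual splits.
law_probe = load_dataset("AdaptLLM/law_knowledge_prob", split="test")
print(law_probe[0])
```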