AdaptLLM committed on
Commit a390bef
1 Parent(s): 61b19da

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -118,7 +118,7 @@ model-index:
       name: Open LLM Leaderboard
 ---
 
-# Adapting Large Language Models to Domains (ICLR 2024)
+# Adapting LLMs to Domains via Continual Pre-Training (ICLR 2024)
 This repo contains the domain-specific chat model developed from **LLaMA-2-Chat-7B**, using the method in our paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530).
 
 We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**.
@@ -182,7 +182,7 @@ outputs = model.generate(input_ids=inputs, max_length=4096)[0]
 answer_start = int(inputs.shape[-1])
 pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
 
-print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
+print(pred)
 ```
 
 ### LLaMA-3-8B (💡New!)
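
For context, the `print(pred)` line changed above is the last step of the README's inference example, which decodes only the tokens generated after the prompt. Below is a minimal sketch of that pattern, assuming a placeholder repo id (`AdaptLLM/law-chat`) and a LLaMA-2-Chat style prompt; check the model card for the exact repo name and chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id for illustration; substitute the actual domain-specific chat model.
model_name = "AdaptLLM/law-chat"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

user_input = "What is the statute of limitations for breach of contract?"
# LLaMA-2-Chat style prompt (assumed; verify against the model card).
prompt = f"[INST] {user_input} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids=inputs, max_length=4096)[0]

# Decode only the newly generated tokens, skipping the echoed prompt.
answer_start = int(inputs.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)

print(pred)
```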