AdaptLLM committed on
Commit f45c799
1 Parent(s): b8827d3

Update README.md

Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -74,6 +74,9 @@ pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
 print(pred)
 ```
 
+### LLaMA-3-8B (💡New!)
+In our recent research on [Instruction-Pretrain](https://huggingface.co/papers/2406.14491), we developed a context-based instruction synthesizer to augment the raw corpora with instruction-response pairs, **enabling Llama3-8B to be comparable to or even outperform Llama3-70B**: [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B), [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B).
+
 ## 2. Domain-Specific Tasks
 
 ### Pre-templatized Testing Splits
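For context on the models linked in the added paragraph, here is a minimal sketch of how the Finance-Llama3-8B checkpoint might be loaded with the standard 🤗 Transformers API. The model ID comes from the link above; the prompt and generation settings are illustrative assumptions and are not part of this commit.

```python
# Minimal sketch, assuming the standard transformers AutoModel API.
# The prompt and generation settings below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "instruction-pretrain/finance-Llama3-8B"  # linked in the added README text
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "What are the main drivers of a company's free cash flow?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens, mirroring the README's decoding pattern.
answer_start = inputs["input_ids"].shape[-1]
pred = tokenizer.decode(outputs[0][answer_start:], skip_special_tokens=True)
print(pred)
```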