Text Generation
Transformers
PyTorch
English
llama
text-generation-inference
Inference Endpoints
dwadden committed on
Commit 49fd11c
1 Parent(s): c745b33

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -10,6 +10,6 @@ language:
 
 SciTulu is a collection of instruction-following language models targeting scientific literature understanding use cases. Starting from the [Tulu v2 7B](https://huggingface.co/allenai/tulu-2-7b) model, SciTulu is trained on a mix of science-specific demonstrations from the [SciRIFF dataset](https://huggingface.co/datasets/allenai/SciRIFF-train-mix), together with general-domain instructions from the [Tulu v2 SFT mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture). SciTulu 7B achieves a 28.1% average improvement over Tulu v2 7B on nine held-out scientific literature understanding tasks. More information can be found in our preprint: [SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature](https://arxiv.org/abs/2406.07835).
 
-Training and evaluation code for SciTulu is available in our GitHub repository: https://huggingface.co/datasets/allenai/SciRIFF.
+Training and evaluation code for SciTulu is available in our GitHub repository: https://github.com/allenai/SciRIFF.
 
 See the [Tulu model card](https://huggingface.co/allenai/tulu-2-7b) for more information on potential risks, biases, and limitations.
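For readers who want to try the model described in this card, below is a minimal sketch of loading it with Transformers and PyTorch. The repo id `allenai/scitulu-7b` and the Tulu-style chat format are assumptions based on the description above (SciTulu starts from Tulu v2 7B); they are not confirmed by this commit.

```python
# Minimal sketch, assuming the checkpoint is published as "allenai/scitulu-7b"
# and follows the Tulu v2 chat format ("<|user|>" / "<|assistant|>" turns).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/scitulu-7b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Tulu-style prompt format (assumed, following the Tulu v2 model card).
prompt = (
    "<|user|>\n"
    "Summarize the contribution of the SciRIFF paper in one sentence.\n"
    "<|assistant|>\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```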