# Llama-2 7B (Wu, Koo, Black, Blum, Scalzo, Kurtz)

## Model Description
Llama-2-WuKurtz is a state-of-the-art language model developed by Wu, Koo, Black, Blum, Scalzo, and Kurtz. It has been fine-tuned on our synthesized dataset of 80,000 training examples focused on nephrology.

This model is part of our paper, *Boosting Open-Sourced Large Language Models with Proprietary Imitation Learning* [released soon!].
## Training Data
The model was trained on a synthesized nephrology dataset that was carefully curated and preprocessed. Its 80,000 examples draw on imitation learning from proprietary LLMs, proprietary data, and lecture material, giving the model broad coverage of nephrology.
## Model Performance
Detailed performance metrics will be updated soon!
## Usage
You can use this model for a variety of NLP tasks, including but not limited to text generation, text classification, sentiment analysis, and named entity recognition.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SeanWu25/llama-2-7b-WuKurtz")
model = AutoModelForCausalLM.from_pretrained("SeanWu25/llama-2-7b-WuKurtz")

# Example: generate an answer to a nephrology question
inputs = tokenizer("What are the main functions of the kidney?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```