---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
widget:
- text: What is the peak phase of T-eV?
  example_title: Question Answering
tags:
- arxiv
---
# Table of Contents

0. [TL;DR](#tldr)
1. [Model Description](#model-description)
2. [Usage](#usage)
3. [Training Data](#training-data)
4. [Citation](#citation)

# TL;DR

This is a Phi-1_5 model fine-tuned on [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology). This model is for research purposes only and ***should not be used in production settings***.

## Model Description

- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [Phi-1_5](https://huggingface.co/microsoft/phi-1_5)

# Usage

Find below example scripts showing how to use the model with `transformers`:

## Using the PyTorch model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

base_model = "ArtifactAI/phi-biology"

model = AutoModelForCausalLM.from_pretrained(base_model, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

def generate(prompt):
    inputs = tokenizer(
        f"Below is an instruction that describes a task. Write a response that appropriately completes the request. If you are adding additional white spaces, stop writing.\n\n### Instruction:\n{prompt}\n\n### Response:\n",
        return_tensors="pt",
        return_attention_mask=False,
    )
    streamer = TextStreamer(tokenizer, skip_prompt=True)
    _ = model.generate(**inputs, streamer=streamer, max_new_tokens=500)

generate("What are the common techniques used in identifying a new species, and how can scientists accurately categorize it within the existing taxonomy system?")
```

## Training Data

The model was trained on [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology), a dataset of question/answer pairs. Questions are generated using the t5-base model, while the answers are generated using the GPT-3.5-turbo model.
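
Those pairs can be rendered into the same instruction template the generation script uses. A minimal sketch, assuming the dataset's question/answer columns are named `message_1` and `message_2` (an assumption about the camel-ai schema, not confirmed by this repository):

```python
# Sketch: render a question/answer pair into the instruction template
# used elsewhere in this card. The field names message_1/message_2 are
# an assumption about the camel-ai/biology schema.

def format_example(question: str, answer: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{question}\n\n### Response:\n{answer}"
    )

row = {"message_1": "What role do enzymes play in digestion?",
       "message_2": "They catalyze the breakdown of food into absorbable molecules."}
text = format_example(row["message_1"], row["message_2"])
print(text)
```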

# Citation

```
@misc{phi-biology,
  title={phi-biology},
  author={Matthew Kenney},
  year={2023}
}
```