hamadandrabi committed on
Commit 06b32e6
1 Parent(s): 9e3f804

Update README.md

Files changed (1)
  1. README.md +67 -42
README.md CHANGED

---
Original Model Card:

---
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx
language:
- en
pipeline_tag: text-generation
---

## Model Summary

The language model phi-1.5 is a Transformer with **1.3 billion** parameters. It was trained on the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source consisting of various synthetic NLP texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, phi-1.5 demonstrates nearly state-of-the-art performance among models with fewer than 10 billion parameters.

We **did not** fine-tune phi-1.5 either for **instruction following or through reinforcement learning from human feedback**. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model for exploring vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.

For a safer model release, we excluded generic web-crawl data sources such as common-crawl from training. This strategy prevents direct exposure to potentially harmful online content and enhances the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community further study the safety of language models.

phi-1.5 can write poems, draft emails, create stories, summarize texts, write Python code (such as downloading a Hugging Face transformer model), etc.

## Intended Uses

Given the nature of the training data, phi-1.5 is best suited for prompts in the QA format, the chat format, and the code format. Note that phi-1.5, being a base model, often produces irrelevant text after the main answer; in the following examples we truncate the answers for illustration only.

#### QA format:

```markdown
Write a detailed analogy between mathematics and a lighthouse.

Answer: Mathematics is like a lighthouse, guiding us through the vast ocean of numbers and calculations. Just as a lighthouse illuminates the darkness, mathematics provides us with a clear path to navigate through complex problems. It helps us make sense of the world around us, just like a lighthouse helps ships find their way home.
```

where the model generates the text after "Answer:".
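
As a concrete illustration of the QA format, here is a minimal sketch of prompting the model and trimming the extra text a base model tends to append. The `microsoft/phi-1_5` model ID and the truncation heuristic are illustrative assumptions, not part of the original card:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint; swap in the ID of the fork/quantization you actually use.
model_id = "microsoft/phi-1_5"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Write a detailed analogy between mathematics and a lighthouse.\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]

# Heuristic: keep only the first paragraph after "Answer:", since a base
# model usually keeps generating unrelated text afterwards.
answer = text.split("Answer:", 1)[-1].split("\n\n")[0].strip()
print(answer)
```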

#### Chat format:

```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?

Bob: Have you tried using a timer? It can help you stay on track and avoid distractions.

...

Alice: Thanks for the advice, guys. I feel more motivated now.

Charlie: No problem, Alice. We're all in this together.

Bob: Yeah, and remember that it's okay to ask for help if you need it. We're here to support each other.
```

where the model generates the text after the first "Bob:".
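
Since a base model will happily continue the conversation past Bob's reply, a simple post-processing step is to cut the completion at the next speaker tag. This helper is an illustrative assumption, not something the card prescribes:

```python
def first_turn(completion: str, speakers=("Alice:", "Bob:", "Charlie:")) -> str:
    """Return the completion up to (but not including) the next speaker tag."""
    cut = len(completion)
    for tag in speakers:
        idx = completion.find(tag)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut].strip()

# e.g. applied to the raw text the model produced after the first "Bob:":
raw = "Have you tried using a timer?\n\nAlice: Thanks for the advice, guys."
print(first_turn(raw))  # -> "Have you tried using a timer?"
```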

#### Code format:

```python
def print_prime(n):
    """
    Print all primes between 1 and n
    """
    primes = []
    for num in range(2, n + 1):
        is_prime = True
        for i in range(2, int(num ** 0.5) + 1):
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    print(primes)
```

where the model generates the text after the comments.

**Notes**
* phi-1.5 is intended for research purposes. Model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks is out of the scope of this research project. As a result, phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitations section of this document for more details.

## Limitations of phi-1.5

* Generates Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for Code: If the model generates Python scripts that utilize uncommon packages, or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instructions: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or languages other than English might pose challenges to its comprehension, leading to potential misinterpretations or errors in its responses.
* Potential Societal Biases: Despite the safe data used for its training, the model is not entirely free from societal biases. It may generate content that mirrors these biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Although the model is trained with carefully selected data, it can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model for research purposes only; we hope it helps the open-source community develop the most effective ways to reduce a model's toxicity directly after pretraining.

## Training

### Model
* Architecture: a Transformer-based model with a next-word prediction objective
* Dataset size: 30B tokens
* Training tokens: 150B tokens
* Precision: fp16
* GPUs: 32x A100-40G
* Training time: 8 days

### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [flash-attention](https://github.com/HazyResearch/flash-attention)

### License
The model is licensed under the [Research License](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx).

### Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard loading for the phi-1_5 checkpoint (requires torch >= 2.0).
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

# Prompt in the code format: the model completes the function body.
inputs = tokenizer('''def print_prime(n):
    """
    Print all primes between 1 and n
    """''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

If you need to use the model in a lower precision (e.g., FP16), please wrap the model's forward pass with `torch.autocast()`, as follows:

```python
with torch.autocast(model.device.type, dtype=torch.float16, enabled=True):
    outputs = model.generate(**inputs, max_length=200)
```

**Remark.** In the generation function, our model currently does not support beam search (`num_beams` > 1). Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings (instead of the model's).
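
Since beam search is unsupported, decoding has to stay on the single-sequence path; greedy decoding (the transformers default) and standard sampling flags both satisfy this. A minimal sketch, reusing `model`, `tokenizer`, and `inputs` from the sample above (the specific temperature/top-p values are illustrative assumptions, not from the card):

```python
# Greedy decoding: the default, equivalent to num_beams=1.
outputs = model.generate(**inputs, max_length=200)

# Sampling also keeps num_beams=1, so it stays within the supported path.
outputs = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    temperature=0.7,  # illustrative values
    top_p=0.9,
)
print(tokenizer.batch_decode(outputs)[0])
```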

### Citation

You can find the paper at https://arxiv.org/abs/2309.05463

```bib
@article{textbooks2,
  title={Textbooks Are All You Need II: \textbf{phi-1.5} technical report},
  author={Li, Yuanzhi and Bubeck, S{\'e}bastien and Eldan, Ronen and Del Giorno, Allie and Gunasekar, Suriya and Lee, Yin Tat},
  journal={arXiv preprint arXiv:2309.05463},
  year={2023}
}
```