suriyagunasekar committed on
Commit c8f6ad8
1 Parent(s): 762a311

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -8,7 +8,7 @@ pipeline_tag: text-generation
 
 The language model phi-1.5 is a Transformer with 1.3 billion parameters. It was trained using the same data sources as [phi-1](https://huggingface.co/microsoft/phi-1), augmented with a new data source that consists of various NLP synthetic texts. When assessed against benchmarks testing common sense, language understanding, and logical reasoning, phi-1.5 demonstrates a nearly state-of-the-art performance among models with less than 10 billion parameters.
 
- We did not fine-tune phi-1.5 either for instruction following or through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
+ We **did not** fine-tune phi-1.5 either for **instruction following or through reinforcement learning from human feedback**. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
 
 For a safer model release, we exclude generic web-crawl data sources such as common-crawl from the training. This strategy prevents direct exposure to potentially harmful online content, enhancing the model's safety without RLHF. However, the model is still vulnerable to generating harmful content. We hope the model can help the research community to further study the safety of language models.
 
@@ -27,7 +27,7 @@ where the model generates the text after "Answer:".
 #### Chat format:
 
 ```markdown
- Alice: Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
+ Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
 
 Bob: Have you tried using a timer? It can help you stay on track and avoid distractions.
 
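For reference, a minimal sketch of how the corrected chat-format prompt might be fed to phi-1.5, assuming `model` and `tokenizer` are loaded as in the quick-start snippet shown in the next hunk; the trailing "Bob:" cue and the `max_length` value are illustrative assumptions, not quoted from the README.

```python
# Hedged sketch: continue the corrected chat-format dialogue with phi-1.5.
# Assumes `model` and `tokenizer` were created as in the quick-start snippet below.
prompt = (
    "Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?\n\n"
    "Bob:"
)
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.batch_decode(outputs)[0])
```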
@@ -102,6 +102,7 @@ The model is licensed under the [Research License](https://huggingface.co/micros
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
+ torch.set_default_device('cuda')
 model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
 tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
 inputs = tokenizer('''```python
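The hunk above adds `torch.set_default_device('cuda')` before model loading, but the diff view truncates the snippet at the `inputs = tokenizer(...)` line. The sketch below fills in a plausible end-to-end version; only the lines visible in the hunk are taken from the diff, while the prompt body, `max_length=200`, and the decode step are illustrative assumptions.

```python
# Sketch under assumptions: lines beyond those visible in the hunk above are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device('cuda')  # added by this commit; requires a CUDA-capable GPU
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, torch_dtype="auto")

# Code-completion style prompt; the function stub below is an assumed example.
inputs = tokenizer('''```python
def print_prime(n):
    """Print all primes between 1 and n."""''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```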
@@ -116,7 +117,7 @@ print(text)
 ```
 
 **Remark.** In the generation function, our model currently does not support beam search (`num_beams` >1) and `attention_mask' parameters.
- Furthermore, in the forward pass of the model, we currently do not support outputing hidden states or attention values, or using custom input embeddings (instead of the model's).
+ Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings (instead of the model's).
 
 ### Citation
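The remark above lists generation and forward-pass features that the current phi-1.5 implementation does not support. A brief hypothetical illustration, reusing `model` and `inputs` from the sketch above:

```python
# Hypothetical illustration of the remark; `model` and `inputs` come from the sketch above.
outputs = model.generate(**inputs, max_length=200)   # supported: greedy decoding, no attention_mask

# Per the remark, the calls below are not supported by this implementation:
# model.generate(**inputs, num_beams=4)              # beam search
# model(**inputs, output_hidden_states=True)         # hidden states from the forward pass
# model(**inputs, output_attentions=True)            # attention values from the forward pass
```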