gugarosa committed
Commit cb3004d
1 Parent(s): 8399ff6

Update README.md

Files changed (1):
  1. README.md (+3 -4)
README.md CHANGED
@@ -43,8 +43,6 @@ where the model generates the code after the comments. (Note: This is a legitima
 
 * Direct adoption for production coding tasks is out of the scope of this research project. As a result, Phi-1 has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.
 
-* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
-
 ## Sample Code
 
 ```python
@@ -53,8 +51,8 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 
 torch.set_default_device("cuda")
 
-model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1", torch_dtype="auto", trust_remote_code=True)
-tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1", trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1", torch_dtype="auto")
+tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1")
 
 inputs = tokenizer('''def print_prime(n):
    """
@@ -73,6 +71,7 @@ print(text)
 * Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.
 
 * Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
+
 * Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.
 
 * Limitations on Natural Language Comprehension. As a coding bot, Phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.
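
With `transformers>=4.37.0` Phi-1 is supported natively, so this commit drops the `trust_remote_code=True` note and flag from the loading calls. For reference, below is a minimal sketch of how the README's Sample Code section reads after the change. The diff only shows fragments: the prompt's docstring body, the tokenizer arguments, and the `model.generate` / `batch_decode` step between `inputs` and `print(text)` are assumptions filled in for completeness, not lines taken from the diff.

```python
# Sketch of the updated Sample Code after this commit (assumes transformers>=4.37.0,
# so trust_remote_code=True is no longer needed). Generation settings are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Run tensors on the GPU by default.
torch.set_default_device("cuda")

# Phi-1 loads natively in transformers>=4.37.0; no trust_remote_code flag.
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1")

# Prompt with the start of a function; the model generates the rest of the code.
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

# Assumed generation step; only print(text) appears as context in the diff above.
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```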