suriyagunasekar committed
Commit 3670ef4
Parent: 9c0466b

Update README.md

Files changed (1):
  1. README.md +8 -9
README.md CHANGED
@@ -8,10 +8,10 @@ tags:
 ---
 ## Model Summary
 
- The phi-1.5 is a language model with 1.3 billion parameters specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from [The Stack v1.2](https://huggingface.co/datasets/bigcode/the-stack), Q&A content from [StackOverflow](https://archive.org/download/stackexchange), competition code from [code_contests](https://github.com/deepmind/code_contests), and synthetic Python textbooks and exercises generated by [gpt-3.5-turbo-0301](https://platform.openai.com/docs/models/gpt-3-5). Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), phi-1 has demonstrated an impressive accuracy rate exceeding 45% on the simple Python coding benchmark, HumanEval.
+ The phi-1 is a language model with 1.3 billion parameters specialized for basic Python coding. Its training involved a variety of data sources, including subsets of Python codes from [The Stack v1.2](https://huggingface.co/datasets/bigcode/the-stack), Q&A content from [StackOverflow](https://archive.org/download/stackexchange), competition code from [code_contests](https://github.com/deepmind/code_contests), and synthetic Python textbooks and exercises generated by [gpt-3.5-turbo-0301](https://platform.openai.com/docs/models/gpt-3-5). Even though the model and the datasets are relatively small compared to contemporary Large Language Models (LLMs), phi-1 has demonstrated an impressive accuracy rate exceeding 50% on the simple Python coding benchmark, HumanEval.
 
 ## Intended Uses
- Given the nature of the training data, the phi-1 model are best suited for prompts using the code format:
+ Given the nature of the training data, the phi-1 model is best suited for prompts using the code format:
 
 #### code format:
 ```python
@@ -29,17 +29,17 @@ def print_prime(n):
 where the model generates the code after the comments. (Note: This is a legitimate and correct use of the else statement in Python loops; a sketch of the pattern appears at the end of this page.)
 
 **Notes**
- * The phi-1 model are intended for research purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
- * Direct adoption for production coding tasks is out of the scope of this research project. As a result, the phi-1 model have not been tested to ensure that they perform adequately for production-level code. Please refer to the limitation sections of this document for more details.
+ * The phi-1 model is intended for research purposes. The model-generated code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing this model in their applications.
+ * Direct adoption for production coding tasks is out of the scope of this research project. As a result, the phi-1 model has not been tested to ensure that it performs adequately for production-level code. Please refer to the limitation sections of this document for more details.
 
 ## Limitations of phi-1
 
 * Limited Scope: 99.8% of the Python scripts in our fine-tuning dataset use only the packages "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages, we strongly recommend users manually verify all API uses.
 * Replicate Scripts Online: As our model is trained on Python scripts found online, there is a small chance it may replicate such scripts, especially if they appear repetitively across different online sources.
- * Generate Inaccurate Code: The models frequently generate incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
+ * Generate Inaccurate Code: The model frequently generates incorrect code. We suggest that users view these outputs as a source of inspiration rather than definitive solutions.
 * Unreliable Responses to Alternate Formats: Despite appearing to comprehend instructions in formats like Q&A or chat, our models often respond with inaccurate answers, even when seeming confident. Their capabilities with non-code formats are significantly more limited.
 * Limitations on Natural Language Comprehension: As a coding bot, phi-1's main focus is to help with coding-related questions. While it may have some natural language comprehension capabilities, its primary function is not to engage in general conversations or demonstrate common sense like a general AI assistant. Its strength lies in providing assistance and guidance in the context of programming and software development.
- * Potential Biases: The phi-1 family models, like other AI models, are trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the AI model might sometimes generate responses that reflect these biases or errors.
+ * Potential Biases: The phi-1 model, like other AI models, is trained on web and synthetic data. This data can contain biases and errors that might affect the AI's performance. Biases could stem from various sources like unbalanced representation, stereotypes, or controversial opinions present in the training data. As a result, the model might sometimes generate responses that reflect these biases or errors.
 
 ## Warning about Security Risks
 When leveraging the phi-1 model, it's paramount to be vigilant. The model, though powerful, can inadvertently introduce security vulnerabilities in the generated code. Examples include, but are not limited to:
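
The card's own list of example vulnerabilities falls outside this diff hunk. As one generic, hypothetical illustration of the kind of risk meant here, consider SQL built by string interpolation versus a parameterized query; the `users` table and both helper names below are invented for the sketch:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Risky pattern a code model may emit: untrusted input interpolated
    # directly into SQL, permitting injection (e.g. name = "x' OR '1'='1").
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Parameterized query: the driver binds the value safely.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```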
@@ -56,11 +56,10 @@ Given these potential pitfalls, and others not explicitly mentioned, it's essent
 ## Training
 ### Model (phi-1)
 * Architecture: a Transformer-based model with next-word prediction objective
- * Training steps: ~24000 step
- * Training tokens: ~51B tokens
+ * Training tokens: 54B tokens (7B unique tokens)
 * Precision: fp16
 * GPUs: 8 A100
- * Training time: 4 days
+ * Training time: 6 days
 
 ### Software
 * [PyTorch](https://github.com/pytorch/pytorch)
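
For readers who want to try the code-format prompting described above, here is a minimal sketch using the Hugging Face `transformers` API. The Hub ID `microsoft/phi-1` and the generation settings are assumptions for illustration; they are not stated anywhere in this diff:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub ID; the diff itself does not name one.
model_id = "microsoft/phi-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# fp16 matches the "Precision: fp16" entry above.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# "Code format" prompt: a signature plus a docstring; the model is
# expected to generate the function body that follows.
prompt = 'def print_prime(n):\n    """\n    Print all primes between 1 and n.\n    """\n'

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```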
 
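Finally, on the note above about the else statement in Python loops: the README's actual code-format example is elided between the diff hunks, but a sketch of the pattern it describes (a `print_prime`-style prompt whose completion uses for/else) might look like the following. This is an illustration, not the card's original block:

```python
def print_prime(n):
    """
    Print all primes between 1 and n.
    """
    for num in range(2, n + 1):
        # Trial division: try every candidate divisor below num.
        for i in range(2, num):
            if num % i == 0:
                break
        else:
            # The else branch of a for loop runs only if the loop finished
            # without break, i.e. no divisor was found, so num is prime.
            print(num)

print_prime(20)  # prints 2 3 5 7 11 13 17 19, one per line
```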