kumarijy committed
Commit
aa7e752
1 Parent(s): 0471427

Update README.md

Files changed (1)
  1. README.md +3 -4
README.md CHANGED
@@ -10,7 +10,6 @@ tags:
# Phi-2-OV-Quantized Model Card

The original source of this model is: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
- This Phi-2 model is a transformer model with 2.7 billion parameters.
This model is optimized and converted to OpenVINO Intermediate Representation (IR) format using optimum-cli.
The weights were compressed to INT4 while exporting the model from Hugging Face.

@@ -21,9 +20,9 @@ Intended to be used with:

- **Model type:** a Transformer-based model with next-word prediction objective
- **Language(s):** English
- - **License:** This model is licensed under the MIT License. The use of DeepSeek Coder model is subject to the model License. DeppSeek Coder supports commercial use. See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.
- - **Model Summary:** deepseek-coder-1.3b-base is a 1.3B parameter model with Multi-Head Attention trained on 1 trillion tokens.
- - **Resources for more information:** [deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base), [Paper](https://arxiv.org/abs/2401.14196).
+ - **License:** This model is licensed under the [MIT License](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE)
+ - **Model Summary:** Phi-2 is a Transformer with 2.7 billion parameters. It was trained using the same data sources as Phi-1.5, augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
+ - **Resources for more information:** [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)


# Intended Uses
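For reference, the conversion described in the README can be reproduced with optimum-intel, the Python counterpart of the `optimum-cli export openvino` command mentioned above. The sketch below is illustrative only: the output directory name and the 4-bit compression settings are assumptions, not necessarily the exact options used for this repository.

```python
# Sketch: export microsoft/phi-2 to OpenVINO IR with INT4 weight compression
# (roughly equivalent to the assumed CLI call:
#  optimum-cli export openvino --model microsoft/phi-2 --weight-format int4 phi-2-ov-int4)
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig
from transformers import AutoTokenizer

model_id = "microsoft/phi-2"   # original checkpoint on the Hugging Face Hub
save_dir = "phi-2-ov-int4"     # hypothetical output directory

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly;
# the quantization config compresses the weights to 4 bit (recent optimum-intel required).
model = OVModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    quantization_config=OVWeightQuantizationConfig(bits=4),
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Persist the IR (.xml/.bin) together with the tokenizer files.
model.save_pretrained(save_dir)
tokenizer.save_pretrained(save_dir)
```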
 
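In the same spirit, here is a minimal sketch of loading the exported INT4 IR with the OpenVINO backend of optimum-intel and generating text; the directory name and prompt are placeholders, so substitute this repository's actual Hub ID or a local path.

```python
# Sketch: run the INT4 OpenVINO IR model for text generation.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_dir = "phi-2-ov-int4"  # placeholder: local IR directory or this repo's Hub ID

model = OVModelForCausalLM.from_pretrained(model_dir)  # loads the .xml/.bin IR files
tokenizer = AutoTokenizer.from_pretrained(model_dir)

prompt = "Write a short docstring for a function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```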