amezasor committed on
Commit
2263de3
1 Parent(s): bbe82c2

typo correction

Files changed (1)
  README.md +1 -1
README.md CHANGED
@@ -273,7 +273,7 @@ Citation
  # Granite-3B-Code-Base
  
  ## Model Summary
- **Granite-3B-Code-Base** is a decoder-only code model designed for code generative tasks (e.g.code generation, code explanation, code fixing, etc.). It was trained from scratch with a two-phase training strategy. In phase 1, our model is trained on 3 to 4 trillion tokens sourced from 116 programming languages, ensuring a comprehensive understanding of programming languages and syntax. In phase 2, our model is trained on 500 billion tokens with a carefully designed mixture of high-quality data from code and natural language domains to improve the models’ ability to reason and follow instructions.
+ **Granite-3B-Code-Base** is a decoder-only code model designed for code generative tasks (e.g., code generation, code explanation, code fixing, etc.). It is trained from scratch with a two-phase training strategy. In phase 1, our model is trained on 3 to 4 trillion tokens sourced from 116 programming languages, ensuring a comprehensive understanding of programming languages and syntax. In phase 2, our model is trained on 500 billion tokens with a carefully designed mixture of high-quality data from code and natural language domains to improve the models’ ability to reason and follow instructions.
  
  - **Developers:** IBM Research
  - **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)