Text Generation
Transformers
Safetensors
llama
code
granite
Eval Results
text-generation-inference
mayank-mishra committed on
Commit 2f97d76
1 Parent(s): 15049b3

add abstract

Files changed (1)
  1. README.md +7 -0
README.md CHANGED
@@ -222,6 +222,13 @@ model-index:
      veriefied: false
  ---
 
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/BucUQFAghqlrmQl9bKJVN.png)
+
+ Large Language Models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being integrated into software development environments to improve the productivity of human programmers, and LLM-based agents are beginning to show promise for handling complex tasks autonomously. Realizing the full potential of code LLMs requires a wide range of capabilities, including code generation, fixing bugs, explaining and documenting code, maintaining repositories, and more.
+ In this work, we introduce the Granite series of decoder-only code models for code generative tasks, trained on code written in 116 programming languages.
+ The family consists of models ranging in size from 3 to 34 billion parameters,
+ suitable for applications ranging from complex application modernization tasks to on-device, memory-constrained use cases. Evaluation on HumanEvalPack shows that the Granite Code models match or outperform corresponding Code Llama models of twice their size. Evaluation on a comprehensive set of tasks shows that the Granite Code models consistently reach state-of-the-art performance among available open-source code LLMs. Remarkably, Granite 34B beats the recently released Code Llama 70B on HumanEval, demonstrating not only its efficacy for code generation but also its efficiency, a critical factor for deploying LLMs at scale. The Granite Code model family was optimized for enterprise software development workflows and performs well across a range of coding tasks (e.g., code generation, fixing, and explanation), making it a versatile 'all-around' code model.
+
  # Granite-3B-Code-Base
 
  ## Model Summary