rpand002 committed
Commit 7bac3cd
1 Parent(s): 0e2b426

update context length

Files changed (1): README.md (+6 -6)
README.md CHANGED

````diff
@@ -1,6 +1,6 @@
 ---
 pipeline_tag: text-generation
-base_model: ibm-granite/granite-3b-code-base
+base_model: ibm-granite/granite-3b-code-base-2k
 inference: false
 license: apache-2.0
 datasets:
@@ -205,10 +205,10 @@ model-index:
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png)
 
-# Granite-3B-Code-Instruct
+# Granite-3B-Code-Instruct-2K
 
 ## Model Summary
-**Granite-3B-Code-Instruct** is a 3B parameter model fine-tuned from *Granite-3B-Code-Base* on a combination of **permissively licensed** instruction data to enhance its instruction-following capabilities, including logical reasoning and problem-solving skills.
+**Granite-3B-Code-Instruct-2K** is a 3B parameter model fine-tuned from *Granite-3B-Code-Base-2K* on a combination of **permissively licensed** instruction data to enhance its instruction-following capabilities, including logical reasoning and problem-solving skills.
 
 - **Developers:** IBM Research
 - **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
@@ -223,13 +223,13 @@ The model is designed to respond to coding related instructions and can be used
 <!-- TO DO: Check starcoder2 instruct code example that includes the template https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1 -->
 
 ### Generation
-This is a simple example of how to use the **Granite-3B-Code-Instruct** model.
+This is a simple example of how to use the **Granite-3B-Code-Instruct-2K** model.
 
 ```python
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" # or "cpu"
-model_path = "ibm-granite/granite-3b-code-instruct"
+model_path = "ibm-granite/granite-3b-code-instruct-2k"
 tokenizer = AutoTokenizer.from_pretrained(model_path)
 # drop device_map if running on CPU
 model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
@@ -265,4 +265,4 @@ Granite Code Instruct models are trained on the following types of data.
 We train the Granite Code models using two of IBM's supercomputing clusters, Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs, respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.
 
 ## Ethical Considerations and Limitations
-Granite Code Instruct models are primarily fine-tuned on instruction-response pairs across a specific set of programming languages, so their performance may be limited on out-of-domain programming languages. In such cases, it is beneficial to provide few-shot examples to steer the model's output. Moreover, developers should perform safety testing and target-specific tuning before deploying these models in critical applications. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to the *[Granite-3B-Code-Base](https://huggingface.co/ibm-granite/granite-3b-code-base)* model card.
+Granite Code Instruct models are primarily fine-tuned on instruction-response pairs across a specific set of programming languages, so their performance may be limited on out-of-domain programming languages. In such cases, it is beneficial to provide few-shot examples to steer the model's output. Moreover, developers should perform safety testing and target-specific tuning before deploying these models in critical applications. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to the *[Granite-3B-Code-Base-2K](https://huggingface.co/ibm-granite/granite-3b-code-base-2k)* model card.
````
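
The `@@ -223,13 +223,13 @@` hunk cuts off right after the `from_pretrained` call, so the rest of the model card's generation snippet is not visible in this diff. As a reference, here is a minimal end-to-end sketch of the renamed checkpoint in use; the prompt text, `max_new_tokens` value, and decoding settings are illustrative assumptions, not lines from the commit.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # or "cpu"
model_path = "ibm-granite/granite-3b-code-instruct-2k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

# any coding-related instruction works here; this prompt is an illustrative assumption
chat = [
    {"role": "user", "content": "Write a Python function that checks whether a number is prime."},
]
# render the conversation with the checkpoint's chat template
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# tokenize and move the input tensors to the target device
input_tokens = tokenizer(prompt, return_tensors="pt").to(device)

# generate and decode; max_new_tokens=128 is an illustrative setting
output = model.generate(**input_tokens, max_new_tokens=128)
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])
```

Since this commit points the card at the 2K-context checkpoints (the `-2k` suffix; commit message "update context length"), the rendered prompt plus the generated tokens should stay within the 2,048-token window.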
 
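The limitations paragraph above suggests providing few-shot examples for out-of-domain programming languages. Continuing from the sketch above and reusing its `tokenizer`, `model`, and `device`, a minimal sketch of that idea prepends worked instruction/response pairs to the chat before applying the template; the Zig prompts and the sample answer are invented for illustration.

```python
# few-shot steering: prepend worked instruction/response pairs before the real
# request; these Zig examples are invented for illustration, not from the card
few_shot_chat = [
    {"role": "user", "content": "Write a Zig function that adds two integers."},
    {"role": "assistant", "content": "fn add(a: i32, b: i32) i32 {\n    return a + b;\n}"},
    {"role": "user", "content": "Write a Zig function that multiplies two integers."},
]
prompt = tokenizer.apply_chat_template(few_shot_chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**input_tokens, max_new_tokens=128)
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])
```

Each demonstration pair consumes context, which matters with a 2K window, so few-shot examples for this checkpoint should stay short.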