---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
pipeline_tag: text-generation
tags:
- sharded
- bf16
- instruct
---

# togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1


This is the `togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1` model, but with the weights sharded into files of ~2 GB each so the checkpoint can be loaded on low-RAM runtimes (like Google Colab).
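
For reference, a sharded copy like this can be produced with `save_pretrained` and its `max_shard_size` argument. A minimal sketch (the output directory name is illustrative, not part of this repo):

```python
import torch
from transformers import AutoModelForCausalLM

# load the original (unsharded) checkpoint in bf16
model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1",
    torch_dtype=torch.bfloat16,
)
# re-save the weights in shards of at most ~2 GB each
model.save_pretrained(
    "RedPajama-INCITE-Instruct-7B-v0.1-sharded-bf16",  # illustrative path
    max_shard_size="2GB",
)
```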

Please refer to the [original model card](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1) for all details and known issues regarding this model. Below is an adapted version of the inference code, provided as a reference.

## Basic inference

See the original model card for more generation options and usage notes.

Install the required packages:

```bash
pip install -U transformers accelerate
```

Run inference (this will use a GPU automatically if one is available):
```python
import torch
import transformers
from packaging import version
from transformers import AutoTokenizer, AutoModelForCausalLM

MIN_TRANSFORMERS_VERSION = "4.25.1"

# check the transformers version; compare parsed versions rather than raw
# strings, since string comparison is lexicographic (e.g. "4.9" > "4.25")
assert version.parse(transformers.__version__) >= version.parse(
    MIN_TRANSFORMERS_VERSION
), f"Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher."

model_name = "ethzanalytics/RedPajama-INCITE-Instruct-7B-v0.1-sharded-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# load the sharded bf16 weights; device_map="auto" places them on the GPU if present
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)
# infer
prompt = "Q: The capital of France is?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
input_length = inputs.input_ids.shape[1]
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.7,
    top_k=50,
    return_dict_in_generate=True,
)
generated_tokens = outputs.sequences[0, input_length:]
# decode only the newly generated tokens, excluding the prompt
output_str = tokenizer.decode(generated_tokens)
print(output_str)
"""
Paris
"""
```
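
If memory is still tight, the sharded checkpoint can also be loaded with 8-bit quantization via `bitsandbytes`. This is not from the original card, just a sketch; it assumes `bitsandbytes` is installed (`pip install bitsandbytes`) and a CUDA GPU is available:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ethzanalytics/RedPajama-INCITE-Instruct-7B-v0.1-sharded-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# load_in_8bit quantizes the weights on the fly, roughly halving memory vs. bf16
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", load_in_8bit=True
)
```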