---
pipeline_tag: text-generation
---
Not my model (obviously); I downloaded the official Mistral release from https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar and uploaded it here for my own sanity (and fine-tuning), since it still hasn't been uploaded to the Mistral repo.

The standard Transformers loading code works:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load the model in bfloat16 and let Accelerate place it on available devices
model = AutoModelForCausalLM.from_pretrained(
    "redscroll/Mistral-7B-v0.2", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("redscroll/Mistral-7B-v0.2")

input_text = "In my younger and more vulnerable years"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(
    **input_ids,
    max_new_tokens=500,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0]))
```