---
language:
- tr
- en
- es
license: apache-2.0
library_name: transformers
tags:
- Generative AI
- text-generation-inference
- text-generation
- peft
- unsloth
- medical
- biology
- code
- space
---
# Model Trained By Meforgers
*This model, named 'Aixr', is designed for science and AI development. You can use it as the foundation for scientific projects and experimental ideas. In short, Aixr is an AI model built around futurism and innovation.*
- # *Installation*
- If you intend to use unsloth with PyTorch 2.3.0, pick the install command matching your CUDA version, and use an "ampere" variant for newer RTX 30xx GPUs or higher:
```bash
# CUDA 11.8, pre-Ampere GPUs
pip install "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git"
# CUDA 12.1, pre-Ampere GPUs
pip install "unsloth[cu121-torch230] @ git+https://github.com/unslothai/unsloth.git"
# CUDA 11.8, Ampere or newer GPUs (RTX 30xx+)
pip install "unsloth[cu118-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"
# CUDA 12.1, Ampere or newer GPUs (RTX 30xx+)
pip install "unsloth[cu121-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"
```
- Other CUDA and PyTorch combinations are also supported; see the [unsloth repository](https://github.com/unslothai/unsloth) for the full list of install options. If you're unsure which variant applies to your GPU, see the check below.
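
A minimal sketch for choosing between the standard and "ampere" install paths, assuming a CUDA-capable GPU is visible to PyTorch (Ampere cards such as the RTX 30xx series report compute capability 8.0 or higher):

```python
import torch

# Ampere and newer GPUs (RTX 30xx+) report compute capability >= 8.0.
major, minor = torch.cuda.get_device_capability()
if major >= 8:
    print("Use an 'ampere' install variant, e.g. unsloth[cu121-ampere-torch230]")
else:
    print("Use a standard install variant, e.g. unsloth[cu121-torch230]")
```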
- # *Usage*
```python
from unsloth import FastLanguageModel
import torch
# Configuration
max_seq_length = 512
dtype = torch.float16
load_in_4bit = True
# Alpaca prompt
alpaca_prompt = """### Instruction:
{0}
### Input:
{1}
### Response:
{2}
"""
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Meforgers/Aixr",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)
FastLanguageModel.for_inference(model)  # switch the model into unsloth's optimized inference mode
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Can you write me some basic Python code?",  # instruction (change this to your own prompt)
            "",  # input
            "",  # output - leave this blank for generation!
        )
    ],
    return_tensors="pt",
).to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))  # decode, dropping special tokens
``` |
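
For interactive use you may prefer to stream tokens as they are generated. A short sketch using transformers' built-in `TextStreamer`, reusing `model`, `tokenizer`, and `inputs` from the block above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated instead of waiting for the full output.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, streamer=streamer, max_new_tokens=128, use_cache=True)
```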