---
language:
- ja
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- trl
- mistral
datasets:
- kunishou/amenokaku-code-instruct
base_model: tokyotech-llm/Swallow-MS-7b-v0.1
---

# Uploaded model

- **Developed by:** taoki
- **License:** apache-2.0
- **Finetuned from model:** tokyotech-llm/Swallow-MS-7b-v0.1

# Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "taoki/Swallow-MS-7b-v0.1-qlora-amenokaku-code"
)
model = AutoModelForCausalLM.from_pretrained(
    "taoki/Swallow-MS-7b-v0.1-qlora-amenokaku-code"
)

# Move the model to the GPU when one is available
if torch.cuda.is_available():
    model = model.to("cuda")

# Instruction-style prompt; the question means
# "What are the three primary colors of light?"
prompt = """### Instruction:
光の三原色は?

### Response:
"""

# Tokenize and place the input tensors on the same device as the model
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    top_p=0.95,
    temperature=0.1,
    repetition_penalty=1.0,
)
print(tokenizer.decode(outputs[0]))
```

# Output

````
### Instruction:
光の三原色は?

### Response:
```python
print('赤')
print('緑')
print('青')
```
````

The model answers with the three primary colors of light: red (赤), green (緑), and blue (青).

This Mistral-based model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
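# Low-memory loading

Since the adapter was trained with QLoRA, the model can also be loaded in 4-bit to reduce GPU memory. The sketch below is a minimal, illustrative example using `bitsandbytes` through `transformers`; the quantization settings are assumptions for inference, not the configuration used during fine-tuning.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Illustrative 4-bit settings (an assumption for inference, not the
# training config): NF4 quantization with bfloat16 compute, as commonly
# paired with QLoRA-finetuned models.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(
    "taoki/Swallow-MS-7b-v0.1-qlora-amenokaku-code"
)
model = AutoModelForCausalLM.from_pretrained(
    "taoki/Swallow-MS-7b-v0.1-qlora-amenokaku-code",
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers on available devices
)
```

Generation then works as in the Usage section above; the explicit `model.to("cuda")` call is unnecessary because `device_map="auto"` already places the weights.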