
Install

pip install peft transformers bitsandbytes
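
To reproduce the recorded environment, the versions listed under Framework versions at the end of this card can be pinned explicitly (bitsandbytes is left unpinned because no version is recorded; the card lists the cu121 build of PyTorch 2.3.0):

pip install peft==0.10.0 transformers==4.40.2 torch==2.3.0 datasets==2.19.1 tokenizers==0.19.1 bitsandbytes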

Run with transformers

from transformers import TextStreamer, AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the tokenizer and the 4-bit-quantized base model (requires bitsandbytes),
# then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained("alpindale/Mistral-7B-v0.2-hf")
mis_model = AutoModelForCausalLM.from_pretrained("alpindale/Mistral-7B-v0.2-hf", load_in_4bit=True)
mis_model = PeftModel.from_pretrained(mis_model, "svjack/emoji_Mistral7B_v2_lora")
mis_model = mis_model.eval()

# Print tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer)

def mistral_hf_predict(prompt, mis_model=mis_model,
                       tokenizer=tokenizer, streamer=streamer,
                       do_sample=True,
                       top_p=0.95,
                       top_k=40,
                       max_new_tokens=512,
                       max_input_length=3500,
                       temperature=0.9,
                       repetition_penalty=1.0,
                       device="cuda"):
    # Truncate the prompt and wrap it in the Mistral chat template.
    messages = [
        {"role": "user", "content": prompt[:max_input_length]}
    ]
    encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
    model_inputs = encodeds.to(device)

    generated_ids = mis_model.generate(model_inputs,
                                       max_new_tokens=max_new_tokens,
                                       do_sample=do_sample,
                                       streamer=streamer,
                                       top_p=top_p,
                                       top_k=top_k,
                                       temperature=temperature,
                                       repetition_penalty=repetition_penalty,
                                       )
    # Keep only the text after the final [/INST] marker and strip the EOS token.
    out = tokenizer.batch_decode(generated_ids)[0].split("[/INST]")[-1].replace("</s>", "").strip()
    return out

# Example: ask the model to add emoji to a short Chinese passage about ginkgo
# leaves in autumn. The first line, "对下面的内容添加emoji", means
# "Add emoji to the content below"; the text itself is left in Chinese
# because the adapter was trained on a Chinese dataset.
out = mistral_hf_predict('''
对下面的内容添加emoji
走在公园的大道上,可以发现许多树的叶子,已染上了秋的色彩,到处可以看到黄灿灿的树叶。
其中最引人注目的是那金黄金黄的银杏树,远远望去,犹如金色的海洋.
微风吹过,银杏树叶纷纷飘落,就像一只只美丽的蝴蝶,展开双翅在空中飞舞。
''', repetition_penalty=1.1)
print(out)

Output

🍃🎊🍂🌞走在公园的大道上,可以发现许多树的叶子,已染上了秋的色彩,到处可以看到黄灿灿的树叶 ☀️。
其中最引人注目的是那金黄金黄的银杏树 🌟,远远望去,犹如金色的海洋 🌊。
微风吹过,银杏树叶纷纷飘落,就像一只只美丽的蝴蝶 🦋,展开双翅在空中飞舞 ✈️
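
Merge the adapter (optional)

To run the model without peft at inference time, the LoRA weights can be folded into the base model and saved as a standalone checkpoint. A minimal sketch, assuming the base model is loaded in half precision (merging on top of 4-bit-quantized weights is lossy); the output directory name is just an example:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "alpindale/Mistral-7B-v0.2-hf", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "svjack/emoji_Mistral7B_v2_lora")
merged = model.merge_and_unload()  # fold the LoRA update into the base weights
merged.save_pretrained("emoji-mistral-7b-merged")  # example output directory
tokenizer = AutoTokenizer.from_pretrained("alpindale/Mistral-7B-v0.2-hf")
tokenizer.save_pretrained("emoji-mistral-7b-merged")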

train_2024-05-15-20-33-30

This model is a LoRA adapter (PEFT) fine-tuned from alpindale/Mistral-7B-v0.2-hf on the emoji_add_instruction_zh dataset.

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch mapping them onto code follows the list:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 3.0
  • mixed_precision_training: Native AMP
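
The exact training script is not part of this card. The sketch below is a hypothetical reconstruction with plain peft + transformers that maps the listed hyperparameters onto TrainingArguments; the LoRA rank, alpha, and target modules are assumptions, and dataset loading is elided:

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

base = AutoModelForCausalLM.from_pretrained(
    "alpindale/Mistral-7B-v0.2-hf", torch_dtype=torch.float16)
# Assumed LoRA settings -- the card does not record rank/alpha/target modules.
lora = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, lora)

args = TrainingArguments(
    output_dir="train_2024-05-15-20-33-30",
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # effective train batch size: 2 * 8 = 16
    lr_scheduler_type="cosine",
    num_train_epochs=3.0,
    fp16=True,  # native AMP mixed precision
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults.
# trainer = Trainer(model=model, args=args, train_dataset=...)  # dataset elided
# trainer.train()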

Training results

Framework versions

  • PEFT 0.10.0
  • Transformers 4.40.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1