
This is a Mistral-format version of Alibaba Cloud's Qwen1.5-7B-Chat model. The original conversion codebase can be found at https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py; I modified it to make it compatible with Qwen1.5. This model was converted with https://github.com/Minami-su/character_AI_open/blob/main/mistral_qwen2.py.

## Special notes

1. Before using this model, you need to modify `modeling_mistral.py` in the transformers library.
2. Open the file, for example: `vim /root/anaconda3/envs/train/lib/python3.9/site-packages/transformers/models/mistral/modeling_mistral.py`.
3. Find the `MistralAttention` class.
4. In the q, k, v, and o projection layers, change `bias=False` to `bias=config.attention_bias` (see the sketch below).

(Before and after screenshots of this change were included here as images.)
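A minimal sketch of the resulting lines inside `MistralAttention.__init__`, based on the transformers Mistral implementation at the time of writing (the exact line layout may differ between versions; `config.attention_bias` is supplied by this converted checkpoint's config):

```python
# Inside MistralAttention.__init__: the bias now follows the config instead of
# being hard-coded to False, so Qwen1.5's attention biases can be loaded.
self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=config.attention_bias)
self.k_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
self.v_proj = nn.Linear(self.hidden_size, self.num_key_value_heads * self.head_dim, bias=config.attention_bias)
self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.attention_bias)
```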

## Differences between qwen2 mistral and qwen2 llamafy

Compared to qwen2 llamafy, qwen2 mistral can use sliding-window attention, runs faster, and handles longer contexts better.
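As a quick check that the checkpoint is treated as a Mistral model with a sliding window, you can inspect its config (a sketch; the actual `sliding_window` value comes from the converted checkpoint):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Minami-su/Qwen1.5-7B-Chat_mistral")
print(config.model_type)      # expected: "mistral"
print(config.sliding_window)  # window size used by sliding-window attention
```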

## Usage


```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("Minami-su/Qwen1.5-7B-Chat_mistral")
model = AutoModelForCausalLM.from_pretrained("Minami-su/Qwen1.5-7B-Chat_mistral", torch_dtype="auto", device_map="auto")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

messages = [
    {"role": "user", "content": "Who are you?"}
]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(inputs, max_length=32768, streamer=streamer)
```

## Test

Loaded in 4-bit:

`hf-causal (pretrained=Qwen1.5-7B-Chat), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8`
|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.4155|±  |0.0144|
|             |       |acc_norm|0.4480|±  |0.0145|
|truthfulqa_mc|      1|mc1     |0.3513|±  |0.0167|
|             |       |mc2     |0.5165|±  |0.0159|
|winogrande   |      0|acc     |0.6330|±  |0.0135|

Loaded in 4-bit:

`hf-causal (pretrained=Qwen1.5-7B-Chat_mistral), limit: None, provide_description: False, num_fewshot: 0, batch_size: 16`
|    Task     |Version| Metric |Value |   |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge|      0|acc     |0.4172|±  |0.0144|
|             |       |acc_norm|0.4480|±  |0.0145|
|truthfulqa_mc|      1|mc1     |0.3488|±  |0.0167|
|             |       |mc2     |0.5161|±  |0.0159|
|winogrande   |      0|acc     |0.6306|±  |0.0136|
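For reference, a minimal sketch of loading the model in 4-bit with bitsandbytes (an assumption about the setup; the exact quantization settings used for these runs are not documented here):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantization via bitsandbytes, mirroring the "loaded in 4-bit" note above.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/Qwen1.5-7B-Chat_mistral",
    quantization_config=bnb_config,
    device_map="auto",
)
```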
