
This is a LLaMAfied version of the Qwen-14B model by Alibaba Cloud: the original weights converted to the LLaMA architecture layout so they load with standard LLaMA tooling.

This model was converted with https://github.com/hiyouga/LLaMA-Factory/blob/main/tests/llamafy_qwen.py
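At a high level, LLaMAfying remaps Qwen's weight layout onto LLaMA's; most notably, Qwen's fused QKV attention projection is split into LLaMA's separate `q_proj`/`k_proj`/`v_proj` matrices. Below is a minimal sketch of that splitting step only, with illustrative sizes and names; the linked script handles the complete mapping, including biases and layer renaming.

```python
import numpy as np

# Illustrative hidden size; Qwen-14B's real hidden size is much larger
hidden = 8

# Qwen-style fused attention projection: Q, K and V stacked into one matrix
c_attn_weight = np.arange(3 * hidden * hidden, dtype=np.float32).reshape(3 * hidden, hidden)

# LLaMA-style layout: three separate projection matrices
q_proj, k_proj, v_proj = np.split(c_attn_weight, 3, axis=0)

print(q_proj.shape)  # each projection is (hidden, hidden)
```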

The tokenizer is borrowed from https://huggingface.co/CausalLM/72B-preview-llamafied-qwen-llamafy

You may use this model for fine-tuning on downstream tasks; we recommend the efficient fine-tuning toolkit https://github.com/hiyouga/LLaMA-Factory

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("imdatta0/qwen_14b_llamafied")
model = AutoModelForCausalLM.from_pretrained("imdatta0/qwen_14b_llamafied", torch_dtype="auto", device_map="auto")

# Stream generated text to stdout as it is produced
streamer = TextStreamer(tokenizer, skip_prompt=True)
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
model.generate(**inputs, streamer=streamer, max_new_tokens=128)
```

Thanks to: hiyouga/Qwen-14B-Chat-LLaMAfied

Model size: 14.2B params (Safetensors, BF16)