---
license: apache-2.0
language:
- en
- zh
pipeline_tag: text-generation
---

# Unichat-llama3-Chinese-8B

## Introduction

* China Unicom released the industry's first Chinese instruction-tuned model based on Llama 3, at 22:00 on April 19, 2024.
* The model is built on [**Meta Llama 3**](https://huggingface.co/collections/meta-llama/meta-llama-3-66214712577ca38149ebb2b6) and further trained with additional Chinese data, enabling high-quality Chinese question answering with the Llama 3 model. The model natively supports a context length of 8K.
* Base model: [**Meta-Llama-3-8B**](https://huggingface.co/meta-llama/Meta-Llama-3-8B)

### 📊 Data

- High-quality instruction data covering many domains and industries, providing ample data support for model training.
- The fine-tuning instruction data was rigorously screened by human reviewers, ensuring that only high-quality instructions are used for fine-tuning.

## Quick Start

```python
import transformers
import torch

model_id = "UnicomLLM/Unichat-llama3-Chinese-8B"

# Build a text-generation pipeline in bfloat16 on GPU.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Who are you?"},
]

# Render the Llama 3 chat template into a plain-text prompt.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop on either the EOS token or the Llama 3 end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding; temperature/top_p only take effect if do_sample=True
    temperature=0.6,
    top_p=1,
    repetition_penalty=1.05,
)

# The pipeline returns the prompt plus the completion; strip the prompt.
print(outputs[0]["generated_text"][len(prompt):])
```

## Resources

For more models, datasets, and training details, please refer to:
* GitHub: [**Unichat-llama3-Chinese**](https://github.com/UnicomAI/Unichat-llama3-Chinese)
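
## Using `model.generate` directly

If you prefer to load the model with `AutoModelForCausalLM` and call `generate` yourself instead of going through the pipeline, the following is a minimal sketch. It assumes the standard `transformers` chat-template API and reuses the generation settings from the Quick Start; adjust them as needed.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "UnicomLLM/Unichat-llama3-Chinese-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Who are you?"},
]

# Build the Llama 3 chat prompt as token IDs and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the EOS token or the Llama 3 end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, matching the Quick Start example
    repetition_penalty=1.05,
)

# Strip the prompt tokens and decode only the newly generated response.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```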