---
tags:
- autotrain
- text-generation
widget:
- text: 'I love AutoTrain because '
license: other
datasets:
- SeanJIE250/llama2_law
language:
- en
- zh
---

# Contact Information

Email: zhengjie1sun@gmail.com

# English INTRO

Give this model some red stars ♥️ if you like it! It is focused on the legal field. Honestly, it performs poorly as a general-purpose daily chatbot, but it is starting to understand Mandarin and can handle case studies in detail.

# Mandarin INTRO

Veteran users, drop a red star ♥️! A Chinese-language legal chatbot that handles the analysis of specific cases fairly well.

# Usage

First of all, running this code requires the transformers library, which you can install directly with `pip install transformers`. Hope you're well!

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "SeanJIE250/chatbot_LAW"

tokenizer = AutoTokenizer.from_pretrained(model_path)

# Load the model across available devices with an automatically chosen dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "How many years is the sentence for killing someone in China?"
messages = [
    {"role": "user", "content": "杀了人在中国判多少年?"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
outputs = model.generate(input_ids.to(model.device), max_new_tokens=200)  # adjust max_new_tokens as you like
response = tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=False)
print(response)

messages = [
    {"role": "user", "content": "How do I split the property if I divorce my husband?"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
outputs = model.generate(input_ids.to(model.device), max_new_tokens=200)  # adjust max_new_tokens as you like
response = tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=False)
print(response)
```
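
Since both examples above repeat the same template-generate-decode steps, you may prefer to wrap them in a small helper. The `chat` function below is a hypothetical convenience wrapper (not part of the model's API), a minimal sketch that assumes the `model` and `tokenizer` objects already loaded above:

```python
def chat(model, tokenizer, user_message, max_new_tokens=200):
    """Hypothetical helper: send one user message and return the decoded reply."""
    messages = [{"role": "user", "content": user_message}]
    input_ids = tokenizer.apply_chat_template(
        conversation=messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors='pt'
    )
    outputs = model.generate(input_ids.to(model.device), max_new_tokens=max_new_tokens)
    # Slice off the prompt tokens and decode only the newly generated reply,
    # matching the decoding settings used in the examples above.
    return tokenizer.decode(outputs[0][input_ids.shape[1]:], skip_special_tokens=False)

print(chat(model, tokenizer, "杀了人在中国判多少年?"))
```

With this wrapper, each new question is a single call, and `max_new_tokens` can be tuned per query.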