TL-CodeLLaMA-2
TL-CodeLLaMA-2 is a model designed for tool use, built on CodeLlama-7b. It is trained on 1,217 data samples using the TL-Training framework and performs well across a variety of tool-use tasks. More information can be found in the paper "TL-Training: A Task-Feature-Based Framework for Training Large Language Models in Tool Use".
Model Use
Requirements
To use this model, please make sure to install transformers:
pip install transformers
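The inference example below uses device_map="auto", which relies on the accelerate package; if it is not already in your environment, install it as well:
pip install accelerate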
Data Organization
The data needs to be organized in the following format:
[
    {
        "role": "System",
        "content": "Function:\ndef random_advice():\n    \"\"\"\n    Returns a random advice slip as a slip object.\n    \"\"\"\n\nFunction:\ndef advice_by_id(slip_id:str):\n    \"\"\"\n    If an advice slip is found with the corresponding {slip_id}, a slip object is returned.\n\n    Args:\n        slip_id (string): The unique ID of this advice slip.\n    \"\"\"\n\nFunction:\ndef search_advice(query:str):\n    \"\"\"\n    If an advice slip is found, containing the corresponding search term in {query}, an array of slip objects is returned inside a search object.\n\n    Args:\n        query (string): The search query provided.\n    \"\"\"\n\nFunction:\ndef ask_to_user(question:str):\n    \"\"\"\n    You can ask user for guidance when you think you need more information to handle the task, but you should use this tool as less as you can.\n\n    Args:\n        question (string): The question you want to ask to user.\n    \"\"\"\n\nFunction:\ndef finish(answer:str):\n    \"\"\"\n    Finish the task and give your answer.\n\n    Args:\n        answer (string): Your answer for the task.\n    \"\"\"\n\n"
    },
    {
        "role": "User",
        "content": "Could you give me some advice about 'love'?"
    },
    {
        "role": "Assistant",
        "content": "search_advice(query = 'love') "
    },
    {
        "role": "Output",
        "content": "..."
    }
]
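The System message is simply the tools' Python signatures and docstrings concatenated in "Function:" blocks. A minimal sketch of how such a prompt could be assembled from real Python functions with the standard-library inspect module — render_tool is a hypothetical helper of ours (the authors' formatting script is not published), so verify its output against the example above before relying on it:

import inspect

def random_advice():
    """
    Returns a random advice slip as a slip object.
    """

def render_tool(fn) -> str:
    # One "Function:" block: a signature line plus the re-indented docstring.
    sig = str(inspect.signature(fn)).replace(": ", ":")  # the data writes 'query:str' without a space
    doc = inspect.getdoc(fn) or ""
    body = "\n".join("    " + line if line else "" for line in doc.splitlines())
    return f'Function:\ndef {fn.__name__}{sig}:\n    """\n{body}\n    """\n'

system_content = "\n".join(render_tool(fn) for fn in (random_advice,))
data = [{"role": "System", "content": system_content}]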
Chat Template
The chat template is:
{% for message in messages %}{{message['role'] + ': ' + message['content']}}{% if loop.last %}{% if add_generation_prompt %}{{ '\nAssistant:' }}{% else %}{{ '</s>'}}{% endif %}{% else %}{{ '\n' }}{% endif %}{% endfor %}
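Applied to the example conversation above with add_generation_prompt=True, the template prefixes each message with its role and ends with a bare Assistant: prompt for the model to complete. Illustratively (System content abbreviated):

System: Function:
def random_advice():
    """
    Returns a random advice slip as a slip object.
    """
...
User: Could you give me some advice about 'love'?
Assistant: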
Inference
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "Junjie-Ye/TL-CodeLLaMA-2"
data = [
    {
        "role": "System",
        "content": "Function:\ndef random_advice():\n    \"\"\"\n    Returns a random advice slip as a slip object.\n    \"\"\"\n\nFunction:\ndef advice_by_id(slip_id:str):\n    \"\"\"\n    If an advice slip is found with the corresponding {slip_id}, a slip object is returned.\n\n    Args:\n        slip_id (string): The unique ID of this advice slip.\n    \"\"\"\n\nFunction:\ndef search_advice(query:str):\n    \"\"\"\n    If an advice slip is found, containing the corresponding search term in {query}, an array of slip objects is returned inside a search object.\n\n    Args:\n        query (string): The search query provided.\n    \"\"\"\n\nFunction:\ndef ask_to_user(question:str):\n    \"\"\"\n    You can ask user for guidance when you think you need more information to handle the task, but you should use this tool as less as you can.\n\n    Args:\n        question (string): The question you want to ask to user.\n    \"\"\"\n\nFunction:\ndef finish(answer:str):\n    \"\"\"\n    Finish the task and give your answer.\n\n    Args:\n        answer (string): Your answer for the task.\n    \"\"\"\n\n"
    },
    {
        "role": "User",
        "content": "Could you give me some advice about 'love'?"
    }
]
chat_template = "{% for message in messages %}{{message['role'] + ': ' + message['content']}}{% if loop.last %}{% if add_generation_prompt %}{{ '\nAssistant:' }}{% else %}{{ '</s>'}}{% endif %}{% else %}{{ '\n' }}{% endif %}{% endfor %}"
# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
).eval()

tokenizer = AutoTokenizer.from_pretrained(
    model_path,
    padding_side="left",
    trust_remote_code=True
)
if tokenizer.pad_token_id is None:
    tokenizer.pad_token_id = tokenizer.eos_token_id
# Render the prompt text and tokenize it
text = tokenizer.apply_chat_template(
    data,
    tokenize=False,
    chat_template=chat_template,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", padding=True).to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
)
# Strip the prompt tokens so only the newly generated text remains
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(response)
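The model answers with a Python-style call such as search_advice(query = 'love'). One way to turn that string into a function name and keyword arguments is the standard-library ast module; parse_tool_call below is a hypothetical helper of ours, not part of the released code:

import ast

def parse_tool_call(text: str):
    """Parse a call like "search_advice(query = 'love')" into (name, kwargs)."""
    call = ast.parse(text.strip(), mode="eval").body
    if not isinstance(call, ast.Call) or not isinstance(call.func, ast.Name):
        raise ValueError(f"not a tool call: {text!r}")
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
    return call.func.id, kwargs

name, kwargs = parse_tool_call("search_advice(query = 'love') ")
print(name, kwargs)  # search_advice {'query': 'love'}

After executing the tool, append its return value to data as a {"role": "Output", "content": ...} message and generate again; the loop ends when the model calls finish.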