ayan-sh003 committed
Commit 2bf72df • Parent(s): 2e1f982
Create README.md

README.md ADDED
@@ -0,0 +1,47 @@
---
license: mit
datasets:
- NousResearch/hermes-function-calling-v1
base_model: microsoft/Phi-3.5-mini-instruct
---

# phi3.5-phunction-calling Model Card

## Model Overview

**Model Name:** phi3.5-phunction-calling

**Description:** This model is a fine-tuned version of microsoft/Phi-3.5-mini-instruct, trained on the NousResearch/hermes-function-calling-v1 dataset and designed specifically for function-calling tasks. It has been optimized to understand and emit function calls accurately and efficiently.
## Intended Use

**Primary Use Case:** This model is intended for applications where function calling is a critical component, such as automated assistants, code generation, and API interaction.

**Limitations:** Although the model performs well on function-calling prompts, it may still produce incorrect or malformed calls for complex or ambiguous requests.
## Usage

The snippet below loads the model with Unsloth, applies the phi-3 chat template to a ShareGPT-style conversation, and generates a function call:

```py
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template

# Load the fine-tuned model and tokenizer (adjust model_name if the repo id differs)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "ayan-sh003/phi3.5-phunction-calling",
    max_seq_length = 2048,
    load_in_4bit = True,
)

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "phi-3", # Supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth
    mapping = {"role" : "from", "content" : "value", "user" : "human", "assistant" : "gpt"}, # ShareGPT style
)

FastLanguageModel.for_inference(model) # Enable native 2x faster inference

messages = [
    {"from": "system", "value": "You are a function calling AI model. You are provided with function signatures within <tools> </tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions.\n<tools>\n[{'type': 'function', 'function': {'name': 'search_recipes', 'description': 'Searches for recipes based on given criteria.', 'parameters': {'type': 'object', 'properties': {'cuisine': {'type': 'string', 'description': 'The type of cuisine to search for.'}, 'dietary_restriction': {'type': 'string', 'description': 'Any dietary restrictions to consider.', 'enum': ['vegetarian', 'vegan', 'gluten-free', 'none']}}, 'required': ['cuisine']}}}, {'type': 'function', 'function': {'name': 'get_recipe_details', 'description': 'Retrieves detailed information about a specific recipe.', 'parameters': {'type': 'object', 'properties': {'recipe_id': {'type': 'string', 'description': 'The unique identifier for the recipe.'}}, 'required': ['recipe_id']}}}, {'type': 'function', 'function': {'name': 'calculate_nutrition', 'description': 'Calculates nutritional information for a given recipe.', 'parameters': {'type': 'object', 'properties': {'recipe_id': {'type': 'string', 'description': 'The unique identifier for the recipe.'}, 'serving_size': {'type': 'integer', 'description': 'The number of servings to calculate nutrition for.', 'default': 1}}, 'required': ['recipe_id']}}}]\n</tools>\nFor each function call return a json object with function name and arguments within <tool_call> </tool_call> tags with the following schema:\n<tool_call>\n{'arguments': <args-dict>, 'name': <function-name>}\n</tool_call>\n"},
    {"from": "human", "value": "I'm planning a dinner party and I'm looking for some Italian recipes to try. Can you help me find some vegetarian Italian dishes? Once we have a list, I'd like to get more details about the first recipe in the search results. Finally, I want to calculate the nutritional information for that recipe, assuming I'm cooking for 4 people. Can you please perform these tasks for me?"},
]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(input_ids = inputs, max_new_tokens = 256, use_cache = True)
tokenizer.batch_decode(outputs)
```
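
The system prompt above asks the model to wrap each call in `<tool_call> ... </tool_call>` tags with a `{'arguments': <args-dict>, 'name': <function-name>}` payload. As a minimal sketch (not part of the original card), the decoded output could be parsed like this; the helper name `extract_tool_calls` is illustrative, and it assumes the generation follows that schema:

```py
import ast
import re

def extract_tool_calls(decoded_text):
    """Collect {'arguments': ..., 'name': ...} payloads from <tool_call> tags.

    Assumes the model follows the schema in the system prompt; the payloads
    use single-quoted (Python-literal) dicts, so ast.literal_eval is used
    instead of json.loads.
    """
    calls = []
    for block in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", decoded_text, re.DOTALL):
        try:
            calls.append(ast.literal_eval(block))
        except (ValueError, SyntaxError):
            pass  # skip malformed calls rather than failing the whole parse
    return calls

# e.g. extract_tool_calls(tokenizer.batch_decode(outputs)[0]) might yield
# [{'arguments': {'cuisine': 'Italian', 'dietary_restriction': 'vegetarian'}, 'name': 'search_recipes'}]
```

Each returned dict can then be dispatched to the matching local function or API, and the result appended to the conversation for the next turn.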