---

license: cc-by-nc-4.0
---

# Octo-planner: On-device Language Model for Planner-Action Agents Framework

We're thrilled to introduce Octo-planner, the latest breakthrough in on-device language models from Nexa AI. Developed for the Planner-Action Agents Framework, Octo-planner enables rapid and efficient planning without the need for cloud connectivity. Together with [Octopus-V2](https://huggingface.co/NexaAIDev/Octopus-v2), it can run locally on edge devices to support AI agent use cases.

### Key Features of Octo-planner:
- **Efficient Planning**: Utilizes a fine-tuned planner model based on Phi-3 Mini (2.51 billion parameters) for high efficiency and low power consumption.
- **Agent Framework**: Separates planning and action, allowing for specialized optimization and improved scalability (see the sketch after this list).
- **Enhanced Accuracy**: Achieves a planning success rate of 98.1% on our benchmark dataset, providing reliable and effective performance.
- **On-device Operation**: Designed for edge devices, ensuring fast response times and enhanced privacy by processing data locally.
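
As a rough illustration of the planner/action split, the sketch below wires Octo-planner together with Octopus-V2 in a two-stage loop. This is a minimal sketch under assumptions: the prompt handed to Octopus-V2 and the line-per-step plan parsing are placeholders for illustration, not an official interface.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def load(model_id):
    # Shared loading helper for both stages
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
    return tokenizer, model

def generate(tokenizer, model, prompt, max_length=1024):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(input_ids=inputs["input_ids"], max_length=max_length, do_sample=False)
    return tokenizer.decode(outputs.tolist()[0])

# Stage 1: the on-device planner decomposes the query into steps
planner_tok, planner = load("NexaAIDev/octopus-planning")
question = "Take a screenshot of the current slide and email it to all participants"
plan = generate(planner_tok, planner, f"<|user|>{question}<|end|><|assistant|>")

# Assumed parsing: treat each non-empty line of the generated plan as one step
steps = [s.strip() for s in plan.split("<|assistant|>")[-1].split("<|end|>")[0].splitlines() if s.strip()]

# Stage 2: each step is handed to the action model (the raw-step prompt here is a placeholder)
action_tok, action = load("NexaAIDev/Octopus-v2")
for step in steps:
    print(generate(action_tok, action, step))
```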


## Example Usage
Below is a demo of Octo-planner:
<p align="center" width="100%">
<a><img src="1-demo.png" alt="ondevice" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
</p>


Run the code below to use the Octopus Planner for a given question:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the planner model and tokenizer
model_id = "NexaAIDev/octopus-planning"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Wrap the user question in the planner's chat template
question = "Find my presentation for tomorrow's meeting, connect to the conference room projector via Bluetooth, increase the screen brightness, take a screenshot of the final summary slide, and email it to all participants"
inputs = f"<|user|>{question}<|end|><|assistant|>"
input_ids = tokenizer(inputs, return_tensors="pt").to(model.device)

# Greedy decoding for a deterministic plan
outputs = model.generate(
    input_ids=input_ids["input_ids"],
    max_length=1024,
    do_sample=False)
res = tokenizer.decode(outputs.tolist()[0])
print(f"=== inference result ===\n{res}")
```
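
The decoded text includes the echoed prompt. A small helper like the one below, applied to `res` from the snippet above, strips the template markers and lists the plan steps; the assumption that each step sits on its own line is ours for illustration, not a documented output format.

```python
def extract_plan(decoded: str) -> list[str]:
    # Keep only the text after the assistant tag and before the end-of-turn token
    plan = decoded.split("<|assistant|>")[-1].split("<|end|>")[0]
    # Assumed convention: one plan step per non-empty line
    return [line.strip() for line in plan.splitlines() if line.strip()]

for i, step in enumerate(extract_plan(res), 1):
    print(f"Step {i}: {step}")
```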

## Training Data
We wrote 10 Android API descriptions used to train the model; see this file for details. Below is one example of an Android API description:
```python
def send_email(recipient, title, content):
    """
    Sends an email to a specified recipient with a given title and content.

    Parameters:
    - recipient (str): The email address of the recipient.
    - title (str): The subject line of the email. This is a brief summary or title of the email's purpose or content.
    - content (str): The main body text of the email. It contains the primary message, information, or content that is intended to be communicated to the recipient.
    """
```
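
For a concrete (hypothetical) sense of how such a description might be turned into a planner training sample, the sketch below pairs an invented query and plan with the chat template used in the usage example; the query, plan wording, and pairing scheme are illustrative assumptions, not the released data format.

```python
# Hypothetical example only: the query and plan text are invented for illustration.
api_name = "send_email"
query = "Email the final summary slide to all meeting participants"
plan = f"Step 1: Use {api_name} to send the slide to every participant's address."

# Reuse the <|user|>/<|assistant|> template from the inference example above
training_example = f"<|user|>{query}<|end|><|assistant|>{plan}<|end|>"
print(training_example)
```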

## Contact Us
For support or to provide feedback, please [contact us](mailto:octopus@nexa4ai.com).

## License and Citation
Refer to our [license page](https://www.nexa4ai.com/licenses/v2) for usage details. Please cite our work using the reference below for any academic or research purposes.
```
@article{nexa2024octopusplanner,
  title={Planner-Action Agents Framework for On-device Small Language Models},
  author={Nexa AI Team},
  journal={ArXiv},
  year={2024},
  volume={abs/2404.11459}
}
```