---
license: cc-by-nc-4.0
---

# Octopus Planner: On-device Language Model for Planner-Action Agents Framework

We're thrilled to introduce the Octopus Planner, the latest breakthrough in on-device language models from Nexa AI. Developed for the Planner-Action Agents Framework, Octopus Planner leverages state-of-the-art technology to enhance AI agents' decision-making directly on edge devices. By enabling rapid and efficient planning and action execution without cloud connectivity, this model, together with [Octopus-V2](https://huggingface.co/NexaAIDev/Octopus-v2), runs locally on edge devices to support AI-agent use cases.

### Key Features of Octopus Planner:

- **Efficient Planning**: Utilizes the Phi-3 Mini architecture with only 3.8 billion parameters for high efficiency and low power consumption.
- **Enhanced Accuracy**: Achieves a planning success rate of 97%, providing reliable and effective performance.
- **On-device Operation**: Designed for edge devices, ensuring fast response times and enhanced privacy by processing data locally.
- **Cost-Effective**: Reduces operational costs by minimizing the required key-value cache, which also improves battery life.
- **Fine-tuned**: Extensively fine-tuned on specialized tasks to ensure high accuracy and contextual understanding.

## Innovative Framework

The Octopus Planner introduces a specialized Planner and Action Agents Framework:

- **Dual Model Architecture**: Separates planning and action, allowing for specialized optimization and improved scalability (see the sketch after this list).
- **Focused Training**: Employs fine-tuning over traditional long prompting, improving efficiency without sacrificing accuracy.
- **Comprehensive Benchmarking**: Includes rigorous in-domain testing to validate the model's effectiveness in real-world scenarios.
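To make the dual-model flow concrete, here is a minimal, hypothetical sketch of how the planner checkpoint could be chained with an action model such as Octopus-V2 on a device. The `load`/`run` helpers, the action-model prompt format, and the assumption that the planner emits one sub-task per line are illustrative only, not part of the released API.

```python
# Hypothetical planner -> action pipeline. Prompt formats and the
# "one sub-task per line" planner output format are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def load(model_id, tokenizer_id):
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
    return model, AutoTokenizer.from_pretrained(tokenizer_id)

def run(model, tokenizer, prompt):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_length=1024, do_sample=False)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Planner (this repo) and action model (Octopus-V2).
planner, planner_tok = load("nexa-collaboration/planning-phi3-128k-full",
                            "microsoft/Phi-3-mini-128k-instruct")
action, action_tok = load("NexaAIDev/Octopus-v2", "NexaAIDev/Octopus-v2")

query = "Take a screenshot of the current slide and email it to all participants"
plan = run(planner, planner_tok, f"<|user|>{query}<|end|><|assistant|>")

# Assumed: each non-empty line of the plan is one sub-task for the action model.
for step in filter(None, (line.strip() for line in plan.splitlines())):
    print(run(action, action_tok, step))
```

The idea is that the planner decomposes the user query into sub-tasks and the action model maps each sub-task to a function call, so each model can be optimized for its own role.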


## Example Usage

Run the code below to use Octopus Planner for a given question:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nexa-collaboration/planning-phi3-128k-full"
tokenizer_id = "microsoft/Phi-3-mini-128k-instruct"

# Load the planner model and its tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)

question = "Find my presentation for tomorrow's meeting, connect to the conference room projector via Bluetooth, increase the screen brightness, take a screenshot of the final summary slide, and email it to all participants"

# Wrap the question in the Phi-3 chat format expected by the planner.
inputs = f"<|user|>{question}<|end|><|assistant|>"
input_ids = tokenizer(inputs, return_tensors="pt").to(model.device)

outputs = model.generate(
    input_ids=input_ids["input_ids"],
    max_length=1024,
    do_sample=False)

res = tokenizer.decode(outputs.tolist()[0])
print(f"=== inference result ===\n{res}")
```

## Training Data

We wrote 10 Android API descriptions used to train the models; see this file for details. Below is one example of an Android API description:

```
def send_email(recipient, title, content):
    """
    Sends an email to a specified recipient with a given title and content.

    Parameters:
    - recipient (str): The email address of the recipient.
    - title (str): The subject line of the email. This is a brief summary or title of the email's purpose or content.
    - content (str): The main body text of the email. It contains the primary message, information, or content that is intended to be communicated to the recipient.
    """
```

## Contact Us

For support or to provide feedback, please [contact us](mailto:octopus@nexa4ai.com).

## License and Citation

Refer to our [license page](https://www.nexa4ai.com/licenses/v2) for usage details. Please cite our work using the reference below for any academic or research purposes.

```
@article{nexa2024octopusplanner,
  title={Planner-Action Agents Framework for On-device Small Language Models},
  author={Nexa AI Team},
  journal={ArXiv},
  year={2024},
  volume={abs/2404.11459}
}
```