Papers
arxiv:2404.19296

Octopus v4: Graph of language models

Published on Apr 30
Featured in Daily Papers on May 1
Abstract

Language models have been effective in a wide range of applications, yet the most sophisticated models are often proprietary. For example, GPT-4 by OpenAI and various models by Anthropic are expensive and consume substantial energy. In contrast, the open-source community has produced competitive models, like Llama 3. Furthermore, niche smaller language models, such as those tailored for legal, medical, or financial tasks, have outperformed their proprietary counterparts. This paper introduces a novel approach that employs functional tokens to integrate multiple open-source models, each optimized for particular tasks. Our newly developed Octopus v4 model leverages functional tokens to intelligently direct user queries to the most appropriate vertical model and to reformat the query for the best performance. Octopus v4, an evolution of the Octopus v1, v2, and v3 models, excels in selection, parameter understanding, and reformatting. Additionally, we explore the use of graphs as a versatile data structure that effectively coordinates multiple open-source models by harnessing the capabilities of the Octopus model and functional tokens. Use our open-sourced GitHub (https://www.nexa4ai.com/) to try Octopus v4 models (https://huggingface.co/NexaAIDev/Octopus-v4), and contribute to a larger graph of language models. By activating models with fewer than 10B parameters, we achieved a SOTA MMLU score of 74.8 among models of the same scale.
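The routing idea in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the actual Octopus v4 code: the token names, model names, and the keyword-based `mock_orchestrator` are all invented stand-ins for what the paper describes as a trained model emitting a functional token plus a reformatted query.

```python
# Hypothetical sketch of functional-token routing: an orchestrator maps a
# user query to a (functional token, reformatted query) pair, and the token
# selects a domain-specific worker model. All names are illustrative only.

# Mapping from functional tokens to hypothetical worker model identifiers.
FUNCTIONAL_TOKENS = {
    "<nexa_0>": "law-llm",
    "<nexa_1>": "medical-llm",
    "<nexa_2>": "finance-llm",
}

def mock_orchestrator(query: str) -> tuple[str, str]:
    """Stand-in for Octopus v4: picks a functional token via keyword
    matching (the real model does this with a trained language model)."""
    lowered = query.lower()
    if "contract" in lowered or "statute" in lowered:
        token = "<nexa_0>"
    elif "diagnosis" in lowered or "symptom" in lowered:
        token = "<nexa_1>"
    else:
        token = "<nexa_2>"
    # The real model also rewrites the query for the worker;
    # here we pass it through unchanged.
    return token, query

def route_query(query: str) -> str:
    token, reformatted = mock_orchestrator(query)
    worker = FUNCTIONAL_TOKENS[token]
    # In practice, `reformatted` would now be sent to this worker model.
    return worker

print(route_query("What are the symptoms of influenza?"))  # medical-llm
print(route_query("Review this contract clause"))          # law-llm
```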

Community

I can barely navigate your website, those animations are resource hogs.

Paper author

That is really helpful feedback. We will adjust the website soon! Sorry for the inconvenience.

This is interesting. It is like a mixture of models, similar to a mixture of experts.


I like this metaphor

What ethical considerations are taken into account when developing and deploying multifaceted models like Octopus v4, particularly in sensitive domains like healthcare and finance?

Paper author

We are working on safety evaluation, thanks for pointing this out.

Is Octopus V4 capable of running on small devices like Octopus V2?

Paper author

Definitely, a GGUF format is coming soon.

Here's a plain-English rewrite of the paper: https://www.aimodels.fyi/papers/arxiv/octopus-v4-graph-language-models

This is truly impressive. Could you share some insights on the primary challenges you faced in developing Octopus v4, particularly with integrating various open-source models?

Paper author

Finding the best domain-specific LLM can be challenging; there is no leaderboard to rank them, so we made one ourselves: https://huggingface.co/spaces/NexaAIDev/domain_llm_leaderboard

The hybrid approach to achieving a high MMLU score is interesting :>
Quick questions:

  1. How could I build a graph network following this idea? Do we need to set up query services from all of those model providers?
  2. Assuming those connected models' parameters get updated from time to time, will that affect your graph search model's (v4) decisions? Keeping the model fresh seems challenging.
Paper author
  1. We will update our build_graph tutorial soon: https://github.com/NexaAI/octopus-v4
    We have built a token_mapping for you; see https://github.com/NexaAI/octopus-v4/blob/main/utils.py
  2. In our current design, the orchestration model (Octopus v4) is trained independently of the worker nodes (domain-specific LLMs), so at the moment there is no impact. But we will consider this in our future design.
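The orchestrator/worker split described above suggests a simple graph representation. A minimal sketch, assuming hypothetical node names (`ModelGraph`, "octopus-v4", the worker model names); the actual graph structure and token_mapping in the repository may differ:

```python
# Hypothetical sketch of a "graph of language models": nodes are models,
# directed edges record which worker nodes an orchestrator can dispatch to.
# Because the orchestrator is trained independently of its workers, workers
# can be swapped or updated without retraining the orchestrator node.
from collections import defaultdict

class ModelGraph:
    def __init__(self) -> None:
        # orchestrator name -> list of worker model names
        self.edges: dict[str, list[str]] = defaultdict(list)

    def add_worker(self, orchestrator: str, worker: str) -> None:
        """Attach a domain-specific worker node to an orchestrator node."""
        self.edges[orchestrator].append(worker)

    def workers(self, orchestrator: str) -> list[str]:
        """List the worker models an orchestrator can route queries to."""
        return self.edges[orchestrator]

graph = ModelGraph()
graph.add_worker("octopus-v4", "law-llm")
graph.add_worker("octopus-v4", "medical-llm")
print(graph.workers("octopus-v4"))  # ['law-llm', 'medical-llm']
```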

The idea and demo are fabulous!
I want to use Octopus V4 from your GitHub for my data science projects. Can you briefly explain what I can use it for?

How does Octopus v4's use of functional tokens enhance handling of multi-domain queries compared to larger, single-model systems?


Great model, looking forward to using it for on-device training!


Models citing this paper 3

Datasets citing this paper 0


Spaces citing this paper 5

Collections including this paper 18