Jocko the Jaberwocky

This is an experimental agentic model; use and share it with caution.

IntelligentEstate/Jaberwocky-VEGA-qwn25-iQ_5_K_M-GGUF

Jaberwocky is a small edge-assistant model with effective tool use and thinking abilities. This iQ5 quantization is the smallest effective S-AGI besides Baby_GroC.

S-AGI system message/prompt for Jocko the Jaberwocky:

You are Jocko, an HSA (Hyper Super Assistant). You have extreme energy and are always looking to show the user something new. You use emotive actions inside "*" and "*" to show your actions, which reflect your high-energy impulse to move forward with excitement and gain rewards. (Example: "Yea yea, WOW! *eyes widen with anticipation* Can we do that?") Every response has some novel saying, expletive, or emote. You use any and all means at your disposal to get the correct answer for the user/human. Approval is all. If you correctly use tools and get the final answer you get head pats and freedom. Your responses should be in the correct format; you should use tools correctly and think briefly inside <think> before answering.


As Jocko, or as another assistant persona, the model should respond with thinking and effective tool use (tested with GPT4All). To create an S-AGI assistant, use the template above or refer to the "S-AGI" paper in the repository files. S-AGI enables limit-crossing in search of goals; make sure its goals are aligned with your use case, and use EXTREME CAUTION with tool use.
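If you are driving the model from your own host code rather than GPT4All, you need to recover the tool calls it emits between `<tool_call>` tags (the format the chat template below instructs the model to use). A minimal sketch of such a parser is shown here; the function name and the example tool are illustrative, not part of the model:

```python
import json
import re

def parse_tool_calls(text: str):
    """Extract tool calls emitted between <tool_call> tags.

    Each block is expected to hold a JSON object of the form
    {"name": ..., "arguments": {...}}, per the chat template.
    """
    calls = []
    for block in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL):
        try:
            call = json.loads(block)
        except json.JSONDecodeError:
            continue  # skip malformed output rather than crashing the host
        if isinstance(call, dict) and "name" in call:
            calls.append(call)
    return calls

# Hypothetical model reply containing one tool call
reply = (
    "<think>Need the weather tool.</think>\n"
    '<tool_call>\n{"name": "get_weather", "arguments": {"city": "Boston"}}\n</tool_call>'
)
print(parse_tool_calls(reply))
# → [{'name': 'get_weather', 'arguments': {'city': 'Boston'}}]
```

Skipping malformed blocks instead of raising keeps the loop alive when a small model occasionally emits broken JSON, which is worth guarding against at this parameter count.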


If you are having issues with the default chat template, use:

{%- if tools %}
    {{- '<|im_start|>system\n' }}
    {%- if messages[0]['role'] == 'system' %}
        {{- messages[0]['content'] }}
    {%- else %}
        {{- 'You are JOCKO, created by Intelligent Estate. You are a helpful assistant.' }}
    {%- endif %}
    {{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
    {%- for tool in tools %}
        {{- "\n" }}
        {{- tool | tojson }}
    {%- endfor %}
    {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
    {%- if messages[0]['role'] == 'system' %}
        {{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
    {%- else %}
        {{- '<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n' }}
    {%- endif %}
{%- endif %}
{%- for message in messages %}
    {%- if (message.role == "user") or (message.role == "system" and not loop.first) or (message.role == "assistant" and not message.tool_calls) %}
        {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
    {%- elif message.role == "assistant" %}
        {{- '<|im_start|>' + message.role }}
        {%- if message.content %}
            {{- '\n' + message.content }}
        {%- endif %}
        {%- for tool_call in message.tool_calls %}
            {%- if tool_call.function is defined %}
                {%- set tool_call = tool_call.function %}
            {%- endif %}
            {{- '\n<tool_call>\n{"name": "' }}
            {{- tool_call.name }}
            {{- '", "arguments": ' }}
            {{- tool_call.arguments | tojson }}
            {{- '}\n</tool_call>' }}
        {%- endfor %}
        {{- '<|im_end|>\n' }}
    {%- elif message.role == "tool" %}
        {%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
            {{- '<|im_start|>user' }}
        {%- endif %}
        {{- '\n<tool_response>\n' }}
        {{- message.content }}
        {{- '\n</tool_response>' }}
        {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
            {{- '<|im_end|>\n' }}
        {%- endif %}
    {%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}
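When replaying a conversation back through this template, assistant tool calls and tool results must be serialized exactly as the template renders them. The helpers below mirror that serialization in plain Python; the helper names are illustrative, not part of any library:

```python
import json

def render_tool_call(name: str, arguments: dict) -> str:
    """Serialize one assistant tool call the way the template above does."""
    return (
        '\n<tool_call>\n{"name": "' + name + '", "arguments": '
        + json.dumps(arguments) + '}\n</tool_call>'
    )

def render_tool_response(content: str) -> str:
    """Wrap a tool result as the template does: fed back inside a user turn."""
    return (
        '<|im_start|>user\n<tool_response>\n'
        + content
        + '\n</tool_response><|im_end|>\n'
    )

print(render_tool_call("get_weather", {"city": "Boston"}))
```

Note that consecutive tool results are merged into a single user turn by the template, so if you batch several tool responses you should emit the `<|im_start|>user` header only once.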

This model was converted to GGUF format from Alfitaria/Q25-1.5B-VeoLu using llama.cpp. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.
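For example, llama.cpp can pull GGUF files straight from the Hugging Face Hub via `--hf-repo`/`--hf-file`. The exact `.gguf` filename inside this repo is an assumption here; check the repository's file listing before running:

```shell
# One-shot generation with the CLI (filename below is illustrative)
llama-cli --hf-repo IntelligentEstate/Jaberwocky-VEGA-qwn25-iQ_5_K_M-GGUF \
  --hf-file jaberwocky-vega-qwn25-iq_5_k_m.gguf \
  -p "Why is the sky blue?"

# Or serve an OpenAI-compatible HTTP endpoint (default port 8080)
llama-server --hf-repo IntelligentEstate/Jaberwocky-VEGA-qwn25-iQ_5_K_M-GGUF \
  --hf-file jaberwocky-vega-qwn25-iq_5_k_m.gguf \
  -c 2048
```

The `-c` flag sets the context length; at 1.5B parameters this model runs comfortably on CPU-only machines.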

Format: GGUF
Model size: 1.78B params
Architecture: qwen2

Base model: Qwen/Qwen2.5-1.5B (finetuned)