Tags: Text Generation · MLX · Safetensors · bailing_hybrid · mixture-of-experts · hybrid-attention · mla · lightning-attention · jangtq · osaurus · bailing · ling · apple-silicon · conversational · custom_code
Instructions for using OsaurusAI/Ling-2.6-flash-JANGTQ with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- MLX
How to use OsaurusAI/Ling-2.6-flash-JANGTQ with MLX:
```python
# Make sure mlx-lm is installed:
# pip install --upgrade mlx-lm

# Generate text with mlx-lm
from mlx_lm import load, generate

model, tokenizer = load("OsaurusAI/Ling-2.6-flash-JANGTQ")

prompt = "Write a story about Einstein"
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(model, tokenizer, prompt=prompt, verbose=True)
```
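If you want tokens as they are produced rather than a single final string, mlx-lm also exports `stream_generate`. A minimal sketch (the `response.text` field assumes a recent mlx-lm release):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("OsaurusAI/Ling-2.6-flash-JANGTQ")
messages = [{"role": "user", "content": "Write a story about Einstein"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as soon as it is generated.
for response in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(response.text, end="", flush=True)
print()
```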
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
- Pi
How to use OsaurusAI/Ling-2.6-flash-JANGTQ with Pi:
Start the MLX server
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "OsaurusAI/Ling-2.6-flash-JANGTQ"
```
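Before configuring Pi, you can confirm the server is reachable. It listens on port 8080 by default; the `/v1/models` listing below assumes a recent mlx-lm release that exposes it:

```bash
# Sanity check: should return a JSON list that includes the model id
curl http://localhost:8080/v1/models
```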
Configure the model in Pi
```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "mlx-lm": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "OsaurusAI/Ling-2.6-flash-JANGTQ" }
      ]
    }
  }
}
```

Run Pi
```bash
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use OsaurusAI/Ling-2.6-flash-JANGTQ with Hermes Agent:
Start the MLX server
```bash
# Install MLX LM:
uv tool install mlx-lm

# Start a local OpenAI-compatible server:
mlx_lm.server --model "OsaurusAI/Ling-2.6-flash-JANGTQ"
```
Configure Hermes
```bash
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default OsaurusAI/Ling-2.6-flash-JANGTQ
```
Run Hermes
```bash
hermes
```
- MLX LM
How to use OsaurusAI/Ling-2.6-flash-JANGTQ with MLX LM:
Generate or start a chat session
```bash
# Install MLX LM
uv tool install mlx-lm

# Interactive chat REPL
mlx_lm.chat --model "OsaurusAI/Ling-2.6-flash-JANGTQ"
```
Run an OpenAI-compatible server
```bash
# Install MLX LM
uv tool install mlx-lm

# Start the server
mlx_lm.server --model "OsaurusAI/Ling-2.6-flash-JANGTQ"

# Call the OpenAI-compatible server with curl
# (mlx_lm.server listens on port 8080 by default)
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OsaurusAI/Ling-2.6-flash-JANGTQ",
    "messages": [
      {"role": "user", "content": "Hello"}
    ]
  }'
```
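Because the server speaks the OpenAI chat-completions API, any OpenAI-compatible client can call it. A minimal sketch with the official `openai` Python package (the package itself and the dummy API key are assumptions about your local setup, not part of this repo):

```python
# pip install openai
from openai import OpenAI

# Point the client at the local mlx_lm.server (default port 8080).
# The key is unused by the server, but the client requires a value.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="OsaurusAI/Ling-2.6-flash-JANGTQ",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```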
The repository's tokenizer_config.json (below) defines the special tokens and the chat template used by `apply_chat_template`:

```json
{
  "add_bos_token": false,
  "add_eos_token": false,
  "bos_token": "<|startoftext|>",
  "chat_template": "{% set thinking_option = 'off' %}\n{{- '<role>SYSTEM</role>' }}\n{%- if tools %}\n {%- if messages[0].role == 'system' %}\n {{- messages[0].content + '\\n' }}\n {%- endif %}\n {{- \"# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within <tools></tools> XML tags:\\n<tools>\" }}\n {%- for tool in tools %}\n {{- \"\\n\" }}\n {{- tool | tojson }}\n {%- endfor %}\n {{- \"\\n</tools>\\n\\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\\n<tool_call>\\n{\\\"name\\\": <function-name>, \\\"arguments\\\": <args-json-object>}\\n</tool_call>\\n\" }}\n {{- 'detailed thinking ' + thinking_option + '<|role_end|>' }}\n{%- else %}\n {%- if messages[0].role == 'system' %}\n {%- if 'detailed thinking on' in messages[0].content or 'detailed thinking off' in messages[0].content %}\n {{- messages[0].content + '<|role_end|>' }}\n {%- else %}\n {{- messages[0].content + '\\n' }}\n {{- 'detailed thinking ' + thinking_option + '<|role_end|>' }}\n {%- endif %}\n {% else %}\n {{- 'detailed thinking ' + thinking_option + '<|role_end|>' }}\n {%- endif %}\n{%- endif %}\n{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}\n{%- for message in messages[::-1] %}\n {%- set index = (messages|length - 1) - loop.index0 %}\n {%- if ns.multi_step_tool and message.role == \"user\" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}\n {%- set ns.multi_step_tool = false %}\n {%- set ns.last_query_index = index %}\n {%- endif %}\n{%- endfor %}\n{%- for message in messages %}\n {%- if message.content is string %}\n {%- set content = message.content %}\n {%- else %}\n {%- set content = '' %}\n {%- endif %}\n {%- if message.role == \"user\" %}\n {{- '<role>HUMAN</role>' + message.content + '<|role_end|>' }}\n {%- elif message.role == \"system\" and not loop.first %}\n {{- '<role>SYSTEM</role>' + message.content + '<|role_end|>' }}\n {%- elif message.role == \"assistant\" %}\n {%- set reasoning_content = '' %}\n {%- if message.reasoning_content is string %}\n {%- set reasoning_content = message.reasoning_content %}\n {%- else %}\n {%- if '</think>' in content %}\n {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n {%- endif %}\n {%- endif %}\n {%- if loop.index0 > ns.last_query_index %}\n {%- if reasoning_content %}\n {{- '<role>ASSISTANT</role>' + '\\n<think>\\n' + reasoning_content.strip('\\n') + '\\n</think>\\n\\n' + content.lstrip('\\n') }}\n {%- else %}\n {{- '<role>ASSISTANT</role>' + content }}\n {%- endif %}\n {%- else %}\n {{- '<role>ASSISTANT</role>' + content }}\n {%- endif %}\n {%- if message.tool_calls %}\n {%- for tool_call in message.tool_calls %}\n {%- if (loop.first and content) or (not loop.first) %}\n {{- '\\n' }}\n {%- endif %}\n {%- if tool_call.function %}\n {%- set tool_call = tool_call.function %}\n {%- endif %}\n {{- '<tool_call>\\n{\"name\": \"' }}\n {{- tool_call.name }}\n {{- '\", \"arguments\": ' }}\n {%- if tool_call.arguments is string %}\n {{- tool_call.arguments }}\n {%- else %}\n {{- tool_call.arguments | tojson }}\n {%- endif %}\n {{- '}\\n</tool_call>' }}\n {%- endfor %}\n {%- endif %}\n {{- '<|role_end|>' }}\n {%- elif message.role == \"tool\" %}\n {%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n {{- '<role>OBSERVATION</role>' }}\n {%- endif %}\n {{- '\\n<tool_response>\\n' }}\n {{- content }}\n {{- '\\n</tool_response>' }}\n {%- if loop.last or (messages[loop.index0 + 1].role != \"tool\") %}\n {{- '<|role_end|>' }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<role>ASSISTANT</role>' }}\n{%- endif %}",
  "clean_up_tokenization_spaces": false,
  "cls_token": "[CLS]",
  "eos_token": "<|role_end|>",
  "fast_tokenizer": true,
  "gmask_token": "[gMASK]",
  "merges_file": null,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<|endoftext|>",
  "tokenizer_class": "PreTrainedTokenizerFast",
  "trust_remote_code": true,
  "vocab_file": null
}
```
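The chat template above wraps each turn in `<role>…</role>` markers and injects a `detailed thinking off` directive when the system prompt does not set one itself. A minimal sketch of inspecting the rendered prompt with transformers (assumes the tokenizer loads via AutoTokenizer; the expected output shape follows directly from the template):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "OsaurusAI/Ling-2.6-flash-JANGTQ", trust_remote_code=True
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# Per the template, this renders roughly as:
# <role>SYSTEM</role>You are a helpful assistant.
# detailed thinking off<|role_end|><role>HUMAN</role>Hello<|role_end|><role>ASSISTANT</role>
```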