
Atom

a suite of finetuned LLMs for atomically precise function calling 🧪

✅ Massive function calling dataset of over 20M samples.

✅ First model: Atom-Z-Tiny, a Zephyr finetune trained on 100k samples

✅ Vision function calling coming soon

Usage

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Use the GPU if one is available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("kye/Atom-Z-Tiny-7B")

# trust_remote_code allows loading any custom model code bundled with the checkpoint
model = AutoModelForCausalLM.from_pretrained(
    "kye/Atom-Z-Tiny-7B",
    trust_remote_code=True,
).to(device)

task = """[INST] <<SYS>>
<function>Available functions:
<function>{
    "name": "generate_password",
    "description": "Generate a random password with specified criteria",
    "parameters": {
        "type": "object",
        "properties": {
            "length": {
                "type": "integer",
                "description": "The length of the password"
            },
            "include_numbers": {
                "type": "boolean",
                "description": "Include numbers in the password"
            },
            "include_special_characters": {
                "type": "boolean",
                "description": "Include special characters in the password"
            }
        },
        "required": [
            "length"
        ]
    }
}
<</SYS>>

I need a new password. Can you generate one for me? [/INST]
"""

# Tokenize the prompt and generate a completion; max_new_tokens bounds only the
# generated tokens, and do_sample=True is needed for temperature to take effect
input_ids = tokenizer.encode(task, return_tensors="pt").to(device)
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7).cpu()
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
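
The model returns plain text, so acting on a function call is left to the caller. Below is a minimal sketch of one way to do that. It assumes the reply contains a JSON object of the form {"name": ..., "arguments": {...}} after the [/INST] tag (the exact output format of Atom-Z-Tiny is not documented here), and the local generate_password, FUNCTIONS, and dispatch names are purely illustrative.

import json
import re
import secrets
import string

def generate_password(length, include_numbers=False, include_special_characters=False):
    # Hypothetical local implementation of the function declared in the prompt
    alphabet = string.ascii_letters
    if include_numbers:
        alphabet += string.digits
    if include_special_characters:
        alphabet += string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

FUNCTIONS = {"generate_password": generate_password}

def dispatch(model_reply):
    # The decoded output includes the prompt, so only look at text after [/INST].
    # Assumes the model emits a call such as
    # {"name": "generate_password", "arguments": {"length": 12, "include_numbers": true}}
    reply = model_reply.split("[/INST]")[-1]
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        return None  # no function call found; treat the reply as plain text
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    fn = FUNCTIONS[call["name"]]
    return fn(**call.get("arguments", {}))

# output_text comes from the generation snippet above
print(dispatch(output_text))

If the model answers in prose instead of emitting a call, dispatch simply returns None and the reply can be shown to the user as-is.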

Contribute

To contribute, please join the Agora Discord:

https://discord.gg/dMPPswVcZ8

All of our operations take place there, and you can learn how to contribute to models that advance humanity!

We're also granting GPU compute to researchers working on cool projects, so share yours!
