---
license: apache-2.0
language:
- en
tags:
- function calling
- mistral
- llama
- open source ai
- code
- task automation
- workflow automation
library_name: nemo
pipeline_tag: conversational
---

# Atom

A suite of finetuned LLMs for atomically precise function calling 🧪

✅ Massive function calling dataset of over 20M samples.

✅ First model: Atom-Z-Tiny, a Zephyr finetune trained on 100k samples.

✅ Vision function calling coming soon.

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Select GPU if available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("kye/Atom-Z-Tiny-7B")
model = AutoModelForCausalLM.from_pretrained(
    "kye/Atom-Z-Tiny-7B",
    trust_remote_code=True,
).to(device)

input_context = "Space Robots are"
input_ids = tokenizer.encode(input_context, return_tensors="pt")

# do_sample=True is required for temperature to have any effect;
# without it, generate() decodes greedily and ignores temperature.
output = model.generate(
    input_ids.to(device),
    max_length=128,
    temperature=0.7,
    do_sample=True,
).cpu()

output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```

# Contribute

To contribute, join the Agora Discord: https://discord.gg/dMPPswVcZ8

All of our operations take place there, and you can learn how to contribute to models that advance humanity! We also grant GPU compute to researchers working on interesting projects, so share yours!
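This card does not document Atom's function-calling prompt schema or output format. As a rough sketch only, a tool-call interaction with an instruction-tuned model is often framed by describing available functions in the prompt and parsing a JSON call out of the completion. The tool definition, prompt wording, and example completion below are all illustrative assumptions, not the model's documented interface:

```python
import json

# Hypothetical tool description; Atom's actual schema is not documented here.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {"city": {"type": "string"}},
    }
]

def build_prompt(user_message: str) -> str:
    """Frame the available tools and the user request as one prompt string."""
    return (
        "You can call these functions by replying with JSON "
        '{"name": ..., "arguments": ...}:\n'
        + json.dumps(tools, indent=2)
        + f"\n\nUser: {user_message}\nAssistant:"
    )

def parse_call(completion: str) -> dict:
    """Extract the first-to-last JSON object span from the completion."""
    start = completion.index("{")
    end = completion.rindex("}") + 1
    return json.loads(completion[start:end])

prompt = build_prompt("What's the weather in Paris?")
# A completion a function-calling model might emit for this prompt:
completion = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
call = parse_call(completion)
print(call["name"], call["arguments"])
```

In a real pipeline, `completion` would come from `model.generate(...)` as in the usage example above, and the parsed call would be dispatched to the matching Python function.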