Lite-Oute-1-65M-Instruct
Lite-Oute-1-65M-Instruct is an experimental ultra-compact model in the Lite series, based on the LLaMA architecture and comprising approximately 65 million parameters.
The primary goal of this model was to explore the lower limits of model size while still maintaining basic language understanding capabilities.
Due to its extremely small size, this model demonstrates basic text generation abilities but may struggle to follow instructions or maintain topic coherence.
Users should be aware of its significant limitations compared to larger models and expect inconsistent or potentially inaccurate responses.
Available versions:
Lite-Oute-1-65M-Instruct
Lite-Oute-1-65M-Instruct-GGUF
Lite-Oute-1-65M
Lite-Oute-1-65M-GGUF
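The GGUF builds can be run with llama.cpp-compatible tooling. A minimal sketch using the llama-cpp-python bindings; the quant filename pattern below is an assumption, so check the file list in the GGUF repository:

from llama_cpp import Llama

# Download and load a quantized build from the GGUF repository.
# Assumption: a Q4_K_M quant exists; adjust the glob to match the actual filenames.
llm = Llama.from_pretrained(
    repo_id="OuteAI/Lite-Oute-1-65M-Instruct-GGUF",
    filename="*Q4_K_M.gguf",
)

# The bindings apply a chat template (read from the GGUF metadata when present).
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is a language model?"},
    ],
    temperature=0.4,
)
print(out["choices"][0]["message"]["content"])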
Chat format
This model uses the ChatML template. Ensure you use the correct template:
<|im_start|>system
[System message]<|im_end|>
<|im_start|>user
[Your question or message]<|im_end|>
<|im_start|>assistant
[The model's response]<|im_end|>
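The prompt can also be assembled by hand when not going through tokenizer.apply_chat_template (used in the usage example below). A minimal sketch; the exact whitespace is an assumption, so prefer apply_chat_template, which reads the template stored with the tokenizer:

def build_chatml_prompt(system: str, user: str) -> str:
    # Open each turn with <|im_start|>, close it with <|im_end|>;
    # the trailing assistant header cues the model to generate its reply.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt("You are a helpful assistant.", "Hello!")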
Benchmarks:
Benchmark | 5-shot | 0-shot |
---|---|---|
ARC Challenge | 22.61 | 23.63 |
ARC Easy | 37.16 | 40.49 |
CommonsenseQA | 19.41 | 20.64 |
HellaSWAG | 28.74 | 28.41 |
MMLU | 25.20 | 23.45 |
OpenBookQA | 27.40 | 28.60 |
PIQA | 60.88 | 60.77 |
Winogrande | 50.59 | 50.04 |
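The card does not state how these scores were produced. One way to check a row is EleutherAI's lm-evaluation-harness; the sketch below assumes its v0.4 Python API and task naming:

import lm_eval

# Evaluate the 5-shot ARC Challenge row; set num_fewshot=0 for the 0-shot column.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=OuteAI/Lite-Oute-1-65M-Instruct",
    tasks=["arc_challenge"],
    num_fewshot=5,
)
print(results["results"]["arc_challenge"])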
Usage with HuggingFace transformers
The model can be used with HuggingFace's transformers library:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Oute-1-65M-Instruct").to(device)
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Oute-1-65M-Instruct")

def generate_response(message: str, temperature: float = 0.4, repetition_penalty: float = 1.12) -> str:
    # Apply the chat template and convert to PyTorch tensors
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": message}
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)

    # Generate the response
    output = model.generate(
        input_ids,
        max_length=512,
        temperature=temperature,
        repetition_penalty=repetition_penalty,
        do_sample=True
    )

    # Decode the generated output
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    return generated_text

message = "I'd like to learn about language models. Can you break down the concept for me?"
response = generate_response(message)
print(response)
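Note that generate_response returns the decoded full sequence, prompt included, because output[0] contains the input tokens as well. A small variant that decodes only the newly generated tokens:

def generate_reply_only(message: str, temperature: float = 0.4, repetition_penalty: float = 1.12) -> str:
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": message}
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)
    output = model.generate(
        input_ids,
        max_length=512,
        temperature=temperature,
        repetition_penalty=repetition_penalty,
        do_sample=True
    )
    # Slice off the prompt so only the assistant's reply is decoded
    new_tokens = output[0][input_ids.shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)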
Risk Disclaimer
By using this model, you acknowledge that you understand and assume the risks associated with its use. You are solely responsible for ensuring compliance with all applicable laws and regulations. We disclaim any liability for problems arising from the use of this open-source model, including but not limited to direct, indirect, incidental, consequential, or punitive damages. We make no warranties, express or implied, regarding the model's performance, accuracy, or fitness for a particular purpose. Your use of this model is at your own risk, and you agree to hold harmless and indemnify us, our affiliates, and our contributors from any claims, damages, or expenses arising from your use of the model.