lambdAI – Lightweight Math & Logic Reasoning Model
lambdAI is a compact, fine-tuned language model built on top of TinyLlama-1.1B-Chat-v1.0, designed for educational reasoning tasks in both Portuguese and English. It focuses on logic, number theory, and mathematics, delivering fast inference with minimal computational requirements.
Model Architecture
- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Fine-Tuning Strategy: LoRA, applied to the `q_proj` and `v_proj` attention projections (see the sketch after this list)
- Quantization: 4-bit NF4 (via `bnb_config`)
- Dataset: HuggingFaceH4/MATH, `number_theory` subset
- Max Tokens per Sample: 512
- Batch Size: 20 per device
- Epochs: 3
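As a rough illustration, the sketch below shows how this setup could be assembled with `transformers`, `peft`, and `bitsandbytes`. The LoRA rank, alpha, and compute dtype are illustrative assumptions, not values published with the model; only the target modules, quantization type, dataset, batch size, and epoch count come from the card.

```python
# Sketch of the fine-tuning setup described above: a 4-bit NF4 base model
# with LoRA adapters on q_proj/v_proj.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # NF4 is a 4-bit data type
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype
)

base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=8,                                   # assumed rank
    lora_alpha=16,                         # assumed scaling factor
    target_modules=["q_proj", "v_proj"],   # as stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

# Training data: the number_theory subset listed above, with samples
# truncated to 512 tokens during preprocessing.
dataset = load_dataset("HuggingFaceH4/MATH", "number_theory", split="train")
```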
Example Usage (Python)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lambdaindie/lambdai")
tokenizer = AutoTokenizer.from_pretrained("lambdaindie/lambdai")

# Portuguese prompt: "Problem: Prove that 17 is a prime number."
input_text = "Problema: Prove que 17 é um número primo."
inputs = tokenizer(input_text, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
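Since the model targets low-resource deployment, it can also be loaded with the same 4-bit NF4 configuration at inference time. This is a minimal sketch assuming `bitsandbytes` is installed, not an official deployment recipe:

```python
# Optional low-memory loading, mirroring the NF4 quantization used above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "lambdaindie/lambdai",
    quantization_config=bnb_config,
    device_map="auto",
)
```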
About Lambda
Lambda is an indie tech startup founded by Marius Jabami in Angola, focused on AI-driven educational tools, automation, and lightweight software solutions. The lambdAI model is the first release in a planned series of educational LLMs optimized for reasoning, logic, and low-resource deployment.
Stay updated on the project at lambdaindie.github.io and huggingface.co/lambdaindie.
---
Developed with care by Marius Jabami – powered by ambition and open source.
---