---
base_model:
- Qwen/Qwen3-0.6B
- MultiverseComputing/LittleLamb-0.3B
library_name: transformers
license: apache-2.0
---
<div align="center">

# LittleLamb 0.3B

### Powered by CompactifAI

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![HuggingFace](https://img.shields.io/badge/🤗-Model_Hub-yellow.svg)](https://huggingface.co/MultiverseComputingCAI/LittleLamb)
[![Discord](https://img.shields.io/badge/Discord-Community-5865F2?logo=discord&logoColor=white)](https://discord.gg/cGas9uStqp)

**Tiny Model** · **50% Compressed** · **Thinking & Non-Thinking Modes**

</div>

---

## Table of Contents

- [Model Overview](#model-overview)
- [Key Characteristics](#key-characteristics)
- [Quick Start](#quick-start)
- [What's New in LittleLamb 0.3B](#whats-new-in-littlelamb-03b)
- [Dual-Mode Inference (Thinking / Non-Thinking)](#dual-mode-inference-thinking--non-thinking)
- [Training & Fine-Tuning](#training--fine-tuning)
- [Architecture](#architecture)
- [Evaluation & Benchmarks](#evaluation--benchmarks)
- [Languages](#languages)
- [Intended Use](#intended-use)
- [Safety & Limitations](#safety--limitations)
- [Model Information](#model-information)
- [Citation](#citation)

---

## Model Overview

**LittleLamb 0.3B** is a **general-purpose bilingual model** with **290M parameters**, placing it in the same size class as **270M** models such as **gemma3-270m-it** and **functiongemma-270m-it**. It was developed by **Multiverse Computing** from [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B), an open-weight, instruction-tuned model with thinking and non-thinking capabilities and multilingual coverage. LittleLamb 0.3B was compressed at a **50% compression rate** using **CompactifAI**, Multiverse Computing's proprietary technology. The model supports **English and Spanish** and retains Qwen3's dual thinking/non-thinking modes.

---

## Key Characteristics

| Characteristic   | Description                                                                                                      |
| ---------------- | ---------------------------------------------------------------------------------------------------------------- |
| **Base model**   | [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) (0.6B params, 0.44B non-embedding; open-weight, Apache 2.0) |
| **Parameters**   | 290M total parameters after CompactifAI compression (50% compression rate from base 0.6B)                        |
| **Architecture** | Decoder-only Transformer (Qwen3 family)                                                                          |
| **Compression**  | CompactifAI (proprietary)                                                                                        |
| **Languages**    | English and Spanish; inherits broader multilingual tokenizer coverage from Qwen3                                 |
| **Modes**        | Thinking (`enable_thinking=True`) and non-thinking (`enable_thinking=False`) via chat template                   |

---

## Quick Start

This model can be loaded with the **Transformers** library. It requires `transformers>=4.51.0` for Qwen3 architecture support.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MultiverseComputingCAI/LittleLamb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello!"}]

# Build the prompt; enable_thinking=True activates Qwen3's thinking mode.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)[0]

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    output_ids[len(inputs.input_ids[0]) :], skip_special_tokens=True
)
print(response)
```

For OpenAI-compatible serving, use a stack that supports Qwen3 reasoning (e.g. recent **vLLM** or **SGLang** with Qwen3 parsers); see the [Qwen3-0.6B model card](https://huggingface.co/Qwen/Qwen3-0.6B) for deployment examples.
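
As a minimal sketch of querying such an endpoint once a Qwen3-capable server is running (the base URL, port, and API key below are placeholder assumptions, not fixed by this card; the sampling values are the thinking-mode settings listed under Evaluation Methodology):

```python
# Hypothetical client for an OpenAI-compatible endpoint serving this model.
# Assumes a vLLM/SGLang server is already running at localhost:8000 and
# serves the model under the name "MultiverseComputingCAI/LittleLamb".
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="MultiverseComputingCAI/LittleLamb",
    messages=[{"role": "user", "content": "Give me a haiku about lambs."}],
    temperature=0.6,  # thinking-mode sampling per the Qwen3-0.6B card
    top_p=0.95,
)
print(completion.choices[0].message.content)
```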

---

## What's New in LittleLamb 0.3B

### Summary

- **Ultra-compact general-purpose model** at 290M parameters, suitable for edge and on-device deployment.
- **Developed based on [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)** with **CompactifAI** compression (~50% parameter reduction from the 0.6B base).
- **Bilingual focus:** English and Spanish for supported use cases.

---

## Dual-Mode Inference (Thinking / Non-Thinking)

LittleLamb 0.3B inherits Qwen3's dual-mode capability, supporting seamless switching between **thinking mode** (for complex reasoning) and **non-thinking mode** (for efficient general-purpose dialogue).

In thinking mode, the model generates internal reasoning in Qwen3's thinking format (see the Qwen3 chat template) before producing the final response. Use this mode for tasks requiring multi-step reasoning, math, or code generation.

Set `enable_thinking=False` for lower-latency dialogue without explicit chain-of-thought in the template, as in the sketch below. Follow the **sampling parameters** recommended in the [Qwen3-0.6B model card](https://huggingface.co/Qwen/Qwen3-0.6B) for each mode.
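
A minimal sketch of the non-thinking path, reusing `model` and `tokenizer` from the Quick Start (the sampling values are the non-thinking settings listed under Evaluation Methodology):

```python
# Non-thinking mode: same pipeline as Quick Start, different template flag.
messages = [{"role": "user", "content": "Summarize the water cycle in two sentences."}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # skip the thinking block for lower latency
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,  # non-thinking sampling per the Qwen3-0.6B card
    top_p=0.8,
    top_k=20,
)[0]
print(tokenizer.decode(output_ids[len(inputs.input_ids[0]) :], skip_special_tokens=True))
```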

---

## Training & Fine-Tuning

### Base Model: Qwen3-0.6B

The base model, [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B), is a causal language model from the Qwen3 family that supports both thinking and non-thinking modes. See the [Qwen3 technical report](https://arxiv.org/abs/2505.09388) for details.

---

## Architecture

### Model Specifications

| Field            | Value                                                                   |
| ---------------- | ----------------------------------------------------------------------- |
| Base model       | [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) (0.6B params) |
| Total parameters | 290M (dense)                                                            |

---

## Evaluation & Benchmarks

### Evaluation Methodology

Benchmark scores were obtained with the following setups; methodology varies by benchmark family.

For **LittleLamb 0.3B** and **Qwen3-0.6B (base)**, benchmark runs are reported under both **thinking** and **non-thinking** chat modes, using the sampling settings recommended in the [Qwen3-0.6B model card](https://huggingface.co/Qwen/Qwen3-0.6B).

#### MMLU-Pro, GPQA Diamond, HLE (Humanity's Last Exam)

- **Evaluation framework**: [NeMo-Skills](https://github.com/NVIDIA/NeMo-Skills)
- **Inference library**: vLLM 0.18.0
- **Thinking mode** (`enable_thinking=True`, per the Qwen3-0.6B model card): temperature = 0.6, top_p = 0.95, top_k = 20, min_p = 0
- **Non-thinking mode** (`enable_thinking=False`, per the Qwen3-0.6B model card): temperature = 0.7, top_p = 0.8, top_k = 20, min_p = 0
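
As a small illustration, and not part of the evaluation harness itself, the presets above map directly onto `transformers` generation kwargs:

```python
# Sampling presets restating the bullets above, usable as **kwargs to model.generate.
THINKING_SAMPLING = dict(do_sample=True, temperature=0.6, top_p=0.95, top_k=20, min_p=0.0)
NON_THINKING_SAMPLING = dict(do_sample=True, temperature=0.7, top_p=0.8, top_k=20, min_p=0.0)

# Example: output_ids = model.generate(**inputs, max_new_tokens=256, **THINKING_SAMPLING)
```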

### Quantitative Results (Reported & Planned)

Reported numbers use the methodology described above.

#### Thinking mode

| Benchmark    | gemma3-270m-it | Qwen3-0.6B (think) | LittleLamb-0.3B (think) |
| ------------ | -------------- | ------------------ | ----------------------- |
| HLE          | 4.00           | 5.65               | 6.12                    |
| GPQA Diamond | 21.21          | 29.59              | 28.18                   |
| MMLU-Pro     | 6.23           | 38.27              | 31.21                   |

#### Non-thinking mode

| Benchmark    | gemma3-270m-it | Qwen3-0.6B (no think) | LittleLamb-0.3B (no think) |
| ------------ | -------------- | --------------------- | -------------------------- |
| HLE          | 4.00           | 4.54                  | 5.37                       |
| GPQA Diamond | 21.21          | 27.77                 | 24.04                      |
| MMLU-Pro     | 6.23           | 25.72                 | 25.11                      |

![Intelligence Thinking](assets/littlelamb-intelligence-thinking-family.png)
![Intelligence No-Thinking](assets/littlelamb-intelligence-nothinking-family.png)

### Quantitative Results (Inference Performance)

#### Metrics reported

- **System Output Throughput:** Mean output tokens per second across all concurrent requests over the benchmarking phase.
- **End-to-End Latency per Query:** Median end-to-end response time for each query, measured from the time the query is sent.
- **Output Speed per Query:** Median output tokens per second after the first token is received for each query.
- **Time to First Token (TTFT):** Median time from sending a query until its first token is received.
- **Estimated Peak Memory Usage:** KV cache utilization is monitored during the phase, and memory usage is estimated as $\text{model\_weights}_{\text{GB}} + \text{kv\_cache\_usage}_{\text{pct}} \times (\text{nvml\_used}_{\text{GB}} - \text{model\_weights}_{\text{GB}})$; a worked example follows this list.
- **Model weights:**
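
To make the estimate concrete, here is a minimal Python sketch of the formula above; the input values are illustrative placeholders, not measured results:

```python
def estimate_peak_memory_gb(
    model_weights_gb: float,
    kv_cache_usage_pct: float,  # fraction in [0, 1]
    nvml_used_gb: float,
) -> float:
    """Peak-memory estimate: model weights plus the KV-cache-utilized
    share of the remaining GPU memory reported by NVML."""
    return model_weights_gb + kv_cache_usage_pct * (nvml_used_gb - model_weights_gb)

# Illustrative placeholder values (not measured results):
print(estimate_peak_memory_gb(0.6, 0.4, 5.0))  # 0.6 + 0.4 * (5.0 - 0.6) = 2.36 GB
```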

#### Performance evaluation conditions

Our performance evaluation follows the spirit of [Artificial Analysis](https://artificialanalysis.ai/methodology/system-load-test).

- **Inference library**: vLLM 0.18.0
- **Monitoring libraries**: GuideLLM 0.6.0, nvidia-ml-py 13.590.48
- **Hardware**: 1× NVIDIA L4 GPU
- **Conditions**: concurrency = 16
- **Phase duration**: Each phase lasts 3 minutes (excluding ramp-up and cool-down periods).
- **Workload shape**: 1,000 input tokens and 1,000 output tokens per query.
- **Streaming**: Benchmarking is conducted with streaming enabled.

**Summary of improvements:** LittleLamb shows a slight performance improvement over the original Qwen model. This is expected: for models this small, VRAM usage is dominated by the KV cache rather than the model weights.

![Performance](assets/littlelamb-performance-family.png)

---

## Languages

- **Primary languages**: English and Spanish (supported for product use cases).

---

## Intended Use

### Recommended Use Cases

Aligned with [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) use cases, with the benefit of a smaller footprint suitable for edge and on-device deployment:

- **On-device and edge inference** where memory and compute are constrained
- **Reasoning tasks** with configurable thinking/non-thinking modes
- **Bilingual applications** (English and Spanish)
- **Chatbots and virtual assistants** in resource-constrained environments
- **General knowledge, math, and science** question answering

### Out-of-Scope Uses

- Harmful, illegal, or deceptive content generation
- Impersonation of real individuals without consent
- High-risk decision-making without human oversight
- Surveillance or tracking of individuals
- Any use that violates applicable laws or regulations

---

## Safety & Limitations

### Known Limitations

- **Model scale:** At ~0.3B parameters, this is an ultra-compact model. Several frontier-scale benchmarks (GDPval-AA, Terminal-Bench Hard, AA-LCR, CritPt) produce no discriminative signal at this model size, as the base Qwen3-0.6B itself scores near zero on them.
- **Thinking mode:** Performance differs substantially between thinking and non-thinking modes across benchmarks. Users should evaluate both modes for their specific use case.

### Recommendations

- Use human oversight for critical applications
- Perform task-specific evaluation prior to deployment
- Test both thinking and non-thinking modes for your use case

---

## Model Information

| Field        | Value                                                                       |
| ------------ | --------------------------------------------------------------------------- |
| Model name   | LittleLamb                                                                  |
| Based on     | [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)                   |
| Version      | 2604                                                                        |
| Release date | 28/04/2026                                                                  |
| Developed by | Multiverse Computing                                                        |
| License      | Apache 2.0                                                                  |
| Contact      | [business@multiversecomputing.com](mailto:business@multiversecomputing.com) |

---

## Citation

If you use this model, please cite the base model and this variant:

```bibtex
@misc{qwen3technicalreport,
  title         = {Qwen3 Technical Report},
  author        = {Qwen Team},
  year          = {2025},
  eprint        = {2505.09388},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2505.09388}
}

@misc{littlelamb,
  title  = {LittleLamb: Compressed Qwen3-0.6B via CompactifAI},
  author = {Multiverse Computing},
  year   = {2026},
  url    = {https://huggingface.co/MultiverseComputingCAI/LittleLamb},
  note   = {Model developed based on Qwen/Qwen3-0.6B using CompactifAI technology}
}
```

**Built by [Multiverse Computing](https://www.multiversecomputing.com)** · [Report an issue](https://huggingface.co/MultiverseComputingCAI/LittleLamb/discussions) · [Discord](https://discord.gg/cGas9uStqp)