ThingsAI committed
Commit 750d4ec · verified · 1 Parent(s): c1f7579

Upload 8 files

Files changed (5):
  1. README.md +38 -120
  2. config.json +1 -1
  3. generation_config.json +1 -1
  4. model.safetensors +1 -1
  5. training_args.bin +1 -1
README.md CHANGED
@@ -1,139 +1,57 @@
  ---
- language:
- - en
- - code
- license: apache-2.0
  tags:
- - smol
- - pretraining
- - instruct
- - 50M
- - causal-lm
- - gqa
- - swiglu
- - rmsnorm
- datasets:
- - HuggingFaceTB/smollm-corpus
- metrics:
- - perplexity
- model-index:
- - name: Quark-50m-Instruct
-   results: []
- pipeline_tag: text-generation
  ---

- # Quark-50m-Instruct
-
- **Quark-50m-Instruct** is a small (≈50M-parameter) decoder-only language model fine-tuned for instruction following.
- It is built on the same architecture as the now-abandoned "SmolLM" family and was pretrained from scratch on 5 billion tokens from
- [HuggingFaceTB/smollm-corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus).
-
- - **Model type:** Causal language model (LLaMA-style decoder)
- - **Architecture:** GQA · SwiGLU · RMSNorm · RoPE · weight tying
- - **Pretraining tokens:** 5 B
- - **Fine-tuning:** Instruction-tuned (details below)
- - **Creators:** [OvercastLab](https://huggingface.co/OvercastLab) (research & development lab for ML/AI)
- - **Release date:** 22 April 2026
-
- ## Model Summary
-
- Quark-50m-Instruct is designed to be an efficient assistant that can run on consumer GPUs (e.g., an RTX 3070 with 8 GB VRAM)
- and even on CPU for light workloads. It is **not** competitive with large models on knowledge-intensive tasks,
- but it excels at:
-
- - Simple conversational tasks
- - Code generation and explanation (Python)
- - Short text rewriting and summarisation
- - On-device / edge inference
-
- The architecture closely follows the efficient-small-LM blueprint popularised by SmolLM:
-
- | Component     | Details                               |
- |---------------|---------------------------------------|
- | Vocab size    | 49,152                                |
- | Hidden size   | 384                                   |
- | Layers        | 24                                    |
- | Attention     | Grouped query (6 Q heads, 2 KV heads) |
- | FFN           | SwiGLU, intermediate size 1,024       |
- | Position      | RoPE (θ = 10,000)                     |
- | Normalisation | RMSNorm (pre-block)                   |
-
- Total trainable parameters: **≈48 M** (with weight tying).
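
As a sanity check on that figure, here is a back-of-the-envelope count derived only from the table above; it assumes standard biasless LLaMA-style projections and is an illustrative sketch, not the released code:

```python
# Rough parameter count from the architecture table (illustrative;
# assumes biasless LLaMA-style linear layers and RMSNorm gain vectors).
vocab, hidden, layers, ffn = 49_152, 384, 24, 1_024
q_heads, kv_heads = 6, 2
head_dim = hidden // q_heads                 # 384 / 6 = 64

embed = vocab * hidden                       # token embeddings, tied with the LM head
attn = (hidden * q_heads * head_dim          # q_proj
        + 2 * hidden * kv_heads * head_dim   # k_proj + v_proj (GQA: fewer KV heads)
        + q_heads * head_dim * hidden)       # o_proj
mlp = 3 * hidden * ffn                       # gate, up, and down projections (SwiGLU)
norms = 2 * hidden                           # two RMSNorm gain vectors per block

total = embed + layers * (attn + mlp + norms) + hidden  # plus the final RMSNorm
print(f"≈{total / 1e6:.1f} M parameters")    # prints ≈56.6 M under these assumptions
```

Under these assumptions the sum lands nearer 56–57 M, which roughly matches the ~113 MB bfloat16 `model.safetensors` in this commit at 2 bytes per parameter, so the quoted ≈48 M presumably follows a different counting convention.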
-
- ## Uses
-
- ### Direct Use
- The model can be used via the 🤗 Transformers library for standard text generation.
- It expects chat-formatted input (see the example below).
-
- ### Downstream Use
- Under the permissive Apache-2.0 license, you may fine-tune Quark-50m-Instruct on your own data for
- domain-specific tasks – for instance, a customer-support bot, a code reviewer, or a story writer.
-
- ### Limitations
- - Limited world knowledge (its pretraining data ends in mid-2025).
- - Short context window (2,048 tokens).
- - Its small size means it makes factual mistakes more often than larger models.
-
- ## Training Details
-
- ### Pretraining
-
- The base model was pretrained from scratch on a single NVIDIA A100.
- Training took approximately **one day**.
-
- #### Data mix
-
- Quark-50m was trained on exactly 5 billion tokens sampled from `HuggingFaceTB/smollm-corpus` with the following proportions:
-
- | Subset            | Share | Tokens |
- |-------------------|-------|--------|
- | cosmopedia-v2     | 60%   | 3.0 B  |
- | fineweb-edu-dedup | 40%   | 2.0 B  |
-
- All data was tokenised with the official [Cosmo2 tokenizer](https://huggingface.co/HuggingFaceTB/cosmo2-tokenizer) (vocab size 49,152).

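A quick way to verify that detail (a minimal sketch, assuming the linked tokenizer repo is publicly loadable):

```python
from transformers import AutoTokenizer

# Load the Cosmo2 tokenizer referenced above and confirm its vocabulary size.
tok = AutoTokenizer.from_pretrained("HuggingFaceTB/cosmo2-tokenizer")
print(tok.vocab_size)                  # expected: 49152
print(tok.tokenize("def add(a, b):"))  # inspect how code is segmented
```
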
- #### Hyperparameters (pretraining)
-
- | Parameter              | Value                        |
- |------------------------|------------------------------|
- | Sequence length        | 2,048                        |
- | Micro-batch size       | 4                            |
- | Gradient accumulation  | 16                           |
- | Effective batch        | 64 sequences (≈131 k tokens) |
- | Optimizer              | AdamW (β₁ = 0.9, β₂ = 0.95)  |
- | Learning rate          | 3e-4 → 3e-5 (cosine decay)   |
- | Warmup steps           | 1,000                        |
- | Weight decay           | 0.1                          |
- | Gradient clipping      | 1.0 (max norm)               |
- | Mixed precision        | bfloat16                     |
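
For readers who want to reproduce the schedule, the sketch below realises the settings in this table with plain PyTorch; the model is a hypothetical stand-in, the step count is inferred (≈5 B tokens ÷ ≈131 k tokens per step ≈ 38 k steps), and the actual training loop has not been published:

```python
import math
import torch

model = torch.nn.Linear(384, 384)          # hypothetical stand-in for the real network
total_steps, warmup_steps = 38_000, 1_000  # ≈5 B tokens at ≈131 k tokens per step
peak_lr, min_lr = 3e-4, 3e-5

optimizer = torch.optim.AdamW(
    model.parameters(), lr=peak_lr, betas=(0.9, 0.95), weight_decay=0.1
)

def lr_lambda(step: int) -> float:
    """Linear warmup to peak_lr, then cosine decay down to min_lr."""
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * min(progress, 1.0)))
    return (min_lr + (peak_lr - min_lr) * cosine) / peak_lr

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Inside the loop: loss.backward(), then clip, step, and advance the schedule.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
scheduler.step()
```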
-
- ### Instruction Fine-tuning
-
- The base model was fine-tuned on a curated set of instruction-following data (details to be released).
- The fine-tuning used **LoRA** with the same sequence length and a lower learning rate (1e-4) for a few thousand steps.
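
The card leaves the LoRA recipe unspecified; this peft sketch shows one plausible configuration, where the base-checkpoint id, rank, alpha, and target modules are all illustrative guesses rather than the released setup:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# "OvercastLab/Quark-50m" is a hypothetical base-model id; the card only
# names the instruct variant. All LoRA hyperparameters below are guesses.
base = AutoModelForCausalLM.from_pretrained("OvercastLab/Quark-50m")

lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights remain trainable
```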
-
- ## How to Use
-
- ```python
- from transformers import AutoTokenizer, AutoModelForCausalLM
-
- model_name = "OvercastLab/Quark-50m-Instruct"
-
- tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
-
- messages = [
-     {"role": "system", "content": "You are Quark, a helpful assistant."},
-     {"role": "user", "content": "Explain grouped query attention in one sentence."},
- ]
-
- inputs = tokenizer.apply_chat_template(
-     messages,
-     tokenize=True,
-     add_generation_prompt=True,
-     return_tensors="pt",
- ).to(model.device)
-
- outputs = model.generate(inputs, max_new_tokens=128)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
- ```

  ---
+ library_name: transformers
+ model_name: sft_id
  tags:
+ - generated_from_trainer
+ - trl
+ - sft
+ license: apache-2.0
  ---

+ # Model Card for sft_id
+
+ This model is a fine-tuned version of [None](https://huggingface.co/None).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="None", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ This model was trained with SFT.
+
+ ### Framework versions
+
+ - TRL: 1.2.0
+ - Transformers: 5.6.2
+ - PyTorch: 2.4.1+cu124
+ - Datasets: 4.8.4
+ - Tokenizers: 0.22.2
+
+ ## Citations
+
+ Cite TRL as:
+
+ ```bibtex
+ @software{vonwerra2020trl,
+   title = {{TRL: Transformers Reinforcement Learning}},
+   author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
+   license = {Apache-2.0},
+   url = {https://github.com/huggingface/trl},
+   year = {2020}
+ }
+ ```
config.json CHANGED
@@ -28,7 +28,7 @@
      "rope_type": "default"
    },
    "tie_word_embeddings": true,
-   "transformers_version": "5.6.1",
+   "transformers_version": "5.6.2",
    "use_cache": false,
    "vocab_size": 49152
  }
generation_config.json CHANGED
@@ -6,6 +6,6 @@
      2
    ],
    "pad_token_id": 0,
-   "transformers_version": "5.6.1",
+   "transformers_version": "5.6.2",
    "use_cache": true
  }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:431dbd275b83cb41bd28cdd1bb6d9c30e87ed1c5da31957e70867c9cc095efa7
+ oid sha256:ae5952eb75980cb4d0a309de77867170af85f62c992c16db60c244ad71798cd6
  size 113367352
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7111b6742ad6cc5ab900057295f3d9c66f5ee720c6c73b73b0f9abad6b7f195c
+ oid sha256:82a7ba29d21c786c5562de6a6c7a3aa158352399f3651c4038d36d59bf4bd17e
  size 5304