---
license: other
license_name: open-aleph-license
license_link: LICENSE
library_name: scaling
pipeline_tag: text-generation
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/671a0238b080a748c29b8fea/v1rfcKVaL8vnjuCqWUmI-.png)

# u-μP: Stable training in low precision for a significant speed-up and memory reduction during training

![image/png](https://cdn-uploads.huggingface.co/production/uploads/671a0238b080a748c29b8fea/F1-zbAXF5LGvxpIRrYfU4.png)

This repository holds the model weights for the 7B u-μP models trained at Aleph Alpha Research, in collaboration with Graphcore, for 72k steps (300B tokens). Please note that the released checkpoints are not fully converged and are intended for research use only.

You can find all model weights at the following links:
- [umup-research-7b-bf16](https://huggingface.co/Aleph-Alpha/umup-research-7b-bf16)
- [umup-research-7b-fp8](https://huggingface.co/Aleph-Alpha/umup-research-7b-fp8)
- [sp-baseline-research-7b-bf16](https://huggingface.co/Aleph-Alpha/sp-baseline-research-7b-bf16)
- [umup-research-3b-bf16](https://huggingface.co/Aleph-Alpha/umup-research-3b-bf16)
- [umup-research-3b-fp8](https://huggingface.co/Aleph-Alpha/umup-research-3b-fp8)
- [sp-baseline-research-3b-bf16](https://huggingface.co/Aleph-Alpha/sp-baseline-research-3b-bf16)
- [umup-research-1b-bf16](https://huggingface.co/Aleph-Alpha/umup-research-1b-bf16)
- [umup-research-1b-fp8](https://huggingface.co/Aleph-Alpha/umup-research-1b-fp8)
- [sp-baseline-research-1b-bf16](https://huggingface.co/Aleph-Alpha/sp-baseline-research-1b-bf16)

The Maximal Update Parametrization (μP) aims to make the optimal hyperparameters (HPs) of a model independent of its size, allowing them to be swept using a cheap proxy model rather than the full-size target model. We present a new scheme, u-μP, which improves upon μP by combining it with Unit Scaling, a method for designing models that makes them easy to train in low precision. The two techniques have a natural affinity: μP ensures that the scale of activations is independent of model size, and Unit Scaling ensures that activations, weights, and gradients begin training with a scale of one. This synthesis opens the door to a simpler scheme, whose default values are near-optimal. This in turn facilitates a more efficient sweeping strategy, with u-μP models reaching a lower loss than comparable μP models and working out-of-the-box in FP8.
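
To give an intuition for the core idea behind Unit Scaling, here is a minimal, hypothetical sketch (not the API of the Scaling codebase): weights are initialized with unit variance, and each op's output is rescaled so that activations also start training at a scale of one.

```python
import torch

def unit_scaled_linear(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Hypothetical illustration of a unit-scaled matmul.

    Instead of shrinking the weight initialization (as standard
    parametrizations do), the op output is divided by sqrt(fan_in),
    so activations keep a scale of ~1 at initialization.
    """
    fan_in = w.shape[0]
    return (x @ w) / fan_in**0.5

x = torch.randn(128, 1024)   # unit-scale input activations
w = torch.randn(1024, 4096)  # unit-variance weight initialization
y = unit_scaled_linear(x, w)
print(y.std())               # ~1.0: outputs remain at unit scale
```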

To learn more about u-μP, check out our [blog post](https://aleph-alpha.com/in-awe-at-the-scale-of-these-tensors-a-gentle-introduction-to-unit-scaled-maximal-update-parametrization/) and our [paper](https://arxiv.org/abs/2407.17465).

Unit-Scaled Maximal Update Parametrization (u-μP) is available in [Scaling](https://github.com/Aleph-Alpha/scaling), our official large-scale training codebase. Please note that FP8-trained checkpoints only work on chips with native FP8 support, such as NVIDIA's Hopper architecture.
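
If you are unsure whether your GPU qualifies, one quick check (using standard PyTorch calls; this heuristic is our assumption, not part of the Scaling codebase) is to inspect the device's compute capability, since NVIDIA GPUs provide native FP8 support from compute capability 8.9 (Ada) and 9.0 (Hopper) onwards:

```python
import torch

# Assumption: compute capability >= 8.9 indicates native FP8 support
# on NVIDIA hardware; other vendors need a different check.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    print(f"Compute capability {major}.{minor}; "
          f"FP8 supported: {(major, minor) >= (8, 9)}")
```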

# Usage
You can generate tokens with the [Scaling](https://github.com/Aleph-Alpha/scaling) inference implementation:

```python
from pathlib import Path

from scaling.transformer.inference import TransformerInferenceModule

# Directory containing the downloaded checkpoint
ckpt_path = Path("<path_to_repo>/sp-baseline-research-1b-bf16")

# Load the model weights for inference
model = TransformerInferenceModule.from_checkpoint(ckpt_path)

prompt = "Yesterday I dreamt of "

# Generate up to 100 tokens as a completion of the prompt
output = model.generate(max_tokens=100, input_text=prompt)
print(output.completion_text)
```