---
license: apache-2.0
datasets:
- HuggingFaceTB/cosmopedia
language:
- en
library_name: transformers
tags:
- bitnet
- llama
- open-source
- cosmopedia
---
# Bitnet-LLama-70M

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/0MOUO0XIQGEpgVcpccPSK.jpeg)

Bitnet-LLama-70M is a 70M-parameter model trained using the method described in [The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits](https://arxiv.org/abs/2402.17764).
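For context, the 1.58-bit method from the paper constrains every weight to the ternary set {-1, 0, +1} using absmean scaling: the weight matrix is divided by its mean absolute value, then rounded and clipped. A minimal PyTorch sketch of that quantizer (illustrative only, not the exact training code behind this checkpoint):

```python
import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    # gamma = mean absolute value of the whole weight matrix (absmean scaling)
    gamma = w.abs().mean().clamp(min=eps)
    # RoundClip(W / gamma, -1, 1): every weight becomes -1, 0, or +1
    w_ternary = (w / gamma).round().clamp(-1, 1)
    return w_ternary, gamma
```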
It was trained on a subset of the [HuggingFaceTB/cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) dataset. This is just a small experiment to try out BitNet.
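The card does not state which cosmopedia subset was used, so the config name below ("stories") is only an example; a minimal sketch of streaming one subset with the datasets library:

```python
from datasets import load_dataset

# "stories" is one of several cosmopedia configs; pick whichever subset you need.
ds = load_dataset("HuggingFaceTB/cosmopedia", "stories", split="train", streaming=True)
print(next(iter(ds))["text"][:200])
```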
The Wandb report from training Bitnet-LLama-70M is shown below:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/bkIXNv5jpfl4ZaZQO3Sgg.png)
# Sample inference code

```python
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import (
    LlamaDecoderLayer,
    LlamaMLP,
    LlamaRMSNorm,
    LlamaSdpaAttention,
)

# NOTE: BitLinear is the 1.58-bit linear layer used during training. It is not
# part of transformers and must be defined or imported separately (a minimal
# sketch of such a layer is shown after this block).

# Load the pretrained BitNet checkpoint
model_id = "abideen/Bitnet-Llama-70M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

def convert_to_bitnet(model, copy_weights):
    for name, module in model.named_modules():
        # Replace the linear layers inside attention and MLP blocks with BitLinear
        if isinstance(module, (LlamaSdpaAttention, LlamaMLP)):
            for child_name, child_module in module.named_children():
                if isinstance(child_module, nn.Linear):
                    bitlinear = BitLinear(
                        child_module.in_features,
                        child_module.out_features,
                        child_module.bias is not None,
                    ).to(device="cuda:0")
                    if copy_weights:
                        bitlinear.weight = child_module.weight
                        if child_module.bias is not None:
                            bitlinear.bias = child_module.bias
                    setattr(module, child_name, bitlinear)
        # Remove the now-redundant input_layernorms
        elif isinstance(module, LlamaDecoderLayer):
            for child_name, child_module in module.named_children():
                if isinstance(child_module, LlamaRMSNorm) and child_name == "input_layernorm":
                    setattr(module, child_name, nn.Identity().to(device="cuda:0"))


convert_to_bitnet(model, copy_weights=True)
model.to(device="cuda:0")

prompt = "What is Machine Learning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
generate_ids = model.generate(inputs.input_ids, max_length=100)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
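Note that `BitLinear` is not provided by transformers; it is the 1.58-bit linear layer from the BitNet training code and has to be defined before the conversion above will run. A minimal illustrative version, following the paper's absmean weight quantization and 8-bit absmax activation quantization with a straight-through estimator (an assumption about the layer's internals, not the exact class used to train this checkpoint):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def activation_quant(x):
    # 8-bit absmax quantization, applied per token
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp(min=1e-5)
    return (x * scale).round().clamp(-128, 127) / scale

def weight_quant(w):
    # Absmean ternary quantization to {-1, 0, +1}
    scale = 1.0 / w.abs().mean().clamp(min=1e-5)
    return (w * scale).round().clamp(-1, 1) / scale

class BitLinear(nn.Linear):
    def forward(self, x):
        w = self.weight
        # The conversion strips each decoder layer's input_layernorm, so the
        # layer normalizes its own input (RMSNorm without a learned scale).
        x_norm = x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + 1e-6)
        # Straight-through estimator: quantized values in the forward pass,
        # full-precision gradients in the backward pass
        x_q = x_norm + (activation_quant(x_norm) - x_norm).detach()
        w_q = w + (weight_quant(w) - w).detach()
        return F.linear(x_q, w_q, self.bias)
```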