Commit 6a64c0c by vivekraina (parent: fee7b19)

Create README.md

![Falcon7b8bit.jpg](https://cdn-uploads.huggingface.co/production/uploads/6439639821221ac74117ee31/5eBD58An3E7FE6aumBSGN.jpeg)

# 🚀 Falcon-7B 8-bit Model

This repository hosts an 8-bit quantized version of the Falcon-7B model, converted from the original model (https://huggingface.co/tiiuae/falcon-7b).

Falcon-7B is a 7B-parameter causal decoder-only model built by TII and trained on 1,500B tokens of RefinedWeb enhanced with curated corpora. It is made available under the Apache 2.0 license.

## Usage

You can use this model directly with a pipeline for tasks such as text generation and instruction following:
```python
import transformers
from transformers import AutoTokenizer

model = "vivekraina/falcon-7b-8bit"

tokenizer = AutoTokenizer.from_pretrained(model)
pipe = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    trust_remote_code=True,  # Falcon's custom modeling code ships with the repo
)
sequences = pipe(
    "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
    max_length=200,
    do_sample=True,          # sample instead of greedy decoding
    top_k=10,                # restrict sampling to the 10 most likely tokens
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
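If you would rather quantize the original full-precision checkpoint yourself instead of using this pre-converted one, `transformers` can load weights in 8-bit at load time through `bitsandbytes`. The sketch below is illustrative, not part of this repository's conversion recipe: it assumes the `bitsandbytes` and `accelerate` packages are installed, and the `device_map="auto"` placement is one reasonable choice among several.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Original full-precision checkpoint, quantized on the fly at load time.
# Requires the bitsandbytes and accelerate packages.
model_id = "tiiuae/falcon-7b"

bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",        # shard layers across available GPUs/CPU
    trust_remote_code=True,
)
```

Loading in 8-bit roughly halves the memory footprint relative to fp16, which is what makes a 7B-parameter model practical on a single consumer GPU.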