wannaphong committed
Commit 7676b30
1 Parent(s): 45497d6

Create README.md

Files changed (1): README.md (+58, -0)
README.md ADDED

---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---

# NumFa 3B 1 epoch-test

NumFa 3B 1 epoch is the first-epoch checkpoint of the NumFa 3B model.

Base model: openllama3b

**For testing only**

## Model Details

### Model Description

The model was trained on TPUs.

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** NumFa
- **Model type:** text-generation
- **Language(s) (NLP):** English
- **License:** apache-2.0

### Out-of-Scope Use

Math, coding, and languages other than English.

## Bias, Risks, and Limitations

The model may carry biases from its training dataset. Use at your own risk!

## How to Get Started with the Model

Use the code below to get started with the model.

**Example**

```python
# !pip install accelerate sentencepiece transformers bitsandbytes
import torch
from transformers import pipeline

# Load the checkpoint as a text-generation pipeline in bfloat16 with automatic device placement.
pipe = pipeline("text-generation", model="numfa_3b_1epoch", torch_dtype=torch.bfloat16, device_map="auto")

# Sample a continuation of the prompt.
outputs = pipe("test is", max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2, typical_p=1.0)
print(outputs[0]["generated_text"])
```
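
If you prefer to work with the model and tokenizer objects directly rather than through the pipeline wrapper, a minimal sketch is shown below. It assumes the same model identifier as above resolves to this checkpoint locally or on the Hub, and it reuses the same decoding settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "numfa_3b_1epoch"  # assumption: same identifier as in the pipeline example above

# Load the tokenizer and the model in bfloat16 with automatic device placement.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Tokenize the prompt and move it to the model's device.
inputs = tokenizer("test is", return_tensors="pt").to(model.device)

# Sample a continuation with the same settings as the pipeline example.
output_ids = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.9, top_k=50, top_p=0.95, no_repeat_ngram_size=2)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```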