chrisociepa committed
Commit 6b778ce
1 parent: 869919c

Update README.md

Files changed (1): README.md (+23 -7)
README.md CHANGED
@@ -106,22 +106,38 @@ Our training dataset contains:
  * Polish Wikipedia: 970 million tokens
  * web crawl data: 813 million tokens
 
- ## How to Use
-
- Our model is fully compatible with HuggingFace - you can use it right away.
-
- Regarding the full precision, it can be run on a graphic card with 6 GB VRAM.

  ```python
- import transformers
- model = transformers.AutoModelForCausalLM.from_pretrained('Azurro/APT-1B-Base')
  ```

- In order to reduce the memory usage, you can use smaller precision (bfloat16) and run the model on a graphic card with 4 GB VRAM.

  ```python
  import transformers
- model = transformers.AutoModelForCausalLM.from_pretrained('Azurro/APT-1B-Base', torch_dtype=torch.bfloat16)
  ```
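A back-of-the-envelope check of the VRAM figures removed above (an editor's sketch, assuming roughly 1B parameters as the model name suggests): weights alone take about 4 GB in float32 and 2 GB in bfloat16, which is consistent with the quoted 6 GB and 4 GB cards once activations and framework overhead are added.

```python
# Weight memory for a ~1B-parameter model (the parameter count is an assumption).
params = 1_000_000_000
print(f"float32:  {params * 4 / 1e9:.1f} GB")  # ~4.0 GB of weights
print(f"bfloat16: {params * 2 / 1e9:.1f} GB")  # ~2.0 GB of weights
```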
 
+ ### Quickstart
+
+ This model can be easily loaded using the AutoModelForCausalLM functionality.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_name = "Azurro/APT-1B-Base"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+ ```
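A minimal sanity check after loading (an illustrative sketch, not part of this commit): inspect the parameter count and memory footprint. `get_memory_footprint()` is a standard `transformers` model method; the ~1B size is implied by the model name rather than stated in the diff.

```python
# Count parameters: the model name suggests roughly 1B.
num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e9:.2f}B")

# Report the memory taken by the loaded weights, in GiB.
print(f"Footprint: {model.get_memory_footprint() / 1024**3:.2f} GiB")
```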
+
+ In order to reduce the memory usage, you can use smaller precision (`bfloat16`).

  ```python
+ import torch
+
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
  ```
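To see what the lower precision buys (an illustrative sketch, not part of this commit): bfloat16 stores 2 bytes per parameter instead of float32's 4, roughly halving the weight memory; the optional GPU move assumes a CUDA device is available.

```python
import torch

# bfloat16 parameters take 2 bytes each, vs 4 bytes in float32,
# so the weight memory is roughly halved.
print(model.dtype)                              # torch.bfloat16
print(next(model.parameters()).element_size())  # 2 bytes per parameter

# Optionally move the model to a GPU (assumes a CUDA device is present).
if torch.cuda.is_available():
    model = model.to("cuda")
```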

+ You can then use Hugging Face Pipelines to generate text.

  ```python
  import transformers
+
+ text = "Najważniejszym celem człowieka na ziemi jest"  # "The most important goal of man on earth is"
+
+ pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
+ sequences = pipeline(text, max_length=100, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
+ for seq in sequences:
+     print(f"Result: {seq['generated_text']}")
  ```
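For comparison (an illustrative sketch, not part of this commit), the same generation can be run without the pipeline wrapper by calling `model.generate` directly with matching sampling settings:

```python
# Tokenize the prompt and place the tensors on the model's device.
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Sample with the same settings as the pipeline call above.
output_ids = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,
    top_k=10,
    eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```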

  ## Limitations and Biases