Q-bert committed on
Commit d34e613
1 Parent(s): 1de3164

Update README.md

Files changed (1)
  1. README.md +15 -14
README.md CHANGED
@@ -28,17 +28,18 @@ AlpaGo is an adapter model trained using the Qlora technique on top of the GPT-N
  You can utilize AlpaGo to perform natural language processing tasks. Here's an example of how to use it:

  ```python
- from alphago import AlpaGo
-
- # Load the AlpaGo model
- model = AlpaGo()
-
- # Example input sentence
- input_text = "Hello, AlpaGo!"
-
- # Send the sentence to the model and get the results
- output = model.process_text(input_text)
-
- # Print the output
- print(output)
- ```
+ from peft import PeftModel
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
+
+ model_id = "EleutherAI/gpt-neox-20b"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_use_double_quant=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16
+ )
+ model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"": 0})
+ model = PeftModel.from_pretrained(model, "myzens/AlpaGo")
+ ```
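The updated snippet loads the 4-bit base model and attaches the AlpaGo adapter, but stops before any inference. A minimal generation sketch, assuming the `model` and `tokenizer` objects built by the README snippet above; the `generate_reply` helper and its defaults are illustrative, not part of the repo:

```python
def generate_reply(model, tokenizer, prompt, max_new_tokens=64):
    """Tokenize a prompt, run generation, and decode the first sequence."""
    # Encode the prompt and move the tensors onto the model's device.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # transformers' generate() already runs under no_grad internally,
    # so no explicit torch.no_grad() wrapper is needed here.
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode the full generated sequence (prompt + continuation).
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Usage, given the model/tokenizer from the README snippet:
# print(generate_reply(model, tokenizer, "Hello, AlpaGo!"))
```

Loading GPT-NeoX-20B in 4-bit still needs a GPU with roughly 12 GB or more of memory, which is why the README's `device_map={"": 0}` pins everything to the first GPU.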