cocaho committed on
Commit
f45c64f
1 Parent(s): 1616cb5

Update README.md

Files changed (1)
  1. README.md +28 -1
README.md CHANGED
@@ -9,6 +9,9 @@ tags:
  model-index:
  - name: outputs
    results: []
+ language:
+ - en
+ pipeline_tag: text-generation
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -20,7 +23,31 @@ This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/g

  ## Model description

- More information needed
+ Trained on sociology text.
+
+ ### Example
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
+
+
+ model_id = ""
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,                     # load the weights in 4-bit precision
+     bnb_4bit_quant_type="nf4",             # NF4 quantization scheme
+     bnb_4bit_compute_dtype=torch.bfloat16  # run compute in bfloat16
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"": 0})
+
+ text = "sociology is "
+ device = "cuda:0"
+ inputs = tokenizer(text, return_tensors="pt").to(device)
+
+ outputs = model.generate(**inputs, max_new_tokens=60)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```

  ## Intended uses & limitations
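
The `pipeline_tag: text-generation` added to the front matter also lets the model be used through the high-level `pipeline` API. A minimal sketch, assuming the blank `model_id` placeholder in the README example is replaced with the actual repository id (shown here as the hypothetical `cocaho/outputs`):

```python
# Sketch only: the repo id below is a placeholder, not confirmed by the diff.
from transformers import pipeline

generator = pipeline(
    "text-generation",       # matches the pipeline_tag added in this commit
    model="cocaho/outputs",  # hypothetical repo id; substitute the real one
    device_map="auto",
)

print(generator("sociology is ", max_new_tokens=60)[0]["generated_text"])
```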