zz07 committed · verified
Commit be0b5fa · 1 Parent(s): 700af5d

Update README.md

Files changed (1):
  1. README.md +41 -10
README.md CHANGED
@@ -1,17 +1,18 @@
 ---
 license: other
-license_name: public-domain
-license_link: LICENSE
+library_name: transformers
+pipeline_tag: text-generation
 language:
 - en
+tags:
+- gene-set-analysis
+- biomedical
+- reasoning
+- llama
 base_model:
-- meta-llama/Llama-3.1-8B-Instruct
 - meta-llama/Llama-3.2-1B-Instruct
 - meta-llama/Llama-3.2-3B-Instruct
-tags:
-- GSA
-pipeline_tag: text-generation
-library_name: transformers
 ---
 
 # Overview of Gene-R1
@@ -35,16 +36,33 @@ The model contains three versions:
 ```python
 import transformers
 from transformers import AutoTokenizer, AutoModelForCausalLM
-
+
+model_id = "ncbi/Gene-R1"
 tokenizer_test = AutoTokenizer.from_pretrained(
-    model_path = "xxxxxxxxx", # The local path where the model is located
+    model_id,
     token = "xxxxxxxxx" # Your access key of hugging face
 )
 model_test = AutoModelForCausalLM.from_pretrained(
-    model_path = "xxxxxxxxx", # The local path where the model is located
+    model_id,
     device_map = "auto",
     token = "xxxxxxxxx" # Your access key of hugging face
 )
+
+# If you want to download a specific model of Gene-R1, using below code:
+<!-- model_id = "ncbi/Gene-R1"
+subfolder = "3B"
+
+tokenizer_test = AutoTokenizer.from_pretrained(
+    model_id,
+    subfolder=subfolder,
+    token = "xxxxxxxxx" # Your access key of hugging face
+)
+model_test = AutoModelForCausalLM.from_pretrained(
+    model_id,
+    subfolder=subfolder,
+    device_map = "auto",
+    token = "xxxxxxxxx" # Your access key of hugging face
+)-->
 
 def complete_chat(system, prompt, model, tokenizer):
     model.generation_config.do_sample=False
@@ -110,6 +128,19 @@ The expected output looks like:
 
 More details of model usage can be referred at our GitHub: [GitHub](https://github.com/ncbi-nlp/Gene-R1)
 
+# Download statistics
+
+Hugging Face tracks downloads automatically based on requests to model query files such as `config.json`.
+To ensure downloads are counted, please load the full models directly from the Hub using `transformers`:
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+model_id = "ncbi/Gene-R1"
+tokenizer = AutoTokenizer.from_pretrained(model_id, hf_token)
+model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", hf_token)
+```
+
 # Acknowledgments
 
 This research was supported in part by the Intramural Research Program of the National Institutes of Health (NIH).
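
Taken together, the snippets added in this commit amount to the following minimal loading sketch. It assumes the repository exposes the three versions under subfolders named "1B", "3B", and "8B", as the commented-out block in the updated README suggests, and it passes the access token via the `token=` keyword, since the positional `hf_token` in the new download-statistics snippet is not valid Python after the `device_map="auto"` keyword argument.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ncbi/Gene-R1"
subfolder = "3B"        # assumed variant name: "1B", "3B", or "8B"
hf_token = "xxxxxxxxx"  # your Hugging Face access token

# Load a specific Gene-R1 variant directly from the Hub so the download is counted.
tokenizer = AutoTokenizer.from_pretrained(model_id, subfolder=subfolder, token=hf_token)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    subfolder=subfolder,
    device_map="auto",  # requires accelerate; places layers across available devices
    token=hf_token,
)
```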