ucllovelab committed
Commit 80121ee
1 Parent(s): e8eaecd

Update README.md

Files changed (1)
  1. README.md +3 -0
README.md CHANGED
@@ -13,6 +13,9 @@ We fine-tuned Llama2-7b-chat using LoRA. We used a batch size of 1 and a chunk s
  ## Training data:
  Please refer to Dataset card: https://huggingface.co/datasets/BrainGPT/train_valid_split_pmc_neuroscience_2002-2022_filtered_subset
  
+ ## Model weights:
+ The current version of BrainGPT was fine-tuned on Llama-2-7b-chat-hf with LoRA; `adapter_model.bin` contains the LoRA adapter weights. To load and use the full model, you need to be granted access to Llama-2-7b-chat-hf via https://huggingface.co/meta-llama/Llama-2-7b-chat-hf.
+ 
  ## Load and use model:
  ```python
  from peft import PeftModel, PeftConfig
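
For context, a minimal sketch of how the adapter described in the added section could be loaded with PEFT. The adapter repo id `BrainGPT/BrainGPT-7B-v0.1` is an assumption used here for illustration only, and the snippet assumes access to meta-llama/Llama-2-7b-chat-hf has already been granted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig

# Hypothetical adapter repo id, for illustration only.
adapter_id = "BrainGPT/BrainGPT-7B-v0.1"

# Read the adapter config to find the base model it was trained on
# (Llama-2-7b-chat-hf, which requires granted access on the Hub).
config = PeftConfig.from_pretrained(adapter_id)
base_model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Apply the LoRA adapter weights (adapter_model.bin) on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```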