Tags: Text Generation · PEFT · Safetensors · llama-2 · Eval Results
Commit 5785368 (1 parent: 2941035)
Committed by dfurman

Upload model

Files changed (3)
  1. README.md +2 -14
  2. adapter_config.json +23 -0
  3. adapter_model.safetensors +3 -0
README.md CHANGED
@@ -1,21 +1,9 @@
  ---
  library_name: peft
- license: llama2
- datasets:
- - ehartford/dolphin
- tags:
- - llama-2
- inference: false
- pipeline_tag: text-generation
- ---
-
- # llama-2-70b-dolphin 🦙🐬
-
- This instruction model was built via parameter-efficient QLoRA finetuning of [llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b-hf) on the first 25k rows of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) (an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)). Finetuning was executed on a single H100 (80 GB PCIe) for roughly XX hours on the [Lambda Labs](https://cloud.lambdalabs.com/instances) platform.
-
  ---
+ ## Training procedure

  ### Framework versions


- - PEFT 0.5.0.dev0
+ - PEFT 0.5.0.dev0
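For context on using the uploaded adapter, here is a minimal, hypothetical sketch of loading it on top of the base model with Transformers and PEFT. Only `meta-llama/Llama-2-70b-hf` is taken from `adapter_config.json`; the adapter repo id below is a placeholder, not the path of this repository.

```python
# Hypothetical usage sketch: attach this commit's LoRA adapter to the base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-70b-hf"       # from adapter_config.json below
adapter_id = "<user>/llama-2-70b-dolphin"   # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # loads adapter_model.safetensors

prompt = "Explain parameter-efficient finetuning in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```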
adapter_config.json ADDED
@@ -0,0 +1,23 @@
+ {
+ "auto_mapping": null,
+ "base_model_name_or_path": "meta-llama/Llama-2-70b-hf",
+ "bias": "none",
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "lora_alpha": 16,
+ "lora_dropout": 0.1,
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "r": 64,
+ "revision": null,
+ "target_modules": [
+ "q_proj",
+ "k_proj",
+ "v_proj",
+ "o_proj"
+ ],
+ "task_type": "CAUSAL_LM"
+ }
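The file above is the JSON that PEFT serializes from a `LoraConfig`. The following sketch shows an assumed training-side configuration that would produce it; the hyperparameter values are copied from the JSON, everything around them is an assumption.

```python
# Sketch (assumed): a LoraConfig whose values match the adapter_config.json above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                   # LoRA rank
    lora_alpha=16,                          # scaling factor
    lora_dropout=0.1,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
# get_peft_model(base_model, lora_config).save_pretrained(...) would write an
# adapter_config.json equivalent to the one added in this commit.
```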
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8b24117dbb94738050e7366e1c7818252e14e29c611ebbfbc39d8a9f79b7431
+ size 1048663568
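The weights themselves are stored via Git LFS, so the commit only records the pointer above. A small sketch for checking a downloaded copy against the pointer's sha256 oid and byte size (the local path is an assumption):

```python
# Verify a local adapter_model.safetensors against the Git LFS pointer above.
import hashlib
from pathlib import Path

path = Path("adapter_model.safetensors")  # assumed download location
expected_oid = "d8b24117dbb94738050e7366e1c7818252e14e29c611ebbfbc39d8a9f79b7431"
expected_size = 1_048_663_568  # bytes, from the pointer

data = path.read_bytes()
assert len(data) == expected_size, "size mismatch"
assert hashlib.sha256(data).hexdigest() == expected_oid, "sha256 mismatch"
print("File matches the LFS pointer.")
```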