Ciaranshu committed on
Commit
32d9302
1 Parent(s): 4f30dae

Update README.md

Files changed (1): README.md (+47 -3)
---
license: mit
library_name: peft
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
---

**Website**: [FireAct Agent](https://fireact-agent.github.io)

# **FireAct Llama-2/CodeLlama**
FireAct Llama/CodeLlama is a collection of fine-tuned generative text models for performing ReAct with external search tools. Links to the other models can be found in the Index section.

## Foundation Model Details
*Note: The foundation models, Llama-2 and CodeLlama, are developed by Meta. Please also read the guidance and licenses on their pages, [Llama-2](https://huggingface.co/meta-llama) and [CodeLlama](https://huggingface.co/codellama), before using the FireAct models.*

**Model Developers** System 2 Research, Cambridge LTL, Monash University, Princeton PLI.

**Variations** FireAct models include a full fine-tuned Llama-2-7B model, as well as Llama-2-[7B,13B] and CodeLlama-[7B,13B,34B] LoRA fine-tuned models. All released models are fine-tuned on multi-task (HotpotQA/StrategyQA/MMLU) and multi-type (ReAct/CoT/Reflexion) data.

**Input** Models take text input only.

**Output** Models generate text only.

## Index
**Full Fine-tuned Model**

FireAct Llama-2:
- [fireact_llama_2_7b](https://huggingface.co/forestai/fireact_llama_2_7b)

**LoRA Fine-tuned Models**

FireAct Llama-2:
- [fireact_llama_2_7b_lora](https://huggingface.co/forestai/fireact_llama_2_7b_lora)
- [fireact_llama_2_13b_lora](https://huggingface.co/forestai/fireact_llama_2_13b_lora)

FireAct CodeLlama:
- [fireact_codellama_7b_lora](https://huggingface.co/forestai/fireact_codellama_7b_lora)
- [fireact_codellama_13b_lora](https://huggingface.co/forestai/fireact_codellama_13b_lora)
- [fireact_codellama_34b_lora](https://huggingface.co/forestai/fireact_codellama_34b_lora)

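The LoRA variants in the index are adapters meant to be applied on top of their respective base models. As a hedged illustration only (the exact prompt format and generation settings FireAct uses are not specified on this card, and the Llama-2 base weights are gated behind Meta's license), loading one of them with `transformers` and `peft` might look like:

```python
# Hypothetical loading sketch -- the model IDs are taken from the index above,
# but the prompt format and generation settings are illustrative assumptions.
def load_fireact_lora(base_id="meta-llama/Llama-2-7b-hf",
                      adapter_id="forestai/fireact_llama_2_7b_lora"):
    """Attach a FireAct LoRA adapter to its Llama-2 base model."""
    # Imports are local so the function can be defined without
    # transformers/peft installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.float16, device_map="auto"
    )
    model = PeftModel.from_pretrained(base, adapter_id)  # LoRA weights on top
    return model, tokenizer

# Example (downloads several GB of gated weights; requires license acceptance):
# model, tokenizer = load_fireact_lora()
# inputs = tokenizer("Question: ...", return_tensors="pt").to(model.device)
# print(tokenizer.decode(model.generate(**inputs, max_new_tokens=256)[0]))
```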

## LoRA Training procedure

The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0

### Framework versions

- PEFT 0.4.0