---
language:
- en
library_name: peft
pipeline_tag: text-generation
tags:
- medical
license: llama2
datasets:
- nmitchko/i2b2-query-data-1.0
---

# i2b2 QueryBuilder - 34b Merged

![Screenshot](https://huggingface.co/nmitchko/i2b2-querybuilder-codellama-34b/resolve/main/Example%20Query.png)

## Model Description

This model generates queries for the i2b2 query builder. It was trained on [this dataset](https://huggingface.co/datasets/nmitchko/i2b2-query-data-1.0) for 10 epochs and is intended for evaluation use only.
* Do not use it as a final research query builder.
* Results may be incorrect or malformed.
* The onus of research accuracy is on the researcher, not the AI model.

## Prompt Format

If you are using text-generation-webui, you can download the instruction template [i2b2.yaml](https://huggingface.co/nmitchko/i2b2-querybuilder-codellama-34b/resolve/main/i2b2.yaml).

````md
Below is an instruction that describes a task.

### Instruction:
{input}

### Response:
```xml
````

The template ends with an opening `` ```xml `` fence, which primes the model to respond with an XML query.

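Outside text-generation-webui, the template can be applied with a few lines of Python. This is a minimal sketch; `build_prompt` is an illustrative name, not part of any published API:

```python
# Minimal sketch: fill the i2b2 instruction template by hand.
# The template text mirrors the block above; build_prompt is illustrative.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\n{input}\n\n"
    "### Response:\n```xml\n"
)

def build_prompt(instruction: str) -> str:
    """Return the full prompt for a natural-language query description."""
    return PROMPT_TEMPLATE.format(input=instruction)

print(build_prompt("Find all patients with a diagnosis of hypertension."))
```
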
### Architecture

`nmitchko/i2b2-querybuilder-34b-merged` is a large language model LoRA fine-tuned specifically for generating queries in the [i2b2 query builder](https://community.i2b2.org/wiki/display/webclient/3.+Query+Tool).
It is based on [`codellama-34b-hf`](https://huggingface.co/codellama/CodeLlama-34b-hf), at 34 billion parameters.

The primary goal of this model is to improve research accuracy with the i2b2 tool.
It was trained using [LoRA](https://arxiv.org/abs/2106.09685), specifically [QLoRA Multi GPU](https://github.com/ChrisHayduk/qlora-multi-gpu), to reduce the memory footprint.

See Training Parameters for more info. This LoRA supports 4-bit and 8-bit modes.

### Requirements

```
bitsandbytes>=0.41.0
peft@main
transformers@main
```

Here `peft@main` and `transformers@main` indicate the development versions installed from the GitHub main branch.

Steps to load this model:
1. Load the base model (codellama-34b-hf) using transformers, as sketched below.

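A minimal loading-and-generation sketch. The `adapter_id` below is an assumption based on the links in this card, not a confirmed repository; if you are using merged weights, load them directly with `AutoModelForCausalLM` and skip the `PeftModel` step:

```python
# Sketch: load codellama-34b-hf in 4-bit, attach the LoRA, and generate a query.
# adapter_id is an assumption (taken from this card's links).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "codellama/CodeLlama-34b-hf"
adapter_id = "nmitchko/i2b2-querybuilder-codellama-34b"  # assumed adapter repo

# 4-bit NF4 settings mirroring the training-time config listed below.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA weights

# Build a prompt in the format shown above and generate an XML query.
prompt = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\n"
    "Find all patients with a diagnosis of type 2 diabetes after 2015.\n\n"
    "### Response:\n```xml\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
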
## Training Parameters

The model was trained for 10 epochs on [i2b2-query-data-1.0](https://huggingface.co/datasets/nmitchko/i2b2-query-data-1.0).
`i2b2-query-data-1.0` contains only tasks and outputs for i2b2 query XSD schemas.

| Item          | Amount | Units |
|---------------|--------|-------|
| LoRA Rank     | 64     | n/a   |
| LoRA Alpha    | 16     | n/a   |
| Learning Rate | 1e-4   | n/a   |
| Dropout       | 5      | %     |
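
For reference, this is roughly how those hyperparameters map onto a `peft` `LoraConfig`. `target_modules` is an assumption (a common choice for Llama-family models) and is not stated on this card; the learning rate is passed to the trainer rather than the config:

```python
# Sketch of the LoRA hyperparameters above as a peft LoraConfig.
# target_modules is assumed, not taken from this card.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                 # LoRA Rank
    lora_alpha=16,                        # LoRA Alpha
    lora_dropout=0.05,                    # Dropout: 5%
    target_modules=["q_proj", "v_proj"],  # assumed typical Llama attention modules
    task_type="CAUSAL_LM",
)
```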

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
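
Transcribed into a `transformers` `BitsAndBytesConfig`, the same training-time settings would look like this sketch:

```python
# Direct transcription of the training-time quantization settings listed above.
import torch
from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```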

### Framework versions

- PEFT 0.6.0.dev0