adirik committed on
Commit
1cb3e84
1 Parent(s): 2ff4113

Update README.md

Files changed (1)
  1. README.md +12 -16
README.md CHANGED
@@ -1,21 +1,17 @@
  ---
  library_name: peft
+ license: apache-2.0
+ datasets:
+ - neuralwork/fashion-style-instruct
+ language:
+ - en
  ---
- ## Training procedure
-
-
- The following `bitsandbytes` quantization config was used during training:
- - quant_method: bitsandbytes
- - load_in_8bit: False
- - load_in_4bit: True
- - llm_int8_threshold: 6.0
- - llm_int8_skip_modules: None
- - llm_int8_enable_fp32_cpu_offload: False
- - llm_int8_has_fp16_weight: False
- - bnb_4bit_quant_type: fp4
- - bnb_4bit_use_double_quant: False
- - bnb_4bit_compute_dtype: float32
- ### Framework versions
-
-
- - PEFT 0.4.0
+ ## Style-Instruct Mistral 7B
+ Mistral 7B instruct fine-tuned on the [neuralwork/fashion-style-instruct]() dataset with LoRA and 4-bit quantization. See the blog [post]() and GitHub [repository](https://github.com/neuralwork/instruct-finetune-mistral)
+ for training details. The model takes a body type / personal style description as input and a target event (e.g. casual date, business meeting) as context, and outputs outfit combination suggestions.
+
+
+ ## Usage
+ This repo contains the LoRA parameters of the fine-tuned Mistral 7B model. To perform inference, load the adapter together with its base model, for example (a sketch; the repo id below is a placeholder):
+ ```python
+ from peft import AutoPeftModelForCausalLM
+ from transformers import AutoTokenizer
+
+ # AutoPeftModelForCausalLM reads the base model name from the adapter config
+ # and loads base model + LoRA weights in one call. load_in_4bit mirrors the
+ # 4-bit training setup. Replace the placeholder id with this repo's id.
+ adapter_id = "path/to/this-repo"
+ model = AutoPeftModelForCausalLM.from_pretrained(
+     adapter_id, load_in_4bit=True, device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(adapter_id)
+
+ prompt = "..."  # style description + target event, formatted as in the dataset
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=512, do_sample=True)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```