PEFT · Safetensors · English
adirik committed
Commit c9bc1be · 1 Parent(s): 1cb3e84

Update README.md

Files changed (1): README.md (+48 -2)
README.md CHANGED
language:
- en
---

## Style-Instruct Mistral 7B
Mistral 7B Instruct fine-tuned on the [neuralwork/fashion-style-instruct](https://huggingface.co/datasets/neuralwork/fashion-style-instruct) dataset with LoRA and 4-bit quantization. See the blog [post](https://blog.neuralwork.ai/) and the GitHub [repository](https://github.com/neuralwork/instruct-finetune-mistral) for training details. The model is trained with body type / personal style descriptions as input, target events (e.g. casual date, business meeting) as context, and outfit combination suggestions as output. For the full list of supported event types, refer to the Gradio demo [file](https://github.com/neuralwork/instruct-finetune-mistral/blob/main/app.py) in the GitHub repository.
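Each training sample pairs a self-description (input) with a target event (context) and outfit suggestions (output). If you want to look at the data directly, a minimal sketch using the `datasets` library is below; it assumes the dataset has a standard `train` split and simply prints one record so you can read the actual column names from the output rather than guessing them.

```
from datasets import load_dataset

# load the fine-tuning data from the Hugging Face Hub
# (assumes the default "train" split; check the dataset card if this errors)
dataset = load_dataset("neuralwork/fashion-style-instruct", split="train")

# print one record to see the fields (style description, event, outfit suggestions)
sample = dataset[0]
print(sample.keys())
print(sample)
```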
## Usage
This repo contains the LoRA parameters of the fine-tuned Mistral 7B model. To perform inference, load and use the model as follows:

  ```
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer


def format_instruction(input, event):
    # build the same instruction prompt format used during fine-tuning
    return f"""You are a personal stylist recommending fashion advice and clothing combinations. Use the self body and style description below, combined with the event described in the context to generate 5 self-contained and complete outfit combinations.
### Input:
{input}

### Context:
I'm going to a {event}.

### Response:
"""

# input is a self description of your body type and personal style
prompt = "I'm an athletic and 171cm tall woman in my mid twenties, I have a rectangle shaped body with slightly broad shoulders and have a sleek, casual style. I usually prefer darker colors."
event = "business meeting"
prompt = format_instruction(prompt, event)

# load base LLM model, LoRA params and tokenizer
model = AutoPeftModelForCausalLM.from_pretrained(
    "neuralwork/mistral-7b-style-instruct",
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    load_in_4bit=True,
)
tokenizer = AutoTokenizer.from_pretrained("neuralwork/mistral-7b-style-instruct")
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()

# inference
with torch.inference_mode():
    outputs = model.generate(
        input_ids=input_ids,
        max_new_tokens=800,
        do_sample=True,
        top_p=0.9,
        temperature=0.9,
    )

# decode output tokens and strip the prompt to keep only the generated response
outputs = outputs.detach().cpu().numpy()
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)
output = outputs[0][len(prompt):]
print(output)
  ```
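Because only the LoRA adapter weights live in this repo, you can optionally merge them into the base model for deployment without a `peft` dependency. The following is a minimal sketch using PEFT's `merge_and_unload()`; it assumes you load the model in fp16 rather than 4-bit, since merging directly into quantized weights is not straightforward, and the output directory name is just an example.

```
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# load the base model and LoRA adapter in fp16 for merging
model = AutoPeftModelForCausalLM.from_pretrained(
    "neuralwork/mistral-7b-style-instruct",
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
)

# fold the LoRA weights into the base model and drop the PEFT wrapper
merged_model = model.merge_and_unload()

# save as a regular transformers checkpoint alongside the tokenizer
merged_model.save_pretrained("mistral-7b-style-instruct-merged")
tokenizer = AutoTokenizer.from_pretrained("neuralwork/mistral-7b-style-instruct")
tokenizer.save_pretrained("mistral-7b-style-instruct-merged")
```

The merged folder can then be loaded with a plain `AutoModelForCausalLM.from_pretrained` call and used exactly like the snippet above.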