juliensimon committed
Commit ba4239e
1 Parent(s): 8737c72

Update README.md

Files changed (1):
  1. README.md +7 -1
README.md CHANGED
@@ -2,6 +2,8 @@
 ---
 license: creativeml-openrail-m
 base_model: runwayml/stable-diffusion-v1-5
+datasets:
+- lambdalabs/pokemon-blip-captions
 tags:
 - stable-diffusion
 - stable-diffusion-diffusers
@@ -10,7 +12,11 @@ tags:
 - lora
 inference: true
 ---
-
+
+This model was fine-tuned using 4-bit QLoRA, following the instructions in https://huggingface.co/blog/lora.
+
+I used an Amazon EC2 g4dn.xlarge instance (1x T4 GPU) with the Deep Learning AMI for PyTorch. Training time was about 6 hours. The on-demand price is about $3, which can easily be reduced to about $1 with EC2 Spot Instances.
+
 # LoRA text2image fine-tuning - juliensimon/stable-diffusion-v1-5-pokemon-lora
 These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.
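The card describes LoRA attention weights for runwayml/stable-diffusion-v1-5. Below is a minimal inference sketch with diffusers, following the loading pattern documented in the linked LoRA blog post; the prompt and sampling settings are illustrative assumptions, not part of this repository.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA weights were trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Attach the LoRA attention weights from this repository to the UNet.
pipe.unet.load_attn_procs("juliensimon/stable-diffusion-v1-5-pokemon-lora")
pipe.to("cuda")

# Example prompt in the style of the pokemon-blip-captions dataset (assumed).
image = pipe("a cute green dragon with red eyes", num_inference_steps=30).images[0]
image.save("pokemon.png")
```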
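For reference, the dataset added to the card metadata can be inspected with the datasets library. This is a quick sketch assuming the dataset's usual "image" and "text" columns:

```python
from datasets import load_dataset

# Dataset referenced by the new `datasets:` metadata entry.
dataset = load_dataset("lambdalabs/pokemon-blip-captions", split="train")

# Each record pairs a Pokemon image with a BLIP-generated caption.
print(len(dataset))
print(dataset[0]["text"])
```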