animanatwork committed on
Commit ebabd25
1 Parent(s): 2e0a5a7

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -22,7 +22,7 @@ inference: true
 # LoRA text2image fine-tuning - animanatwork/illustrations-lora
 These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the animanatwork/text_to_image_dataset dataset.
 
-Some images from the dataset:
+Below, we can find some images from the dataset:
 
 <div style="display: flex; justify-content: space-between;">
 <img src="https://cdn-uploads.huggingface.co/production/uploads/66297c313291276a14318d23/fHCi3t9AlK5AasMt_K0nh.png" width="30%" />
@@ -31,7 +31,7 @@ Some images from the dataset:
 </div>
 
 
-Some images generated from the model using the prompt: "a stylized illustration of a woman sitting in a comfortable chair, reading a book. She is wearing a hat, and her expression appears focused and calm. A black cat is also depicted, sitting beside her and looking at the book, suggesting a shared moment of quiet and companionship. The woman is dressed in a casual outfit with yellow shoes, and the overall color scheme is simple, using black, white, and yellow. The setting seems cozy and peaceful, ideal for reading."
+The images below are generated from the model using the prompt: "a stylized illustration of a woman sitting in a comfortable chair, reading a book. She is wearing a hat, and her expression appears focused and calm. A black cat is also depicted, sitting beside her and looking at the book, suggesting a shared moment of quiet and companionship. The woman is dressed in a casual outfit with yellow shoes, and the overall color scheme is simple, using black, white, and yellow. The setting seems cozy and peaceful, ideal for reading."
 
 <div style="display: flex; justify-content: space-between;">
 <img src="./image_0.png" width="25%" />
@@ -56,7 +56,7 @@ Do NOT use in production. This model was purely created for research purposes.
 ## Training details
 
 - The model was trained on the "animanatwork/text_to_image_dataset" dataset using 10_000 training steps (the default is 15_000) and took several hours to train. For more details see the [Colab notebook](https://colab.research.google.com/drive/1CePJWR2sfYW-w0oPuiIdJzuc82Z6yYHt#scrollTo=QzKEQJYkUv2Q).
-
+- The dataset's captions were generated using ChatGPT Vision. During training, I noticed CLIP can only use 77 tokens for a given image. Since most of our image descriptions contained more tokens, we'll have to create a new dataset that doesn't exceed the maximum.
 
 
 [TODO: describe the data used to train the model]
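The 77-token CLIP limit mentioned in the new training note can be checked before training rather than discovered mid-run. The sketch below is a hypothetical pre-flight check, not part of the commit: it uses a plain whitespace split plus the two start/end markers as a crude lower bound on CLIP's BPE token count (exact counts would require `transformers.CLIPTokenizer`), and flags captions that would be truncated.

```python
# Rough pre-training check for captions that may exceed CLIP's context
# window. CLIP's text encoder accepts at most 77 tokens, including the
# <|startoftext|> and <|endoftext|> markers; anything longer is
# silently truncated during training.
CLIP_MAX_TOKENS = 77


def approx_token_count(caption: str) -> int:
    """Whitespace word count + 2 markers, a crude lower bound on CLIP
    BPE tokens (BPE usually splits words further, so the real count
    is at least this large)."""
    return len(caption.split()) + 2


def flag_long_captions(captions: list[str], limit: int = CLIP_MAX_TOKENS) -> list[str]:
    """Return the captions whose approximate token count exceeds `limit`."""
    return [c for c in captions if approx_token_count(c) > limit]


captions = [
    "a stylized illustration of a woman reading a book",
    "word " * 100,  # clearly over the 77-token budget
]
print(len(flag_long_captions(captions)))  # → 1
```

Captions flagged here could then be shortened (or re-generated with a length constraint in the ChatGPT Vision prompt) when building the new dataset.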