AlekseyCalvin committed on
Commit
f7d342c
1 Parent(s): db37476

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -45,9 +45,9 @@ widget:
   <Gallery />
 
   # Mayakovsky Style Soviet Constructivist Posters & Cartoons Flux LoRA (v.1) by SOON®
- Trained via Ostris' [ai-toolkit](https://replicate.com/ostris/flux-dev-lora-trainer/train) on 50 high-resolution poster scans & artworks the great Soviet **poet, artist, & Marxist activist Vladimir Mayakovsky**. <br>
+ Trained via Ostris' [ai-toolkit](https://replicate.com/ostris/flux-dev-lora-trainer/train) on 50 high-resolution scans of 1910s/1920s posters & artworks by the great Soviet **poet, artist, & Marxist activist Vladimir Mayakovsky**. <br>
   For this training experiment, we first spent many days rigorously translating the textual elements (slogans, captions, titles, inset poems, speech fragments, etc.), with form/signification/rhymes intact, throughout every image subsequently used for training. <br>
- These textographic elements were, furthermore, re-placed by us into their original visual contexts, using fonts matched to the sources. <br>
+ These translated textographic elements were, furthermore, re-placed by us into their original visual contexts, using fonts matched up to the sources. <br>
   We then manually composed highly detailed paragraph-long captions, wherein we detailed the graphic and textual content of each piece, its layout, and the most intuitive/intended apprehension of each composition. <br>
   This version of the resultant LoRA was trained on our custom Schnell-based checkpoint (Historic Color 2), available here, for 3600 steps at a Transformer Learning Rate of 0.00002, batch 1, with the ademamix8bit optimizer! No synthetic data, zero auto-generated captions! <br>
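For readers who would rather run a comparable training job locally with Ostris' ai-toolkit than use the Replicate trainer linked above, the hyperparameters stated in the README (3600 steps, batch size 1, learning rate 0.00002, ademamix8bit optimizer) would map onto a config fragment roughly like the sketch below. This is an illustrative assumption, not the authors' actual config: the job name, paths, checkpoint reference, and network rank are placeholders.

```yaml
# Hypothetical ai-toolkit LoRA training config sketch (not the authors' file).
# Only steps, batch_size, lr, and optimizer come from the README above;
# every name and path is a placeholder to be replaced with real values.
job: extension
config:
  name: "mayakovsky_constructivist_lora_v1"   # placeholder job name
  process:
    - type: "sd_trainer"
      training_folder: "output"               # placeholder output dir
      network:
        type: "lora"
        linear: 16                            # rank: assumed, not stated in README
        linear_alpha: 16
      train:
        steps: 3600                           # from README
        batch_size: 1                         # from README
        lr: 0.00002                           # from README
        optimizer: "ademamix8bit"             # from README
      model:
        name_or_path: "path/to/Historic-Color-2"  # placeholder for the custom Schnell-based checkpoint
        is_flux: true
      datasets:
        - folder_path: "path/to/captioned_poster_scans"  # placeholder dataset dir
```

The exact key layout varies between ai-toolkit versions, so treat this as a starting point to check against the toolkit's bundled example configs.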