Update README.md

README.md (CHANGED)
@@ -18,13 +18,51 @@ widget:
    greens, and soft blues, creating a sense of tranquil, natural beauty
  output:
    url: images/example_jl6x0209w.png
-
---

# FLUX.1-dev Impressionism fine-tuning with LoRA

This is a LoRA fine-tuning of the FLUX.1 model trained on a curated dataset of impressionist paintings from WikiArt.

## Dataset
The model was trained on the [WikiArt Impressionism Curated Dataset](https://huggingface.co/datasets/dolphinium/wikiart-impressionism-curated), which contains 1,000 high-quality Impressionist paintings with the following distribution:
@@ -41,27 +79,12 @@ The model was trained on the [WikiArt Impressionism Curated Dataset](https://hug

## Usage

-```python
-from diffusers import StableDiffusionPipeline
-import torch
-
-model_id = "black-forest-labs/FLUX.1-dev"
-lora_model_path = "dolphinium/FLUX.1-dev-wikiart-impressionism"
-
-pipe = StableDiffusionPipeline.from_pretrained(
-    model_id,
-    torch_dtype=torch.float16
-).to("cuda")
-
-# Load LoRA weights
-pipe.unet.load_attn_procs(lora_model_path)
-
-prompt = "an impressionist style landscape with rolling hills and autumn trees"
-image = pipe(prompt).images[0]
-image.save("impressionist_landscape.png")
-```

## License
-This model inherits the license of the base FLUX.1 model and the WikiArt dataset.

    greens, and soft blues, creating a sense of tranquil, natural beauty
  output:
    url: images/example_jl6x0209w.png
---

# FLUX.1-dev Impressionism fine-tuning with LoRA

This is a LoRA fine-tuning of the FLUX.1 model trained on a curated dataset of impressionist paintings from WikiArt.

+## Training Process & Results
+
+### Training Environment
+- GPU: NVIDIA A100
+- Training Duration: ~1 hour for 1000 steps
+- Training Notebook: [Google Colab Notebook](https://colab.research.google.com/drive/1G9k6iwSGKXmA32ok4zOPijFUFwBAZ9aB?usp=sharing)
+- Training Framework: [AI-Toolkit](https://github.com/ostris/ai-toolkit)
+
+## Training Progress Visualization
+
+### Training Progress Grid
+
+*4x6 grid showing model progression across different prompts (rows) at various training steps (columns: 0, 200, 400, 600, 800, 1000)*
+
+### Step-by-Step Evolution
+
+*Evolution of the model's output for the prompt: "An impressionist painting portrays a vast landscape with gently rolling hills under a radiant sky. Clusters of autumn trees dot the scene, rendered with loose, expressive brushstrokes and a palette of warm oranges, deep greens, and soft blues, creating a sense of tranquil, natural beauty" (Steps 0-1000, sampled every 100 steps)*
+
+### Base vs Fine-tuned
+
+*Left side is the base model and right side is this fine-tuned model*
+
+### Current Results & Future Improvements
+The most notable improvements are observed in landscape generation, which can be attributed to:
+- Strong representation of landscapes (30%) in the training dataset
+- Inherent structural similarities in impressionist landscape paintings
+- Clear patterns in color usage and brushstroke techniques
+
+Future improvements will focus on:
+- Experimenting with different LoRA configurations and ranks
+- Fine-tuning hyperparameters for better convergence
+- Improving caption quality and specificity
+- Extending training duration beyond 1000 steps
+- Developing custom training scripts for more granular control
+
+While the current implementation uses the [AI-Toolkit](https://github.com/ostris/ai-toolkit), future iterations will involve developing custom training scripts to gain deeper insights into model configuration and behavior.
+
## Dataset
The model was trained on the [WikiArt Impressionism Curated Dataset](https://huggingface.co/datasets/dolphinium/wikiart-impressionism-curated), which contains 1,000 high-quality Impressionist paintings with the following distribution:

## Usage

+To run the model with 4-bit quantization, check out this [Google Colab Notebook](https://colab.research.google.com/drive/1dnCeNGHQSuWACrG95rH4TXPgXwNNdTh-?usp=sharing).
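
If you have a GPU with enough memory (for example an A100), a minimal non-quantized sketch with `diffusers` could look like the following. This is not taken from the linked notebook; it assumes a recent `diffusers` release with `FluxPipeline` and that the adapter in this repository loads via the standard LoRA loader:

```python
import torch
from diffusers import FluxPipeline

# Load the base model in bfloat16 (without quantization this needs a large GPU).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Apply this LoRA on top of the base model.
pipe.load_lora_weights("dolphinium/FLUX.1-dev-wikiart-impressionism")

prompt = "an impressionist style landscape with rolling hills and autumn trees"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("impressionist_landscape.png")
```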

+On Google Colab, the cheapest instance that can run this is, as far as I know, a T4 with high RAM.

+Thanks also to the author of the original notebook for running FLUX with 4-bit quantization:
+[Original Colab Notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Flux/Run_Flux_on_an_8GB_machine.ipynb)
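
For reference, a rough sketch of 4-bit loading with a recent `diffusers` plus `bitsandbytes` is shown below. The linked notebooks may use a different approach, so treat the class and argument names here as assumptions to verify against your installed versions:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# Quantize only the large transformer to 4-bit; the rest stays in bfloat16.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("dolphinium/FLUX.1-dev-wikiart-impressionism")
pipe.enable_model_cpu_offload()  # offload idle components to fit smaller GPUs

image = pipe("an impressionist painting of a quiet river at dusk").images[0]
image.save("impressionist_river.png")
```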

## License
+This model inherits the licenses of the base [FLUX.1 model](https://huggingface.co/black-forest-labs/FLUX.1-dev) and the [WikiArt](https://huggingface.co/datasets/huggan/wikiart) dataset.