dolphinium committed
Commit af63c59 · verified · 1 Parent(s): 809a87e

Update README.md

Files changed (1): README.md (+6 −3)
README.md CHANGED
@@ -57,7 +57,8 @@ The most notable improvements are observed in landscape generation, which can be
 Future improvements will focus on:
 - Experimenting with different LoRA configurations and ranks
 - Fine-tuning hyperparameters for better convergence
-- Improving caption quality and specificity
+- Improving caption quality and specificity (the current captions may be too complex for the model to capture specific features)
+- Testing the 'style' option for the 'content_or_style' training-config parameter, which is currently set to 'balanced'
 - Extending training duration beyond 1000 steps
 - Developing custom training scripts for more granular control
 
@@ -74,8 +75,10 @@ The model was trained on the [WikiArt Impressionism Curated Dataset](https://hug
 ## Model Details
 - Base Model: [FLUX.1](https://huggingface.co/black-forest-labs/FLUX.1-dev)
 - LoRA Rank: 16
-- Training Steps: 2000
-- Resolution: 512-1024px
+- Training Steps: 1000
+- Resolution: 512-768-1024px
+
+You can find the detailed training configuration in [config.yaml](config.yaml)
 
 ## Usage
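For context on the `content_or_style` parameter mentioned in the diff, here is a minimal sketch of what the relevant excerpt of `config.yaml` might look like. The key nesting assumes an ai-toolkit-style FLUX LoRA trainer config; the surrounding keys and the dataset path are assumptions for illustration, not taken from this repository's actual file.

```yaml
# Hypothetical excerpt — layout follows an ai-toolkit-style FLUX LoRA
# config; consult the repo's config.yaml for the authoritative values.
network:
  type: lora
  linear: 16                       # LoRA rank, as listed in Model Details
datasets:
  - folder_path: /path/to/wikiart_impressionism   # assumed path
    resolution: [512, 768, 1024]   # multi-resolution training buckets
    content_or_style: balanced     # switch to 'style' to weight style over content
train:
  steps: 1000                      # matches Training Steps in Model Details
```

Setting `content_or_style` to `style` would bias caption dropout/weighting toward learning the Impressionist style rather than image content, which is the experiment the commit proposes.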