Skylion007 committed
Commit 01f42b8
Parent: a844463

Improve formatting

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -15,10 +15,10 @@ language:
  ## Summary
  CommonCanvas is a family of latent diffusion models capable of generating images from a given text prompt. The architecture is based off of Stable Diffusion XL. Different CommonCanvas models are trained exclusively on subsets of the CommonCatalog Dataset (See Data Card), a large dataset of Creative Commons licensed images with synthetic captions produced using a pre-trained BLIP-2 captioning model.
 
- Input: CommonCatalog Text Captions
- Output: CommonCatalog Images
- Architecture: Stable Diffusion XL
- Version Number: 0.1
+ **Input:** CommonCatalog Text Captions
+ **Output:** CommonCatalog Images
+ **Architecture:** Stable Diffusion XL
+ **Version Number:** 0.1
 
  The goal of this purpose is to produce a model that is competitive with Stable Diffusion XL, but to do so using an easily accessible dataset of known provenance. Doing so makes replicating the model significantly easier and provides proper attribution to all the creative commons work used to train the model. The exact training recipe of the model can be found in the paper hosted at this link. https://arxiv.org/abs/2310.16825
 
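
For context on the model card text touched by this diff: it describes a text-to-image model whose architecture follows Stable Diffusion XL. Below is a minimal usage sketch with the diffusers library, assuming the weights are published in diffusers format; the repository id and prompt are placeholders I have chosen for illustration, not something specified in this commit.

```python
# Minimal sketch (not from this commit): sampling from a CommonCanvas checkpoint
# with Hugging Face diffusers, assuming SDXL-compatible weights.
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder repository id -- replace with the actual CommonCanvas weights you use.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "common-canvas/CommonCanvas-XL-C",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate a single image from a text prompt and save it to disk.
image = pipe(prompt="a watercolor painting of a lighthouse at dawn").images[0]
image.save("commoncanvas_sample.png")
```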