---
datasets:
- kakaobrain/coyo-700m
language:
- en
---


## Model Details

### Model Description

This repo includes every model we trained during the JAX community sprint organized by Hugging Face.
The folders `{model}` contain the Flax checkpoints and `{model}_pt` the PyTorch checkpoints.



- **Developed by:** Baptiste Lemaire, Guillaume Thomas and Tom Dupuis from CEA-List
- **Model type:** Canny edge map conditioned diffusion model
- **Language(s) (NLP):** English


## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
- Fast low-resolution image generation
- Online data augmentation

See our Gradio app for more information: [UCDR-Net gradio](https://huggingface.co/spaces/Baptlem/UCDR-Net)


## Training Details
### Training Data

* [Coyo-700M](https://github.com/kakaobrain/coyo-dataset)
* [Bridge](https://sites.google.com/view/bridgedata)

### Training Procedure 
We trained each of our models from scratch, keeping the script's default hyperparameters except for the batch size.
You can find the training script in the following [Event repo's folder](https://github.com/huggingface/community-events/blob/main/jax-controlnet-sprint/training_scripts/train_controlnet_flax.py)


#### Preprocessing 

- Resize to 128×128 resolution
- Canny edge map extraction


#### Training parameters
The following table describes the different hyperparameters used for each model:
![alt text](./table_training.png)

We stopped the Coyo model shortly after it finished its first epoch. After evaluating it, we found that it performed quite well even after a single epoch, so we decided to keep it.


The last model was trained with a custom DataLoader that builds each batch from 4 Bridge images and 28 Coyo images.
Because the model cycles through Coyo faster than Bridge, epochs are not well defined for this run, so we trained it for a fixed number of steps rather than epochs.
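The mixing scheme described above can be sketched as a simple batch generator. This is a hypothetical reconstruction, not the actual DataLoader; only the 4/28 split per 32-image batch comes from the description above:

```python
import itertools

def mixed_batches(bridge, coyo, n_bridge=4, n_coyo=28):
    """Yield batches drawing n_bridge samples from Bridge and n_coyo from Coyo.

    Each dataset cycles independently, so the smaller dataset (Bridge) is
    repeated more often and per-dataset "epochs" lose their meaning --
    which is why training progress is measured in steps instead.
    """
    bridge_it = itertools.cycle(bridge)
    coyo_it = itertools.cycle(coyo)
    while True:
        batch = [next(bridge_it) for _ in range(n_bridge)]
        batch += [next(coyo_it) for _ in range(n_coyo)]
        yield batch
```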



### Results

See [UCDR-Net gradio](https://huggingface.co/spaces/Baptlem/UCDR-Net)




## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** TPU v4
- **Cloud Provider:** Google Cloud
- **Compute Region:** us-central2-b
- **Carbon Emitted:** [More Information Needed]