---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
datasets:
- Norod78/Vintage-Faces-FFHQAligned
---
# English Version
## Example Fine-Tuned Model for Learning Diffusion Models
My first fine-tuned model, and the first one I trained through a Python script.

Base model: google/ddpm-celebahq-256
Fine-tuning dataset: Norod78/Vintage-Faces-FFHQAligned
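The training script itself is not included in this repo. As a rough illustration, a minimal fine-tuning loop with Diffusers could look like the sketch below; the preprocessing, batch size, learning rate, and the dataset column name `image` are assumptions for illustration, not the exact values used.

```python
import torch
import torch.nn.functional as F
from datasets import load_dataset
from diffusers import DDPMPipeline
from torchvision import transforms

# Load the pretrained pipeline; its UNet is the model being fine-tuned.
pipeline = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")
model, scheduler = pipeline.unet, pipeline.scheduler

# Assumed preprocessing: 256x256 images scaled to [-1, 1].
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

dataset = load_dataset("Norod78/Vintage-Faces-FFHQAligned", split="train")

def to_tensors(examples):
    # "image" is an assumed column name for the face images.
    return {"pixel_values": [preprocess(img.convert("RGB")) for img in examples["image"]]}

dataset.set_transform(to_tensors)
loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for epoch in range(4):  # "4epochs" in the model name
    for batch in loader:
        clean = batch["pixel_values"].to(device)
        noise = torch.randn_like(clean)
        steps = torch.randint(0, scheduler.config.num_train_timesteps,
                              (clean.shape[0],), device=device)
        noisy = scheduler.add_noise(clean, noise, steps)
        # Standard DDPM objective: predict the added noise, minimise its MSE.
        loss = F.mse_loss(model(noisy, steps).sample, noise)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

pipeline.save_pretrained("my-finetuned-model-celebahq-on-VintageFaces-4epochs")
```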
> The loss is not very stable, and the outputs are not very satisfactory.
> But since this is a learning exercise, one can't ask for too much.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6462500d2538819c729dc355/QF-nZ18Q251cAiA4Ksi9O.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6462500d2538819c729dc355/gkqTtkcIwConyJJaQU4zi.png)
### Usage
```python
from diffusers import DDPMPipeline

# Load the fine-tuned pipeline from the Hugging Face Hub.
pipeline = DDPMPipeline.from_pretrained('Chilli-b/my-finetuned-model-celebahq-on-VintageFaces-4epochs')

# Sample a single image; the bare expression displays it in a notebook.
image = pipeline().images[0]
image
```
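Sampling on CPU is slow for a 256×256 DDPM; if a GPU is available you can move the pipeline onto it and save the result (the filename is just an example):

```python
pipeline.to("cuda")           # optional: sample on GPU instead of CPU
image = pipeline().images[0]  # returns a PIL image
image.save("vintage_face.png")
```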
# Chinese Version
## A Fine-Tuned Model Trained While Learning Diffusion Models
This is my first fine-tuned model, and also the first one I trained through a Python script.

Base model: google/ddpm-celebahq-256
Fine-tuning dataset: Norod78/Vintage-Faces-FFHQAligned
> The loss is not very stable, and the outputs are not very satisfactory.
> But since this is a learning exercise, one can't ask for too much.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6462500d2538819c729dc355/QF-nZ18Q251cAiA4Ksi9O.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6462500d2538819c729dc355/gkqTtkcIwConyJJaQU4zi.png)
### Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Chilli-b/my-finetuned-model-celebahq-on-VintageFaces-4epochs')
image = pipeline().images[0]
image
```