---
license: apache-2.0
task_categories:
- text-to-image
language:
- en
tags:
- image generation
- novelai
- nai
- text2image
- prompt
- images
- stable-diffusion
- stable-diffusion-xl
- art
viewer: true
pretty_name: novelai3 images
size_categories:
- n<1K
---

## Novelai3 Images
The Novelai3 text-to-image distillation dataset contains over 30 GB of anime-style (text, image) pairs. It is intended solely for educational and research purposes and must not be used for any illicit activities.

### Production Method
The dataset was created through automated browser operations: a script repeatedly clicked the "generate image" button and saved the resulting images. Over the course of a month, approximately 38 GB of (image, text instruction) pairs were collected.  
The data has not been manually filtered; we plan to curate it when time and energy permit, and will separately release a reduced, filtered version as novelai3-filtered.

### Use & Citation
Feel free to use this dataset to train an open-source open-novelai3 image generation model on top of existing anime models, but please cite this dataset when you do (the data collection took considerable time and manpower 💦💦).
We hope this contributes to the further development of open-source artificial intelligence!

### Some Training Suggestions
0. It is not recommended to train on the entire dataset all at once.
1. Adjust the proportions and repetition frequencies of the dataset's subcategories to match the style you want the model to learn (this can be done by simply adding or deleting data).
2. Check whether the current prompts are what you want; if not, write Python scripts to perform batch replacements (you can even use models such as GPT4-V, Qwen-VL, BLIP2, or Deepbooru to re-caption and replace the current tags as needed). A sketch covering points 1 and 2 follows this list.
3. For the categories you care most about, manually review the image content and delete poorly generated samples before training (this acts as a human preference selection and will improve the final quality of the model).
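
As a concrete starting point for suggestions 1 and 2, here is a minimal Python sketch that rebalances categories by repetition and applies batch tag replacements to the prompts. It assumes a folder-per-category layout with one `.txt` prompt file next to each image, which may not match the actual archive structure; the `repeats` and `tag_map` values are placeholders to adapt to your own run.

```python
from pathlib import Path
import shutil

# Assumed layout (verify against the actual archive): one folder per category,
# each image accompanied by a same-named .txt prompt file.
SRC = Path("novelai3")
DST = Path("novelai3_train")

# Hypothetical rebalancing (suggestion 1): how many times each category is
# repeated in the training copy. Unlisted categories are copied once; 0 drops one.
repeats = {"landscape": 2, "chibi": 0}

# Hypothetical tag substitutions applied to every prompt (suggestion 2).
tag_map = {"masterpiece, best quality, ": "", "1girl": "1woman"}

def rewrite_prompt(text: str) -> str:
    for old, new in tag_map.items():
        text = text.replace(old, new)
    return text

for category in sorted(p for p in SRC.iterdir() if p.is_dir()):
    for i in range(repeats.get(category.name, 1)):
        out_dir = DST / f"{category.name}_{i}"
        out_dir.mkdir(parents=True, exist_ok=True)
        for img in category.glob("*.png"):
            prompt_file = img.with_suffix(".txt")
            if not prompt_file.exists():
                continue  # skip pairs with a missing prompt
            shutil.copy(img, out_dir / img.name)
            (out_dir / prompt_file.name).write_text(
                rewrite_prompt(prompt_file.read_text(encoding="utf-8")),
                encoding="utf-8",
            )
```

The same loop is also a convenient place to plug in a captioning model (e.g. BLIP2 or Deepbooru) in place of `rewrite_prompt` if you prefer to regenerate tags rather than string-replace them.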

## Download Method

Aistudio  
https://aistudio.baidu.com/datasetdetail/257868  

huggingface  
https://huggingface.co/datasets/shareAI/novelai3  
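
For the Hugging Face mirror, a minimal download sketch using the standard `huggingface_hub` client (the `local_dir` path is just an example):

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository to a local folder.
snapshot_download(
    repo_id="shareAI/novelai3",
    repo_type="dataset",
    local_dir="./novelai3",
)
```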


Tip: If you find some categories you're interested in are not currently included, feel free to make suggestions and we will consider expanding the dataset.
