HUANGYIFEI committed on
Commit
cf1cd5b
1 Parent(s): 9301fb5

Update README.md

Files changed (1)
  1. README.md +88 -43
README.md CHANGED
@@ -1,43 +1,88 @@
- # Graph Mask AutoEncoder (GraphMAE) on the QM9 Dataset
- ## Overview
- We pretrain a **Graph Mask AutoEncoder** on the **QM9 dataset**. Each atom's input feature (dim = 7) combines its position with an embedding of its element type, and the model reconstructs this feature with a GraphSAGE encoder/decoder that uses a 4-dimensional hidden representation.
-
- **Total Epochs: 10**
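Below is a minimal sketch of how such a 7-dimensional input feature could be assembled; the 3-dim position plus 4-dim learnable element embedding split is an assumption for illustration, not taken from the repository's code.

```python
import torch
import torch.nn as nn

# QM9 contains only these element types (see the dataset description below).
ELEMENTS = ["H", "C", "N", "O", "F"]

class AtomFeaturizer(nn.Module):
    """Hypothetical builder for the 7-dim atom feature: 3-dim position + 4-dim type embedding."""

    def __init__(self, embed_dim: int = 4):
        super().__init__()
        self.type_embed = nn.Embedding(len(ELEMENTS), embed_dim)

    def forward(self, pos: torch.Tensor, elem_idx: torch.Tensor) -> torch.Tensor:
        # pos: (num_atoms, 3) coordinates; elem_idx: (num_atoms,) indices into ELEMENTS
        return torch.cat([pos, self.type_embed(elem_idx)], dim=-1)  # (num_atoms, 7)
```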
- ## How to run
- ### If you do not want to re-train the model
- - **Unzip `model.zip`** to get the model weights and the embedded graph for each epoch
- ### If you want to try out the training process
- - Step 1. **Preprocess the dataset** (a preprocessed copy is also provided)
-
- ```bash
- python prepare_QM9_dataset.py --label_keys "mu" "gap"
- ```
- - Step 2. **Train the Graph Mask AutoEncoder on the preprocessed dataset**
- ```bash
- python run.py [--dataset_path] [--batch_size] [--epochs] [--device] [--save_dir]
- ```
-
- ## Model Description
- ### Overview
- Ref: **[GraphMAE](https://arxiv.org/abs/2205.10803)**
- >Self-supervised learning (SSL) has been extensively explored in recent years. Particularly, generative SSL has seen emerging success in natural language processing and other AI fields, such as the wide adoption of BERT and GPT. Despite this, contrastive learning-which heavily relies on structural data augmentation and complicated training strategies-has been the dominant approach in graph SSL, while the progress of generative SSL on graphs, especially graph autoencoders (GAEs), has thus far not reached the potential as promised in other fields. In this paper, we identify and examine the issues that negatively impact the development of GAEs, including their reconstruction objective, training robustness, and error metric. We present a masked graph autoencoder GraphMAE that mitigates these issues for generative self-supervised graph pretraining. Instead of reconstructing graph structures, we propose to focus on feature reconstruction with both a masking strategy and scaled cosine error that benefit the robust training of GraphMAE. We conduct extensive experiments on 21 public datasets for three different graph learning tasks. The results manifest that GraphMAE-a simple graph autoencoder with careful designs-can consistently generate outperformance over both contrastive and generative state-of-the-art baselines. This study provides an understanding of graph autoencoders and demonstrates the potential of generative self-supervised pre-training on graphs.
-
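The abstract above points to two design choices: masking node features and reconstructing them under a scaled cosine error (SCE). A minimal sketch of the SCE loss as described in the GraphMAE paper follows; the `gamma` default is illustrative.

```python
import torch
import torch.nn.functional as F

def scaled_cosine_error(x: torch.Tensor, x_hat: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Scaled cosine error from GraphMAE: mean over nodes of (1 - cos(x, x_hat))^gamma.

    gamma > 1 down-weights features that are already reconstructed well.
    """
    cos = F.cosine_similarity(x, x_hat, dim=-1)   # per-node cosine similarity
    return ((1.0 - cos) ** gamma).mean()
```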
- ### Detail
- Encoder & Decoder: Two-layer [GraphSage](https://docs.dgl.ai/generated/dgl.nn.pytorch.conv.SAGEConv.html)
-
- Readout Method: Mean
-
- HiddenDims: 4 (Default)
-
- MaskRate: 0.3 (Default)
-
- Trained on an RTX 4060
-
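As a rough illustration of the configuration listed above, here is a minimal sketch of a masked feature autoencoder built from two-layer SAGEConv encoder and decoder stacks in DGL, with mean readout for graph embeddings. The class name, mask-token handling, and training details are assumptions for illustration, not the repository's actual implementation.

```python
import torch
import torch.nn as nn
import dgl
from dgl.nn import SAGEConv

class MaskedGraphAutoEncoder(nn.Module):
    """Sketch of a GraphMAE-style feature autoencoder (assumed structure)."""

    def __init__(self, in_dim: int = 7, hidden_dim: int = 4, mask_rate: float = 0.3):
        super().__init__()
        self.mask_rate = mask_rate
        self.mask_token = nn.Parameter(torch.zeros(1, in_dim))  # learnable [MASK] feature
        self.enc1 = SAGEConv(in_dim, hidden_dim, aggregator_type="mean")
        self.enc2 = SAGEConv(hidden_dim, hidden_dim, aggregator_type="mean")
        self.dec1 = SAGEConv(hidden_dim, hidden_dim, aggregator_type="mean")
        self.dec2 = SAGEConv(hidden_dim, in_dim, aggregator_type="mean")

    def forward(self, g: dgl.DGLGraph, feat: torch.Tensor):
        # Mask a random fraction (default 30%) of the nodes and replace their features.
        num_mask = int(self.mask_rate * feat.shape[0])
        mask_idx = torch.randperm(feat.shape[0])[:num_mask]
        x = feat.clone()
        x[mask_idx] = self.mask_token

        # Two-layer GraphSAGE encoder -> 4-dim hidden node representations.
        h = self.enc2(g, torch.relu(self.enc1(g, x)))

        # Two-layer GraphSAGE decoder reconstructs the 7-dim input features.
        recon = self.dec2(g, torch.relu(self.dec1(g, h)))

        # The reconstruction loss (e.g. the scaled cosine error above) is applied on mask_idx.
        return recon, mask_idx, h

    def graph_embedding(self, g: dgl.DGLGraph, feat: torch.Tensor) -> torch.Tensor:
        # Mean readout over the hidden node representations.
        h = self.enc2(g, torch.relu(self.enc1(g, feat)))
        with g.local_scope():
            g.ndata["h"] = h
            return dgl.mean_nodes(g, "h")
```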
- ## Dataset Description
- ### Overview
- Ref: **[QM9](https://docs.dgl.ai/generated/dgl.data.QM9Dataset.html)**
- > Type: Molecule property prediction
- >
- > Sample_num: 130831
- >
- > Total Elements: H,C,N,O,F
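For reference, a minimal sketch of loading QM9 through DGL with the two label keys used in the preprocessing step above (this does not reproduce whatever additional preprocessing prepare_QM9_dataset.py performs):

```python
from dgl.data import QM9Dataset

# Load QM9 with the dipole moment ("mu") and HOMO-LUMO gap ("gap") labels,
# matching the --label_keys passed to prepare_QM9_dataset.py above.
dataset = QM9Dataset(label_keys=["mu", "gap"])
print(len(dataset))        # 130831 molecular graphs
g, labels = dataset[0]     # a DGLGraph and its 2-dim label tensor
print(g.ndata["R"].shape)  # atom positions, shape (num_atoms, 3)
print(g.ndata["Z"].shape)  # atomic numbers, shape (num_atoms,)
```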
+ ---
+ license: mit
+ ---
+ # Model Training Process Collection (continuously updated)
+
+ For each collected model, the `code` directory contains the code and scripts for model definition, training, and testing; the `model` directory contains the collected per-epoch model files; and `dataset.zip` is the model's dataset.
+
+ The table below summarizes the training process information for all collected models:
+
+ <table>
+ <tr>
+ <th>Model Name</th>
+ <th>Model Description</th>
+ <th>Model Type</th>
+ <th>Number of Epochs</th>
+ <th>Dataset</th>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/code-philia/ttvnet/tree/main/Code-Code/Clone-detection-BigCloneBench" target="_blank">Clone-detection-BigCloneBench</a></td>
+ <td>Code clone detection model based on the large-scale BigCloneBench benchmark; the task is binary classification (0/1), where 1 means semantically equivalent and 0 means otherwise.</td>
+ <td>Code clone detection</td>
+ <td>2 epochs</td>
+ <td>BigCloneBench dataset</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/code-philia/ttvnet/tree/main/Code-Code/Clone-detection-POJ-104" target="_blank">Clone-detection-POJ-104</a></td>
+ <td>Code clone detection model based on the POJ-104 dataset; given a piece of code and a set of candidates, the task is to return the Top-K codes with the same semantics, identifying similar implementations of different programming problems.</td>
+ <td>Code clone detection</td>
+ <td>2 epochs (0-1)</td>
+ <td>POJ-104 programming problems dataset</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/code-philia/ttvnet/tree/main/Code-Code/CodeCompletion-token" target="_blank">CodeCompletion-token</a></td>
+ <td>Token-level code completion model.</td>
+ <td>Code completion</td>
+ <td>5 epochs (Java corpus)</td>
+ <td>Java code token sequence dataset</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/code-philia/ttvnet/tree/main/Code-Code/Defect-detection" target="_blank">Defect-detection</a></td>
+ <td>Code defect detection model that analyzes code to identify potential defects and bugs (binary classification, 0/1).</td>
+ <td>Code defect detection</td>
+ <td>5 epochs</td>
+ <td>C code dataset with defect annotations</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/code-philia/ttvnet/tree/main/Code-Code/code-refinement" target="_blank">code-refinement</a></td>
+ <td>Code refinement model.</td>
+ <td>Code refinement/refactoring</td>
+ <td>34 epochs (small dataset)</td>
+ <td>Dataset of code pairs before and after refinement (C)</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/code-philia/ttvnet/tree/main/Code-Text/code-to-text" target="_blank">code-to-text</a></td>
+ <td>Code-to-natural-language model.</td>
+ <td>Code comment generation</td>
+ <td>10 epochs per language (supports Python/Java/JavaScript/PHP/Ruby/Go)</td>
+ <td>Multilingual code-text pair dataset</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/code-philia/ttvnet/tree/main/Text-code/NL-code-search-Adv" target="_blank">NL-code-search-Adv</a></td>
+ <td>Advanced natural language code search model that retrieves code by computing the similarity between a natural language query and code snippets.</td>
+ <td>Code search</td>
+ <td>2 epochs</td>
+ <td>Natural language-code (Python) pair dataset</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/code-philia/ttvnet/tree/main/Text-code/NL-code-search-WebQuery" target="_blank">NL-code-search-WebQuery</a></td>
+ <td>Web-query-based code search model that encodes the code and natural language inputs and uses a multi-layer perceptron (MLP) to compute a similarity score.</td>
+ <td>Code search</td>
+ <td>3 epochs on each of two datasets</td>
+ <td>Web query-code pair datasets (CodeSearchNet and CoSQA, Python)</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/code-philia/ttvnet/tree/main/Text-code/text-to-code" target="_blank">text-to-code</a></td>
+ <td>Natural-language-to-code generation model.</td>
+ <td>Code generation</td>
+ <td>23 epochs</td>
+ <td>Text description-code (C) pair dataset</td>
+ </tr>
+ <tr>
+ <td><a href="https://huggingface.co/datasets/code-philia/ttvnet/tree/main/Graph" target="_blank">Graph</a></td>
+ <td>Graph masked autoencoder trained on the QM9 dataset; self-supervised training is performed by predicting the coordinates and element types of atoms in molecular graphs.</td>
+ <td>Graph autoencoder</td>
+ <td>10 epochs</td>
+ <td>Molecular property prediction dataset</td>
+ </tr>
+ </table>