# Training Graph Neural Networks for Mesh-based Physics Simulations

## Overview

This repository lets you train Graph Neural Networks on meshes (e.g. fluid dynamics or material simulations).
It builds on the work of several papers:
- [Learning Mesh-Based Simulation with Graph Networks](https://arxiv.org/abs/2010.03409)
- [Multi-Grid Graph Neural Networks with Self-Attention for Computational Mechanics](https://arxiv.org/pdf/2409.11899)
- [MeshMask: Physics Simulations with Masked Graph Neural Networks](https://arxiv.org/pdf/2501.08738)
- [Training Transformers to Simulate Complex Physics](https://arxiv.org/abs/2508.18051)

We offer a simple training script to:
- Set up your model's architecture
- Define your dataset with different augmentation functions
- Follow the training live, including live visualizations

The code is based on PyTorch; a JAX extension might follow at some point.

At the moment, the repository supports the following:
- architectures:
  * [x] Mesh Graph Net
  * [x] Transformers
  * [ ] Multigrid
- datasets:
  * [x] matrix based, using .h5
  * [x] .xdmf based (if you have .vtu, .vtk, etc., you can easily convert them to .xdmf)
- training methods and augmentations:
  * [x] K-hop neighbours
  * [x] Node masking
  * [x] Augmented adjacency matrix
  * [ ] Sub-meshes

Feel free to open a PR if you want to implement a new feature, or an issue to request one.

## Datasets

You can access the 3D Coarse Aneurysm dataset here!

### Tutorials

We offer two Google Colab notebooks to showcase training on:
- flow past a cylinder, with message passing
  - [Colab](https://colab.research.google.com/drive/1DVOLrfPPLsjrsC8oq1KaDTIMHxHl1rgH?usp=sharing)
  - the dataset is from [Learning Mesh-Based Simulation with Graph Networks](https://arxiv.org/abs/2010.03409)
- blood flow inside a 3D aneurysm, with Transformers
  - [Colab](https://colab.research.google.com/drive/1csjUx72GPcHzaaBC9z2b7wuxHVrAVsbO?usp=sharing)
  - the dataset is from [AnXplore: a comprehensive fluid-structure interaction study of 101 intracranial aneurysms](https://www.frontiersin.org/journals/bioengineering-and-biotechnology/articles/10.3389/fbioe.2024.1433811/full?field&journalName=Frontiers_in_Bioengineering_and_Biotechnology&id=1433811)

## Visualizations

We use [Weights and Biases](https://wandb.ai/site) to log most information during training. This includes:
- training and validation loss
  - per step
  - per epoch
- all-rollout RMSE on the validation dataset

We also save:
- images of the ground truth and the 1-step prediction for specific indices
  - `LogPyVistaPredictionsCallback(dataset=val_dataset, indices=[1, 2, 3])` in [train.py](https://github.com/DonsetPG/graph-physics/blob/main/graphphysics/train.py)
- a video of the ground truth and the autoregressive prediction between the first and the last index of the same `indices` list as above
- meshes of the autoregressive prediction, saved as an `.xdmf` file for the first trajectory of the validation dataset

> [!WARNING]
> If saving those meshes takes too much space, you can either 1. monitor the disk usage using Weights and Biases, or 2. remove this functionality in [lightning_module.py](https://github.com/DonsetPG/graph-physics/blob/0c9b6af20a25e7d08f2731efdfe4911f34fbc274/graphphysics/training/lightning_module.py#L154) (see the code below)

https://github.com/DonsetPG/graph-physics/blob/6687b0bafabdd575d2ace6c0e7c39796e1f1624c/graphphysics/training/lightning_module.py#L151-L165

## Setup

### Default requirements

```python
import torch

def format_pytorch_version(version):
    return version.split('+')[0]

TORCH_version = torch.__version__
TORCH = format_pytorch_version(TORCH_version)

def format_cuda_version(version):
    return 'cu' + version.replace('.', '')

CUDA_version = torch.version.cuda
CUDA = format_cuda_version(CUDA_version)
```

```
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
pip install torch-geometric

pip install loguru==0.7.2
pip install autoflake==2.3.0
pip install pytest==8.0.1
pip install meshio==5.3.5
pip install h5py==3.10.0

pip install pyvista lightning==2.5.0 wandb "wandb[media]"
pip install pytorch-lightning==2.5.0 torchmetrics==1.6.3
```

### DGL

You will need to install DGL. You can find information on how to set it up for your environment [here](https://www.dgl.ai/pages/start.html).

In the case of a Google Colab, you can use:
```
pip install dgl -f https://data.dgl.ai/wheels/torch-2.4/cu124/repo.html
```

### WandB

We use Weights and Biases to log most of our metrics and visualizations during training. Make sure you create an account and log in before you start training.

```python
import wandb
wandb.login()
```

### Visualization in Colab

> [!WARNING]
> Note that if you train inside a notebook, you will need a specific setup for PyVista to work.

```
apt-get install -qq xvfb
pip install pyvista panel -q
```

and run

```python
import os
os.system('/usr/bin/Xvfb :99 -screen 0 1024x768x24 &')
os.environ['DISPLAY'] = ':99'

import panel as pn
pn.extension('vtk')
```

in the same session as your training.

## Documentation

Setting up a new use case mostly depends on two `.json` files: one to define the dataset details, and one for the training settings.

Let's start with the training settings. An example is available [here](https://github.com/DonsetPG/graph-physics/blob/main/training_config/cylinder.json).

### Dataset

```json
"dataset": {
    "extension": "h5",
    "h5_path": "dataset/h5_dataset/cylinder_flow/train.h5",
    "meta_path": "dataset/h5_dataset/cylinder_flow/meta.json",
    "khop": 1
}
```

- `extension`: Whether the dataset is h5 or xdmf based.
- `h5_path` (`xdmf_folder` for an xdmf dataset): Path to the dataset.

> [!NOTE]
> For the validation step to work, you will need a dataset at the same location with `test` instead of `train` in its name. Otherwise, you can specify its name directly in `training.py`.

- `meta_path`: Path to the `.json` file with the dataset details (see below).
- `khop`: Size of the k-hop neighbourhood to use; with `khop=k`, nodes at most k edges apart in the original mesh become connected (a rough sketch of this augmentation is shown below). You should start with 1.
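
For intuition, here is a rough, hypothetical sketch of how such a k-hop augmentation can be built with PyTorch Geometric's sparse utilities. The repository has its own implementation; `k_hop_edges` below is illustrative only:

```python
import torch
from torch_geometric.utils import to_torch_coo_tensor, to_edge_index, remove_self_loops

def k_hop_edges(edge_index: torch.Tensor, num_nodes: int, k: int) -> torch.Tensor:
    # Sparse adjacency matrix of the original mesh.
    adj = to_torch_coo_tensor(edge_index, size=(num_nodes, num_nodes))
    # Accumulate reachability up to k hops via sparse matrix powers.
    out = adj
    for _ in range(k - 1):
        out = torch.sparse.mm(out, adj) + out
    k_hop_index, _ = to_edge_index(out.coalesce())
    # Matrix powers introduce self-loops; drop them.
    k_hop_index, _ = remove_self_loops(k_hop_index)
    return k_hop_index
```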

You also need to define a few other parameters:

```json
"index": {
    "feature_index_start": 0,
    "feature_index_end": 2,
    "output_index_start": 0,
    "output_index_end": 2,
    "node_type_index": 2
}
```

- `feature_index_`: Defines where we should look for node features; the end index is excluded. For example, suppose you have 2D velocities at indices 0 and 1, and the pressure at index 2: if you want to use the pressure, set `feature_index_start=0` and `feature_index_end=3`; otherwise, set `feature_index_end=2`.

- `output_index_`: Our architectures predict one of your features at the next time step, so you need to tell us where to look. For example, if you want to predict the velocity at the next step, and the velocity is at indices 0 and 1, set `output_index_start=0` and `output_index_end=2`.

- `node_type_index`: Finally, we use a node type classification for each node:

```python
NORMAL = 0
OBSTACLE = 1
AIRFOIL = 2
HANDLE = 3
INFLOW = 4
OUTFLOW = 5
WALL_BOUNDARY = 6
SIZE = 9
```

> [!WARNING]
> You should modify this if it is not at all representative of your use case. These types are taken from [Meshgraphnet](https://github.com/google-deepmind/deepmind-research/tree/master/meshgraphnets), and we found them to be general enough for all of our use cases.

This means that you either need to have such a feature in your dataset, or to define a Python function to build it (see below). After that, you need to tell us where to look. For example, if we only have the velocity and the node type, we will have `node_type_index=2`. If we also had the pressure, we would set `node_type_index=3`.
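
Putting the three index settings together, here is a minimal, hypothetical sketch (not the repository's actual code) of how they translate into tensor slices:

```python
import torch
from torch_geometric.data import Data

# Hypothetical 2D layout: velocity at indices 0-1, node type at index 2.
graph = Data(x=torch.randn(100, 3), pos=torch.rand(100, 2))

feature_index_start, feature_index_end = 0, 2
output_index_start, output_index_end = 0, 2
node_type_index = 2

inputs = graph.x[:, feature_index_start:feature_index_end]  # model inputs (end excluded)
targets = graph.x[:, output_index_start:output_index_end]   # quantity predicted at the next step
node_type = graph.x[:, node_type_index]                     # node type classification
```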

> [!WARNING]
> The H5-based dataloader does not support multiple workers; the XDMF-based one does.

### Custom Processing Functions

First, we allow you to add noise to your inputs to make the prediction of a trajectory more robust.

```json
"preprocessing": {
    "noise": 0.02,
    "noise_index_start": [0],
    "noise_index_end": [2],
    "masking": 0
},
```

> [!WARNING]
> Masking is not implemented yet.

```python
def add_noise(
    graph: Data,
    noise_index_start: Union[int, List[int]],
    noise_index_end: Union[int, List[int]],
    noise_scale: Union[float, List[float]],
    node_type_index: int,
) -> Data:
    """
    Adds Gaussian noise to the specified features of the graph's nodes.

    Parameters:
        graph (Data): The graph to modify.
        noise_index_start (Union[int, List[int]]): The starting index or indices for noise addition.
        noise_index_end (Union[int, List[int]]): The ending index or indices for noise addition.
        noise_scale (Union[float, List[float]]): The standard deviation(s) of the Gaussian noise.
        node_type_index (int): The index of the node type feature.

    Returns:
        Data: The modified graph with noise added to node features.
    """
```
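
If you want to prototype a similar transform yourself, a minimal sketch could look as follows. It assumes a single index range, and assumes noise is only applied to `NORMAL` nodes; the repository's actual implementation may differ:

```python
import torch
from torch_geometric.data import Data

NORMAL = 0

def add_noise_sketch(
    graph: Data,
    noise_index_start: int,
    noise_index_end: int,
    noise_scale: float,
    node_type_index: int,
) -> Data:
    # Draw Gaussian noise for the selected feature slice.
    noise = torch.randn(
        graph.x.shape[0], noise_index_end - noise_index_start
    ) * noise_scale
    # Only perturb NORMAL nodes, leaving boundary nodes untouched.
    mask = (graph.x[:, node_type_index] == NORMAL).unsqueeze(1)
    graph.x[:, noise_index_start:noise_index_end] += noise * mask
    return graph
```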

Second, when dealing with multiple meshes, you can add extra edges based on the proximity of those meshes:

```json
"world_pos_parameters": {
    "use": false,
    "world_pos_index_start": 0,
    "world_pos_index_end": 3
}
```

See the [description](https://arxiv.org/abs/2010.03409) of world edges.
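
The idea is that world edges connect nodes that are close in world space even if they are far apart on the mesh. A hypothetical sketch using `radius_graph` (the radius value is illustrative, and a real implementation would also filter out pairs already connected by mesh edges):

```python
import torch
from torch_geometric.data import Data
from torch_geometric.nn import radius_graph

def add_world_edges(
    graph: Data,
    world_pos_index_start: int = 0,
    world_pos_index_end: int = 3,
    radius: float = 0.03,
) -> Data:
    # World positions are stored as node features in the given slice.
    world_pos = graph.x[:, world_pos_index_start:world_pos_index_end]
    # Connect nodes that are close in world space, regardless of mesh topology.
    world_edges = radius_graph(world_pos, r=radius)
    graph.edge_index = torch.cat((graph.edge_index, world_edges), dim=1)
    return graph
```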

Finally, you may find yourself in a case where:
- you need to build the node type
- you need to build extra features that were not in your dataset

Both are handled through the `extra_node_features` argument. In `train.py`:

```python
# Build preprocessing function
preprocessing = get_preprocessing(
    param=parameters,
    device=device,
    use_edge_feature=use_edge_feature,
    extra_node_features=None,
)
```

where:

```python
extra_node_features: Optional[
    Union[Callable[[Data], Data], List[Callable[[Data], Data]]]
] = None
```

You can define one or several functions that take a graph as input and return another graph with the new features.

> [!NOTE]
> In the case where you also need the previous graph (to compute an acceleration, for example), you can pass `get_previous_data` to the `get_dataset` function, and you will be able to access it through the `previous_data` attribute: `graph.previous_data`.
> You can check [build_features](https://github.com/DonsetPG/graph-physics/blob/main/graphphysics/external/aneurysm.py), where we use `previous_velocity = torch.tensor(graph.previous_data["Vitesse"], device=device)`; a similar builder is sketched below.
> Note that if you do so, those previous data also need to be updated autoregressively during the validation steps. To do so, we added two parameters in `train.py`: `previous_data_start` and `previous_data_end`. By default, they are set to 4 and 7. This works if, for example, you set the acceleration (computed using the previous velocity) at indices 4, 5, and 6.
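
As an illustration, a feature builder using `previous_data` might look like the following sketch. It assumes a 3D velocity stored at indices 0-2 and a `"Vitesse"` field in the previous data, as in the aneurysm example; `add_acceleration` itself is hypothetical:

```python
import torch
from torch_geometric.data import Data

def add_acceleration(graph: Data) -> Data:
    # Velocity at the current step, assumed to be stored at indices 0-2.
    velocity = graph.x[:, 0:3]
    # Velocity at the previous step, read from the stored previous data.
    previous_velocity = torch.tensor(
        graph.previous_data["Vitesse"], device=graph.x.device
    )
    # Finite-difference acceleration (up to a constant 1/dt factor),
    # appended after the existing features, e.g. at indices 4-6 if
    # graph.x currently has 4 columns.
    acceleration = velocity - previous_velocity
    graph.x = torch.cat((graph.x, acceleration), dim=1)
    return graph
```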

For example, let's imagine we want to add the node positions as a feature. One could define the following function:

```python
def add_pos(graph: Data) -> Data:
    graph.x = torch.cat(
        (
            graph.pos,
            graph.x,
        ),
        dim=1,
    )
    return graph
```

<details>
<summary>In that case, the settings would need to be updated.</summary>

```json
"index": {
    "feature_index_start": 0,
    "feature_index_end": 4,
    "output_index_start": 2,
    "output_index_end": 4,
    "node_type_index": 4
}
```

</details>
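
Similarly, if your dataset lacks a node type, you can build one from the geometry. A purely illustrative sketch for a channel-flow-like case (the inlet/outlet rule here is hypothetical):

```python
import torch
from torch_geometric.data import Data

NORMAL, INFLOW, OUTFLOW = 0, 4, 5

def build_node_type(graph: Data) -> Data:
    # Hypothetical rule: inlet at the smallest x coordinate, outlet at the largest.
    x = graph.pos[:, 0]
    node_type = torch.full((graph.num_nodes, 1), float(NORMAL))
    node_type[x == x.min()] = INFLOW
    node_type[x == x.max()] = OUTFLOW
    # Append as the last column, so `node_type_index` should point at it.
    graph.x = torch.cat((graph.x, node_type), dim=1)
    return graph
```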

You can find more examples of adding features and building the node type [here](https://github.com/DonsetPG/graph-physics/tree/main/graphphysics/external).

We then simply pass the function `add_pos` to `get_preprocessing`:

```python
# Build preprocessing function
preprocessing = get_preprocessing(
    param=parameters,
    device=device,
    use_edge_feature=use_edge_feature,
    extra_node_features=add_pos,
)
```
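
Since `extra_node_features` also accepts a list of callables, several builders can be chained, for example combining `add_pos` with the hypothetical `add_acceleration` sketched earlier:

```python
preprocessing = get_preprocessing(
    param=parameters,
    device=device,
    use_edge_feature=use_edge_feature,
    extra_node_features=[add_pos, add_acceleration],
)
```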

### Architecture

```json
"model": {
    "type": "transformer",
    "message_passing_num": 5,
    "hidden_size": 32,
    "node_input_size": 2,
    "output_size": 2,
    "edge_input_size": 0,
    "num_heads": 4
}
```

- `type`: Type of the model, either `transformer` or `epd` (message passing)
- `message_passing_num`: Number of layers
- `hidden_size`: Number of hidden neurons
- `node_input_size`: Number of node features

> [!WARNING]
> This should not count the node type feature.

- `edge_input_size`: Size of the edge features: 3 in 2D and 4 in 3D; 0 for transformer-based models (see the sketch below)
- `output_size`: Size of the output
- `num_heads`: Number of heads for transformer-based models
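
The 3-in-2D / 4-in-3D rule corresponds to the usual mesh edge features: the relative positions of the two endpoints plus their norm. A hedged sketch of this standard construction (not necessarily the repository's exact code):

```python
import torch
from torch_geometric.data import Data

def build_edge_features(graph: Data) -> Data:
    # Relative positions along each edge: 2 values in 2D, 3 in 3D.
    senders, receivers = graph.edge_index
    rel_pos = graph.pos[senders] - graph.pos[receivers]
    # Append the edge length, giving 3 features in 2D and 4 in 3D.
    edge_norm = rel_pos.norm(dim=1, keepdim=True)
    graph.edge_attr = torch.cat((rel_pos, edge_norm), dim=1)
    return graph
```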

### Dataset Settings

You will also need to write a `.json` file defining the dataset details. These `meta.json` files are inspired by [Meshgraphnet](https://github.com/google-deepmind/deepmind-research/tree/master/meshgraphnets).

You will need to define:

- `dt`: the time step of your simulation
- `features`: the set of features used, including at least `cells` and `mesh_pos` for the .h5 dataset
- `field_names`: the list of all features
- `trajectory_length`: the number of time steps per trajectory

Examples can be found [here](https://github.com/DonsetPG/graph-physics/tree/main/dataset_config).
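
For orientation only, a `meta.json` in the MeshGraphNets style might look roughly like this; the shapes and values are placeholders, so refer to the linked examples for the exact schema this repository expects:

```json
{
  "dt": 0.01,
  "trajectory_length": 600,
  "features": {
    "cells": {"type": "static", "shape": [1, 3518, 3], "dtype": "int32"},
    "mesh_pos": {"type": "static", "shape": [1, 1876, 2], "dtype": "float32"},
    "node_type": {"type": "static", "shape": [1, 1876, 1], "dtype": "int32"},
    "velocity": {"type": "dynamic", "shape": [600, 1876, 2], "dtype": "float32"}
  },
  "field_names": ["velocity"]
}
```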