ICCS/climate
tztsai committed
Commit ec86bf7
1 Parent(s): a682083

Upload models
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2020, 2023 Janni Yuval and Institute of Computing for Climate Science
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md ADDED
@@ -0,0 +1,59 @@
+ # Convection Parameterization in CAM
+
+ Note that this repository and its code are still a work in progress and undergoing significant development.
+ Once a usable release is produced it will be tagged.
+
+ ## Description
+ This repository contains code as part of an effort to deploy machine learning (ML) models of geophysical parameterisations into the [Community Earth System Model (CESM)](https://www.cesm.ucar.edu/).
+ This work is part of the [M<sup>2</sup>LInES](https://m2lines.github.io/) project, which aims to improve the performance of climate models using ML models for subgrid parameterizations.
+
+ A Neural Net providing a subgrid parameterization of atmospheric convection in a [single column model](https://www.arm.gov/publications/proceedings/conf04/extended_abs/randall_da.pdf) has been developed and successfully deployed as part of an atmospheric simulation.
+ The work is described in a [GRL paper](https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2020GL091363) with [accompanying code available](https://github.com/yaniyuval/Neural_nework_parameterization/tree/v.1.0.3). That repository contains the neural net and its implementation in a simple system for atmospheric modelling, [SAM](http://rossby.msrc.sunysb.edu/~marat/SAM.html).
+
+ The aims of this repository are to:
+ 1. develop a standalone Fortran module based on this neural net that can be used elsewhere,
+ 2. deploy the module in another atmospheric model, and
+ 3. evaluate its performance.
+
+ We may also investigate interfacing with the PyTorch implementation of the Neural Net using the [pytorch-fortran bridging code](https://github.com/Cambridge-ICCS/fortran-pytorch-lib) developed at the [Institute of Computing for Climate Science](https://cambridge-iccs.github.io/).
+
+ The model will first be deployed into the [Single Column Atmospheric Model (SCAM)](https://www.cesm.ucar.edu/models/simple/scam), a single column version of the CESM.
+ We plan to evaluate performance using SCAM in the gateIII configuration for tropical convection, in a similar manner to that described in the [SCAM6 publication in JAMES](https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2018MS001578).
+ This will compare model performance to data from an intense observation period (IOP) described in an [AMS publication](https://journals.ametsoc.org/view/journals/atsc/36/1/1520-0469_1979_036_0053_saposs_2_0_co_2.xml).
+
+ Long-term developments of this project will seek to re-deploy more complex ML parameterizations into more complex atmospheric models such as the [Community Atmosphere Model (CAM)](https://www.cesm.ucar.edu/models/cam), part of the CESM.
+
+
+ ## Repository structure
+
+ ```
+ ├── NN_module
+ │   └── ...
+ └── torch_nets
+     └── ...
+ ```
+
+ ### Contents
+
+ ### `NN_module/`
+ This folder contains the Fortran neural net extracted from the [code referenced above](https://github.com/yaniyuval/Neural_nework_parameterization/tree/v.1.0.3), along with any dependencies, so that it may be compiled as a standalone Fortran module.
+
+ Currently the code can be built on CSD3 using the included shell script.
+
+ This now needs cleaning up and testing, and a proper makefile needs to be created (see open issues #9 and #10).
+
+ ### `torch_nets/`
+ This directory contains the PyTorch versions of the neural networks we are interested in.
+
+
+ ## Contributing
+
+ This repository is currently private as it is new and a work in progress.
+ Open tickets can be viewed at ['Issues'](https://github.com/m2lines/convection-parameterization-in-CAM/issues).
+
+ To contribute, find a relevant issue or open a new one and assign yourself to work on it.
+ Then create a branch in which to add your contribution and open a pull request.
+ Once ready, assign a reviewer and request a code review.
+ Merging should _only_ be performed once a code review has taken place.
models.py ADDED
@@ -0,0 +1,196 @@
+ """Neural network architectures."""
+
+ from typing import Optional
+
+ import netCDF4 as nc  # type: ignore
+ import torch
+ from torch import nn, Tensor
+
+
+ class ANN(nn.Sequential):
+     """Model used in the paper.
+
+     Paper: https://doi.org/10.1029/2020GL091363
+
+     Parameters
+     ----------
+     n_in : int
+         Number of input features.
+     n_out : int
+         Number of output features.
+     n_layers : int
+         Number of layers.
+     neurons : int
+         The number of neurons in the hidden layers.
+     dropout : float
+         The dropout probability to apply in the hidden layers.
+     device : str
+         The device to put the model on.
+     features_mean : ndarray
+         The mean of the input features.
+     features_std : ndarray
+         The standard deviation of the input features.
+     outputs_mean : ndarray
+         The mean of the output features.
+     outputs_std : ndarray
+         The standard deviation of the output features.
+     output_groups : ndarray
+         The number of output features in each group of the output.
+
+     Notes
+     -----
+     If you are doing inference, always remember to put the model in eval mode
+     by using ``model.eval()``, so that the dropout layers are turned off.
+
+     """
+
+     def __init__(  # pylint: disable=too-many-arguments,too-many-locals
+         self,
+         n_in: int = 61,
+         n_out: int = 148,
+         n_layers: int = 5,
+         neurons: int = 128,
+         dropout: float = 0.0,
+         device: str = "cpu",
+         features_mean: Optional[Tensor] = None,
+         features_std: Optional[Tensor] = None,
+         outputs_mean: Optional[Tensor] = None,
+         outputs_std: Optional[Tensor] = None,
+         output_groups: Optional[list] = None,
+     ):
+         """Initialize the ANN model."""
+         dims = [n_in] + [neurons] * (n_layers - 1) + [n_out]
+         layers = []
+
+         for i in range(n_layers):
+             layers.append(nn.Linear(dims[i], dims[i + 1]))
+             if i < n_layers - 1:
+                 layers.append(nn.ReLU())  # type: ignore
+                 layers.append(nn.Dropout(dropout))  # type: ignore
+
+         super().__init__(*layers)
+
+         fmean = fstd = omean = ostd = None
+
+         if features_mean is not None:
+             assert features_std is not None
+             assert len(features_mean) == len(features_std)
+             fmean = torch.tensor(features_mean)
+             fstd = torch.tensor(features_std)
+
+         if outputs_mean is not None:
+             assert outputs_std is not None
+             assert len(outputs_mean) == len(outputs_std)
+             if output_groups is None:
+                 omean = torch.tensor(outputs_mean)
+                 ostd = torch.tensor(outputs_std)
+             else:
+                 # Each per-group statistic is repeated for every feature in its group,
+                 # e.g. output_groups=[30, 29, 29, 30, 30] expands to n_out=148 values.
+                 assert len(output_groups) == len(outputs_mean)
+                 omean = torch.tensor(
+                     [x for x, g in zip(outputs_mean, output_groups) for _ in range(g)]
+                 )
+                 ostd = torch.tensor(
+                     [x for x, g in zip(outputs_std, output_groups) for _ in range(g)]
+                 )
+
+         self.register_buffer("features_mean", fmean)
+         self.register_buffer("features_std", fstd)
+         self.register_buffer("outputs_mean", omean)
+         self.register_buffer("outputs_std", ostd)
+
+         self.to(torch.device(device))
+
+     def forward(self, input: Tensor):  # pylint: disable=redefined-builtin
+         """Pass the input through the model.
+
+         Override the forward method of nn.Sequential to add normalization
+         to the input and denormalization to the output.
+
+         Parameters
+         ----------
+         input : Tensor
+             A mini-batch of inputs.
+
+         Returns
+         -------
+         Tensor
+             The model output.
+
+         """
+         if self.features_mean is not None:
+             input = (input - self.features_mean) / self.features_std
+
+         # pass the input through the layers using nn.Sequential.forward
+         output = super().forward(input)
+
+         if self.outputs_mean is not None:
+             output = output * self.outputs_std + self.outputs_mean
+
+         return output
+
+     def load(self, path: str) -> "ANN":
+         """Load the model from a checkpoint.
+
+         Parameters
+         ----------
+         path : str
+             The path to the checkpoint.
+
+         """
+         state = torch.load(path)
+         for key in ["features_mean", "features_std", "outputs_mean", "outputs_std"]:
+             if key in state and getattr(self, key) is None:
+                 setattr(self, key, state[key])
+         self.load_state_dict(state)
+         return self
+
+     def save(self, path: str):
+         """Save the model to a checkpoint.
+
+         Parameters
+         ----------
+         path : str
+             The path to save the checkpoint to.
+
+         """
+         torch.save(self.state_dict(), path)
+
+
+ def load_from_netcdf_params(nc_file: str, dtype: str = "float32") -> ANN:
+     """Load the model with weights and biases from the netCDF file.
+
+     Parameters
+     ----------
+     nc_file : str
+         The netCDF file containing the parameters.
+     dtype : str
+         The data type to cast the parameters to.
+
+     """
+     data_set = nc.Dataset(nc_file)  # pylint: disable=no-member
+
+     model = ANN(
+         features_mean=data_set["fscale_mean"][:].astype(dtype),
+         features_std=data_set["fscale_stnd"][:].astype(dtype),
+         outputs_mean=data_set["oscale_mean"][:].astype(dtype),
+         outputs_std=data_set["oscale_stnd"][:].astype(dtype),
+         output_groups=[30, 29, 29, 30, 30],
+     )
+
+     for i, layer in enumerate(l for l in model.modules() if isinstance(l, nn.Linear)):
+         layer.weight.data = torch.tensor(data_set[f"w{i+1}"][:].astype(dtype))
+         layer.bias.data = torch.tensor(data_set[f"b{i+1}"][:].astype(dtype))
+
+     return model
+
+
+ if __name__ == "__main__":
+     # Load the model from the netCDF file and save it to a checkpoint.
+     net = load_from_netcdf_params(
+         "qobsTTFFFFFTF30FFTFTF30TTFTFTFFF80FFTFTTF2699FFFF_X01_no_qp_no_adv_"
+         "surf_F_Tin_qin_disteq_O_Trad_rest_Tadv_qadv_qout_qsed_RESCALED_7epochs"
+         "_no_drop_REAL_NN_layers5in61out148_BN_F_te70.nc"
+     )
+     net.save("nn_state.pt")
+     print("Model saved to nn_state.pt")
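The following minimal sketch (not part of the committed files) illustrates how the checkpoint written by the `__main__` block above might be loaded for inference; the all-ones input is only a placeholder for the 61 column features:

```python
import torch

from models import ANN

# Rebuild the default architecture (61 inputs, 148 outputs, 5 linear layers with
# 128 hidden neurons) and load the weights and normalization statistics from nn_state.pt.
model = ANN().load("nn_state.pt")
model.eval()  # turn dropout off for inference, as advised in the class docstring

# Placeholder input: in practice this would be a profile of the 61 input features.
column_features = torch.ones(61)

with torch.no_grad():
    # forward() normalizes the input and denormalizes the output automatically.
    tendencies = model(column_features)

print(tendencies.shape)  # expected: torch.Size([148])
```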
nn_state.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:552b224668a40820e9afd0e4f83053dbc9f4ee7c75814ad32ed2805041afb1e1
+ size 312574
qobsTTFFFFFTF30FFTFTF30TTFTFTFFF80FFTFTTF2699FFFF_X01_no_qp_no_adv_surf_F_Tin_qin_disteq_O_Trad_rest_Tadv_qadv_qout_qsed_RESCALED_7epochs_no_drop_REAL_NN_layers5in61out148_BN_F_te70.nc ADDED
Binary file (308 kB).
 
requirements.txt ADDED
@@ -0,0 +1,7 @@
+ torch
+ black
+ pytest
+ pydocstyle
+ pylint
+ mypy
+ netcdf4
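Of these, `torch` and `netcdf4` are the runtime dependencies used by `models.py`, while `black`, `pytest`, `pydocstyle`, `pylint`, and `mypy` are development tools for formatting, testing, docstring style, linting, and type checking. Note that `numpy`, imported by the smoke test below, is not listed explicitly and is assumed to be available (it is typically installed alongside `torch`).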
test_python_net.py ADDED
@@ -0,0 +1,40 @@
+ """A smoke test for the ANN model.
+
+ This test checks that the model can be loaded from a weights file in both pt format and
+ netCDF format and that both produce the expected output when given an input of all ones.
+ This ensures that it is equivalent to the Fortran NN model.
+ """
+
+ import os
+ from pathlib import Path
+ import torch
+ import numpy as np
+ from models import ANN, load_from_netcdf_params
+
+
+ os.chdir(Path(__file__).parent)
+
+ expected = np.loadtxt("nn_ones.txt").astype(np.float32)
+ # nn_ones.txt is the output of the Fortran NN model given an input of all ones.
+
+ model1 = ANN().load("nn_state.pt")  # load from the pytorch weights
+ model2 = load_from_netcdf_params(
+     "qobsTTFFFFFTF30FFTFTF30TTFTFTFFF80FFTFTTF2699FFFF_X01_no_qp_no_adv_"
+     "surf_F_Tin_qin_disteq_O_Trad_rest_Tadv_qadv_qout_qsed_RESCALED_7epochs"
+     "_no_drop_REAL_NN_layers5in61out148_BN_F_te70.nc"
+ )  # load from the NetCDF weights of the pretrained Fortran NN model
+ # The file was created at https://github.com/yaniyuval/Neural_nework_parameterization/blob/f81f5f695297888f0bd1e0e61524590b4566bf03/NN_training/src/ml_train_nn.py#L417  # pylint: disable=line-too-long
+ # (the naming scheme encodes information about the training setup; see e.g. https://github.com/yaniyuval/Neural_nework_parameterization/blob/f81f5f695297888f0bd1e0e61524590b4566bf03/NN_training/src/ml_train_nn.py#L263-L265)  # pylint: disable=line-too-long
+ # This neural net can be found at https://github.com/yaniyuval/Neural_nework_parameterization/tree/f81f5f695297888f0bd1e0e61524590b4566bf03/NNs  # pylint: disable=line-too-long
+
+
+ x = torch.ones(61)
+
+ actual1 = model1.forward(x).detach().numpy()
+ actual2 = model2.forward(x).detach().numpy()
+
+ assert np.all(actual1 == actual2)
+ assert np.allclose(expected, actual1, atol=3e-8, rtol=2e-6)
+ # Values of atol and rtol are chosen to be the lowest that still pass the test.
+
+ print("Smoke tests passed")
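Since the checks run at module level, the smoke test can be executed directly with `python test_python_net.py`; it assumes that `nn_state.pt`, the `.nc` weights file, and an `nn_ones.txt` reference output (not among the files added in this commit) sit alongside the script.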