katielink committed
Commit f8cfccc
1 Parent(s): 1622e76

Update for 1.0

README.md CHANGED
@@ -31,11 +31,9 @@ If you use the MedNIST dataset, please acknowledge the source.
  Assuming the current directory is the bundle directory, and the dataset was extracted to the directory `./MedNIST`, the following command will train the network for 50 epochs:

  ```
- PYTHONPATH=./scripts python -m monai.bundle run training --meta_file configs/metadata.json --config_file configs/train.json --logging_file configs/logging.conf --bundle_root .
+ python -m monai.bundle run training --meta_file configs/metadata.json --config_file configs/train.json --logging_file configs/logging.conf --bundle_root .
  ```

- Note that the training code relies on extra scripts in the `scripts` directory which are made accessible by changing the `PYTHONPATH` variable in this invocation. If your `PYTHONPATH` is already used for other things you will have to add `./scripts` to the variable rather than replace it.
-
  Note also that the output from the training will be placed in the `models` directory but will not overwrite the `model.pt` file that may already be there. You will have to manually rename the most recent checkpoint file to `model.pt`, after checking that the results are correct, to use the inference script mentioned below. This saved checkpoint contains a dictionary with the generator weights stored as `model` and omits the discriminator.

  Another feature of the training file is the addition of a sigmoid activation to the network by modifying its structure at runtime. This is done with a line in the `training` section that calls `add_module` on a layer of the network. This works well for training, although the definition of the model no longer strictly matches the one in the `generator` section.
@@ -66,6 +64,6 @@ The model can be loaded without MONAI code after this operation. For example, an
  ```python
  import torch
  net = torch.jit.load("mednist_gan.ts")
- latent = torch.rand(1,64)
+ latent = torch.rand(1, 64)
  img = net(latent) # (1,1,64,64)
  ```
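
The runtime modification described in the README is just `torch.nn.Module.add_module`. Below is a minimal, self-contained sketch of the pattern; the `Sequential` network is a hypothetical stand-in for the bundle's generator, not its actual definition from `configs/train.json`.

```python
import torch

# Hypothetical stand-in for the bundle's generator; the real network is built
# from the "generator" section of configs/train.json.
net = torch.nn.Sequential(
    torch.nn.Linear(64, 64 * 64),
    torch.nn.Unflatten(1, (1, 64, 64)),
)

# The runtime change the README describes: append a sigmoid after construction.
net.add_module("activation", torch.nn.Sigmoid())

latent = torch.rand(1, 64)
img = net(latent)  # outputs are now squashed into (0, 1) by the appended sigmoid
```
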
configs/inference.json CHANGED
@@ -52,6 +52,11 @@
  "keys": "pred",
  "sigmoid": true
  },
+ {
+ "_target_": "ToTensord",
+ "keys": "pred",
+ "track_meta": false
+ },
  {
  "_target_": "SaveImaged",
  "keys": "pred",
configs/metadata.json CHANGED
@@ -1,11 +1,12 @@
  {
  "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_generator_20220718.json",
- "version": "0.2.0",
+ "version": "0.3.0",
  "changelog": {
- "0.2.0": "unify naming",
+ "0.3.0": "Update for 1.0",
+ "0.2.0": "Unify naming",
  "0.1.0": "Initial version"
  },
- "monai_version": "0.9.0",
+ "monai_version": "1.0.0",
  "pytorch_version": "1.10.0",
  "numpy_version": "1.21.0",
  "optional_packages_version": {
configs/train.json CHANGED
@@ -1,9 +1,8 @@
  {
  "imports": [
- "$from functools import partial",
+ "$import functools",
  "$import glob",
- "$from losses import discriminator_loss",
- "$from losses import generator_loss"
+ "$import scripts"
  ],
  "bundle_root": ".",
  "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
@@ -66,7 +65,7 @@
  "keys": "reals"
  },
  {
- "_target_": "AddChanneld",
+ "_target_": "EnsureChannelFirstd",
  "keys": "reals"
  },
  {
@@ -148,10 +147,10 @@
  "train_data_loader": "@real_dataloader",
  "g_network": "@gnetwork",
  "g_optimizer": "@goptimizer",
- "g_loss_function": "$partial(generator_loss, disc_net=@dnetwork)",
+ "g_loss_function": "$functools.partial(scripts.losses.generator_loss, disc_net=@dnetwork)",
  "d_network": "@dnetwork",
  "d_optimizer": "@doptimizer",
- "d_loss_function": "$partial(discriminator_loss, disc_net=@dnetwork)",
+ "d_loss_function": "$functools.partial(scripts.losses.discriminator_loss, disc_net=@dnetwork)",
  "d_train_steps": 5,
  "g_update_latents": true,
  "latent_shape": "@latent_size",
docs/README.md CHANGED
@@ -24,11 +24,9 @@ If you use the MedNIST dataset, please acknowledge the source.
  Assuming the current directory is the bundle directory, and the dataset was extracted to the directory `./MedNIST`, the following command will train the network for 50 epochs:

  ```
- PYTHONPATH=./scripts python -m monai.bundle run training --meta_file configs/metadata.json --config_file configs/train.json --logging_file configs/logging.conf --bundle_root .
+ python -m monai.bundle run training --meta_file configs/metadata.json --config_file configs/train.json --logging_file configs/logging.conf --bundle_root .
  ```

- Note that the training code relies on extra scripts in the `scripts` directory which are made accessible by changing the `PYTHONPATH` variable in this invocation. If your `PYTHONPATH` is already used for other things you will have to add `./scripts` to the variable rather than replace it.
-
  Note also that the output from the training will be placed in the `models` directory but will not overwrite the `model.pt` file that may already be there. You will have to manually rename the most recent checkpoint file to `model.pt`, after checking that the results are correct, to use the inference script mentioned below. This saved checkpoint contains a dictionary with the generator weights stored as `model` and omits the discriminator.

  Another feature of the training file is the addition of a sigmoid activation to the network by modifying its structure at runtime. This is done with a line in the `training` section that calls `add_module` on a layer of the network. This works well for training, although the definition of the model no longer strictly matches the one in the `generator` section.
@@ -59,6 +57,6 @@ The model can be loaded without MONAI code after this operation. For example, an
  ```python
  import torch
  net = torch.jit.load("mednist_gan.ts")
- latent = torch.rand(1,64)
+ latent = torch.rand(1, 64)
  img = net(latent) # (1,1,64,64)
  ```
models/model.pt CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5ee4b4a812063936e95a5e39e628884b0160ab5be4897d99ede3dc7746c4308a
- size 1272514
+ oid sha256:5302e6adf714d568bfc6d5238fc28c03cc99f4d623e3e1c9c5b1a6a56960d395
+ size 1272066
scripts/__init__.py ADDED
@@ -0,0 +1 @@
+ from . import losses