surajpaib committed
Commit 6963196 · verified · 1 Parent(s): 6c1dbef

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +16 -7
README.md CHANGED
@@ -37,6 +37,11 @@ Install requirements and import necessary packages



+ ```python
+
+ ```
+
+
```python
# Imports
import torch
@@ -83,6 +88,9 @@ preprocess = Compose([
])
```

+ monai.transforms.croppad.array CropForeground.__init__:allow_smaller: Current default value of argument `allow_smaller=True` has been deprecated since version 1.2. It will be changed to `allow_smaller=False` in version 1.5.
+
+
## Run Inference
Process an input CT scan and extract features

@@ -116,14 +124,15 @@ print(output.shape)
torch.Size([2, 227, 181, 258])


- ## Fine-tuning instructions
-
- The above model does not have a trained decoder which means the predictions you will get are nonsensical.
-
- You can however use the pre-trained encoder and the model architecture to finetune on your own datasets - especially if they are small sized.
-
- A very simple way to fit this into your pipelines is to take the model loaded above using model = SegResNet.from_pretrained('project-lighter/ct_fm_segresnet') and replace the model in your training pipeline with this.
-
- Using Auto3DSeg with our model is the recommended approach, follow the instructions here: https://project-lighter.github.io/CT-FM/replication-guide/downstream/#tumor-segmentation-with-auto3dseg
+ ## Fine-tuning Instructions
+
+ The model above does not include a trained decoder, which means the predictions you receive will be nonsensical.
+
+ However, you can leverage the pre-trained encoder and model architecture to fine-tune on your own datasets—especially if they are small. A simple way to integrate this into your pipeline is to replace the model in your training process with the pre-trained version. For example:
+ ```python
+ model = SegResNet.from_pretrained('project-lighter/ct_fm_segresnet')
+ ```
+ We recommend using Auto3DSeg in conjunction with our model. For detailed guidance, please refer to the instructions here:
+ https://project-lighter.github.io/CT-FM/replication-guide/downstream/#tumor-segmentation-with-auto3dseg


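The line added in the second hunk is a MONAI deprecation notice about `CropForeground`: its `allow_smaller` default of `True` is deprecated since 1.2 and will become `False` in 1.5. A minimal sketch of pinning that argument explicitly so the warning no longer fires and the crop behavior stays stable across MONAI versions; the surrounding transforms are placeholders, not the README's exact pipeline:

```python
# Hypothetical sketch: pass allow_smaller explicitly to CropForeground so the
# deprecation warning quoted in the diff does not fire, and cropping keeps the
# current behavior when MONAI 1.5 flips the default to False.
from monai.transforms import Compose, CropForeground

preprocess = Compose([
    # ...the README's other preprocessing transforms go here...
    CropForeground(allow_smaller=True),  # explicit value; True is the pre-1.5 default
])
```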
 
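The new fine-tuning section amounts to swapping the pre-trained encoder into an existing training pipeline. A hedged sketch of what that swap can look like in a plain PyTorch loop; `train_loader`, the loss, and the optimizer settings are placeholders, and `SegResNet` should be imported the same way the README's (truncated) Imports block does:

```python
# Illustrative only: fine-tune the pre-trained CT-FM SegResNet in a standard
# PyTorch loop. DiceLoss, the AdamW settings, and train_loader are placeholders
# for whatever the downstream segmentation pipeline already uses.
import torch
from monai.losses import DiceLoss

# SegResNet comes from the README's Imports section (truncated in this diff).
model = SegResNet.from_pretrained('project-lighter/ct_fm_segresnet')
model.train()

criterion = DiceLoss(sigmoid=True)                # placeholder loss for binary masks
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for images, labels in train_loader:               # placeholder dataloader of CT volumes and masks
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

For anything beyond a quick experiment, the Auto3DSeg route linked in the diff remains the recommended path.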