pereza committed
Commit f0b1f9f (1 parent: a33e081)

Update README.md

Files changed (1): README.md (+4 -4)

README.md CHANGED
@@ -230,20 +230,20 @@ For those inclined towards the intricacies of the model, the specific parameters

  ### Loss function

- The Swin2 transformer optimizes its parameters using a composite loss function that aggregates multiple \( \mathcal{L}_1 \) loss terms to enhance its predictive
+ The Swin2 transformer optimizes its parameters using a composite loss function that aggregates multiple L1 loss terms to enhance its predictive
  accuracy across different resolutions and representations:

  1. **Primary Predictions Loss**:
- - This term computes the \( \mathcal{L}_1 \) loss between the primary model predictions and the reference values. It ensures that the transformer's outputs
+ - This term computes the L1 loss between the primary model predictions and the reference values. It ensures that the transformer's outputs
  - closely match the ground truth across the primary spatial resolution.

  2. **Downsampled Predictions Loss**:
- - Recognizing the importance of accuracy across varying resolutions, this term calculates the \( \mathcal{L}_1 \) loss between the downsampled versions of the
+ - Recognizing the importance of accuracy across varying resolutions, this term calculates the L1 loss between the downsampled versions of the
  - predictions and the reference values. By incorporating this term, the model is incentivized to preserve critical information even when the data is represented
  - at a coarser scale.

  3. **Blurred Predictions Loss**:
- - To ensure the model's robustness against small perturbations and noise, this term evaluates the \( \mathcal{L}_1 \) loss between blurred versions of the
+ - To ensure the model's robustness against small perturbations and noise, this term evaluates the L1 loss between blurred versions of the
  - predictions and the references. This encourages the model to produce predictions that maintain accuracy even under slight modifications in the data representation.

  By combining these loss terms, the Swin2 transformer is trained to produce accurate predictions across different resolutions and under various data transformations,
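
For readers who want a concrete picture of the composite loss this section describes, here is a minimal PyTorch-style sketch. It is not the repository's training code: the function name `composite_l1_loss`, the downsampling factor, the average-pooling blur, and the weights `w_down` and `w_blur` are assumptions introduced for illustration. Roughly, the total loss is L1(pred, ref) + w_down · L1(down(pred), down(ref)) + w_blur · L1(blur(pred), blur(ref)).

```python
# Hedged sketch of a composite L1 loss with primary, downsampled, and blurred
# terms, as described in the README. All weights and operators are assumptions.
import torch
import torch.nn.functional as F


def composite_l1_loss(pred, target, w_down=1.0, w_blur=1.0):
    """pred, target: (N, C, H, W) tensors, e.g. predicted and reference maps."""
    # 1. Primary predictions loss: L1 between predictions and references.
    loss_primary = F.l1_loss(pred, target)

    # 2. Downsampled predictions loss: L1 at a coarser spatial scale
    #    (bilinear downsampling by a factor of 2 is an illustrative choice).
    pred_down = F.interpolate(pred, scale_factor=0.5, mode="bilinear", align_corners=False)
    target_down = F.interpolate(target, scale_factor=0.5, mode="bilinear", align_corners=False)
    loss_down = F.l1_loss(pred_down, target_down)

    # 3. Blurred predictions loss: L1 between smoothed versions
    #    (a 3x3 average-pooling blur stands in for whatever blur the model uses).
    pred_blur = F.avg_pool2d(pred, kernel_size=3, stride=1, padding=1)
    target_blur = F.avg_pool2d(target, kernel_size=3, stride=1, padding=1)
    loss_blur = F.l1_loss(pred_blur, target_blur)

    return loss_primary + w_down * loss_down + w_blur * loss_blur


if __name__ == "__main__":
    pred = torch.rand(2, 1, 64, 64)    # dummy predicted maps (N, C, H, W)
    target = torch.rand(2, 1, 64, 64)  # dummy reference maps
    print(composite_l1_loss(pred, target).item())
```

The returned scalar can be backpropagated directly; keeping `w_down` and `w_blur` at or below 1.0 leaves the full-resolution term dominant while the auxiliary terms encourage consistency at coarser scales and under smoothing, in the spirit of the description above.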