nevoit committed
Commit 3f933b6
1 parent: da26d66

Update README.md

Files changed (1)
  1. README.md +23 -23
README.md CHANGED
@@ -35,11 +35,11 @@ We implemented this assignment using mainly Keras and Sklearn.
 
 An example for the ‘Adults’ dataset:
 
- ![](https://huggingface.co/nevoit//Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.001.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.001.png)
 
 An example for the ‘Bank-full’ dataset:
 
- ![](https://huggingface.co/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.002.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.002.png)
 
 **Code Design:**
 
@@ -150,15 +150,15 @@ For adults dataset, the results of the model were:
 - MMDNF (mean minimum Euclidean distance for the not-fooled samples) was 0.422
 - Several samples that “fooled” the detector:
 
- ![](huggingface.co/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.003.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.003.png)
 
 - Several samples that did not “fool” the detector:
 
- ![](huggingface.co/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.004.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.004.png)
 
 - Plotting the PCA shows that the fooled samples are very similar to the real data, while the not-fooled samples are less similar.
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.005.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.005.png)
 
 - Out of 100 samples, 74 fooled the discriminator and 26 did not.
 - A graph describing the loss of the generator and the discriminator:
@@ -166,7 +166,7 @@ For adults dataset, the results of the model were:
 - The generator loss decreased sharply while the discriminator loss stayed roughly constant.
 - Eventually the generator and the discriminator both converged to a loss of roughly 0.6.
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.006.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.006.png)
 
 For bank-full dataset, the results of the model were:
 
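A minimal sketch of how the fooled/not-fooled split and the MMDF/MMDNF statistics reported in these results could be computed with Keras and scikit-learn. The names (`discriminator`, `gen_samples`, `real_samples`), the 0.5 decision threshold, and the use of `pairwise_distances` are assumptions for illustration, not necessarily the repository's actual code:

```python
import numpy as np
from sklearn.metrics import pairwise_distances

def mmd_report(discriminator, generated, real, threshold=0.5):
    """Split generated samples by whether they fooled the discriminator and
    report the mean minimum Euclidean distance of each group to the real data.
    (Illustrative sketch; threshold and interfaces are assumptions.)"""
    # Discriminator confidence that each generated sample is real.
    scores = discriminator.predict(generated, verbose=0).ravel()
    fooled_mask = scores >= threshold  # classified as "real" -> the sample fooled it
    # Minimum Euclidean distance from every generated sample to the real set.
    min_dist = pairwise_distances(generated, real, metric="euclidean").min(axis=1)
    mmdf = min_dist[fooled_mask].mean() if fooled_mask.any() else np.nan
    mmdnf = min_dist[~fooled_mask].mean() if (~fooled_mask).any() else np.nan
    return fooled_mask, mmdf, mmdnf

# Example: 100 generated samples, as in the results above.
# fooled_mask, mmdf, mmdnf = mmd_report(discriminator, gen_samples, real_samples)
```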
@@ -174,15 +174,15 @@ For bank-full dataset, the results of the model were:
 - MMDNF (mean minimum Euclidean distance for the not-fooled samples) was 0.305854238
 - Several samples that “fooled” the detector:
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.007.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.007.png)
 
 - Several samples that did not “fool” the detector:
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.008.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.008.png)
 
 - Plotting the PCA shows that the fooled samples are very similar to the real data, while the not-fooled samples are less similar.
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.009.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.009.png)
 
 - Out of 100 samples, 32 fooled the discriminator and 68 did not.
 - A graph describing the loss of the generator and the discriminator:
@@ -190,7 +190,7 @@ For bank-full dataset, the results of the model were:
 - The generator loss decreased sharply while the discriminator loss stayed roughly constant.
 - Eventually the generator and the discriminator converged to a loss of roughly 0.5.
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.010.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.010.png)
 
 ## General Generator (Part 2)
 
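The PCA comparison of real, fooled, and not-fooled samples shown above for both datasets (figures 005 and 009) could be reproduced with a short scikit-learn/matplotlib sketch along these lines, continuing the placeholder names (`real_samples`, `gen_samples`, `fooled_mask`) from the earlier sketch; the repository's plotting code may differ:

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Fit PCA on the real data and project everything into the same 2-D space.
pca = PCA(n_components=2).fit(real_samples)
real_2d = pca.transform(real_samples)
fooled_2d = pca.transform(gen_samples[fooled_mask])
not_fooled_2d = pca.transform(gen_samples[~fooled_mask])

plt.scatter(real_2d[:, 0], real_2d[:, 1], s=10, alpha=0.3, label="real")
plt.scatter(fooled_2d[:, 0], fooled_2d[:, 1], s=10, label="generated (fooled)")
plt.scatter(not_fooled_2d[:, 0], not_fooled_2d[:, 1], s=10, label="generated (not fooled)")
plt.legend()
plt.title("PCA of real vs. generated samples")
plt.show()
```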
@@ -251,14 +251,14 @@ In this part the main goal was for the distribution of confidence probabilities
 
 - Class distribution:
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.011.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.011.png)
 
 - Note that there is some imbalance here, which is nearly identical to the ratio between the mean confidence scores for each class.
 - Probability distribution for class 0 and class 1, for the **test set**:
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.012.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.012.png)
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.013.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.013.png)
 
 - Note that the images mirror each other.
 
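The mirroring noted in this hunk is expected for a binary classifier: the confidence for class 0 is one minus the confidence for class 1, so the two histograms are reflections of each other. A minimal sketch of how such plots could be produced from a fitted scikit-learn classifier (`clf` and `X_test` are placeholder names, not necessarily the project's variables):

```python
import matplotlib.pyplot as plt

# Predicted class probabilities on the test set; column k is P(class k | x).
proba = clf.predict_proba(X_test)          # shape: (n_samples, 2)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for k, ax in enumerate(axes):
    ax.hist(proba[:, k], bins=20, range=(0, 1))
    ax.set_title(f"Confidence distribution for class {k}")
    ax.set_xlabel("predicted probability")
plt.tight_layout()
plt.show()
# Because proba[:, 0] == 1 - proba[:, 1], the two histograms mirror each other.
```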
@@ -272,15 +272,15 @@ In this part the main goal was for the distribution of confidence probabilities
 
 Class distribution:
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.014.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.014.png)
 
 - The data here is even more imbalanced. The confidence scores reflect this.
 - Confidence score distribution for the test set:
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.015.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.015.png)
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.016.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.016.png)
 
 **Generator Results:**
 
@@ -291,14 +291,14 @@ Here we first uniformly sampled 1000 confidence rates from [0,1]. Then, based on
 - Training loss:
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.017.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.017.png)
 
 - Confidence score distribution for each class:
 - Note that they mirror each other.
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.018.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.018.png)
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.019.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.019.png)
 
 - The results are far from uniform; they are clearly skewed towards the original confidence scores.
 
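A sketch of the sampling step described in the hunk header above (1000 confidence targets drawn uniformly from [0, 1]) and of how they might be fed, together with noise, to a conditional Keras generator and then scored by the black-box classifier. The two-input generator interface, the latent size, and the names `generator` and `blackbox_clf` are assumptions, not the project's confirmed API:

```python
import numpy as np

n_samples, latent_dim = 1000, 100  # latent dimension is an assumption
target_conf = np.random.uniform(0.0, 1.0, size=(n_samples, 1))   # desired confidence
noise = np.random.normal(0.0, 1.0, size=(n_samples, latent_dim))

# Assumed conditional generator: maps (noise, desired confidence) -> synthetic sample.
fake_samples = generator.predict([noise, target_conf], verbose=0)

# Confidence the black-box classifier actually assigns to the generated samples.
obtained_conf = blackbox_clf.predict_proba(fake_samples)[:, 1]
```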
@@ -309,21 +309,21 @@ Here we first uniformly sampled 1000 confidence rates from [0,1]. Then, based on
 
 - Training loss:
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.021.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.021.png)
 
 - Confidence score distribution for each class:
 - As before, they mirror each other.
 - The distribution isn’t uniform, and it is slightly skewed in the opposite direction from the test-set distribution.
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.022.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.022.png)
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.023.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.023.png)
 
 - Error rates for class 1:
 
 - **The lowest error rates were achieved for probabilities of around 0.4**. The highest was for a probability of 0.
 
- ![](https://github.com/nevoit/Generative-Adversarial-Network-/blob/master/figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.024.png)
+ ![](figures/Aspose.Words.36be2542-1776-4b1c-8010-360ae82480ae.024.png)
 
 ## Discussion
 
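One way to read the error-rate result above is as the gap between the requested confidence and the confidence the black-box model actually assigned, binned by the requested value. A sketch under that interpretation, continuing the placeholder names (`target_conf`, `obtained_conf`) from the previous snippet; the project's actual error definition may differ:

```python
import numpy as np

bins = np.linspace(0.0, 1.0, 11)                      # 10 equal-width bins over [0, 1]
bin_idx = np.digitize(target_conf.ravel(), bins) - 1
bin_idx = np.clip(bin_idx, 0, len(bins) - 2)          # fold the value 1.0 into the last bin
abs_err = np.abs(obtained_conf - target_conf.ravel())

# Mean absolute error per requested-confidence bin.
for b in range(len(bins) - 1):
    mask = bin_idx == b
    if mask.any():
        print(f"[{bins[b]:.1f}, {bins[b + 1]:.1f}): {abs_err[mask].mean():.3f}")
```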
 