robot-bengali-2 committed
Commit
3404b7a
1 Parent(s): 663f544

Add draft for "Security" tab

Files changed (2)
  1. st_helpers.py +1 -1
  2. static/tabs.html +65 -4
st_helpers.py CHANGED
@@ -30,7 +30,7 @@ def make_header():
 
 
 def make_tabs():
-    components.html(f"{tabs_html}", height=400)
+    components.html(f"{tabs_html}", height=400, scrolling=True)
 
 
 def make_footer():
static/tabs.html CHANGED
@@ -49,7 +49,7 @@ a:visited {
   <!-- Nav tabs -->
   <ul class="nav nav-tabs" role="tablist">
     <li role="presentation" class="active"><a href="#tab1" aria-controls="tab1" role="tab" data-toggle="tab">"Efficient Training"</a></li>
-    <li role="presentation"><a href="#tab2" aria-controls="tab2" role="tab" data-toggle="tab">Security &amp; Privacy</a></li>
+    <li role="presentation"><a href="#tab2" aria-controls="tab2" role="tab" data-toggle="tab">Security</a></li>
     <li role="presentation"><a href="#tab3" aria-controls="tab3" role="tab" data-toggle="tab">Make Your Own (TBU)</a></li>
   </ul>
 
@@ -61,9 +61,70 @@ a:visited {
     </span>
   </div>
   <div role="tabpanel" class="tab-pane" id="tab2">
-    <span class="padded faded text">
-      <b>TODO 12</b> Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
-    </span>
+    <p>
+      <b>Q: If I join a collaborative training, do I allow other people to execute code on my computer?</b>
+    </p>
+
+    <p>
+      <b>A:</b> During the training, participants only exchange data (gradients, statistics, model weights) and never send code to each other.
+      No other peer can execute code on your computer.
+    </p>
+
+    <p>
+      To join the training, you typically need to run the code (implementing the model, data streaming, training loop, etc.)
+      from a repository or a Colab notebook provided by the authors of the experiment.
+      This is no different from running any other open source project/Colab notebook.
+    </p>
+
+    <p>
+      <b>Q: Can a malicious participant influence the training outcome?</b>
+    </p>
+
+    <p>
+      <b>A:</b> It is indeed possible unless we use some defense mechanism.
+      For instance, a malicious participant can damage model weights by sending large numbers instead of the correct gradients.
+      The same can happen due to broken hardware or misconfiguration.
+    </p>
+
+    <p>
+      One possible defense is using <b>authentication</b> combined with <b>model checkpointing</b>.
+      In this case, participants should log in (e.g. with their Hugging Face account) to interact with the rest of the collaboration.
+      In turn, moderators can screen potential participants and add them to an allowlist.
+      If something goes wrong (e.g. if a participant sends invalid gradients and the model diverges),
+      the moderators remove them from the list and revert the model to the latest checkpoint unaffected by the attack.
+    </p>
+
+    <details>
+      <summary>Spoiler: How to implement authentication in a decentralized system efficiently?</summary>
+      TODO
+    </details>
+
+    <p>
+      Nice bonus: using this data, the moderators can acknowledge the personal contribution of each participant.
+    </p>
+
+    <p>
+      Another defense is replacing the naive averaging of the peers' gradients with an <b>aggregation technique robust to outliers</b>.
+      <a href="https://arxiv.org/abs/2012.10333">Karimireddy et al. (2020)</a>
+      suggested such a technique (named CenteredClip) and proved that it does not significantly affect the model's convergence.
+    </p>
+
+    <details>
+      <summary>How does CenteredClip protect from outliers? (Interactive Demo)</summary>
+      TODO
+    </details>
+
+    <p>
+      In our case, CenteredClip is useful but not enough to protect from malicious participants,
+      since it implies that the CenteredClip procedure itself is performed by a trusted server.
+      In contrast, in our decentralized system, all participants can aggregate a part of the gradients and we cannot assume all of them to be trusted.
+    </p>
+
+    <p>
+      Recently, <a href="https://arxiv.org/abs/2106.11257">Gorbunov et al. (2021)</a>
+      proposed a robust aggregation protocol for decentralized systems that does not require this assumption.
+      This protocol uses CenteredClip as a subroutine but is able to detect and ban participants who performed it incorrectly.
+    </p>
   </div>
   <div role="tabpanel" class="tab-pane" id="tab3">
     <span class="padded faded text">
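The CenteredClip aggregation mentioned in the new Security tab can be sketched in a few lines. This is an illustrative NumPy demo, not the project's actual implementation: the clipping radius `tau`, the iteration count, and the median initialization are assumptions chosen to make the effect visible, while the paper's variant iterates from the previous aggregate.

```python
import numpy as np

def centered_clip(gradients, tau=1.0, n_iters=20):
    """Robust aggregation of peer gradients in the spirit of CenteredClip
    (Karimireddy et al., 2020): each peer's deviation from the current
    estimate is clipped to L2 norm at most tau, so a single malicious
    peer cannot drag the aggregate arbitrarily far."""
    # Demo assumption: start from the coordinate-wise median for a robust
    # initial estimate (the paper instead warm-starts from the previous round).
    v = np.median(gradients, axis=0)
    for _ in range(n_iters):
        deltas = gradients - v
        norms = np.linalg.norm(deltas, axis=1, keepdims=True)
        # Scale each deviation down so its norm is at most tau.
        scale = np.minimum(1.0, tau / np.maximum(norms, 1e-12))
        v = v + np.mean(deltas * scale, axis=0)
    return v

# Nine honest peers send gradients near [1, 1]; one attacker sends huge values.
honest = np.ones((9, 2))
attacker = np.full((1, 2), 1e6)
grads = np.vstack([honest, attacker])

naive = grads.mean(axis=0)            # ruined by the single attacker
robust = centered_clip(grads, tau=1.0)  # stays close to the honest [1, 1]
```

The attacker's contribution to each update is bounded by `tau / n`, which is why the robust estimate stays near the honest gradients while the naive mean is pulled to ~100000.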