robot-bengali-2 committed on
Commit
2068f16
1 Parent(s): e9bffe1

Update links on the main page

Files changed (2)
  1. app.py +15 -10
  2. static/tabs.html +2 -8
app.py CHANGED
@@ -17,10 +17,14 @@ st.markdown("## Full demo content will be posted here on December 7th!")
 make_header()
 
 content_text(f"""
-There was a time when you could comfortably train SoTA vision and language models at home on your workstation.
-The first ConvNet to beat ImageNet took in 5-6 days on two gamer-grade GPUs{cite("alexnet")}. Today's top-1 imagenet model
-took 20,000 TPU-v3 days{cite("coatnet")}. And things are even worse in the NLP world: training GPT-3 on a top-tier server
-with 8 A100 would still take decades{cite("gpt-3")}.""")
+There was a time when you could comfortably train state-of-the-art vision and language models at home on your workstation.
+The first convolutional neural net to beat ImageNet
+(<a target="_blank" href="https://proceedings.neurips.cc/paper/2012/file/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf">AlexNet</a>)
+was trained for 5-6 days on two gamer-grade GPUs. Today's TOP-1 ImageNet model
+(<a target="_blank" href="https://arxiv.org/abs/2106.04803">CoAtNet</a>)
+takes 20,000 TPU-v3 days. And things are even worse in the NLP world: training
+<a target="_blank" href="https://arxiv.org/abs/2005.14165">GPT-3</a> on a top-tier server
+with 8x A100 would take decades.""")
 
 content_text(f"""
 So, can individual researchers and small labs still train state-of-the-art? Yes we can!
@@ -30,11 +34,12 @@ All it takes is for a bunch of us to come together. In fact, we're doing it righ
 draw_current_progress()
 
 content_text(f"""
-The model we're training is called DALLE: a transformer "language model" that generates images from text description.
-We're training this model on <a target="_blank" rel="noopener noreferrer" href=https://laion.ai/laion-400-open-dataset/>LAION</a> - the world's largest openly available
-image-text-pair dataset with 400 million samples. Our model is based on
-<a target="_blank" rel="noopener noreferrer" href=https://github.com/lucidrains/DALLE-pytorch>dalle-pytorch</a>
-with several tweaks for memory-efficient training.""")
+We're training a model similar to <a target="_blank" href="https://openai.com/blog/dall-e/">OpenAI DALL-E</a>,
+that is, a transformer "language model" that generates images from text description.
+It is trained on <a target="_blank" href=https://laion.ai/laion-400-open-dataset/>LAION-400M</a>,
+the world's largest openly available image-text-pair dataset with 400 million samples. Our model is based on
+the <a target="_blank" href=https://github.com/lucidrains/DALLE-pytorch>dalle&#8209;pytorch</a> implementation
+by <a target="_blank" href="https://github.com/lucidrains">Phil Wang</a> with several tweaks for memory-efficient training.""")
 
 
 content_title("How do I join?")
@@ -50,7 +55,7 @@ That's easy. First, make sure you're logged in at Hugging Face. If you don't hav
 <li style="margin-top: 4px;">
 You can find other starter kits, evaluation and inference notebooks <b>TODO IN OUR ORGANIZATION</b>;</li>
 <li style="margin-top: 4px;">
-If you have any issues, <b>TODO DISCORD BADGE</b> </li>
+If you have any issues, <b>TODO DISCORD BADGE</b> </li>
 </ul>
 
 Please note that we currently limit the number of colab participants to <b>TODO</b> to make sure we do not interfere
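
(Editorial aside, not part of the commit.) The new copy above asserts that training GPT-3 on a single server with 8x A100 would take decades. A rough back-of-envelope check of that claim, assuming GPT-3's commonly cited ~3.14e23 total training FLOPs and the A100's 312 TFLOP/s dense BF16 peak; both figures are assumptions added here, not taken from this page:

```python
# Rough sanity check of the "decades on 8x A100" claim (editorial, not from the commit).
# Assumed figures: ~3.14e23 training FLOPs for GPT-3 175B and 312 TFLOP/s peak per A100.
total_flops = 3.14e23
flops_per_gpu = 312e12        # A100 dense BF16 tensor-core peak, FLOP/s
n_gpus = 8
utilization = 0.30            # optimistic sustained fraction of peak in real training

seconds = total_flops / (flops_per_gpu * n_gpus * utilization)
print(f"{seconds / (365 * 24 * 3600):.1f} years")   # ≈ 13 years; ~4 years even at 100% utilization
```

At perfect utilization the estimate drops to roughly four years, so the "decades" wording holds in order of magnitude only under realistic sustained utilization.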
static/tabs.html CHANGED
@@ -94,10 +94,7 @@ a:visited {
 the moderators remove them from the list and revert the model to the latest checkpoint unaffected by the attack.
 </p>
 
-<details>
-<summary>Spoiler: How to implement authentication in a decentralized system efficiently?</summary>
-TODO
-</details>
+<p><b>Spoiler: How to implement authentication in a decentralized system efficiently?</b></p>
 
 <p>
 Nice bonus: using this data, the moderators can acknowledge the personal contribution of each participant.
@@ -109,10 +106,7 @@ a:visited {
 suggested such a technique (named CenteredClip) and proved that it does not significantly affect the model's convergence.
 </p>
 
-<details>
-<summary>How does CenteredClip protect from outliers? (Interactive Demo)</summary>
-TODO
-</details>
+<p><b>Spoiler: How does CenteredClip protect from outliers? (Interactive Demo)</b></p>
 
 <p>
 In our case, CenteredClip is useful but not enough to protect from malicious participants,
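
(Editorial aside, not part of the commit.) The CenteredClip technique referenced in the second hunk is only named on the page, with its mechanics left to the interactive demo. A minimal NumPy sketch of the underlying idea, following Karimireddy et al., 2021 ("Learning from History for Byzantine Robust Optimization"); function and parameter names here are illustrative, not the project's actual API:

```python
import numpy as np

def centered_clip(updates, tau=1.0, n_iters=20):
    """Robust aggregation sketch: iteratively clip peer updates around a running center.

    Follows the CenteredClip rule of Karimireddy et al. (2021); names and defaults
    are illustrative, not the project's real interface.
    """
    center = updates.mean(axis=0)                       # start from the plain average
    for _ in range(n_iters):
        deltas = updates - center                       # each peer's offset from the center
        norms = np.linalg.norm(deltas, axis=1, keepdims=True)
        scale = np.minimum(1.0, tau / np.maximum(norms, 1e-12))
        center = center + (deltas * scale).mean(axis=0) # each peer moves the center by at most tau/n
    return center

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(15, 4))             # well-behaved gradient updates
attacker = np.full((1, 4), 100.0)                       # one malicious, huge update
updates = np.vstack([honest, attacker])

print("plain mean:  ", updates.mean(axis=0))            # dominated by the attacker (≈6.2 per coordinate)
print("CenteredClip:", centered_clip(updates, tau=0.5)) # stays on the scale of the honest updates
```

Because every peer's clipped offset has norm at most tau, a single malicious peer can shift the aggregate by at most tau/n per iteration, whereas it dominates the plain mean; in the original formulation the center is warm-started from the previous round's aggregate, so only a few clipping steps are needed.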