ZennyKenny committed
Commit 10ccf92
1 Parent(s): ea7a4cb

Update arxiv.org link


As it was, [https://arxiv.org/abs/2402.14905] (no whitespace around the URL) caused the URL-encoded closing square bracket (%5D) to be appended to the arXiv URL and broke it. Just added whitespace so that the link works.

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -5,7 +5,7 @@
 
  By Fernando, Eric and David
 
- This is a hack around pytorch + huggingface Transformers library to make the original Dolphin Phi-2 to behave in a way inspired by the Meta's paper "MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases" [https://arxiv.org/abs/2402.14905]
+ This is a hack around pytorch + huggingface Transformers library to make the original Dolphin Phi-2 to behave in a way inspired by the Meta's paper "MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases" [ https://arxiv.org/abs/2402.14905 ]
 
  One of the key ideas is that it works as if it was like "an online passthrough", by applying a loop on a module SuperClass, that groups layers, in a such way they get their forward method repeated in a loop.
  So, in theory, you can observe more intelligence in the same way MegaDolphin 120b, Professor 155b, Venus120b and other huge models, but use way less vRAM, because instead of cloning the weights, we share them in the vRAM.
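For readers curious about the "loop on a module SuperClass" idea the README describes, here is a minimal, hypothetical PyTorch sketch, not this repository's actual code: the class name `LoopedLayerGroup`, the `num_loops` parameter, and the grouping scheme are all assumptions. It shows the gist of the passthrough-style trick, repeating the forward pass of a group of layers so the same weights are reused in VRAM instead of being cloned.

```python
import torch
import torch.nn as nn

class LoopedLayerGroup(nn.Module):
    """Hypothetical sketch of the "SuperClass" idea: hold shared references to a
    group of layers and run their forward passes in a loop, so the weights sit in
    VRAM once instead of being cloned as in a passthrough merge."""

    def __init__(self, layers: nn.ModuleList, num_loops: int = 2):
        super().__init__()
        self.layers = layers        # shared references, no weight copies
        self.num_loops = num_loops  # how many times the whole group is repeated

    def forward(self, hidden_states, **kwargs):
        for _ in range(self.num_loops):
            for layer in self.layers:
                out = layer(hidden_states, **kwargs)
                # HF decoder layers return a tuple; plain modules return a tensor
                hidden_states = out[0] if isinstance(out, tuple) else out
        return hidden_states

# Tiny standalone demo with plain Linear layers as stand-ins for decoder blocks:
group = LoopedLayerGroup(nn.ModuleList([nn.Linear(16, 16), nn.Linear(16, 16)]), num_loops=3)
print(group(torch.randn(1, 16)).shape)  # torch.Size([1, 16]); 2 layers applied 3 times
```

Wiring something like this into an actual Dolphin Phi-2 checkpoint would amount to replacing slices of the model's decoder layer list with such groups; the exact attribute path and group size are assumptions here, not details taken from this commit.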