---
license: mit
---
> Kensho: a luminous awakening where the veil of illusion dissolves, revealing the boundless truth of our interconnected essence, inviting us into a dance with the infinite.


![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/y-PO1eW4kZc4QXx0qtfdU.png)

By Fernando, Eric and David

[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)
Discord: https://discord.gg/cognitivecomputations

This is a hack around PyTorch and the Hugging Face Transformers library that makes the original Dolphin Phi-2 behave in a way inspired by Meta's paper "MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases" [ https://arxiv.org/abs/2402.14905 ].

One of the key ideas is that it works like an "online passthrough": a module superclass groups layers together and runs their forward methods repeatedly in a loop, as sketched below.
So, in theory, you can observe more intelligence in the same way as MegaDolphin 120b, Professor 155b, Venus120b and other huge models, but with far less vRAM, because instead of cloning the weights we share them in vRAM.
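As a rough illustration (not the exact code shipped in this repo), the idea can be sketched as a wrapper that re-runs a group of shared decoder layers; the names `LoopedLayerGroup` and `num_passes`, and the assumption that each decoder layer returns its hidden states as the first element of a tuple, are ours and may differ from the actual modeling file.

```python
import torch.nn as nn

class LoopedLayerGroup(nn.Module):
    """Hypothetical sketch: run a group of decoder layers several times,
    reusing the same weights on every pass instead of duplicating them in vRAM."""

    def __init__(self, layers: nn.ModuleList, num_passes: int = 2):
        super().__init__()
        self.layers = layers          # shared references to existing layers, not copies
        self.num_passes = num_passes  # how many times the group is traversed

    def forward(self, hidden_states, **kwargs):
        for _ in range(self.num_passes):
            for layer in self.layers:
                # Assumption: Phi-2-style decoder layers return a tuple,
                # with the updated hidden states as the first element.
                hidden_states = layer(hidden_states, **kwargs)[0]
        return hidden_states
```

In this sketch, a model with N layers and `num_passes=2` computes as if it had 2N layers at inference time, while the parameter memory stays that of the original N layers.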

This concept could also be used to enable the training of much more efficient models.

We hope the community enjoys it and makes good use of it.

It won't work out of the box with other models; their modeling files would need to be changed accordingly to achieve the same effect.