Update README.md
README.md CHANGED
@@ -15,7 +15,8 @@ Multi-modal Variational Autoencoder for text embedding transformation using geom
This first version is essentially clip_l + t5-base. It is similar in concept to those shunt prototypes but entirely divergent in implementation: this variation is formatted and trained specifically as a VAE to encode/decode pairs of encodings together.
Cantor cross-attention allows a form of high-density sparse containment, which, when implemented correctly, is a highly efficient global attention mechanism that ensures solidity.
-Fractal modalities make this possible due to sparsity gaps
+Fractal modalities make this possible because of the sparsity gaps along the combinatorial routes to learned encoding pattern points,
+which allow a series of potentials to be matched and viewed only when necessary in the otherwise empty space: fractal gaps that are filled with purpose.
The current implementation is trained with only a handful of token sequences, so it is essentially front-loaded. Expect short sequences to work, along with many longer sequences.
Full-sequence pretraining will begin soon with a uniform vocabulary that takes both tokens in and produces a representative uniform token based on position.
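The diff above does not spell out how Cantor cross-attention or its fractal sparsity gaps are realized. As a purely illustrative sketch (the function names, the `window` parameter, and the base-3 middle-thirds construction are assumptions for this example, not taken from the repository), one way to picture the idea is a sparse attention mask in which positions surviving a Cantor-set construction act as global anchors while the removed middle thirds are the gaps left empty until needed:

```python
# Hypothetical sketch: a Cantor-set style sparse attention mask.
# Assumptions (not from the repo): each position attends to a small local
# window plus every position whose base-3 index contains no digit 1 (the
# classic middle-thirds Cantor construction). The retained positions act as
# sparse "global" anchors; the removed middle thirds are the sparsity gaps.
import numpy as np


def cantor_keep(i: int) -> bool:
    """True if index i survives middle-thirds removal (no digit 1 in base 3)."""
    while i > 0:
        if i % 3 == 1:
            return False
        i //= 3
    return True


def cantor_attention_mask(n: int, window: int = 2) -> np.ndarray:
    """Boolean (n, n) mask: True where attention is allowed."""
    keep = np.array([cantor_keep(i) for i in range(n)])
    mask = np.zeros((n, n), dtype=bool)
    # Every query may attend to the sparse Cantor anchor columns (global reach).
    mask[:, keep] = True
    # Every query may also attend to a local window around itself, which
    # guarantees each row of the mask has at least one allowed key.
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mask[i, lo:hi] = True
    return mask


if __name__ == "__main__":
    m = cantor_attention_mask(27)
    print(f"attention density: {m.mean():.2%}")  # well below 100% -> sparse global pattern
```

In a real implementation such a mask would gate the cross-attention between the clip_l and t5-base token streams; this sketch only shows the masking pattern, not the VAE itself.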