Update README.md
@@ -16,10 +16,10 @@ Multi-modal Variational Autoencoder for text embedding transformation using geom
This first version is essentially clip_l + t5-base. It is similar in concept to the earlier shunt prototypes, but entirely divergent in implementation. This variation is formatted and trained specifically as a VAE that encodes/decodes pairs of encodings together.
Cantor cross-attention allows a form of high-density sparse containment which, when implemented correctly, acts as a highly efficient global attention mechanism that helps ensure solidity.
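The exact mechanism isn't spelled out here, but the name suggests indexing position pairs with the classic Cantor pairing function, which packs two non-negative indices into one unique integer — a natural way to store a sparse set of (query, key) attention addresses as flat integers. A minimal sketch of that idea (an assumption, not the repo's actual code):

```python
import math

def cantor_pair(k1: int, k2: int) -> int:
    """Bijectively map a (k1, k2) index pair to one non-negative integer."""
    s = k1 + k2
    return s * (s + 1) // 2 + k2

def cantor_unpair(z: int) -> tuple[int, int]:
    """Recover (k1, k2) from the packed integer."""
    w = (math.isqrt(8 * z + 1) - 1) // 2  # index of the diagonal containing z
    k2 = z - w * (w + 1) // 2
    return w - k2, k2

# Every (query, key) pair gets one flat address, so a sparse global
# attention pattern can be held as a plain set of integers.
packed = {cantor_pair(q, k) for q in range(3) for k in range(3)}
```

Because the map is a bijection, no two position pairs collide, which is what makes the packed layout "high-density" without losing information.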
The current implementation is trained with only a handful of token sequences, so it's essentially front-loaded. Expect short sequences to work, along with many longer sequences.
Full-sequence pretraining will begin soon with a uniform vocabulary that takes in both models' tokens and produces a single representative uniform token for each position.
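As a rough sketch of what a position-wise uniform vocabulary could look like — the actual scheme hasn't been published, so the names and mapping below are hypothetical — each (clip_l id, t5 id) pair seen at a position gets one shared id:

```python
# Hypothetical sketch: one uniform id per (clip_l_id, t5_id) pair at a position.
merged_vocab: dict[tuple[int, int], int] = {}

def uniform_token(clip_id: int, t5_id: int) -> int:
    """Return a single representative id for the token pair at one position."""
    key = (clip_id, t5_id)
    if key not in merged_vocab:
        merged_vocab[key] = len(merged_vocab)  # grow the vocab on first sight
    return merged_vocab[key]

# One uniform token per position, replacing the two separate token streams.
clip_ids = [320, 1125, 49407]  # illustrative clip_l ids, not real tokenizer output
t5_ids = [37, 4891, 1]         # illustrative t5-base ids
uniform = [uniform_token(c, t) for c, t in zip(clip_ids, t5_ids)]
```

The same pair always resolves to the same uniform id, so the merged stream stays consistent across sequences.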
This VAE is not for images - it's trained specifically to encode and decode PAIRS of encodings, each slightly twisted and warped in the direction of intention by the training. This is not your usual VAE, but she's most definitely trained like one.
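A minimal numpy sketch of that idea — encode a clip_l/t5 embedding pair into one shared latent and decode both back. The widths match clip_l and t5-base hidden sizes, but the latent size and the untrained random weights are placeholders, not the actual checkpoint:

```python
import numpy as np

rng = np.random.default_rng(0)
D_CLIP, D_T5, D_LAT = 768, 768, 256  # clip_l / t5-base widths; latent size is a guess

# Untrained random projections stand in for the real encoder/decoder weights.
W_enc = rng.standard_normal((D_CLIP + D_T5, 2 * D_LAT)) * 0.02
W_dec = rng.standard_normal((D_LAT, D_CLIP + D_T5)) * 0.02

def encode_pair(clip_vec, t5_vec):
    """Jointly encode both embeddings into mu/logvar, then sample z."""
    h = np.concatenate([clip_vec, t5_vec]) @ W_enc
    mu, logvar = h[:D_LAT], h[D_LAT:]
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(D_LAT)  # reparameterization
    return z, mu, logvar

def decode_pair(z):
    """Decode the shared latent back into the two embedding spaces."""
    out = z @ W_dec
    return out[:D_CLIP], out[D_CLIP:]

z, mu, logvar = encode_pair(rng.standard_normal(D_CLIP), rng.standard_normal(D_T5))
clip_hat, t5_hat = decode_pair(z)
```

The single latent is what couples the two streams: both reconstructions must come from the same z, which is what lets training warp each encoding toward the other.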