Update README.md
README.md CHANGED
@@ -18,7 +18,8 @@ using our novel Plain training method,
 as an example of Tabby’s tabular synthesis capabilities.
 Tabby enhances transformer-based LLMs by incorporating **Mixture of Experts (MoE) layers**,
 allowing for better modeling of structured data.
-
+
+🐱 Check out our [blog](https://sprocketlab.github.io/posts/2025/02/tabby/) or [paper](https://arxiv.org/abs/2503.02152) for more details and our [GitHub repo](https://github.com/soCromp/tabby) for code to use the model!
 
 
 - **Developed by:** University of Wisconsin-Madison
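For readers unfamiliar with the MoE layers mentioned in the README text above, the sketch below shows a generic gated Mixture-of-Experts feed-forward block in PyTorch. It is only an illustration of the general technique, not Tabby’s actual implementation; the layer sizes, number of experts, and softmax gating scheme are assumptions made purely for this example.

```python
# Illustrative sketch only: a generic gated Mixture-of-Experts feed-forward
# block in PyTorch. This is NOT Tabby's implementation; dimensions, expert
# count, and the softmax gating scheme are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    def __init__(self, d_model: int = 768, d_hidden: int = 3072, num_experts: int = 4):
        super().__init__()
        # Each "expert" is a small feed-forward network; in an MoE transformer
        # these replace the single MLP of a standard transformer block.
        self.experts = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Linear(d_model, d_hidden),
                    nn.GELU(),
                    nn.Linear(d_hidden, d_model),
                )
                for _ in range(num_experts)
            ]
        )
        # Gating network produces a per-token mixture weight for every expert.
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        weights = F.softmax(self.gate(x), dim=-1)                    # (batch, seq, experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, seq, d_model, experts)
        return (outputs * weights.unsqueeze(-2)).sum(dim=-1)         # (batch, seq, d_model)

if __name__ == "__main__":
    layer = MoEFeedForward()
    print(layer(torch.randn(2, 16, 768)).shape)  # torch.Size([2, 16, 768])
```

Consult the linked paper and GitHub repo for how Tabby actually structures, routes, and trains its experts for tabular data.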