Update README.md
README.md CHANGED
@@ -6,8 +6,24 @@ colorTo: yellow
 sdk: static
 pinned: false
 ---
-
-Lexsi Labs -- Aligned and safe AI
-
-Frontier research around Safe and Aligned Intelligence.
-
+<center>
+<a href="https://lexsi.ai/"><img src="https://raw.githubusercontent.com/Lexsi-Labs/TabTune/refs/heads/docs/assets/lexsilogowhite.png" width="600"></a>
+<a href="https://lexsi.ai/">https://www.lexsi.ai</a>
+
+Paris 🇫🇷 · Mumbai 🇮🇳 · London 🇬🇧
+
+<a href="https://discord.gg/dSB62Q7A"><img src="https://raw.githubusercontent.com/Lexsi-Labs/TabTune/refs/heads/docs/assets/discord.png" width="150"></a>
+</center>
+
+Lexsi Labs drives frontier research in Aligned and Safe AI. Our goal is to build AI systems that are transparent, reliable, and value-aligned, combining interpretability, alignment, and governance to enable trustworthy intelligence at scale.
+
+
+### Research Focus
+- **Aligned & Safe AI:** Frameworks for self-monitoring, interpretable, and alignment-aware systems.
+- **Explainability & Alignment:** Faithful, architecture-agnostic interpretability and value-aligned optimization across tabular, vision, and language models.
+- **Safe Behaviour Control:** Techniques for fine-tuning, pruning, and behavioural steering in large models.
+- **Risk & Governance:** Continuous monitoring, drift detection, and fairness auditing for responsible deployment.
+- **Tabular & LLM Research:** Foundational work on tabular intelligence, in-context learning, and interpretable large language models.