TinmanLabSL committed

Commit ac84558 · verified · 1 parent: 64754b7

Upload index.html with huggingface_hub

Files changed (1):
  index.html (+8 / -7)
index.html CHANGED
@@ -41,6 +41,7 @@
     margin-top: 1.8rem;
     margin-bottom: 0.8rem;
   }
+  p { margin-bottom: 0.8rem; }
   ul {
     padding-left: 1.2rem;
   }
@@ -85,18 +86,18 @@

   <hr />

-  <h2>Disposition Distillation</h2>
+  <p>We build on-device AI systems that reason, remember, and self-correct — small models designed to run autonomously at the edge with calibrated uncertainty and adversarial robustness.</p>

-  <p>Tinman Lab develops <strong>Disposition Distillation (DD)</strong> — a multi-teacher distillation methodology that trains <em>how a model behaves</em> into weights, not system prompts. DD models plan before acting, acknowledge uncertainty, verify their own reasoning, and know what they don't know.</p>
+  <h2>Research Areas</h2>

   <ul>
-    <li><strong>4-stage all-MIT pipeline</strong> — Kimi K2.5 → GLM-5 → MiniMax M2.7 → GLM-5</li>
-    <li><strong>7 behavioral dispositions</strong> — Eager, Deliberate, Adversarial, Curious, Self-Improving, Humble, Persistent</li>
-    <li><strong>On-device focus</strong> — 0.6B to 2B parameters, quantized for mobile and edge deployment</li>
-    <li><strong>100% open training data</strong> — MIT-licensed teachers only, zero proprietary model outputs</li>
+    <li><strong>Agent Memory</strong> — Encrypted semantic memory infrastructure for persistent agent context</li>
+    <li><strong>Adversarial Safety</strong> — Multi-agent stress-testing and trust verification for autonomous systems</li>
+    <li><strong>Perception Systems</strong> — On-device vision, voice, and multimodal understanding</li>
+    <li><strong>Disposition Distillation</strong> — Training behavioral tendencies (planning, uncertainty acknowledgment, self-verification) into sub-billion parameter model weights via all-MIT multi-teacher distillation</li>
   </ul>

-  <h2>Models</h2>
+  <h2>Open Models</h2>

   <table>
     <thead>
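
Aside: the new "Disposition Distillation" bullet describes an all-MIT multi-teacher distillation scheme. As a concept illustration only (a hypothetical sketch, not Tinman Lab's published pipeline; the function name, temperature value, and teacher-averaging strategy are all assumptions), the core multi-teacher objective could look like this in PyTorch:

```python
# Hypothetical sketch of a multi-teacher distillation loss, not Tinman Lab's
# actual Disposition Distillation pipeline: a small student is trained to
# match the averaged softened output distributions of several teachers.
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, temperature=2.0):
    # Soften each teacher's distribution and average across teachers.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)
    # Student log-probabilities at the same temperature.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 as in standard distillation.
    return F.kl_div(
        student_log_probs, teacher_probs, reduction="batchmean"
    ) * temperature ** 2

# Toy usage: 4 token positions over a 32-token vocabulary, two teachers.
student = torch.randn(4, 32, requires_grad=True)
teachers = [torch.randn(4, 32), torch.randn(4, 32)]
loss = multi_teacher_kd_loss(student, teachers)
loss.backward()  # gradients flow only into the student logits
```

Note that the removed copy's four-stage pipeline (Kimi K2.5 → GLM-5 → MiniMax M2.7 → GLM-5) suggests teachers applied in sequence, whereas this sketch averages teacher distributions in a single stage: the simplest multi-teacher baseline.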