<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Blog | TinyMemoryLM</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Geist:wght@400;500;600;700&family=Geist+Mono&display=swap" rel="stylesheet">
<style>
:root {
--black: #000000;
--black-soft: #0a0a0a;
--black-muted: #111111;
--gray-1: #171717;
--gray-2: #262626;
--gray-3: #363636;
--gray-4: #525252;
--gray-5: #737373;
--gray-6: #a3a3a3;
--gray-7: #d4d4d4;
--gray-8: #e5e5e5;
--gray-9: #f5f5f5;
--white: #ffffff;
--accent: #ff4d00;
--accent-muted: #ff6a2a;
--font-sans: 'Geist', -apple-system, BlinkMacSystemFont, sans-serif;
--font-mono: 'Geist Mono', 'SF Mono', 'Fira Code', monospace;
--container-max: 1100px;
}
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
html {
font-size: 16px;
scroll-behavior: smooth;
}
body {
font-family: var(--font-sans);
background: var(--black);
color: var(--gray-7);
line-height: 1.6;
-webkit-font-smoothing: antialiased;
}
a {
color: var(--white);
text-decoration: none;
transition: color 0.15s ease;
}
a:hover {
color: var(--accent);
}
.container {
max-width: var(--container-max);
margin: 0 auto;
padding: 0 24px;
}
/* Navigation */
nav {
position: fixed;
top: 0;
left: 0;
right: 0;
z-index: 100;
background: rgba(0, 0, 0, 0.8);
backdrop-filter: blur(12px);
border-bottom: 1px solid var(--gray-2);
padding: 16px 0;
}
nav .container {
display: flex;
justify-content: space-between;
align-items: center;
}
.nav-brand {
font-size: 18px;
font-weight: 600;
color: var(--white);
display: flex;
align-items: center;
gap: 8px;
}
.nav-brand span {
color: var(--accent);
}
.nav-links {
display: flex;
gap: 32px;
}
.nav-links a {
font-size: 14px;
font-weight: 500;
color: var(--gray-6);
}
.nav-links a:hover {
color: var(--white);
}
/* Page Header */
.page-header {
padding: 140px 0 60px;
background: var(--black);
border-bottom: 1px solid var(--gray-2);
}
.page-header h1 {
font-size: 48px;
font-weight: 700;
color: var(--white);
margin-bottom: 16px;
letter-spacing: -0.02em;
}
.page-header p {
font-size: 18px;
color: var(--gray-5);
max-width: 500px;
}
/* Blog Section */
.blog-section {
padding: 80px 0;
background: var(--black);
}
.blog-grid {
display: grid;
gap: 24px;
}
.blog-card {
display: block;
background: var(--gray-1);
border: 1px solid var(--gray-2);
border-radius: 12px;
padding: 32px;
transition: all 0.2s ease;
}
.blog-card:hover {
border-color: var(--gray-3);
transform: translateY(-2px);
}
.blog-meta {
display: flex;
align-items: center;
gap: 16px;
margin-bottom: 16px;
}
.blog-date {
font-size: 13px;
color: var(--gray-5);
font-family: var(--font-mono);
}
.blog-tag {
font-size: 11px;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.05em;
color: var(--accent);
background: rgba(255, 77, 0, 0.1);
padding: 4px 10px;
border-radius: 4px;
}
.blog-card h2 {
font-size: 22px;
font-weight: 600;
color: var(--white);
margin-bottom: 12px;
line-height: 1.3;
}
.blog-card p {
font-size: 15px;
color: var(--gray-5);
line-height: 1.6;
margin-bottom: 16px;
}
.blog-read-more {
font-size: 14px;
font-weight: 500;
color: var(--accent);
display: inline-flex;
align-items: center;
gap: 6px;
}
.blog-read-more::after {
content: '→';
transition: transform 0.2s ease;
}
.blog-card:hover .blog-read-more::after {
transform: translateX(4px);
}
/* Footer */
footer {
padding: 60px 0;
background: var(--black-soft);
border-top: 1px solid var(--gray-2);
text-align: center;
}
footer p {
color: var(--gray-5);
font-size: 14px;
margin-bottom: 8px;
}
footer a {
color: var(--gray-5);
}
footer a:hover {
color: var(--accent);
}
/* Responsive */
@media (max-width: 768px) {
.page-header h1 {
font-size: 36px;
}
.nav-links {
display: none;
}
.blog-card {
padding: 24px;
}
}
</style>
</head>
<body>
<nav>
<div class="container">
<a href="index.html" class="nav-brand">
<span>/</span>TinyMemoryLM
</a>
<div class="nav-links">
<a href="index.html">Home</a>
<a href="status.html">Status</a>
<a href="#">GitHub</a>
</div>
</div>
</nav>
<main>
<section class="page-header">
<div class="container">
<h1>Blog</h1>
<p>Updates on TinyMemoryLM development, training adventures, and things I learned the hard way.</p>
</div>
</section>
<section class="blog-section">
<div class="container">
<div class="blog-grid">
<a href="I Released TMLM-Haiku-1.3 And It Is Still Dumb.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-23</span>
<span class="blog-tag">Model Releases</span>
</div>
<h2>I Released TMLM-Haiku-1.3 And It Is Still Dumb</h2>
<p>I released TMLM-Haiku-1.3 today. It is on Hugging Face. It is open weights. It is still completely devoid of intelligence. I trained it with Muon. I spent electricity. I generated heat. The model still thinks Paris is a person.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Flashed The Matrix VBIOS And Now I Train Models All Day.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-21</span>
<span class="blog-tag">Hardware Madness</span>
</div>
<h2>I Flashed The Matrix VBIOS And Now I Train Models All Day</h2>
<p>Yesterday I wrote about how AI failed to help me find the InfoROM for VBIOS flashing. It could not do it. I had to do it myself. I spent the night reading forums. Reading modding guides. Reading warnings that I should not be doing this.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Asked AI To Mod My VBIOS And It Choked At Step Four.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-20</span>
<span class="blog-tag">Hardware Fails</span>
</div>
<h2>I Asked AI To Mod My VBIOS And It Choked At Step Four</h2>
<p>I have an RTX 5090 OC LC. It runs at 600W. I wanted 700W. Not because I need it. Not because it is safe. Because I can. Because the model said it could help. Because I have learned nothing from previous AI disappointments. The plan was simple. Four steps. Extract the VBIOS. Find the wattage limit. Modify it. Flash it back. How hard could it be? The answer is very hard. The AI failed at step four. It could not figure out how to get the InfoROM. It tried for an hour. It gave up. I am still at 600W.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Watched Project Hail Mary And Forgot About My NaN Loss.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-20</span>
<span class="blog-tag">Not AI Related</span>
</div>
<h2>I Watched Project Hail Mary And Forgot About My NaN Loss</h2>
<p>This blog is usually about AI. About training models. About GPUs that cost more than my education. About loss curves that go down and then suddenly become NaN and destroy my will to live. Today I am writing about something else. Something that made me forget about my 261 hour training run. Something that made me feel joy for the first time in weeks. I watched Project Hail Mary.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Woke Up To NaN And Now I Am Dead Inside.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-19</span>
<span class="blog-tag">Training Disasters</span>
</div>
<h2>I Woke Up To NaN And Now I Am Dead Inside</h2>
<p>I went to sleep happy. The loss was going down. The gradients were stable. The GPU was humming at 60C like a contented cat. I dreamed of completion. I dreamed of a finished Sonnet model. I dreamed of sleep that was not interrupted by thoughts of learning rate schedules.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Tried Opus 4.6 And Now Everything Else Feels Broken.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-18</span>
<span class="blog-tag">Confessions</span>
</div>
<h2>I Tried Opus 4.6 And Now Everything Else Feels Broken</h2>
<p>I have spent the last month writing blogs about how AI models are lazy. How they are too expensive. How they form unhealthy attachments. How they cannot finish a task without asking for permission. I stand by most of that. Opus 4.6 changed my mind about the laziness part.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="261 Hours For A 300M Model And I Have Every Optimization.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-17</span>
<span class="blog-tag">Training Pain</span>
</div>
<h2>261 Hours For A 300M Model And I Have Every Optimization</h2>
<p>I have every optimization under the sun enabled. Native NVFP4 quantization. Torch.compile with max auto tune and cudagraphs. No gradient accumulation. Maximum batch size. My GPU is locked at 600W. My clocks are fixed. My cooling is liquid. Everything is perfect.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Locked My GPU Clocks And Now It Runs Forever.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-16</span>
<span class="blog-tag">Hardware</span>
</div>
<h2>I Locked My GPU Clocks And Now It Runs Forever</h2>
<p>I have an RTX 5090 OC LC edition. Liquid cooled. Overclocked out of the box. It is the kind of card that makes people ask uncomfortable questions about my financial decisions. I have no good answers.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Built A Training UI And Then Unsloth Laughed.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-15</span>
<span class="blog-tag">Dev Struggles</span>
</div>
<h2>I Built A Training UI And Then Unsloth Laughed</h2>
<p>I decided to build a training interface. A backend. A way for people to fine-tune models without touching a terminal. It sounded simple. It was not simple. It is currently the hardest thing I have ever done and I once tried to explain transformers to my cat.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="Every AI Model Is Lazy And I Have The Screenshots.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-14</span>
<span class="blog-tag">Unpopular Opinions</span>
</div>
<h2>Every AI Model Is Lazy And I Have The Screenshots</h2>
<p>I have asked many AI models to build things. Fully implement a task. Write the code. Run the tests. Fix the errors. Ship it. Not one of them has done this without me holding their hand through every single step.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="OpenAI Did A Good Thing And Everyone Is Mad About It .html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-13</span>
<span class="blog-tag">Unpopular Opinions</span>
</div>
<h2>OpenAI Did A Good Thing And Everyone Is Mad About It</h2>
<p>I have an unpopular opinion and I am ready to be yelled at for it. OpenAI removing GPT-4o was the right decision. People are furious about this. They are grieving. They are writing petitions. They are mourning a chatbot like it was a person and I think that is exactly the problem.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Built A Tool That Snitches On AI Models.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-12</span>
<span class="blog-tag">Projects</span>
</div>
<h2>I Built A Tool That Snitches On AI Models</h2>
<p>Every AI model has an accent. Not a literal accent because they do not have mouths. A writing accent. A way of forming sentences that gives them away like a fingerprint at a crime scene.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Spent $40 And Got A Greeting .html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-11</span>
<span class="blog-tag">Industry Rants</span>
</div>
<h2>I Spent $40 And Got A Greeting</h2>
<p>I used to spend money on AI APIs for testing. Now I spend money on AI APIs and immediately regret every life choice that led me to that moment. The prices have gotten out of hand and I need to talk about it before I have a breakdown in the middle of a terminal window.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Released A Model And Nobody Clapped (Fair) .html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-10</span>
<span class="blog-tag">Model Releases</span>
</div>
<h2>I Released A Model And Nobody Clapped (Fair)</h2>
<p>I released a model yesterday. TMLM-Haiku-1. It is small. Surprisingly small. It also somehow speaks which I consider a major achievement given my training budget and general approach to machine learning which can best be described as throwing things at a GPU until something sticks.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="Distilling Closed Models Until They Forget They Were Closed.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-9/span>
<span class="blog-tag">AI Thoughts</span>
</div>
<h2>Distilling Closed Models Until They Forget They Were Closed</h2>
<p>I have been thinking about model distillation lately. Not the academic kind with proper methodology and peer review. The hobbyist kind where someone spends their own money on API credits, LoRA fine-tunes a small model, and releases it for free because they can.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="I Finally Switched Terminals (And My Ego Is Healing).html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-8</span>
<span class="blog-tag">Tooling</span>
</div>
<h2>I Finally Switched Terminals (And My Ego Is Healing)</h2>
<p>I used the default macOS terminal for years. Not because I loved it. I kept it because change is scary and I am deeply committed to mediocrity. Then I tried Warp and realized I have been suffering through a text-based interface that treats me like an enemy.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="The Chinchilla Effect: Why Tiny Models Have to Be Picky.html" class="blog-card">
<div class="blog-meta">I Finally Switched Terminals (And My Ego Is Healing).html
<span class="blog-date">2026-03-7</span>
<span class="blog-tag"> Scaling Laws</span>
</div>
<h2>The Chinchilla Effect: Why Tiny Models Have to Be Picky</h2>
<p>The Chinchilla paper told us something elegant. For compute optimal training, aim for roughly twenty tokens per parameter. A 70 billion parameter model wants 1.4 trillion tokens. A 1 million parameter model wants 20 million tokens. The math is clean. The implication is messy.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="Teaching AI to Regret: The Backspace Token Theory.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-6</span>
<span class="blog-tag">Compute Philosophy</span>
</div>
<h2>The Training Time Compute Trap</h2>
<p>There is a moment in every AI project when someone says "maybe we just need more compute." It sounds reasonable. It sounds scientific. It sounds like the kind of thing that gets budgets approved and GPUs ordered. Then you wake up three weeks later, your electricity bill has achieved sentience, and your model still thinks "python" refers exclusively to snakes.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="Teaching AI to Regret: The Backspace Token Theory.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-5</span>
<span class="blog-tag">Model Experiments</span>
</div>
<h2>Teaching AI to Regret: The Backspace Token Theory</h2>
<p>Humans backtrack. We type "thr" and realize we meant "the" and we fix it. We type "tje" and we laugh at our own fingers and we correct it. Large language models do not do this. They commit to every token like it is a binding legal contract. I started wondering what would happen if we gave them an out. What if we added a backspace token to the vocabulary?</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-TheIronyCloud WhenAIDowntimeMeetsTiming.htm" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-4</span>
<span class="blog-tag">Industry Chaos</span>
</div>
<h2>The Irony Cloud: When AI Downtime Meets Timing</h2>
<p>Anthropic is down. Of course it is down. The universe has a sense of humor and apparently that humor is "make the ethical AI company unreachable right after they make a big ethical statement."</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="Blog-TheBloateningWhenAICompaniesForgotAbouttheLittleGuy.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-3</span>
<span class="blog-tag">Industry Rants</span>
</div>
<h2>The Bloatening: When AI Companies Forgot About the Little Guy</h2>
<p>I used to get excited about model releases. A new tiny model would drop and I would immediately try to run it on my laptop that sounds like a jet engine. Now I scroll through announcements and see numbers that require a data center just to pronounce.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-Why-Does-My-AI-Think-Math-Is-a-Fishing-Trip.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-03-2</span>
<span class="blog-tag">Budget</span>
</div>
<h2>Why Does My AI Think Math Is a Fishing Trip?</h2>
<p>I asked my model to solve a simple integral. It responded with a detailed description of trout migration patterns. This is not the answer I was looking for, though I admit the trout explanation was surprisingly well-structured. Training a small language model is like teaching a very enthusiastic puppy. It wants to please you.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-Training-Models-on-a-Ramen-Budget.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-29</span>
<span class="blog-tag">Budget</span>
</div>
<h2>Training Models on a Ramen Budget</h2>
<p>How to train a transformer when your GPU bill looks like a phone number. Tips, tricks, and questionable life choices from someone who learned about electricity costs the hard way.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-One-Year-of-Vibecoding-and-Other-Questionable-Life-Choices.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-22</span>
<span class="blog-tag">Vibecoding</span>
</div>
<h2>One Year of Vibecoding and Other Questionable Life Choices</h2>
<p>You start vibecoding because someone told you it feels like magic. You imagine floating through code. Reality does not care about your imagination.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-OpenClaw-The-Most-Overhyped-Bot-Since-Sliced-Bread.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-26</span>
<span class="blog-tag">Hot Takes</span>
</div>
<h2>OpenClaw: The Most Overhyped Bot Since Sliced Bread</h2>
<p>OpenClaw, formerly Clawdbot, formerly Moltbot, has now accumulated more GitHub stars than the Linux kernel. Let that sink in.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-The-Scaling-Wall-And-Other-Things-I-Yelled-At.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-27</span>
<span class="blog-tag">Scaling</span>
</div>
<h2>The Scaling Wall And Other Things I Yelled At</h2>
<p>Someone told me we can just keep making models bigger. They said compute will solve everything. They lied. Or they hoped. Or they had investors to please.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-Your-AI-Agent-is-Lying-Behind-Your-Back.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-20</span>
<span class="blog-tag">Reality Check</span>
</div>
<h2>Your AI Agent is Lying Behind Your Back</h2>
<p>You know the feeling. You type a prompt. The text streams. The terminal says success. I am here to tell you that you are being played.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-Anthropic%27s-Distillation-Drama-A-Masterclass-in-Projection.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-25</span>
<span class="blog-tag">AI Theater</span>
</div>
<h2>Anthropic's Distillation Drama: A Masterclass in Projection</h2>
<p>So Anthropic published a blog post. Big surprise. The title alone could power a small city.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-The-Wasted-Precision-of-the-Output-Layer.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-19</span>
<span class="blog-tag">Architecture</span>
</div>
<h2>The Wasted Precision of the Output Layer</h2>
<p>We spend a lot of time optimizing attention mechanisms. We prune weights. We quantize activations. Yet there is a massive inefficiency sitting right at the very end of the network.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-My-Baby-Model-Takes-Forever-to-Grow-Up.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-21</span>
<span class="blog-tag">GPU Tears</span>
</div>
<h2>My Baby Model Takes Forever to Grow Up</h2>
<p>You start with hope. A tiny transformer. A few million parameters. You think, how long could this possibly take? I am here to ruin your optimism.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-External-Memory-Modules-Because-My-Model-Has-Commitment-Issues.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-23</span>
<span class="blog-tag">Memory Hacks</span>
</div>
<h2>External Memory Modules: Because My Model Has Commitment Issues</h2>
<p>You know what takes forever? Training a transformer. You know what takes less forever? Training a tiny thing that just remembers stuff.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-The-Goalpost-Has-Legs-Why-AGI-Keeps-Running-Away.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-24</span>
<span class="blog-tag">Hot Takes</span>
</div>
<h2>The Goalpost Has Legs: Why AGI Keeps Running Away</h2>
<p>Imagine handing Claude Opus 4.6 to someone from 2004. They would think you summoned a minor deity. Our collective response? A polite nod.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-Words-Words-Words-My-Model-Learned-to-Ramble.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-29</span>
<span class="blog-tag">Tiny Wins</span>
</div>
<h2>Words, Words, Words: My Model Learned to Ramble</h2>
<p>My model has achieved something truly special. It can now ramble. Endlessly. With words. It does not just predict tokens anymore. It holds court.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-the-memory-bottleneck.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-18</span>
<span class="blog-tag">Memory</span>
</div>
<h2>The Memory Bottleneck: Why Your Model Can't Remember Anything</h2>
<p>Context windows are like attention spans at a tech conference. Everyone pretends they can focus for longer, but really they're just waiting for the snack break.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-makeshift-mtp.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-17</span>
<span class="blog-tag">MTP</span>
</div>
<h2>Makeshift MTP: Predicting the Future on a Budget</h2>
<p>Multi-token prediction sounds fancy. Really it's just the model trying to do its homework before the teacher assigns it. Sometimes it works. Sometimes it doesn't. But it always tries.</p>
<span class="blog-read-more">Read more</span>
</a>
<a href="blog-built-with-curiosity-over-compute.html" class="blog-card">
<div class="blog-meta">
<span class="blog-date">2026-02-16</span>
<span class="blog-tag">Philosophy</span>
</div>
<h2>Built with Curiosity Over Compute</h2>
<p>The tagline sounds nice. What it really means is we couldn't afford the compute so we got curious instead.</p>
<span class="blog-read-more">Read more</span>
</a>
</div>
</div>
</section>
</main>
<footer>
<div class="container">
<p>Built with curiosity over compute</p>
<p>TinyMemoryLM by <a href="https://github.com">AILAY</a> | 2026</p>
</div>
</footer>
</body>
</html>