The night my AI-powered setup collapsed on bare metal — and the 100× Internalization Protocol I built to take my mind back
Outsourcing Oblivion
Honestly, that night still catches me off guard sometimes. I sat staring at the terminal while the Podman SSH tunnel finally clicked into place. The external drive mounted cleanly inside the container, VS Code picked up every file without a flicker, and the commands ran like silk. Everything looked perfect. I let myself feel a small, private rush of pride.
Then I moved the same setup to a real bare-metal client. No USB drive. No network. Fedora 43’s permission model woke up, SELinux context rules piled on, and the whole thing exploded. I froze. Steps that had been muscle memory the day before simply vanished. I couldn’t rebuild them from scratch, let alone teach them to anyone else. This wasn’t a minor hiccup. It haunted me for weeks and forced me to name a phenomenon that had been creeping up on all of us: outsourcing oblivion.
Let’s call it what it is. We tell ourselves we’re using AI to accelerate learning. In truth we’re often just renting out our mental labor. The work gets done on the surface, but nothing roots deeply. Wilhelm von Humboldt argued that our human powers grow only through our own active engagement. Descartes pointed to the innate structures of the mind that allow infinite creation. Both traditions insist the same thing: knowledge has never been a passive product we consume. It is a capacity we must exercise and rebuild ourselves.
I learned this the hard way. The AI industry sells a seductive story—especially in a high-pressure place like Singapore, where everyone is hunting for any edge to clear a PhD gate or pivot careers. It promises to democratize knowledge: master anything a hundred times faster, no matter how brutal the deadline. I lived the promise. While preparing for my June entrance exam for a PhD in the Czech Republic, I was rewriting a 175-page research proposal from its Taiwanese political-science foundations into computational science. At the same time I was building an offline retrieval-augmented generation system with a custom GUI. The AI handed me a completely working solution. Containers deployed, connections established, workflows humming.
It felt fantastic in the moment. But the moment the environment changed—offline deployment, no network, strict permission models—I drew a blank. I couldn’t recreate it independently or explain it with real confidence. That’s the cheat-code mindset in action. Knowledge quietly splits into two categories: the tough, internalized version that survives any change, and the borrowed version—fragile, context-tied, and ultimately alienating.
The problem runs deeper than most people admit. Why are these tools engineered to spit out instant answers and encourage quick consumption instead of the slow, grinding trial-and-error that actually builds mental strength? Because friction doesn't sell, even though the slow path is the only one that strengthens our generative capacity. Humboldt would probably shake his head. Education was supposed to awaken inner powers, not turn us into passive receivers.
(I hesitated while writing this. Am I being too harsh on AI? It can be an excellent assistant. But when it starts replacing core cognitive labor, the trouble begins.)
So I built what I call the 100× Internalization Protocol. Not a bag of tricks, but a systematic way to reintroduce the productive friction that makes knowledge stick. It has four tightly interlocked steps that turn AI from oracle into sparring partner and force me to use my own generative muscle.
First, reverse prompting. Instead of begging for the answer, I make the model unpack every underlying dependency, every algorithmic trade-off, and exactly why this path beats the alternatives. It drags the hidden scaffolding into daylight.
Second, the mental-model showdown. I draft my own preliminary plan first, then throw it at the AI and ask it to hunt for blind spots and outdated assumptions. The push-and-pull becomes something much closer to real thinking.
Third, extreme stress testing. I treat the model like a ruthless reviewer and force it to simulate the nastiest real-world conditions—race conditions, permission conflicts, bare-metal offline deployments. These are exactly where reality bites.
Fourth, the double Feynman. I explain the whole thing to an imagined complete novice until the intuition lands perfectly. Then I switch to merciless expert mode and attack every assumption. Wherever it cracks, I go back and relearn.
Together these steps change the game. AI stops being a vending machine and starts being an opponent that demands I show up with my own mind.
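For concreteness, here is how the four steps look as reusable prompt templates. The wording is illustrative rather than my exact prompts, and {topic} and {my_plan} are per-session placeholders:

```python
# Illustrative prompt templates for the four steps of the protocol.
# The exact wording is a sketch, not production prompts.
PROTOCOL = {
    "reverse_prompting": (
        "Do not hand me a finished solution for {topic}. Unpack every "
        "underlying dependency and every algorithmic trade-off, and explain "
        "exactly why this path beats the alternatives."
    ),
    "mental_model_showdown": (
        "Here is my own preliminary plan for {topic}:\n{my_plan}\n"
        "Hunt for blind spots and outdated assumptions, and push back hard."
    ),
    "stress_testing": (
        "Act as a ruthless reviewer. Simulate the nastiest real-world "
        "conditions for {topic}: race conditions, permission conflicts, "
        "bare-metal offline deployment. Show me where it breaks first."
    ),
    "double_feynman": (
        "First, grade my explanation of {topic} as a complete novice: does "
        "the intuition land? Then switch to merciless expert mode and "
        "attack every assumption I made."
    ),
}

# Usage:
# PROTOCOL["stress_testing"].format(topic="Podman volume mounts under SELinux")
```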
The output side needs the same discipline. Mental movies in your head aren’t enough anymore. These days I draw logic topologies in Mermaid, write up why certain approaches died, use atomic Git commits to version-control my thinking trail, and record discussions so the AI can later surface contradictions. Private cognition becomes something searchable, auditable—an external extension that resists both forgetting and distortion.
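To give a flavor of those topologies, here is a toy Mermaid sketch of the bare-metal failure chain from earlier; the node labels are illustrative, not lifted from my actual vault:

```mermaid
flowchart TD
    A[Podman SSH tunnel with USB mount works] --> B{Move to bare metal}
    B --> C[No network and no external drive]
    C --> D[SELinux context rules reject the volume]
    D --> E[Post-mortem note on why the old approach died]
    E --> F[Atomic commit preserving the reasoning trail]
```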
I now refuse the old temptations: straight copying of AI output, static notes, tidy bullet lists. They preserve exactly the passivity we’re trying to break.
I run the whole protocol on my Mac Mini M4 in three rigorous phases. First I slice massive Markdown files of 40,000 to 100,000 words into 5,000- to 10,000-word chunks using headings and logical turns. That prevents the model from drifting into high-altitude summaries when it gets overwhelmed.
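A minimal sketch of that slicing pass, with the file name and the heading regex as stand-ins, looks something like this:

```python
# Sketch of the chunking pass: split a large Markdown file at headings
# (standing in for "logical turns"), then pack sections into chunks of
# roughly 5,000-10,000 words each.
import re
from pathlib import Path

MIN_WORDS, MAX_WORDS = 5_000, 10_000

def split_sections(markdown: str) -> list[str]:
    """Split immediately before each level-1 or level-2 heading."""
    parts = re.split(r"(?m)^(?=#{1,2} )", markdown)
    return [p for p in parts if p.strip()]

def pack_chunks(sections: list[str]) -> list[str]:
    chunks, current, count = [], [], 0
    for section in sections:
        words = len(section.split())
        # Close the current chunk once it reaches the minimum size;
        # never let it grow past the maximum. A single oversized section
        # stays whole, since the thresholds are soft targets.
        if current and (count + words > MAX_WORDS or count >= MIN_WORDS):
            chunks.append("\n".join(current))
            current, count = [], 0
        current.append(section)
        count += words
    if current:
        chunks.append("\n".join(current))
    return chunks

source = Path("research_proposal.md").read_text(encoding="utf-8")  # stand-in file
chunks = pack_chunks(split_sections(source))
```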
Each chunk then goes to a high-fidelity model under strict orders: produce coherent prose, at least thirty percent of the original length, no lists allowed, only natural flowing narrative. Any violation and it reruns.
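The enforcement loop itself is short. Here is a sketch that assumes a local Ollama endpoint as the backend, which is my stand-in rather than the exact setup; the thirty-percent floor and the no-lists rule map directly onto the orders above:

```python
# Sketch of the "strict orders, rerun on violation" loop. The backend is
# an assumption: a local Ollama /api/generate endpoint with a stand-in
# model name. Any model wrapper would do.
import re
import requests

PROMPT = (
    "Rewrite the following notes as coherent, flowing prose. "
    "No bullet points, no numbered lists, no headings: natural narrative only. "
    "The result must be at least thirty percent of the original length.\n\n"
)

def violates_rules(original: str, output: str) -> bool:
    too_short = len(output.split()) < 0.3 * len(original.split())
    # Any bullet or numbered line counts as forbidden list formatting.
    has_list = re.search(r"(?m)^\s*(?:[-*+]|\d+[.)])\s", output) is not None
    return too_short or has_list

def expand(chunk: str, model: str = "llama3", max_retries: int = 3) -> str:
    for _ in range(max_retries):
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": PROMPT + chunk,
                  "stream": False, "options": {"temperature": 0}},
            timeout=600,
        )
        output = resp.json()["response"]
        if not violates_rules(chunk, output):
            return output
    raise RuntimeError("Model kept violating the prose-only constraints")
```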
Finally the expanded material flows into Obsidian. I add bidirectional links, draw visual graphs, and verify with local RAG. Now when I search “Podman SSH” I don’t just get facts—I get the entire reasoning chain and the original context. The goal is roughly 1.6 million words in three months while still handling PhD prep and commercial work. It sounds a little insane, but these days anything less and you risk being left behind.
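The linking pass itself can stay crude and still pay off. Here is a toy sketch that turns the first bare mention of any sibling note's title into a wiki-link; the vault path and the one-link-per-title rule are my own simplifying assumptions, and you should back up the vault before running anything like it:

```python
# Toy linking pass over an Obsidian vault: the first bare mention of
# another note's title becomes a [[wiki-link]], so backlinks and the
# graph view light up. Back up the vault first; this rewrites files.
import re
from pathlib import Path

vault = Path.home() / "vault"  # stand-in vault location
titles = [p.stem for p in vault.glob("*.md")]

for note in vault.glob("*.md"):
    text = note.read_text(encoding="utf-8")
    for title in titles:
        if title == note.stem:
            continue
        # Skip mentions that are already inside [[...]] brackets.
        pattern = rf"(?<!\[\[)\b{re.escape(title)}\b(?!\]\])"
        text = re.sub(pattern, f"[[{title}]]", text, count=1)
    note.write_text(text, encoding="utf-8")
```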
Of course there are risks. Models still try to sneak in lists under pressure. Long contexts break mid-flow. Uploading sensitive proposals or architectures raises privacy worries. I counter with temperature zero, pre-anonymization, and local preprocessing. The ultimate fix, though, is full local sovereignty—Ollama paired with AnythingLLM or similar open-source stacks.
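Pre-anonymization doesn't need to be fancy to catch the obvious leaks before anything leaves the machine. A pattern table like this is a starting sketch; real proposals deserve a proper named-entity pass on top:

```python
# Sketch of a pre-anonymization pass using a simple pattern table.
# The patterns are deliberately naive; treat them as a first filter only.
import re

REDACTIONS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "<EMAIL>",      # email addresses
    r"\b(?:\d{1,3}\.){3}\d{1,3}\b": "<IP>",     # IPv4 addresses
    r"\b[A-Z][a-z]+ [A-Z][a-z]+\b": "<NAME>",   # naive two-word person names
}

def anonymize(text: str) -> str:
    for pattern, token in REDACTIONS.items():
        text = re.sub(pattern, token, text)
    return text

safe = anonymize("Contact Rosalind Pembrick at rp@example.org from 10.0.0.5")
# -> "Contact <NAME> at <EMAIL> from <IP>"
```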
A few stubborn problems remain: how to auto-detect emotional or thematic shifts across files, optimal cross-document indexing, and reliable ways to quantify internalization depth. I’m still iterating.
At bottom this isn’t about acquiring technical skills faster. It’s about whether we can keep the freedom of the human mind. AI can sharpen our thinking or quietly soften it. The choice is ours.
I keep asking myself: Do we want to stay in the comfortable zone of effortless knowledge, happy passive consumers? Or are we willing to rebuild it ourselves every single time, until independent creation becomes the baseline of how we think? Only the second path turns “accelerated learning” from marketing slogan into genuine liberation.
That’s it.
Author: Rosalind Pembrick