Abraham Dada

A Short Essay: You Can't Consciously Fine-Tune Your Neurons

Written: 18th May 2022
"We are blind to our blindness. We have very little idea of how little we know. We're not designed to know how little we know." — Daniel Kahneman

We like to imagine we're rational agents flipping belief switches on command. We're not. Beliefs don't live in tidy drawers; they're entangled—biases, emotions, habits, identity. You can't consciously fine-tune your neurons in real time—not with any reliable granularity. You can't surgically isolate one irrational belief, quarantine it, and expect the rest of your cognition to stay pristine. That's not how minds update.

Belief systems aren't modular. Say it plainly: accept one comforting claim without evidence and it rarely stays put. It leaks. Not because you're stupid—because most of your updating is unconscious. You're not just choosing isolated ideas; you're training patterns. Lower the bar once and your brain quietly reuses the same cheaper move somewhere adjacent.

You adopt a belief because it feels good or fits your tribe. That small win resets your internal threshold for "what counts as a good enough reason." Attention shifts towards confirming cues; contradictory cues feel noisy and get discounted. Next time you're in a different domain, your brain reaches for the same move because it's metabolically cheap. You didn't decide to generalise it—your update rule did.

First, health slides into finance. You start trusting "vibes" over data in wellness—this detox tea just feels right—and each sip pays you in a tiny hit of relief. That relief is a reward signal; your brain tags the inference ("felt right" ⇒ "works") as a good pathway and, by simple Hebbian logic, strengthens the synapses that carried it. Next month you're evaluating a coin or a stock and your cortex reaches for the cheapest, fastest prior it has. The founder sounds visionary; the narrative gives you the same micro-reward, so your threshold for evidence quietly drops and the body sensation of "this is right" stands in for analysis. You didn't decide to reuse the heuristic; your learning machinery generalised it. Repetition wired a comfort-first shortcut, and shortcuts don't respect domain boundaries—they fire wherever the pattern rhymes.
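
If the Hebbian hand-wave feels too loose, here is the same idea as a toy in code, a sketch under assumptions of my own rather than a model of real synapses: a single weight sits between the feeling "this feels right" and the conclusion "it works", gets nudged up by every small hit of relief, and then fires just as readily on a founder as it did on the tea. Every name and number in it is invented for illustration.

```python
# Toy sketch, not a claim about real neurons: one "felt right => works" pathway
# whose weight is attached to a feeling, not to a domain. All values are made up.

def hebbian_update(weight, pre, post, reward, lr=0.05):
    """Strengthen a connection when pre and post fire together and it pays off."""
    return weight + lr * pre * post * reward

# One shared input feature: "this feels right". The weight maps that feeling
# onto the conclusion "it works".
w_feels_right = 0.2

# Wellness phase: each sip of the detox tea delivers a small hit of relief.
for _ in range(20):
    feels_right = 1.0          # the tea "just feels right"
    concluded_it_works = 1.0   # and you conclude that it works
    relief = 0.5               # tiny reward signal
    w_feels_right = hebbian_update(w_feels_right, feels_right,
                                   concluded_it_works, relief)

# Finance phase: a visionary-sounding founder triggers the same feeling.
# The weight carries no "health only" label, so the shortcut fires here too.
founder_feels_right = 0.9
confidence_in_the_coin = founder_feels_right * w_feels_right

print(f"learned weight on 'feels right': {w_feels_right:.2f}")
print(f"confidence in the coin before any analysis: {confidence_in_the_coin:.2f}")
```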

Then politics leaks into work. You grant "our side wouldn't lie" a pass and it feels good—identity affirmed, tribe intact. That feeling again acts like reinforcement. The mind learns not just a belief but a procedure: when narrative fits identity, lower scrutiny. Three weeks later your team's metrics don't quite add up, but the story flatters your project, so the same low-scrutiny procedure runs. The internal error-check that would normally light up—slow down, something's off—is less sensitive because you've been rewarding "story over friction." The pass you gave your politics trained a global update rule. Under stress and time pressure, the brain chooses cheap paths, and the cheapest path is the one you've rehearsed.

Last, relationships. You adopt "if they cared they'd text first," and you start routing ambiguous behaviour through that frame: absence becomes evidence. That's compression—collapsing a messy, multi-cause situation into a one-step mapping ("no ping" ⇒ "no care"). Each time you explain it that way, the mapping gets more fluent, and fluency feels like truth. Old attachment memories and prediction habits pile on; the system hates uncertainty and would rather minimise error by filling gaps with a familiar story than sit in not-knowing. The lift you feel isn't accuracy; it's the ease of a well-grooved circuit. You end up more certain, but it's certainty you manufactured by shaving off alternatives.

Under the hood, none of this is mystical. Neurons learn by correlation and reward: fire together, wire together, and wire stronger when it feels good. Beliefs aren't just propositions; they're policies—if-this-then-that pathways the brain finds cheap to run. Once a policy is rewarded, the synaptic weights that implement it don't carry a label like "health only" or "politics only." They're just efficient routes. So when a new context rhymes with an old one—same kind of affect, same cadence of story—the policy fires. Attention tilts towards confirming cues, conflict-monitoring eases off, and the global threshold for "good enough" slides without a memo to consciousness. That's why you can't "fine-tune your neurons" on demand: metacognition is slow and sparse; plasticity is local and blind to your intentions. The system optimises for metabolic cost and prediction ease, not for your after-the-fact promise to keep comfort quarantined to one corner of life.
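
If you want the leak as arithmetic rather than metaphor, here is a minimal sketch under toy assumptions of my own: one shared bar for "good enough", a fixed comfort bonus, and a small drift downwards every time comfort does the work of evidence. None of the numbers are real; the point is only that the bar carries no domain label, so each weaker claim gets through on the credit the previous one spent.

```python
# Toy sketch of a single, domain-blind "good enough" threshold.
# All values are invented for illustration.

bar = 0.70            # evidence a claim normally needs before you accept it
original_bar = bar
comfort_bonus = 0.30  # how much "this feels good" stands in for evidence
drift = 0.10          # how far each comfort-driven acceptance lowers the bar

claims = [
    ("health",        "the detox tea works",    0.45),
    ("finance",       "this coin will moon",    0.35),
    ("politics",      "our side wouldn't lie",  0.25),
    ("work",          "the metrics are fine",   0.15),
]

for domain, claim, evidence in claims:
    accepted = evidence + comfort_bonus >= bar
    if accepted and evidence < bar:
        # The bar is global: it carries no "health only" label,
        # so lowering it for the tea lowers it for the coin too.
        bar = max(0.0, bar - drift)
    passes_original_bar = evidence + comfort_bonus >= original_bar
    print(f"[{domain}] {claim!r}: accepted={accepted}, "
          f"clears the un-drifted bar? {passes_original_bar}, "
          f"shared bar now {bar:.2f}")
```

Each claim in that list is weaker than the one before it, and only the first would have cleared the original bar; the rest get in because the bar itself has been quietly lowered.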

We do try to compartmentalise. It can even hold for a while. But walls built out of willpower are brittle. Under stress, identity pressure, or social reinforcement, they develop holes. The same shortcut slips through—because it's cheaper to reuse it than to spin up a separate, stricter rule set for every domain.

Ultimately, beliefs leak. Over time they shape who you are—often in ways you won't notice until the bill arrives.