Bias-Prone Machine
Human minds are prone to holding on to false arguments and wrong ideas for several interacting reasons:
- Confirmation bias: We tend to seek out or give extra weight to information that fits our existing beliefs, and ignore or downplay anything that contradicts them.
- Availability heuristic: Events or arguments that are vivid or recently encountered feel more plausible than they objectively are.
- Dunning–Kruger effect: People with limited knowledge often overestimate their own competence, so they cling to simplistic but incorrect views.
- Strong priors: In the Bayesian brain framework, your “prior” beliefs act as predictions. When these priors are very confident, the mismatch with new sensory evidence (the prediction error) is down-weighted and you won’t update your belief much.
- Precision weighting: The brain assigns a precision (inverse variance) to both priors and sensory data. If you overestimate the precision of your priors, you’ll ignore disconfirming evidence and reinforce a false idea.
- Identity-protective cognition: Beliefs that tie into your group identity (political, religious, cultural) feel threatening if challenged, so you resist changing them even in the face of facts.
- Authority and conformity: If a respected leader or majority holds a wrong idea, you may adopt it to fit in or out of deference, rather than examine it critically.
- Backfire effect: Attempts to correct a false belief can paradoxically reinforce it, because disconfirming evidence feels like a personal attack.
- Sunk-cost fallacy: Once you’ve invested time, emotion, or reputation in an idea, admitting you were wrong feels like losing that investment.
- Effort avoidance: Deeply evaluating every claim takes time and mental energy. Under cognitive load or stress, we fall back on heuristics and old beliefs.
- Information overload: Faced with vast amounts of conflicting information online, it’s easier to latch onto a simple narrative, even if it’s false.
- Halo effect: You form a strong positive (or negative) impression of a person, group, or brand based on one salient trait. In Bayesian terms, that gives you an overly precise prior $p(z)$ about anything they say or do. Evidence from that source is up-weighted, while prediction errors that would challenge your impression are down-weighted, so you keep believing them even when they’re wrong.
- Cognitive dissonance: When new evidence $x$ conflicts sharply with a held belief (the prior mean $\mu_{\rm prior}$), it generates a large prediction error. To minimize that error the brain can either update its belief or reinterpret/ignore the evidence. Often it “chooses” the latter, twisting the data to fit the prior rather than admitting error.
- Motivated search: We actively seek out or cherry-pick sources, arguments, and data that confirm our existing beliefs, browsing only familiar websites, watching only friendly commentators, and ignoring broader evidence that might challenge us.
- Biased assimilation: We interpret ambiguous or even disconfirming information in a way that reinforces our priors, explaining away criticism as “fake news,” reframing inconvenient facts as exceptions, or dismissing them as misunderstandings rather than updating our beliefs.
Taken together, strong priors (built up through repetition, emotion, expertise, or halo effects) and error-minimizing drives (which disfavor belief change under dissonance) lock you into false ideas. Under stress, group polarization, or fatigue, the precision assigned to those priors rises even further, the brain leans harder on these shortcuts, and belief revision becomes very unlikely.
In hierarchical inference, each prediction comes with a precision (confidence). When the brain assigns too much precision to a prior belief, it under‑weights sensory evidence and produces a biased posterior.
Mathematically, if the prior variance is much smaller than the sensory variance, $\Sigma_{\rm prior}\ll\Sigma_{\rm sens}$ (i.e., the prior carries far more precision), then the Kalman gain
$$ K=\frac{\Sigma_{\rm prior}}{\Sigma_{\rm prior}+\Sigma_{\rm sens}} $$
becomes very small, and the posterior mean $\mu_{\rm post}=\mu_{\rm prior}+K\,(x-\mu_{\rm prior})$ barely moves toward the evidence $x$. This is a formal way to see how overly confident priors lead to biases.
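As a minimal numerical sketch (not from the text above; it assumes scalar Gaussian beliefs and a direct, noisy observation of the quantity being estimated), the Python snippet below applies this precision-weighted update and shows how an overconfident prior absorbs almost none of the prediction error:

```python
# Sketch of the precision-weighted (Kalman) update described above,
# assuming scalar Gaussian beliefs and a direct, noisy observation x.

def bayes_update(mu_prior, var_prior, x, var_sens):
    """Return (posterior mean, posterior variance) after one observation.

    K is the Kalman gain from the formula above: the fraction of the
    prediction error (x - mu_prior) that the belief absorbs.
    """
    K = var_prior / (var_prior + var_sens)   # shrinks as the prior gets more confident
    mu_post = mu_prior + K * (x - mu_prior)  # precision-weighted compromise
    var_post = (1.0 - K) * var_prior         # posterior variance never exceeds prior variance
    return mu_post, var_post


# Overconfident prior (variance 0.01 vs. sensory variance 1.0): K ~ 0.01,
# so a surprising observation x = 5 moves the belief from 0.0 to only ~0.05.
print(bayes_update(mu_prior=0.0, var_prior=0.01, x=5.0, var_sens=1.0))

# Well-calibrated prior (equal variances): K = 0.5, the belief moves halfway, to 2.5.
print(bayes_update(mu_prior=0.0, var_prior=1.0, x=5.0, var_sens=1.0))
```

The same gain formula carries over to the multivariate case, where $\Sigma_{\rm prior}$ and $\Sigma_{\rm sens}$ are covariance matrices and $K=\Sigma_{\rm prior}(\Sigma_{\rm prior}+\Sigma_{\rm sens})^{-1}$.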