
The Free Energy Principle

$$ a, \mu, m = \arg\min_{a,\, \mu,\, m} F(\tilde{s}, \mu \mid m) $$

Where:

  1. $F(\tilde{s}, \mu \mid m)$: the variational free energy, a quantity that bounds surprise.
  2. $\tilde{s}$: sensory inputs (possibly generalized coordinates of sensations).
  3. $\mu$: internal states (like beliefs or expectations).
  4. $m$: the model or structure used to generate predictions.
  5. $a$: actions that can influence the sensory input.
  6. $\arg\min$: the values of $a, \mu, m$ that jointly minimize the free energy.

This means that an agent (e.g. a brain) is constantly trying to:

  1. Perceive the causes of its sensations correctly (update beliefs $\mu$; see the sketch after this list),
  2. Act to bring the world in line with its expectations (change $a$),
  3. Adapt its generative model of the world $m$ to better predict future inputs.
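
To make the perception step concrete, here is a minimal sketch for a one-dimensional Gaussian generative model. All of the numbers below (prior mean, variances, learning rate) are illustrative choices, not part of the principle itself; the point is that gradient descent on $F$ with respect to $\mu$ recovers the exact Bayesian posterior mean.

```python
# A minimal sketch of free-energy minimization for perception, assuming a
# 1-D Gaussian generative model (all numbers are illustrative, not canonical).
import math

# Generative model m: prior p(theta) = N(m_p, s2_p), likelihood p(s|theta) = N(theta, s2_s)
m_p, s2_p = 0.0, 1.0      # prior mean and variance over the hidden cause theta
s2_s = 0.5                # sensory noise variance
s = 2.0                   # observed sensory input s~ (a single sample)

# Recognition density q(theta) = N(mu, s2_q) with fixed variance; mu is the internal state
s2_q = 0.1
mu = 0.0

def free_energy(mu):
    """Variational free energy F = KL(q || prior) - E_q[log p(s | theta)]."""
    kl = math.log(math.sqrt(s2_p / s2_q)) + (s2_q + (mu - m_p) ** 2) / (2 * s2_p) - 0.5
    expected_loglik = -0.5 * math.log(2 * math.pi * s2_s) - ((s - mu) ** 2 + s2_q) / (2 * s2_s)
    return kl - expected_loglik

# Perception as gradient descent on F with respect to the internal state mu
lr = 0.1
for _ in range(200):
    grad = (mu - m_p) / s2_p - (s - mu) / s2_s   # dF/dmu in closed form
    mu -= lr * grad

# The minimum coincides with the exact Bayesian posterior mean (precision-weighted average)
posterior_mean = (m_p / s2_p + s / s2_s) / (1 / s2_p + 1 / s2_s)
print(f"mu after descent: {mu:.4f}, exact posterior mean: {posterior_mean:.4f}, F: {free_energy(mu):.4f}")
```

With these numbers the descent converges to $\mu \approx 1.33$, the precision-weighted average of the prior mean and the observation.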

The Bayesian brain hypothesis

$$ \mu = \arg\min_{\mu} \, D_{\mathrm{KL}}\left( q(\vartheta) \,\|\, p(\vartheta \mid \tilde{s}) \right) $$

Here $q(\vartheta)$ is the recognition density encoded by the internal states $\mu$, and $p(\vartheta \mid \tilde{s})$ is the true posterior density over the environmental causes $\vartheta$ of sensory input. On this view, perception is the process of making the brain's approximate posterior match the true one.
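
Continuing the Gaussian example above, where the exact posterior is available in closed form, the sketch below shows that the KL divergence between the recognition density $q$ and the true posterior is minimized exactly when $\mu$ equals the posterior mean (again with illustrative numbers).

```python
# A small sketch of the Bayesian-brain reading: choosing mu to minimize
# KL(q(theta) || p(theta | s~)). Assumes the same 1-D Gaussian model as above,
# where the exact posterior has a closed form (numbers are illustrative).
import math

m_p, s2_p = 0.0, 1.0   # prior N(m_p, s2_p)
s2_s, s = 0.5, 2.0     # likelihood N(theta, s2_s) and the observation
s2_q = 0.1             # fixed variance of the recognition density q

# Exact Gaussian posterior p(theta | s) for this conjugate model
post_var = 1 / (1 / s2_p + 1 / s2_s)
post_mean = post_var * (m_p / s2_p + s / s2_s)

def kl_q_posterior(mu):
    """KL( N(mu, s2_q) || N(post_mean, post_var) ) in closed form."""
    return (math.log(math.sqrt(post_var / s2_q))
            + (s2_q + (mu - post_mean) ** 2) / (2 * post_var) - 0.5)

# The divergence is smallest exactly when mu matches the posterior mean
for mu in [0.0, 1.0, post_mean, 2.0]:
    print(f"mu = {mu:.4f}  KL(q || posterior) = {kl_q_posterior(mu):.4f}")
```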

The Infomax principle

$$ \mu = \arg\max_{\mu} \left\{ I(\tilde{s}, \mu) - H(\mu) \right\} $$

Where:

  1. $\tilde{s}$: sensory data (observed input)
  2. $\mu$: internal representation (e.g. neural encoding or beliefs)
  3. $I(\tilde{s}, \mu)$: mutual information between sensory input and internal representations, i.e. how much knowing $\mu$ reduces uncertainty about $\tilde{s}$
  4. $H(\mu)$: entropy of the internal representation, a measure of how complex or redundant $\mu$ is

In this light, minimizing free energy is equivalent to maximizing the mutual information between sensations and representations (as in Infomax) while penalizing the complexity of those representations, via an entropy or KL-divergence term.
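
As a toy illustration of the quantities in this objective, the sketch below computes $I(\tilde{s}, \mu)$ and $H(\mu)$ for a small discrete joint distribution $p(\tilde{s}, \mu)$; the table itself is made up purely for illustration.

```python
# A toy illustration of the Infomax objective I(s~, mu) - H(mu), assuming a
# small discrete joint distribution p(s, mu) (the table below is invented
# for illustration only).
import numpy as np

# Joint distribution p(s, mu): rows index sensory states s, columns internal states mu
p_joint = np.array([[0.30, 0.10],
                    [0.05, 0.55]])

p_s = p_joint.sum(axis=1)    # marginal p(s)
p_mu = p_joint.sum(axis=0)   # marginal p(mu)

# Mutual information I(s; mu) = sum p(s,mu) * log( p(s,mu) / (p(s) p(mu)) )
ratio = p_joint / np.outer(p_s, p_mu)
mutual_info = np.sum(p_joint * np.log(ratio))

# Entropy of the internal representation H(mu) = -sum p(mu) log p(mu)
entropy_mu = -np.sum(p_mu * np.log(p_mu))

print(f"I(s, mu) = {mutual_info:.4f} nats")
print(f"H(mu)    = {entropy_mu:.4f} nats")
print(f"objective I - H = {mutual_info - entropy_mu:.4f} nats")
```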