These micro-learnings reduce local uncertainty without disrupting your overall internal model.
So free energy is reduced without destabilizing the system: epistemic pleasure without cognitive cost.
High-frequency exposure to high-frequency vocabulary.
I don’t mind surprises, as long as I anticipated being surprised and can MAKE SENSE of them afterward.
Some people's internal models can carry more entropy without collapsing.
Social reward often comes from confidence, not truth. Updating may threaten self-image or social belonging.
The brain seeks minimal expected surprise at the lowest possible cost, updating its internal generative model to better predict sensory input.
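The updating idea above can be sketched as a toy model (my own illustration, not from these notes): a belief about one sensory event, where surprise is the negative log probability the model assigned to what actually happened, and each cheap update nudges the prediction toward the data so later surprises shrink.

```python
import math

def surprise(p, obs):
    # -log of the probability the model gave to what actually occurred
    return -math.log(p if obs else 1 - p)

def update(p, obs, lr=0.2):
    # nudge the prediction toward the observation (a cheap gradient step)
    return p + lr * ((1 if obs else 0) - p)

p = 0.5                      # maximally uncertain prior
stream = [1, 1, 0, 1, 1, 1]  # sensory input: the event mostly occurs
for obs in stream:
    s = surprise(p, obs)     # how surprising was this observation?
    p = update(p, obs)       # model now predicts the input a bit better
```

After the stream, the model's prediction has moved toward the regularity in the input, so the same observation costs less surprise than it did under the flat prior.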
Without priors, you're too sensitive to noise.
With strong priors, you're stable, but you might miss new insights.
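The prior-strength trade-off can be shown with a minimal sketch (assumptions mine, including the `prior_weight` knob): the posterior is a precision-weighted average of prior and evidence, so a weak prior chases every noisy sample while a strong prior barely moves.

```python
def posterior(prior_mean, prior_weight, sample):
    # higher prior_weight = stronger prior = smaller update per sample
    return (prior_weight * prior_mean + sample) / (prior_weight + 1)

noisy_sample = 5.0  # an outlier; the true value is around 0

weak = posterior(0.0, prior_weight=1, sample=noisy_sample)     # jumps toward the noise
strong = posterior(0.0, prior_weight=20, sample=noisy_sample)  # stays near the prior
```

The same mechanism that protects the strong prior from noise would also make it slow to accept a genuinely new regularity: stability and openness trade off through one parameter.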
Maybe the brain auto-wires action commands together with its knowledge predictions, so the actions also come to feel FLUENT.