The brain navigates new spaces by 'darting' between reality and mental maps
This is something I've wondered about when it comes to things like self-driving cars and the difference between good and bad drivers.
When I'm driving I'm constantly making predictions about the future state of the highway and acting on them. For example, before most people change lanes, even without signaling, they'll glance over and drift the car slightly in that direction, up to a full second before they actually move. Or I see two cars headed for a conflict state (trying to occupy the same spot on the highway), so I pivot away from both them and the recovery they'll have to make.
Self-driving cars, as far as I know, are purely reactive. At this point they can't pick up on these cues beforehand and preemptively put themselves in a safer position. Bad/distracted/unaware drivers are not only reactive, they also have a much slower reaction time than a self-driving car.
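To make that lane-change cue concrete, here's a toy sketch; the threshold, the horizon, and the Car fields are all invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical car state: lateral position within the lane and lateral drift.
@dataclass
class Car:
    lane_offset_m: float   # offset from lane center, meters
    drift_mps: float       # lateral velocity, meters per second

def predicted_lane_change(car: Car, horizon_s: float = 1.0) -> bool:
    """Extrapolate the drift forward: a reactive system waits for the actual
    crossing (or the turn signal); a predictive one acts on the trend."""
    return abs(car.lane_offset_m + car.drift_mps * horizon_s) > 0.9

# A car drifting at 0.6 m/s from 0.4 m off-center gets flagged a second early.
print(predicted_lane_change(Car(lane_offset_m=0.4, drift_mps=0.6)))  # True
```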
My theory is that this darting is the mechanism of consciousness. We look inward and outward in a loop, which generates the perception of being conscious in a similar way to how sequential frames of film create the illusion of motion. That "persistence of vision" is like the illusion of persistent, continuous consciousness created by the inward-outward regard sequence. Consciousness is a simple algorithm: look at the world, then look at the self to evaluate its reaction to the world. Then repeat.
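Written out as literally as I can (every name below is mine, purely illustrative):

```python
# Toy version of the loop: look at the world, look at the self, repeat.
def conscious_loop(world_states):
    self_model = {"last_state": None}
    frames = []
    for state in world_states:                                  # 1. look outward
        reaction = {"surprise": state != self_model["last_state"]}  # 2. look inward
        self_model["last_state"] = state                        # 3. update, repeat
        frames.append(reaction)
    # Played fast enough, the discrete frames read as one continuous
    # experience, the same trick as persistence of vision.
    return frames

print(conscious_loop(["rain", "rain", "sun"]))
```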
A particularly interesting part that I did not expect from the title:
> Before the rats encountered the detour, the research team observed that their brains were already firing in patterns that seemed to "imagine" alternate unfamiliar mental routes while they slept. When the researchers compared these sleep patterns to the neural activity during the actual detour, some of them matched.
> “What was surprising was that the rats' brains were already prepared for this novel detour before they ever encountered it,”
> The same brain networks that normally help us imagine shortcuts or possibilities can, when disrupted, trap us in intrusive memories or hallucinations.
There is a fine line between this and wisdom. The Default Mode Network (DMN) is the brain's "simulation machine". When you're not focused on a specific task, the DMN fires up, allowing you to daydream, remember the past, plan for the future, and contemplate others' perspectives.
Wisdom is not about turning the machine off; it's about becoming the director of the movie it's playing. What separates a creative genius envisioning a new world from a person trapped in a state of torment isn't the hardware, but the learned software of regulation, awareness, and perspective.
Wisdom is the process of learning to aim this incredible imaginative power toward flourishing instead of suffering. Saying "trap us in intrusive memories or hallucinations" points to the negative side, but there is a positive side to it all as well.
This matches my hypothesis on déjà vu:
https://kemendo.com/Deja-Vu-Experiment.html
I think it supports my three loops hypothesis as well:
https://kemendo.com/ThreeLoops.html
In effect, my position is that biological systems maintain a synchronized processing pipeline, in which the hippocampal prediction system operates slightly “ahead” of sensory processing, like a cache buffer.
If the processing gets “behind” the sensory input, then you feel like you're accessing memory, because the signal reaches memory and sensory distribution simultaneously, or with a slight lag.
So you're constantly switching between your world map and the input and comparing them, just to stabilize a “linear” experience, something that is a necessity for corporeal prediction and reaction.
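To make the pipeline concrete, here's a toy sketch; the tick counts, the stand-in world model, and the "feels like" labels are all invented for illustration:

```python
from collections import deque

LEAD = 2           # prediction runs this many ticks ahead of the senses
cache = deque()    # hippocampal "cache buffer" of pending predictions

def predict(state):
    return state + 1    # stand-in world model: extrapolate the next state

def perceive(t, sensed, processing_lag=0):
    cache.append((t + LEAD, predict(sensed)))   # pre-compute the near future
    arrival = t + processing_lag                # when sensing actually lands
    while cache and cache[0][0] <= arrival:     # compare prediction vs input
        cache.popleft()
    if processing_lag >= LEAD:
        # Processing fell behind the buffer: the comparison lands at the same
        # moment as memory distribution, which would read as deja vu.
        return "feels like memory (deja vu)"
    return "feels like the present"

for t in range(4):
    print(t, perceive(t, sensed=t))                      # pipeline in sync
print(4, perceive(4, sensed=4, processing_lag=LEAD))     # processing lagged
```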
Going to new places is really therapeutic (barring somewhere obviously adverse), since that 'darting to reality' creates a sense of presence.
I often find myself lost in my mental maps in daily life (living inside my head) unless I'm in a nice novel environment. Meditation helps, however.
It would be interesting to move beyond rats and into humans, binned into those who navigate their local area through an understanding of the street network, independent of any tooling, and those who can't get down the street without mapping software telling them what to do.
Anecdotally, as a member of the former group talking to people in the latter, the contrast is striking. They truly have no idea where places are or how close they are to other places. It is like these network connections aren't being made at all. No sense of scale, either, of how large a place is or how far away another place might be. I imagine this dependency on turn-by-turn navigation, with no spatial awareness, leads to quite different outcomes in terms of modes of thinking.
I mean, when I think about going to a place, I am constructing a mental map of the actual city map: geography, cardinal directions, major corridors and their connectivity along the route, rough estimates of distance, etc. My CPUs are being used, no doubt. For others, though, it is like a blankness in that wake. CPUs idle. Follow the arrows. Who knows where north is? What is a mile?
This takes me to Zen and the Art of Motorcycle Maintenance. Your physical experience of something has to be analysed in accordance with your mental model of it in order to attain a diagnosis (in the book it was a motorcycle engine).
My take on this, especially in regard to debugging IT issues, is that you have to constantly verify and update your mental model (check your premises!) in order to weed out problems.
The way it is phrased, it looks like a precomputed model confronted with real data. So... like our current AIs, except with incremental continuous training (accumulated experience)?
And dreams are simulation-based training to make life easier and decision-making more efficient?
What kind of next level machinery is this?! ;D
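The analogy can be made literal with a toy: wake-time incremental updates plus sleep-time replay. Everything below (the one-parameter model, the rates, the noise) is invented for illustration:

```python
import random

random.seed(0)
w = 0.0        # one-parameter "world model"
replay = []    # remembered experiences

def live_step(x, y, lr=0.1):
    """Waking: predict, compare against reality, nudge the model (online)."""
    global w
    w += lr * (y - w * x) * x    # incremental continuous training
    replay.append((x, y))

def dream(epochs=200, lr=0.05):
    """Sleep: replay stored experience to consolidate the model offline."""
    global w
    for _ in range(epochs):
        x, y = random.choice(replay)
        w += lr * (y - w * x) * x

for _ in range(200):             # a day of experience of y = 2x + noise
    x = random.uniform(-1, 1)
    live_step(x, 2 * x + random.gauss(0, 0.1))
dream()                          # overnight consolidation
print(round(w, 2))               # ends up close to 2.0
```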
I wonder if this also relates to playing music.
There was a neural net paper like this that generated a lot of discussion on HN, but that I haven't been able to find since. (I probably downloaded it, but that'll teach me to always use Zotero, because academic paper filenames are terrible.)
It was about replacing backprop with a mechanism that checked outcomes against predictions and adjusted only the parameters that deviated from the predictions, rather than the entire path. It wasn't a win for digital machines (it isn't any more efficient there), but it worked on analog models. If anybody remembers this, I'd appreciate the link.
I might be garbling the paper because it's from memory and I'm not an expert, but hopefully it's recognizable.
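In case it jogs anyone's memory, here is my loose reconstruction as a predictive-coding-style toy; the actual paper may have worked quite differently, and every name and number below is mine:

```python
import numpy as np

# A latent layer predicts the sensory layer below; inference and learning are
# both driven only by the local prediction error, with no end-to-end backprop.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))    # latent (4) -> sensory (8) weights

def settle_and_learn(x0, steps=50, dt=0.1, lr=0.02):
    x1 = np.zeros(4)                      # latent state, inferred per sample
    for _ in range(steps):
        e0 = x0 - W @ np.tanh(x1)         # local prediction error
        x1 += dt * ((1 - np.tanh(x1) ** 2) * (W.T @ e0) - x1)
    W[...] += lr * np.outer(e0, np.tanh(x1))    # local Hebbian-like update
    return float((e0 ** 2).mean())

patterns = [rng.normal(size=8) for _ in range(3)]
print("before:", sum(settle_and_learn(p) for p in patterns))
for _ in range(300):
    for p in patterns:
        settle_and_learn(p)
print("after:", sum(settle_and_learn(p) for p in patterns))   # error drops
```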