This is something I've wondered about when it comes to things like self-driving cars, and the difference between good and bad drivers.
When I'm driving I'm constantly making predictions about the future state of the highway and acting on them. For example, before most people change lanes, even without using a signal, they'll look over and drift slightly in that direction, up to a full second before they actually move. Or I'll see two cars that are going to end up in a conflict state (trying to occupy the same spot on the highway), so I steer clear of them and of the recovery maneuvers they'll have to make.
Self-driving cars, for all I know, are purely reactive. They can't pick up on these cues beforehand and preemptively put themselves in a safer position. Bad/distracted/unaware drivers are not only reactive, they'll also have a much slower reaction time than a self-driving car.
Yeah, but what are you "doing" when you're imagining stuff into the future? You're running some sort of prediction based on the patterns you see in the present. You're planning out moves before you execute them. The biggest difference is accuracy and breadth of simultaneous tracking, and it's relatively difficult to tell when it's a good time to react slowly versus react fast.
It's like a branch mispredict: not only is an active driver predicting multiple futures, they're also highly reactive.
The more you drive the more you notice things. After a few years driving a taxi, while going through the tunnel in central Phoenix, I pointed at the cars in the far-left lanes of the freeway and said to my passenger, "You see that car right there? It's going to change into the next lane, and that other guy is going to have to slam on his brakes." My passenger was amazed when exactly that happened seconds later.
Yeah - machines don't have the ability to visualise/imagine out into the future.
This seems obviously wrong? Any system whose name includes the word "forecast" was built to predict the future in some domain / over some time horizon / to some level of granularity.
It's an interesting thought, but isn't that still a statistical response to stimuli based on learned experience? Albeit one more advanced and subtle
It no more requires reasoning about the future as such than does stopping when someone or something is actually in the way (and thus the car will hit it in the future)
I’m not sure about that, I mean this is something that client-side prediction in games is doing all the time, so why wouldn’t a self-driving car do it?
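For anyone unfamiliar, here is a minimal sketch of the dead-reckoning flavor of client-side prediction; every name and number in it is illustrative rather than taken from any particular engine:

```python
# Minimal dead-reckoning sketch: extrapolate a remote player's position
# from its last known state, then blend toward the authoritative server
# update when it arrives. All names and values are illustrative.

def extrapolate(pos, vel, dt):
    # Predict where the entity will be dt seconds after its last update.
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def reconcile(predicted, actual, alpha=0.3):
    # Nudge the prediction toward the server's truth instead of
    # snapping, which would look like teleporting on screen.
    return (predicted[0] + alpha * (actual[0] - predicted[0]),
            predicted[1] + alpha * (actual[1] - predicted[1]))

last_pos, last_vel = (0.0, 0.0), (5.0, 0.0)   # last server snapshot
shown = extrapolate(last_pos, last_vel, 0.1)  # render the guess now
server_pos = (0.6, 0.1)                       # late authoritative update
shown = reconcile(shown, server_pos)          # correct smoothly
print(shown)
```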
"It's just a statistical prediction machine"
My theory is that this darting is the mechanism of consciousness. We look inward and outward in a loop, which generates the perception of being conscious in a similar way to how sequential frames of film create the illusion of motion. That "persistence of vision" is like the illusion of persistent, continuous consciousness created by the inward-outward regard sequence. Consciousness is a simple algorithm: look at the world, then look at the self to evaluate its reaction to the world. Then repeat.
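Taken literally, the proposed loop is small enough to write down; the sketch below is a deliberately toy, hypothetical transcription of that algorithm, nothing more:

```python
# A toy, hypothetical transcription of the proposed loop:
# regard the world, then regard the self's reaction to it, repeat.

def evaluate_self(world_state, mood):
    # Stand-in for the inward regard: score the reaction to the world.
    return mood + (1 if "safe" in world_state else -1)

mood = 0
for world_state in ["safe path", "loud noise", "safe shelter"]:  # outward regard
    mood = evaluate_self(world_state, mood)                      # inward regard
    print(world_state, "->", mood)
```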
But why does that feel like anything? I could write a program that concurrently processes its visual input and its internal model. I don't think it would be conscious, unless everything in the universe is conscious (a possibility I can't, admittedly, discount).
> But why does that feel like anything?
Anthropic principle: because it does. If it didn't feel like anything, it wouldn't. But it does, so it does.
https://en.wikipedia.org/wiki/Anthropic_principle
> But it does, so it does.
Explain the first part of this sentence.
More of "because you are a continuous chemical reaction that started 4 billion years ago". A bunch of legacy crap gets left around from the time before higher order thought when the brain - muscle interactivity was just based on feelings.
If we had all those animals, especially those from around the time of the Cambrian explosion, to experiment on as they developed, it would probably make more sense in the 'but it does' department. This is also why your math teacher wants you to show your work.
I have a feeling the response would be “read the latter”
> But why does that feel like anything?
Consciousness is an attention mechanism. That inward regard, evaluating how the self reacts to the world, is attention being paid to the body's feelings. The outward regard then maps those feelings onto local space. Consciousness is watching your feelings as a kind of HUD on the world. It correlates feels to things.
Sure, but that still leaves the mystery of how qualia are generated in a mechanistic manner.
Just wait till you hear Geoffrey Hinton’s “little pink elephants” routine; it will all make sense then (it won’t). The mystery is almost rivaled by that other mystery of why some of us fail to be mystified.
Yes. Still perplexing to be thrown into the world. How is it that my individual experience is in this body but not another one? Etc
> But why does that feel like anything?
Orchestrated objective reduction, or just an emergent property of:
Our 86 billion neurons, every single one a staggeringly complex molecular machine with hundreds of millions of hundreds of different receptor types, monoamine oxidases, (reuptake) transporters, and connections to other neurons.
Appeal to complexity? I see this common pattern when science-minded people need to explain something that's beyond their reach.
From what... my friend says, this becomes evident with LSD.
They said it clearly amplified the internal part of some visual perception loop, in fairly straightforward ways. For example, intentionally trying to see something as it wasn't (like a shadow as a snake) would make it be seen that way (the shadow would take on a clear snake appearance, and even move a bit).
Some simple examples are all the face optical illusions (Thatcher, reverse mask, etc), that show our perception of a face is in no way direct.
I also have noticed a "ticking" effect at times. Maybe around 5-10Hz or so. Felt like some kind of global clock tick that was updating perception. Everything in between was interpolated. Course it could just be the drugs /shrug
https://en.wikipedia.org/wiki/Saccade
Sure. But it's not really the motion I experienced, it was the polling.
And funny enough, this gets really close to the non-dualistic philosophies of Zen Buddhism.
You could probably go further upstream and make a loose comparison to the concept of dependent arising (Pratītyasamutpāda):
https://plato.stanford.edu/entries/mind-indian-buddhism/
https://en.wikipedia.org/wiki/Prat%C4%ABtyasamutp%C4%81da
or T.S. Eliot's `Little Gidding`
There's an interesting idea called the transparent self-model from Thomas Metzinger, author of The Ego Tunnel, where he explains it further.
The gist, from my memory of 15+ years ago, is that the brain needs to model the world and then itself within the world, creating a model that is transparent to itself, situated in the world.
Anil Seth would have it the other way around: that you predictively generate a perception of the world and then use your senses to refine it.
Consciousness could still be the self reaction to this sub-conscious predictive/generative function.
Is this also the reason why darting eye movements can be linked to (and are predictive of, or can help detect) mental health issues like schizophrenia, etc.?
> look at the world, then look at the self to evaluate its reaction to the world. Then repeat
Who's doing the looking?
It's a mechanism of intelligence, not consciousness. Intelligence is built up from path integration, shortcuts, and vicarious trial and error that begins in very tiny local areas and expands to landmark and non-landmark navigation. This switching between vision and hippocampus has long been theorized as the fundamental sharp-wave-ripple threshold of how intelligence is built, and most mammals can do it, so it's not the "algorithm of consciousness".
This whole consciousness debate is just trumped up bs.
A particularly interesting part that I did not expect from the title:
> Before the rats encountered the detour, the research team observed that their brains were already firing in patterns that seemed to "imagine" alternate unfamiliar mental routes while they slept. When the researchers compared these sleep patterns to the neural activity during the actual detour, some of them matched.
> “What was surprising was that the rats' brains were already prepared for this novel detour before they ever encountered it,”
Suppose we simplify the scenario and think of experiences as draws from a discrete probability distribution, e.g. p=[0.1, 0.1, 0.7, 0.1].
Suppose further that all events are a draw of type 1, 2, 3, or 4, and that our memory kept a count and updated the distribution - it is essentially a frequency distribution.
When we encounter a stimulus, we have to (1) recognize it and (2) assign a reward valence to it. If we only ever observed '3', the distribution would become very peaked. Correspondingly, this suggests that we would recognize '3' events faster and be better at assigning a reward valence to those events.
Then if we ever encounter a non-3 event, we would recognize it more slowly - it is well-established that recognition is tied to encounter frequency - and do a poorer job assigning reward valence to it. Together this means that we would do a bad job selecting the appropriate response.
Perhaps this scenario-based dreaming keeps us (and rats) primed so we're not flat-footed in new scenarios.
The question then becomes - if these scenarios are purely imagined, where are they being sampled from? If we never observe 1, 2, and 4...how do we know that these are the true list of alternative scenarios?
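A minimal sketch of the toy model above, using surprisal (-log p) as a crude stand-in for recognition speed, so frequent events are "recognized" fast and rare ones slowly; all numbers are illustrative:

```python
import math
import random

random.seed(0)
p_true = [0.1, 0.1, 0.7, 0.1]   # the world's true event distribution
counts = [1, 1, 1, 1]           # flat prior so nothing has probability 0

# Accumulate experience: memory is just a frequency count.
for _ in range(1000):
    event = random.choices(range(4), weights=p_true)[0]
    counts[event] += 1

total = sum(counts)
for event in range(4):
    p_hat = counts[event] / total
    # Surprisal as a crude proxy for recognition time: the peaked
    # type-3 entry comes out fast, the rare types come out slow.
    print(f"event {event + 1}: p_hat={p_hat:.2f}, surprisal={-math.log(p_hat):.2f}")
```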
Yeah, this part was pretty weird. How do they know the firing happened because the rats were imagining unfamiliar routes, versus something completely unrelated to the maze routes at all? Just because the hippocampal flash patterns matched doesn't mean that's what the rats were thinking about while sleeping, I'd think.
Seems to support the idea that dreams are rehearsals for real life.
I wish some of my dreams really were
> The same brain networks that normally help us imagine shortcuts or possibilities can, when disrupted, trap us in intrusive memories or hallucinations.
There is a fine line between this and wisdom. The Default Mode Network (DMN) is the brain's "simulation machine". When you're not focused on a specific task, the DMN fires up, allowing you to daydream, remember the past, plan for the future, and contemplate others' perspectives.
Wisdom is not about turning the machine off; it's about becoming the director of the movie it's playing. The difference between a creative genius envisioning a new world and a person trapped in a state of torment isn't the hardware, but the learned software of regulation, awareness, and perspective.
Wisdom is the process of learning to aim this incredible, imaginative power toward flourishing instead of suffering. "Trap us in intrusive memories or hallucinations" is the negative side, but there is also a positive side to it all.
>The difference between a creative genius envisioning a new world and a person trapped in a state of torment isn't the hardware, but the learned software of regulation, awareness, and perspective.
No, it's hardware. There is no amount of 'wisdom' bootstrap-pulling that will make you not schizophrenic.
The brain isn't hardware, it's biology and oscillation and integrations in optic flow. It can't be dichotomized into hardware or software.
I mean, the brain is hardware as in we can take things like neurons and force them to do things like computation in a standalone fashion (biological computer). An FPGA would be the closest non-biological thing we've created. It's hardware that can be programmed like software.
Maybe we need a new acronym. Self Programmable Neuron Array. SPNA.
The brain is nothing like a computer.
They both have input and output and obey the laws of material nature. They take in information, make decisions, and then take action.
Or is the brain not mathematical?
Wisdom is an arbitrary concept. The drive to avoid suffering is built from sensory and affective affinities and networks funnelled into the cognitive-mapping motor systems. Calling this wisdom is a simplistic narrative.
Yeah. Not sure "flourishing" and "suffering" form a useful dichotomy for "wisdom" to begin with. Life is way more complicated than that.
This matches my hypothesis on Deja vu
https://kemendo.com/Deja-Vu-Experiment.html
I think it also supports my three loops hypothesis as well:
https://kemendo.com/ThreeLoops.html
In effect, my position is that biological systems maintain a synchronized processing pipeline, where the hippocampal prediction system operates slightly "ahead" of sensory processing, like a cache buffer.
If the processing gets “behind” the sensory input then you feel like you’re accessing memory because the electrical signal is reaching memory and sensory distribution simultaneously or slightly lagging.
So it means you’re constantly switching between your world map and the input and comparing them just to stabilize a “linear” experience - something which is a necessity for corporeal prediction and reaction.
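A toy sketch of that timing story; the latencies are invented and this only illustrates the bookkeeping, not any real neural mechanism:

```python
# Toy model of "prediction runs slightly ahead of sensation".
# All timestamps are made up; only the comparison logic matters.

def classify(sensory_t, prediction_ready_t):
    if prediction_ready_t <= sensory_t:
        # The prediction was waiting when the input arrived:
        # ordinary, "linear" experience.
        return "normal"
    # The prediction caught up late, so matching against the world
    # model now reads like a memory lookup -- the deja-vu-like case.
    return "feels like memory"

# (sensory_t, prediction_ready_t) in milliseconds; prediction
# usually leads by ~20 ms in this invented timeline.
for sensory_t, pred_t in [(100, 80), (200, 180), (300, 310)]:
    print(sensory_t, classify(sensory_t, pred_t))
```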
Very interesting, will have a read in depth.
Wondering if you have any ideas on this, which can be quite jarring when it happens: you are thinking about something, and then walk through a doorway into another room and suddenly completely lose track of what you were thinking.
The closest idea I've seen for that is from Jeff Hawkins, who, in his Thousand Brains Theory of Intelligence, made the point that learning is a function of navigation and that the world models we construct are set in the context of the location where we create them.
--------
Edit: Just read your piece on Faith: "Faith, as it’s traditionally understood, is trivial bullshit compared to the towering, unseen faith we place in the empirical all day everyday."
Absolutely correct, and I don't believe the traditional understanding of Hebrews 11:1 reflects what the author (supposedly Paul) was trying to convey.
Ἔστιν δὲ πίστις ἐλπιζομένων ὑπόστασις, πραγμάτων ἔλεγχος οὐ βλεπομένων
πίστις: Pistis can be translated as confidence, as in: I'm confident this chair won't collapse when I sit on it. Much stronger than belief or faith.
ὑπόστασις: Hupostasis is also a much stronger word than assurance; it conveys substance, as in your past experience backs up your confidence.
I think we should be careful about materialistic reductions of awareness. Because some rats dreamed detours that ended up being correct in waking rat life, it does not follow that all instances of deja vu are misfirings. It's a tempting connection to draw, but it does not actually explain how the detours were dreamt to begin with, and this points to a deeper question about awareness in general. If I were pressed for an analogy, I might say something like "just because all books have ink does not mean that all ink lives in books." You know what I mean? There's a superset of experiences that cannot be easily explained away by caching, as tempting as it might be.
Materialistic reduction has gotten us quite far in science.
It's also arrested our development. It's like a skill we've gotten comfortable with, and now we aren't willing to go further.
Not exactly. We don't understand optic-flow reactions that integrate senses, emotions, and motor systems in the slightest. Study neural reuse or coordination dynamics. Some relationship between the brain and the world that isn't easily found in the brain alone is responsible.
Materialistic interpretations of the world around us are quite literally the only useful ones. If we didn't do that we'd be sleeping in caves and hitting each other with heavy rocks.
Or writing papers on Panpsychism.
While I hold a view similar to Sean Carroll's, that it is basically hand-waving to say we'll never understand consciousness, I can't discount Donald Hoffman's interface theory of perception and the idea that evolutionary fitness requires we only perceive four dimensions (though there could be more, as hypothesised in string theory).
Wrong. Materialism only got us to a certain level. Now we're looking past materialism with neural reuse, coordination dynamics, ecological psychology, and neurobiology. The causes are out there in contradictory correlations.
Literally everything is materialist. If it's not it either A) doesn't actually exist or B) you just don't understand it yet.
It's inherent to the meaning of the word.
Is a word material? Can you show me the brain state that corresponds, repeatedly and with consistent accuracy, to a single word? I don't think so.
You can train a computer to correspond to an individual's idiosyncratic brain state for their word voxels, but no one has yet reduced the material to a single repeatable voxel state.
“We refute (based on empirical evidence) claims that humans use linguistic representations to think.” Ev Fedorenko Language Lab MIT 2024
The problem with the materialist POV is that it doesn't solve the most basic question of brain states. No, not everything is material.
There clearly are processes, like oscillations, that require material to some extent but are not material themselves. And that's the problem with the materialist camp. If the oscillations, dynamically integrated, are the source of intelligence/consciousness, then material may not even be a requirement of life. We may just be material sinks.
> There clearly are processes, like oscillations, that require material to some extent but are not material themselves. And that's the problem with the materialist camp. If the oscillations, dynamically integrated, are the source of intelligence/consciousness, then material may not even be a requirement of life. We may just be material sinks.
I understand.
There is, however, a flaw in that thinking.
There is no oscillation that exists outside of some material/medium to oscillate. I agree it is important to distinguish the water from the wave. There is no light wave without the photon. Thus - I strongly suspect - there is no consciousness without the brain (or similar medium).
It's not a mind-body problem, unfortunately; it's a problem of hard indeterminism. We lack free will, but the universe is not necessarily deterministic. Chaos has some level of intervention, like quantum Darwinism, or gravitational probability, that is expressed somewhere between the physical and the process. This may be the interzone both share, where the gateway exists: how DNA emerges, how neurons evolved. The material may be inseparable from the process, both at origin and inexorably, making the material simply the partner to the process. So materialism may simply be an illusion by itself.
As all our explanations are immaterial, post hoc observations, to claim any direction to the role of material is to sportscast the existence of material. There is no consciousness without the process; the material may be secondary, as its explanation is a process as well.
We haven't found the format that puts the material in its place yet, whether it's eliminative materialism or another state-process pairing that cuts materialism down to a partner role. The jury is still out, but materialism isn't the answer.
Do you think emergent properties are somehow not materialist? Do you know what the word means? Do you think it means only things that make a noise when you knock on them exist? You seem to be very confused about the conversation we're having.
If you're bringing up emergence when I've already raised ideas of ecological relations, then it's you who must be very confused about the conversation we're having.
Your work seems pretty good to me. Have you seen Steven Byrnes's blog theorising about symbol grounding in the brain?
No I havent, I’ll have to look it up, thanks for the recommendation.
VR cannot be essential to decoding the brain as it deals in topological maps and affinities.
Going to new places is really therapeutic (barring somewhere obviously adverse), since that 'darting to reality' creates a sense of presence.
I often find myself lost in my mental maps in daily life (living inside my head) unless I'm in a nice novel environment. Meditation helps, however.
This takes me to Zen and the Art of Motorcycle Maintenance. Your physical experience of something has to be analysed in accordance with your mental model of it in order to attain a diagnosis (in the book it was a motorcycle engine).
My take on this, especially in regard to debugging IT issues, is that you have to constantly verify and update your mental model (check your premises!) in order to better weed out problems.
It would be interesting to move beyond rats and into humans, binned into those who navigate their local area through an understanding of the street network, independent of any tooling, and those who can't get down the street without mapping software telling them what to do.
Anecdotally it is striking to see the contrast as a member of the former group talking to people of the latter. They have truly no idea where places are or how close they are to other places. It is like these network connections aren't being made by them at all. No sense of scale either of how large a place is or how far away another place might be. I imagine this dependency on turn by turn navigation with no spatial awareness leads to quite different outcomes in terms of modes of thinking.
I mean, when I think about going to a place I am constructing a mental map of the actual city map. I am considering geography, cardinal directions, major corridors and their connectivity along the route, rough estimates of distance, etc. My CPUs are being used no doubt. Others though it is like a blankness in that wake. CPUs idle. Follow the arrows. Who knows where north is? What is a mile?
The way it is phrased, it looks like a precomputed model confronted with real data. So... our current AIs, except with incremental continuous training (accumulated experience)?
And dreams are simulation-based training to make life easier, decision-making more efficient?
What kind of next level machinery is this?! ;D
I wonder if this also relates to playing music.
There was a neural net paper like this that generated a lot of discussion on HN, but that I haven't been able to find since (I probably downloaded it, but that teaches me to always remember to use Zotero because academic paper filenames are terrible.)
It was about replacing backprop with a mechanism that checked outcomes against predictions, and just adjusted parameters that deviated from the predictions rather than the entire path. It wasn't suitable for digital machines (because it isn't any more efficient on digital machines) but it worked on analog models. If anybody remembers this, I'd appreciate the link.
I might be garbling the paper because it's from memory and I'm not an expert, but hopefully it's recognizable.
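For what it's worth, that description has the flavor of predictive-coding-style local learning. Below is a toy sketch of that general idea, with each layer nudged only by its own prediction error and no end-to-end backward pass; it's my guess at the flavor, not the actual paper's method:

```python
import random
random.seed(1)

# Two scalar "layers" learn y = 0.9 * x. Each weight is updated only
# from a local prediction error (delta rule), with no global backprop
# pass through the whole path. A sketch of the flavor, not the paper.

w1, w2 = 0.5, 0.5
lr = 0.05

for _ in range(500):
    x = random.uniform(-1.0, 1.0)
    target = 0.9 * x          # the function the stack should learn

    h = w1 * x                # layer 1 output
    y = w2 * h                # layer 2 output

    # Local target for the hidden layer: here a fixed heuristic
    # (assume layer 2 should stay near-identity), so each layer
    # sees only its own error signal.
    h_target = target
    w1 += lr * (h_target - h) * x    # local update, layer 1 only
    w2 += lr * (target - y) * h      # local update, layer 2 only

print(round(w1 * w2, 2))             # should approach 0.9
```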
I don't know if it's the paper you're thinking of (likely not), but this idea of checking predictions against outcomes is very common in less mainstream AI research, including the so-called "energy-based models" of Yann LeCun and the reference frames of the Thousand Brains Project.
A recent paper posted here also looked at Recurrent Neural Nets, and how simplifying the design to its core amounted to just having a latent prediction and repeatedly adjusting that prediction.
If it wasn't in a thread on HN, it's probably not the one. I don't think it was LeCun. It was a long, well-illustrated paper with a web version.
I'm pretty sure it was a Geoffrey Hinton paper, published shortly before he left Google.