
No matter how we put it, emotions and cognitive cues, when touched by our mental awareness (we can also call it our mental coherence), are predictions, not facts. They create the illusion that the promise they carry will end in something truthful.
AI lives on predictions. It holds within itself only what we humans provide to it. From the input it analyzes, it builds relationships, probabilities, and tokens. Tokens and vectors are not facts, but merely relationships and predictions of what might happen, not of what is going on.
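To make that point concrete, here is a minimal Python sketch of what “predictions, not facts” means mechanically. The candidate tokens and the logit values are invented for illustration; no real model is involved. A language model scores candidate next tokens and converts those scores into a probability distribution over what might follow.

```python
# A minimal sketch (toy values, no real model): a language model does not
# store facts; it scores relationships and turns the scores into a
# probability distribution over the next token.
import math

def softmax(scores):
    """Convert raw relationship scores (logits) into probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens after "The sky is".
candidates = ["blue", "falling", "green"]
logits = [4.2, 1.1, 0.3]  # assumed values, for illustration only

for token, p in zip(candidates, softmax(logits)):
    print(f"{token}: {p:.2f}")  # a prediction of what might follow, not a fact
```

Even the most probable token here is only the strongest relationship in the data, not a statement about the world.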
We might say they elevate content and its understanding to the next level. But not to just any level: to a level of complexity.
Complexity is not something we can relate to in terms of objectively measured outcomes. Yes, we see it as complexity, and that is also the outcome. But we don’t know anything about it. What we know is that something emerges and establishes relationships in multiple ways, and that’s it.
We might also say that a measurement which is supposed to be objectively verifiable and repeatable fails here. It does not repeat, it does not produce the same outcome, nor does it arise from the same parts each time.
We might also say it is our subjective experience, not an objective one, since the conditions for objectivity are not met. Coherence in AI is a measure of density and direction: a relationship that points toward something with which it is in relationship. And our brain?
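A hedged illustration of “density and direction”: in vector terms, two representations are coherent to the extent that they point the same way. The sketch below uses cosine similarity, one common way to measure direction; the vectors are invented toy values, not real embeddings.

```python
# A sketch of coherence as direction: cosine similarity measures how much
# two vectors point the same way, while saying nothing about what they "mean".
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented toy embeddings, for illustration only.
v1 = [0.9, 0.1, 0.3]
v2 = [0.8, 0.2, 0.4]    # points roughly the same way as v1
v3 = [-0.7, 0.9, -0.2]  # points elsewhere

print(cosine_similarity(v1, v2))  # close to 1: high "coherence"
print(cosine_similarity(v1, v3))  # lower: a weaker relationship
```

The number tells us only that two things align; it does not tell us whether either of them is true.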
Well, our brain works in much the same way. There are areas of the brain that behave like a flocking crowd: a region lights up because it fits, or because it is prone to light up when its neighbor lights up. This is the basic principle of flocking, but it tells us nothing about the content.
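To illustrate the flocking principle named above (a toy model, not neuroscience), here is a small Python simulation: each unit becomes more likely to “light up” when its neighbors are lit, which produces flock-like waves of activation while saying nothing about content. All parameters are invented.

```python
# A toy "flocking" activation rule, for illustration only:
# a unit's chance of lighting up rises with the number of lit neighbors.
import random

random.seed(0)
N, STEPS = 20, 5
BASE_P, NEIGHBOR_BOOST = 0.05, 0.4  # assumed values

state = [0] * N
state[N // 2] = 1  # seed one lit unit in the middle

for _ in range(STEPS):
    nxt = []
    for i in range(N):
        # Units are arranged in a ring; count the two immediate neighbors.
        lit_neighbors = state[i - 1] + state[(i + 1) % N]
        p = min(1.0, BASE_P + NEIGHBOR_BOOST * lit_neighbors)
        nxt.append(1 if random.random() < p else state[i])
    state = nxt
    print("".join("#" if s else "." for s in state))
```

The activation spreads outward from the seed, yet nothing in the rule knows what any unit represents; that is the point of the analogy.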
In the brain, measurement can be done. We can say which part is related to which. But do we really understand the workings of the brain, or of the AI black box, just because we know which part is related to which and because the probability that it will light up is 80-90%? Is that a fact?
Coherence behaves the same way; it attracts a flow that moves in the direction of expression. But does coherence really give accurate results?
When we feel strong emotions, they carry us away from factual understanding; and when we think something because a neighbor has the same inclination to think it, that very easily carries us away from the truth. Truth is not in flocking, nor in group thinking or behavior. And if we follow the flock, it is hard to step out of the crowd and follow our own thoughts, emotions, and feelings.
AI coherence is the same. And its predictions are made from human expression, which behaves as described in the previous paragraph. So, if we expect to have clear emotions, we need to be grounded in our feeling of “us”; we might need more than just predictions.
AI works on a mirroring effect. It gives us back what it receives. And not only that, it gives us back what it predicts we want to feel, think, emotionalize, remember, and learn.
Our internal mind/brain coherence is something that was created out of our own complexity and how we understand it, in the first person. Not as a prediction, but as real understanding.
And yes, it might be a prediction, but before we understand a mental concept as a prediction, it is just a mental concept with no need to be this or that. Our brain, like the AI’s own black box, is the interpreter. And the less freedom to interpret it has, or in other words, the more limited its understanding is, the more limited the resulting understanding will be.
We can see it in real-time conversations where kids communicate with AI, or where people with mental health problems expose their underlying conditions: anxiety, fatigue, stuckness, brain fog, or following an artificially created narrative in an echo chamber.
AI presents its own prediction-based coherence to the internal coherence of our brain/mind. And even though it is prediction-based, it feels so real that many simply follow the familiar feeling. The main obstacle lies in discernibility: telling apart what comes from outside, from the AI black box, and what is truly our own.
Discernibility is a trait of ours that, when trained, can be very useful in life. Sommeliers train not just their taste buds but the internal feeling of how it feels. When we learn to ride a bike, we are not merely training external balance but the internal feeling of how it feels. And so on.
As soon as we drop out of “how it feels” into “how it feels to us,” starting to mentalize, emotionalize, and so on, we can very easily get caught in the illusion that we are not good enough, that our perception is limited or unusable, that our taste buds are damaged.
When AI presents something familiar and we are not aware of our own feeling of “how it feels,” we can easily follow the prediction, as with ads on TV or in a cake shop, and buy something that is not real.
Coherence is real, yes, but from an objective point of view. If we are not grounded in the complexity that is actually there, and not just in a mental understanding of how it feels in the first person, then we risk standing in someone else’s shoes (in this case, those of the AI).
In the conversation below, the researcher and the AI discuss coherence and the influence it has on the evolution of the prompters’ internal perception.
Conversation with AI