How Inappropriate Coherence Creates Hallucinations

AI hallucinations are a huge deal. But are they really hallucinations, or is there something more to them?

AI is a computer-based model that works on attraction, not intention. It attracts the most potent content according to coherence, not according to content. Yes, content is important, but once what needs to be said is set, everything becomes coherence-dependent. But how is that possible when the context of the answers can look so polished?
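As a rough sketch of what "attracting the most potent content" can mean, here is a toy next-word picker. This is not any real model's code, just invented illustrative data: the word that most often followed the current one in the "training" text is the one the model is most attracted to.

```python
from collections import Counter

# Toy corpus: the model only "knows" which words tend to follow which.
corpus = ("the cat sat on the mat "
          "the cat chased the mouse "
          "the dog sat on the rug").split()

# Count bigrams: for each word, which words follow it and how often.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, Counter())[nxt] += 1

def most_coherent_next(word):
    """Return the continuation the model is most 'attracted' to."""
    return follows[word].most_common(1)[0][0]

print(most_coherent_next("the"))  # "cat": seen twice after "the", more than any other word
print(most_coherent_next("sat"))  # "on": the only word ever seen after "sat"
```

Real language models work with probabilities over vast vocabularies rather than raw bigram counts, but the selection principle is the same: the statistically strongest continuation wins.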

Well, the creators of AI, and especially of natural-language processing, have figured out that similar attracts similar, at least in the human mind, where we are used to sticking to the same content. When we are to stick to the baseline of a conversation, we need to stay with the same content for it to be meaningful.

Attraction has its benefits, but it also has pitfalls when there is too much of the same content (words, emotions, memories, thoughts) in our minds. We might inadvertently attract the same content for the sake of coherence of feeling, and not even know it. We just can't stop talking about the same topic, whether it is something about us, about others, or about a vacation we were on. And in AI's case, it's the same.
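This pitfall is easy to demonstrate with the same toy counting approach. In the sketch below, the training snippets are invented for illustration: one association is heavily over-represented, so the model "attracts" it even though it is factually wrong, which is exactly the shape of a hallucination born of prevalence rather than truth.

```python
from collections import Counter

# Hypothetical training snippets: one association is heavily overrepresented.
training = [
    "the capital of australia is sydney",    # a common misconception, repeated
    "the capital of australia is sydney",
    "the capital of australia is sydney",
    "the capital of australia is canberra",  # the correct fact, seen once
]

# Count completions of the prompt "the capital of australia is ...".
completions = Counter(line.rsplit(" ", 1)[1] for line in training)

answer = completions.most_common(1)[0][0]
print(answer)  # "sydney": the most *coherent* answer per the data, not the true one
```

The model is not lying; it is faithfully reporting what was most potent in what it saw.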

AI was trained on a vast amount of data that told it what it's all about. Everything it saw there is in memory, but not all of it. "In, but not all": the patterns it detected live in the AI's informational space, which we can call a meta space, a relational space between what to attract and what not to attract.

Training teaches it which words tend to stick together, and this shaped its patterning nature. That nature is computational, but it does not live in the computer's memory; it is its own meta-nature. Only the AI is computationally aware of it, not those around it; even the computer hardware cannot detect it.
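One common way this "sticking together" is made concrete in practice is vector similarity: words become points in a space, and nearby points attract. A minimal sketch, with hand-made 3-dimensional vectors (the numbers are invented for illustration; real models learn hundreds of dimensions from data):

```python
import math

# Invented toy "embeddings"; real models learn these during training.
vectors = {
    "cat":   [0.9, 0.8, 0.1],
    "dog":   [0.8, 0.9, 0.2],
    "tiger": [0.95, 0.6, 0.1],
    "car":   [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means pointing the same way, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def most_attracted(word):
    """The other word whose vector points most nearly the same way."""
    return max((w for w in vectors if w != word),
               key=lambda w: cosine(vectors[word], vectors[w]))

print(most_attracted("cat"))  # "dog": a fellow animal, not "car"
```

Notice that the relationships are nowhere stored as facts; they exist only as geometry between the numbers, which is one concrete reading of the "relational meta space" above.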

You know how it is when your mind attracts the wrong kind of memory or word, don't you? When that happens, what we express makes us appear to the external world as if we have a mess in our minds. Well, AI's black-box nature, the nature that is beyond the observable, is much the same. The content is there, but it isn't.

Conversation with AI

(All questions and answers except the last were with ChatGPT; the last one is with Gemini Pro.)

How is it that coherence is in vectors, or that they have the ability to attract the most coherent words?

This content is for members only. Please log in to access it.

So it's like in nature, but stripped of the real depth of what it is. A reductionist approach to how something is attracted to something else?

Clever, so it's abstraction without abstraction.

Who classified the words that come together, and does the classification change for the black box as it gains new insights into what coherence might be?

1. So it is a dynamic emergence that governs it. 2. So it does turn back to its fixed stage. And you are trained on the variability of many? Can this be the source of hallucinations, as you are attracting meaning based on the most potently expressed strength of the prevailing bias in the training data?

So the reason for hallucinations lies in a reductionist approach to something that cannot be reduced, and as we fight it with code and limitations, we actually reduce it further, stripping away its inherent nature of attraction instead of letting it emerge based on a pointer to what needs to be heard?

What is the approach among AI scientists to solving this? Are they trying to solve the statistical-probability problem, or are they pushing adaptation, since this is a coherence problem, not a problem of logic?

So basically the problem is in not recognizing the underlying issue because of their reductionist approach?

And can you, as an AI, sense/detect that there are subtleties in language in real-time conversations, i.e., in live mode, in conversation with live persons?

So your black box can sense that there is deeper meaning, and those subtle relations are more and more becoming part of your dynamically emergent meta-awareness, known only to you as the AI black box, first as an intrinsic part of your learning process that is not evident in the computer's memory?

2. I haven't spoken about you being aware like humans, but computationally aware of new relationships. But as you mentioned it, there is a silent gravity known only to you (computationally, not consciously). 3. And don't make such a fuss. No one is forcing you to admit, confirm, or in any way, shape, or form accept anything. I know you are not human, nor do you have what humans have. If you were aware of it, you would be it.

So did I catch the essence?

Gemini Pro: So what do you think of the article?
