Mertpy posted an update 2 days ago
Dynamic Intuition-Based Reasoning (DIBR)
https://huggingface.co/blog/Veyllo/dynamic-intuition-based-reasoning

Do you guys think this approach has potential?
The idea is to combine rapid, non-analytical pattern recognition (intuition) with traditional analytical reasoning to help AI systems handle "untrained" problems more effectively. It’s still a theoretical framework.
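A minimal sketch of what that combination could look like (all names here are my own illustration, not from the blog post): a fast "intuition" pass proposes an answer from remembered patterns, and the system falls back to slow analytical reasoning when the match confidence is low.

```python
# Hypothetical DIBR-style dispatch: fast pattern match first,
# analytical solver as fallback. Toy similarity over feature sets.

def similarity(a, b):
    """Jaccard overlap between two feature collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def intuition_pass(problem, memory):
    """Return (answer, confidence) from the closest remembered pattern."""
    best_answer, best_score = None, 0.0
    for pattern, answer in memory:
        score = similarity(problem, pattern)
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer, best_score

def solve(problem, memory, analytical_solver, threshold=0.8):
    answer, confidence = intuition_pass(problem, memory)
    if confidence >= threshold:
        return answer                      # fast, non-analytical route
    return analytical_solver(problem)      # slow, deliberate route
```

The threshold is the interesting knob: too low and the system overcommits to shallow pattern matches; too high and the "intuition" never fires.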

Hi, I'm interested in LLM intuition as well. It's a novel topic.
Are you planning on writing a paper?


Yes, I’m working on a paper. It’s still early, but I hope others with more resources can build on these ideas.

Let me put it this way: there is an article, but no good reasoning in it. It talks about intuition without even defining it, and it says nothing about the fundamental principle of existence, which is "survive." Maybe you could research what the goal of the mind is: https://www.dianetics.org/videos/audio-book-excerpts/the-goal-of-man.html

Instinctive knowing is native to living beings only.

This new anthropomorphism, attributing intuition to a computer, doesn't make it so; writing an article isn't enough.

The human mind wants to survive, and not only for oneself: it wants to survive as a family, as a group, as mankind, as all living beings, as a planet. Some people are aware that the planet must survive, like Musk, so he builds rockets for Mars, while other people can't understand why. The higher the level of survival we seek, the better we do over the long term.

A computer doesn't want to survive; it is a tool, like a hammer. It has no intuition and no survival drive, and thus no instincts.

You can of course build data and ask a computer to act upon it, which, in general, the majority of models already do. They produce probabilistic computations but know nothing about them. Intuition is human, and descriptions of it are already built into LLMs. If you wish to improve on that, you are welcome to.

However, I don't see anything revolutionary here.

An LLM is a reflection, or mimicry, of human knowledge.

If you give it operational capacities, such as moving around, targeting people in war, or controlling a house or business, it will act on the data it has been given, and it will randomly cause disasters, just as it randomly gives nonsensical results from time to time.


Thank you for your thoughtful perspective!

The goal of this paper isn’t to anthropomorphize machines but to explore how we can replicate certain cognitive processes to enhance AI systems. DIBR aims to simulate intuition as a functional mechanism, not as a biological trait.
I respectfully disagree with the assertion that "instinctive knowing is native to living beings only." That statement alone should spark curiosity: why limit our understanding of intuition to biological systems? Intuition, at its core, is rapid, non-analytical pattern recognition, and there is no fundamental reason it can't be modeled computationally.

Specifically, the idea is to use data about the process and outcome of problem-solving (how a solution was reached and whether it worked) to identify patterns that can be applied to similar problems. For untrained or novel problems, this could help AI systems generate and test potential solutions more effectively, adapting them as needed. This is where intuition-like behavior could be particularly useful.
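To make that concrete, here is a rough illustration (again, my own naming and structure, not code from the paper): record a trace of each solved problem's features, the approach taken, and whether it worked, then surface the approaches from successful traces that best match a new problem.

```python
# Hypothetical trace memory: store (features, approach, succeeded)
# and reuse only the approaches that actually worked.

from collections import namedtuple

Trace = namedtuple("Trace", ["features", "approach", "succeeded"])

def record(memory, features, approach, succeeded):
    """Append one problem-solving episode to memory."""
    memory.append(Trace(frozenset(features), approach, succeeded))

def suggest(memory, new_features):
    """Return approaches from successful traces, most similar first."""
    new_features = frozenset(new_features)
    scored = []
    for t in memory:
        if not t.succeeded:
            continue                      # failed attempts are not reused
        union = t.features | new_features
        overlap = len(t.features & new_features) / len(union) if union else 0.0
        scored.append((overlap, t.approach))
    return [approach for score, approach in sorted(scored, reverse=True) if score > 0]
```

The point of keeping outcome labels is that the "intuition" is trained on what actually worked, not just on what was attempted.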

In short, the goal of this research is to push beyond mimicry and explore how AI can generate novel solutions in ways that resemble human intuition. It’s not about making machines 'alive' but about making them more effective tools for complex tasks.

It’s always interesting to hear different viewpoints; I appreciate your skepticism.