
Tarun Mittal

Tar9897

AI & ML interests

I believe that LLMs can never get us to AGI. Sure, there may be neat tricks here and there, but creating true consciousness requires something else: a cross between mathematics, philosophy, biology, and computer science. Only then will we get somewhere. I personally think that adapting and emulating emotional learning from the very start, and then improving the model with synthetic data and metadata, would take us further than LLMs, which will run into a wall very soon.

Recent Activity

liked a Space 28 days ago
Tar9897/Qwen-QwQ-32B-Preview
liked a Space about 1 month ago
Qwen/Qwen2.5-Coder-demo
replied to their post 4 months ago
I believe that in order to make models reach human-level learning, serious students can start by developing an intelligent neuromorphic agent. First, we build the agent and have it learn grammar patterns and word categories through symbolic representations; we then move on to teaching it the other rules of the language.

In parallel with grammar learning, the agent would use language-grounding techniques to link words to their sensory representations and abstract concepts. This means the agent learns word meanings, synonyms, antonyms, and semantic relationships from both textual data and perceptual experiences. The result is an agent with a rich lexicon and conceptual knowledge base that underlies its language understanding and generation.

With this basic knowledge of grammar and word meanings, the agent can then learn to synthesize words and phrases to express specific ideas or concepts. Building on this, it would learn to generate complete sentences, which it continuously refines and improves. Eventually it would learn to generate sequences of sentences in the form of dialogues or narratives, taking context, goals, and user feedback into account.

I believe that by gradually learning to improve its responses, the agent would acquire the ability to generate coherent, meaningful, and contextually appropriate language. This would allow it to reason without hallucinating, which LLMs struggle with. Developing such agents would not require much compute, and the code would be simple and easy to understand. It would also introduce everyone to symbolic AI and to building agents that are good at reasoning tasks, thus solving a crucial problem with LLMs.

We have used a similar architecture to make our model learn constantly. Do sign up as we start opening access next week at https://octave-x.com/
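The staged learning described above could be sketched as a toy symbolic agent. This is only a minimal illustration, not the architecture the post refers to: the class name, category labels, and grammar rule below are all hypothetical, and a real system would learn its rules rather than hard-code them.

```python
from collections import defaultdict

class SymbolicAgent:
    """Hypothetical toy agent illustrating the staged approach:
    symbolic word categories -> grounded lexicon -> sentence synthesis."""

    def __init__(self):
        # Stage 1: lexicon mapping each word to a symbolic category.
        self.lexicon = {}
        # Stage 2: grounding mapping each word to associated concepts.
        self.grounding = defaultdict(set)
        # Illustrative hard-coded grammar rule; a real agent would learn this.
        self.grammar = ["DET", "NOUN", "VERB"]

    def learn_word(self, word, category, concepts=()):
        """Learn a word's category and ground it in perceptual concepts."""
        self.lexicon[word] = category
        self.grounding[word].update(concepts)

    def words_of(self, category):
        """Return all known words of a given symbolic category."""
        return [w for w, c in self.lexicon.items() if c == category]

    def synthesize(self, choose=lambda words: words[0]):
        """Stage 3: compose a sentence by filling the grammar rule
        with known words, one per category slot."""
        return " ".join(choose(self.words_of(cat)) for cat in self.grammar)

agent = SymbolicAgent()
agent.learn_word("the", "DET")
agent.learn_word("dog", "NOUN", concepts={"animal", "four-legged"})
agent.learn_word("barks", "VERB", concepts={"sound"})
print(agent.synthesize())  # the dog barks
```

The grounding table is what distinguishes this from pure pattern matching: each word carries links to concepts, so later stages (dialogue, feedback-driven refinement) could reason over meanings rather than surface strings.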

Articles

Organizations

Octave-X
