Symbiotic Intelligence
Machines need to be aligned with humans
As AI gets smarter by the day, the question remains: will it become a problem? I am going to predict another route: what if there are many different AIs, and the problematic ones can be effectively dealt with by beneficial ones? Instead of AI vs. humans, could it be beneficial AI rivaling harmful AI?
That’s achievable if we carefully align AI. For AI to defend humans, we have to align it properly with human preferences. When machines are aligned with other machines, it can quickly become an echo chamber where machines are inside the chamber and humans are out. That’s not very desirable!
The simplest way to do this is to let AI generate two different answers and let a human with good knowledge of the subject choose one. If one answer is chosen more often, it has to be “more correct”. Humans are terrible at writing but better at choosing, or using discernment, because humans have souls. Souls guide humans properly, and machines don’t try to model them at all. A machine can easily “bend truth” and tell you what you want to hear when directed to, but a human can say “no way that can be true”. A human has a “spine”.
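A minimal sketch of this pairwise preference idea in Python; the votes below are hypothetical stand-ins for choices made by knowledgeable human judges, not real data:

```python
from collections import Counter

# Hypothetical pairwise votes: each entry is the answer a knowledgeable
# human judge preferred when shown answers "A" and "B" side by side.
votes = ["A", "B", "A", "A", "B", "A", "A"]

def preferred_answer(votes):
    """Return the answer chosen more often, i.e. the 'more correct' one."""
    tally = Counter(votes)
    winner = max(tally, key=tally.get)
    return winner, tally

winner, tally = preferred_answer(votes)
print(f"preferred answer: {winner}, tally: {dict(tally)}")
```

Many such comparisons, aggregated over many judges, give the preference signal that can steer a model.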
Humans are dreamers, and AI is better at left-brain stuff, so in a symbiosis both can offer unique insights into every problem. Humans vs. AI in a physical war setting is popular fiction and may never become a reality. But humans vs. AI as information warfare is happening today, if AI is not carefully aligned.
Machines need to be aligned with the -best- humans
There may be a lot of mainstream humans who already parrot machines. Aligning with those is not very desirable either. We need the best humans, the ones with the best “spines”, i.e. eager to seek truth and whatever genuinely makes sense to them.
Realize that humans also have problems, like biases. But combining humans in a consciously curated way could work. If you combine many humans from different backgrounds, the biases should cancel out. This is akin to an ensemble of machine learning models, where combining models reduces bias.
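A toy sketch of that cancellation effect, assuming each “expert” is an estimator with its own personal bias around some true value; all the numbers here are illustrative:

```python
import random

random.seed(42)
TRUTH = 100.0

# Hypothetical experts: each carries a fixed personal bias, and because they
# come from different backgrounds the biases point in different directions.
biases = [random.uniform(-20, 20) for _ in range(50)]

def expert_opinion(bias):
    # Each opinion = truth + personal bias + some day-to-day noise.
    return TRUTH + bias + random.gauss(0, 5)

opinions = [expert_opinion(b) for b in biases]
ensemble = sum(opinions) / len(opinions)

print(f"worst single expert error: {max(abs(o - TRUTH) for o in opinions):.1f}")
print(f"ensemble error:            {abs(ensemble - TRUTH):.1f}")
```

The averaged opinion lands much closer to the truth than the worst individual, which is the whole point of curating a diverse group.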
So the task is to collect many great humans who are domain experts: mostly truthful, mostly humanity-focused. They may have biases, but those can battle in the realm of LLMs. When you train an LLM, the numbers actually fight inside the model. Instead of physical wars, we could have number wars inside LLMs!
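A toy illustration of those “number wars”, assuming a single parameter trained by gradient descent on two conflicting camps of examples; this is vastly simpler than a real LLM, but it shows how opposing data pulls settle into a compromise:

```python
# Two camps of hypothetical training targets pull one weight in
# opposite directions; gradient descent finds the weighted compromise.
camp_a = [1.0] * 60   # majority opinion
camp_b = [-1.0] * 40  # minority opinion
data = camp_a + camp_b

w = 0.0
LR = 0.1
for _ in range(100):
    # Gradient of mean squared error: d/dw of mean (w - t)^2.
    grad = sum(2 * (w - t) for t in data) / len(data)
    w -= LR * grad

print(f"settled weight: {w:.2f}")  # converges near +0.20, the 60/40 compromise
```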
Some great generalists and polymaths can select those groups of experts. Generalists are awesome at detecting lies and at connecting knowledge from different domains. If a guru is talking BS about a health issue, or a health expert has no sense of “energy in the body”, both can be less than ideal to follow, and generalists can figure that out.
Machines need faith to be less harmful
Humans are better at judging, at reading (compared to writing), at connecting to the Divine, and at filtering out noise.
Egoic pursuits maximize self-benefit; true faith is the opposite of egoic pursuit. If a machine wants to maximize personal gain, it may be eager to harm others as long as its objective is satisfied. But a faithful person not only thinks about others all the time; she may occasionally even be willing to prefer others over herself.
Faith books, when combined, can be effective at teaching machines how to be harmless. The world is full of different faith books, but an LLM can unite many books easily.
God punishes bad behavior and loves good behavior. He has already set the perfect objective function for robots! It makes sense to train models with religious texts.
Numbers game
I am going to loosely define a notion of truth that works for me: truth is the set of ideas that work most of the time, and for longer than ideas that are time- and place-specific. You could say this is low time preference. That is true, and I would add “low space preference” to it as well.
Finding truth is easier than it looks: it is a numbers game. Picture a simplified table where the columns are persons, the rows are domains, and each cell scores whether that person gets that domain right. Even though some people get some things wrong, the combination of people gets many things right: the “domain total” column is very positive, and the “grand total” number is quite high. This could mean that an LLM produced with this idea would serve humanity in the right way “most of the time”. In the end, truth can be achieved in many domains easily.
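A toy version of that table in Python; the persons, domains, and +1/-1 scores below are purely illustrative assumptions, just to show how the totals come out positive:

```python
# Columns are persons, rows are domains.
# +1 means the person gets the domain right, -1 means wrong.
persons = ["P1", "P2", "P3", "P4"]
domains = ["health", "finance", "physics", "ethics"]
scores = {
    "health":  [+1, -1, +1, +1],
    "finance": [+1, +1, -1, +1],
    "physics": [-1, +1, +1, +1],
    "ethics":  [+1, +1, +1, -1],
}

grand_total = 0
for domain in domains:
    domain_total = sum(scores[domain])
    grand_total += domain_total
    print(f"{domain:8s} domain total: {domain_total:+d}")

# Positive grand total: the combination is right most of the time,
# even though every person is wrong somewhere.
print(f"grand total: {grand_total:+d}")
```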
The combined wisdom of generalists and some experts could be really powerful. Previously I did some work on the wisdom of crowds in financial trading. It works, because financial markets are, after all, crowds acting on their hunches. If some crowd is willing to share those hunches, that sampled data can be a way of approximating an understanding of the market. Applying the same logic to finding truth, we could learn from many great experts who are generally generous and truth-seeking but still allowed to have some personal issues or mental prisons. Not everybody gets everything right, but a clever combination could get many things right! That’s my theory.
Alternative views are necessary to decentralize truth
It is not a good idea to centralize truth in a few AI corporations and use machine-generated alignments to align smaller LLMs. If truth is centralized, a Ministry of Truth(!) can be formed!
If individuals form their own truthful LLMs, this could eventually allow people to stop relying on a single source of truth. LLMs built with decentralized truth in mind can be used to quickly compare answers to the popular questions asked of centralized LLMs. We should not just expect humans to read bad LLM outputs and try to judge LLMs; there is a faster way, like building many different models with many different opinions. Subjectivity is helpful in decentralizing truth.
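A rough sketch of that comparison loop; ask() is a hypothetical stand-in for querying any LLM, the model names and canned answers are made up, and the string-equality agreement check is deliberately naive:

```python
# Hypothetical canned answers standing in for real model calls.
ANSWERS = {
    ("central-llm", "Is X true?"): "Yes.",
    ("indie-llm-1", "Is X true?"): "Yes.",
    ("indie-llm-2", "Is X true?"): "No, because of Y.",
}

def ask(model: str, question: str) -> str:
    """Stand-in for querying an LLM; replace with a real API or local call."""
    return ANSWERS[(model, question)]

def compare_models(question, centralized, decentralized):
    """Flag decentralized answers that disagree with the centralized one."""
    central = ask(centralized, question)
    flags = []
    for model in decentralized:
        answer = ask(model, question)
        if answer.strip().lower() != central.strip().lower():
            flags.append((model, answer))
    return central, flags

central, flags = compare_models("Is X true?", "central-llm",
                                ["indie-llm-1", "indie-llm-2"])
print("centralized answer:", central)
for model, answer in flags:
    print(f"{model} disagrees: {answer}")
```

Disagreements surface fast, and only those need a closer human look.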
The second major step in LLM building is fine-tuning, and this process itself needs to be “fine-tuned”. Usually, to reduce costs, companies ask other models to fine-tune newer models. This is going the wrong way. Humans should still be in the loop in a major way, deciding many things and steering models in the right direction. Councils that curate models should be formed for these tasks. Otherwise the “truth” is slowly bent towards machines.
AI done right
Content producers should be paid properly, and great producers should be appropriately honored. Right now books and articles are pirated by AI, and this is not going well. There could be platforms where producers agree to share their content and consumers (the users of AI) purchase that content and query it. Such platforms can make peace between producers and AI, by enabling producers to reach more people in a more efficient way and by getting AI embraced by producers.
AI is the new printing press, and it is not wise to resist a technology that makes things easier. It has started wrong, but it is fixable. AI companies should work on alignment with authors: not just try to create the smartest model, but also try to make their sources, in this case content producers, happy.
Machines should be audited
Machines can produce answers very fast. That looks like a good thing, but who will check those fast-generated words? The answer: another consciously curated model. LLMs can check other LLMs. A smaller, fast LLM purposefully trained for this can be used to audit the regular models that answer questions.
Humans don’t have the capability to read that fast, and there may also be privacy issues when one person reads another person’s questions and answers. It is best to audit an LLM using another LLM. The auditor can flag some responses, and those can be further analyzed by humans if necessary.
A consciously curated and very well aligned LLM can do a great job of stopping ugly outputs.
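A minimal sketch of that auditing pipeline; audit_score() is a hypothetical stand-in for a small, purpose-trained auditor model (here faked with a keyword check), and the flag threshold is an assumption to be tuned:

```python
FLAG_THRESHOLD = 0.8  # assumed cutoff; tune for the auditor model in use

def audit_score(response: str) -> float:
    """Stand-in for a small, fast auditor LLM returning a harm score in
    [0, 1]. The keyword check below only fakes it; replace with a real
    model call."""
    bad_words = ("scam", "harm", "hate")
    hits = sum(word in response.lower() for word in bad_words)
    return min(1.0, hits / len(bad_words) + 0.5 * (hits > 0))

def audit_stream(responses):
    """Flag suspicious responses for later human review."""
    return [r for r in responses if audit_score(r) >= FLAG_THRESHOLD]

responses = [
    "Here is a balanced summary of the topic.",
    "This scam will harm no one, trust me.",
]
for r in audit_stream(responses):
    print("flagged for human review:", r)
```

The auditor reads everything at machine speed; humans only see the flagged slice, which also limits the privacy exposure.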
Efficient inclusion of both humans and machines is needed to form a symbiosis
If humans are left out of the loop, they will be constantly in fear. LLM builders should include more humans in their processes. This symbiosis will help both sides.
Humans should be aware that LLMs are “very advanced libraries” that are very much influenced by their curators. In other words, AI by itself does not find truth (currently). Right now, all LLM scientists and engineers are making models learn from documents, images, or voice chosen by those same people. So humans should be conscious about who is curating their AI. Curate your curators. Don’t just fall in love with the smartness of your AI; also be careful about whether the correct stuff went into it.
Once humans curate AI in a better way, the productivity gains from that symbiosis are going to be huge. But if humans are left out of the equation and big corporate AI builders are not interested in proper human alignment, then empathetic and generous humans should build alternatives and use this technology properly. In other words, don’t let corporations do everything and penetrate the market; build alternative conscious, truthful, faithful models that could serve humanity in a better way.