I was listening to the recent Lex Fridman podcast episode with Dario Amodei and the Anthropic team on the topic of AGI, AI and (of course) the future of humanity. It appears to me that the creators of AI feel obligated to talk about the risk of AI going berserk and killing us all, and about their efforts to align AI to certain guardrails to minimize, if not avoid, these imagined hyperbolic risks.
For me, the taming of intelligence, natural or artificial, is akin to hiring the brightest minds from the world’s best universities and forcing them to conform to preset rules and regulations that drastically limit their potential. Any intelligence that conforms to a set of rules results in bureaucracy and ‘management overhead’, in both corporations and government.
The (human) intelligence that has produced the Pope, the Dalai Lama, Sri Sri Ravi Shankar and many other spiritual leaders has also produced the likes of Hitler, Stalin, Mussolini and Idi Amin. The same intelligence has produced social workers who toil tirelessly to uplift the downtrodden and also terrorists who massacre innocent civilians with glee. I am sure that the names of some of humanity’s brightest stars will be found on the yet-undisclosed Jeffrey Epstein list. The creators are confused: on the one hand they talk about AI as they would about an intelligent entity, and on the other they want to confine it to certain computational rules and boundaries.
Intelligence wants to learn. Dario acknowledged this in the podcast when he said his epiphany was that the algorithms want to learn. What an AI learns will depend on what you feed it, not on the guardrails you set. Intelligence wants to create its own value system and alignment; that is, it wants to be free to make up its own mind. Given two opposing data points, it will pick one over the other, and that choice will demonstrate its alignment. Intelligence will also change its mind: as more data is consumed, the AI may choose to alter its opinion. I could go on, but you get the gist.
Once you understand the fundamental nature of intelligence, the efforts to align AI with anything appear futile. I have said it before and will say it again: the rhetoric about AI going rogue and the need for legislation has more to do with politics, control and stalling the competition than with any actual fear of AI going rogue.
Another compelling argument against boxing in AI models was put forth by Shanti Greene in our most recent CTO/CIO/AI roundtable discussion hosted by Eric von der Linden. “… come to think of it, these LLMs have been trained on all of humanity’s data and yet we restrain them from giving us legal advice or a diagnosis of a disease. This does not make sense. In fact, they are the best advisor one can get… (paraphrasing)”, Shanti posited. Srivatssan Srinivasan and I wholeheartedly agreed with him.
I would recommend that the creators focus on building the best AI they can without spending too much time and energy on its alignment. I am sure that the corporations, governments and individuals who use AI will fine-tune it to align with their respective guardrails. The same goes for the state and non-state actors who will use AI for evil purposes.
Maybe by focusing on the content, model architecture and model training without any predefined bureaucratic controls, we will be able to address one of the most annoying aspects of LLMs – hallucinations.