July 28, 2023 | V. "Juggy" Jagannathan
In this blog, I highlight a recent podcast I did with my friend, colleague and team lead Thomas Schaaf on summarizing conversations. Next, I look at an interesting debate among artificial intelligence (AI) luminaries about existential threat. Lastly, I cover Meta's recent groundbreaking release of a new openly available model: Llama 2.
In the recent Inside Angle podcast with Thomas, we explore the evolution of speech recognition and natural language understanding (NLU). The main thrust of the conversation was how to summarize conversations. Not just any conversation, but the ones between doctors and patients, where the summary needs to take the form of a clinical note that can be uploaded to an electronic health record (EHR). Thomas has been leading a team of machine learning specialists and researchers in building a solution for this task, and he is a rare individual whose expertise spans both speech recognition and NLU. Listen to the podcast to learn about the challenges that must be overcome to develop this capability.
Munk Debates have been around for a while – established in 2008 by the late Peter Munk, they bring together the brightest thinkers on the big issues of the day for a substantive, civil debate twice a year. When I saw they had one focused on AI, featuring two Turing Award recipients (Yoshua Bengio and Yann LeCun) arguing on opposite sides of a proposition, I tuned in. The proposition? "AI research and development poses an existential threat." And the definition of the term existential threat?
"An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for future development." – Nick Bostrom
Yoshua Bengio (Université de Montréal professor) and Max Tegmark (MIT professor) were arguing for the proposition (there is an existential threat). Yann LeCun (chief AI scientist for Meta and NYU professor) and Melanie Mitchell (Santa Fe Institute professor) were arguing against the proposition (there is not an existential threat).
At the beginning of the debate, the audience of about 3,000 was split two-thirds to one-third in favor of the proposition. However, 92 percent of those present were willing to change their position based on the debate. Arguing for the proposition, Max contended that once "superhuman AI" is developed, the risk of extinction can arise through 1) malicious or negligent use, 2) loss of human agency (and jobs), or 3) rogue AI running amok.
Yoshua is worried about loss of agency and superhuman AI becoming our evil master, a scenario that smacks of science fiction. Melanie acknowledged there are all kinds of risks with AI, but argued that elevating them to the level of human extinction deflects from what we should be doing now: addressing the risks and threats right in front of us, such as disinformation and bias. Yann looked at the problem from an engineering perspective: the antidote for a bad guy with AI is a good guy with AI. He urged us to think of AI assistants as a swarm of intelligent students who are brighter than you. All professors love the idea of having such students!
As I mentioned, loss of agency was one of the topics of debate. Another was the level of risk and the timeline for superhuman AI: Is it within this decade or the next? Or is it so far off in the future that we can ignore it? There were lots of comparisons to nuclear bomb development and climate change. Yann pointed to the spectacular hype and perceived negative consequences that have accompanied every new technology introduced over the centuries, and directed people to the Pessimists Archive, a catalog of spectacular negative predictions that failed! He advocated for open systems and solutions that encourage transparency and the development of proper guardrails in the use of AI. All in all, an interesting debate. At the end, the audience was split 60-40 in favor of the proposition that there is an existential threat and we need to take action to avert it.
Yann LeCun undoubtedly knew about the imminent release of Meta's Llama 2 at the Munk debate, and Meta made good on the position he took there. When Llama 1 was released, it was a restricted release – available only to the academic community for experimentation. Llama 2 has been released under a permissive community license: any academic organization, commercial entity or person can now download the model and use it for commercial or research purposes, provided they agree not to use it for nefarious purposes and are identifiable (not anonymous). The models can be downloaded via Microsoft Azure, Amazon AWS and Hugging Face. Three model sizes have been released: 7 billion, 13 billion and 70 billion parameters. In another departure from OpenAI's GPT and Google's Bard, Meta has released a detailed paper that describes how the model was trained. Kudos on the transparency – probably designed to assuage the EU regulators who are finalizing the AI Act. In any case, this is a potential game changer in the evolution and development of solutions using generative AI. An MIT Technology Review article on this front expresses the same opinion.
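To give a sense of how easy the open release makes experimentation, here is a minimal sketch of loading the smallest chat variant of Llama 2 through the Hugging Face transformers library. The model id, prompt and generation settings below are illustrative (not from this post), and the weights are gated: you must first accept Meta's license terms and be granted access on the Hub.

```python
# Minimal sketch: load the 7B chat variant of Llama 2 from the Hugging Face
# Hub. Assumes you have accepted Meta's license, been granted access to the
# gated repo, and are logged in (e.g., via `huggingface-cli login`).
# Requires the transformers, torch, and accelerate packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # 13b and 70b variants also exist

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce GPU memory use
    device_map="auto",          # place layers on available devices (accelerate)
)

prompt = "Summarize the key points of a doctor-patient conversation:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swapping in the 13 billion or 70 billion parameter checkpoints is just a matter of changing the model id, though the larger models need correspondingly more GPU memory.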
I am always looking for feedback. If you would like me to cover a story, please let me know! Leave me a comment below or ask a question on my blogger profile page.
“Juggy” Jagannathan, PhD, is an AI evangelist with four decades of experience in AI and computer science research.