The Controversial AI Moratorium Letter
Every Sunday, The Sequence Scope brings you a summary of the most important research papers, technology releases, and VC funding deals in the artificial intelligence space.
Next Week in The Sequence
Edge 279: Explore cross-silo federated learning (FL), Amazon’s research on personalized FL, and IBM’s FL framework.
Edge 280: A deep dive into Alpaca, Stanford University’s LLM that matches GPT-3.5 performance.
📝 Editorial: The Controversial AI Moratorium Letter
Last week, the AI community found itself divided by a controversial letter during a crucial phase of innovation. Over 1,400 leaders and researchers in the industry recently signed an open letter urging all AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months. The Future of Life Institute, a non-profit based in Pennsylvania, published the letter. The letter warns that contemporary AI systems are now competing with humans in general tasks, and advanced AI poses a risk of flooding the media with false information and automating many jobs. The signatories requested that the pause be public, verifiable, and include key actors, and called on governments to institute a moratorium if the pause couldn't be enacted quickly.
Emad Mostaque, CEO of Stability AI, signed the letter but later tweeted that he didn’t think a six-month pause was the best idea. Yann LeCun, chief AI scientist at Meta, declined to sign because he disagreed with the letter’s premise, though he later deleted the tweet in which he explained his position. David Deutsch, a visiting professor of physics at the Centre for Quantum Computation at the University of Oxford, also declined to sign, stating that the letter read like a suggestion to stop developing anything whose effects we can’t prophesy, which would give totalitarian governments and organized criminals a chance to catch up.
The argument posed by the AI letter is a divisive one. One side sees foundation AI models as something like nuclear weapons that could unleash massive harm. The other side believes an AI moratorium would be the equivalent of restricting technologies like the printing press, the telegraph, or electricity, inventions that enabled massive leaps forward, created enormous wealth, and improved the quality of life for generations.
At The Sequence, we avoid getting involved in controversial arguments that we believe do not directly contribute to the progress of AI. Today, I will make an exception given the level of debate caused by the AI letter. In my opinion, a moratorium on AI development is not only a bad idea but also an impractical one. The risks posed by large AI models are real, and a thoughtful path toward regulation and safety controls is certainly needed. But the research behind many of these models is publicly available, and many implementations are open source without any constraints. Authoritarian governments, terrorist organizations, and bad actors already have access to this technology, whether we like it or not. The only way to mitigate the risk is to continue advancing research and improving the alignment and safety of these models. Foundation AI models represent the biggest technological breakthrough in many generations and, as such, should not be restricted but carefully nurtured to align with the “better angels of our nature”.
In Meditations, published posthumously, Marcus Aurelius wrote: “The mind adapts and converts to its own purposes the obstacle to our acting. The impediment to action advances action. What stands in the way becomes the way.” The most popular rendering of this passage certainly applies to the current state of foundation AI models: “The obstacle is the way”.
🔎 ML Research
PRESTO
Google Research published a paper and open sourced a version of PRESTO, a dataset for task-oriented dialogues. PRESTO is based on over 550,000 conversations between users and virtual assistants —> Read more.
Reflexion
Researchers from MIT and Northeastern University published a paper presenting Reflexion, a technique for identifying mistakes in LLM outputs. Reflexion simulates human self-reflection by asking the model to find possible mistakes in its own answers and to optimize the corresponding prompts —> Read more.
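To make the idea concrete, here is a minimal sketch of a Reflexion-style loop. The `llm()` helper, the prompt wording, and the round limit are all hypothetical placeholders for illustration, not the paper’s actual interface.

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a text-completion model call."""
    raise NotImplementedError("wire this to your LLM provider")


def reflexion_loop(task: str, max_rounds: int = 3) -> str:
    # Initial attempt at the task.
    answer = llm(f"Task: {task}\nAnswer:")
    for _ in range(max_rounds):
        # Ask the model to critique its own answer.
        critique = llm(
            f"Task: {task}\nProposed answer: {answer}\n"
            "List any mistakes in the answer, or reply DONE if there are none."
        )
        if critique.strip() == "DONE":
            break
        # Fold the self-reflection back into the prompt and retry.
        answer = llm(
            f"Task: {task}\nPrevious answer: {answer}\n"
            f"Known issues: {critique}\nRevised answer:"
        )
    return answer
```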
ART
Researchers from the University of Washington, Microsoft, Meta AI, the Allen Institute for AI, and the University of California, Irvine published a paper detailing ART, a tool that uses frozen LLMs to generate intermediate reasoning steps. ART uses a few-shot technique to decompose a task into multi-step subtasks that simulate reasoning —> Read more.
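The following sketch illustrates the general pattern under strong assumptions: the `llm()` helper, the step format, and the toy tool registry are invented for illustration and do not reflect the paper’s actual interface.

```python
import re


def llm(prompt: str) -> str:
    """Hypothetical stand-in for a frozen text-completion model."""
    raise NotImplementedError("wire this to your LLM provider")


# Toy tool registry; tool-tagged steps run outside the model.
TOOLS = {"search": lambda query: f"<results for {query!r}>"}

# Few-shot examples teach the frozen model the step-by-step output format.
FEW_SHOT = (
    "Task: Who directed the highest-grossing film of 1997?\n"
    "Step 1 [search]: highest-grossing film of 1997\n"
    "Step 2 [reason]: The results name Titanic, directed by James Cameron.\n"
    "Answer: James Cameron\n\n"
)


def art_solve(task: str) -> str:
    # The frozen model decomposes the task into tagged intermediate steps.
    trace = llm(FEW_SHOT + f"Task: {task}\n")
    for line in trace.splitlines():
        match = re.match(r"Step \d+ \[(\w+)\]: (.*)", line)
        if match and match.group(1) in TOOLS:
            # Hand steps tagged with a known tool name to the external tool.
            trace += "\nObservation: " + TOOLS[match.group(1)](match.group(2))
    return trace
```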
22-Billion-Parameter Vision Transformer
Google Research published a paper detailing a dense vision transformer with 22 billion parameters. This is a significant size increase over previous architectures, which average low single-digit billions of parameters —> Read more.
Robots that Learn from Videos
Meta AI published two papers about techniques that enable embodied agents to learn from videos. One of the papers proposes VC-1, a model that masters sensorimotor skills from video data. The other paper details a method called ASC for object manipulation —> Read more.
📌 EVENT: Join us at LLM in Production conference – the first of its kind
How can you actually use LLMs in production? There are still so many questions. Cost. Latency. Trust. How are the best teams navigating this? The MLOps Community decided to create the first free virtual conference to go deep into these unknowns. Come hear technical talks from over 30 speakers working at companies like Notion, You.com, Adept.ai, and Intercom.
You will also get the opportunity to join workshops that will teach you how to set up your use cases and skip over all the headaches.
🤖 Cool AI Tech Releases
TensorFlow and Keras 2.12
The new releases of TensorFlow and Keras are now available, with features that include a new model export format —> Read more.
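As a quick illustration, here is a minimal sketch of what the new workflow looks like, assuming the `model.export()` API and the `.keras` saving format highlighted in the release notes; the toy model itself is invented for the example.

```python
import tensorflow as tf

# A toy model, just to have something to save and export.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# New Keras saving format (architecture, weights, and compile state).
model.save("toy_model.keras")
restored = tf.keras.models.load_model("toy_model.keras")

# New lightweight export path producing a SavedModel artifact for serving.
model.export("toy_model_serving")
```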
Kubeflow 1.7
The new release of Kubeflow is now available —> Read more.
Dolly
Databricks open sourced Dolly, a ChatGPT-like model that can follow instructions —> Read more.
🛠 Real World ML
Airbnb discusses the ML plus human-in-the-loop architecture behind its Categories platform —> Read more.
📡AI Radar
Several leaders from the AI, tech, and ethics worlds called for a six-month pause in the development and training of foundation models more powerful than GPT-4.
Microsoft announced Security Copilot, a platform to enable generative AI in cyber security scenarios.
Microsoft launched a new version of Teams built from the ground up with generative AI capabilities powered by Microsoft 365 Copilot.
Nephyne announced a $2 million round to build a Python-based spreadsheet for data scientists.
Decentralized AI platform Fetch.ai raised $40 million for its autonomous agents platform.
Jigso raised a $7.5 million seed round to build a new AI assistant that surfaces employee data.
Oscilar, an AI-driven tech firm incubated by one of the founders of Confluent, emerged from stealth mode with $20 million from its founding team.
DataDome raised $42 million to expand its AI-driven cyber security platform.