🔹◽️Edge#162: EleutherAI’s GPT-NeoX-20B, one of the largest open-source language models
This is an example of TheSequence Edge, a Premium newsletter that our subscribers receive every Tuesday and Thursday. On Thursdays, we dive deep into one of the freshest research papers or technology frameworks that is worth your attention.
💥 What’s New in AI: EleutherAI’s GPT-NeoX-20B, one of the largest open-source language models
We’ve seen an accelerating race to build ever-larger transformer models: from GPT-2 with 1.5 billion parameters, to Microsoft’s Turing-NLG with 17 billion, to GPT-3 with 175 billion, to Google’s Switch Transformer with 1.6 trillion, and now Wu Dao 2.0 with a mind-blowing 1.75 trillion parameters. Most of these transformer models are not open-source and are difficult to fully access.
Just yesterday, in an effort to democratize access to large language models, EleutherAI, in partnership with CoreWeave, released a 20 billion parameter open-source language model named GPT-NeoX-20B. For those who are looking to serve GPT-NeoX-20B without managing infrastructure, CoreWeave has partnered with Anlatan, the creators of NovelAI, to create GooseAI, a fully managed inference service delivered by API.
Let’s dive deeper into the practical implementation of GPT-NeoX-20B and GooseAI.
The History
The research collective EleutherAI was founded in 2020 by Connor Leahy, Sid Black and Leo Gao with the goal of ensuring that large language models remain widely accessible to aid in research towards the safe use of AI systems.
In comparison, the well-known GPT-3 remains closed to the public at large, and its training costs are prohibitively expensive; both are significant hurdles for researchers interested in studying and using it, and for businesses building products on top of it. From the beginning, EleutherAI has been deeply committed to helping tackle AI safety while focusing on making large language models more accessible.
“From spam and astroturfing to chatbot addiction, there are clear harms that can manifest from the use of these models already today, and we expect the alignment of future models to be of critical importance. We think the acceleration of safety research is extremely important; and the benefits of having an open-source model of this size and quality available for that research outweigh the risks.” – Connor Leahy, EleutherAI
In 2021, CoreWeave built a state-of-the-art NVIDIA A100 cluster for distributed training. They have partnered with EleutherAI to train GPT-NeoX-20B using this cluster.
“This investment in the AI community was a no-brainer for us, after hearing so many frustrations with large models being too expensive to deploy at scale and too hard to access.” – Brian Venturo, CoreWeave
GPT-NeoX-20B
With its beta release on GooseAI on February 2nd, EleutherAI claims GPT-NeoX-20B to be the largest publicly accessible language model to date. At 20 billion parameters, GPT-NeoX-20B is a powerhouse trained on The Pile, EleutherAI’s curated collection of datasets.
When EleutherAI developed The Pile, its 825GB training dataset, no public datasets suitable for training language models of this size existed. The Pile is now widely used as a training dataset for many current cutting-edge models, including the Beijing Academy of Artificial Intelligence’s Wu Dao (1.75T parameters, multimodal), AI21’s Jurassic-1 (178B parameters), Anthropic’s language assistant (52B parameters), and Microsoft and NVIDIA’s Megatron-Turing NLG (530B parameters).
In short, GPT-NeoX-20B is more accessible to developers, researchers, and tech founders because it is fully open-source and less expensive to serve than comparable models of its size and quality.
GPT-NeoX-20B’s codebase offers straightforward configuration through YAML files, enabling users to launch training runs across hundreds of GPUs with a single line of bash (a rough sketch follows below).
GPT-NeoX-20B is cheaper to deploy than GPT-3 on a performance-adjusted basis.
For developers who currently use OpenAI’s GPT-3 API, any applications that rely solely on prompting are likely to work with GPT-NeoX-20B with only minor modifications.
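To make the configuration point concrete, here is a minimal, illustrative sketch of how a gpt-neox training run is typically described and launched. The field values echo GPT-NeoX-20B’s published architecture, but the exact file names, keys, and launcher flags are assumptions; the configs directory and README of EleutherAI’s gpt-neox repository are the authoritative reference.

```yaml
# Illustrative fragment of a gpt-neox YAML config (keys and values are assumptions;
# see the configs/ directory of EleutherAI's gpt-neox repository for the real 20B.yml).
"pipe-parallel-size": 4        # pipeline parallelism across GPU groups
"model-parallel-size": 2       # tensor parallelism within a group
"num-layers": 44               # GPT-NeoX-20B's published depth
"hidden-size": 6144            # published hidden dimension
"num-attention-heads": 64      # published attention head count
"seq-length": 2048             # training context length
"train-micro-batch-size-per-gpu": 4

# A run is then launched with a single bash line along these lines (command is an assumption):
#   python ./deepy.py train.py -d configs 20B.yml
```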
Performance Comparison by Model
The following table compares GPT-NeoX-20B’s performance on factual question answering across a variety of domains with that of other publicly available NLP models. GPT-NeoX-20B outperforms its peers by a statistically significant margin:
The other interesting comparison is of factual knowledge by subject group:
GPT-NeoX-20B on GooseAI
Before GPT-NeoX-20B is fully released to the public on February 9th, GooseAI is offering a week-long beta of the model.
GooseAI is a fully managed inference service delivered by API. Anyone who is looking to serve GPT-NeoX-20B without managing infrastructure can start serving the model via a fully managed API endpoint on GooseAI today, before the official release.
With feature parity to other well-known APIs, GooseAI delivers a plug-and-play solution for serving open-source language models at over 70% cost savings, by changing just one line of code, helping to unlock everything that GPT-NeoX-20B has to offer. GooseAI delivers all the advantages of CoreWeave Cloud with zero infrastructure overhead, including impressive spin-up times and responsive auto-scaling.
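As a minimal sketch of what that one-line change might look like, assuming an application built on OpenAI’s Python client: the client is simply pointed at GooseAI’s endpoint instead of OpenAI’s. The engine identifier below is an assumption and should be checked against GooseAI’s model list.

```python
import os

import openai  # the same OpenAI Python client the application already uses

# The "one line" change: point the client at GooseAI's OpenAI-compatible endpoint.
openai.api_base = "https://api.goose.ai/v1"
openai.api_key = os.environ["GOOSEAI_API_KEY"]

# The rest of a prompt-only application stays untouched.
# Engine name is illustrative; check GooseAI's documentation for the exact ID.
completion = openai.Completion.create(
    engine="gpt-neo-20b",
    prompt="In one sentence, explain why open-source language models matter:",
    max_tokens=64,
    temperature=0.7,
)

print(completion.choices[0].text.strip())
```

Because GooseAI mirrors the familiar completion-style API, prompts, decoding parameters, and response handling carry over largely unchanged.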
In keeping with CoreWeave’s founding principle, GooseAI was created to give developers and founders more accessible cloud infrastructure that is purpose-built for compute-intensive workloads.
“CoreWeave’s platform has continuously evolved to address one common pain point shared by all of our clients: legacy cloud providers make it extremely difficult to scale because they offer limited high-performance compute options at monopolistic prices.” – Brian Venturo, CoreWeave.
Conclusion
GPT-NeoX-20B is a glimpse into the next generation of powerful AI systems, and EleutherAI is working to remove the current barriers to research on the understanding and safety of such models. You can join the community of GPT-NeoX-20B developers on GooseAI during the preview period, serving GPT-NeoX-20B from February 2nd through February 8th, before its full open-source release on February 9th.