Guest post: Unlock the Power of BLOOM With the Broadest Range of GPUs Served On-Demand
In this guest post, CoreWeave introduces BLOOM deployment on its platform and guides you through its arsenal of GPUs to ensure you find the compute that delivers the best possible performance-adjusted cost.
BLOOM, from BigScience, is one of the most exciting open-source models you can deploy, and it's now available on CoreWeave.
At 176 billion parameters, BLOOM is larger than OpenAI's 175-billion-parameter LLM, GPT-3, and can output coherent text in 46 languages as well as 13 programming languages. It can also be instructed to perform text tasks it has not been explicitly trained for.
BLOOM is the work of more than 1,000 researchers from around the world, who collaborated with institutions like Hugging Face, the French government and the Montreal AI Ethics Institute to ensure that AI research remains open, inclusive and responsible, for the betterment of humanity.
To learn more about how to deploy BLOOM on CoreWeave with our easy-to-use examples, please visit our documentation.
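Once your BLOOM InferenceService is live, querying it is just an HTTP call. Here's a minimal sketch in Python; the endpoint URL and JSON fields below are hypothetical placeholders, so match them to the request format of the server you actually deploy from our docs.

```python
# Minimal sketch of querying a deployed InferenceService endpoint.
# The URL and payload schema below are hypothetical placeholders --
# use the endpoint and request format from your own deployment.
import requests

ENDPOINT = "https://bloom.tenant-example.knative.chi.coreweave.com/generate"

payload = {
    "prompt": "BLOOM can write coherent text in 46 languages, such as",
    "max_new_tokens": 64,
}
response = requests.post(ENDPOINT, json=payload, timeout=300)
response.raise_for_status()
print(response.json())
```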
CHOOSING THE RIGHT GPU FOR YOU
A major benefit of working with a specialized cloud provider is the ability to match the complexity of your workloads across a wide variety of GPUs, ensuring you find the compute that delivers the best possible performance-adjusted cost.
With the broadest selection of GPUs on the market, coupled with the industry's fastest spin-up times and most responsive auto-scaling available through CoreWeave's InferenceService, you can consume compute more efficiently, serve end-user demand in real time, and lower inference latency.
To help you get started, here's some guidance on how we think about our arsenal of GPUs for model serving:
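One quick rule of thumb before diving in (a back-of-envelope sketch, not a guarantee): model weights in fp16/bf16 cost roughly 2 bytes per parameter, before counting activations, KV cache or framework overhead.

```python
# Back-of-envelope weight memory for the model sizes discussed below.
# Weights only: activations, KV cache and framework overhead are extra.
def weight_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """fp16/bf16 is 2 bytes per parameter; fp32 is 4; int8 is 1."""
    return params_billion * bytes_per_param

for name, size in [("GPT Neo 2.7B", 2.7), ("GPT-J 6B", 6.0),
                   ("Fairseq 13B", 13.0), ("GPT NeoX 20B", 20.0),
                   ("BLOOM 176B", 176.0)]:
    print(f"{name}: ~{weight_gb(size):.0f} GB of weights in fp16")
```

Sizes that land right up against a card's capacity, like 13B weights against 24GB, are the "tight" fits called out below and generally lean on reduced precision or careful memory management.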
NVIDIA RTX 4000
Just because the Turing-based RTX 4000 is the smallest GPU that CoreWeave offers doesn't mean it's not cost-effective. If you need to run inference for models the size of Fairseq 2.7B or GPT Neo 2.7B or smaller, it can be an excellent value for less intensive inference workloads.
However, if you are saturating the GPU with inference requests, then the more recent GPUs such as the A4000 or A5000 may serve you better. Larger contexts may require the RTX 5000, depending on how efficient your inference engine is.
NVIDIA RTX 5000
The Turing-based RTX 5000 is the smallest GPU that can run inference for the GPT-J 6B or Fairseq 6.7B models. It has double the RAM of the RTX 4000, a bit more memory bandwidth and a much faster base clock rate.
If your 2.7B models are running out of RAM with a larger context, this is the next step up and will give you faster inference to boot.
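Why does a larger context push a 2.7B model out of RAM? Every prompt and generated token leaves keys and values cached at each layer. Here's a rough sketch using GPT Neo 2.7B-shaped dimensions; the layer count and hidden size are illustrative, so read the real values from your model's config.

```python
# Rough fp16 KV-cache footprint per sequence for a GPT Neo 2.7B-like
# model. The layer count and hidden size are illustrative; read the
# actual values from your model's config.
N_LAYERS, HIDDEN, BYTES_FP16 = 32, 2560, 2

def kv_cache_gb(seq_len: int, batch: int = 1) -> float:
    # 2x for keys and values, cached at every layer for every token
    return 2 * N_LAYERS * HIDDEN * seq_len * batch * BYTES_FP16 / 1e9

for ctx in (512, 1024, 2048):
    print(f"context {ctx}: ~{kv_cache_gb(ctx):.2f} GB per sequence")
```

Multiply that by your batch size; with roughly 5GB of fp16 weights already resident, an 8GB card fills up quickly.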
NVIDIA A4000
The Ampere-based A4000 is a small step up from the RTX 5000, although it may not look like it at first glance. The clock rate is half that of the RTX 5000, but the boost clock nearly matches the base clock of the older GPU. What makes the difference is the number of shader cores, which is doubled.
However, the number of tensor cores is half that of the RTX 5000. Whether the A4000 or the RTX 5000 works better for your workload depends on your inference framework and which instructions you use.
NVIDIA A5000
The Ampere-based A5000 is a good step up from the A4000 and has been observed to run GPT-J 6B and Fairseq 6.7B inference faster than the A4000. It is also the smallest GPU that can be comfortably used for fine-tuning smaller models, such as Fairseq, GPT Neo 1.3B or GPT Neo 2.7B.
If your model fits comfortably inside 24GB, this card is a better value proposition than the A6000. It can also host the Fairseq 13B model for inference, although it is tight at 24GB.
NVIDIA A6000
If your workload is intense enough, the Ampere-based A6000 is one of the best values for inference. It is CoreWeave's recommended GPU for fine-tuning, as its 48GB of RAM lets you fine-tune up to Fairseq 13B on a single GPU. That 48GB also lets you run larger batches per training step during fine-tuning for better throughput.
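Batching here means running more samples per optimizer step. With 48GB of headroom you can simply raise the micro-batch; on smaller cards, gradient accumulation recovers the same effective batch. Here's a toy PyTorch sketch of the pattern, with a tiny linear model standing in for a language model:

```python
# Toy sketch of the batching pattern: gradient accumulation trades
# micro-batch size against VRAM while keeping the effective batch fixed.
import torch
import torch.nn as nn

model = nn.Linear(16, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
data = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(12)]

ACCUM_STEPS = 4  # effective batch = 8 * 4 = 32 samples per optimizer step

optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    loss = loss_fn(model(x), y) / ACCUM_STEPS  # average across micro-batches
    loss.backward()                            # gradients accumulate in .grad
    if (step + 1) % ACCUM_STEPS == 0:
        optimizer.step()
        optimizer.zero_grad()
```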
The A6000 is the smallest single GPU that can host the GPT NeoX 20B model.
NVIDIA A40
Because of its value proposition, the A40 is our recommended GPU for larger-scale training jobs. The A6000 is slightly faster, but the A40 has more robust GPU drivers and more availability at CoreWeave, and our team can help with setting this up.
The A40's 48GB of RAM likewise allows larger batches per training step during fine-tuning for better throughput, and the CoreWeave Finetuning Machine Learning Models Guide defaults to the A40 for this reason.
NVIDIA A100 40GB
The A100 40GB PCI-E nearly doubles the performance of the A40/A6000 on a single GPU basis for many workloads due to double the memory bandwidth. However, it has 8GB less RAM than the A40. This makes it difficult to host the larger models such as GPT NeoX 20B on a single GPU.
Pairs of A100 PCI-E can make excellent inference nodes if inference throughput is your primary concern. A100 NVLINK is recommended for distributed training and inference when model parallelism is required.
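One common way to split a model across a pair of GPUs (a sketch assuming the Hugging Face transformers and accelerate libraries are installed; it is not the only path to model parallelism) is to let accelerate place the layers automatically:

```python
# Sketch: sharding a 20B model across the GPUs visible to the pod
# (e.g. a pair of A100s) with transformers + accelerate installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "EleutherAI/gpt-neox-20b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # ~40 GB of weights instead of ~80 GB
    device_map="auto",          # accelerate places layers across GPUs
)

inputs = tokenizer("Distributed inference lets you", return_tensors="pt")
inputs = inputs.to(model.device)  # device of the first shard (cuda:0)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```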
NVIDIA A100 80GB
With double the RAM and 30% more memory bandwidth than the A100 40GB PCI-E, this is the best choice for large-model inference on a single GPU. 20B models run as fast and comfortably on an A100 80GB PCI-E as 13B models do on an A6000.
To learn more about our flexible GPU pricing and infrastructure rates, please visit our pricing page.
HAVE QUESTIONS? NEED MORE EXAMPLES? GET IN TOUCH!
The CoreWeave support team is always ready to roll up their sleeves and help guide clients through benchmarking workloads, maximizing InferenceService and getting the absolute most out of the entire tech stack.
We have a rich library of inference examples, including documentation on serving BLOOM 176B, GPT-J-6B, Stable Diffusion, and more.
If you're ready, sign up for a free trial account. Or, to speak with one of our engineers first, please contact us.