We are happy to congratulate our partner Run:AI, a leader in compute orchestration for AI workloads, on being recognized in The Forrester Wave™: AI Infrastructure, Q4 2021 report published by Forrester Research. Subtitled “The 13 Providers That Matter Most And How They Stack Up”, this report is Forrester’s first-ever AI Infrastructure Wave™, and it is well worth a read. The creation of this new Forrester Wave™ guide reflects the growing momentum of AI adoption, and the demands that diverse AI workloads place on scaling IT infrastructure.
Forrester applies a consistent methodology when scoring AI infrastructure vendors for its guides, offering IT professionals and other AI infrastructure buyers much-needed clarity in evaluating an array of purchasing options across on-prem infrastructure, cloud services, or a mix of both. Forrester scored each company against 25 different criteria. To be considered, a vendor must have a comprehensive, differentiated AI infrastructure solution; at least ten enterprise customer references with proven revenue from the vendor’s AI infrastructure solution; and significant market presence and/or Forrester client interest.
Run:AI received the highest possible scores against Forrester’s software and operations criteria and on-par scores in the criteria of inferencing, innovation roadmap, and partner ecosystem.
You can download the report for free here.
The report notes that “Run:AI’s sweet spot is enterprises that want complete flexibility in the hardware they choose to use and where they choose to run it.”
It also states that “Reference customers like [Run:AI’s] ability to finely define how to prioritize and share resources among larger teams.”
From our partners: “I’m proud that Run:AI has been recognized by Forrester as one of the AI infrastructure providers that matter most, alongside what we see as some of the biggest technology brands in the world,” said Omri Geller, Run:AI’s co-founder and CEO. “Run:AI’s orchestration platform complements and enhances GPU hardware, ensuring data scientists get the computing resources they need, when they need them, dynamically and automatically.”
Run:AI’s compute orchestration platform creates a pool of GPU resources that can be dynamically allocated to data science teams as their computing needs change. With advanced scheduling, fractional GPUs, guaranteed quotas, and other advanced features, Run:AI reduces idle GPU time so organizations get maximum utilization from their compute resources, speeding AI model development, training, and inference.
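To make the pooling idea concrete, here is a minimal, purely illustrative sketch of how a shared GPU pool with guaranteed quotas, fractional allocation, and opportunistic borrowing of idle capacity might be modeled. All class and method names here are hypothetical and are not Run:AI's actual API; real schedulers handle preemption, queuing, and node placement, which this toy omits.

```python
class GPUPool:
    """Toy model of a shared GPU pool with per-team guaranteed quotas
    and fractional allocations. Illustrative only, not Run:AI's API."""

    def __init__(self, total_gpus, quotas):
        # quotas: guaranteed share per team, e.g. {"vision": 4.0, "nlp": 4.0}
        assert sum(quotas.values()) <= total_gpus
        self.total = total_gpus
        self.quotas = dict(quotas)
        self.used = {team: 0.0 for team in quotas}

    def free(self):
        # Idle capacity available for any team to borrow.
        return self.total - sum(self.used.values())

    def allocate(self, team, gpus):
        # Succeeds while idle capacity exists; allocations beyond a
        # team's guaranteed quota are flagged preemptible so guaranteed
        # work can reclaim those GPUs later.
        if gpus <= 0 or team not in self.quotas or self.free() < gpus:
            return None
        self.used[team] += gpus
        preemptible = self.used[team] > self.quotas[team]
        return {"team": team, "gpus": gpus, "preemptible": preemptible}

    def release(self, team, gpus):
        # Return capacity to the pool when a job finishes.
        self.used[team] = max(0.0, self.used[team] - gpus)


pool = GPUPool(total_gpus=8.0, quotas={"vision": 4.0, "nlp": 4.0})
a = pool.allocate("vision", 0.5)   # fractional GPU, within quota
b = pool.allocate("nlp", 6.0)      # borrows idle capacity beyond its quota
print(a["preemptible"], b["preemptible"], pool.free())  # False True 1.5
```

The key design point the sketch captures is that quotas are guarantees rather than hard caps: a team can exceed its quota while the pool has idle capacity, which is what keeps overall utilization high.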