📝 Editorial
Throughout the last few decades, the software industry has evolved based on paradigms that abstract away the complexities of the underlying hardware architectures. Entire trends such as virtualization and containerization were born out of the need to decouple software computations from hardware topologies. Machine learning (ML) has largely reversed that trend, reintroducing strong dependencies between software, in the form of ML models, and specialized hardware. Those dependencies materialize not only in the optimization of ML models for specific hardware topologies but also in the correct utilization of hardware resources across the different stages of the ML solution lifecycle, such as training, execution, and optimization. Training a large deep learning model often requires sophisticated coordination to select and assign the right computation resources. For ML to achieve mainstream adoption, we need to abstract away the dependencies on computation resources and hardware architectures.
The “virtualization” of ML hardware and compute is a well-known problem in the industry, and several companies are working in the space. Among those, Run:AI stands out as one of the most complete platforms for managing computation resources in ML solutions. The Run:AI platform enables the allocation and reuse of GPU resources for ML solutions, giving teams the sense of “unlimited compute.” Just last week, Run:AI announced a new $75M round of funding led by tier-1 funds such as Tiger Global Management and Insight Partners. The round represents solid validation not only of Run:AI’s initial traction but also of the relevance of the ML hardware virtualization space. ML took us back to the era of dependencies between code and hardware, and now we need solutions like Run:AI to abstract those dependencies away.
🔺🔻 TheSequence Scope is our Sunday free digest. To receive high-quality educational content about the most relevant concepts, research papers, and developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🔺🔻
🗓 Next week in TheSequence Edge:
Edge#175: we explore StyleGANs, explain the original StyleGAN paper, and overview open-source StyleGANs.
Edge#176: a deep dive into Meta’s new architecture for AI agents that can reason like humans and animals.
Now, let’s review the most important developments in the AI industry this week.
🔎 ML Research
Multimodal AI at Meta
Meta (Facebook) AI Research (FAIR) published a detailed blog post outlining their research efforts in multimodal AI →read more on FAIR blog
The AI Behind Alexa Hunches
Amazon Research discusses the deep learning techniques used to enable task reminders with Alexa Hunches →read more on Amazon blog
Multimodal Transformers
Google Research published a paper discussing multimodal bottleneck transformers (MBT), a technique used to extend transformer models to multimodal environments →read more on Google Research blog
Reasoning Over Private Data
Meta (Facebook) AI Research (FAIR) published a paper and dataset outlining a methodology to perform question-answering tasks over datasets with different levels of privacy →read more on FAIR blog
🤖 Cool AI Tech Releases
New GPT-3 Capabilities
OpenAI launched new capabilities in its API that enable editing or inserting text in text completion scenarios →read more on OpenAI blog
Large Data Inserts/Updates
LinkedIn details the architecture behind Opal, an internal platform to build a mutable dataset that facilitates large inserts and updates in databases →read more on LinkedIn engineering blog
💎 We recommend
ML teams face a crowded and complex marketplace for ML infrastructure tools. This "ML Observability Checklist" offers a buyer’s guide with product and technical requirements to consider when assessing an ML Observability platform.
🛠 Real World ML
AI to Help Blind Children
Microsoft Research discusses the techniques behind PeopleLens, an AI-based solution to help blind children interact with people more easily →read more on Microsoft Research blog
💸 Money in AI
AI/ML
AI optimization and orchestration company Run:ai raised $75 million in a Series C round led by Tiger Global Management and Insight Partners. Hiring in the US and Israel.
AIOps platform Selector raised $28 million in a Series A funding round led by Two Bear Capital, SineWave Ventures, and Atlantic Bridge. Hiring in Canada/USA/Remote.
AI quality management solution TruEra raised $25 million in a Series B funding round led by Menlo Ventures. Hiring in India and the US.
Edge AI platform Quadric raised $21 million in a Series B funding round led by NSITEXE, Inc. Hiring in Burlingame, CA/US.
Autonomous cloud management company Sedai raised $15 million in Series A funding led by Norwest Venture Partners. Hiring in the US/India/Remote.
Synthetic data company Synthetaic raised $13 million in a Series A financing round. Hiring in Delafield, WI/US.
AI-powered
Cyber insurance company Cowbell Cyber raised $100 million in a funding round led by Anthemis Group. Hiring in the US and remote.
Maritime technology company Nautilus Labs raised $34 million in a Series B funding round led by Microsoft’s venture fund M12. Hiring across the globe.
Sales automation startup RightBound added $15.5 million to its Series A round led by Innovation Endeavors. Hiring in Tel Aviv/Israel.
Video understanding company Twelve Labs raised $5 million in a seed round led by Index Ventures. Hiring in Seoul, South Korea.
*This news digest is presented by Superb AI’s team. We thank Superb AI for their support of TheSequence.
About Superb AI
Superb AI is an advanced DataOps platform looking to transform the way computer vision teams prepare and iterate on datasets. The Superb AI Suite provides automation products and tools across all steps of the data preparation workflow, including data labeling, auditing, management, and curation.