Editorial
The ecosystem of deep learning frameworks has gone from highly fragmented to concentrated around two big names: TensorFlow (Keras included) and PyTorch. A few years ago, a dozen deep learning stacks, such as MXNet, Caffe2 and Microsoft's CNTK, showed levels of adoption comparable to TensorFlow and PyTorch. That picture has changed in recent years, with the majority of deep learning research and development concentrating in TensorFlow and PyTorch, to the point that it was hard to envision another framework having a real chance to rival those two. Somewhat quietly, though, a new framework has been building its capabilities and adoption within the machine learning community.
JAX was initially released by Google Research in 2018 with the objective of streamlining high-performance numerical computing. The framework enables capabilities such as vectorization, JIT compilation and gradient-based optimization in a very modular and simple programming model. While it was not originally intended as a deep learning framework, JAX has seen significant adoption within the deep learning community. This has been partly driven by its adoption at AI powerhouses like Google Research and, very notably, DeepMind, which has been very public about its use of JAX. As a result, the depth of JAX's tech stack has grown quickly. Just this week, Google Research open-sourced a new library of ranking algorithms for JAX.
JAX is still at a relatively nascent stage, but it is the first framework that shows the potential to reach levels of adoption similar to TensorFlow and PyTorch. For now, it might be a good idea not to sleep on JAX. It might become one of the most relevant deep learning frameworks of the next few years.
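The three capabilities mentioned above, vectorization, JIT compilation and gradient-based optimization, map directly onto JAX's core function transformations. A minimal sketch (the toy function below is our own illustrative example, not taken from any JAX documentation):

```python
import jax
import jax.numpy as jnp

# A toy scalar function: f(x) = x^2 + 3x
def f(x):
    return x ** 2 + 3.0 * x

# Gradient-based optimization: jax.grad returns df/dx = 2x + 3
df = jax.grad(f)
print(df(2.0))        # 7.0

# JIT compilation: jax.jit compiles f with XLA for faster execution
fast_f = jax.jit(f)
print(fast_f(2.0))    # 10.0

# Vectorization: jax.vmap maps the gradient over a batch of inputs
batched_grad = jax.vmap(df)
print(batched_grad(jnp.array([0.0, 1.0, 2.0])))  # [3. 5. 7.]
```

Because `grad`, `jit` and `vmap` are composable transformations, they can be stacked, e.g. `jax.jit(jax.vmap(jax.grad(f)))`, which is the modular programming model the editorial refers to.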
TheSequence Scope, our Sunday edition with the industry's development overview, is free. To receive high-quality content about the most relevant developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge.
Next week in TheSequence Edge:
Edge#217: we publish the recap of our recent ML testing series.
Edge#218: we deep dive into BlenderBot 3, a 175B-parameter model that can chat about any topic and organically improve its knowledge.
Now, let's review the most important developments in the AI industry this week.
ML Research
Video-Text Learning
Google Research published a paper detailing a new method for question answering in video streams →read more on Google Research blog
Automated Reasoning
Amazon Research published an insightful conversation that highlights the viewpoints of different researchers on the intersection of logic and AI →read more on Amazon Research blog
Text Game Simulation
AI researchers from Microsoft and the University of Arizona published a paper detailing TextWorldExpress, a high-performance text game simulator that can be used to train language models →read more in the original research paper
We recommend: discover how to deploy the Weights and Biases model registry into production quickly
Training an ML model is the easy part nowadays; managing the lifecycle of the experiments and the model is where things get complicated. Luckily, Weights and Biases provides developer tools that, with a couple of lines of code, let you keep track of hyperparameters, system metrics, and outputs so you can compare experiments and easily share your findings with colleagues. However, the value of a model comes from operationalizing it and turning it into a prediction service. This requires making data available to these services consistently with how the models were trained.
Hopsworks 3.0 introduced a new Feature View abstraction and now supports KServe for model serving. Together, these two features provide the APIs to consume data from the feature store consistently between training and production, and allow you to deploy models from your Weights and Biases model registry into production quickly, providing a REST endpoint to perform prediction requests against.
Cool AI Tech Releases
Rax
Google Research open-sourced Rax, a JAX-based library for supervised ranking algorithms →read more on Google Research blog
Moderation Endpoint
OpenAI released a more accurate version of its Moderation Endpoint to detect undesired content →read more on OpenAI blog
Implicitron
Meta AI open-sourced Implicitron, a framework within PyTorch3D focused on 3D object reconstruction →read more on Meta AI blog
Real World ML
AI Tutoring Service
Microsoft detailed the principles behind an AI-based math tutoring service that improves students' learning process →read more on Microsoft AI blog
Money in AI
Hyundai launches the Boston Dynamics AI Institute with a $400 million kick-off investment.
Cloud data loss prevention platform Nightfall AI raised $40 million in a Series B funding round led by WestBridge Capital. Hiring remotely.
AI-as-a-Service (AIaaS) startup DataProphet raised $10 million in a Series A funding round led by Knife Capital. Hiring in Cape Town, South Africa.
Drug discovery and development company Insilico raised $35 million in its Series D2 round led by Prosperity7. Hiring globally.
Conversational intelligence company Jiminny raised $16.5 million in a Series A funding round led by Kennet Partners. Hiring in Sofia, Bulgaria and London, UK.
Supply chain automation service Expedock raised $13.5 million in a Series A funding round led by Insight Partners. Hiring in San Francisco, CA (US).
Medical conversation AI startup Abridge raised $12.5 million in a Series A round. Hiring in Pittsburgh, PA (US) and remote.