🐙 Edge#220: Dive into Meta AI’s Make-A-Scene, which pushes the boundaries of AI art synthesis
The new model augments text-to-image generation with scene-based controls to produce astonishing artistic outputs.
On Thursdays, we dive deep into one of the freshest research papers or technology frameworks that is worth your attention. Our goal is to keep you up to date with new developments in AI to complement the concepts we debate in other editions of our newsletter.
💥 What’s New in AI: Dive into Meta AI’s Make-A-Scene, which pushes the boundaries of AI art synthesis
Artificial intelligence (AI) research in text-to-image synthesis has gone off the charts in recent months. Models like OpenAI’s DALL-E 2 and GLIDE, or Google’s Parti and Imagen, have shown the possibilities of emulating creative expression using deep learning. Despite the progress, these models still have very tangible limitations when it comes to generating images that capture the complete semantics of textual inputs. Recently, Meta AI unveiled a new method called Make-A-Scene that uses clever techniques to address some of the fundamental challenges in text-to-image synthesis.
Emulating human creative expression with text-to-image synthesis currently faces several roadblocks: