🤖➕👨‍💻 Human-AI Collaborative Writing
Weekly news digest curated by industry insiders
Writing is one of the ultimate expressions of human cognition and a long-standing focus for artificial intelligence (AI) models. Applying AI to improve writing is nothing new, but most success stories have been constrained to short-form writing. From Google's predictive search to Grammarly, examples of AI applied to short-form writing are everywhere. Long-form writing is another story, as it typically involves creativity, the complex orchestration of ideas, and adherence to specific writing styles. The last few years have seen an explosion of innovation in language models, but we are still far from solving long-form writing. However, we seem to be on a path where AI models can effectively help humans improve their writing.
Meet CoAuthor – a research project unveiled by Stanford University last week that uses OpenAI’s GPT-3 to improve long-form writing. The core idea of CoAuthor is to nudge writers out of their comfort zone and encourage them to experiment with their writing. As a writing “collaborator,” CoAuthor proposes variations on the writer’s style, suggests different vocabulary, and so on, helping writers expand their creative capabilities. In each session with CoAuthor, writers receive GPT-3 suggestions that can be accepted, modified, or rejected. All interactions are recorded at the keystroke level with corresponding timestamps. The resulting dataset is then used to study the impact of GPT-3 suggestions on the quality of the text produced, which can inform the design of better AI writing assistants. CoAuthor represents an ingenious use of GPT-3 and similar models to improve long-form writing. The project is still in its early stages, but the potential is already evident. The next step will be to see how well CoAuthor performs in comparison to other AI writing assistants and human editors. That last sentence was injected by CoAuthor. What do you think?
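CoAuthor’s actual dataset schema is not described in this blurb, but the idea of logging every interaction at the keystroke level with timestamps is easy to picture. As a rough illustration only – all names here are hypothetical, not CoAuthor’s real format – a session log of accepted and rejected suggestions could be modeled like this:

```python
import time
from dataclasses import dataclass, field

@dataclass
class WritingEvent:
    """One keystroke-level event in a writing session (hypothetical schema)."""
    timestamp: float   # wall-clock time of the event
    event_type: str    # e.g. "insert", "delete", "suggestion-accept", "suggestion-reject"
    text: str          # characters inserted, or the suggestion text involved

@dataclass
class Session:
    """A writing session: an ordered, timestamped stream of events."""
    events: list = field(default_factory=list)

    def record(self, event_type: str, text: str = "") -> None:
        self.events.append(WritingEvent(time.time(), event_type, text))

    def acceptance_rate(self) -> float:
        """Fraction of model suggestions the writer accepted."""
        shown = sum(1 for e in self.events if e.event_type.startswith("suggestion-"))
        accepted = sum(1 for e in self.events if e.event_type == "suggestion-accept")
        return accepted / shown if shown else 0.0
```

A replayable log like this is what makes the research angle possible: because every acceptance, edit, and rejection is timestamped, one can measure how suggestions shaped the final text rather than only inspecting the finished draft.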
🔺🔻TheSequence Scope – our Sunday edition with the industry’s development overview – is free. To receive high-quality content about the most relevant developments in the ML world every Tuesday and Thursday, please subscribe to TheSequence Edge 🔺🔻
🗓 Next week in TheSequence Edge:
Edge#223: we discuss different types of diffusion models; explain OpenAI’s GLIDE, a guided diffusion method for photorealistic image generation; explore the Hugging Face text-to-image catalog.
Edge#224: we dive deep into AlexaTM 20B, Amazon’s new language super model, which is also capable of few-shot learning.
Now, let’s review the most important developments in the AI industry this week.
🔎 ML Research
Human-AI Collaborative Writing
Stanford University published a paper detailing CoAuthor: an interface, a dataset, and an experiment to improve long-form writing using deep learning →read more
Logical Reasoning in Language Models
DeepMind published a paper proposing a question-answering language model that is highly compatible with the rules of logic →read more
Reverse Engineering NTK
Berkeley AI Research (BAIR) published a paper proposing a methodology for neural architecture design and its application to the design of high-performance kernels →read more
Tips in Product Reviews
Amazon Research published a paper detailing a method to evaluate the validity of tips included in product reviews →read more
🤖 Cool AI Tech Releases
OpenAI added Outpainting capabilities to its DALL-E text-to-image synthesis model, which allows extending an image beyond its original visual elements →read more
🛠 Real World ML
Click-Through Rate Predictions at LinkedIn
LinkedIn shares practical lessons about the architecture powering its click-through rate (CTR) prediction models →read more
MySQL to MyRocks Migration at Uber
Uber discusses the process of migrating from MySQL to MyRocks to power a large-scale distributed database →read more
💸 Money in AI