S. Somasegar on the Present and Future of Generative AI
Former Senior Vice President of Microsoft's Developer Division and now a managing partner at Madrona Ventures, S. Somasegar has been one of the most active VCs evangelizing the potential of generative AI.
👤 Quick bio
Tell us a bit about yourself. Your background, current role and how did you get started in machine learning (ML)?
I spent nearly 27 years at Microsoft working mostly on platforms, and my last role at Microsoft was running the Developer Division. To the extent a developer needs anything to be able to build applications and services on top of the Microsoft platform, my team was responsible for that. Products like Visual Studio, Visual Studio Code, .NET, Visual Studio Team Services and programming languages like VB, C#, TypeScript, F#, etc. were some of what my team delivered. While at Microsoft, I was also responsible for a lot of the Global Development Centers that we had around the world, including R&D centers in India, China and Israel. A little over 7 years ago, I left Microsoft and came to Madrona Ventures, the leading Venture Capital firm in Seattle. I am a Managing Director at Madrona Ventures and am mostly focused on investments in enterprise and B2B focused start-ups, including next-generation cloud infrastructure, ML/AI, Intelligent and Generative Applications, and SaaS. Some of my investments while at Madrona include Snowflake, UiPath, Pulumi, StatSig, SeekOut, Go1, Clari, Coda, CommonRoom and Temporal.
🛠 ML Work
Madrona Ventures has been one of the most active VC firms evangelizing the potential of generative AI and foundation models. Could you elaborate on your investment thesis and vision of the generative AI landscape?
Madrona believes that all applications will be intelligent. Intelligent Applications leverage ML models that use both historical and real-time data to build continuous learning systems and deliver insights that can help users make better decisions. We believe that Generative AI is a subsegment of the broader intelligent applications theme, and we believe there will be both generative-native apps (e.g., apps that natively leverage FMs) and generative-enhanced apps (e.g., apps that currently exist and will embed generative features). Today, most Generative AI is used in high-affordability use cases (where the generated content doesn't have to be 100% correct) and across modalities such as text, image, or code generation; however, we believe new multi-modal applications of Generative AI will emerge across new modalities, and as models get more accurate, they will be deployed in lower-affordability use cases. We are also big believers in Generative AI being immensely helpful in enterprise use cases as well as consumer use cases.
Your tenure as Corporate Vice President leading Microsoft's Developer Division gave you a unique perspective on developers' productivity and innovation. What are the most important software development toolset categories that will be reimagined with foundation models?
Developer tools, such as GitHub Copilot, can turn natural language prompts into coding suggestions. This makes software engineers more productive and allows them to focus on more important aspects of coding. We're also seeing other generative code companies that assist with not only code creation, but also unit testing, flagging suspicious behavior in the code base, and fixing other code issues. These types of developer tools will be reimagined through Foundation Models.
There is a lot of research in new generative AI areas such as chain-of-thought reasoning, continual learning, or action-taking. What are the most interesting new capabilities that you foresee in a new generation of foundation models?
One interesting new capability we are seeing is the idea of "agents". Agents are powerful units that can determine which actions to take and in what order. Agents can interpret natural language (capable of reasoning about an input), connect to external knowledge bases (capable of connecting to your databases), and then leverage APIs to take action (e.g., sending an email or loading a file).
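The interpret-connect-act loop described above can be sketched as a small dispatch pattern. This is a hedged, minimal illustration, not a real framework: `call_llm`, the tool names, and the canned decision it returns are all hypothetical placeholders standing in for an actual model call and real integrations.

```python
# Minimal sketch of the agent pattern: the model interprets a request and
# picks a tool, the runtime dispatches to it, and the result comes back.
# `call_llm` and all tool names are hypothetical placeholders.

def call_llm(prompt):
    """Placeholder for a real LLM call; returns a canned tool decision."""
    return {"tool": "send_email",
            "args": {"to": "team@example.com", "body": "Q3 report attached"}}

# The "take action" step: a registry of callable tools (APIs).
TOOLS = {
    "send_email": lambda to, body: f"emailed {to}",
    "load_file": lambda path: f"loaded {path}",
}

def run_agent(user_request):
    # 1. Interpret natural language: ask the model which action to take.
    decision = call_llm(f"Choose a tool for: {user_request}")
    # 2. Act: dispatch to the chosen tool with the model-supplied arguments.
    tool = TOOLS[decision["tool"]]
    return tool(**decision["args"])

print(run_agent("Email the Q3 report to the team"))  # → emailed team@example.com
```

In a real system the loop would repeat, feeding each tool's output back to the model so it can decide the next action or stop.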
Generative AI is certainly stealing all the headlines these days but there are a lot of exciting developments in other areas of AI such as self-supervised learning, few-shot learning or even federated and reinforcement learning. Is fine tuning all we need? Do you see other deep learning trends in the horizon with an equal potential of transformer-based pretrained models?
The pace of activity continues to be extraordinary, as we are seeing order-of-magnitude shifts every 6 months. While techniques like fine-tuning and few-shot learning are improving the performance of today's models, it is unclear how necessary they will be in tomorrow's models. We believe that all the excitement around generative models in academia and by practitioners will continue to yield new techniques for getting the most out of language models. As mentioned earlier, one trend we are particularly excited about is building "agents" on top of language models that are capable of working with external data sources and performing actions. We are still in the early days of leveraging LLMs to perform actions, but once they can, we believe there will be an exciting new wave of applications built. For example, a next-generation travel application would use an LLM to understand the query "book me a flight to Seattle", have an up-to-date database of the most recent flight information, understand the parameters around date of travel, class of travel, and cost basis, and have the capability to book an appropriate flight.
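The travel example above can be made concrete as a two-step pipeline: extract structured parameters from the natural-language query, then match them against flight data. This is a hedged sketch under stated assumptions: `parse_query` is a toy stand-in for a real LLM extraction call, and `FLIGHTS` is an invented in-memory table, not a real flight API.

```python
# Sketch of the travel-agent example: parse a natural-language query into
# parameters, then pick the cheapest matching flight from a toy database.
# The keyword-based parser stands in for a real LLM call.

FLIGHTS = [
    {"dest": "Seattle", "date": "2023-06-01", "class": "economy", "price": 250},
    {"dest": "Seattle", "date": "2023-06-01", "class": "business", "price": 900},
]

def parse_query(query):
    """Stand-in for an LLM extracting structured parameters from text."""
    params = {"dest": None, "class": "economy"}
    words = query.lower().split()
    if "to" in words:  # destination follows "to" in queries like this one
        params["dest"] = query.split()[words.index("to") + 1].strip(".,").title()
    if "business" in words:
        params["class"] = "business"
    return params

def book_flight(query):
    params = parse_query(query)
    matches = [f for f in FLIGHTS
               if f["dest"] == params["dest"] and f["class"] == params["class"]]
    # Respect the cost-basis parameter by choosing the cheapest match.
    return min(matches, key=lambda f: f["price"]) if matches else None

print(book_flight("book me a flight to Seattle"))
```

A production version would replace the toy parser with a model call that also extracts dates and cost constraints, and would call a real booking API in the final step.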
💥 Miscellaneous – a set of rapid-fire questions
Will there be trillion-dollar generative-AI-based companies? If so, will you venture to predict that OpenAI will be one of them?
Well, Microsoft is a trillion-dollar company on its way to becoming a generative AI company. Does that count? ;) But yes, I believe OpenAI has the potential to be a trillion-dollar Generative AI company.
Biggest roadblock for the mainstream adoption of generative AI?
Computational resources. Training generative models can be expensive and there is currently a lack of GPUs. Legal/ethical/compliance concerns are another important set of things to continue working on as they could be another big roadblock for mainstream adoption.
Controversial question: which top-10 tech incumbent is dangerously falling behind in the generative AI race?
I think the top 10 technology companies are all making huge strides in Generative AI, with some, like Microsoft, ahead of the pack. I do think that companies like Google and Amazon are working hard on generative AI, but it's unclear where they stand as they haven't been too vocal yet about what they are doing. I think the rest of this year is going to be very interesting in that we will hear more from the top technology companies about what they are all doing, and that will shed more light on who might be falling behind.
Is the Turing Test still relevant? Any clever alternatives?
The principle behind the Turing Test is still relevant. The Turing Test, at its core, is a measurement of how good an autonomous agent is relative to a human at performing a task. We are seeing LLMs being applied to a variety of tasks, and whatever evaluation mechanism developers use, it is good to benchmark human performance against model performance. For example, how does an AI-generated summary compare to a human-generated summary? What is exciting is that we are actually beginning to see AI perform just as well, if not better than, humans at tasks like summarization and question answering! However, there are many tasks where humans still outperform AI, like programming. For these tasks, we like copilot-type experiences where AI is assistive. Perhaps an alternative measurement will be the degree to which an AI is assistive versus automating an entire task.
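The summary-comparison benchmarking mentioned above is often automated with overlap metrics. As a hedged illustration (the two summary strings are invented, and unigram-overlap F1 is only a rough, ROUGE-1-style proxy; real evaluations use richer metrics plus human review), one simple way to score a model summary against a human reference is:

```python
# Score a model-generated summary against a human reference using
# unigram-overlap F1 (a rough, ROUGE-1-style metric). Illustration only;
# the example summaries below are invented.

def overlap_f1(candidate, reference):
    cand = candidate.lower().split()
    ref = reference.lower().split()
    # Count shared words, respecting multiplicity.
    common = sum(min(cand.count(w), ref.count(w)) for w in set(cand))
    if common == 0:
        return 0.0
    precision = common / len(cand)  # fraction of candidate words in reference
    recall = common / len(ref)      # fraction of reference words covered
    return 2 * precision * recall / (precision + recall)

human = "the company reported strong quarterly revenue growth"
model = "the company reported strong revenue growth this quarter"
print(round(overlap_f1(model, human), 2))  # → 0.8
```

Higher scores mean the model's wording tracks the human reference more closely; the gap between the two is one concrete way to benchmark whether a model is approaching human performance on a task.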