The Sequence Chat: Jeff Bussgang – Flybridge Capital, Harvard Business School, About Investing in Generative AI
A VC perspective on generative AI market trends, the competitive landscape, and startups in the space.
Quick bio
Tell us a bit about yourself: Your background, current role, and how you got started in artificial intelligence and venture capital.
I was a computer science major at Harvard College, where I focused on AI and NLP. After graduation, I worked for two years at BCG and then went to Harvard Business School. After HBS, I joined a Series A startup called Open Market. After five years there (including an IPO in 1996), I cofounded a startup called Upromise. After three years there, I cofounded an early-stage VC firm, Flybridge, with a few friends who had backed my previous two startups. My investment practice has been rooted in enterprise software, SaaS, and AI/ML, with AI/ML becoming a greater focus in the last few years.
🛠 AI Work
Flybridge has been a very active early-stage investor in the generative AI space and has a long track record in enterprise software. Could you please elaborate on your investment thesis and vision?
We believe the age of AI is as important as, if not more important than, the age of the Internet, which I lived through as an entrepreneur and investor over recent decades. The foundations and platforms upon which our startups are building have become more and more powerful. As a result, the entrepreneurs we are investing in can leverage some extraordinary capabilities to solve problems today.
One of the most challenging questions when investing in generative AI is determining where the value will accrue in the long term. How do you differentiate potentially disruptive companies from mere features of other products powered by generative AI?
We are focused on investing in startups that stay close to customer needs and build on top of the platforms to create unique, differentiated solutions for both (1) vertical applications / use cases and (2) horizontal applications / use cases. Further, we are investing in (3) development tools that accelerate enterprise AI application development. Finally, we are investing in (4) new applications that could not have existed before the age of AI.
You can make a case that RLHF enabled the transition from GPT-3 to the mainstream ChatGPT phenomenon. What do you think is the next research breakthrough that can unlock the next wave of innovation in generative AI?
I am not a researcher, but I am intrigued by the tuning of models that are purpose-built for a particular use case. The history of software is that, yes, horizontal tools can be powerful building blocks (e.g., compute, cloud, databases). But customized software, sometimes full stack, for specific use cases can also be the most useful way to solve a problem. I look forward to the next wave of GPT-based models that build on the foundational advancements but are purpose-built.
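As a rough illustration of what purpose-built tuning can look like in practice (this is not a description of Flybridge's or any portfolio company's process; just a minimal sketch using OpenAI's fine-tuning API, with a hypothetical training file and use case):

```python
# Minimal sketch: fine-tuning a general-purpose model on domain-specific
# examples to produce a purpose-built variant. The JSONL file and the
# customer-support use case are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload domain-specific chat examples in the fine-tuning JSONL format.
training_file = client.files.create(
    file=open("support_conversations.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a fine-tuning job that yields a model tuned for this use case.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

Once the job completes, the resulting fine-tuned model id can be used in place of the base model for that vertical's workload.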
One of the most intriguing frictions in the generative AI space is the balance between open-source and closed-source/API-based models. How do you see the evolution of these two distribution models over the next few years? Who will emerge as the winner in the end?
Both! There will be hundreds of enterprise AI applications operating in each of thousands of companies. Just as the cloud is heterogeneous, just as databases are heterogeneous (and include open source platforms like our portfolio company MongoDB), the enterprise AI stack will be heterogeneous.
A lot of the large investments in generative AI have been in companies building massive foundation models. How far do you think the scaling laws can go in this area? Would we see LLMs that surpass $10B in pretraining/fine-tuning costs?
I really don’t know. The scaling laws I thought were in place no longer seem to hold. The scale of the investment is simply extraordinary.
SaaS incumbents have been incredibly fast, arguably faster than any other tech trend in history, to incorporate conversational capabilities into their platforms. Would we see a new form of SaaS platforms built with generative AI from the ground up? How could that look?
Yes. Existing SaaS franchises are under threat. Some will adapt. Others will be supplanted by native generative AI companies. We are entering an extremely volatile period of value creation and destruction.
The world today runs on established computing platforms such as the browser, cloud computing, and mobile phones, but you can make the case that any sufficiently disruptive tech trend in history can unlock a new computing platform. Could generative AI power a new generation of computing platforms? Give us an ambitious vision of that future.
Yes. It will be exciting to see what new modalities emerge. Maybe goggles or some AR/VR capability. Maybe vehicles. Maybe devices we can’t yet imagine. It’s going to be fun.
💥 Miscellaneous – a set of rapid-fire questions
Do you think there will be trillion-dollar native generative AI-based companies? If so, would you venture to predict that OpenAI will be one of them?
Yes. OpenAI has a cap on investor returns, so perhaps not them.
Which large tech incumbent (Apple, Microsoft, Google, Amazon, Meta) is most vulnerable to being disrupted by generative AI?
All of them. Apple may be the least vulnerable, as the phone form factor seems so enduring; Amazon next, as its logistics infrastructure seems similarly durable. Software-only businesses like Microsoft, Google, and Meta are very vulnerable to disruption.
What are the most significant mistakes that you see entrepreneurs making while building in the generative AI space? How about the most common mistakes investors make?
Not moving fast enough to adjust to current market conditions.
Who is your favorite mathematician or computer scientist, and why?
My father, Julian Bussgang, who passed away earlier this year. He formulated the Bussgang theorem, an important mathematical result in signal processing, radar, and the theory of stochastic processes.
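For readers unfamiliar with it, a rough statement of the theorem is this: if a zero-mean stationary Gaussian process $x(t)$ is passed through a memoryless nonlinearity $y(t) = g(x(t))$, the input–output cross-correlation is proportional to the input autocorrelation,

$$
R_{xy}(\tau) = C\,R_{xx}(\tau), \qquad C = \frac{\mathbb{E}\!\left[x\,g(x)\right]}{\mathbb{E}\!\left[x^{2}\right]},
$$

so the nonlinearity rescales, but does not reshape, the correlation structure of a Gaussian input.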