Edge 352: Inside the Embeddings Architecture Powering Job Recommendations at LinkedIn
Insights into one of the largest embedding architectures ever built.
Embeddings have become one of the most important components of large language model (LLM) applications in recent months. Entire market segments, such as vector databases, have emerged to power embedding architectures. However, embedding architectures are still in their early stages, and only a handful of organizations have successfully implemented them at scale. That’s why it is so important to learn about the best practices and techniques these organizations use. Recently, LinkedIn published some details about its use of Embedding Based Retrieval (EBR) technology to transform its search and recommendation systems. If you’ve ever come across the “Jobs You Might Be Interested In” feature or noticed the tailored content in your LinkedIn Feed and Notifications, you’ve seen EBR in action.
So, what’s EBR? In simple terms, it’s a technique used in the first, candidate-retrieval stage of a recommendation system. It scans a vast array of items (like job postings or feed articles) and identifies those most relevant to a given request based on their similarity to it. Think of it as finding items that are “nearby” in a shared embedding space. Once these candidates are identified, another AI model ranks them to present the most pertinent ones to the user.
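To make the two-stage idea concrete, here is a minimal sketch in Python using brute-force cosine similarity over precomputed vectors. The array names, sizes, and random embeddings are purely illustrative; a production system like LinkedIn’s would use a trained encoder and an approximate nearest-neighbor index rather than an exhaustive scan:

```python
import numpy as np

# Hypothetical precomputed embeddings: one row per item (e.g., a job posting).
# In a real system these would come from a trained encoder model.
item_embeddings = np.random.rand(10_000, 128).astype(np.float32)
query_embedding = np.random.rand(128).astype(np.float32)

def top_k_by_cosine(query: np.ndarray, items: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k items most similar to the query."""
    # Normalize so the dot product equals cosine similarity.
    query = query / np.linalg.norm(query)
    items = items / np.linalg.norm(items, axis=1, keepdims=True)
    scores = items @ query
    # argpartition finds the top k in O(n) without fully sorting all scores.
    top_k = np.argpartition(-scores, k)[:k]
    return top_k[np.argsort(-scores[top_k])]  # order the k winners by score

# Retrieval stage: narrow 10,000 items down to 100 "nearby" candidates.
candidates = top_k_by_cosine(query_embedding, item_embeddings, k=100)

# Ranking stage (placeholder): a second, heavier model would re-score these
# candidates; here we simply keep the retrieval order.
print("Top candidate item ids:", candidates[:10])
```

The split matters because the retrieval stage must be cheap enough to run over millions of items, while the ranking stage can afford a more expensive model on the small candidate set.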
To streamline the use of EBR, LinkedIn has rolled out several new tools and features: