📝 Guest Post: An introduction to Similarity Search
In this guest post, Frank Liu, Director of Operations & ML Architect @ Zilliz, conducts a quick tour of Similarity Search, comparing embeddings and vector search strategies.
An introduction to Similarity Search
Vector similarity search is the process of finding similar vectors in a high-dimensional vector space. It has become a critical tool for many applications, such as recommendation systems, image and video search, and natural language processing (generative AI). This blog post discusses the concept of vector similarity search, its importance, and some techniques used to perform it.
What is Vector Similarity Search?
Embedding vectors are mathematical representations of data points and can represent many data types, including text, images, and audio. Vector similarity search is a technique for finding the vectors most similar to a query vector. By applying a liberal quantity of vector algebra to embeddings, we can perform scalable semantic analysis using just basic mathematical operators. For example, in vector similarity search, you can use a distance metric such as L1 distance, L2 distance, or cosine similarity to determine how similar two vectors are. Semantically similar pieces of unstructured data are "near" one another, while dissimilar pieces of unstructured data are "far" from one another.
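As a quick illustration, here's a minimal sketch using NumPy with made-up three-dimensional "embeddings" (real embeddings typically have hundreds or thousands of dimensions) showing how a distance metric separates "near" from "far":

```python
import numpy as np

# Hypothetical embeddings: "cat" and "kitten" should land near each other,
# while "airplane" should land far from both.
cat = np.array([0.9, 0.1, 0.2])
kitten = np.array([0.85, 0.15, 0.25])
airplane = np.array([0.1, 0.9, 0.7])

def l2_distance(a, b):
    """L2 (Euclidean) distance between two vectors."""
    return np.linalg.norm(a - b)

print(l2_distance(cat, kitten))    # small distance -> semantically "near"
print(l2_distance(cat, airplane))  # large distance -> semantically "far"
```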
Why is Vector Similarity Search Important?
Vector similarity search is an essential tool for a wide range of applications, such as:

- Product recommender systems on e-commerce sites, which recommend products to users based on browsing history
- Image/video/audio similarity search, which finds similar media based on visual (or acoustic) features
- Anomaly detection and natural language processing, e.g., finding documents semantically similar to a given query
- And the currently most famous example: assisting generative AI tools like ChatGPT
Comparing embeddings
Let's go through a couple of word embedding examples. For the sake of simplicity, we'll use `word2vec`, an old model which uses a training methodology based on skipgrams. BERT and other modern transformer-based models will be able to provide you with more contextualized word embeddings, but we'll stick with `word2vec` for simplicity. If you're interested in learning more, Jay Alammar delivers a great tutorial on `word2vec`.
Some prep work
Before beginning, we'll need to install the `gensim` library and load a `word2vec` model.
% pip install gensim --disable-pip-version-check
% wget https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz
% gunzip GoogleNews-vectors-negative300.bin.gz
Requirement already satisfied: gensim in /Users/fzliu/.pyenv/lib/python3.8/site-packages (4.1.2)
Requirement already satisfied: smart-open>=1.8.1 in /Users/fzliu/.pyenv/lib/python3.8/site-packages (from gensim) (5.2.1)
Requirement already satisfied: numpy>=1.17.0 in /Users/fzliu/.pyenv/lib/python3.8/site-packages (from gensim) (1.19.5)
Requirement already satisfied: scipy>=0.18.1 in /Users/fzliu/.pyenv/lib/python3.8/site-packages (from gensim) (1.7.3)
--2022-02-22 00:30:34-- https://s3.amazonaws.com/dl4j-distribution/GoogleNews-vectors-negative300.bin.gz
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.216.20.165
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.216.20.165|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1647046227 (1.5G) [application/x-gzip]
Saving to: GoogleNews-vectors-negative300.bin.gz
GoogleNews-vectors- 100%[===================>] 1.53G 2.66MB/s in 11m 23s
2022-02-22 00:41:57 (2.30 MB/s) - GoogleNews-vectors-negative300.bin.gz saved [1647046227/1647046227]
Now that we've done all the prep work required to generate word-to-vector embeddings, let's load the trained `word2vec` model.
>>> from gensim.models import KeyedVectors
>>> model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
Example 0: Marlon Brando
Let's take a look at how `word2vec` interprets the famous actor Marlon Brando.
>>> print(model.most_similar(positive=['Marlon_Brando']))
[('Brando', 0.757453978061676), ('Humphrey_Bogart', 0.6143958568572998), ('actor_Marlon_Brando', 0.6016287207603455), ('Al_Pacino', 0.5675410032272339), ('Elia_Kazan', 0.5594002604484558), ('Steve_McQueen', 0.5539456605911255), ('Marilyn_Monroe', 0.5512186884880066), ('Jack_Nicholson', 0.5440199375152588), ('Shelley_Winters', 0.5432392954826355), ('Apocalypse_Now', 0.5306933522224426)]
Marlon Brando worked with Al Pacino in The Godfather and Elia Kazan in A Streetcar Named Desire. He also starred in Apocalypse Now.
Example 1: If all of the kings had their queens on the throne
Vectors can be added and subtracted from each other to demonstrate underlying semantic relationships.
>>> print(model.most_similar(positive=['king', 'woman'], negative=['man'], topn=1))
[('queen', 0.7118193507194519)]
Who says engineers can't enjoy a bit of dance-pop now and then?
Example 2: Apple, the company, the fruit, ... or both?
The word "apple" can refer to both the company as well as the delicious red fruit. In this example, we can see that word2vec
retains both meanings.
>>> print(model.most_similar(positive=['samsung', 'iphone'], negative=['apple'], topn=1))
>>> print(model.most_similar(positive=['fruit'], topn=10)[9:])
[('droid_x', 0.6324754953384399)]
[('apple', 0.6410146951675415)]
"Droid" refers to Samsung's first 4G LTE smartphone ("Samsung" + "iPhone" - "Apple" = "Droid"), while "apple" is the 10th closest word to "fruit".
Vector search strategies
Now that we've seen the power of embeddings, let's briefly look at some ways we can conduct nearest neighbor search. First, let's review some standard methods to provide a high-level overview of how vector search works at scale. Note that some of these methods are not mutually exclusive - it's possible, for example, to use quantization in conjunction with space partitioning.
Linear search
The most straightforward but naïve nearest neighbor search algorithm is good old linear search: computing the distance from a query vector to all other vectors in the vector database.
For obvious reasons, naïve search does not work when scaling our vector database to tens or hundreds of millions of vectors. Still, when the total number of elements in the database is small, this can be the most efficient way to perform a vector search since a separate data structure for the index is not required. At the same time, you can perform inserts and deletes relatively quickly.
Because naïve search requires no separate index data structure and incurs only constant space overhead, it can often outperform space partitioning even when querying across a moderate number of vectors.
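Here's a minimal brute-force search sketch in NumPy; the dataset, dimensionality, and value of k are made up purely for illustration:

```python
import numpy as np

np.random.seed(0)
database = np.random.rand(10_000, 128)  # 10k hypothetical 128-dimensional embeddings
query = np.random.rand(128)

# Compute the L2 distance from the query to every vector in the database...
distances = np.linalg.norm(database - query, axis=1)

# ...and keep the k closest ones.
k = 5
nearest = np.argsort(distances)[:k]
print(nearest, distances[nearest])
```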
Space partitioning
Space partitioning is not a single algorithm but a family of algorithms using the same concept.
K-dimensional trees (kd-trees) are perhaps the most well-known in this family and work by recursively bisecting the search space (splitting the vectors into “left” and “right” buckets) like binary search trees.
Inverted file index (IVF) is also a form of space partitioning, and works by assigning each vector to its nearest centroid - searches are then conducted by first determining the query vector's closest centroid and conducting the search around there, significantly reducing the total number of vectors that need to be searched.
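To make the idea concrete, here's a rough IVF-style sketch, with centroids computed via scikit-learn's KMeans purely for illustration (a production IVF index, such as the ones in FAISS or Milvus, does considerably more; the parameter names n_list and n_probe are just illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

np.random.seed(0)
database = np.random.rand(10_000, 128).astype(np.float32)
query = np.random.rand(128).astype(np.float32)

# "Train" the index: partition the space into n_list cells around k-means centroids.
n_list = 64
kmeans = KMeans(n_clusters=n_list, n_init=10).fit(database)
assignments = kmeans.labels_  # nearest centroid for each database vector

# Search: find the closest centroid(s) to the query, then scan only those cells.
n_probe = 4
centroid_dists = np.linalg.norm(kmeans.cluster_centers_ - query, axis=1)
probe_cells = np.argsort(centroid_dists)[:n_probe]

candidates = np.where(np.isin(assignments, probe_cells))[0]
dists = np.linalg.norm(database[candidates] - query, axis=1)
print(candidates[np.argsort(dists)[:5]])  # approximate top-5 neighbors
```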
Quantization
Quantization is a technique for reducing the database's total size by reducing the vectors' precision.
Scalar quantization (SQ), for example, works by multiplying high-precision floating point vectors with a scalar value, then casting the elements of the resultant vector to their nearest integers. As a result, SQ reduces the effective size of the entire database (e.g., by a factor of eight for conversion from `float64_t` to `int8_t`) and has the positive side-effect of speeding up vector-to-vector distance computations.
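A bare-bones sketch of scalar quantization in NumPy, assuming each dimension is quantized independently to int8 (real implementations also keep the scale and offset around so distances can be computed or corrected in the quantized domain):

```python
import numpy as np

vectors = np.random.randn(1000, 128)  # float64 by default

# Per-dimension scale mapping the observed value range onto the int8 range.
v_min, v_max = vectors.min(axis=0), vectors.max(axis=0)
scale = 255.0 / (v_max - v_min)

quantized = np.round((vectors - v_min) * scale - 128).astype(np.int8)
print(vectors.nbytes, "->", quantized.nbytes)  # 8x smaller (float64 -> int8)
```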
Product quantization (PQ) is another technique similar to dictionary compression. In PQ, all vectors are split into equally-sized subvectors, and each subvector is then replaced with a centroid.
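And a similarly stripped-down product quantization sketch: split every vector into m subvectors and replace each subvector with the ID of its nearest centroid (scikit-learn's KMeans is again used purely for illustration; the parameters m and n_centroids are arbitrary choices here):

```python
import numpy as np
from sklearn.cluster import KMeans

vectors = np.random.randn(1000, 128).astype(np.float32)
m, n_centroids = 8, 256                        # 8 subvectors, 256 centroids per subspace
subvectors = vectors.reshape(1000, m, 128 // m)

codes = np.empty((1000, m), dtype=np.uint8)    # compressed representation
codebooks = []
for i in range(m):
    km = KMeans(n_clusters=n_centroids, n_init=4).fit(subvectors[:, i, :])
    codebooks.append(km.cluster_centers_)      # needed later to approximate distances
    codes[:, i] = km.labels_                   # each subvector -> a 1-byte centroid ID

print(vectors.nbytes, "->", codes.nbytes)      # 512 bytes/vector -> 8 bytes/vector
```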
Hierarchical Navigable Small Worlds
Hierarchical Navigable Small Worlds (HNSW) is a graph-based indexing and retrieval algorithm. HNSW works differently from product quantization: instead of improving the searchability of the database by reducing its effective size, HNSW creates a multi-layer graph from the original data. Upper layers contain only "long connections," while lower layers have only "short connections" between vectors in the database (see the section on similarity metrics below for an overview of vector distance metrics). Individual graph connections are created à la skip lists.
With this architecture in place, searching becomes fairly straightforward – we greedily traverse the uppermost graph (the one with the longest inter-vector connections) for the vector closest to our query vector. We then do the same for the second layer, using the result of the first layer search as the starting point. This continues until we complete the search at the bottommost layer, the result of which becomes the nearest neighbor of the query vector.
<div align="center">
<img align="center" src="./pic/hnsw_visualized.png">
</div>
<p style="text-align:center"><sub>HNSW, visualized (from https://arxiv.org/abs/1603.09320).</sub></p>
Approximate Nearest Neighbors Oh Yeah
Due to its playful and unintuitive name, ANNOY is my favorite ANN algorithm. Approximate Nearest Neighbors Oh Yeah (ANNOY) is a tree-based algorithm popularized by Spotify (Spotify used ANNOY in their music recommendation system). Despite the strange name, ANNOY's underlying concept is reasonably straightforward – binary trees.
ANNOY works by randomly selecting two vectors in the database and bisecting the search space along the hyperplane separating those two vectors. This is done iteratively until each node holds fewer than a predefined number of elements (`NUM_MAX_ELEMS`). Since the resulting index is essentially a binary tree, we can search in O(log n) time.
<div align="center">
<img align="center" src="https://raw.github.com/spotify/annoy/master/ann.png">
</div>
<p style="text-align:center"><sub>ANNOY, visualized (from https://github.com/spotify/annoy).</sub></p>
Commonly used similarity metrics
The best vector databases are useless without similarity metrics – methods for computing the distance between two vectors. Numerous metrics exist, so we will discuss only the most commonly used subset here.
Floating point vector similarity metrics
The most common floating point vector similarity metrics are L1 distance, L2 distance, and cosine similarity. The first two are distance metrics (lower values imply more similarity, while higher values imply less), whereas cosine similarity is a similarity metric (higher values imply more similarity):
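In their standard forms:

$$d_{L1}(\mathbf{a}, \mathbf{b}) = \sum_{i=1}^{n} |a_i - b_i| \tag{1}$$

$$d_{L2}(\mathbf{a}, \mathbf{b}) = \sqrt{\sum_{i=1}^{n} (a_i - b_i)^2} \tag{2}$$

$$\cos(\theta) = \frac{\mathbf{a} \cdot \mathbf{b}}{\lVert\mathbf{a}\rVert_2\,\lVert\mathbf{b}\rVert_2} \tag{3}$$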
L1 distance is also commonly referred to as Manhattan distance, aptly named after the fact that getting from point A to point B in Manhattan requires moving along one of two perpendicular directions. The second equation, L2 distance, is simply the distance between two vectors in Euclidean space. The third and final equation is cosine similarity, equivalent to the cosine of the angle between the two vectors. Note that the equation for cosine similarity works out to be the dot product between normalized versions of the input vectors a and b.
With a bit of math, we can also show that L2 distance and cosine similarity are effectively equivalent when it comes to similarity ranking for unit norm vectors:
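$$d_{L2}(\mathbf{a}, \mathbf{b}) = \lVert\mathbf{a} - \mathbf{b}\rVert_2 = \sqrt{\lVert\mathbf{a}\rVert_2^2 - 2\,\mathbf{a}\cdot\mathbf{b} + \lVert\mathbf{b}\rVert_2^2}$$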
Recall that unit norm vectors have a magnitude of 1:
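$$\lVert\mathbf{a}\rVert_2 = \lVert\mathbf{b}\rVert_2 = 1$$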
With this, we get:
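$$d_{L2}(\mathbf{a}, \mathbf{b}) = \sqrt{2 - 2\,\mathbf{a}\cdot\mathbf{b}}$$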
Since we have unit norm vectors, cosine similarity works out to be the dot product between a and b (the denominator in equation 3 above works out to be 1):
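$$\cos(\theta) = \frac{\mathbf{a} \cdot \mathbf{b}}{\lVert\mathbf{a}\rVert_2\,\lVert\mathbf{b}\rVert_2} = \mathbf{a} \cdot \mathbf{b}$$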
Essentially, for unit norm vectors, L2 distance and cosine similarity are functionally equivalent! Always remember to normalize your embeddings.
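Since L2 distance is then a monotonically decreasing function of the dot product, sorting by ascending L2 distance and by descending cosine similarity produces the same ranking. A quick numerical sanity check with NumPy (random toy vectors, normalized to unit length):

```python
import numpy as np

np.random.seed(0)
query = np.random.rand(128)
vectors = np.random.rand(100, 128)

# Normalize everything to unit length.
query /= np.linalg.norm(query)
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

l2_ranking = np.argsort(np.linalg.norm(vectors - query, axis=1))  # ascending distance
cos_ranking = np.argsort(-(vectors @ query))                      # descending similarity
print(np.array_equal(l2_ranking, cos_ranking))                    # True (barring exact ties)
```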
Binary vector similarity metrics
Binary vectors, as their name suggests, do not have metrics based on the same arithmetic used for floating point vectors. Similarity metrics for binary vectors instead rely on either set mathematics, bit manipulation, or a combination of both (it's okay, I also hate discrete math). Here are the formulas for two commonly used binary vector similarity metrics:
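$$d_{J}(\mathbf{a}, \mathbf{b}) = 1 - \frac{\lvert\mathbf{a} \cap \mathbf{b}\rvert}{\lvert\mathbf{a} \cup \mathbf{b}\rvert}$$

$$d_{H}(\mathbf{a}, \mathbf{b}) = \sum_{i=1}^{n} a_i \oplus b_i$$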
The first equation is called Tanimoto/Jaccard distance, and is essentially a measure of the amount of overlap between two binary vectors. The second equation is Hamming distance, and is a count of the number of vector elements in a and b which differ from each other.
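Both can be computed in a couple of lines with NumPy's bitwise operations on boolean vectors; the two toy vectors here are made up for illustration:

```python
import numpy as np

a = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=bool)
b = np.array([1, 1, 1, 0, 0, 1, 0, 1], dtype=bool)

hamming = np.count_nonzero(a ^ b)                                # number of differing bits
jaccard = 1 - np.count_nonzero(a & b) / np.count_nonzero(a | b)  # 1 - overlap/union
print(hamming, jaccard)                                          # 3 0.5
```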
You can most likely safely ignore these similarity metrics, since the majority of applications use cosine similarity over floating point embeddings.
Wrapping up
Vector similarity search is an essential tool for a wide range of applications. It involves finding vectors similar to a query vector in a high-dimensional vector space. We reviewed some standard methods to provide a high-level overview of how vector search works at scale. We also quickly looked at a few metrics for computing the distance between two vectors. I hope you found this helpful, and let me know in the comments below what you would like me to cover next time! In the meantime, try it for yourself with the free trial of Zilliz or download open-source Milvus!