💡 ML Concept of the Day: What is Prefix-Tuning?
Continuing with our series about fine-tuning methods in foundation models, we would like to start diving into a group of techniques that sit between vanilla in-context learning and full fine-tuning. Some experts might not consider these techniques to be fine-tuning at all, but I think they deserve to be covered in this series. The first of these methods is known as prefix-tuning, and it was proposed by Stanford University researchers in 2021. Instead of updating the weights of the pretrained model, prefix-tuning freezes them and learns a small set of continuous task-specific vectors (the "prefix") that are prepended to the input, so only a tiny fraction of parameters is trained per task.
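To make the idea concrete, here is a minimal, hypothetical sketch in PyTorch: a trainable prefix of continuous vectors is prepended to the (already embedded) input sequence, while every parameter of the frozen base encoder stays untouched. All names (`PrefixTunedEncoder`, `prefix_len`, etc.) are illustrative assumptions, not the original authors' implementation, which injects prefixes at every attention layer rather than only at the input.

```python
import torch
import torch.nn as nn

class PrefixTunedEncoder(nn.Module):
    """Illustrative sketch: freeze a base encoder, learn only a prefix."""

    def __init__(self, base_encoder: nn.Module, embed_dim: int, prefix_len: int = 10):
        super().__init__()
        self.base = base_encoder
        # Freeze all base-model parameters; only the prefix is trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # Trainable continuous "prefix" vectors prepended to every input.
        self.prefix = nn.Parameter(torch.randn(prefix_len, embed_dim) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim) of already-embedded tokens
        batch = x.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return self.base(torch.cat([prefix, x], dim=1))

# Toy usage with a small frozen Transformer encoder.
base = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True),
    num_layers=1,
)
model = PrefixTunedEncoder(base, embed_dim=32, prefix_len=5)
out = model(torch.randn(2, 7, 32))
print(out.shape)  # 5 prefix vectors + 7 input tokens -> (2, 12, 32)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the prefix is trainable
```

The payoff is storage and compute: each downstream task only needs its own small prefix, while a single copy of the frozen base model is shared across all tasks.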