Edge 360: Meet Ghostbuster: An AI Technique for Detecting LLM-Generated Content
Created by researchers at UC Berkeley, the new method uses token probability distributions to estimate the likelihood that a document contains AI-generated text.
The rapid evolution of large language models (LLMs) has created new challenges in differentiating between human- and AI-generated content. All sorts of solutions have emerged to tackle this problem, but their false positive rates remain a serious concern. Recently, Berkeley AI Research (BAIR) published a new paper introducing a technique for identifying AI-generated content.
Ghostbuster, as presented in the paper, is a promising solution for identifying AI-generated text. It works by computing the probability of generating each token in a document under a series of weaker language models. Ghostbuster then combines functions of these token probabilities into features that feed a final classifier, which decides whether the document was written by an AI.
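To make the pipeline concrete, here is a minimal sketch of the idea in Python. The per-token log-probabilities are simulated with random data rather than produced by real models, and the specific feature combinations (means, variance, probability ratio) are illustrative assumptions; the actual paper runs a structured search over such feature functions rather than hand-picking them.

```python
# A minimal sketch of a Ghostbuster-style pipeline (not the paper's exact
# feature search). Assumes per-token log-probabilities from two weaker
# language models are already available; here they are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def features(lp_weak1: np.ndarray, lp_weak2: np.ndarray) -> np.ndarray:
    """Combine per-token log-probs from two weak models into scalar features.

    These hand-picked combinations are illustrative only; the real system
    searches over a space of vector and scalar operations.
    """
    ratio = lp_weak1 - lp_weak2  # log of the per-token probability ratio
    return np.array([
        lp_weak1.mean(),                        # avg log-prob under model 1
        lp_weak2.mean(),                        # avg log-prob under model 2
        ratio.mean(),                           # how differently the models score the text
        lp_weak1.var(),                         # spread of model-1 token probabilities
        np.minimum(lp_weak1, lp_weak2).mean(),  # elementwise min, then mean
    ])

def fake_doc(ai: bool, n_tokens: int = 200) -> np.ndarray:
    # Toy assumption: AI text is more predictable (higher log-probs on average).
    loc = -2.0 if ai else -3.5
    lp1 = rng.normal(loc, 1.0, n_tokens)
    lp2 = lp1 + rng.normal(0.0, 0.5, n_tokens)  # correlated second model
    return features(lp1, lp2)

# Build a synthetic labeled corpus and train the final classifier.
X = np.stack([fake_doc(ai=(i % 2 == 0)) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)], dtype=int)

clf = LogisticRegression(max_iter=1000).fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))
```

The key design point this sketch tries to capture is that the classifier never sees the raw text, only functions of token probabilities produced by weaker models, which is what lets the approach generalize across documents.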