In this guest post, Jimmy Whitaker, Data Scientist in Residence at HumanSignal, guides users through building an agent with the Adala framework. He dives into the integration of Large Language Model-based agents for automating data pipelines, particularly for tasks like data labeling. The article details the process of setting up an environment, implementing an agent, and the iterative learning approach that enhances the agent's efficiency in data categorization. This approach combines human expertise with AI scalability, making these agents more effective in precise data tasks.
LLM-based agents have remarkable problem-solving capabilities, leading to a surge in their application across various industries. They can adapt their instructions to accomplish tasks, guided by nothing more than human-written prompts. Unsurprisingly, channeling these capabilities reliably is becoming a crucial task.
Adala is a framework for creating LLM-based agents to automate data pipelines, including tasks like data labeling and data generation. Its primary function is to provide these agents with guided learning opportunities, enabling them to act within the confines of a ground truth dataset and learn through dynamic feedback. The concept behind Adala is to combine human precision and expertise with AI model scalability. By doing so, these agents become more efficient in tasks where accurate data categorization is paramount.
This article aims to guide you through building your first data labeling agent using the Adala framework. The focus will be on understanding the underlying principles of these agents, setting up the necessary environment, and implementing a simple yet effective agent capable of classifying data based on a provided ground truth dataset. Through this, you will gain insights into the technical aspects of creating such an agent and the practical applications and benefits it offers.
Getting started with Adala
In this example, we will work through the Classification Skill Example notebook provided by Adala, using the pre-built classification skill. We aim to develop an agent that aids in data labeling for text classification, specifically categorizing product descriptions.
Adala agents are autonomous: they acquire skills through iterative learning and continuously refine them as their environment evolves. Our agent will teach itself by comparing its predictions to the ground truth dataset, using trial and error to refine its labeling instructions. In many cases, having LLMs perform these tasks directly can be sufficient, but relying solely on LLMs comes at a high operational cost. Curating a dataset and distilling this prior knowledge into a simpler model is more cost-effective over time.
Creating an Initial Dataset
We begin by creating an initial dataset for the agent to learn from. To show the agent's learning process, we'll start with a labeled dataset in a pandas DataFrame (`df`) that will serve as our ground truth data.
import pandas as pd

# Ground truth data: product descriptions with known categories
df = pd.DataFrame([
    {"text": "Apple product with a sleek design.", "category": "Electronics"},
    {"text": "Laptop stand for the kitchen.", "category": "Furniture/Home Decor"},
    {"text": "Chocolate leather boots.", "category": "Footwear/Clothing"},
    {"text": "Wooden cream for surfaces.", "category": "Furniture/Home Decor"},
    {"text": "Natural finish for your lips.", "category": "Beauty/Personal Care"}
])
df
This code generates a DataFrame with product descriptions and their corresponding categories, like so:
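                                 text              category
0  Apple product with a sleek design.           Electronics
1       Laptop stand for the kitchen.  Furniture/Home Decor
2            Chocolate leather boots.     Footwear/Clothing
3          Wooden cream for surfaces.  Furniture/Home Decor
4       Natural finish for your lips.  Beauty/Personal Care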
Building Your First Adala Agent
Building an Adala agent involves integrating two critical components: skills and the environment.
Skills - Skills represent the capabilities of an agent. An agent can possess multiple skills, typically enabled by Large Language Models (LLMs). For our purpose, the agent's skill is text classification, designed to operate within the context of a DataFrame, which we have already established.
Environment - The environment provides the setting in which the agent functions. For Adala, this typically involves incorporating a ground truth dataset, but it can also include human-in-the-loop feedback through various channels. In our example, the ground truth is the DataFrame created previously. The agent uses the `category` column within this DataFrame to compare its predictions against the actual labels, enabling it to refine its accuracy over time.
We start with the pre-built `ClassificationSkill`. This skill restricts the LLM output to the data labels. When run, this skill generates predictions in a new column within our DataFrame, enriching our environment with valuable insights. In practical scenarios, the environment can be set up to gather ground truth signals from actual human feedback, further enhancing the learning phase of the agent.
Here's how to set up your Adala agent:
from adala.agents import Agent
from adala.environments import StaticEnvironment
from adala.skills import ClassificationSkill

agent = Agent(
    # Skill: classify each product description into one of the known labels
    skills=ClassificationSkill(
        name='product_category_classification',
        input_template='Text: {text}',
        output_template='Category: {predicted_category}',
        labels={'predicted_category': [
            "Footwear/Clothing",
            "Electronics",
            "Food/Beverages",
            "Furniture/Home Decor",
            "Beauty/Personal Care"
        ]},
    ),
    # Environment: ground truth labels to compare predictions against
    environment=StaticEnvironment(
        df=df,
        ground_truth_columns={'predicted_category': 'category'}
    )
)
As we continually add data to our ground truth dataset, the agent gains access to more sophisticated and diverse information, enhancing its learning and predictive capabilities.
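For example, newly verified examples can be appended to the DataFrame and the environment rebuilt. Here is a minimal sketch, assuming we simply reconstruct the `StaticEnvironment` (the exact update API may vary across Adala versions) with a hypothetical new example:

# Hypothetical example row; append it and rebuild the static environment
new_examples = pd.DataFrame([
    {"text": "Espresso blend, dark roast.", "category": "Food/Beverages"}
])
df = pd.concat([df, new_examples], ignore_index=True)
agent.environment = StaticEnvironment(
    df=df,
    ground_truth_columns={'predicted_category': 'category'}
)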
Agent Learning
The learning process of the agent involves three distinct steps:
Application of Skills: Initially, the agent utilizes the LLM to predict categories for the examples in our ground truth dataset.
Error Analysis: After predictions, the agent evaluates its performance by calculating the classification accuracy. This phase involves a detailed analysis of where and why errors occurred, providing critical insights into the agent’s current capabilities.
Skill Improvement: Based on the insights gained from the error analysis, the agent updates its approach. This improvement involves refining the prompt, incorporating examples, and modifying instructions to enhance the skill’s accuracy and effectiveness.
The agent autonomously cycles through these steps when the `learn` function is called. This iterative process of applying skills, analyzing results, and making improvements enables the agent to align its predictions more closely with the ground truth dataset. The cycle can repeat until the agent achieves a state where errors are minimized or eliminated.
agent.learn()
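Depending on your Adala version, `learn` also accepts arguments that bound this loop, such as a maximum number of iterations and a target accuracy. The parameter names below follow the project's quickstart and may change between releases:

agent.learn(learning_iterations=3, accuracy_threshold=0.95)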
This command displays the enhanced classification skill. The instructions for categorizing products are now fine-tuned with specific examples that show how to label a product based on its primary function or purpose, demonstrating the agent's improved understanding and the effectiveness of the learning process.
Testing the Agent’s Skill
With the agent's skills refined, it's time to assess its categorization capability using new product descriptions. The following example showcases a test DataFrame:
test_df = pd.DataFrame([
    "Stainless steel apple peeler.",  # Potential categories: Electronics or Food/Beverages
    "Silk finish touch screen.",      # Potential categories: Electronics or Beauty/Personal Care
    "Chocolate coated boots.",        # Potential categories: Footwear/Clothing or Food/Beverages
    "Natural wood fragrance.",        # Potential categories: Beauty/Personal Care or Furniture/Home Decor
    "Leather grain snack bar.",       # Potential categories: Footwear/Clothing or Food/Beverages
], columns=['text'])
predictions = agent.run(test_df)
The `run` command enables the agent to apply its learned classification skill to the new dataset, predicting the most appropriate category for each product description. These predictions can be incorporated into data labeling platforms like Label Studio for human verification. This is one of the key aspects of Adala - incorporating human feedback by way of ground truth data. Once reviewed, we can optionally add this data to our environment to iteratively improve our classification skill.
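Returning to the earlier point about operational cost: once predictions are verified, they can be used to distill the LLM's knowledge into a much cheaper model. Below is a minimal sketch using scikit-learn (not part of Adala), assuming the verified labels live in the `predicted_category` column defined by the skill's output template:

# Distillation sketch: train a lightweight classifier on verified,
# agent-labeled text so that future predictions avoid LLM calls
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

distilled = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
distilled.fit(predictions['text'], predictions['predicted_category'])

# The distilled model now categorizes new descriptions locally
print(distilled.predict(["Sleek aluminum tablet."]))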
Where does Adala fit?
Adala, as a framework for creating autonomous data agents, occupies a unique niche in the landscape of LLM-based applications. Its comparison with other notable LLM implementations like OpenAI GPTs and AutoGPT highlights its distinctive role and capabilities. Unlike the broad applicability of OpenAI GPTs in generating text and engaging in conversation, Adala's focus is narrower yet deeper in its domain of data processing.
Another differentiator is that Adala is a framework for building “data-centric” agents guided by human feedback, whether through a ground truth dataset or through predictions reviewed via different channels. This specialization makes Adala better suited for tasks that require precision and reliability, a critical aspect of machine learning and data analysis.
Although it can use the same GPT models from OpenAI, Adala supports multiple runtimes, allowing the model behind an agent to be chosen based on the domain-specific use case, economics, or even data privacy requirements.
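For instance, a specific runtime can be selected when constructing the agent. The class and parameter names below are assumptions based on the Adala 0.3-era API, so check the documentation for your version:

# Hypothetical runtime configuration; exact class names may differ
from adala.agents import Agent
from adala.runtimes import OpenAIChatRuntime
from adala.skills import ClassificationSkill

agent = Agent(
    skills=ClassificationSkill(
        name='product_category_classification',
        input_template='Text: {text}',
        output_template='Category: {predicted_category}',
        labels={'predicted_category': ["Electronics", "Food/Beverages"]},
    ),
    runtimes={'openai': OpenAIChatRuntime(model='gpt-3.5-turbo')},
    default_runtime='openai',
)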
Conclusion
Adala distinguishes itself in today's generative AI arena by enhancing data labeling accuracy and efficiency, and the community will continue working to automate the data pipelines that fuel AI models and applications. Adala's focused approach to data labeling makes it a vital tool for combining human-like meticulousness with AI scalability.
Adala is under active development with new releases every two weeks. The latest version, 0.3.0, includes additional skills and environments along with a number of other enhancements. To keep abreast of these developments, follow the Adala repository on GitHub. Also, try these features and capabilities on your own data and share your feedback with us in the Adala Discord!