TheSequence


Edge 290: Inside Koala, Berkeley University’s LLaMA-Based Model Fine-Tuned with ChatGPT Dialogues

The model provides a lighter, open-source alternative to ChatGPT and includes EasyLM, a framework for training and fine-tuning LLMs.

May 11, 2023

Created Using Midjourney

The accidental leak of the weights of Meta AI's LLaMA LLM has sparked a tremendous wave of innovation in the open-source LLM space. Since the leak, we have seen models such as Alpaca, Vicuna, and ChatLlama build on the foundations of LLaMA to create conversational agents that rival the capabilities of ChatGPT. One of the latest additions to the list is Koala (yes, another animal-named model), a chatbot created by Berkeley AI Research (BAIR) that fine-tunes LLaMA on dialogues gathered from the internet, including conversations with ChatGPT.
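Koala's exact training recipe is behind the paywall, but the core idea of dialogue-based supervised fine-tuning can be sketched: each scraped conversation is flattened into a single training string with role markers, and the model is then trained to predict the assistant turns. A minimal, hypothetical preprocessing step might look like the following (the role tags, function name, and sample data are illustrative assumptions, not Koala's actual code or template):

```python
def format_dialogue(turns):
    """Flatten a list of {'role', 'content'} turns into one
    training string with simple role markers.

    The USER:/GPT: markers are an illustrative format, not the
    template Koala actually uses.
    """
    parts = []
    for turn in turns:
        tag = "USER" if turn["role"] == "user" else "GPT"
        parts.append(f"{tag}: {turn['content']}")
    # Trailing newline so examples can be concatenated cleanly.
    return "\n".join(parts) + "\n"


# Hypothetical scraped conversation in ChatGPT-style message format.
conversation = [
    {"role": "user", "content": "What is LLaMA?"},
    {"role": "assistant", "content": "A family of open LLMs from Meta AI."},
]

example = format_dialogue(conversation)
```

Strings produced this way would feed a standard causal-language-modeling fine-tuning loop, typically with the loss masked out on the user turns so the model only learns to generate the assistant's side of the dialogue.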

This post is for paid subscribers

© 2025 Jesus Rodriguez
