We all know that AI is changing the world. But what happens when you combine AI with the power of open source?
Over the past year, there has been an explosion of open source generative AI projects on GitHub: by our count, more than 8,000. They range from commercially backed large language models (LLMs) like Meta’s LLaMA to experimental open source applications.
These projects offer many benefits to open source developers and the machine learning community—and are a great way to start building new AI-powered features and applications.
In this article, we’ll explore:
- The differences between open source LLMs and closed source pre-trained models
- Best practices for fine-tuning LLMs
- The open source LLMs available today
- What the future holds for the rapidly evolving world of generative AI
Let’s jump in.
Open source vs. closed source LLMs
By now, most of us are familiar with LLMs: neural network-based language models trained on vast quantities of data to mimic human behavior by performing various downstream tasks, like question answering, translation, and summarization. LLMs have disrupted the world with the introduction of tools like ChatGPT and GitHub Copilot.
Open source LLMs differ from their closed counterparts regarding the source code (and sometimes other components, as well). With closed LLMs, the source code—which explains how the model is structured and how the training algorithms work—isn’t published.
“When you’re doing research, you want access to the source code so you can fine-tune some of the pieces of the algorithm itself,” says Alireza Goudarzi, a senior researcher of machine learning at GitHub. “With closed models, it’s harder to do that.”
Open source LLMs help the industry at large: because so many people contribute, they can be developed faster than closed models. They can also be more effective for edge cases or specific applications (like local language support), can include bespoke security controls, and can run on local machines.
But closed models—often built by larger companies—have advantages, too. For one, they’re embedded in systems with filters for biased information, inappropriate language, and other questionable content. They also frequently have security measures baked in. Plus, they don’t need fine-tuning, a specialized skill set requiring dedicated people and teams.
“Closed, off-the-shelf LLMs are high quality,” notes Eddie Aftandilian, a principal researcher at GitHub. “They’re often far more accessible to the average developer.”
How to fine-tune open source LLMs
Fine-tuning an open source model is typically done on a large cloud provider that hosts the LLM, such as AWS, Google Cloud, or Microsoft Azure. Fine-tuning lets you optimize the model for your use case, enabling more advanced language interactions in applications like virtual assistants and chatbots, and it can improve model accuracy anywhere from five to 10 percent.
As for best practices? Goudarzi recommends being careful about data sampling and being clear about the specific needs of the application you’re trying to build. Because these models are pre-trained on whatever can be found online, your curated fine-tuning data should match your needs exactly.
“You need to emphasize certain things related to your objectives,” he says. “Let’s say you’re trying to create a model to process TV and smart home commands. You’d want to preselect your data to have more of a command form.”
This will help optimize model efficiency.
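To make that concrete, here’s a minimal fine-tuning sketch using the Hugging Face transformers Trainer. The base model name and the curated dataset file are placeholders rather than a prescribed setup; swap in whichever open source LLM and task-specific data you’re working with.

```python
# Minimal supervised fine-tuning sketch with Hugging Face transformers.
# Assumptions: "openlm-research/open_llama_3b" and "curated_commands.jsonl"
# are placeholders for your chosen base model and curated dataset.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "openlm-research/open_llama_3b"
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:  # LLaMA-style tokenizers often ship without one
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Curated, task-specific examples only -- e.g., one JSON object per line:
# {"text": "Turn off the living room lights."}
dataset = load_dataset("json", data_files="curated_commands.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # mlm=False gives the causal language modeling objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The important part mirrors Goudarzi’s advice: the training file contains only preselected, command-form examples rather than generic web text.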
Choosing your model
Which open source model is best for you? Aftandilian recommends focusing on models’ performance benchmarks against different scenarios, such as reasoning, domain-specific understanding of law or science, and linguistic comprehension.
However, don’t assume that the benchmark results are correct or meaningful.
“Rather, ask yourself, how good is this model at a particular task?” he says. “It’s pretty easy to let benchmarks seep into the training set, which can mask a lack of deep understanding, skewed performance, or limited generalization.”
When this happens, the model is trained on its own evaluation data. “Which would make it look better than it should,” Aftandilian says.
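One safeguard is to re-run evaluations yourself on tasks close to your intended use, rather than relying on published numbers. Here’s a minimal sketch using EleutherAI’s lm-evaluation-harness; the model name and task are placeholders, and the exact interface varies between harness releases (this follows the 0.4.x API).

```python
# Re-run a public benchmark locally with EleutherAI's lm-evaluation-harness
# (pip install lm-eval). The model and task below are placeholders; pick
# tasks that resemble your actual workload.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=openlm-research/open_llama_3b",
    tasks=["hellaswag"],
    num_fewshot=0,
)
print(results["results"])
```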
You should also consider how much the model costs to run and its overall latency rates. A large model, for instance, might be exceptionally powerful. But if it takes minutes to generate responses versus seconds, there may be better options. (For example, the models that power GitHub Copilot in the IDE feature a latency rate of less than ten milliseconds, which is well-suited for getting quick suggestions.)
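Latency is easy to measure directly before committing to a model. Below is a rough timing sketch with transformers; the model name and prompt are placeholders, and real measurements should average over many prompts on your target hardware.

```python
# Rough latency check for a candidate model (names are placeholders).
import time
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openlm-research/open_llama_3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Write a function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")

start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=64)
elapsed = time.perf_counter() - start

new_tokens = outputs.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{elapsed:.2f}s total, {elapsed / new_tokens * 1000:.0f} ms per token")
```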
Open source LLMs available today
There are several open source, commercially licensed models available; a quick usage sketch follows the list. These include:
- OpenLLaMA: An open source reproduction of Meta’s LLaMA model developed by Berkeley AI Research, this project provides permissively licensed models with 3B, 7B, and 13B parameters, each trained on one trillion tokens. OpenLLaMA models have been evaluated with the lm-evaluation-harness and perform comparably to the original LLaMA and GPT-J across most tasks. But because the tokenizer’s configuration merges consecutive spaces, the models aren’t well suited to code generation tasks.
- Falcon-Series: Developed by Abu Dhabi’s Technology Innovation Institute (TII), the Falcon series consists of two models: Falcon-40B and Falcon-7B. Its training data pipeline extracts high-quality content from web data through deduplication and filtering. The models also use multi-query attention, which improves the scalability of inference. Falcon can generate human-like text, translate between languages, and answer questions.
- MPT-Series: A set of decoder-only large language models developed by MosaicML, the MPT series is trained on one trillion tokens spanning code, natural language text, and scientific text. The models come in two fine-tuned versions: MPT-Instruct, designed to be task-oriented, and MPT-Chat, which provides a conversational experience. The series is most suitable for virtual assistants, chatbots, and other interactive user engagement tools.
- FastChat-T5: A large transformer model with three billion parameters, FastChat-T5 is a chatbot model developed by the FastChat team by fine-tuning the Flan-T5-XL model. Trained on 70,000 user-shared conversations, it generates responses to user inputs autoregressively and is licensed for commercial use. It’s a strong fit for applications that need language understanding, like virtual assistants, customer support systems, and interactive platforms.
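Most of the models above can be tried in a few lines with transformers. Here’s a quick smoke-test sketch using OpenLLaMA; the prompt and decoding settings are illustrative, and the checkpoint name is taken from the project’s Hugging Face page.

```python
# Quick smoke test of an open source LLM from the Hugging Face Hub.
# Prompt and decoding settings are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openlm-research/open_llama_3b"
# The OpenLLaMA authors recommend the slow (sentencepiece) tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Q: What is the capital of France?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```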
The future of open source LLMs
There’s been a flurry of activity in the open source LLM world.
“Developers are very active on some of these open source models,” Aftandilian says. “They can optimize performance, explore new use cases, and push for new algorithms and more efficient use of data.”
And that’s just the start.
Meta’s LLaMA model is now available for commercial use, allowing businesses to create their own AI solutions.
Goudarzi’s team has been thinking about how they can distill open source LLMs and reduce their size. If smaller, the models could be installed on local machines, and you could have your own mini version of GitHub Copilot, for instance. But for now, open source models often need financial support due to their extensive infrastructure and operating costs.
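Distillation is one path to smaller models; another size-reduction technique you can use today is weight quantization (not the distillation work Goudarzi describes, but related in spirit). A minimal sketch, assuming transformers with the bitsandbytes package and a GPU; the model name is again a placeholder.

```python
# Load a model with 4-bit quantized weights to shrink its memory
# footprint -- one common way to fit an LLM onto local hardware.
# Requires a GPU and the bitsandbytes package; the model is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "openlm-research/open_llama_3b"
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
)
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.1f} GB")
```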
One thing that surprised Goudarzi: originally, the machine learning community thought that more advanced generative AI would require more advanced algorithms. But that hasn’t been the case.
“The simple algorithm actually stays the same, regardless of how much it can do,” he says. “Scaling is the only change, which is completely mind-blowing.”
Who knows how open source LLMs will revolutionize the developer landscape.
“I’m excited that we’re seeing so many open source LLMs now,” Goudarzi says. “When developers start building with these models, the possibilities are endless.”