Meta’s Llama 3 Proves a Big Win for the Open-Source Community

KEY TAKEAWAYS

  • Meta has launched Llama 3, its latest AI model boasting high performance and accessibility via Meta AI in certain regions, and promises further enhancements.
  • Llama 3's open-source release positions Meta as a major player in the LLM market, challenging proprietary models like GPT-3.5 and Gemini.
  • While Llama 3 lacks multimodal capabilities at launch, Meta plans to introduce them later.
  • We ask AI experts their opinions on how this may shake up the ever-growing AI market.

Meta’s latest artificial intelligence model, Llama 3, released this week, is the tech giant’s ‘next-generation language model’.

It is, in the company’s words, “the most capable openly available LLM [large language model] to date”.

Arriving in two forms — 8 billion and 70 billion parameter models — Llama 3 demonstrates state-of-the-art performance across multiple industry benchmarks and can be accessed in certain regions via the organization’s new virtual assistant, Meta AI.

It comes with a promise from Meta that, in the coming months, it will introduce new capabilities, including larger context windows, additional model sizes, and performance enhancements.

The release of Llama 3 has not only made it arguably the best open-source LLM available, but also enables Meta to compete against some of the top proprietary models, including GPT-3.5 and Gemini.

We ask experts for their views and dig into how the published stats compare to the nearest competitors.

What We Know About Llama 3

The organization’s announcement blog post did not mince words about how much of an improvement this model is over the previous generation of Llama 2 models.

“Our new 8B and 70B parameter Llama 3 models are a major leap over Llama 2 and establish a new state-of-the-art for LLM models at those scales.

“Thanks to improvements in pre-training and post-training, our pre-trained and instruction-fine-tuned models are the best models existing today at the 8B and 70B parameter scale,” the post said.

One key factor enabling this performance is the high-quality training data used. For instance, Llama 3 was pre-trained on over 15 trillion tokens collected from publicly available sources, a dataset seven times larger than the one used to power Llama 2.

Researchers also used data-filtering pipelines, including heuristic filters, NSFW filters, semantic deduplication approaches, and text classifiers that predict data quality and eliminate low-quality input.

However, perhaps the biggest development between Llama 2 and Llama 3 is that the latter will be accessible via the Meta AI assistant. While Llama 2 was open source, it didn’t have the same ease of accessibility as other tools like ChatGPT or Gemini — and the addition of Meta AI helps to address this.

Now users in Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe can access Meta AI (and Llama 3) on Facebook, Instagram, WhatsApp, and Messenger.

The model will also be available on AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, Nvidia NIM, and Snowflake in the future.

How Llama 3 Puts Meta in an Extremely Comfortable Position

The release of Llama 3 puts Meta in an extremely comfortable spot in the LLM market. It now has not only one of the highest-performing models available but also one of the most accessible.

Kjell Carlsson, Ph.D., head of strategy at Domino Data Lab, told Techopedia:

“It is practically a given that Llama 3 will quickly become the de facto standard for companies looking to build truly differentiating GenAI applications.

“Unlike GPT-4 and Gemini — giant, proprietary models that must be hosted in the cloud — Llama 3 provides enterprises with a free, open-source model that they can control, fine-tune, and build on and which they can host wherever they need it.”

Carlsson also notes that its ‘relatively small size’ means it can meet the cost and speed requirements of applications that need to scale, while overall improvements mean it can outperform most models of similar size.

After all, smaller language models require less computing power than larger ones: the more parameters a model has, the more it costs to train and run.
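As a back-of-the-envelope illustration of that cost difference (my own arithmetic, not Meta’s figures), simply holding a model’s weights in memory at 16-bit precision takes roughly two bytes per parameter:

```python
# Rough memory footprint of model weights at 16-bit precision (2 bytes/parameter).
# Illustrative arithmetic only; real inference adds overhead for activations
# and the KV cache, and quantization can shrink these numbers considerably.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory (decimal GB) needed just to hold the weights."""
    return num_params * bytes_per_param / 1e9

llama3_8b = weight_memory_gb(8e9)    # 8B parameters -> ~16 GB
llama3_70b = weight_memory_gb(70e9)  # 70B parameters -> ~140 GB
print(f"Llama 3 8B:  ~{llama3_8b:.0f} GB of weights")
print(f"Llama 3 70B: ~{llama3_70b:.0f} GB of weights")
```

By this rough measure, the 8B model fits comfortably on a single high-end GPU, while the 70B model requires a multi-GPU setup — which is exactly why the smaller model suits cost-sensitive applications that need to scale.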

What Meta’s Release Means for the Open Source Market

Arguably, the biggest winner (other than Meta) from this release is the open-source market, which now has another high-performance LLM for users to choose from.

Moses Guttmann, the CEO and co-founder of ClearML, a continuous machine learning platform, told Techopedia via email:

“The release of Llama 3 by Meta, with its advanced capabilities, is poised to set a new benchmark in the open-source generative AI market.

“By pushing the limits of what open-source models can achieve, Llama 3 challenges other contributors to elevate their own offerings, accelerating innovation across the board. These latest advancements reaffirm our belief in the potential of open-source AI to rival closed-source alternatives.”

The better open-source models perform, the more viable they become as an alternative to proprietary, black box AI systems that offer limited visibility over how models are trained or make decisions.

That being said, other industry experts, such as Luca Soldaini, senior applied research scientist at Allen Institute for AI (AI2), suggest that more transparency is needed to truly enrich the open-source ecosystem.

Soldaini told Techopedia:

“It’s great to see more and more models openly releasing their weights, but the open source community needs access to all other parts of the AI pipeline: its data, training logs, code, and evaluations.


“This is what will ultimately accelerate our collective understanding of these models, but also improve accuracy, reduce bias, and move us closer to a more meaningful use of AI.”

Llama 3’s Performance By the Numbers

While 2023 was a great year for open-source development, with releases of models like Llama 2, Falcon 180B, and Mistral 7B, none of those models reached the level of raw performance that Llama 3 has.

Based on the initial materials released by Meta, Llama 3 8B outperforms Google’s open model Gemma 7B and Mistral AI’s Mistral 7B on the MMLU, GPQA, HumanEval, GSM-8K, and MATH benchmarks.

At the same time, Meta Llama 3 70B outperforms top proprietary performers like Gemini Pro 1.5 and Claude 3 Sonnet on key performance benchmarks. More specifically, Llama 3 70B scored higher than Gemini Pro 1.5 and Claude 3 Sonnet on benchmarks like MMLU, HumanEval, and GSM-8K, while being competitive on benchmarks like GPQA and MATH.

If we look at MMLU scores in particular, which offer an imperfect measure of how well an LLM understands language, we can see that Llama 3 70B’s 82.0 is very close to GPT-4’s 86.4 and Gemini Ultra’s 90.0. This suggests the gap between open- and closed-source models is narrowing to a razor-thin margin.

Llama 3 8B vs. Gemma 7B-It and Mistral 7B Instruct

Benchmark    Llama 3 8B    Gemma 7B-It    Mistral 7B Instruct
MMLU         68.4          53.3           58.4
GPQA         34.2          21.4           26.3
HumanEval    62.2          30.5           36.6
GSM-8K       79.6          30.6           39.9
MATH         30.0          12.2           11.0

Llama 3 70B vs. Gemini Pro 1.5 and Claude 3 Sonnet

Benchmark    Llama 3 70B    Gemini Pro 1.5    Claude 3 Sonnet
MMLU         82.0           81.9              79.0
GPQA         39.5           41.5              38.5
HumanEval    81.7           71.9              73.0
GSM-8K       93.0           91.7              92.3
MATH         50.4           58.5              40.5
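The head-to-head results above are easy to verify programmatically; here is a quick sketch using the 70B-class scores as published by Meta (numbers copied from the table):

```python
# Benchmark figures for the 70B-class comparison, as published by Meta.
scores_70b = {
    "MMLU":      {"Llama 3 70B": 82.0, "Gemini Pro 1.5": 81.9, "Claude 3 Sonnet": 79.0},
    "GPQA":      {"Llama 3 70B": 39.5, "Gemini Pro 1.5": 41.5, "Claude 3 Sonnet": 38.5},
    "HumanEval": {"Llama 3 70B": 81.7, "Gemini Pro 1.5": 71.9, "Claude 3 Sonnet": 73.0},
    "GSM-8K":    {"Llama 3 70B": 93.0, "Gemini Pro 1.5": 91.7, "Claude 3 Sonnet": 92.3},
    "MATH":      {"Llama 3 70B": 50.4, "Gemini Pro 1.5": 58.5, "Claude 3 Sonnet": 40.5},
}

# Pick the top scorer on each benchmark.
winners = {bench: max(models, key=models.get) for bench, models in scores_70b.items()}
for bench, model in winners.items():
    print(f"{bench}: {model}")
```

Running this confirms the pattern described earlier: Llama 3 70B leads on MMLU, HumanEval, and GSM-8K, while Gemini Pro 1.5 keeps the edge on GPQA and MATH.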

The Bottom Line

Llama 3 represents a big win for the open-source community. AI researchers now have a new high-performance model to experiment with, which they can use to enhance their understanding of the field and develop high-quality, transparent solutions.

Of course, it’s important to note that Llama 3, upon its release, is a text-based model and doesn’t have multimodal capabilities like GPT-4 and Gemini, but the organization has confirmed that these capabilities will be added in the future.
