Meta says that users of the new Llama models should expect more "steerability"

Meta's Llama 3, a new entry in the company's open generative AI models, has been released with two models available: Llama 3 8B and Llama 3 70B. Meta claims these models are a significant improvement compared to the previous-gen Llama models, with Llama 3 8B being among the best-performing generative AI models available today for its parameter count. The models have been trained on two custom-built 24,000 GPU clusters and have shown superior performance on popular AI benchmarks like MMLU, ARC, and DROP.

Llama 3 8B outperforms other open models such as Mistral’s Mistral 7B and Google’s Gemma 7B on at least nine benchmarks, including MMLU, ARC, DROP, GPQA, HumanEval, GSM-8K, MATH, AGIEval, and BIG-Bench Hard. Meta also claims that the larger-parameter-count Llama 3 model, Llama 3 70B, is competitive with flagship generative AI models, including Google’s Gemini 1.5 Pro.

Meta has also developed its own test set covering use cases ranging from coding and creative writing to reasoning and summarization, and Llama 3 70B came out on top against Mistral's Mistral Medium model, OpenAI's GPT-3.5, and Anthropic's Claude Sonnet.

Meta says that users of the new Llama models should expect more "steerability," a lower likelihood of refusing to answer questions, and higher accuracy on trivia questions, questions about history and STEM fields such as engineering and science, and general coding recommendations. Meta attributes this to a much larger training dataset of 15 trillion tokens, or a mind-boggling ~750,000,000,000 words, seven times the size of the Llama 2 training set.
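As a quick sanity check on the scale claim above (the figures are the article's, not independently verified), dividing the stated 15 trillion tokens by seven implies a Llama 2 training set of roughly 2 trillion tokens, which matches the ~2 trillion tokens Meta reported for Llama 2:

```python
# Back-of-the-envelope check of the dataset-size claim.
llama3_tokens = 15e12                 # 15 trillion tokens, per Meta
llama2_tokens = llama3_tokens / 7     # "seven times the size of the Llama 2 training set"

# Implies ~2.1 trillion tokens, consistent with the ~2 trillion
# tokens Meta reported for Llama 2's pretraining data.
print(f"Implied Llama 2 training set: ~{llama2_tokens / 1e12:.1f} trillion tokens")
```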

Meta has also developed new data-filtering pipelines to boost the quality of its model training data, and it has updated its pair of generative AI safety suites, Llama Guard and CyberSecEval, to prevent misuse of and unwanted text generation from Llama 3 models and others. The company is also releasing a new tool, Code Shield, designed to detect code from generative AI models that might introduce security vulnerabilities. That said, filtering isn't foolproof, and tools like Llama Guard, CyberSecEval, and Code Shield only go so far.

Meta says that the Llama 3 models will soon be hosted in managed form across a wide range of cloud platforms, including AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM's WatsonX, Microsoft Azure, Nvidia's NIM, and Snowflake. The Llama 3 models may be widely available, but they are not open source: Meta forbids developers from using Llama models to train other generative models, and app developers with more than 700 million monthly users must request a special license from Meta, which the company will grant at its discretion.

Meta is currently training Llama 3 models over 400 billion parameters in size, with the ability to "converse in multiple languages," take more data in, and understand images and other modalities as well as text. The company's goal is to make Llama 3 multilingual and multimodal, have longer context, and continue to improve overall performance across core large language model capabilities such as reasoning and coding.
