Meta Llama 4
The latest generation of openly available mixture-of-experts LLMs from Meta, optimized for multimodal understanding and agentic systems.
What is Llama 4?
Meta Llama 4 is the newest iteration in Meta’s openly available family of large language models, and the first in the family to adopt a Mixture-of-Experts (MoE) architecture. It comes in two primary sizes, Llama 4 Scout and Llama 4 Maverick, and is optimized for multimodal understanding, tool calling, and powering complex agentic systems.
Key Features
- Mixture-of-Experts Architecture: Routes each token through a small subset of expert sub-networks, so the total parameter count can be very large (roughly 400B for Maverick) while only about 17B parameters are active per token, keeping inference efficient.
- Multimodal & Multilingual: Accepts text plus up to 5 images as input, and offers robust multilingual support across 12 languages, including Arabic, Spanish, and Hindi.
- Massive Context Length: Llama 4 Scout supports a context window of up to 10 million tokens, enough to process large codebases or entire document libraries in a single prompt.
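To make the total-vs-active parameter distinction concrete, here is a minimal NumPy sketch of top-k MoE routing. The layer sizes, single gating matrix, and two-layer experts are illustrative assumptions for the sketch, not Llama 4's actual configuration:

```python
import numpy as np

# Toy Mixture-of-Experts layer: each token is routed to its top_k experts,
# so only a fraction of the total expert parameters run per token.
rng = np.random.default_rng(0)
d_model, d_hidden, n_experts, top_k = 8, 16, 4, 1

# Each "expert" is a tiny feed-forward net: x -> relu(x @ W1) @ W2
experts = [
    (rng.standard_normal((d_model, d_hidden)) * 0.1,
     rng.standard_normal((d_hidden, d_model)) * 0.1)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.1  # gating weights

def moe_forward(x):
    """Route each token to its top_k experts; only those experts compute."""
    logits = x @ router                          # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # chosen expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        gate = np.exp(sel - sel.max())           # softmax over selected experts
        gate /= gate.sum()
        for g, e in zip(gate, top[t]):
            w1, w2 = experts[e]
            out[t] += g * (np.maximum(x[t] @ w1, 0.0) @ w2)
    return out

x = rng.standard_normal((3, d_model))
y = moe_forward(x)

# Total parameters exist in all experts; active parameters are only
# the top_k experts actually evaluated for each token.
param_per_expert = d_model * d_hidden + d_hidden * d_model
total_params = n_experts * param_per_expert
active_params = top_k * param_per_expert
print(y.shape, total_params, active_params)
```

In this toy setup only 1 of 4 experts runs per token, so the active parameter count is a quarter of the total; Llama 4 applies the same principle at vastly larger scale.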