Mistral AI has introduced its new Magistral Small 1.2 model on Hugging Face, available under the Apache 2.0 license. With 24 billion parameters, this model stands out for its efficiency and reasoning capabilities, and it can run locally on a single RTX 4090 GPU or even on a MacBook with 32 GB of RAM.
Among its main new features are multimodal capability thanks to an integrated vision encoder; significantly improved performance on benchmarks such as AIME and GPQA compared with the version 1.1 results showcased in its launch announcement; and support for multiple languages, including Spanish, English, French, Portuguese, Chinese, and Arabic. It also offers a context window of up to 128k tokens and introduces the special reasoning tokens [THINK] and [/THINK], which encapsulate the model's internal reasoning without revealing it to the end user.
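As an illustration of how those reasoning tokens might be handled, a client could strip the [THINK]…[/THINK] spans from the raw completion before showing the answer. This is a minimal sketch, assuming the tokens appear verbatim in the output text; the helper name is hypothetical, not part of Mistral's API:

```python
import re

# Hypothetical helper: remove [THINK]...[/THINK] reasoning spans from raw
# model output so only the final answer reaches the end user.
THINK_SPAN = re.compile(r"\[THINK\].*?\[/THINK\]", re.DOTALL)

def strip_reasoning(raw: str) -> str:
    """Return the completion with reasoning blocks removed."""
    return THINK_SPAN.sub("", raw).strip()

raw_output = "[THINK]2 + 2: add the operands.[/THINK]The answer is 4."
print(strip_reasoning(raw_output))  # → The answer is 4.
```

In practice, chat frontends and inference servers typically do this filtering themselves, so the tokens never reach the user-facing transcript.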
The launch comes as SLMs (Small Language Models) gain increasing popularity in the market, and shortly after Mistral AI closed a new funding round that valued the company at $13 billion.