Salesforce AI's Moirai 2.0: New Time Series Model Tops Benchmarks
Salesforce AI Research has unveiled Moirai 2.0, a new generation of its time series foundation model. Built on a decoder-only transformer architecture, the model has claimed the top position on the GIFT-Eval benchmark, a widely used standard for evaluating time series forecasting models. Moirai 2.0 pairs this result with notable efficiency: inference is 44% faster and the model is 96% smaller than its predecessor, without compromising accuracy. That combination of speed, compactness, and precision makes it relevant to both academic research and practical enterprise applications.
A core innovation behind Moirai 2.0 is its architectural shift. Unlike its predecessor, which relied on a masked-encoder design, Moirai 2.0 adopts a decoder-only transformer. This supports autoregressive forecast generation, in which future values are predicted sequentially from past observations, and it scales better to larger and more intricate datasets. Further efficiency comes from predicting multiple data points, or "tokens," at once rather than one at a time, which contributes to greater stability during forecasting. During training, the model applies data filtering that automatically excludes low-quality or non-forecastable time series to improve robustness. In addition, patch token embedding and random masking help the model encode missing values and remain robust to incomplete data at prediction time.
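To make the missing-value handling more concrete, the following is a minimal sketch of patch token embedding with random masking in PyTorch. It is illustrative only: the class, dimensions, and masking rate are assumptions, not Salesforce's actual implementation.

```python
# Illustrative sketch (not the actual Moirai 2.0 code): patch token embedding with a
# learned mask embedding so missing or masked values can be encoded explicitly.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, patch_size: int = 32, d_model: int = 256):
        super().__init__()
        self.patch_size = patch_size
        self.proj = nn.Linear(patch_size, d_model)             # project each patch to a token
        self.mask_embed = nn.Parameter(torch.zeros(d_model))   # embedding used for masked patches

    def forward(self, series: torch.Tensor, observed: torch.Tensor) -> torch.Tensor:
        # series, observed: (batch, length); observed is 1.0 where a value is present.
        b, t = series.shape
        n = t // self.patch_size
        x = series[:, : n * self.patch_size].reshape(b, n, self.patch_size)
        obs = observed[:, : n * self.patch_size].reshape(b, n, self.patch_size)
        tokens = self.proj(x * obs)                             # zero-fill missing values before projecting
        # Randomly replace some patch tokens with the mask embedding during training,
        # so the model learns to forecast from incomplete context (rate assumed).
        if self.training:
            drop = torch.rand(b, n, device=series.device) < 0.15
            tokens = torch.where(drop.unsqueeze(-1), self.mask_embed.expand_as(tokens), tokens)
        return tokens  # (batch, num_patches, d_model), ready for a decoder-only transformer

# Toy usage
emb = PatchEmbedding()
y = torch.randn(4, 512)                     # four toy series of length 512
obs = (torch.rand(4, 512) > 0.05).float()   # ~5% missing values
print(emb(y, obs).shape)                    # torch.Size([4, 16, 256])
```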
Moirai 2.0’s generalization capability stems from an expanded and more diverse pretraining dataset. The mix includes real-world data such as the GIFT-Eval Pretrain and Train collections, synthetic time series generated with the mixup and KernelSynth procedures introduced in the Chronos research, and internal operational data sourced from Salesforce IT systems. This breadth of data lets Moirai 2.0 generalize across a wide range of forecasting tasks and domains, making it adaptable to different business needs.
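As a rough sketch of what mixup-style synthetic data generation can look like (a simplified illustration in the spirit of the Chronos mixup procedure; the weighting scheme, scaling, and parameters below are assumptions):

```python
# Simplified illustration of mixup-style time series augmentation: combine a few real
# series into one synthetic series via a convex combination. Details are assumptions.
import numpy as np

def ts_mixup(series_pool, k=3, length=256, alpha=1.5, rng=None):
    """Create one synthetic series as a convex combination of k real series.
    Assumes every series in the pool is at least `length` points long."""
    rng = rng or np.random.default_rng()
    picks = rng.choice(len(series_pool), size=k, replace=False)
    weights = rng.dirichlet(np.full(k, alpha))            # convex combination weights
    segments = []
    for idx in picks:
        s = series_pool[idx]
        start = rng.integers(0, max(1, len(s) - length))
        seg = s[start : start + length]
        seg = (seg - seg.mean()) / (seg.std() + 1e-8)      # scale each segment before mixing
        segments.append(seg)
    return np.sum([w * seg for w, seg in zip(weights, segments)], axis=0)

# Toy usage: mix three noisy sinusoids into one synthetic training series
pool = [np.sin(np.linspace(0, f, 1024)) + 0.1 * np.random.randn(1024) for f in (20, 50, 80)]
synthetic = ts_mixup(pool, k=3)
print(synthetic.shape)  # (256,)
```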
The performance numbers back up these claims. Moirai 2.0 achieves the best MASE (Mean Absolute Scaled Error), a widely used measure of point-forecast accuracy, on GIFT-Eval among models without data leakage, and its CRPS (Continuous Ranked Probability Score) matches previous state-of-the-art models. Compared directly with its predecessor, Moirai_large, it improves MASE by 16% and CRPS by 13%. Together with 44% faster inference and a 96% smaller parameter footprint, these gains make high-performance, scalable forecasting substantially more accessible.
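For readers less familiar with these metrics, the toy implementations below show how MASE and a sample-based CRPS are commonly computed. They are reference sketches only, not the GIFT-Eval evaluation harness.

```python
# Minimal reference implementations of MASE and a sample-based CRPS (toy sketches).
import numpy as np

def mase(y_true, y_pred, y_train, season=1):
    """Mean Absolute Scaled Error: forecast MAE divided by the MAE of a
    seasonal-naive forecast on the training data."""
    mae = np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
    naive = np.mean(np.abs(np.asarray(y_train)[season:] - np.asarray(y_train)[:-season]))
    return float(mae / naive)

def crps_samples(forecast_samples, y_true):
    """CRPS estimated from forecast samples: E|X - y| - 0.5 * E|X - X'|,
    averaged over the forecast horizon."""
    s = np.asarray(forecast_samples)   # shape (num_samples, horizon)
    y = np.asarray(y_true)             # shape (horizon,)
    term1 = np.mean(np.abs(s - y), axis=0)
    term2 = 0.5 * np.mean(np.abs(s[:, None, :] - s[None, :, :]), axis=(0, 1))
    return float(np.mean(term1 - term2))

# Toy example
train = np.sin(np.arange(200) / 10)
actual = np.sin(np.arange(200, 224) / 10)
point = actual + 0.05                              # slightly biased point forecast
samples = actual + 0.1 * np.random.randn(500, 24)  # probabilistic forecast samples
print(mase(actual, point, train), crps_samples(samples, actual))
```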
For practitioners, these advances translate into tangible benefits across critical enterprise domains. Beyond academic benchmarks, the model is applicable to IT operations (proactive capacity scaling and anomaly detection), sales forecasting (revenue prediction), demand forecasting (inventory optimization), and supply chain planning (improved scheduling and reduced waste). The reduced model size and higher speed mean that high-quality forecasting can be applied at far greater scale, helping businesses make smarter, faster decisions regardless of the complexity of their data infrastructure.
Salesforce has made Moirai 2.0 accessible to developers and data scientists, facilitating seamless integration into existing workflows. The model and related open-source modules are available on Hugging Face, allowing for straightforward implementation. Developers can load Moirai 2.0, prepare their datasets, generate forecasts, and visualize results using standard Python libraries and a streamlined workflow, with full examples and notebooks provided by Salesforce for deeper experimentation.
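Earlier Moirai releases are exposed through Salesforce's open-source uni2ts package and follow a GluonTS-style workflow; the sketch below illustrates that pattern. The checkpoint identifier and class names shown are those of the earlier releases, used here as stand-ins: the exact Moirai 2.0 identifiers should be taken from Salesforce's model card and notebooks.

```python
# Hedged sketch of the GluonTS-style workflow used by the open-source uni2ts package
# for earlier Moirai releases. Swap in the Moirai 2.0 class and checkpoint names from
# Salesforce's Hugging Face model card; the identifiers below are the 1.0-series ones.
import pandas as pd
import matplotlib.pyplot as plt
from gluonts.dataset.pandas import PandasDataset
from uni2ts.model.moirai import MoiraiForecast, MoiraiModule

# 1. Prepare data: a single univariate series as a GluonTS dataset
df = pd.DataFrame(
    {"target": range(200)},
    index=pd.date_range("2024-01-01", periods=200, freq="D"),
)
dataset = PandasDataset(df, target="target")

# 2. Load a pretrained Moirai checkpoint and wrap it as a forecaster
model = MoiraiForecast(
    module=MoiraiModule.from_pretrained("Salesforce/moirai-1.0-R-small"),
    prediction_length=30,   # forecast horizon
    context_length=170,     # history the model conditions on
    patch_size="auto",
    num_samples=100,        # samples drawn for probabilistic forecasts
    target_dim=1,
    feat_dynamic_real_dim=0,
    past_feat_dynamic_real_dim=0,
)
predictor = model.create_predictor(batch_size=32)

# 3. Generate and visualize the probabilistic forecast
forecast = next(iter(predictor.predict(dataset)))
forecast.plot()
plt.show()
```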
By democratizing access to cutting-edge, general-purpose forecasting technology, Moirai 2.0 is poised to reshape the landscape of time series modeling. Its flexibility across various domains, enhanced robustness, faster inference, and lower computational demands pave the way for businesses and researchers globally to harness the power of forecasting for transformative decision-making.