Forwarded from Machinelearning
🌟 MiniMax-M1: an open reasoning LLM with a 1M-token context

MiniMax-M1 is the world's first open-weight hybrid reasoning LLM with a 1M-token context window (8× DeepSeek R1), built on a hybrid MoE + lightning attention architecture.
• 456B parameters (45.9B activated per token); highly efficient generation, using ~25% of DeepSeek R1's FLOPs at 100K generated tokens
• Trained with RL using the new CISPO algorithm on real-world tasks, from math to coding (see the sketch after this list)
• Training cost around $534K; two versions with 40K and 80K "thinking budgets"
• Outperforms DeepSeek R1 and Qwen3-235B on math and coding benchmarks
• Top-tier results on software engineering and reasoning tasks
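To make the CISPO bullet concrete, here is a minimal, hedged sketch of the idea as the tech report describes it, not MiniMax's actual training code: PPO/GRPO-style clipping zeroes the gradient for tokens whose importance-sampling ratio falls outside the clip range, while CISPO instead clips the IS weight itself and detaches it, so every generated token keeps contributing a gradient. The function name, clip thresholds, and tensor layout below are illustrative assumptions.

```python
import torch

def cispo_loss(logp_new, logp_old, advantages, eps_low=0.2, eps_high=0.2, mask=None):
    """Sketch of the CISPO objective (clipped IS-weight policy optimization).

    logp_new:   log-probs under the current policy, shape (batch, seq_len), requires grad
    logp_old:   log-probs under the behavior policy, same shape, detached
    advantages: per-token advantage estimates, same shape
    mask:       1.0 for generated tokens, 0.0 for padding
    """
    ratio = torch.exp(logp_new - logp_old)                 # r_{i,t} = pi_theta / pi_old
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    weight = clipped.detach()                              # stop-gradient on the IS weight
    per_token = weight * advantages * logp_new             # REINFORCE-style term, IS-weighted
    if mask is None:
        mask = torch.ones_like(per_token)
    # average over all generated tokens; no token's gradient is zeroed out
    return -(per_token * mask).sum() / mask.sum().clamp_min(1.0)
```

The design point is the `detach()`: clipping bounds the update magnitude (as in PPO), but because the clipped weight is a constant in the backward pass, rare low-probability "fork" tokens in long reasoning traces still receive gradient signal instead of being silently dropped.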

Benchmarks:
AIME 2024: 86.0 (M1-80K) vs 85.7 (Qwen3-235B) vs 79.8 (DeepSeek R1)
SWE-bench Verified: 56.0 vs 34.4 (Qwen3-235B)
OpenAI-MRCR (128K): 73.4 vs 27.7 (Qwen3-235B)
TAU-bench (airline): 62.0 vs 34.7 (Qwen3-235B)
LongBench-v2: 61.5 vs 50.1 (Qwen3-235B)


➡️ Try it here:

Hugging Face: https://huggingface.co/collections/MiniMaxAI/minimax-m1-68502ad9634ec0eeac8cf094
GitHub: https://github.com/MiniMax-AI/MiniMax-M1
Tech Report: https://github.com/MiniMax-AI/MiniMax-M1/blob/main/MiniMax_M1_tech_report.pdf
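For a quick local trial via the Hugging Face collection above, a minimal sketch with `transformers` follows. The checkpoint id `MiniMaxAI/MiniMax-M1-80k` is an assumption based on the collection name; verify it on the Hugging Face page. A 456B-parameter MoE will not fit on a single GPU, so treat this as an API sketch rather than a deployment recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id -- check the MiniMaxAI collection on Hugging Face.
model_id = "MiniMaxAI/MiniMax-M1-80k"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,   # custom hybrid-attention architecture
    device_map="auto",        # shard across available devices
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "How many primes are there below 100?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For actual serving of a model this size, see the GitHub repo's deployment guidance rather than plain `transformers`.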


@ai_machinelearning_big_data

#llm #reasoningmodels #minimaxm1
