DeepSeek R1-0528 arrives in powerful open-source challenge to OpenAI o3 and Google Gemini 2.5 Pro


The whale has returned.

After rocking the global AI and business community earlier this year with the January 20 release of its hit open-source reasoning AI model R1, the Chinese startup DeepSeek (a spinoff of Hong Kong quantitative hedge fund High-Flyer Capital Management) has released DeepSeek-R1-0528, a significant update that brings the free, open-source model close to parity in reasoning capabilities with paid models such as OpenAI's o3 and Google's Gemini 2.5 Pro.

The update is designed to deliver stronger performance on complex reasoning tasks in mathematics, science, and programming, along with enhanced features for developers and researchers.

As with its predecessor, DeepSeek-R1-0528 is available under the permissive and open MIT License, supporting commercial use and allowing developers to customize the model to their needs.

The open-source model weights are available through the AI code-sharing community Hugging Face, and detailed documentation is provided for those deploying locally or integrating via the DeepSeek API.

Existing DeepSeek API users will automatically have their model inference updated to R1-0528 at no additional cost. Current DeepSeek API pricing is listed in the company's API documentation.

For those who want to run the model locally, DeepSeek has published detailed instructions in its GitHub repository. The company also encourages the community to provide feedback and ask questions through its service email.

Individual users can try the model for free on DeepSeek's website, though you will need to provide a phone number or Google account access to sign in.

Improved reasoning and benchmark performance

At the core of the update are significant improvements in the model's ability to handle difficult reasoning tasks.

DeepSeek explained in its new model card that these improvements stem from increased computational investment and the application of algorithmic optimizations in post-training. This approach has produced notable gains across a range of benchmarks.

On the AIME 2025 test, for example, the model's accuracy jumped from 70% to 87.5%, reflecting deeper reasoning that now averages around 23,000 tokens per question, a substantial increase over the previous version.

Coding performance also saw a boost, with accuracy on the LiveCodeBench dataset rising from 63.5% to 73.3%. On "Humanity's Last Exam," performance more than doubled, reaching 17.7% from 8.5%.

These advances put DeepSeek-R1-0528 closer to the performance of established models such as OpenAI's o3 and Gemini 2.5 Pro, according to internal evaluations. Both of those models are rate-limited and/or require paid subscriptions to access.

UX upgrades and new features

Beyond performance improvements, DeepSeek-R1-0528 introduces several new features aimed at improving the user experience.

The update adds support for JSON output and function calling, features that should make it easier for developers to integrate the model's capabilities into their applications and workflows.
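To illustrate, here is a minimal sketch of what a function-calling request could look like, assuming DeepSeek's OpenAI-compatible chat-completion format; the model name `deepseek-reasoner` and the hypothetical `get_weather` tool are illustrative assumptions, so consult the official API documentation for the exact parameters.

```python
import json

def build_function_call_request(user_message: str) -> dict:
    """Build a chat-completion payload offering the model one callable tool.

    The tool schema below follows the widely used OpenAI-style format;
    "get_weather" is a hypothetical example function, not a DeepSeek API.
    """
    weather_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {  # standard JSON Schema for the arguments
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
    return {
        "model": "deepseek-reasoner",  # assumed model identifier
        "messages": [{"role": "user", "content": user_message}],
        "tools": [weather_tool],
        # For structured plain responses, JSON mode can instead be requested
        # with: "response_format": {"type": "json_object"}
    }

payload = build_function_call_request("What's the weather in Paris?")
print(json.dumps(payload, indent=2))
```

If the model decides to call the tool, the response would carry the function name and JSON-encoded arguments for the application to execute, rather than free-form text.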

Front-end capabilities have also been refined, and DeepSeek says these changes will create smoother, more efficient interactions for users.

In addition, the model's hallucination rate has been reduced, yielding more reliable and consistent output.

One notable change is the introduction of system prompts. Unlike the previous version, which required a special token at the start of the output to activate "thinking" mode, this update removes that requirement, streamlining deployment for developers.
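In practice, this means a standard system message can steer the model directly. The sketch below assumes the common OpenAI-style chat message format; the `deepseek-reasoner` model name is an assumption and should be checked against the live API docs.

```python
# With R1-0528, a plain system message is enough; no special "thinking"
# start token needs to be prepended to the output by the caller.
messages = [
    {"role": "system", "content": "You are a concise math tutor."},
    {"role": "user", "content": "Explain why 0.999... equals 1."},
]
request = {"model": "deepseek-reasoner", "messages": messages}  # model name assumed
print(request["messages"][0]["role"])  # the system role is honored directly
```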

A smaller variant for those with more limited compute budgets

Alongside this release, DeepSeek has distilled its chain-of-thought reasoning into a smaller variant, DeepSeek-R1-0528-Qwen3-8B, which should help those who lack the hardware needed to run the full model.

This distilled version reportedly achieves state-of-the-art performance among open-source models on benchmarks such as AIME 2024, outperforming Qwen3-8B and approaching Qwen3-235B-thinking.

By a common rule of thumb, running an 8-billion-parameter large language model (LLM) in half precision (FP16) requires about 16 GB of GPU memory, or roughly 2 GB per billion parameters.

Therefore, a high-end GPU with at least 16 GB of VRAM, such as the NVIDIA RTX 3090 or 4090, is sufficient to run an 8B LLM at FP16 precision. For quantized models, GPUs with 8-12 GB of VRAM, such as the RTX 3060, can be used.
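The rule of thumb above can be sketched as a quick back-of-the-envelope calculation. Note this counts model weights only; activations and KV cache add workload-dependent overhead on top.

```python
def estimated_vram_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Weights-only VRAM estimate: parameters x bytes per parameter.

    FP16 uses 2 bytes per parameter (the default); 8-bit quantization
    uses 1 byte, roughly halving the requirement.
    """
    return params_billion * bytes_per_param

print(estimated_vram_gb(8))      # 8B model in FP16  -> 16.0 GB
print(estimated_vram_gb(8, 1))   # 8B model in INT8  ->  8.0 GB
```

This is why an 8B model at FP16 lands just inside a 16 GB card, while quantized variants fit consumer GPUs with less VRAM.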

DeepSeek believes this distilled model will prove useful for academic research and industrial applications that require smaller-scale models.

Initial reactions from AI developers and influencers

The update has already drawn attention and praise from developers and enthusiasts on social media.

Haider, aka "@slow_developer," shared on X that DeepSeek-R1-0528 "is just incredible at coding," describing how it generated clean code and working tests for a word-scoring challenge, both of which ran correctly on the first attempt.

Meanwhile, another X user posted that DeepSeek is going for "the king: o3 and Gemini 2.5 Pro," reflecting the view that the new update puts the model head-to-head with top performers.

Another AI news influencer, Chubby, commented that "DeepSeek cooked!" and highlighted how the new version comes close to o3 and Gemini 2.5 Pro.

Chubby even speculated that the latest R1 update may indicate DeepSeek is preparing to release its long-anticipated "R2" model in the near future.

Looking forward

The release of DeepSeek-R1-0528 underscores DeepSeek's commitment to delivering high-performing, open-source models that compete with proprietary ones. By combining measurable benchmark gains with practical features and a permissive open-source license, DeepSeek-R1-0528 positions itself at the leading edge of openly available language model capabilities.
