The excitement surrounding DeepSeek is hard to ignore. With its open-source R1 model, it's not just making waves but also matching or outperforming OpenAI's o1 model on reasoning tasks in areas like mathematics, science, and programming. It has become the top free app download in the US, overtaking ChatGPT. The buzz has also hit major companies like Microsoft, Meta, and NVIDIA, all of which saw their stock prices drop amid DeepSeek's rapid rise.
Meta's Chief AI Scientist, Yann LeCun, emphasizes that DeepSeek's success is deeply rooted in the open-source approach behind its primary model. Nevertheless, this surge in popularity hasn't come without challenges: security is becoming a real concern. DeepSeek recently had to limit new user sign-ups, citing "large-scale malicious attacks" on its network, though existing users have continued to use the service without disruption.
Industry insiders are lauding DeepSeek for rivaling even proprietary AI models, but there is another side to the coin. Some detractors point to its open-source nature, arguing that free accessibility to anyone may be a double-edged sword. For background, R1 is built on top of DeepSeek's open-source V3 model.
Interestingly, DeepSeek reports that training the V3 model cost only around $6 million, modest compared to the budgets behind other frontier AI projects. That figure is particularly noteworthy given the usual constraints on developing cutting-edge AI systems, including limited access to robust training datasets.
DeepSeek's rise comes on the heels of OpenAI and SoftBank's monumental $500 billion Stargate Project, which aims to build out AI infrastructure across the US. At the launch, President Donald Trump proclaimed it the largest AI infrastructure undertaking ever, set to "safeguard the future of technology" within American borders.
In summary, while DeepSeek aligns with OpenAI's founding goal of making AI advancements freely available for the benefit of humanity, its rise also raises significant security and safety concerns, especially in light of the recent cyberattacks. One can start to see the reasoning behind OpenAI CEO Sam Altman's preference for closed-source advanced AI models, which he has framed as "a simpler path to ensure safety."