DeepSeek AI is the latest AI chatbot to hit the internet, and it has taken the world by storm, mostly for the right reasons. Not only has it shaken up big US tech giants like Nvidia, wiping billions off their market caps, but it has also prompted a response from OpenAI CEO Sam Altman, whose company is behind ChatGPT; he praised the model and even acknowledged it as competition.
DeepSeek's model reportedly cost only $6 million to develop, compared to the billions that other companies are spending to get ahead in the race. So, if you're curious about what DeepSeek is, how it differs from its rivals, whether it's worth switching to, and whether it's Chinese, we answer the top 10 burning questions here.
What is DeepSeek?
DeepSeek is a company based in Hangzhou, China, founded in July 2023, so it hasn't been long since its inception. The company's chatbot, DeepSeek AI Assistant, launched on the Apple App Store and became the top free app shortly after its release on January 10.
Who is the founder of DeepSeek?
DeepSeek was founded by Liang Wenfeng, who reportedly funded the company through his hedge fund, according to MIT Technology Review.
Where did DeepSeek get the computing power to create powerful AI?
As per MIT Technology Review, DeepSeek developed most of its AI products using Nvidia A100 GPUs, which its founder, Liang, acquired before export restrictions banned their sale to China.
How can you use DeepSeek?
There are two ways you can use DeepSeek. You can download the app from the Apple App Store: search for DeepSeek and you'll see the result "DeepSeek AI Assistant," which is 36.5 MB in size and currently holds a 4.6-star rating. Alternatively, you can visit chat.deepseek.com for the web experience.
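For developers, DeepSeek also offers an API. As a minimal sketch, assuming DeepSeek's publicly documented OpenAI-compatible chat endpoint (`https://api.deepseek.com/chat/completions`) and model name (`deepseek-chat`), and that you have an API key, a request body could be assembled like this:

```python
import json

# Assumed endpoint for DeepSeek's OpenAI-compatible chat API
# (check DeepSeek's official API docs before relying on it).
API_URL = "https://api.deepseek.com/chat/completions"


def build_chat_request(prompt: str, model: str = "deepseek-chat") -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "stream": False,
    }


payload = build_chat_request("What is DeepSeek R1 good at?")
print(json.dumps(payload, indent=2))

# To actually send it (requires a DeepSeek API key):
# import requests
# resp = requests.post(API_URL, json=payload,
#                      headers={"Authorization": "Bearer <YOUR_API_KEY>"})
```

Since the endpoint mirrors OpenAI's chat-completions format, existing OpenAI client code can typically be pointed at it by swapping the base URL and key.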
Is DeepSeek R1 as powerful as OpenAI GPT-4?
When compared to GPT-4, DeepSeek R1 scores 90.8 on the MMLU benchmark, versus GPT-4's 86.4. As for the context window, DeepSeek R1 offers 128K tokens, while GPT-4 offers 8,192. Another fundamental difference is that DeepSeek R1 is open-source, while GPT-4 is not. The input and output costs also differ significantly, with DeepSeek R1 requiring only a fraction of GPT-4's price.
How was DeepSeek so cheap to make?
As reported by various sources, DeepSeek took only $5.5 million to $6 million to develop, a fraction of what the most popular AI models cost.
Are there any other open-source models like DeepSeek R1?
Contrary to popular belief, other companies do offer open-source AI models; Meta, for example, has its Llama large language model family.
What is DeepSeek R1 good at?
The general consensus among users is that DeepSeek R1 excels at maths and reasoning. However, it may not be the best choice for creative tasks such as writing.
Is DeepSeek based in China, and should you use it?
National security concerns around AI models have recently been raised in countries including the US and India, prompting questions about whether DeepSeek should be used. DeepSeek is indeed based in, and operates out of, China, which is why the model follows Chinese content regulations.
What’s the major difference compared to models like ChatGPT or Google Gemini?
Simply put, DeepSeek operates fundamentally differently from ChatGPT: it is open-source and leans on inference-time computing. This approach helps keep costs low and reduces the need for intensive computing resources.