Hey,
Make sure to check out MktContext, we’ve been following them ourselves.
It’s run by a professional money manager, trader, and investor who has been timing the market since 2014.
They predict market direction by studying the economy and market internals, sharing their insights on their blog so you can follow along and improve your portfolio returns.
Join 7,000+ subscribers in timing and beating the market — it’s free!
I'll let MktContext take over from here...
Which stocks are facing an existential threat?
We called it! In last Sunday's post we wrote that the market was about to stall. We came to that conclusion based on the Put/Call Skew indicator, one of the best indicators for timing market tops. On Monday, the major US tech firms, including the Mag 7, crashed on the announcement of DeepSeek.
This could be an existential threat. Analysts are comparing it to the dot-com moment in 2000, when internet businesses collapsed. Today's post is a deep dive into DeepSeek and which stocks stand to benefit or be hurt by it. All equity investors should be paying attention to this news, since the indexes are heavily concentrated in overvalued AI names!
DeepSeek - The new AI in town
First off, what is DeepSeek? DeepSeek is an emerging AI language model developed by a Chinese startup founded just two years ago. It grabbed headlines recently by topping the Apple App Store charts shortly after its US launch. Importantly, its capabilities rival established models like ChatGPT, but at a fraction of the cost — reportedly $6M compared to the billions spent by the likes of OpenAI, Google, and Meta.
The novel technology works by shrinking larger models, being more selective about the data it uses, and applying self-learning techniques. Efficiency was crucial because of the US chip export ban. DeepSeek has already disrupted the AI market in China, forcing Chinese tech giants like ByteDance, Tencent, Baidu, and Alibaba to lower their prices.
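For readers curious what "shrinking larger models" looks like in practice, here is a minimal, generic sketch of knowledge distillation, where a small student model learns to imitate a large teacher. This illustrates the general technique only, with toy stand-in models and hypothetical sizes; it is not taken from DeepSeek's whitepaper or code.

```python
# Generic knowledge-distillation sketch (illustration only, NOT DeepSeek's recipe).
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, tokens, optimizer, T=2.0):
    """One step: push the student's next-token distribution toward the teacher's."""
    with torch.no_grad():
        teacher_logits = teacher(tokens)          # (batch, seq, vocab)
    student_logits = student(tokens)

    # KL divergence between temperature-softened distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy stand-ins; in practice these would be full transformer language models.
vocab = 1000
teacher = torch.nn.Sequential(torch.nn.Embedding(vocab, 256), torch.nn.Linear(256, vocab))
student = torch.nn.Sequential(torch.nn.Embedding(vocab, 64), torch.nn.Linear(64, vocab))
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
tokens = torch.randint(0, vocab, (4, 32))         # (batch, seq) of token ids
print(distillation_step(student, teacher, tokens, optimizer))
```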
This is a revolutionary announcement for several reasons. For one, it challenges the notion that the US leads in AI development. China has always been extremely proficient at software and has a good track record of building AI. Secondly, it's open source, so developers everywhere can freely access and improve it, further suppressing costs.
But the main reason it led to a meltdown in Nvidia stock (-17% in one day) is that it calls into question the billions of dollars tech firms have spent on compute power. There's nothing worse than spending tons of money on something others can replicate cheaply or for free.
Debunking the myths
With this announcement, investors are questioning the need for all the AI infrastructure built over the past few years. What's the point of expensive Nvidia GPUs if AI can be built without them? Should US tech stocks be valued so highly if they no longer have the technical edge? AI-related stocks cratered on Monday, including semiconductors, servers/hardware, networking, fiber-optics, and even nuclear and natural gas (no need for energy if there's no compute!).
We have a different view on this, but first let's debunk a few myths. Many sources claim this DeepSeek model cost only $6M to build. But as written in DeepSeek's own whitepaper, that figure excludes data costs, overhead, and personnel; the $6M covers just the final training run, not the total cost. So yes, it's cheaper, but no, you can't replicate a model for that price.
The fact is, DeepSeek has been known in tech circles for months. Zuckerberg commented on it in a prior interview, and yet this week Meta still announced a 50% increase in AI capex. Would they spend that much money if they could really build a full-fledged model for practically free? Similarly, the Chinese government announced a 1 trillion yuan AI spending plan this week. Again, why would they spend that if they already had a dirt-cheap solution?
There's also the assumption that DeepSeek circumvented Biden's export bans and got its hands on cutting-edge Nvidia H100 chips. The reality is more nuanced: the sanctions restricted chips with high-bandwidth memory, and the crux of DeepSeek's breakthrough is that it overcame the lack of memory. It did so out of necessity, so it could make do with older chips. The ban simply was not enough, as the US underestimated their ability to innovate around the problem.
Lastly, is DeepSeek better than the latest ChatGPT (by OpenAI), Claude (by Anthropic), or Llama (by Meta)? The verdict: for practical purposes, they are comparable. That's probably because DeepSeek learned from existing models, which is what allowed it to speed up training. In fact, almost every model borrows from the large frontier models, which is why capabilities are converging. The one genuinely impressive thing about DeepSeek is that it was done at such low cost. If you have unrestricted access to chips and memory to throw at the problem, you can still produce a better language model.
AI will be a lot cheaper
Not only did DeepSeek prove that you can build a cutting-edge model with last-gen chips, they went ahead and open-sourced it so everyone can replicate it themselves. This revolution has cut training and inference costs across the industry in half. The DeepSeek whitepaper is available for all to see, and US companies are rushing to incorporate its techniques into their own models.
In the long run, cheaper language models (i.e., model commoditization) are a good thing. As Trump put it, “Instead of spending billions and billions, you’ll spend less and come up with the same solution.” No doubt there will be positive flow-through effects for AI demand. There's a term for this called “Jevons Paradox,” which the CEO of Microsoft invoked. The idea goes like this:
Think about Uber. When Uber made transport cheap and widely available, more people started using cars than before. People who wouldn't or couldn't drive (kids, the elderly, drunk adults) were now on wheels. Similarly, when batteries became cheaper and more efficient, fewer of them were needed to power our electronics… and yet more and more electronics started using batteries! Demand increased. When AI is cheap and widely available, it becomes accessible to the masses, and total usage grows.
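To put rough numbers on the paradox, here is a toy calculation with entirely made-up figures (not a forecast and not anyone's actual data): if the unit cost of AI falls 10x but cheap access unlocks 30x more usage, total spending on AI still goes up.

```python
# Toy Jevons Paradox arithmetic with hypothetical numbers (illustration only).
old_cost_per_query = 0.010        # $ per AI query before the efficiency gains
new_cost_per_query = 0.001        # 10x cheaper per query afterwards
old_queries = 1_000_000_000       # monthly queries before

# Assume cheaper AI unlocks many more use cases: 30x more queries.
new_queries = 30 * old_queries

old_spend = old_cost_per_query * old_queries    # $10,000,000
new_spend = new_cost_per_query * new_queries    # $30,000,000, so spend rises 3x
print(f"before: ${old_spend:,.0f}, after: ${new_spend:,.0f}")
```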
Here's where it gets funny: on Monday, as NVDA was cratering, software stocks shot to the moon. Cheaper AI models mean the companies building on top of AI are the beneficiaries; just as the telecom carriers became the "dumb pipe" the entire internet was built on top of, model makers risk becoming the pipe while the value accrues to those building on top. With cheaper models, software companies can easily incorporate AI features into their products and turn them into actual revenue. This will usher in a second wave of widespread AI adoption across the software industry.
Clearly, software executives are over the moon with cheaper AI models:
“No NVIDIA supercomputers or $100M needed. The real treasure of AI isn’t the model — they’ve become commodities. The true value lies in data and metadata, the oxygen fueling AI’s potential. The future’s fortune? It’s in our data. Deepgold.”
-Marc Benioff, CEO of Salesforce.com, Jan 27, 2025
ASML, the world's largest lithography equipment manufacturer (they make the machines that make chips), stands to be rendered obsolete if all that computing power turns out to be unnecessary. Yet its CEO had this to say in light of the company's blowout earnings quarter:
"Anyone that lowers cost is good news for ASML… Lower cost means AI can be used in more applications, more applications mean more chips."
-Christophe Fouquet, CEO of ASML, Jan 29, 2025
We’ll talk about the “more chips” part in a moment, but suffice it to say, the AI ramp is only just starting.
More chips?
As we alluded to at the beginning of this post, investors are calling into question the billions of dollars the Mag 7 have already spent on Nvidia chips that could become obsolete. Well, we are here to tell you that this is NOT an existential threat to mega-cap tech companies as a whole.
One big misconception is that companies won’t need as many GPUs for training anymore. But the thing about reasoning models is they are scalable, which means adding more GPUs can still improve performance. So while some users may be content with “good enough”, overall chip demand isn’t going anywhere.
Part of the reason DeepSeek is so efficient is that it shifts the compute burden from training to inference. It's cheaper to train, but the reasoning process at query time (called chain-of-thought) generates far more tokens and therefore demands more memory. So more and better chips are still needed.
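To see why, here is a rough back-of-envelope sketch of the memory a single query ties up while the model is generating. The dimensions are made up and the math is for a generic transformer's key/value cache, not DeepSeek's actual architecture; the point is simply that a long chain-of-thought trace multiplies per-query memory.

```python
# Back-of-envelope KV-cache sizing for one request (hypothetical dimensions,
# generic transformer math; not DeepSeek's actual architecture).
def kv_cache_gb(tokens, layers=60, kv_heads=8, head_dim=128, bytes_per_value=2):
    """Keys + values cached for every layer, for every token in the sequence."""
    return 2 * layers * kv_heads * head_dim * tokens * bytes_per_value / 1e9

direct_answer = kv_cache_gb(tokens=500)        # short, non-reasoning reply
chain_of_thought = kv_cache_gb(tokens=20_000)  # long reasoning trace
print(f"{direct_answer:.2f} GB vs {chain_of_thought:.2f} GB per request")
```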
Furthermore, DeepSeek's decision to open-source the model will radically drive demand. Think of all the companies with privacy concerns, regulations, or sensitive client data (e.g. banks, government, healthcare). They can't send internal data to a third party to train a model, but they can certainly build a private cloud to do it on their own. Open source enables wider adoption.
Similarly, smaller companies with limited resources can now train models too. It opens up the field to more players and drives demand for new hardware, equipment, data centers, and more chips. Stocks like ANET, VRT, and COHR, which produce related equipment, should benefit.
In the rest of this note, we discuss exactly how each of the Mag 7 stocks will be affected and which ones you should be buying. Why is Meta benefiting from DeepSeek while Google is losing the race? Can King Nvidia continue to thrive under this new regime? Head on over to MktContext to find out!
Disclaimer: This publication is for educational purposes only. The authors are not investment advisors and nothing here is investment advice. Always do your own due diligence.