Correcting an AI Overreaction On DeepSeek, and Emphasizing the Importance of Quality

A few weeks back, DeepSeek, China’s new high-profile generative AI platform, hit the news with a substantial alleged reduction in training and hardware costs. This had a major adverse impact on NVIDIA’s valuation and seemed to suggest that NVIDIA, the current leader in AI technology, was no longer on top.
However, the reason for the low-cost hardware, most of which came from NVIDIA, was that China was barred from buying the latest chips and so had to use older AI technology. One of the reasons DeepSeek appeared better is that the product included a second AI that did quality control, something every AI developer should have been doing but wasn’t.
In February, NVIDIA announced it had DeepSeek optimizations for Blackwell, its most powerful AI processor, the one NVIDIA CEO Jensen Huang basically bet his company on. The result is that DeepSeek can run better (defined as faster and more efficiently) on Blackwell, though it is interesting to note that there still isn’t a lot of focus on DeepSeek’s quality control advantages.
This Was Never an NVIDIA Issue
The fact that DeepSeek was using old NVIDIA technology, and that NVIDIA is typically very good at backward compatibility, should have indicated to most that this wasn’t an NVIDIA issue. Yes, you can run AI on older GPU technology; that’s where most of it was developed in the first place. Connecting China’s inability to get newer NVIDIA technology to an inability to run AI at all was an unsupported stretch.
Yes, the code would likely need different optimizations to make efficient use of processors like Blackwell, but that didn’t mean the code wouldn’t run, just that it wouldn’t run optimally, any more than any other code will run optimally on a new processor without modification.
This is because every generation of processor comes with unique enhancements that must be explicitly called in order to be used, and code developed before those enhancements existed rarely anticipates them. Even when it does, changes that often happen before a part ships can still require tweaking the code to use the new features most efficiently.
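The backward-compatibility point above can be illustrated with a small, purely hypothetical sketch (this is not NVIDIA’s actual API): older code keeps running on a new processor, but it only benefits from new hardware features if it explicitly checks for and calls them, falling back to a generic path otherwise.

```python
# Hypothetical illustration of capability-based dispatch: old code runs
# unchanged on new hardware, while updated code opts into new features.
# SUPPORTED_FEATURES stands in for a real hardware capability query.

SUPPORTED_FEATURES = {"fp8_matmul"}  # pretend this was queried from the device


def matmul_generic(a, b):
    """Baseline matrix multiply: runs on any processor generation."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]


def matmul_fp8(a, b):
    """Pretend-optimized path that only newer hardware would expose.

    Produces the same result; on real hardware it would just be faster.
    """
    return matmul_generic(a, b)


def matmul(a, b):
    """Dispatch to the optimized path only when the hardware supports it."""
    if "fp8_matmul" in SUPPORTED_FEATURES:
        return matmul_fp8(a, b)
    return matmul_generic(a, b)


print(matmul([[1, 2]], [[3], [4]]))  # [[11]]
```

The design choice here is the one the paragraph describes: correctness never depends on the new feature, so legacy code keeps working, but performance does, which is why older binaries need re-optimization to run well on a new part like Blackwell.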
Missing the Bigger Picture
However, all of this NVIDIA BS distracted us from the bigger DeepSeek improvement: its focus on quality. Generative AI has been delivering huge speed benefits but also a growing number of quality problems. It does you no good to do something faster if the quality drops. That means your quality control (QC) efforts have to expand, products may need to be recalled, and they may not work without substantial post-sale patching, all of which reflects badly on whatever task you are trying to complete or whatever product the AI helped create.
Given this degradation in AI quality, it is critical that every AI implementation include a quality control component that can monitor, report, and correct errors at scale. If your error rate grows faster than your productivity, you will face an insurmountable problem: your QC organization will quickly be overwhelmed, and customers aren’t fans of quality problems that scale faster than the performance benefits they were expecting.
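The kind of AI-on-AI quality gate described above can be sketched as a simple verifier loop. This is a hypothetical illustration, not DeepSeek’s actual architecture: a generator model produces an answer, a second checker model scores it, and anything below a quality threshold is retried and ultimately flagged for human review rather than released.

```python
# Hypothetical sketch of an AI-on-AI quality gate. The generate() and
# review() functions are stand-ins for real model calls; the point is
# the control flow: score every output, retry, and flag failures.

def generate(prompt: str) -> str:
    """Stand-in for the primary generative model."""
    return f"Answer to: {prompt}"


def review(prompt: str, answer: str) -> float:
    """Stand-in for a checker model that scores answer quality from 0 to 1."""
    return 1.0 if prompt in answer else 0.2


def answer_with_qc(prompt: str, threshold: float = 0.8, max_retries: int = 2):
    """Generate an answer, score it, and retry or flag if below threshold."""
    for _ in range(max_retries + 1):
        answer = generate(prompt)
        score = review(prompt, answer)
        if score >= threshold:
            return {"answer": answer, "score": score, "flagged": False}
    # Still below threshold after retries: flag for human QC, don't release.
    return {"answer": answer, "score": score, "flagged": True}


result = answer_with_qc("What is 2 + 2?")
print(result["flagged"])  # False: the answer passed the quality gate
```

The important property is that the check runs at machine speed on every output, so the QC capacity scales with the generation rate instead of being bottlenecked by human reviewers.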
This can lead to serious brand degradation and massive damage to customer loyalty and retention.
Wrapping Up: Quality Needs to Be Job One
One of Ford’s old slogans was “Quality is Job One.” It now needs to be the slogan of every AI implementation. And to keep up with the speed of AI, only AI is fast enough to oversee and mitigate the quality problems AI generates; people aren’t fast enough. With 85% of AI models reportedly failing, it is beyond critical that we increase our focus on quality to ensure a workable outcome. Deploying AI does come with risks, and at this high failure rate, unless you can address the quality problem, deployment doesn’t make a lot of sense. The project will simply become a money pit rather than something you can be proud of.
If there is anything to take away from the DeepSeek disruption, it’s that quality matters, and that early information about anything new is likely to be wrong at first, because it takes time for commentators and writers to get their arms around the issue, even though they are motivated to cover it for clicks. So focus on AI quality, not just implementation speed, and treat early reports on any AI breakthrough with heavy skepticism, as few of the people writing them know what is really going on.
About the author: As President and Principal Analyst of the Enderle Group, Rob Enderle provides regional and global companies with guidance in how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.
Related Items:
The Coming Catastrophic Failure Of AI In Business
DeepSeek R1 Stuns the AI World
Demystifying AI: What Every Business Leader Needs to Know
The post Correcting an AI Overreaction On DeepSeek, and Emphasizing the Importance of Quality appeared first on BigDATAwire.