2 – DeepSeek: now the truth comes out, and it’s not far from what I suspected
DeepSeek rocked the world in January with newer, cheaper, faster-to-build AI models.
American tech experts (who are spending billions) said, “no way.”
I told you, “Watch and see.”
China very likely used American know-how and our open-source AI models as part of the workup to train its models, just as Beijing insists that “partners” doing business in China reveal critical trade secrets, which it then “borrows” (to put it gently) as a way to build native industries.
Reports this morning suggest I may have been on to something. (Read)
Chinese researchers apparently used a technique called “distillation” to hoover up knowledge from larger, top-tier models, which was then used, in turn, to train smaller, cheaper, faster-to-build versions.
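For the curious, here is the gist in a minimal sketch. Distillation boils down to querying a big “teacher” model and training a small “student” model to imitate its outputs. The toy models, random data, and hyperparameters below are purely hypothetical illustrations written with PyTorch; this is not anyone’s actual pipeline.

# Minimal knowledge-distillation sketch (PyTorch). Toy models and random
# data are used purely for illustration, not any lab's real setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A large "teacher" and a much smaller "student" (hypothetical sizes).
teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's probabilities

for step in range(200):
    x = torch.randn(32, 128)  # stand-in for real training inputs

    with torch.no_grad():  # the teacher is only queried, never updated
        teacher_logits = teacher(x)

    student_logits = student(x)

    # KL divergence between the softened teacher and student distributions:
    # the student learns to imitate the teacher's outputs.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In practice, anyone distilling from a commercial API sees only the teacher’s text outputs or token probabilities rather than its raw internals, but the principle is the same: the small model learns to mimic the big one at a fraction of the cost.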
Silicon Valley researchers were gobsmacked but then recreated the scenario themselves.
CNBC reports: “researchers at Berkeley said they recreated OpenAI’s reasoning model for $450 in 19 hours last month. Soon after, researchers at Stanford and the University of Washington created their own reasoning model in just 26 minutes, using less than $50 in compute credits, they said. The startup Hugging Face recreated OpenAI’s newest and flashiest feature, Deep Research, as a 24-hour coding challenge.”
My guess is that global intelligence organizations are very aware of what’s happened, but that information will never see the light of day for a variety of reasons.
It’s a tough row to hoe.
The West built AI, then proudly trumpeted the fact that it was “open source,” which is a lot like waving an all-you-can-eat sign in front of a ginormous, hungry Viking looking for dinner.
Keith’s Investing Tip: Technology is accelerating at a tremendous pace, and every investor thinking in terms of “what was” will be left behind by “what will be.” Most investors could probably double their tech allocation and still not have enough.