From my own fractured understanding, this is indeed true - but the “DeepSeek” everybody is excited about, the one that performs as well as OpenAI’s best products but faster, is a prebuilt flagship model called R1. (Benchmarks here.)
The training data will never see the light of day. It would be an archive of every ebook under the sun and every scraped website - copyright infringement as far as the eye can see. That data is the “source” they would have to release for the model to be truly open source, and I doubt they ever will.
But DeepSeek does publish the code for “distilling” other companies’ more complex models into something smaller and faster (and a bit worse). Of course, those input models aren’t open source themselves, because they (like Facebook’s restrictively licensed Llama) were also trained on stolen data. (I’ve downloaded a couple of these distillations just to mess around with them. It feels like having a dumber, slower ChatGPT in a terminal.)
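If you want to poke at one of those distillations yourself, here’s a minimal sketch using the Hugging Face transformers library (not anything DeepSeek ships). The model ID is just an assumption about which distillation you grab, and you’ll need a fair amount of RAM or a GPU to actually run it.

```python
# Minimal sketch: prompting a distilled model locally.
# Assumes `transformers`, `torch`, and `accelerate` are installed, and that the
# model ID below matches whichever distillation you actually downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed; swap in your own

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep whatever precision the weights ship in
    device_map="auto",    # GPU if you have one, CPU otherwise
)

prompt = "Explain what model distillation is, in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```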
Theoretically, you could train a model using DeepSeek’s open source code and ethically sourced input data, but that would be quite the task. Most people just fine-tune an existing model on an extra layer of training data and call it a day. Here’s one such example (I hate it.) I can’t even imagine how much data you would have to create yourself to train one of these things from scratch. George R. R. Martin himself probably couldn’t train an AI to speak comprehensibly by feeding it his life’s work.
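For what it’s worth, “adding an extra layer of training data” usually just means fine-tuning a model somebody else already built on your own text. A very rough sketch, assuming the Hugging Face transformers and datasets libraries, a hypothetical my_writing.txt as your ethically sourced corpus, and a small stand-in base model:

```python
# Rough sketch of fine-tuning an existing causal LM on your own text.
# Assumes `transformers` and `datasets` are installed; my_writing.txt and the
# base model name are placeholders, not anything DeepSeek-specific.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"  # stand-in; in practice you'd start from something much bigger
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# One plain-text file of your own writing, chopped into model-sized chunks.
raw = load_dataset("text", data_files={"train": "my_writing.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned")
```

Even that shortcut leans on a base model someone else already trained on who-knows-what, which is sort of the whole problem.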