Explore Tencent's Hunyuan-Large, a 389B-parameter MoE model with 52B active parameters. Discover its top benchmarks, technical innovations, and real-world applications.
Silly license. Can be used worldwide, just not within the European Union where I live… (But it’s the same with Meta’s most recent models. The Llama 3.2 usage policy also contains a clause like that.)
We should really get some proper AI policy out.
Unfortunately every AI company is going to keep avoiding the EU like the plague, since it’s not clear how the AI rules will be enforced and nobody wants to be caught in a legal battle.
i.e. it’s most definitely not open source.
Practically none of the open source AI models are open source, at least not in the sense that term is used for software. Some people try to apply the word to AI models or just use it as a buzzword. It doesn’t mean you get the source to recreate the model (the dataset, in this case), and the licenses also restrict use in various ways. “Open source” in the AI world just means you’re able to download the weights and run inference on your own hardware. You can do that with this model, yet the license contains quite a few limitations. I think we should stop using the term open source for AI before it loses all its meaning.
That doesn’t mean they’re all licensed the same. Some are licensed under a proper free software license, and while you usually still don’t get the dataset, you get all the freedoms to use/run, share and modify the models to your liking.
IMHO the OSI is right, the designation “open source” should be reserved for those models that are actually open source (including training data). And apparently there are a few models that actually meet this criterion: “Though none are confirmed, the handful of models that Bdeir told MIT Technology Review are expected to land on the list are relatively small names, including Pythia by Eleuther, OLMo by Ai2, and models by the open-source collective LLM360.” (https://www.technologyreview.com/2024/08/22/1097224/we-finally-have-a-definition-for-open-source-ai/)
Perhaps it would also be useful to have a name for models that release their weights under an OSI license, maybe “open weight”? However, this model would not even meet that… (same for Llama).
open-weight?
I think the companies mostly stopped releasing the training data after a lot of them got sued for copyright infringement. I believe Meta’s first LLaMA still came with a complete list of the datasets that went in. And I forgot the name of the project, but the community actually recreated it because the official model’s license at that time only allowed research use. But things have changed since then. Meta opened up a lot. Training got more extensive and is still prohibitively expensive (maybe even more so). And the landscape got riddled with legal issues, compared to the very early days when it was mostly research and drew less attention from everyone.
Could this lead to increased difficulties in releasing open-source models? By keeping their models closed source, companies may avoid potential copyright infringement issues that could arise from making everything publicly available.
So would the granite models count as “open source”? They do publish the training data they used.
Seems they’ve outlined the datasets they used in Annex B of their paper. I haven’t checked whether the list is exhaustive and whether the training code and the scripts to prepare the data are there… If they are, I’d say this is indeed a proper open-source model. And the weights are licensed under an Apache license.