

That is true, of course, but in my view, LLMs, image and video generation, and similar technologies will lead to fewer and fewer people being willing to publish their creative works: the staggering output of AI models makes it increasingly unlikely that creators will receive compensation for their work, or even recognition for it.
As a result, I think fewer and fewer people will be willing to accept that their work is used for free to train precisely those models from which only the people who steal it benefit - and without this theft, the business model of OpenAI and the like simply cannot function.
In my view, this will sooner or later lead to a vicious cycle in which the models are trained predominantly on content they themselves generated. That, in turn, will lead to a stagnation of what we understand as culture - for these models are neither creative nor intelligent: they can merely recombine existing content into something that appears new, but they cannot produce anything truly new. Nevertheless, given AI's ever-expanding reach, it will likely be this repetitive output that exerts a significant influence on popular culture, at the very least.















It doesn’t matter that you’re not a scientist. Your approach is still interesting - and yes, maybe it will inspire people or even spur research on these topics. I’m keeping my fingers crossed!