I’ve recently noticed this opinion seems unpopular, at least on Lemmy.
There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people’s works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model itself, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate “new” content based on probabilities.
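To make that size argument concrete, here’s a rough back-of-envelope calculation in Python. Every number in it (15 trillion training tokens, a 70-billion-parameter model, ~4 bytes per token of text, 2 bytes per weight) is an illustrative assumption in the general ballpark of publicly discussed figures, not the spec of any particular model:

```python
# Back-of-envelope: could a model "store" its training data verbatim?
# All numbers are illustrative assumptions, not specs of any real model.

training_tokens = 15e12        # assumed training-set size, in tokens
bytes_per_token = 4            # rough average for English text
params = 70e9                  # assumed parameter count
bytes_per_param = 2            # fp16/bf16 weights

training_bytes = training_tokens * bytes_per_token   # 6.0e13, i.e. ~60 TB
model_bytes = params * bytes_per_param               # 1.4e11, i.e. ~140 GB

print(f"training text : {training_bytes / 1e12:.0f} TB")
print(f"model weights : {model_bytes / 1e9:.0f} GB")
print(f"ratio         : roughly {training_bytes / model_bytes:.0f}:1")
# Hundreds of bytes of text per byte of weights: wholesale memorization is
# impossible, and only heavily repeated content tends to get reproduced.
```

Under those assumptions the ratio comes out to a few hundred to one, which is why memorization is the exception (usually heavily duplicated content) rather than the rule.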
My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai
I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model to reproduce people’s “likeness.” I understand the hate for AI-generated shit (because it is shit). I really don’t understand where all the hate for using public data to build a “statistical” model that “learns” general patterns is coming from.
I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don’t think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with things like background removers, better autocomplete, etc.), which might eliminate some jobs, but that’s really just a problem with capitalism, and productivity increases are generally considered good.
There are a lot of problems with it, though. Lots of people could tell you about the security concerns and the wasted energy. There’s also the comically silly concept of companies marketing AI to write your texts and emails for you, and then having it summarize the texts and emails you receive. It just needlessly complicates things.
Conceptually, though, most people aren’t too against it. In my opinion, all the stuff being labeled “generative AI” isn’t really “AI” or “generative”. There are lots of ways people define AI, and without being too pedantic about definitions, the main reason I think they call it that, besides marketing, is that they are trying to sway public opinion by controlling the language. Scraping all sorts of copyrighted material and re-jumbling it to spit out something similar is arguably something we should prohibit as copyright infringement. It’s enough of a gray area to get away with in the short term. By convincing people, through the very language used to describe it, that they aren’t just putting other people’s material in a mixer but are “generating new content”, they hope we’ll roll over and sign off on what they’ve been doing.
Saying that humans create stories by jumbling together previous stories is a BS cop-out, too. Obviously we do, but humans have not given computers that same right, and we don’t have to. Also, LLMs are very complex, but they are far less complex than human minds. The way they put together text is closer to running a story through Google Translate ten times than it is to a human using a story for inspiration.
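For what it’s worth, here’s a toy sketch of what “generating content based on probabilities” means at its crudest. This is a deliberate caricature (a bigram lookup table over a made-up corpus); a real LLM is a huge neural network conditioned on long contexts, but the sample-the-next-token loop at the end is the same basic shape:

```python
import random
from collections import defaultdict

# Toy illustration of probability-based text generation. A real LLM is a
# neural network over long context windows; this bigram table is a
# deliberately crude stand-in for the same sample-the-next-token loop.

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": count which word follows which in the source material.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# "Generation": repeatedly sample the next word from those counts.
word = "the"
output = [word]
for _ in range(8):
    candidates = next_words.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))  # e.g. "the dog sat on the mat the cat sat"
# Every output is a rearrangement of the input corpus; nothing "new"
# appears that wasn't already in the source material.
```

Notice that everything this thing can ever emit is a reshuffling of its input, which is exactly the “mixer” objection above.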
There are real, definite benefits to using LLMs, but selling them as AI and trying to force them into everything is a gimmick.