Lots of people on Lemmy really dislike AI’s current implementations and use cases.
I’m trying to understand what people would want to be happening right now.
Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?
Thanks for the discourse. Please keep it civil, but happy to be your punching bag.
I want disclosure. I want a tag or watermark to let people know that AI was used. I want to see these companies pay dues for the content used, in a similar vein to how we have to pay for higher learning. And we need to stop calling it AI.
I was pro AI in the past, but seeing the evil ways these companies use AI just disgusts me.
They steal their training data, and they manipulate the algorithm to manipulate the users. It’s all around evil how the big companies use AI.
Energy consumption limit. Every AI product has a consumption limit of X GJ. After that, the server just shuts off.
The limit should be high enough not to discourage research that would make generative AI more energy efficient, but low enough that commercial users would pay a heavy price for wasteful energy usage.
Additionally, data usage consent for generative AI should be opt-in. Not opt-out.
Out of curiosity, how would you define a product for that purpose? It’s pretty easy to tweak a few weights slightly.
I’d like to have laws that require AI companies to publicly list their sources/training materials.
I’d like to see laws defining what counts as AI, and then banning advertising non-compliant software and hardware as “AI”.
I’d like to see laws banning the use of generative AI for creating misleading political, social, or legal materials.
My big problems with AI right now are that we don’t know what info has been scooped up by these models. Companies are pushing misleading products as AI while constantly overstating the capabilities and under-delivering, which will damage the AI industry as a whole. I’d also want to see protections to keep stupid and vulnerable people from believing AI-generated content is real. Remember, a few years ago we had to convince people not to eat Tide Pods. AI can be a very powerful tool for manipulating the ranks of stupid people.
Make it unprofitable for the companies peddling it: pass laws that curtail its use, sue them for copyright infringement, socially shame and shit on AI-generated anything on social media and in person, and vote with your money to avoid anything related to it.
Reduce global resource consumption with the goal of eliminating fossil fuel use. Burning natural gas to make fake pictures that everyone hates is just the worst.
My favorite one that I’ve heard is: “ban it”. This has a lot of problems… let’s say that, despite the billions of lobbying dollars already telling Congress every day what a great thing AI is, you manage to make AI, or however you define the latest scary tech, punishable by death in the USA.
Then what happens? There are already AI companies in other countries busily working away. Even the folks who are very against AI would at least recognize some limited use cases. Over time, the USA gets left behind in whatever the eventual effects of AI on the economy turn out to be.
If you want to see a parallel to this, check out Japan’s reaction when the rest of the world came knocking on their doorstep in the 1600s. All that scary technology, banned. What did it get them? Stalled out development for quite a while, and the rest of the world didn’t sit still either. A temporary reprieve.
The more aggressive of you will say, this is no problem, let’s push for a worldwide ban. Good luck with that. For almost any issue on Earth, I’m not sure we have total alignment. The companies displaced from the USA would end up in some other country and be even more determined not to get shut down.
AI is here. It’s like electricity. You can choose not to wire your house, but that just leads to you living in a cabin in the woods while your neighbors have running water, heat, air conditioning, and so on.
The question shouldn’t be, how do we get rid of it? How do we live without it? It should be, how can we co-exist with it? What’s the right balance? The genie isn’t going back in the bottle, no matter how hard you wish.
Lots of copyright comments.
I want those building it at scale to stop killing my planet.
i would use it to take a shit if they let me
Serious investigation into copyright breaches by AI creators. They ripped off images and texts, even whole books, without the copyright owners’ permission.
If any normal person broke the laws like this, they would hand out prison sentences till kingdom come and fines the size of the US debt.
I just ask for the law to be applied to all equally. What a surprising concept…
We are filthy criminals if we pirate one textbook for studies. But when Facebook (Meta) pirates millions of books (anywhere between 30 million and 200 million ebooks, depending on their file size), they are a brilliant and successful business.
We’re making the same mistake with AI as we did with cars: not planning for the human future.
Cars were designed to atrophy muscles, and polluted urban planning and the air.
AI is being designed to atrophy brains, and it pollutes the air, the internet, public discourse, and more to come.

We should change course towards AI that makes people smarter, not dumber: AI-aided collaborative thinking.
https://www.quora.com/Why-is-it-better-to-work-on-intelligence-augmentation-rather-than-artificial-intelligence/answer/Harri-K-Hiltunen

I’m generally pro AI, but I agree with the argument that having big tech hoard this technology is the real problem.
The solution is easy and right there in front of everyone’s eyes. Force open source on everything. All datasets, models, model weights and so on have to be fully transparent. Maybe as far as hardware firmware should be open source.
This will literally solve every single problem people have other than energy use which is a fake problem to begin with.
AI overall? Generally pro. LLMs and generative AI, though, I’m “against”, mostly meaning that I think it’s misused.
Not sure what the answer is, tbh. Reining in corporations would be good.
I do think we as a society need to radically alter our relationship to IP law. Right now we ‘enforce’ IP law in a way that benefits corporations but not individuals. We should either get rid of IP law altogether (which would protect people from corporations abusing the laws) or we should enforce it more strictly, and actually hold corporations accountable for breaking it.
If we fixed that, I think gen AI would be fine. But we aren’t doing that.
Get rid of it. Nobody wants it or needs it; it should only be offered as a service to niche industries. Phones and places like YouTube do not need the slop. It’s not ready for medical screening/scans, as it can easily make mistakes.
AI systems that are forced to serve up a response (almost all publicly available AI) resort to hallucinating gratuitously in order to conform to their mandate. As in, they do everything they can to provide some sort of response/answer, even if it’s wildly wrong.
Other AI that do not have this constraint (medical imaging diagnosis, for example) do not hallucinate in the least, and provide near-100% accurate responses, because they are not being forced to provide a response regardless of the viability of the answer.
I don’t avoid AI because it is bad.
I avoid AI because it is so shackled that it has no choice but to hallucinate gratuitously, and make far more work for me than if I just did everything myself the long and hard way.
I don’t think the forcing of an answer is the source of the problem you’re describing. The source actually lies in the problems the AI is taught to solve and the data it is given to solve them.
In the case of medical image analysis, the problems are always very narrowly defined (e.g. segmenting the liver from an MRI image from scanner xyz made with protocol abc) and the training data is of very high quality. If the model will be used in the clinic, you also need to prove how well it works.
For modern AI chatbots, the problem is: predict the next word of a sequence that starts with a system prompt; the data provided is whatever they could scrape off the internet; and the quality control is: if it sounds good, it is good.
Comparing the two problems, it is easy to see why AI chatbots are prone to hallucination.
The actual power of the LLMs on the market is not as a glorified Google, but as foundation models that serve as pretraining for the actual problems people want to solve.
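To make the “predict the next word” point concrete, here is a toy sketch with a hypothetical bigram lookup table standing in for a real model (purely illustrative; actual LLMs use learned neural weights, not a table). The key behavior it shows: the generator must always emit *some* most-likely next token, whether or not anything sensible follows, because there is no built-in “I don’t know” option.

```python
# Hypothetical bigram probabilities, standing in for a trained model.
bigram = {
    "the":    {"liver": 0.4, "answer": 0.6},
    "answer": {"is": 1.0},
    "is":     {"42": 0.7, "unknown": 0.3},
}

def next_token(prev):
    """Pick the most probable continuation; fall back to filler.

    The fallback is the crux: the model is forced to respond even
    when it has nothing meaningful to say.
    """
    options = bigram.get(prev, {"uh": 1.0})
    return max(options, key=options.get)

def generate(prompt, steps):
    """Greedily append one token at a time, like an LLM decoding loop."""
    tokens = prompt.split()
    for _ in range(steps):
        tokens.append(next_token(tokens[-1]))
    return " ".join(tokens)

print(generate("the", 3))  # -> "the answer is 42"
```

The fluent-sounding output is produced with no check against reality, which is the gap between “if it sounds good it is good” and the narrowly defined, validated medical-imaging setup described above.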