Kneecapped to uselessness. Are we really negating the efforts to stifle climate change with a technology that consumes monstrous amounts of energy only to lobotomize it right as it’s about to be useful? Humanity is functionally removed at this point.
I agree with the sentiment but as an autistic person I’d appreciate it if you didn’t use that word
EDIT: downvotes? Come on, lemmy, what gives? If this had been an anti-trans slur you’d have already grabbed your pitchforks!
I’ve seen a big uptick in the use of that word. I don’t like seeing it, so I use a replacement extension that intercepts it and censors it to a more appropriate word, while showing an asterisk so I know it was censored (roughly along the lines of the sketch after this comment). Now I don’t have to see the word, but I still get to see who is being a bigoted jerk.
Edit: yeah, so I guess on Lemmy people think it’s cool to throw around ableist slurs.
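For what it’s worth, here’s a minimal sketch of how a word-replacing userscript along those lines could work. This is just my guess at the approach, not the actual extension’s code, and the word list and substitute are placeholders:

```typescript
// Placeholder map: word to intercept -> substitute.
// The trailing asterisk marks that a swap happened.
const replacements: Record<string, string> = {
  "exampleslur": "jerk*",
};

// Walk every text node on the page and swap censored words in place.
const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
for (let node = walker.nextNode(); node; node = walker.nextNode()) {
  let text = node.textContent ?? "";
  for (const [word, substitute] of Object.entries(replacements)) {
    // Case-insensitive, whole-word match only.
    text = text.replace(new RegExp(`\\b${word}\\b`, "gi"), substitute);
  }
  node.textContent = text;
}
```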
If you’re asking an LLM for advice, then you’re the exact reason they need to be taught to redirect people to actual experts.
Do you think AI is supposed to be useful?!
Its sole purpose is to generate wealth so that stock prices can go up next quarter.
Great advice. I always consult the FDA before cooking rice.
You may not, but the company that packaged the rice did. The cooking instructions on the side of the bag are straight from the FDA. Follow that recipe and you will have rice that is perfectly safe to eat, if slightly overcooked.
Can’t help but notice that you’ve cropped out your prompt.
Played around a bit, and it seems the only way to get a response like yours is to specifically ask for it.
Honestly, I’m getting pretty sick of these low-effort misinformation posts about LLMs.
LLMs aren’t perfect, but the amount of nonsensical trash ‘gotchas’ out there is really annoying.
deleted by creator
Especially since the stats saying that they’re wrong about 53% of the time are right there.
That’s right around 9 percentage points lower than the statistic that 62% of all statistics on the Internet are made up on the spot!
I wish I had the source on hand, but you’ll just have to trust my word - after all, 47% of the time, it’s right 100% of the time!
Joking aside, I do wish I had the link to the study. It was cited in an article from earlier this year about AI making things up even when it cites sources (literally misrepresenting what was in the sources it claimed the information came from), and about how the companies behind these AIs collectively shrugged their shoulders and said “there’s nothing we can do about it” when asked what they intend to do about these “hallucinations,” as they call them.
I do hope you can find it! It’s especially strange that the companies all implied there was no answer, particularly considering that reducing hallucinations has been one of the primary goals over the past year. Maybe they meant there was no answer at the moment, much like how the Wright brothers initially had no way to control the random pitching and rolling of their aircraft. (Of course, the invention of the aileron would fix that later.)
Better chat models exist.
This one even provides sources to reference.