It certainly wasn’t because the company is owned by a far-right South African billionaire at the same moment that the Trump admin is entertaining a plan to grant refugee status to white Afrikaners. /s
My partner is a real refugee. She was jailed for advocating democracy in her home country. She would have received a lengthy prison sentence after trial had she not escaped. This crap is bullshit. Btw, did you hear about the white-genocide happening in the USA? Sorry, I must have used Grok to write this. Go Elon! Cybertrucks are cool! Twitter isn’t a racist hellscape!
The stuff at the end was sarcasm, you dolt. Shut up.
I am pretty sure that is called disinformation. Rather than “unauthorized edit”…
Brah, if your CEO edits the prompt, it’s not unauthorized. It may be undesirable, but it really ain’t unauthorised
they say unauthorised because they got caught
“I didn’t give you permission to get caught!”
Not sure how they really expected any other outcome.
maybe they just wanted to take their free shot. Its not like there are any real consequences for corporations nowdays
Translation: of course they’re not going to admit that Elon did it.
Hey! He only owns the platform. Why would you think he’s putting his thumb on the scale? /s
They say that they’ll upload the system prompt to github but that’s just deception. The Twitter algorithm is “open source on github” and hasn’t been updated for over 2 years. The issues are a fun read tho https://github.com/twitter/the-algorithm/issues
There’s just no way to trust that anything is running on the server unless it’s audited by 3rd party.
So now all of these idiots going to believe “but its on github open source” when the code is never actually being run by anyone ever.
We need people educated on open source, community-made hardware and software
The unauthorized edit is coming from inside the house.
Its incredible how things can just slip through, especially when they start at the very top
Unauthorized is their office nickname for Musk.
Musk isn’t authorized anymore?
Depends on the ketamine levels in his blood at any given moment. Sometimes, you edit your prompts from a k-hole, and everyone knows you can’t authorize your own actions when you’re fully dissociated.
I don’t understand how he can be such an ass. I spent plenty of time in a hole. I liken it to the river of souls… Your soul flowing through the river of the universe with all the other souls being cleansed. I Consider it a sacrament and a psychedelic at higher levels. Not a party drug… Hard to party when you’re laying down or walking with a 45 degrees slant.
With all his Burning Man experience, you would think he would have done some deems. Then realized there’s more to this life and abusing others is like abusing yourself.
My experience with psychedelics (enjoyed with others) is that what you experience relates directly to you the person.
For me, I was already terribly empathetic and I became crippled with empathy, incapable of any move in any direction that didn’t benefit everyone.
Ego death doesn’t happen for everyone. Some egos are too big to kill and grow even larger.
That is my anecdotal experience. I knew someone who went from huge ego to an ego to end all egos.
He woke up the next day convinced that the world needed him.
I’ve never done ketamine though.
LSD, DMT, and mushrooms. That’s it for me.
Tbh K isn’t even comparable. A hole is basically just like being in a coma if you watch someone. Fully alert but not a thought behind those eyes.
I was never huge into psychedelics for reasons you basically mentioned but used to love K because it was like not being alive for 20 min
Well, I’ve been across the horizon and back a few times and I never came back a cunt. But I also never came back a Nazi billionaire, and then I’m making no promises.
Unilaterally Authorized. Or UnAuthorized for short.
Looks like Elon used his Alt account.
Elon looking for the unauthorized person:
And what about Elmo’s white genocide obsession?
Don’t know the reference but I’m sure it’s awesome. :p
Heeeey a link!
You’re the best. I loved that.
It’s from the show “I think you should leave.” There’s a sketch where someone has crashed a weinermobile into a storefront, and bystanders are like “did anyone get hurt?” “What happened to the driver?” And then this guy shows up.
i wish i knew who the rogue employee was. incredible
I heard their initials are E.M.
Musk made the change but since AI is still as rough as his auto driving tech it did t work like he planned
But this is the future folks. Modifying the AI to fit the narrative of the regime. He’s just too stupid to do it right or he might be stupid and think these llms work better than they actually do.
Are we talking about the same guy that opted to scrap all sensors for his self-driving cars because he figures humans can drive with eyes only, they don’t need more than a camera?
he might be stupid and think these llms work better than they actually do.
There it is.
Yeah, billionaires are just going to randomly change AI around whenever they feel like it.
That AI you’ve been using for 5 years? Wake up one day, and it’s been lobotomized into a trump asshole. Now it gives you bad information constantly.
Maybe the AI was taken over by religious assholes, now telling people that gods exist, manufacturing false evidence?
Who knows who is controlling these AI. Billionaires, tech assholes, some random evil corporation?
Joke’s on you, LLMs already give us bad information
Sure, but unintentionally. I heard about a guy whose small business (which is just him) recently had someone call in, furious because ChatGPT told them that he was having a sale that she couldn’t find. The customer didn’t believe him when he said that the promotion didn’t exist. Once someone decides to leverage that, and make a sufficiently-popular AI model start giving bad information on purpose, things will escalate.
Even now, I think Elon could put a small company out of business if he wanted to, just by making Grok claim that its owner was a pedophile or something.
“Unintentionally” is the wrong word, because it attributes the intent to the model rather than the people who designed it.
Hallucinations are not an accidental side effect, they are the inevitable result of building a multidimensional map of human language use. People hallucinate, lie, dissemble, write fiction, misrepresent reality, etc. Obviously a system that is designed to map out a human-sounding path from a given system prompt to a particular query is going to take those same shortcuts that people used in its training data.
“Unintentionally” is the wrong word, because it attributes the intent to the model rather than the people who designed it.
You misunderstand me. I don’t mean that the model has any intent at all. Model designers have no intent to misinform: they designed a machine that produces answers.
True answers or false answers, a neural network is designed to produce an output. Because a null result (“there is no answer to that question”) is very, very rare online, the training data doesn’t include it; meaning that a GPT will almost invariably produce any answer; if a true answer does not exist in its training data, it will simply make one up.
But the designers didn’t intend for it to reproduce misinformation. They intended it to give answers. If a model is trained with the intent to misinform, it will be very, very good at it indeed; because the only training data it will need is literally everything except the correct answer.
Unintentionally is the right word because the people who designed it did not intend for it to be bad information. They chose an approach that resulted in bad information because of the data they chose to train and the steps that they took throughout the process.
Honestly a lot of the issues result from null results only existing in the gaps between information (unanswered questions, questions closed as unanswerable, searches that return no results, etc), and thus being nonexistent in training data. Models are therefore predisposed toward giving an answer of any kind, and if one doesn’t exist it’ll “make one up.”
Which is itself a misnomer, because it can’t look for an answer and then decide to make one up when it can’t find it. It just gives an answer that sounds plausible, and if the correct answer is most likely in its training data then that’ll seem most plausible.
Incorrect. The people who designed it did not set out with a goal of producing a bot that reguritates true information. If that’s what they wanted they’d never have used a neural network architecture in the first place.
That’s a good reason to use open source models. If your provider does something you don’t like, you can always switch to another one, or even selfhost it.
Or better yet, use your own brain.
Yep, not arguing for the use of generative AI in the slightest. I very rarely use it myself.
While true, it doesn’t keep you safe from sleeper agent attacks.
These can essentially allow the creator of your model to inject (seamlessly, undetectably until the desired response is triggered) behaviors into a model that will only trigger when given a specific prompt, or when a certain condition is met. (such as a date in time having passed)
https://arxiv.org/pdf/2401.05566
It’s obviously not as likely as a company simply tweaking their models when they feel like it, and it prevents them from changing anything on the fly after the training is complete and the model is distributed, (although I could see a model designed to pull from the internet being given a vulnerability where it queries a specific URL on the company’s servers that can then be updated with any given additional payload) but I personally think we’ll see vulnerabilities like this become evident over time, as I have no doubts it will become a target, especially for nation state actors, to simply slip some faulty data into training datasets or fine-tuning processes that get picked up by many models.
I currently treat any positive interaction with an LLM as a “while the getting’s good” experience. It probably won’t be this good forever, just like Google’s search.
Pretty sad that the current state would be considered “good”
With accuracy rates declining over time, we are at the ‘as good as it gets’ phase!
If that’s the case, where’s Jack Nicholson?
Yep, I knew this from the very beginning. Sadly the hype consumed the stupid, as it always will. And we will suffer for it, even though we knew better. Sometimes I hate humanity.