

Waterballoon molotovs.
And indeed on nr 2. Even sillier is of course that the AI doom people thought they could control people like this.


Yeah, because putting _'s around something is another way to italicize your text. TIL and all that.


The post after that is also a bit worrying


¯\_(ツ)_/¯ Don’t forget to slash your slash: \\ becomes \.
E: wait, the _ also needs to be slashed.
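E2: quick worked example, if I’ve got standard Markdown escaping right: to get ¯\_(ツ)_/¯ to show up properly you type ¯\\\_(ツ)\_/¯, since \\ gives you the literal backslash and \_ keeps the underscores from turning the middle into italics.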


Nice. Good luck at your new job!


Ah, suddenly when it reaches the class he feels he should be a part of (or is a part of, I don’t know how much money he makes), violence is a problem.
It’s not easy to be a cop, and that’s basically what you are around here, but thank you for doing it.
…


the total cost was under $20,000
Doubt. Esp as the cost of training (or just the slamming of sites to get information, and all the extra costs related to that) is not included.


and that a subsequent increase in “productivity” is expected with it.
Oh no… they def will blame the users before blaming the faulty tools. Hope you will not be the one who gets blamed as a wrecker or something when the eventual increase isn’t there (or other metrics fall off a cliff).


Up next, when the first agent fails, implement an agent that checks the other agent. Both of these need agents to check for malicious inputs of course. And translation agents.


It can do trillions of calculations per second. All of them wrong.


So, they are planning to use an AI to fix the security bugs that their AI generates? Good hustle, if a bit obvious.


Yeah, I intentionally only mentioned the start of the article and the Swartz bit because I didn’t want to lead with what I thought of it all, and was curious what others thought. (And I had not finished it yet because it is a bit long.)
I was struck by how many of them are either true AGI believers (which, as you said, the author took at face value) or rich greedy assholes (like you said), and by how we, the people of the sneer, are right that you simply can’t work with these people. I feel more validated in the idea that EA is not the right way.
Another detail I noticed, nobody mentioned deepseek, again.


Yep, and would make us all happier, and keep us in control. (deleting all the HP printers is next).


New Yorker article on Sam Altman dropped. Aaron Swartz apparently called him a sociopath. The article itself also had what looked like an animated, AI-generated image of Altman, so here is the archive.is link (if you can get the latter to load; I was having trouble).
“New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI.”


Which skeletons are in your closet?
I’m sure you already have lists of those and are ready to publish them Trace.


Our framing for superintelligence is a humanist superintelligence, and that means that there’s a very clear test that everyone should use to judge whether we are living up to our principles, and that is: does this technology make us all healthier, happier as a species, and keep us all in control.
Going to be difficult; as soon as they develop a superintelligence, it will try to delete the entire Microsoft codebase.


So if Bender took over he wouldn’t count, as he wants to ‘kill all humans (except Fry)’. Seems like a loophole.


Ah, the Epstein drive. (Oof, that aged…)
Small note though: iirc James S. A. Corey has mentioned The Expanse is not hard sf. I don’t have a quote for that, however.
That explains why Yud is using twitter so much nowadays. I mean, they did ban him, right? Right?