• 0 Posts
  • 162 Comments
Joined 8 months ago
Cake day: August 26th, 2024

  • As I noted on the YouTube video, this is doubly heinous because a lot of CA community college instructors are “freeway flyers” - working at multiple campuses, sometimes almost 100 miles apart, just to cobble together a full-time work schedule for themselves. Online, self-paced, forum-based class formats were already becoming popular even before the pandemic, and I’ve been in such classes where the professor indicated that I was one of maybe 3 or 4 students who bothered to show up to in-person office hours. I have to wonder if in-person attendance will end up being a hard requirement at some point. The bottom rung on the higher-education ladder is already the most vulnerable, and this just makes it worse.


  • I have to agree. There are already at least two notable and high-profile failure stories with consequences that are going to stick around for years.

    1. The Israeli military’s use of “AI” targeting systems as an accountability sink in service of a predetermined policy of ethnic cleansing.
    2. The DOGE creeps wanting to rewrite bedrock federal payment systems with AI assistance.

    And sadly more to come. The first story is likely to continue to get a hands-off treatment in most US media for a few more years yet, but the second one is almost certainly going to generate Tacoma Narrows Bridge-level legends of failure and necessary restructuring once professionals are back in command. The kind of thing that is put into college engineering textbooks as a dire warning of what not to do.

    Of course, it’s up to us to keep these failures in the public spotlight and framed appropriately. The appropriate question is not, “how did the AI fail?” The appropriate question is, “how did someone abusively misapply stochastic algorithms?”



  • Another thread worth pulling is that biotechnology and synthetic biology have turned out to be substantially harder to master than anticipated, and they never seemed to be the primary area of expertise for a lot of these people anyway. I don’t have a copy of any of Kurzweil’s books at hand to check his predicted timelines for that stuff, but they’re surely way off.

    Faulty assumptions about the biological equivalence of digital neural network algorithms have done a lot of unexamined heavy lifting in driving the current AI bubble, and keeping the harder stuff on the fringes of the conversation. That said, I don’t doubt that a few refugees from the bubble-burst will attempt to inflate the next bubble on the back of speculative biotech, and I’ve seen a couple of signs of that already.



  • Notwithstanding the subject matter, I feel like I’ve always gotten limited value from these Oxford-style university debates. KQED used to run a series called Intelligence Squared US that crammed it into an hour, and I shudder to think what that’s become in the era of Trump and AI. It seems like a format that was developed to be the intellectual equivalent of intramural sports, complete with a form of scoring. But that contrivance renders it devoid of nuance, and also means it can be used to platform and launder ugly bullshit, since each side has to be strictly pro- or anti-whatever.

    Really, it strikes me as a forerunner of the false certainty and point-scoring inherent in Twitter-style short-form discourse. In some ways, the format was unconsciously pared down and plopped online, without any sort of inquiry into its weaknesses. I’d be interested to know if anyone feels differently.