Money represents the aggregate value of the intersection between human labour, ingenuity and scarce finite resources. Human lives are routinely rendered down, ground up and consumed by the drive to generate this representative value. Entire ways of living, forms of self-perception and our understanding of what makes a human worthy of existing are inextricably wrapped up in this value-generating process.
As a society we have declared that these people are best placed to decide what to do with that value. They chose anime.
To be fair I’ve spent an inordinate amount of time looking at stuff on the Internet that doesn’t interest me. Especially since my workplace moved their employee training online.
Man, I feel this, particularly the sudden shutting down of data access because all the platforms want OpenAI money. I spent three years building a tool that pulled follower relation data from Twitter and exponentially crawled its way outwards from a few seed accounts to millions of users. Using that data it was able to make a compressed summary network, identify community structures, give names to the communities based on words in user profiles, and then use sampled tweet data to tell us the extent to which different communities interacted.
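The community-finding and naming steps can be sketched in miniature. This is not the original tool, just a toy illustration under assumptions: a deterministic label-propagation pass over a tiny hand-built follower graph, with each community named by the most common word in its members' profiles (all node names, profiles and the `label_propagation` helper are hypothetical):

```python
from collections import Counter

# Toy undirected follower graph: two tight cliques joined by one bridge edge.
# Adjacency lists put same-clique neighbours first so the toy run is deterministic.
adj = {
    "a1": ["a2", "a3", "a4", "b1"], "a2": ["a1", "a3", "a4"],
    "a3": ["a1", "a2", "a4"],       "a4": ["a1", "a2", "a3"],
    "b1": ["b2", "b3", "b4", "a1"], "b2": ["b1", "b3", "b4"],
    "b3": ["b1", "b2", "b4"],       "b4": ["b1", "b2", "b3"],
}
profiles = {
    "a1": "anime artist", "a2": "anime fan", "a3": "anime blog", "a4": "anime news",
    "b1": "crypto trader", "b2": "crypto vc", "b3": "crypto dev", "b4": "crypto news",
}

def label_propagation(adj, rounds=5):
    """Each node repeatedly adopts the most common label among its neighbours."""
    labels = {n: n for n in adj}
    for _ in range(rounds):
        for n in sorted(adj):  # fixed visiting order keeps the demo reproducible
            counts = Counter(labels[m] for m in adj[n])
            labels[n] = counts.most_common(1)[0][0]
    return labels

labels = label_propagation(adj)

# Group nodes into communities, then name each one by its most common profile word.
communities = {}
for node, lab in labels.items():
    communities.setdefault(lab, []).append(node)
names = {
    lab: Counter(w for n in members for w in profiles[n].split()).most_common(1)[0][0]
    for lab, members in communities.items()
}
print(sorted(names.values()))  # -> ['anime', 'crypto']
```

A real crawler would of course face millions of nodes, rate limits and far messier profile text; the point is only the shape of the pipeline: graph in, communities out, names from profile vocabulary.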
I spent eight months in ethics committees to get approval to do it and got a prototype working, but rather than just publish I wanted to make it accessible to the academic community, so I spent even more time building an interface, making it user-friendly, improving performance, making it more stable, etc.
I wanted to ensure that when we published our results I could also say “here is this method we’ve developed, and here you can test it and use it too for free, even if you don’t know how to code”. Some people at my institution wanted me to explore commercialising but I always intended to go open source. I’m not a professional developer by any means so the project was always going to be a janky academic thing, but it worked for our purposes and was a new way of working with social media data to ask questions that couldn’t be answered before.
Then the API got put behind a $48K a month paywall and the project was dead. Then everywhere else started shutting their doors too. I don’t do social media research anymore.
It’s truly a wonder where these topics will take you.
These people aren’t real nerds.
As with ELIZA, where we interpret there as being humanity behind it? Or that ultimately “humans demanding we leave stuff to humans because those things are human” is OK?
To be fair the more imaginative ones have entire educational models built around teaching the societally transformative power of bitcoin.
Promptfondler sounds like an Aphex Twin song title.
the truth in the joke is that you’re a huge nerd
Oh absolutely. Yes, I think part of my fascination with all of this is that I think I could quite easily have gone the tech bro hype train route. I’m naturally very good at getting into the weeds of tech and understanding how it works. I love systems (love factory, strategy and logistics games), love learning techy skills purely to see how they work, etc. I taught myself to code just because the primary software for a particular form of qualitative analysis annoyed me. I feel I am a prime candidate for this whole world.
But at the same time I really dislike the impoverished viewpoint that comes with being only in that space. There are just some things that don’t fit that mode of thought. I also don’t have ultimate faith in science and tech, probably because the social sciences captured me at an early age, but also because I have an annoying habit of never being comfortable with what I think, so I’m constantly reflecting and rethinking, which I don’t think gels well with the tech bro hype train. That’s why I embrace the moniker of “Luddite with an IDE”. Captures most of it!
The learning facilitators they mention are the key to understanding all of this. They need them to actually maintain discipline and ensure the kids engage with the AI, so they still need humans in the room. But roles that were once teachers have now been redefined as “learning facilitators”. Apparently former teachers have rejoined the school in these new roles.
Like a lot of automation, the main selling point is deskilling roles, reducing pay, making people more easily replaceable (you don’t need a teaching qualification to be a “learning facilitator” to the AI) and producing a worse service which is just good enough if it is wrapped in difficult-to-verify claims and assumptions about what education actually is. Of course it also means that you get a new middleman parasite siphoning off funds that used to flow to staff.
As a silver lining, I imagine all of us in education will retain our jobs and just be unburdened of marking. Thus automation will bring us more freedom and time to develop thoughtful and engaging educational experiences.
Just as automation has always done. Right? RIGHT?!
I remember one time in a research project I switched out the tokeniser to see what impact it might have on my output. Spent about a day re-running and the difference was minimal. I imagine it’s wholly the same thing.
*Disclaimer: I don’t actually imagine it is wholly the same thing.
The only viable use case, in my opinion, is to utilise its strong abilities in SolidGoldMagikarp to actualise our goals in the SolidGoldMagikarp sector and achieve increased margins on SolidGoldMagikarp.
If they can somehow shoehorn in Blair’s favourite ID card scheme into it they might win some sort of internal Labour bingo game.
Does this mean they’re not going to bother training a whole new model again? I was looking forward to seeing AI Mad Cow Disease after it consumed an Internet’s worth of AI generated content.
I really should have done a full risk assessment before invoking the dust specks mind virus, my apologies.
Thanks for the kind feedback, I’m glad that my thoughts resonated with people. Sometimes I start these things and wonder if I’ve just analysed my way into a weird construct of my own creation.
My most charitable interpretation of this is that he, like a lot of people, doesn’t understand AI in the slightest. He treated it like Google, asked for some of the most negative quotes from movie critics for past Coppola films and the AI hallucinated some for him.
If true, it’s a great example of why AI is actually worse for information retrieval than a basic vector-based search engine.
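The contrast is easy to show in miniature. A minimal sketch of that baseline, assuming a plain bag-of-words vectoriser and cosine similarity over a toy corpus (the document names, texts and function names are all illustrative):

```python
import math
from collections import Counter

# Toy corpus standing in for an index of real critics' reviews.
docs = {
    "review-1": "the critics praised the film for its sprawling ambition",
    "review-2": "a bloated mess according to several negative reviews",
    "review-3": "the director answered his critics with a triumphant epic",
}

def vectorise(text):
    """Bag-of-words: map each lowercase token to its count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query, docs):
    """Rank existing documents by similarity to the query; it can only
    return text that is actually in the index, never an invented quote."""
    qv = vectorise(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorise(docs[d])), reverse=True)

print(search("negative reviews from critics", docs)[0])  # -> review-2
```

The design point is the last comment: retrieval ranks documents that exist, so the failure mode is a bad ranking, not a fabricated critic quote with no source.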
Forgot to say: yes, AI-generated slop is one key example, but often I’m also thinking of other tasks that are presumed to be basic because humans can be trained to perform them with barely any conscious effort. Things like self-driving vehicles, production line work, call centre work, etc. As with the fact that “full self-driving” still requires supervision, what often happens with tech automation is that it creates things that de-skill the role or perhaps speed it up, but still require humans in the middle to do things that are simple for us but difficult to replicate computationally. Humans become the glue, slotted into all the points of friction and technical inadequacy, to keep the whole process running smoothly.
Unfortunately this usually leads to downward pressure on the wages of those humans, and the expectation that they match the theoretical speed of the automation rather than recognition that the human is the actual pace-setter, because without them the pace would be zero.
Funnily enough that was the bit I wrote last just before hitting post on Substack. A kind of “what am I actually trying to say here?” moment. Sometimes I have to switch off the academic bit of my brain and just let myself say what I think to get to clarity. Glad it hit home.
Thanks for the link. I’m going to read that piece and have a look through the ensuing discussion.
Huh, I never imagined Wikipedia would have such a thing. Thanks!