It’s pretty random in terms of what is or isn’t doable.
For me it’s a big performance booster because I genuinely suck at coding and don’t do too much complex stuff. As a “clean up my syntax” and a “what am I missing here” tool it helps, or at least helps in figuring out what I’m doing wrong so I can look in the right place for the correct answer on something that seemed inscrutable at a glance. I certainly can do some things with a local LLM I couldn’t do without one (or at least without getting berated by some online dick who doesn’t think he has time to give you an answer but sure has time to set you on a path towards self-discovery).
How much of a benefit it is for a professional I couldn’t tell. I mean, definitely not a replacement. Maybe helping read something old or poorly commented fast? Repetitive tasks in very commonplace mainstream languages?
I don’t think it’s useless, but if you ask it to do something by itself you can’t trust that it’ll work without significant additional effort.
A lot of words to just say vibe coding
Sorta kinda. It depends on where you put that line. Because online drama is fun, once we got to the “vibe coding” name we moved to the assumption that all AI assistance is vibe coding. In practice, though, there’s the percentage of what you do that you know how to do, the percentage you vibe code because you can’t figure it out off the top of your head, and the percentage you just can’t do without researching, because the LLM can’t do it effectively or what it can produce is too crappy to use as part of something else.
I think if the assumption is you should just “git gud” and not take advantage of that grey zone where you can sooort of figure it out by asking an AI instead of going down a Google rabbit hole then the performative AI hate is setting itself up for defeat, because there’s a whole bunch of skill ranges where that is actually helpful for some stuff.
If you want to deny that there’s a difference between that and just making code soup by asking a language model to build you entire pieces of software… well, then you’re going to be obviously wrong and a bunch of AI bros are going to point at the obvious way you’re wrong and use that to pretend you’re wrong about the whole thing.
This is basic online disinformation playbook stuff and I may suck at coding, but I know a thing or two about that. People with progressive ideas should get good at beating those one of these days, because that’s a bad outcome.
People seem to disagree but I like this. This is AI code used responsibly. You’re using it to do more, without outsourcing all your work to it and you’re actively still trying to learn as you go. You may not be “good at coding” right now but with that mindset you’ll progress fast.
I think the effects of it are… a bit more nuanced than that, perhaps?
I can definitely tell there are places where I’m plugging knowledge gaps fast. I just didn’t know how to do a thing, I did it AI-assisted once or twice and I don’t need to be AI assisted anymore because I understood how it works now. Cool, that. And I wouldn’t have learned it from traditional sources, because asking in public support areas would have led to being told I suck and should read the documentation, and/or to a 10-video series on YouTube where you can watch some guy type for seven hours.
But there are also places where AI assistance is never going to fill in the blanks for me, you know? Larger trends, good habits, technical details or best practices just aren’t going to come up from using a smart autocorrect that can explain why something was wrong.
Honestly, in those spaces the biggest barrier is still what it was: I don’t necessarily want to “progress” on those areas because I don’t need it and it’s not my job. I can automate a couple things I didn’t know how to automate before, and that’s alright. For the rest, I will probably live with the software someone else has made when it exists.
The problem is hubris, right? I know what I don’t know and which parts I care to learn. That’s fine. Coding assistant LLMs are a valid tool for someone like that to slightly expand their reach, and I presume there’s a lot of people like that. It’s the random entrepreneurs who have been sold by big corpos that they don’t need a real programmer to build their billion-dollar app anymore that are going to crash and burn and may take some of the software industry down with them.
It catches things like spelling errors in variable names, does good autocomplete, and it’s useful to have it look through a file before committing it and creating a pull request.
It’s very useful for throwaway work like writing scripts and automations.
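To give a concrete sense of the kind of throwaway script meant here (the task and function names are just an illustration, not anything from the thread), a one-off like batch-cleaning filenames is exactly the sort of thing these tools usually get right:

```python
# Example of throwaway automation: slugify every filename in a folder
# (lowercase, runs of non-alphanumeric characters collapsed to dashes).
import os
import re

def slugify(name: str) -> str:
    base, ext = os.path.splitext(name)
    base = re.sub(r"[^a-z0-9]+", "-", base.lower()).strip("-")
    return base + ext.lower()

def rename_all(folder: str) -> None:
    for name in os.listdir(folder):
        new = slugify(name)
        if new != name:
            os.rename(os.path.join(folder, name), os.path.join(folder, new))
```

It’s short, it’s disposable, and if it’s wrong you find out immediately, which is why this category is such a good fit.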
It’s useful, but not a 10x multiplier like all the CEOs claim it is.
Fully agreed. Everybody is betting it’ll get there eventually and trying to jockey for position being ahead of the pack, but at the moment there isn’t any guarantee that it’ll get to where the corpos are assuming it already is.
Which is not the same as not having better autocomplete/spellcheck/“hey, how do I format this specific thing” tools.
I think the main barriers are context length (useful context, that is: GPT-4o advertises a 128k context, but it’s mostly sensitive to the beginning and end of the context and blurry in the middle, and that’s consistent with other LLMs) and the training data just not really existing. How many large-scale, well-written, well-maintained projects are really out there? Orders of magnitude fewer than there are examples of “how to split a string in Bash” or “how to set up validation in Spring Boot”. We might “get there”, but it’ll take a whole lot of well-written projects first, written by real humans, maybe with the help of AI here and there. Unless, that is, we build it with the ability to somehow learn and understand faster than humans.
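To put rough numbers on why 128k tokens runs out fast, here’s a back-of-the-envelope estimate; the 4-bytes-per-token figure is a crude assumption (real tokenizers vary), and the extension list is just an example:

```python
# Back-of-the-envelope check: would a codebase fit in a 128k-token window?
import os

BYTES_PER_TOKEN = 4  # crude average; actual tokenizers vary by language

def estimated_tokens(root: str, exts=(".py", ".js", ".java", ".c", ".go")) -> int:
    """Sum source-file sizes under root and convert bytes to a rough token count."""
    total_bytes = 0
    for dirpath, _, files in os.walk(root):
        for fname in files:
            if fname.endswith(exts):
                total_bytes += os.path.getsize(os.path.join(dirpath, fname))
    return total_bytes // BYTES_PER_TOKEN
```

By that heuristic, a mid-sized project with ~2 MB of source already needs roughly 500k tokens, several times a 128k window before you even add the conversation itself.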
I don’t know, some of these guys have access to a LOT of code, and what counts as a “good” codebase is up for even more debate.
I think the other issue is more relevant. Even 128K tokens is not enough for something really big, and the memory and processing costs for that do skyrocket. People are trying to work around it with draft models and summarization models, so they try to pick out the relevant parts of a codebase in one pass and then base their code generation just on that, and… I don’t think that’s going to work reliably at scale. The more chances you give a language model to lose their goddamn mind and start making crap up unsupervised the more work it’s going to be to take what they spit out and shape it into something reasonable.
Yeah, it’s still super useful.
I think the execs want to see dev salaries go to zero, but these tools make more sense as an accelerator, like giving an accountant Excel.
I get a bit more done faster, that’s a solid value proposition.
It’s not much use with a professional codebase as of now, and I say this as a big proponent of learning FOSS AI to stay ahead of the corpocunts
Yeah, the AI corpos are putting a lot of effort into parsing big contexts right now. I suspect because they think (probably correctly) that coding is one of the few areas where they could get paid if their AIs didn’t have the memory of a goldfish.
And absolutely agreed that making sure the FOSS alternatives keep pace is going to be important. I’m less concerned about hating the entire concept than I am about making sure they don’t figure out a way to keep every marginally useful application behind a corporate ecosystem walled garden exclusively.
We’ve been relatively lucky in that the combination of PR brownie points and general crappiness of the commercial products has kept an incentive to provide a degree of access, but I have zero question that the moment one of these things actually makes money they’ll enshittify the freely available alternatives they control and clamp down as much as possible.