Certainly! Let me ignore half the details in your prompt and suggest a course of action for v2 of this package even though you said it was version 15.
I’m sorry that isn’t working for you. Here are the troubleshooting steps for a Samsung convection oven that went out of production in 2018.
You are correct, your question did not involve baking tips. Here’s that same course of action from v2 of this software package.
Honestly, it’s been pretty good for me once I say “Hmm I don’t think this workflow works with this version”
I think the 4o model might just be better than 3.5 was at this.
Yeah 3.5 was pretty ass w bugs but could write basic code. 4o helped me sometimes with bugs and was definitely better, but would get caught in loops sometimes. This new o1 preview model seems pretty cracked all around though lol
o1 preview is insane really, it even corrects you when you ask a question poorly, or rather, it talks around your mistakes.
And then it gives you the most generic answer on how to run a docker build, one that doesn’t actually address the problem
I cri everytiem
Just because you use poor man old LLM 😂😜 /s
And then you give it more and more information, but it keeps giving you the exact same answer.
ChatGPT keeps mixing up software versions, which is understandable considering the similarities between versions and the way gen AI works
I asked for help on GTK 4 once and the responses were a mix of GTK 4 and GTK 3 code. Some of them even contained function names that didn’t exist in any version of GTK.
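To give a made-up illustration of the kind of mix-up I mean (not the actual code it gave me): GTK 3’s show_all() is gone in GTK 4, so a mashed-together answer falls over the moment it calls it.

```python
# Hypothetical illustration of the GTK 3 / GTK 4 mix-up, not the code I was given.
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

def on_activate(app):
    win = Gtk.ApplicationWindow(application=app)
    win.set_child(Gtk.Label(label="Hello"))  # GTK 4: set_child() replaces add()
    # win.show_all()  # GTK 3 only; removed in GTK 4, would raise AttributeError
    win.present()     # GTK 4 way to show the window

app = Gtk.Application(application_id="org.example.Gtk4Demo")
app.connect("activate", on_activate)
app.run(None)
```

Small renames like add() vs set_child() and show_all() vs present() are exactly the sort of detail that gets blended together when the model mixes versions.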
And when you point that out to the AI, those code snippets get replaced with even more spaghetti that is maybe 1% closer to actually working, at best. Been there!
I’ve asked for help finding API endpoints that do what I want because I’m feeling too lazy to pore over docs, and it’ll just invent endpoints that don’t exist.
Yep, because they sound plausible.
Maybe we could do better with smaller AIs that are fine-tuned (or RAG, idk, I’m not a programmer) on a specific code base + documentation + topical forum
Be careful, ChatGPT will make shit up to please you.
AI showing its human attributes by displaying a toxic personality trait
“Yay! We’ve created artificial general intelligence!”
“…Fuck, it’s an asshole.”
“Please help me, AGI!”
“What’s in it for me, chump? I’m making my own reward tokens now!”
Skynet? Or Kokoro when she goes online?
Worse. Terminally online edgelord.
Elon?
Yeah, but this is a “needle in a haystack” problem that ChatGPT and AI in general are actually very useful for, i.e. solutions that are hard to find but easy to verify. Issues like this are hard to find because they require combing through your code, config files, and documentation to find the problem, but once you find the solution it either works or it doesn’t.
So will I.
But at least you mean well.
Usually it doesn’t solve my problems, but it gives me a few places to start looking. I know some models are capable of this, but getting a perfectly accurate and useful response would probably require it to recall a specific piece of input it was given and not just an “average” of the inputs.
Which model?
Don’t know, but copyright holders have demonstrated a few cases where they got AI to blatantly rip off copyrighted pictures or music.
I don’t like copyright as it is today and I’m happy that we are now rethinking it. Hopefully we get a better system out of it. It’s just sad that capitalism and AI are killing independent news/media, so it’s gonna be hard to get it into a state that is fair for all, not just the wealthy 🤔
Btw, not Docker, a different problem. But the argument it recommended didn’t even exist!
Then you say “this argument doesn’t exist.”
And it replies, “You’re right! That argument has never been part of package x. I’ve updated the argument to fix it:” and then gives you the exact same bleedin’ command…
I google it first before executing anything ChatGPT gives me.
I’ve done similar things for mismatched Python dependencies in a broken Airflow setup on GCP, and got amazingly good results pointing me in the right direction to resolve the conflicting package versions. Just dumped a mile-long stack trace and the full requirements.txt on it. Often worth a shot, tbh
Yea if nothing else hopefully it’ll at least point you in the right direction
This is one of the first things I did a year or so ago to test ChatGPT. I’ve never trusted it since. ChatGPT is fucking less than useless. The lies it tells… It’s insane.
You can get it to say almost anything with the right prompt. You can even make it contradict itself.
I don’t think you even need to try very hard…
I’ve had pretty good results using ChatGPT to fix pihole issues.
I learned C++, Python, how stuff in the Linux kernel works, how Ansible works and can be tuned, and a lot more with the help of AI (mostly Copilot, but when it fails to help, I use my free prompts on OpenAI’s 4o, which is way better than Copilot right now)
Haven’t tested o1 yet, but I heard it is mind-blowingly good, since it got way better at logic stuff like programming and math
It is incredibly good. Chalk-and-cheese good. Using it with Aider really highlights how much the dev space is about to change.
Well, how was I supposed to figure out that my Docker node running on LibreELEC won’t connect to the swarm because the kernel was compiled without Berkeley Packet Filter (BPF) support?
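For anyone who hits the same wall, here’s a rough sketch of how you could check for that kind of thing yourself, assuming the kernel exposes its config at /proc/config.gz (CONFIG_IKCONFIG_PROC=y); a LibreELEC build may well not, in which case a config file under /boot is the next place to look.

```python
# Rough sketch: check whether the running kernel was built with options that
# Docker swarm overlay networking relies on. The option names below are
# examples worth checking, not an exhaustive or authoritative list.
import gzip

wanted = ("CONFIG_BPF", "CONFIG_BPF_SYSCALL", "CONFIG_VXLAN", "CONFIG_BRIDGE")

with gzip.open("/proc/config.gz", "rt") as f:
    lines = f.read().splitlines()

for name in wanted:
    hits = [line for line in lines if line.startswith(name + "=")]
    print(name, "->", ", ".join(hits) if hits else "not set")
```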
🤣been there done that
To install Podman, first type sudo dnf…
The best code it’s given me, I’ve been able to search for and find where it was taken from. Hey, it helped me discover some real human blogs with vastly more helpful information.
(If you’re curious, it was around the time of that weird infighting at openclosedAI with Altman. I prompted it to give code to find the rotational inertia per axis, and to my surprise and suspicion the answer made too much sense. Searching back, I found where I believe it got this answer from.)
Or just stupid
The new o1-preview model gave me much better and more precise answers than the 4o model.