• froztbyte@awful.systems
    1 day ago

    the prompt-related pivots really do bring all the chodes to the yard

    and they’re definitely like “mine’s better than yours”

    • scruiser@awful.systems
      edited 20 hours ago

      The latest twist I’m seeing isn’t blaming your prompting (although they’re still eager to do that); it’s blaming your choice of LLM.

      “Oh, you’re using shitGPT 4.1-4o-o3 mini _ro_plus for programming? You should clearly be using Gemini 3.5.07 pro-doubleplusgood, unless you need something locally run, in which case you should be using DeepSek_v2_r_1 on your 48 GB VRAM local server! Unless you need nice-sounding prose, then you actually need Claude Limmerick 3.7.01. Clearly you just aren’t trying the right models, so allow me to educate you with all my prompt-fondling experience. You’re trying to make some general point? Clearly you just need to try another model.”