• Sundray@lemmus.org
    link
    fedilink
    English
    arrow-up
    4
    ·
    9 hours ago

    I suppose this is preferable to human intelligence, where the answer would come back as “Why would you want to do that?”

  • Katana314@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    ·
    15 hours ago

    Whoa, I was weirdly into My Little Pony for a while but I didn’t realize it powered data centers.

  • petey@aussie.zone
    link
    fedilink
    arrow-up
    7
    ·
    17 hours ago

    I have my local LLM rig (powered by solar) for asking stupid questions because I feel it’s unreasonable to ask a data centre somewhere why spoons taste funny

  • chiliedogg@lemmy.world
    link
    fedilink
    arrow-up
    28
    ·
    1 day ago

    I think doing this exact post but changing the request could really speak to the inefficiency of AI.

    Something like “What is 8x12”, go through the whole sequence, and have it spit out “Eight times twelve is 114”

    • ikt@aussie.zoneOP
      link
      fedilink
      arrow-up
      14
      ·
      edit-2
      1 day ago

      tbf they are getting significantly better. One of the biggest improvements that hasn’t really filtered through to the mainstream is MOE / mixture of experts

      the tldr is that back in the chatgpt4 days, wayyy back in the ye olden times of 2024, ai would essentially go through the entire library for every single question to find an answer

      Now the libraries are getting massive, but the queries are responding faster, because instead of going through the entire library for every question, the model only needs part of it. Just like in a library, instead of searching all of human knowledge for what 8x12 is, it goes straight to the maths section, saving a lot of power and time

      In the case of chat.mistral.ai it doesn’t even go through the library to the maths section, it just writes a quick python script and outputs the answer that way.
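      The library analogy above can be sketched as toy top-k expert routing: a router scores every “expert” (library section) and only the best few actually run. This is a minimal illustration, not any real model’s router:

```python
import math

# Toy mixture-of-experts routing: a router scores every "expert"
# (library section), but only the top-k experts are actually run,
# instead of all of them for every query.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(router_logits, k=2):
    """Return the indices of the k highest-scoring experts."""
    probs = softmax(router_logits)
    return sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]

# Hypothetical router scores for a maths-flavoured query,
# experts 0..3 = [general, maths, code, history]:
print(route([0.1, 3.2, 2.0, -1.0], k=2))  # -> [1, 2]: only two experts run
```

      Because only k experts execute per token, the compute per query grows with k rather than with the total number of experts, which is the power and time saving described above.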

    • TeddE@lemmy.world
      link
      fedilink
      arrow-up
      3
      ·
      19 hours ago

      Just as a tangent:

      This is one reason why I’ll never trust AI.

      I imagine we might wrangle the hallucination thing (or at least get it to be more verbose about its uncertainty), but I doubt it will ever identify a poorly chosen question.

      • marcos@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        18 hours ago

        Making LLMs warn you when you ask a known bad question is just a matter of training them differently. It’s a perfectly doable thing, with a known solution.

        Solving the hallucinations in LLMs is impossible.

        • Leon@pawb.social
          link
          fedilink
          English
          arrow-up
          3
          ·
          17 hours ago

          That’s because it’s a false premise. LLMs don’t hallucinate; they do exactly what they’re meant to do: predict text and output something that’s legible and reads as human-written. There’s no training for correctness; how do you even define that?

          • ikt@aussie.zoneOP
            link
            fedilink
            arrow-up
            1
            ·
            9 hours ago

            “There’s no training for correctness, how do you even define that?”

            I guess you can chat to these guys who are trying:

            By scaling reasoning with reinforcement learning that rewards correct final answers, LLMs have improved from poor performance to saturating quantitative reasoning competitions like AIME and HMMT in one year

            https://huggingface.co/deepseek-ai/DeepSeek-Math-V2
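            The reward described there, scoring only the correct final answer, boils down to a binary check along these lines (a hypothetical sketch, not DeepSeek’s actual code):

```python
def final_answer_reward(model_answer: str, reference: str) -> float:
    # Reward 1.0 only when the final answer matches the reference,
    # regardless of how the reasoning text arrived at it.
    return 1.0 if model_answer.strip() == reference.strip() else 0.0

# A correct answer earns the reward; a wrong one earns nothing:
print(final_answer_reward("96", "96"))   # -> 1.0
print(final_answer_reward("114", "96"))  # -> 0.0
```

            This kind of signal only works where a single verifiable reference answer exists, which is exactly the limitation raised in the reply below about less clear-cut questions.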

            • Leon@pawb.social
              link
              fedilink
              English
              arrow-up
              1
              ·
              8 hours ago

              Sure, when it comes to mathematics you can do that with extreme limitations on success, but what about cases where correctness is less set? Two opposing statements can be correct if a situation changes, for example.

              The problems language models are expected to solve go beyond the scope of what language models are good for. They’ll never be good at solving such problems.

              • ikt@aussie.zoneOP
                link
                fedilink
                arrow-up
                1
                ·
                7 hours ago

                i dunno, you’re in the wrong forum, you want hackernews or reddit, no one here knows much about ai

                although you do seem to be making the same mistake others made before, where you point to research happening currently and then extrapolate that out to the future

                ai has progressed so fast i wouldn’t be making any “they’ll never be good at” type statements

  • serpineslair@lemmy.world
    link
    fedilink
    English
    arrow-up
    21
    ·
    1 day ago

    It’s shit a lot of people don’t think about tbh. Like imagine all the meaningless shit people have done on the internet, and that shit literally travels across the whole world using impressive feats of technology. I’m saving this post for later. 🤣

    • NeatNit@discuss.tchncs.de
      link
      fedilink
      arrow-up
      11
      ·
      1 day ago

      Infrastructure is all about unbelievable feats of engineering that are taken for granted. Sewage systems, running water, electricity, roads, public transport, cars, physical mail, and grocery stores/supermarkets are all unbelievable achievements that we all take for granted to varying degrees, and that’s just off the top of my head. IP networking is just more of that. Absolutely crazy, and by design we don’t think about it.

      But AI (also depicted in this gif) is not in the same category IMO, for a lot of reasons.

      • Bobo The Great@startrek.website
        link
        fedilink
        arrow-up
        1
        ·
        8 hours ago

        You might not like AI for what it stands for, or for the negative impact it has on the world, but you can’t deny that LLMs like we have today are a marvel of technology, an incredibly complex technology that would have felt like science fiction just a decade ago.

        • NeatNit@discuss.tchncs.de
          link
          fedilink
          arrow-up
          1
          ·
          8 hours ago

          Yeah, but it’s not good infrastructure. It’s not sustainable, it’s privately controlled, and it’s destined to be enshittified. Infrastructure needs to be well thought out and publicly regulated, AI is the opposite.

      • Fergie434@lemmy.world
        link
        fedilink
        English
        arrow-up
        6
        ·
        24 hours ago

        Simply doing a traceroute to a website in another country a long time ago fascinated me. Seeing it hit all of the routers in other cities then across the ocean to another continent and back in less than 100ms blew my mind.

        Led me down that path and now I’ve been a network engineer for over 10 years.

    • f314@lemmy.world
      link
      fedilink
      arrow-up
      5
      ·
      1 day ago

      .container {
          display: grid;
          place-items: center;
      }

      The real answer is of course, as in most cases, “it depends.”