• snoons@lemmy.ca · 28 points · 14 hours ago

    …training them to be skeptical and not blindly trust what comes out of the machine…

    This is what I’ve never understood about using AI in its current form. If you can’t know whether it’s right or wrong, and have to double-check it, why use it in the first place? Wouldn’t it be more efficient and easier to just use the couple of petaflops in your own head to solve the problem or write that email?

    I think, then, that it is more of a novelty that has yet to wear off for some people, and is consistently buoyed by the CEOs that push it.

    • FishFace@piefed.social · 4 points · 11 hours ago

      If you can’t know if it’s right or wrong, and have to double check it, why use it in the first place?

      My partner and I alternate doing the cooking. She doesn’t know whether I’m going to make a mistake and serve her something she doesn’t like (it has happened). Does that mean she’s better off doing all the cooking herself?

      “If it’s not perfect, it’s useless” is a fallacy. So the question is: how good does it have to be to be useful? That depends on the task, and especially on the cost (however you measure it — dollars, hours, whatever) of verifying whether the result is good, compared to the cost of a person doing the task.

      • Andonome@lemmy.world · 9 points · 11 hours ago

        When you cook well, you can eat the food.

        When the bot says something, you always need to look up whether it’s correct. That’s the ‘cook a new meal from scratch’ bit, not the ‘taste it’ bit.

        You need to look things up every time, or do the taste test by asking if the bot’s answer ‘smells true’ (which is tempting, but a bad idea).

        • FishFace@piefed.social · 3 points · 9 hours ago

          If you are using the bot just to perform things that you could easily look up, then yes, that is pointless.

        • FishFace@piefed.social · 2 points · 9 hours ago

          It’s comparable because it’s a negative outcome that may cost something (cooking a new meal, ordering a takeaway) to fix, but can be checked quite easily. Information that is factually incorrect has a negative outcome as well, and can also be checked quite easily - but the negative outcome, and the ease of checking, varies vastly across the space of all possible information.

          I am encouraging you to think about situations where the negative outcome is not that bad and checking is quite easy. Does that make using AI more practical?

    • OfCourseNot@fedia.io · 1 point · 11 hours ago

      The petaflops sometimes… flop. The only use case I personally have for LLMs — and they are brilliant at this — is when a word just won’t come to mind: I can give a precise description of it, but my brain refuses to produce the word, in either English or Spanish.

  • BeigeAgenda@lemmy.ca · 6 points · 13 hours ago

    Ubuntu will do well if they start with an AI package that can be optionally installed, and let users slowly test and adopt it. Anything other than that, and people will just switch distros.