Incompetent half-assing is rarely such a morally righteous act either, since your one act of barely-competent-enough incompetence is transmuted into endless incompetence by becoming training data and QC feedback.

        • supersquirrel@sopuli.xyzOP · 8 points · edited · 3 months ago

          How about you remember we all pretty much know that?

          This is just the same old strategy of continuously refocusing a conversation about the huge amounts of waste the modern global economy creates onto the moral failure of individuals to recycle.

          Like *waves arms at the unfurling chaos dragon in the sky* what does that matter at this late stage of entanglement with weaponized and proud ignorance? Go give someone you love a genuine compliment; that is actually resisting in the way you think you are describing, but this is not.

        • WhyJiffie@sh.itjust.works · 5 points · 3 months ago

          if you haven’t noticed yet, we want these “smart” vehicles to be abolished, along with the non-stop automatic surveillance and data mining they do

        • Solumbran@lemmy.world · 5 points · 3 months ago

          It’s like saying that if employees are paid and treated like shit, they should still work hard because the opposite would be immoral.

          That’s bullshit. If AIs are crap because they’re trained on unwilling people, it’s the company’s fault, not that of the people coerced into working for free.

            • supersquirrel@sopuli.xyzOP · 1 point · edited · 3 months ago

              Question is, are there any honest companies anymore?

              Wrong question.

              The right question is whether there are any industries in countries like the US that, after the long acidic erosion of state functions by decades of neoliberalism and deregulation (especially financial deregulation), are still regulated effectively enough to threaten unscrupulous companies into behaving as if they were honest companies, when they would really rather just save a buck and kill and maim a handful of innocent people.

              Example A: large corporations were bullied into pretending they were pro-trans and pro-gender fluidity right up until the precise moment after they stopped being bullied into pretending they were.

              This line of reasoning is the only way you are going to understand why planes are all of a sudden accidentally crashing into helicopters in ways that, a decade or two ago, most engineers and pilots in the industry would never have let happen, even if preventing it meant screaming down the CEO who was casually telling them to cut a corner they knew would lead to innocent children and people dying…

              That sense of trust people had in pilots, aerospace engineers, and regulators is why we were raised to feel a sense of indirect pride in pilots: when they walk by in a neat, professional uniform, they remind us that we exist in a society with magic adults who whisk people into the sky and back so they can see loved ones, and who somehow do it with incredible safety, kindness, and consistency, as if it were just a simple matter of filing routine paperwork (no shade at secretaries, that shit ain’t easy either).

    • LambdaRX@sh.itjust.works · 17 points · 3 months ago

      I didn’t ask to solve captchas; if someone wants accurate data, they’d better hire someone to train their AI.

        • supersquirrel@sopuli.xyzOP · 3 points · edited · 3 months ago

          Also expect your AI to be engaged in some heady and deep forms of self-hatred that are going to take decades to unravel.

          Sad angry people in, sad angry robots out.

          • TranquilTurbulence@lemmy.zip · 2 points · 3 months ago

            If you use internet discussions as training data, you can expect to find all sorts of crazy biases. Completely unfiltered data should produce a chatbot that exaggerates many human traits while completely burying others.

            For example, on Reddit and Lemmy, you’ll find lots of clever puns. On Mastodon, you’ll find all sorts of LGBT advocates or otherwise queer people. On Xitter, you’ll find all the racists and white supremacists. There are also old school forums that amplify things even further.

    • IsoSpandy@lemm.ee · 14 points · 3 months ago

      I didn’t agree to train their AI, you know. If the data is unreliable, then get your own data. Why would I tell it how to recognize a stop sign?

      If they open source the model and the weights, and let me turn on the seat warmers in my own car without a subscription, then maybe… just maybe, some day I would help them. Till then, gtfo.

    • latenightnoir@lemmy.world · 11 points · edited · 3 months ago

      I genuinely think that’s a noble sentiment and I share that concern. However, this would entail making a deal with the Devil at this point, and pretty much literally.

      Most if not all relevant models nowadays are owned by outwardly unscrupulous people, which means any correct interaction we have with their models only serves to build up the Devil’s throne.

      It is a downright tragedy that people will suffer as a result of said models, but that fault is not on us. Besides essentially stealing labour and data to train their models, they’re also using them to dish out propaganda, to replace workers and throw them in a ditch, and to inflate yet another financial bubble that’ll flush the toilet when it inevitably pops - again.

      We need to let them fail, otherwise we are just encouraging others to use us in the same exact ways.