I want to let people know why I’m strictly against using AI in anything I do without sounding like an ‘AI vegan’, especially in front of those who are genuinely ready to listen and follow suit.

Any sources I find to cite for my viewpoint are either so mild they could pass for AI-generated themselves or filled with the author’s extremist views. I want to explain the situation in an objective manner that is simple to understand and also alarming enough for them to take action.

  • givesomefucks@lemmy.world

    If it’s real life, just talk to them.

    If it’s online, especially here on lemmy, there’s a lot of AI brain rotted people who are just going to copy/paste your comments into a chatbot and you’re wasting time.

    They also tend to follow you around.

    They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.

    • FaceDeer@fedia.io

      They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.

      More likely they feel insulted by people saying how “brain-rotted” they are.

      • Carnelian@lemmy.world

        What would the inoffensive way of phrasing it be?

        Genuinely every single pro-AI person I’ve spoken with both irl and online has been clearly struggling cognitively. It’s like 10x worse than the effects of basic social media addiction. People also appear to actively change for the worse if they get conned into adopting it. Brain rot is apparently a symptom of AI use as literally as tooth rot is a symptom of smoking.

        Speaking of smoking and vaping: on top of being objectively bad for you, it’s lame and gross. Now that that narrative is firmly established, we have actually started seeing youth nicotine use decline rapidly again, just like it was declining before vaping became a thing.

        • FaceDeer@fedia.io

          What would the inoffensive way of phrasing it be?

          …and then you proceed to spend the next two paragraphs continuing to rant about how mentally deficient you think AI users are.

          Not that, for starters.

          • Carnelian@lemmy.world

            The lung capacity of smokers is deficient, yes? Is the mere fact offensive? Should we just not talk about how someone struggling to breathe as they walk up stairs is the direct result of their smoking?

              • Carnelian@lemmy.world

                I don’t think it is, nor do I think name dropping random fallacies without engaging with the topic makes for particularly good conversation. If you have issues with OP’s phrasing it would benefit all of us moving forward if we found a better way to talk about it, yes?

                • FaceDeer@fedia.io

                  It’s not a random fallacy, it’s the one you’re engaging in. Look it up. Your analogy presupposes an answer to the question that is actually at hand. It’s the classic “have you stopped beating your wife” situation.

    • enchantedgoldapple@sopuli.xyzOP

      They’ve lost so much of their brains to AI that even valid criticism of AI feels like a personal insult to them.

      That’s the issue. I do wish to warn them, or even just inform them, of what using AI recklessly could lead to.

      • givesomefucks@lemmy.world

        Why care?

        You’re wanting to go out and argue with people and try to use logic when that part of their brain has literally atrophied.

        It’s not going to accomplish anything, and likely just drive them deeper into AI.

        Plenty of people who need help actually want it; put your energy towards that if you want to help people.

        • enchantedgoldapple@sopuli.xyzOP

          The post is about situations where I mention to people I know that I don’t use AI, and they ask why not. Instead of shutting them down with “Just because” or getting into jargon that is completely unfamiliar to them, I wish to properly inform them why I have made this decision and why they should too.

          I am also able to identify people to whom there’s no point discussing this. I’m not asking to convince them too.

          • givesomefucks@lemmy.world

            I wish to properly inform them why I have made this decision and why they should too.

            You’re asking how to verbalize why you don’t like AI, but you won’t say why you don’t like AI…

            Let’s see if this helps, imagine someone asks you:

            I don’t like pizza, how do I tell people the reasons why I don’t like pizza?

            How the absolute fuck would you know how to explain it when you don’t know why they don’t like pizza?

  • Blemgo@lemmy.world

    Maybe trying to be objective is the wrong choice here? After all, it might sound preachy to those who are ignorant of the dangers of AI. Instead, it could be better to stay subjective in the hope of triggering self-reflection.

    Here are some arguments I would use for my own personal ‘defense’:

    • I like to do the work myself because the challenge of doing it on my own is part of the fun, especially when I finally get that ‘Eureka!’ moment after an especially tough problem. When I use AI, it just feels halfhearted, as if I’ve handed the work off to someone else, which doesn’t sit right with me.
    • when I work without AI, I tend to stumble over things that aren’t really relevant to what I’m doing, but are still fun to learn about and might be helpful some other time. With AI, I’m way too focused on the end result to even notice that stuff, which makes the work feel even more annoying.
    • when I decide to give up or realize I can’t be arsed with it, I usually seek out communities or professionals, because that way it’s either done professionally or I get a better sense of community, and overall I feel like I’m supporting someone. With AI, I don’t get that feeling; instead I only feel either inferior for not coming up with a result as fast as the AI does, or frustrated because it either spews out bullshit or doesn’t get the point I’m aiming for.
    • enchantedgoldapple@sopuli.xyzOP

      This is a brilliant idea! I was wondering whether talking subjectively would be detrimental to my point, but having it explained this way is so much better. I think the key point here is to not berate the other person for using AI while giving this explanation.

      • Blemgo@lemmy.world

        It goes a bit further than just not berating. People often get defensive when you criticise something they like, which makes it harder to argue because the other side suddenly treats the discussion as a fight. However, by saying “it’s not for me” in a rather roundabout way, you shift the focus away from “is it good/bad” and toward whether the other person can empathise with your reasoning, reflect your view onto themselves, and maybe realize they hadn’t noticed something about their own usage of and feelings about AI that you already had.

    • zout@fedia.io

      it might sound preachy to those who are ignorant

      Am I reading it wrong, or are you saying that people who have a different point of view are ignorant?

      • Blemgo@lemmy.world

        Ah, sorry, I didn’t mean ignorant in a general way, but to the critiques on AI/dangers of AI OP referred to in their post. I’ll edit my comment.

  • LuigiMaoFrance@lemmy.ml

    If you want to explain your reasons ‘in good faith’ you should be honest, and not adopt other people’s reasons to argue the position you’ve already assumed.

    • MajorasTerribleFate@lemmy.zip

      It’s possible their intent is to solicit more concise, well-packaged versions of their existing position(s) that others have spent time honing.

      • dream_weasel@sh.itjust.works

        Ah yes. AI is just dressed up exploitation and thievery of other people’s ideas; a mashed up and uncreative slop. By the way, can I just aggregate prepackaged ideas about it from strangers to make my own argument? I don’t want to spend time crafting or refining it myself.

        Pretty wild position if you ask me.

        • MajorasTerribleFate@lemmy.zip

          The main difference is that OP would be asking for this, whereas AI just took it without permission. Humanity has always had wiser folks who can package ideas, and some folks who agree with the message but don’t have the same skill to craft their own version. Division of labor has value :)

          • dream_weasel@sh.itjust.works

            Now hang on a minute lol. AI is just stolen garbage, but obviously we expect the “wiser folks” here in the agora of Lemmy to give reasonable / acceptable answers? This is like the McDonald’s of philosophy here.

            So I see here two conjectures:

            1. AI is bad because it is creating a mashup of information (which may or may not be accurate) from sources it took without permission.
            2. People are within their rights to outsource the articulation of their opinions to experts (see also “wiser folks”). After all, this “division of labor” has always existed.

            1q.) So let’s say I take my 2 GPU workhorse PC and train a basic language model (obviously not lines of reasoning, or guardrails, or other languages or anything like that) on a library of articles and professional documents I own or control. Then, by way of something like retrieval-augmented generation (or similar, idc), it gives me a well-articulated argument for why AI is bad. Is that reasonable? I would think this is a BETTER perspective than 2q below.

            2q.) In what way is mining the totally anonymous, unverifiable posts of literally any person with a keyboard on lemmy MORE valuable than a reasonable-sounding argument from any generative AI, or just pressing the middle button on your phone over and over? This sounds totally stupid. “Division of labor” has probably made all of us dumber. I (coincidentally) build language models as part of my job; somewhere in this thread is an AI expert who has read 1 newspaper article and is “training” on the information from other lemmy comments.

            At the same time we say “Holy shit AI bad, AI hallucinate, AI lies!” we are going to say it’s totally cool and reasonable to shout into the internet box where rando people can say anything they want, and that’s better?

            I mean I do like the smug argument and the smiley face, but the premise that “Gen AI sucks, hone your argument against AI using ask-fucking-lemmy” borders on content for c/selfawarewolves. It’s so ridiculous I practically expect you’re just trolling the thread.

            • MajorasTerribleFate@lemmy.zip
              1. AI is bad because, in its current state, it takes up way too many resources and contributes heavily to climate change. All that and its current output is often unreliable and/or displaces human labor.

              2. As with the use of AI, someone asking other people for information should verify what they are seeing. Assuming OP already has their beliefs more or less set, they’re potentially just looking for some more well-crafted arrangement of the ideas, and they have a preference to ask humans for that rather than AI.

              Re: division of labor, likely it contributes to people having less broad and more specialized knowledge. The benefit, however, is that we don’t need everyone to learn every single skill needed for self-sufficient living. I’d rather my surgeon be a specialist in surgery, not spend much of their time growing their own food, maintaining their home and clothing, and so on.

    • aesthelete@lemmy.world

      Yeah the wording on this is wrong. The closest adjacent (honest) question would be “how can I appear to be arguing in good faith when I have a predetermined position on this technology?”.

      EDIT:

      I don’t even like GenAI myself and that’s how this comes off.

      If you’re looking for reasons: (1) sustainability / ecology, (2) market concentration, (3) intellectual theft, (4) mediocre output, (5) lack of guardrails, (6) vendor lock-in, (7) appears to drive some people insane, (8) drives down the quality of the Internet overall, (9) de-skills the people that use it, (10) produces probabilistic outputs and yet is used in applications that require deterministic outputs…I could go on for a while.

  • canofcam@lemmy.world

    A discussion in good faith means treating the person you are speaking to with respect. It means not having ulterior motives. If you are having the discussion with the explicit purpose of changing their minds or, in your words, “alarming them to take action” then that is by default a bad faith discussion.

    If you want to discuss with a pro-AI person in good faith, you HAVE to be open to changing your own mind. That is the whole point of a good faith discussion. Instead, you already believe you are correct, and want to enter these discussions with objective ammunition to defeat somebody.

    How do you actually discuss in good faith? You ask for their opinions and are open to them, then you share your own in a respectful manner. You aren’t trying to ‘win’; you are just trying to understand and, in turn, help others to understand your own POV.

    • krooklochurm@lemmy.ca

      Chiming in here:

      Most of the arguments against ai - the most common ones being plagiarism, the ecological impact - are not things people making the arguments give a flying fuck about in any other area.

      Having issues with the material the model is trained on isn’t an issue with ai - it’s an issue with unethical training practices, copyright law, capitalism. These are all valid complaints, by the way, but they have nothing to do with the underlying technology. Merely with the way it’s been developed.

      For the ecological side of things, sure, ai uses a lot of power. Lots of data centers. So does the internet. Do you use that? So does the stock market. Do you use that? So do cars. Do you drive?

      I’ve never heard anyone say “we need less data centers” until ai came along. What, all the other data centers are totally fine but the ones being used for ai are evil? If you have an issue with the drastically increased power consumption for ai you should be able to argue a stance that is inclusive of all data centers - assuming it’s something you give a fuck about. Which you don’t.

      If a model, once trained, is being used entirely locally on someone’s personal pc - do you have an issue with the ecological footprint of that? The power has been used. The model is trained.

      It’s absolutely valid to have an issue with the increased power consumption used to train ai models and everything else but these are all issues with HOW and not the ontological arguments against the tech that people think they are.

      It doesn’t make any of these criticisms invalid, but if you refuse to understand the nuance at work then you aren’t arguing in good faith.

      If you enslave children to build a house then the issue isn’t that you’re building a house, and it doesn’t mean houses are evil; the issue is that YOU’RE ENSLAVING CHILDREN.

      Like any complicated topic there’s nuance to it and anyone that refuses to engage with that and instead relies on dogmatic thinking isn’t being intellectually honest.

      • Frezik@lemmy.blahaj.zone

        I’ve never heard anyone say “we need less data centers” until ai came along. What, all the other data centers are totally fine but the ones being used for ai are evil? If you have an issue with the drastically increased power consumption for ai you should be able to argue a stance that is inclusive of all data centers - assuming it’s something you give a fuck about. Which you don’t.

        AI data centers use substantially more power than regular ones. Nobody was talking about spinning up nuclear reactors or buying out the next several years of turbine manufacturing for non-AI data centers. Hell, Microsoft gave money to a fusion startup to build a reactor; they’ve already broken ground, but it’s far from proven that they can actually make net power with fusion. They actually think they can supply power by 2028. This is delusion driven by an impossible goal of reaching AGI with current models.

        Your whole post is missing out on the difference in scale involved. GPU power consumption isn’t comparable to standard web servers at all.

      • aesthelete@lemmy.world

        For the ecological side of things, sure, ai uses a lot of power. Lots of data centers. So does the internet. Do you use that? So does the stock market. Do you use that? So do cars. Do you drive?

        There are many, many differences between AI data centers and ones that don’t have to run $500k GPU clusters. They require a lot less power, a lot less space, and a lot less cooling.

        Also you’re implying here that your debate opponents are being intellectually dishonest while using the same weasely arguments that people that argue in bad faith constantly employ.

        • krooklochurm@lemmy.ca

          The fact that a GPU data center uses more power than one without GPUs does not matter at all.

          You’re completely missing the point.

          The sum total of power usage for all non ai data centers is an ecological issue whether ai data centers use more, the same, or less power.

          All data centers have an ecological footprint, all use shitloads of power, and it doesn’t matter if one kind is worse than any other kind.

          This is exactly what I was trying to point out in my comment.

          If I take a shit in a canoe that’s a problem. Not an existential one but a problem. If I dump another ten pounds of shit in the canoe it doesn’t mean the first pound of shit goes away.

          If I dump two pounds of shit in the canoe then the first pound of shit is still in the canoe. The first pound of shit doesn’t stop being an issue because now there are two more.

          You can have an issue with shit in the canoe on principle, which is fine. Then it’s all problematic.

          But if you’re fine with having one pound of shit in the canoe, and fine with three, but not okay with eleven, then the issue isn’t shit in the canoe, it’s the amount of shit in the canoe. They’re distinct issues.

          But it’s NOT intellectually honest to be okay with having one pound of shit in the canoe and not okay with the other two. You can’t point at the two pounds of shit and say “this is abominable!” while ignoring the other pound of shit. Because it’s all shit.

          • aesthelete@lemmy.world

            But it’s NOT intellectually honest to be okay with having one pound of shit in the canoe and not okay with the other two. You can’t point at the two pounds of shit and say “this is abominable!” while ignoring the other pound of shit. Because it’s all shit.

            Sure, because that’s a terrible analogy.

            Gen AI data centers don’t just require more power and space, they require so much more power and space that they are driving up energy costs in the surrounding area and the data centers are becoming near impossible to build.

            People didn’t randomly become “anti-data center”. Many of them are watching their energy bills go up. I’m watching as they talk about building new coal plants to power “gigawatt” data centers.

            And it’s all so you can have more fucking chat bots.

          • Frezik@lemmy.blahaj.zone

            When a family in the global south uses coal to cook their food, they release CO2. When a billionaire flies around the continent on a private jet, they also release CO2.

            Do you consider the two to be equivalent in need or output?

    • 🔍🦘🛎@lemmy.world

      Once you realize you can change your opinion about something after you learn about it, it’s like a super power. So many people only have the goal of proving themselves right or safeguarding their ego.

      It’s okay to admit a mistake. It’s normal to be wrong about things.

      • canofcam@lemmy.world

        The problem is it’s incredibly rare to find others who are willing to change their minds in return, so every discussion ends with either you changing your mind or the other person getting agitated.

  • s@piefed.world

    “It’s a machine made to bullshit. It sounds confident and it’s right enough of the time that it tricks people into not questioning when it is completely wrong and has just wholly made something up to appease the querent.”

  • iii@mander.xyz

    In a way aren’t you asking “how can I be an AI vegan, without sounding like an AI vegan”?

    It’s OK to be an AI vegan if that’s what you want. :)

    • its_kim_love@lemmy.blahaj.zone

      Stop trying to make ‘AI vegan’ work. It’s never going to stick. AFAIK this term is less than a week old, and smugly expecting everyone to have already assimilated it is bad enough, but it’s a shit descriptor that trades on right-leaning hatred of ‘woke’, and vegans are just a scapegoat to you.

      Explain how AI haters or doubters cross over with Veganism at all as a comparison?

      • Evkob (they/them)@lemmy.ca

        Explain how AI haters or doubters cross over with Veganism at all as a comparison?

        They’re both taking a moral stance regarding their consumption despite large swathes of society considering these choices to be morally neutral or even good. I’ve been vegan for almost a decade and dislike AI, and while I don’t think being anti-AI is quite as ostracizing as being vegan, the comparison definitely seems reasonable to me. The behaviour of rabid meat eaters and fervent AI supporters is also quite similar.

        • its_kim_love@lemmy.blahaj.zone

          But there are other arguments against AI besides consumption of resources. The front-facing LLMs are just the pitch. The police state is becoming more oppressive using AI tracking and identification. The military using AI to remote-control drones and weapon systems is downright dystopian. It feels like they’re trying to flatten the arguments against AI into only an environmental issue, making it easier to dismiss, especially among the population that doesn’t give a shit about the environment.

        • rainbowbunny@slrpnk.net

          The way the term is being used here though is to refer to vegans as preachy and annoying; it’s not a pro-vegan term. It’s just not a nice term to use as it ostracizes and belittles people fighting for rights.

        • its_kim_love@lemmy.blahaj.zone

          That’s not just true of those two things though. I’m looking for a tie that binds them together while excluding other terms. If it’s an analogy what is the analogy?

      • FaceDeer@fedia.io

        This is the first time I’ve encountered the term and I understood it immediately.

      • iii@mander.xyz

        For me this was the first time hearing it. And it made immediate perfect sense what OP meant. A pretty good analogy!

          • Asafum@feddit.nl

            They’re saying you’re taking things too literally and not thinking about the potential meaning of the sentence.

            There is a belief that a lot of Vegans basically preach to others and look down on people who still consume meat. Their use of AI Vegan was meant to utilize that background and apply it to AI, so they don’t want to come off as someone preaching or being a snob about their issues with AI.

      • dohpaz42@lemmy.world

        It’s called a euphemism. We all know that a vegan is someone who does not use animal products (e.g. meat, eggs, dairy, leather, etc). By using AI in front of the term vegan, OP intimates that they do not use AI products.

        I suspect you’re smart enough to know this, but for some reason you’re being willfully obtuse.

        ~Then again, maybe not. 🤷‍♂️~

      • jjjalljs@ttrpg.network

        It seems to mean people who don’t consume AI content or use AI tools.

        My hypothesis is it’s a term coined by pro-AI people to make AI-skeptics sound bad. Vegans are one of the most hated groups of people, so associating people who don’t use AI with them is a huge win for pro-ai forces.

        Side note: do-gooder derogation ( https://en.wikipedia.org/wiki/Do-gooder_derogation ) is one of the saddest moves you can pull. If you find yourself lashing out at someone because they’re doing something good (eg: biking instead of driving, abstaining from meat) please reevaluate. Sit with your feelings if you have to.

        • HikingVet@lemmy.ca

          Oh hey, language is supposed to make ideas easier to transmit. The term is fucking clunky; using AI is not akin to diet.

          Communicate clearer.

          • iii@mander.xyz

            OP came up with the analogy. I understood quite well and caught up with it easily. Well done OP!

      • s@piefed.world

        Baseless slur made up by corporate-pushed mainstream media to normalize giving time and money to the AI companies that paid for their airtime

  • Jhex@lemmy.world

    I’m just honest about it… “I don’t find it useful enough and do find it too harmful for the environment and society to use it”

    • runner_g@lemmy.blahaj.zone

      And you then spend longer verifying the information its given you than you would have spent just looking it up to begin with.

  • FlashMobOfOne@lemmy.world

    Very simple.

    It’s imprecise, and for your work, you’d like to be sure the work product you’re producing is top quality.

    • hansolo@lemmy.today

      You mean commercial LLMs.

      AI as a term includes machine learning systems that go back decades.

    • krooklochurm@lemmy.ca

      If nothing is taken from anyone and no profit is made from a model trained on publicly accessible data - can you elaborate on how that constitutes theft?

      Actually - if 100% copyrighted content is used to train a model, which is released for free and never monetized - is that theft?

      • Frezik@lemmy.blahaj.zone

        People downloading stuff for personal use vs making money off of it are not the same at all. We don’t tend to condone people selling bootleg DVDs, either.

      • Treczoks@lemmy.world

        Publicly accessible does not mean it is free of copyright. Yes, copyright law in its current form sucks and is in dire need of reform, preferably back to something close to the original duration (14+14 years). But as the law currently stands, those LLM parrots are based on illegally acquired data.

      • Katana314@lemmy.world

        Publicly accessible does not mean publicly reusable. You can find a lot of classic songs on YouTube and in libraries. You can’t edit them into your Hollywood movie without paying royalties.

        Showing them to an AI for them to repeat the melody with 90% similarity is not a free cheat to get around that.

        This is in part why the GPL and other licenses exist. Linus didn’t just put up Linux and say “Do whatever!” He explicitly said “You MAY copy and modify this work, but it must keep this license, this ownership, and you may NOT sell the transformed work”. That is a critical part of many free licenses, to ensure people don’t abuse them.

        • krooklochurm@lemmy.ca

          If nothing is taken from anyone and no profit is made from a model trained on publicly accessible data - can you elaborate on how that constitutes theft?

          Actually - if 100% copyrighted content is used to train a model, which is released for free and never monetized - is that theft?

    • krooklochurm@lemmy.ca

      Cool. So you’re in support of developing a model that financially compensates all of the rights holders used for its training data then?

      • Katana314@lemmy.world

        File this one under the girlfriend’s “would you still love me if I was a worm” philosophy. It’s so far outside of reality that it’s not worth considering.

  • I_Has_A_Hat@lemmy.world

    This reminds me of those posts from anti-vaxers who complain about not being able to find good studies or sources that support their opinion.

    • Spacehooks@reddthat.com

      I normally ask them if they have a moment to talk about the rebirth and perseverance of Nurgle. For they already embrace his blessings on the land.

  • NoSpotOfGround@lemmy.world

    What are some good reasons why AI is bad?

    There are legitimate reasons people worry about AI. Here are some of the strongest, clearly framed concerns:

    1. Bias and unfair decisions

    AI systems often learn from biased data and can unintentionally discriminate—against certain races, genders, ages, or socioeconomic groups—in hiring, lending, housing, policing, and more.

    2. Lack of transparency

    Many AI models act as “black boxes,” making decisions that are hard to explain. This creates problems when the stakes are high (medical diagnosis, legal decisions, etc.).

    3. Privacy risks

    AI can analyze huge amounts of personal data, track behavior, or identify people through facial recognition—often without explicit consent.

    4. Job displacement

    Automation threatens certain categories of work, particularly routine or repetitive jobs. Without proper planning, this can increase inequality and unemployment.

    5. Misinformation and deepfakes

    AI makes it easier to create convincing fake audio, video, or text. This can undermine trust in media, fuel propaganda, and destabilize democratic processes.

    6. Weaponization

    AI can be used in autonomous weapons, cyberattacks, targeted surveillance, or manipulation—raising serious security and ethical issues.

    7. Overreliance and loss of human skills

    As AI does more tasks, people may become too dependent, reducing critical thinking, creativity, or expertise in certain fields.

    8. Concentration of power

    Powerful AI tools tend to be controlled by a few big companies or governments, potentially leading to monopolies, inequality, and reduced individual autonomy.

    9. Alignment and control risks

    Advanced AI systems may behave in unexpected or harmful ways if their goals aren’t perfectly aligned with human values—even without malicious intent.

    10. Environmental impact

    Training large AI models consumes significant energy and resources, contributing to carbon emissions.


    If you want, I can also provide reasons why AI is good, help you construct an argument for a debate, or analyze specific risks more deeply.

    Were you looking for this kind of reply? If you can’t express why you have an opinion maybe your opinion is not well founded in the first place. (Not saying it’s wrong, just that it might not be justified/objective.)

    • Armok_the_bunny@lemmy.world

      Please, for the love of god, tell me you didn’t write that post with AI, because it really looks like that was written with AI.

      • NoSpotOfGround@lemmy.world

        Except for the first phrase and the last paragraph, it was AI. Honestly, it feels like OP is taunting us with such a vague question. We don’t even know why they dislike AI.

        I’m not an AI lover. It has its place and it’s a genuine step forward. Less than what most proponents think it’s worth, more than what detractors do.

        I only use it myself for documentation on the framework I program in, and it’s reasonably good for that, letting me extract info more quickly than reading through the docs. Otherwise I haven’t used it much.

        • enchantedgoldapple@sopuli.xyzOP

          My question was genuine. I wasn’t an avid user of generative AI when it was first released, and lately I’ve decided against using it at all. I tried to use it in niche projects and it was completely unreliable. Its tone of speech is bland, and the way it acts like a friend feels disturbing to me. Plus, the environmental destruction it is causing on such a large scale is honestly depressing to me.

          All that being said, it is not easy for me to communicate these points clearly to someone the way I have experienced them. It’s like the case of informing people about privacy; casual users aren’t inherently aware of the consequences of using this tool and consider it a godsend. It will be difficult to convince them that the tool they cherish so much is not that great after all, thus I am asking here what the best approach should be.

          • Blue_Morpho@lemmy.world

            I wasn’t an avid user of generative AI when it was first released, and lately I’ve decided against using it at all. I tried to use it in niche projects and it was completely unreliable. Its tone of speech is bland, and the way it acts like a friend feels disturbing to me. Plus, the environmental destruction it is causing on such a large scale is honestly depressing to me.

            Isn’t that exactly the answer you are looking for?

            • FaceDeer@fedia.io

              The “environmental destruction” angle is likely to cause trouble because it’s objectively debatable, and often presented in overblown or deceptive ways.

        • athatet@lemmy.zip

          “Good catch! I did make that up. I haven’t been able to parse your framework documentation yet”

    • AmidFuror@fedia.io

      You beat me to it. To make it less obvious, I ask the AI to be concise, and I manually replace the emdashes with hyphens.

      • FaceDeer@fedia.io

        I haven’t tested it, but I saw an article a little while back that you can add “don’t use emdashes” to ChatGPT’s custom instructions and it’ll leave them out from the beginning.

        It’s kind of ridiculous that a perfectly ordinary punctuation mark has been given such stigma, but whatever, it’s an easy fix.

  • _cryptagion [he/him]@anarchist.nexus

    just say that you don’t want to use it. why are you trying to figure out good reasons that somebody else came up with to not use something you have to elect to use in the first place? just say “I don’t want to use genAI”. you don’t need to explain yourself any further than that.

    • corvus@lemmy.ml

      That’s perfectly fine if someone just doesn’t want to use it, but he’s “strictly against” it and is searching for reasons. Pretty irrational IMO. It doesn’t surprise me; it’s the general trend regarding almost any subject nowadays, and you can’t blame AI for that.

  • captainlezbian@lemmy.world

    I want my creations to be precisely what I intend to create. Generative AI makes it easier to make something at the expense of building skills and seeing their results.

  • FaceDeer@fedia.io

    and also alarming enough for them to take action.

    Is this really an intent to explain in good faith? Sounds like you’re trying to manipulate their opinion and actions rather than simply explaining yourself.

    If someone was to tell me that they simply don’t want to use generative AI, that they prefer to do writing or drawing by hand and don’t want suggestions about how to use various AI tools for it, then I just shrug and say “okay, suit yourself.”