• TxzK@lemmy.zip
    link
    fedilink
    arrow-up
    90
    ·
    10 days ago

    nah, I want chat gpt to be my wife, since I don’t have a real one

    /s

  • orca@orcas.enjoying.yachts
    link
    fedilink
    arrow-up
    64
    ·
    9 days ago

    I just had Copilot hallucinate 4 separate functions, despite me giving it 2 helper files for context that contain all possible functions available to use.

    AI iS tHe FuTuRE.

      • Dragonstaff@leminal.space
        link
        fedilink
        English
        arrow-up
        17
        ·
        9 days ago

        Not the person you replied to, but my job wants us to start using it.

        The idea that this will replace programmers is dumb, now or ever. But I’m okay with a tool to assist. I see it as just another iteration of the IDE.

        Art and creative writing are different beasts altogether, of course.

        • FrChazzz@lemm.ee
          link
          fedilink
          English
          arrow-up
          11
          ·
          9 days ago

          My wife uses AI tools a lot (while I occasionally talk to Siri). But she uses it for things like: she’s working on a book and so she used it to develop book cover concepts that she then passed along to me to actually design. I feel like this is the sort of thing most of us want AI for—an assistant to help us make things, not something to make the thing for us. I still wrestle with the environmental ethics of this, though.

          • slacktoid@lemmy.ml
            link
            fedilink
            English
            arrow-up
            2
            ·
            8 days ago

            The environmental impacts can be solved easily by pushing for green tech. But that’s more a political problem than a technical problem IMO. Like stop subsidizing oil and gas and start subsidizing nuclear (in the short term) and green energy in the long term.

      • tweeks@feddit.nl
        link
        fedilink
        arrow-up
        8
        ·
        9 days ago

        It’s cutting my programming work in half right now with quality .NET code. As long as I stay in the lead and have good examples + context in my codebase, it saves me a lot of time.

        This was not the case with Copilot, though; Cursor AI combined with Claude 3.7 is quite advanced.

        If people are not seeing any benefit, I think they have the wrong use cases, workflow or tools. Which can be fair limitations depending on your workplace of course.

        You could get in a nasty rabbit hole if you vibe-code too much though. Stay the architect and check generated code / files after creation.

      • orca@orcas.enjoying.yachts
        link
        fedilink
        arrow-up
        7
        ·
        9 days ago

        I use it extremely sparingly. I’m critical of anything it gives me. I find I waste more time fixing its work and removing superfluous code than I gain value from it.

      • dependencyinjection@discuss.tchncs.de
        link
        fedilink
        arrow-up
        4
        ·
        9 days ago

        Our tiny company of software engineers has embraced it in the IDE for what it is: a tool.

        As a tool it has saved us a crazy amount of man-hours, and as I don’t work for ghouls, we recently got pay increases and a reduction in hours.

        There are only 7 of us including the two owner / engineers and it’s been a game changer.

    • ftbd@feddit.org
      link
      fedilink
      arrow-up
      7
      ·
      9 days ago

      The Copilot code completion in VSCode works surprisingly well. Asking Copilot in the web chat about anything usually makes me want to rip my hair out. I have no idea how these two could possibly be based on the same model

      • Prime_Minister_Keyes@lemm.ee
        link
        fedilink
        English
        arrow-up
        4
        ·
        9 days ago

        It quite depends on your use case, doesn’t it? This decades-old phrase about an algorithm in Fractint always stuck with me: “[It] can guess wrong, but it sure guesses quickly!”
        Part of my job is getting an overview - just some generic leads and hints - about topics completely unknown to me, really fast. So I just ask an LLM, verify the links it gives and create a response within like 10-15 minutes. So far, no complaints.

      • orca@orcas.enjoying.yachts
        link
        fedilink
        arrow-up
        2
        ·
        9 days ago

        Yeah I find the code completion is pretty consistent and learns from other work I’m doing. Chat and asking it to do anything though is a hallucinogenic nightmare.

  • SaharaMaleikuhm@feddit.org
    link
    fedilink
    arrow-up
    57
    ·
    9 days ago

    Had it write some simple shader yesterday cause I have no idea how those work. It told me about how to use the mix and step functions to optimize for GPUs, then promptly added some errors I had to find myself. Actually not that bad cause now after fixing it I do understand the code. Very educational.
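
    For anyone curious what that mix/step trick is about: it’s branchless selection, so the GPU doesn’t have to take different code paths per pixel. A rough Python sketch of the same idea (the GLSL built-ins written out by hand; the names and values here are just illustrative):

    ```python
    # mix() and step() as a shader defines them, written out in plain Python.
    def mix(a, b, t):
        # Linear interpolation: t = 0 gives a, t = 1 gives b.
        return a * (1.0 - t) + b * t

    def step(edge, x):
        # 0.0 below the edge, 1.0 at or above it.
        return 0.0 if x < edge else 1.0

    # Branchy version: fine on a CPU, but divergent branches hurt on a GPU.
    def shade_branchy(height):
        if height < 0.5:
            return 0.2  # dark
        return 0.9      # light

    # Branchless version: the same result expressed with step() + mix().
    def shade_branchless(height):
        return mix(0.2, 0.9, step(0.5, height))

    assert shade_branchy(0.3) == shade_branchless(0.3)
    assert shade_branchy(0.7) == shade_branchless(0.7)
    ```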

    • TheOakTree@lemm.ee
      link
      fedilink
      arrow-up
      29
      ·
      9 days ago

      This is my experience using it for electrical engineering and programming. It will give me 80% of the answer, and the remaining 20% is hidden errors. Turns out the best way to learn math from GPT is to ask it a question you know the answer (but not the process) to. Then, reverse engineer the process and determine what mistakes were made and why they impact the result.

      Alternatively, just refer to existing materials in the textbook and online. Then you learn it right the first time.

      • Cataphract@lemmy.ml
        link
        fedilink
        arrow-up
        5
        ·
        8 days ago

        thank you for that last sentence because I thought I was going crazy reading through these responses.

          • Cataphract@lemmy.ml
            link
            fedilink
            arrow-up
            1
            ·
            6 days ago

            ok, I finally figured out my view on this I believe. I was worried I was being a grumpy old man who was just yelling at the AI (still probably am, but at least I can articulate why I feel this is a negative reply to my concerns)

            It’s not reproducible.

            I personally don’t believe asking an AI with a prompt then “troubleshooting” it is the best educational tool for the masses to be promoted to each-other. It works for some individuals, but as you can see the results will always vary with time.

            There are so many promotional and awesome educational tools that emphasize the “doing” part instead of reading. You don’t need to ask an AI prompt then try to fix all the horrible shit when there is always a statistically likely chance you will never be able to solve it and the AI gave you an impossible answer to fix.

            I get some people do it, some people succeed, and some people are maybe so lonely that this interaction is actually preferable since it seems like some weird sort of collaboration. The reality is that the AI was trained unethically and has so many moral and ethical repercussions that just finding a decent educator or forum/discord to actually engage with is whole magnitudes better for society and your own mental processes.

    • Klear@sh.itjust.works
      link
      fedilink
      arrow-up
      16
      ·
      9 days ago

      Shaders are black magic so understandable. However, they’re worth learning precisely because they are black magic. Makes you feel incredibly powerful once you start understanding them.

    • irelephant [he/him]🍭@lemm.ee
      link
      fedilink
      English
      arrow-up
      4
      ·
      9 days ago

      I used it yesterday because I couldn’t get Mastodon’s version of HTTP signing working. It spat out a shell script that worked, which is more than my attempts did.
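
      For context, what Mastodon checks is an HTTP Signature over the request target, host, date, and a digest of the body. A minimal Python sketch of that scheme, assuming an RSA actor key and the cryptography library (the helper and key handling here are illustrative, not whatever the generated shell script did):

      ```python
      # Sketch of the HTTP Signature scheme Mastodon uses for signed POSTs to an inbox.
      # Assumes an RSA actor key; key_id is the actor's public-key URL. pip install cryptography
      import base64
      import hashlib
      from datetime import datetime, timezone

      from cryptography.hazmat.primitives import hashes, serialization
      from cryptography.hazmat.primitives.asymmetric import padding

      def signed_headers(private_key_pem: bytes, key_id: str, host: str, path: str, body: bytes) -> dict:
          date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
          digest = "SHA-256=" + base64.b64encode(hashlib.sha256(body).digest()).decode()

          # The string that actually gets signed: one "name: value" line per covered header.
          signing_string = "\n".join([
              f"(request-target): post {path}",
              f"host: {host}",
              f"date: {date}",
              f"digest: {digest}",
          ])

          key = serialization.load_pem_private_key(private_key_pem, password=None)
          signature = base64.b64encode(
              key.sign(signing_string.encode(), padding.PKCS1v15(), hashes.SHA256())
          ).decode()

          return {
              "Host": host,
              "Date": date,
              "Digest": digest,
              "Signature": (
                  f'keyId="{key_id}",algorithm="rsa-sha256",'
                  f'headers="(request-target) host date digest",signature="{signature}"'
              ),
          }
      ```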

  • SasquatchBanana@lemmy.world
    link
    fedilink
    arrow-up
    52
    ·
    9 days ago

    What is up with the rise of pro AI people on here? I just “talked” to some kind of person in support of it. Are tankies pro AI now?

    • cm0002@lemmy.world
      link
      fedilink
      arrow-up
      57
      ·
      9 days ago

      Most don’t have a problem with AI itself and even find it useful in its proper use cases

      What most of us hate about it is the corporations shoving it every which way where it doesn’t belong and doesn’t work, and down all our throats, for profit so they can make the line go up.

      • ArchRecord@lemm.ee
        link
        fedilink
        English
        arrow-up
        19
        ·
        9 days ago

        Seconded. I genuinely understand most of the hate against AI, but I can’t understand how some people are so completely against any possible implementation.

        Sometimes, an LLM is just good at rewording documentation to provide some extra context and examples. Sometimes it’s good for reformatting notes into bullet points, or asking about that one word you can’t put your finger on but generally remember some details about, but not enough for the thesaurus to find it.

        Limited, sure, but not entirely useless. Of course, when my fucking charity fundraising platform starts adding features where you can speak to it and tell it “donate $x to x charity” instead of just clicking the buttons yourself, and that’s where the development budget is going… yeah, I’m not exactly happy about that.

    • Comtief@lemm.ee
      link
      fedilink
      arrow-up
      43
      ·
      9 days ago

      It’s pretty common to be pro-AI or at least neutral about it. Lemmy seems to have an echo chamber of hating it as far as I can tell, so maybe it’s just new people coming in?

      • Klear@sh.itjust.works
        link
        fedilink
        arrow-up
        5
        ·
        9 days ago

        Might be people who don’t give a shit one way or another are getting tired of the front page being filled with anti-AI memes every day.

    • decipher_jeanne@jlai.lu
      link
      fedilink
      English
      arrow-up
      23
      ·
      9 days ago

      It’s useful. I ain’t letting it write software. But I can let it write my stupid report and paperwork while feeding it the important bits. Because I really don’t want to bother.

      • SasquatchBanana@lemmy.world
        link
        fedilink
        arrow-up
        14
        ·
        9 days ago

        If that report and paperwork is inane, rat-race stuff, I won’t be as hard on you. But if that is part of school work, you’re mentally cooked then.

        • Captain Aggravated@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          7
          ·
          9 days ago

          If the ubiquity of LLMs kills the MLA essay, it’ll be worth the price.

          I haven’t encountered an English teacher that knows how to teach someone how to make an effective argument or state a compelling case; what they know how to do is strictly adhere to the MLA handbook and spot minor grammatical pet peeves. From high school to university I’ve never had an English teacher call me up to discuss my paper to talk about how I could have more effectively made a point, but I’ve gotten commas written over with semicolons.

          • SasquatchBanana@lemmy.world
            link
            fedilink
            arrow-up
            5
            ·
            9 days ago

            This comment completely encapsulates what is wrong. MLA essays? The format is going to die? You have issues with shitty teachers, which are problems due to the systems in place, and you’re alright with AI taking away an important human experience? Like come on. Can we stop using AI and critically think for a bit?

            • Captain Aggravated@sh.itjust.works
              link
              fedilink
              English
              arrow-up
              4
              ·
              edit-2
              9 days ago

              So, first, Yeah I’d be in favor of killing the format itself; the MLA format seems to have two functions: 1. to force tens of thousands of young adults to buy MLA handbooks every semester from college book stores, and 2. to serve as a warning to any reader that the article they’ve found was written by an ENG112 student who didn’t give the first squirt of a monday morning’s piss about it because it was assigned to him more as a barrier for him to dodge than an exercise to strengthen him. Actual scholarship is done in the APA format and we’d be better off if we just taught that.

              Second, I reject the notion that writing tedious research papers qualifies as “an important human experience.” Again, a lot of folks are forced to dabble in it similarly to how they’re forced to dabble in mathematical proofs: once or twice in high school and once or twice in college, they’re required to rote memorize something for a couple weeks. I’m rather convinced that a lot of the time taken in school from about 7th grade up is designed to appear academic more than actually be academic.

              Third, I’m in the camp that says scrap most of the idea we have of formal academic writing, for multiple reasons. Chiefly, the more of those worthless English teachers we can put back into food service where they belong, the better. Stepping a little out of my comedy internet persona, I do believe the idea of “impersonal, dry, boring, jargon-laden, complicated” research papers has gone beyond any practical function it may have had. There is something to be said for using standardized language, minimizing slang and such. To me, that would be a reason to write in something like VoA simplified English rather than Sesquipedalian Loquaciousness. I’m also not alone in the idea that scientific concepts and research are getting to the point that text on a page isn’t the right tool for conveying it; Jupyter notebooks and other such tools are often better than a stodgy essay.

              Fourth, undoing the rigorous formats of “scholarly articles” may deal a blow to junk science. I’ve seen English teachers point to the essay format, presence of citations, presence in journals etc. as how you tell a written work has any merit. In practice this has meant that anyone wanting to publish junk science has an excellent set of instructions on how to make it look genuine. Hell, all you’ve got to do is cite a source that doesn’t exist and you’ve created truth from whole cloth, like that “you swallow 8 spiders in your sleep” thing.

              Finally, whether the problem lies in the bureaucracy that creates curricula or individual teachers, I’m in favor of forcing their hands by eliminating the long-form essay as a thing they can reasonably expect students to do on a “here’s this semester’s busywork, undergrad freshman” basis.

          • Hoimo@ani.social
            link
            fedilink
            arrow-up
            2
            ·
            9 days ago

            I had the same experience, but I recently helped my sister with a homework essay and she had a full page with the exact requirements and how they were graded.
            90% of the points were for content, the types of arguments, proper structure and such. Only 10% were for spelling and punctuation.
            Meaning she could hand in a complete mess, but as long as her argument was solid and she divided the introduction, arguments and conclusion into paragraphs, she’d still get a 9/10. No grumpy teachers docking half her grade for a few commas. She gets similar detailed instructions for every subject where I used to struggle with vague assignments like “give a good presentation”. It was so bad sometimes, the teacher let the class grade each other.

            (Note we aren’t American, not even English.)

        • decipher_jeanne@jlai.lu
          link
          fedilink
          English
          arrow-up
          4
          ·
          edit-2
          9 days ago

          Oh no, I used the reformatting and rewording machine to do something that I’ve been doing for years and still suck at doing. I can’t write a legible, well-formatted sentence to save my life. I just feed it whatever it needs to say, it works its magic, and I get something that looks understandable to my coworkers.

          Okay, I am exaggerating, but I really struggle to be concise, and this helps me.

      • Korhaka@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        7
        ·
        9 days ago

        Pretty much how I use it. Unimportant waste of time tasks like forms from HR and mandatory “anonymous” surveys. Refuse to do it until told directly and then get AI to write the most inoffensive and meaningless corporate bullshit.

        Of course, not having to do the task at all would be a more efficient use of my time, but we get ignored when we say these forms are pointless. Not heard anyone say anything positive about them in over a year.

      • nekbardrun@lemmy.world
        link
        fedilink
        arrow-up
        7
        ·
        9 days ago

        Btw, I’d suggest installing DeepSeek (or any other model) locally so that you don’t give your data away for free to others (also for security reasons).

        Take advantage of the fact that it’s free/open software and fairly easy to install.
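
        A minimal sketch of what that looks like with the Ollama Python client, assuming you’ve already pulled a DeepSeek model locally (the model tag and prompt are just examples):

        ```python
        # pip install ollama -- talks to a local Ollama server; nothing leaves your machine.
        # Assumes you have already pulled a model, e.g. `ollama pull deepseek-r1`.
        import ollama

        response = ollama.chat(
            model="deepseek-r1",  # example tag; use whichever model you pulled
            messages=[{"role": "user", "content": "Reword this note as three bullet points: ..."}],
        )
        print(response["message"]["content"])
        ```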

        • BobTheDestroyer@lemm.ee
          link
          fedilink
          English
          arrow-up
          11
          ·
          edit-2
          9 days ago

          I just got a high priority request to write a couple of VBA macros to fetch data from a database then use it to make a bunch of API queries. I know VBA about as well as I know Chinese or Icelandic. I figured out the query and told Chat GPT to use it in a macro. It wrote all the VBA code. I went through a few rounds of “fix this bug, add this feature” and now my client is happy and I didn’t have to think much about VBA. I knew what and how to ask it for what I wanted and it saved me days of searching google and reading about VBA. That’s high value to me because I don’t care about VBA and don’t really want to know how to use it.
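
          For anyone wondering what that workflow looks like, here is the same fetch-rows-then-call-an-API shape sketched in Python; the table, column, and endpoint names are invented, and this is not the VBA the commenter ended up with:

          ```python
          # Hypothetical sketch of the described workflow: pull rows from a database,
          # then make one API call per row. Table, column, and endpoint names are invented.
          import json
          import sqlite3
          import urllib.request

          conn = sqlite3.connect("example.db")
          rows = conn.execute("SELECT id, lookup_key FROM items").fetchall()

          for item_id, lookup_key in rows:
              url = f"https://api.example.com/lookup?key={lookup_key}"
              with urllib.request.urlopen(url) as resp:
                  data = json.loads(resp.read())
              # Store whatever the API returned next to the original row.
              conn.execute("UPDATE items SET api_result = ? WHERE id = ?",
                           (json.dumps(data), item_id))

          conn.commit()
          conn.close()
          ```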

        • WraithGear@lemmy.world
          link
          fedilink
          English
          arrow-up
          8
          ·
          9 days ago

          I mean, it’s great at finding me sources of information.

          Could Google do it? Not as well as it used to, or as well as AI, but yes.

          I also use it to format my brainstorms.

          Or to find where something is located, or the name of the function I need.

          Or if I want info on items I’m looking to buy, like what the benefit of waxing my bike chain is over oiling it. That sort of thing.

            • WraithGear@lemmy.world
              link
              fedilink
              English
              arrow-up
              11
              ·
              edit-2
              9 days ago

              You are needlessly combative.

              If I have reason to doubt it (and that depends on how important I feel the data I’m working with is), it gives me the source of its information, so I can verify it. Google is even worse at providing misinformation because it’s just a hose of information with a bunch of ads hidden amongst it, and people bidding for elevated results, and spam.

              No matter what tool you use to harness the internet, it’s up to the user to parse that info.

    • ArtificialHoldings@lemmy.world
      link
      fedilink
      arrow-up
      10
      ·
      9 days ago

      In addition to niche political audiences, Lemmy is full of tech professionals who have probably integrated AI into their daily workflow in some meaningful ways.

    • 𝓔𝓶𝓶𝓲𝓮@lemm.ee
      link
      fedilink
      arrow-up
      4
      ·
      edit-2
      9 days ago

      People hate new things, then they get used to them when they become actually useful and forget what the fuss was about in the first place. Repeat ad nauseam.

  • Aggravationstation@feddit.uk
    link
    fedilink
    arrow-up
    49
    ·
    9 days ago

    I don’t need Chat GPT to fuck my wife, but if I had one, and she and Chat GPT were into it, then I would like to watch Chat GPT fuck my wife.

  • sheetzoos@lemmy.world
    link
    fedilink
    arrow-up
    32
    ·
    edit-2
    9 days ago

    People are constantly getting upset about new technologies. It’s a good thing they’re too inept to stop these technologies.

    • Wren@lemmy.world
      link
      fedilink
      arrow-up
      32
      ·
      9 days ago

      People are also always using one example to illustrate another, also known as a false injunction.

      There is no rule that states all technology must be considered safe.

      • sheetzoos@lemmy.world
        link
        fedilink
        arrow-up
        15
        ·
        9 days ago

        Every technology is a tool - both safe and unsafe depending on the user.

        Nuclear technology can be used to kill every human on earth. It can also be used to provide power and warmth for every human.

        AI is no different. It can be used for good or evil. It all depends on the people. Vilifying the tool itself is a fool’s argument that has been used since the days of the printing press.

        • wolframhydroxide@sh.itjust.works
          link
          fedilink
          arrow-up
          11
          ·
          edit-2
          9 days ago

          While this may be true for technologies, tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb. In the rifle, the technology of the mechanisms in the gun is the same precision-milled clockwork engineering that is used for worldwide production automation. The technology of the harnessing of a nuclear chain reaction is the same, whether enriching uranium for a bomb or a power plant.

          HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose. In these cases, that SOLE purpose is to, in an incredibly short period of time, with little effort or skill, enable the user to end the lives of as many people as possible. You can never use a bomb as a power plant, nor a rifle to alleviate supply shortages (except, perhaps, by a very direct reduction in demand). Here, our problem has never been with the technology of Artificial Neural Nets, which have been around for decades. It isn’t even with “AI” (note that no extant “AI” is actually “intelligent”)! No, our problem is with the tools. These tools are made with purpose and intent. Intent to defraud, intent to steal credit for the works of others, and the purpose of allowing corporations to save money on coding, staffing, and accountability for their actions, the purpose of having a black box a CEO can point to, shrug their shoulders, and say “what am I supposed to do? The AI agent told me to fire all of these people! Is it my fault that they were all <insert targetable group here>?!”

          These tools cannot be used to know things. They are probabilistic models. These tools cannot be used to think for you. They are Chinese Rooms. For you to imply that the designers of these models are blameless — when their AI agents misidentify black men as criminals in facial recognition software; when their training data breaks every copyright law on the fucking planet, only to allow corporations to deepfake away any actual human talent in existence; when the language models spew vitriol and raging misinformation with the slightest accidental prompting, and can be hard-limited to only allow propagandized slop to be produced, or tailored to the whims of whatever despot directs the trolls today; when everyone now has to question whether they are even talking to a real person, or just a dim reflection, echoing and aping humanity like some unseen monster in the woods — is irreconcilable with even an iota of critical thought. Consider more carefully when next you speak, for your corporate-apologist principles will only help you long enough for someone to train your beloved “tool” on you. May you be replaced quickly.

          • sheetzoos@lemmy.world
            link
            fedilink
            arrow-up
            5
            ·
            9 days ago

            You’ve made many incorrect assumptions and set up several strawman fallacies. Rather than try to converse with someone who is only looking to feed their confirmation bias, I’ll suggest you continue your learnings by looking up the Dunning-Kruger effect.

            • wolframhydroxide@sh.itjust.works
              link
              fedilink
              arrow-up
              4
              ·
              edit-2
              9 days ago

              EDIT: now I understand. After going through your comments, I can see that you just claim confirmation bias rather than actually having to support your own arguments. Ironic that you seem to show all of this erudition in your comments, but as soon as anyone questions your beliefs, you just resort to logical buzzwords. The literal definition of the bias you claim to find. Tragic. Blocked.

              • 𝓔𝓶𝓶𝓲𝓮@lemm.ee
                link
                fedilink
                arrow-up
                5
                ·
                edit-2
                9 days ago

                Blocking an individual on Lemmy is actually quite pointless, as they can still reply to your comments and posts; you just won’t know about it, while there can be whole pages of slander about you right under your nose.

                I’d say it’s by design to spread tankie propaganda unabated

                • blind3rdeye@lemm.ee
                  link
                  fedilink
                  arrow-up
                  3
                  ·
                  9 days ago

                  Blocking means that you don’t have to devote your time and thoughts to that person. That’s pretty valuable. And even if they decide they are going to attack you, not-responding is often a good strategy vs that kind of crap anyway - to avoid getting pulled into an endless bad-faith argument. (I’d still suggest not announcing that you’ve blocked them though. Just block and forget about it.)

                • wolframhydroxide@sh.itjust.works
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  edit-2
                  8 days ago

                  You know what? They can go ahead and slander me. Fine. Good for them. They’ve shown they aren’t interested in actual argument. I agree with your point about the whole slander thing, and maybe there is some sad little invective, “full of sound and fury, signifying nothing”, further belittling my intelligence to try to console themself. If other people read it and think “yeah that dude’s right”, then that’s their prerogative. I’ve made my case, and it seems the best they can come up with is projection and baseless accusation by buzzword. I need no further proof of their disingenuity.

            • erin (she/her)@lemmy.blahaj.zone
              link
              fedilink
              arrow-up
              1
              ·
              8 days ago

              Can you point out and explain each strawman in detail? It sounds more like someone made good analogies that counter your point and you buzzword vomited in response.

              • sheetzoos@lemmy.world
                link
                fedilink
                arrow-up
                5
                ·
                edit-2
                8 days ago

                Dissecting his wall of text would take longer than I’d like, but I would be happy to provide a few examples:

                1. I have “…corporate-apologist principles”.

                — Though wolfram claims to have read my post history, he seems to have completely missed my many posts hating on TSLA, robber barons, Reddit execs, etc. I completely agree with him that AI will be used for evil by corporate assholes, but I also believe it will be used for good (just like any other technology).

                2. “…tools are distinctly NOT inherently neutral. Consider the automatic rifle or the nuclear bomb” “HOWEVER, BOTH the automatic rifle and the nuclear bomb are tools, and tools have a specific purpose”

                — Tools are neutral. They have more than one purpose. A nuclear bomb could be used to warm the atmosphere of another planet to make it habitable. Not to mention any weapon can be used to defend humanity, or to attack it. Tools might be designed with a specific purpose in mind, but they can always be used for multiple purposes.

                There are a ton of invalid assumptions about machine learning as well, but I’m not interested in wasting time on someone who believes they know everything.

                • erin (she/her)@lemmy.blahaj.zone
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  8 days ago

                  I understand that you disagree with their points, but I’m more interested in where the strawman arguments are. I don’t see any, and I’d like to understand if I’m missing a clear fallacy due to my own biases or not.

        • FearMeAndDecay@literature.cafe
          link
          fedilink
          English
          arrow-up
          7
          ·
          9 days ago

          My big problems with AI are the climate cost and the unethical way that a lot of these models have been trained. If they can fix those, then yeah I don’t have an issue with people using it when it’s appropriate but currently lots of people are using it out of sheer laziness. If corpos are just using it to badly replace workers and kids are using it instead of learning how to write a fucking paragraph properly, then yeah, I’ll hate on AI

        • blind3rdeye@lemm.ee
          link
          fedilink
          arrow-up
          7
          ·
          9 days ago

          Every tech can be safe and unsafe? I think you’ve oversimplified to the point of meaninglessness. Obviously some technologies are safer than others, and some are more useful than others, and some have overwhelming negative effects. Different tech can and should be discussed and considered on a case by case basis - not just some “every tech is good and bad” nonsense.

        • FrChazzz@lemm.ee
          link
          fedilink
          English
          arrow-up
          7
          ·
          9 days ago

          Been this way since the harnessing of fire or the building of the wheel.

    • Dragonstaff@leminal.space
      link
      fedilink
      English
      arrow-up
      19
      ·
      9 days ago

      It’s such a weird question. Why would I need ChatGPT to fuck my wife when we have the Dildoninator 9000 with Vac-u-loc attachments and Kung Fu grip?

  • HexesofVexes@lemmy.world
    link
    fedilink
    arrow-up
    29
    ·
    10 days ago

    “I have no math talent, but that’s ok I’ll use a tool to help” - absolutely no issues, math is hard and you don’t need most of it in “real life” (nonsense of course)

    “I can’t code so I’ll use a web page maker to help” - all good, learning to code is optional, it’s what you create that matters right?

    “Hey AI, break this concept down for me to help me learn it” - surprisingly, still good (though very ill advised, also built on plagiarism and putting private tutors out of work…).

    “I have no art talent, but that’s ok I’ll use a tool to help” - society melts down because…?

    I suppose it could just be a case of being happy to see talents we don’t have replaced by a tool? Then again, it might be that artists are better at generating attractive-looking arguments for their case.

    • UltraHamster64@lemmy.world
      link
      fedilink
      arrow-up
      39
      ·
      edit-2
      9 days ago

      “I have no math talent, but that’s ok I’ll use a tool to help”

      What tools do you need to replace “math talent”? If you’re talking about calculators - first of all they’re for arithmetic, not math - and second they still do not help you to “solve math problems”. You need logic, experience and intellect to do that. The only “tool” that can help you is an online forum, if someone already solved it.

      “I can’t code so I’ll use a web page maker to help”

      You still need to do stuff, think with your brain and spend time to build the web page. You need to have taste and work your sweat (and some tears) into it.

      “Hey AI, break this concept down for me to help me learn it”

      No, not good unless you want to be misinformed and/or manipulated

      society melts down because…?

      Because a massive group of people were screwed over without their consent to make a tool that is going to devalue their work. If you look closely at the examples you yourself provided, you can see that they all respect the copyright of others and are themselves often good and productive work. AI, on the other hand, was made “at the expense” of us, and we are rightfully mad.

      • HexesofVexes@lemmy.world
        link
        fedilink
        arrow-up
        10
        ·
        9 days ago

        Oh dear…

        Yes, copyright owners, but not the rights of the creator. Mathematical research is part of the publishing industry, and that strips the rights from creators of such works. Their work is mislabelled as discovery, and no protection is offered.

        That lovely tool you use to make a website? Yeah, £10 says there is open source code misappropriated there (much as AI generated code is pirated from GitHub, a lot of programs “borrow” code).

        Surely the mathematician and coder have equal claims to anger? It is their works being stolen too?

        • UltraHamster64@lemmy.world
          link
          fedilink
          arrow-up
          4
          ·
          9 days ago

          The people who advanced mathematical research got their glory and pride and the attention of peers. All of them get credited in the names of their own equations and theorems. Also, they all got paid.

          If the company is at least somewhat credible, it’s easier for them to license code properly. Besides, a lot of open source code is licensed under MIT, which permits fair use.

          You never get the name of the artist from the generated piece, even if it’s a one-to-one copy of their art.

          • HexesofVexes@lemmy.world
            link
            fedilink
            arrow-up
            8
            ·
            9 days ago

            I’ll pause you right there - I am a mathematical researcher by trade. We don’t get paid, or glory, pride or much attention XD

            Trust me when I say, unlike in art, the folks who put in the legwork in mathematics tend to toil in obscurity. We don’t much mind it, the pay isn’t great but it does pay the bills.

            I’ll leave this thread with a thought - since I think we’re a little too far apart on opinion to bridge the gap. All fields require creativity, not all forms of creativity are equally rewarded, and therein lies the true root of the AI crisis.

      • weker01@sh.itjust.works
        link
        fedilink
        arrow-up
        4
        ·
        9 days ago

        Never used a computer algebra system? Like Wolfram Mathematica, SageMath, or Maple? Then we have proof-assisting software like Coq and SMT solvers like cvc5 or Z3.

        This is all software that can solve real math problems in an easy way.
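
        To make that concrete, here is a tiny example with Z3’s Python bindings (the z3-solver package): you state a few constraints and let the solver find a satisfying assignment instead of working it out by hand.

        ```python
        # pip install z3-solver
        from z3 import Ints, Solver, sat

        x, y = Ints("x y")
        s = Solver()
        s.add(x + 2 * y == 7, x > 0, y > 0)

        if s.check() == sat:
            print(s.model())  # one satisfying assignment, e.g. [y = 3, x = 1]
        ```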

      • nekbardrun@lemmy.world
        link
        fedilink
        arrow-up
        4
        ·
        9 days ago

        What tools are there to replace math work besides a calculator?

        Mathematica is one example: it solves integrals and does some elementary proof run-downs for you.

        Granted, it is used mostly by STEM students. But I rarely see someone totally forbidding the use of Mathematica as a learning tool.

        If you want a more high-school-level tool, then GeoGebra is another great example (and also open source).

        Pretty useful to plot the graphs and help you see what you’re getting wrong.

        I’m answering just to show that there are indeed mathematical tools for the in-between of a full math major and a “paltry peasant” who only needs to compute a good-enough function for his problem, be it an engineer working out beam loads, a chemist working out enthalpy reactions, or a biologist trying to find an ODE that best fits prey/predator data in a given ecosystem.
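
        For the Mathematica-style workflow specifically, the free SymPy library does the same kind of symbolic work; a short sketch:

        ```python
        # pip install sympy -- a free computer algebra system, same niche as Mathematica's CAS.
        from sympy import symbols, integrate, solve, sin, pprint

        x = symbols("x")

        # Symbolic integration, like Integrate[x Sin[x], x] in Mathematica.
        pprint(integrate(x * sin(x), x))   # -> -x*cos(x) + sin(x)

        # Exact solutions instead of a numeric approximation.
        print(solve(x**2 - 5*x + 6, x))    # -> [2, 3]
        ```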

    • truthfultemporarily@feddit.org
      link
      fedilink
      arrow-up
      37
      ·
      10 days ago

      So basically there are now a couple of studies that show critical thinking skills are on the decline due to AI use, which is bad because: 1. it makes you easier to manipulate, and 2. how do you check if the AI is right?

      • AlfredoJohn@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        2
        ·
        edit-2
        9 days ago

        Have they isolated the reality that we are also living in times where there are a plethora of factors that are decreasing people’s life outlooks? Depression also affects critical thinking skills. People are less inclined to practice critical thinking when most of their time is spent working for billionaires that are eroding our standard of living daily. I’m not going to be practicing how to be my best when the output of my work goes towards some rich societal parasite. And if that is taking most of my time, when do I get to do this for myself when I’m tired after work? What about the fact that education as a whole has been going into the shitter? If anything, AI is masking how bad our current working conditions, work expectations, and dwindling education standards are truly affecting our falling critical thinking skills. I’m sorry, but with all the external factors here I am extremely skeptical of the results those studies are claiming to demonstrate; the controls to really isolate this down to purely AI being the root cause seem nearly impossible given the state of the world right now. But if you have links to these studies instead of random hearsay, I’d like to see how they isolated this down with controls on their study.

        Edit: i see you posted down below and yeah i think this underscores my entire point:

        “Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage.”

        So it’s not a factor of AI use; it’s the fact that our educational standards have gone to shit. And furthermore, they are drawing results by comparing different generations and AI tool use dependence between them, which doesn’t isolate this decline to being just due to AI. I don’t know, this seems like a flawed study that’s claiming correlation to be causation.

    • rockerface 🇺🇦@lemm.ee
      link
      fedilink
      arrow-up
      23
      ·
      10 days ago

      Except that being good at math or being good at designing a web page have nothing to do with memorising formulas or coding. It’s about being able to break down the problems into manageable pieces and applying your knowledge to bring structure to them. Which isn’t something you can replicate with a tool, if you don’t know how it’s done in the first place.

      If you know nothing about the general principles of math, you won’t be able to solve problems even with tools, because you won’t know which tools to use and how.

      • HexesofVexes@lemmy.world
        link
        fedilink
        arrow-up
        8
        ·
        10 days ago

        I’d somewhat disagree there.

        This isn’t about the intrinsic value of the skill, or a deep understanding, it is a utilitarian application to solve a problem.

        In this respect, tool using is seen as valuable. Mathematical tools (because of their ease of coding) have been popular for decades. Similarly, web page creation tools have existed for a long time - a complete novice can create professional looking pages with them.

        The results from these tools may lack substance and nuance, these being given only by deep understanding, but the same can be said of AI generated images.

        • rockerface 🇺🇦@lemm.ee
          link
          fedilink
          arrow-up
          7
          ·
          10 days ago

          You do need the skill to use the tools, though.

          a complete novice can create professional looking pages with them

          Not unless they already know what makes a page professional looking. Otherwise, how would they tell whether they’ve succeeded?

          • HexesofVexes@lemmy.world
            link
            fedilink
            arrow-up
            6
            ·
            10 days ago

            In much the same way a person can evaluate an art style and say “this is what I want”.

            Often, when people without knowledge attempt to create web pages, the results aren’t the best: they look good but aren’t well made. Much as AI art isn’t superior to a skilled artist’s.

      • daniskarma@lemmy.dbzer0.com
        link
        fedilink
        arrow-up
        8
        ·
        9 days ago

        The same applies to AI tools. Try to make a coherent program with them without knowing how to program. Try to make a pretty picture without knowing how. You’ll end up with very bad results.

        But if you know the basics of a topic, any tool can enhance your efficiency at it.

    • Whats_your_reasoning@lemmy.world
      link
      fedilink
      arrow-up
      22
      ·
      9 days ago

      The thing people seem to forget in this argument is that art is more than making pretty pictures. Art is used to convey emotional messages - it’s a unique act of human expression.

      To create art (whether it be through image, writing, or something else) brings a cathartic sense to the artist, and if done well, it can communicate intended emotions to a viewer. Are there people carefully programming modern AI to make art that fits that concept? Maybe - I have heard people talk about that scenario, but I haven’t seen any such art yet. Rather, the vast majority of modern AI images lack the nuance and emotional impact that real art carries. It’s hollow, uncoordinated, and lacks the “soul” people connect to in human-made art.

        • Whats_your_reasoning@lemmy.world
          link
          fedilink
          arrow-up
          10
          ·
          9 days ago

          You really don’t see the nuance to that? A human uses art to satirize the way other humans use art. A message is being conveyed. The message might be, “Fuck your idea of art,” but that’s still a message being sent from one human to other humans, through the medium of art.

          An AI can’t do that. An AI can’t understand the emotions underlying the concept of protest art. You can ask it to make up some absurd idea, or even to generate a realistic image of it, but it’s not likely to resonate with humans as well as human-made art does.

          It’s okay if this all sounds like gobbledygook - not everyone connects to art in the same way. But those that get it know exactly what I’m talking about.

          • Skullgrid@lemmy.world
            link
            fedilink
            arrow-up
            4
            ·
            9 days ago

            A human uses art to satirize the way other humans use art. A message is being conveyed. The message might be, “Fuck your idea of art,” but that’s still a message being sent from one human to other humans, through the medium of art.

            An AI can’t do that. An AI can’t understand the emotions underlying the concept of protest art.

            The AI art doesn’t appear out of nothing. Someone sets the actual content of the art in motion, and it’s not the fault of the AI that the stupid human controlling it typed in “big titty goth gf” instead of something that illustrates a better concept.

            What’s the excuse of the banana guy for making a shitty piece with no effort?

            • Whats_your_reasoning@lemmy.world
              link
              fedilink
              arrow-up
              7
              ·
              9 days ago

              What’s the excuse of the banana guy for making a shitty piece with no effort?

              You’re talking like there’s some rule about the effort required in order for something to qualify as “art,” as if the time-saving aspect of AI-generation is what disqualifies its images. That’s not how art works, and that’s not the issue with AI.

              For a lot of people, art is about expressing themselves. If you have an absurd idea to troll art by doing something inane like taping a banana to a wall, that is still expressing one’s self even if it seems low-effort. You don’t have to like it or agree with it, just as you don’t have to like or agree with what another person says.

              The AI art doesn’t appear out of nothing. Someone sets the actual content of the art in motion

              And unless the human takes great control in the generation of that image, other humans may feel something lacking in the result. At best, AI art resembles something made by someone who has the hand-eye coordination and technical skill required to make visual art, but who lacks the passion and training that allows them to connect emotionally with an audience.

              • Skullgrid@lemmy.world
                link
                fedilink
                arrow-up
                4
                ·
                edit-2
                9 days ago

                What’s the excuse of the banana guy for making a shitty piece with no effort?

                You’re talking like there’s some rule about the effort required in order for something to qualify as “art,” as if the time-saving aspect of AI-generation is what disqualifies its images. That’s not how art works, and that’s not the issue with AI.

                The banana art resembles something made by someone who has no hand-eye coordination or technical skill required to make visual art, and also lacks the passion and training that allows them to connect emotionally with an audience.

                And unless the human takes great control in the generation of that image, other humans may feel something lacking in the result. At best, AI art resembles something made by someone who has the hand-eye coordination and technical skill required to make visual art, but who lacks the passion and training that allows them to connect emotionally with an audience.

                Yeah, and that’s because the people using AI art generators are just expressing base shitty things, and the AI haters don’t see the pieces with effort put into them. This also goes against your other statement of

                The message might be, “Fuck your idea of art,” but that’s still a message being sent from one human to other humans, through the medium of art.

                An AI can’t do that. An AI can’t understand the emotions underlying the concept of protest art.

                AI art can do that, since it’s still a human generating the message in the end.

                EDIT: Can you meaningfully differentiate between a person writing a “plan” for a curator to tape a banana to a wall, and a person writing a “prompt” for a computer to generate an image that has a certain composition, lighting, colour, etc.?

                • zbyte64@awful.systems
                  link
                  fedilink
                  arrow-up
                  4
                  ·
                  edit-2
                  9 days ago

                  If we can’t explain the difference, AI must be sentient? This argument reminds me of “God of the gaps”.

                • petrol_sniff_king@lemmy.blahaj.zone
                  link
                  fedilink
                  arrow-up
                  1
                  ·
                  8 days ago

                  The banana art resembles something made by someone who has no hand-eye coordination or technical skill required to make visual art,

                  Good thing they didn’t choose paints or acrylics, then, huh? That might have been embarrassing.

                  Why do you think this is a gotcha?

        • petrol_sniff_king@lemmy.blahaj.zone
          link
          fedilink
          arrow-up
          3
          ·
          8 days ago

          the vast majority of human art lacks the real nuance and emotional impact real art carries.

          1. It by-definition does not. The fact that you can’t see this I think makes you an inhuman monster.

          2. If this were true, why would I want any of it? Do you seriously consume art you think is garbage for no reason? Are you not busy? Is your life really so boring?

          • Skullgrid@lemmy.world
            link
            fedilink
            arrow-up
            1
            ·
            edit-2
            8 days ago

            the vast majority of human art lacks the real nuance and emotional impact real art carries.

            It by-definition does not. The fact that you can’t see this I think makes you an inhuman monster.

            https://en.wikipedia.org/wiki/Sturgeon%27s_law

            The top 40 charts of music? 35 out of the 40 are pure crap, manufactured by people that are playing it by the numbers based on market studies, all in the pursuit of money. Now on top of that, imagine the majority of amateur fluff that people produce that is just low quality, or things that people make that aren’t full of gravitas and nuance and aren’t actually emotionally impactful.

            • petrol_sniff_king@lemmy.blahaj.zone
              link
              fedilink
              arrow-up
              1
              ·
              edit-2
              8 days ago

              The top 40 charts of music? 35 out of the 40 are pure crap,

              Wow. This is a very old-man opinion.

              imagine the majority of amateur fluff that people produce that are just low quality,

              Are you comparing people’s weekend projects to, I dunno, Marvel movies?

              I like amateur fluff, you know? I look for niche indie games on steam or itch.io just because I want to see what people are up to—what fun ideas they have. That it seems to bother you they’re not Casablanca is very strange to me.

        • grrgyle@slrpnk.net
          link
          fedilink
          arrow-up
          2
          ·
          9 days ago

          That was actually a great article. Thanks for sharing it. There was a lot more context around that event than I’d thought.

      • Steve Dice@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        5
        ·
        9 days ago

        AI images lack the nuance and emotional impact that real art carries. It’s hollow, uncoordinated, and lacks the “soul” people connect to in human-made art.

        This exact same criticism was used in the past but aimed at digital art, and, before that, to photography.

      • HexesofVexes@lemmy.world
        link
        fedilink
        arrow-up
        4
        ·
        9 days ago

        So, from a mathematician’s perspective, mathematical operations are careful constructs. Their validity and creation are an effort in creativity and, indeed, a final catharsis.

        To separate the two, one need only dictate the medium of expression.

    • wolo@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      4
      ·
      9 days ago

      As I understand it, the core purpose of art is communication. Using a graphical editor to create web pages is still honest art in my opinion, because although you’re assembling it out of larger primitives, you’re still communicating a substantial message. It’s similar to collage; the pieces you’ve assembled aren’t your work, and the viewer knows that. The important part is how they’re arranged and the message that arrangement communicates.

      AI-generated art feels deceptive and hollow to a lot of people because when we see art, we expect it to communicate something substantial, but in the case of AI art, the model can’t magically add more meaning beyond the words of the prompt. Not to mention, the cultural grand larceny involved in creating AI art tools leaves a bad taste in most honest people’s mouths.

    • Korhaka@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      2
      ·
      9 days ago

      You don’t need to understand the binary level to make a web page though. At some point you probably don’t need more than a basic awareness of the processes a few levels above/below what you will be using.

  • angrystego@lemmy.world
    link
    fedilink
    arrow-up
    22
    ·
    8 days ago

    Oh poor baby, do you need the dishwasher to wash your dishes? Do you need the washing machine to wash your clothes? You can’t do it?

    • chetradley@lemm.ee
      link
      fedilink
      arrow-up
      9
      ·
      8 days ago

      The ownership, energy cost, reliability of responses and the ethics of scraping and selling other people’s work, but yeah.

  • max@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    19
    ·
    9 days ago

    To me the worst thing is, my college uses AI to make the tests. I can see they’re made by it because of the multiple correct options, and in a group the teacher said something like “why lose an hour making it when AI can make it in seconds”.

    I like to use AI to “convert” citations, like APA to ABNT. I’m too lazy to do it myself, and it’s just moving the position of the words, so yeah.
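
    Roughly what that word-shuffling looks like if the citation is already split into fields (a loose sketch; real APA and ABNT rules have far more edge cases than this):

    ```python
    # A loose sketch of APA vs ABNT ordering from the same fields.
    # Real style guides have many more rules; this only shows the reshuffling.
    def apa(author_last, author_first, year, title, publisher):
        return f"{author_last}, {author_first[0]}. ({year}). {title}. {publisher}."

    def abnt(author_last, author_first, year, title, publisher, city):
        return f"{author_last.upper()}, {author_first}. {title}. {city}: {publisher}, {year}."

    fields = dict(author_last="Silva", author_first="Maria", year=2020,
                  title="Um Livro de Exemplo", publisher="Editora Exemplo")
    print(apa(**fields))                     # Silva, M. (2020). Um Livro de Exemplo. Editora Exemplo.
    print(abnt(**fields, city="São Paulo"))  # SILVA, Maria. Um Livro de Exemplo. São Paulo: Editora Exemplo, 2020.
    ```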