The lawsuit says the Hingham High School student handbook did not include a restriction on the use of AI.
“They told us our son cheated on a paper, which is not what happened,” Jennifer Harris told WCVB. “They basically punished him for a rule that doesn’t exist.”
cross-posted from: https://lemmy.zip/post/24633700
Case file: https://storage.courtlistener.com/recap/gov.uscourts.mad.275605/gov.uscourts.mad.275605.8.0.pdf
Case file: https://storage.courtlistener.com/recap/gov.uscourts.mad.275605/gov.uscourts.mad.275605.13.0.pdf
I’m guessing they probably have rules against plagiarism, or passing off other people’s work as your own.
So then I guess it would be down to whether using AI (without disclosure?) is plagiarism or not. Most of the larger LLMs state the results of the model stemming from the user’s prompt intellectually belong to the user.
It’s a massive grey area, and the sum of these kinds of cases is what will define ownership of LLM output for the next ~50 years.
Don’t get me wrong, kid absolutely did not comply with the spirit of the assignment.
E: @[email protected] makes an excellent point:
If the student hired someone to write their essay and the author assigned all copyrights to the student, it’s still plagiarism.
Who legally owns the work isn’t the issue with plagiarism.
The LLMs can claim whatever they like; it holds no weight or value. They are basically advanced plagiarism engines, and the law has already made it clear you cannot copyright the output of an LLM.
This particular case will go nowhere, but there are plenty of legal cases between content creators and AI makers that are slowly moving through the legal system that will go somewhere.
the law has already made it clear you cannot copyright the output of an LLM.
That’s true in this context and often true generally, but it’s not completely true. The Copyright Office has made it clear that the use of AI tools has to be evaluated on a case-by-case basis, to determine if a work is the result of human creativity. Refer to https://www.copyright.gov/ai/ai_policy_guidance.pdf for more details.
For example, they state that the selection and arrangement of AI outputs may be sufficient for a work to be copyrightable. And that’s without doing any post-processing of the AI’s outputs.
They don’t talk about situations like this, but I suspect that, if given a prompt like “Rewrite this paragraph from third person to first person,” where the paragraph in question is copyrighted, the output would maintain the same copyright as the input (particularly if performed faithfully and without hallucinations). Such a revision could be made with non-LLM technology, after all.
So who owns the copyright then? Is the output just public domain?
It doesn’t matter what the LLM license states. Replace the LLM with a person doing exactly what the LLM does and ask yourself if it is plagiarism.
If I do your homework for you and I say, “Because you prompted me with the questions, the answers belong to you,” that isn’t a free ‘get out of plagiarism’ card for you. What I tell you isn’t relevant.
It’s not gray at all.
Edit: that’s weird. I got a personal message but the reply showed up here.
If the student hired someone to write their essay and the author assigned all copyrights to the student, it’s still plagiarism.
Who legally owns the work isn’t the issue with plagiarism.
Most of the larger LLMs state the results of the model stemming from the user’s prompt intellectually belong to the user.
Who cares what they say to avoid being sued for copyright infringement?
I sometimes use an LLM to “tidy up” my work and paste a bunch of writing in to see if it comes up with anything better. Some parts it will, others it won’t, and I’ll use or tweak some of it. I wonder if that counts? It’s all my work going in, but it’s using other people’s work to make adjustments.
Replace LLM with a person. If it was a person editing your work, does it make it plagiarism?
A common proofreading technique is to give your work to another person to read and make comments. That’s not plagiarism.
People who proofread generally only make recommendations for edits. LLMs often “rewrite” the vast majority of the document.
If I give my editor only the concept of my paper and about 20–30% of the content that ends up in the final paper… that sounds like someone else wrote the paper to me.
It’s all up to how you’re using the tool. Lots of kids out there will simply tell ChatGPT to write something for them. Others will simply ask for basic proofreading. It’s tough to tell the difference on the grading side.
Yes, that’s exactly my opinion on the subject. (I realize this is a contentless reply, but I didn’t want you to think I downvoted you.)
I didn’t want you to think I downvoted you.
I’m admin on my small instance. I can see the votes. No worries. In this case the downvote is from [email protected].
Anyway, the most I ever use LLMs professionally for is to help rearrange content for better flow or maybe convert more rambly bits into something that’s concise. I tend to be more verbose than I need to be (mostly because my documentation for stuff is wildly verbose since I tend to forget stuff, which is great for documentation… not always great for talking through something for a client).
I write my own papers, but I will put paragraphs through an LLM and ask how they can be improved (normally Grammarly’s ‘AI’), and sometimes I take its advice, but half the time I dislike what it’s done. Sometimes I give it a bunch of information on what I need to write, it’ll spit something out, and then I’ll sort of use that as a skeleton for my paper. But to be honest, it’s kind of shit, regardless of which one I’ve tried. And it lies. So much.
But those rules don’t apply here.
This reminds me of a story a friend who is a teacher recently told me: One of his students was so nervous during an oral exam that he could barely form a complete sentence. So my friend, in consultation with the exam board, gave the poor guy a second chance on the same day. That didn’t go particularly well either, but it was enough to pass. The parents of the nervous student sued because this procedure did not comply with the examination regulations. They won and managed to get the exam repeated a third time, with the examination board unchanged. You can perhaps imagine how this went for the student, who was understandably all the more nervous the third time around. In the end, he didn’t graduate — not because the examiners were vindictive, but because they had to grade him purely on his performance, which wasn’t good enough because the poor guy couldn’t get a coherent sentence together again. If his parents hadn’t sued, he would have graduated.
lol AI has written many thousands of words for me at work. the real life skill is how to not get caught using it
Don’t quote Wikipedia, instead quote the citations in Wikipedia.
Sounds like your employer is lucky…
yea, lucky i even show up, with the bullshit pittance they’re calling “wages”
It seems like with your skill set, you should be able to get a better job.
I think their skillset might be limited to what chatgpt can produce.
Doesn’t seem very oniony?
I guess you’re right in that the headline is not Onion-worthy. But I find “it’s not cheating to cheat using a machine, let’s sue” a rather creative approach.
Iirc the suing parents are teachers themselves