

Nope, not trolling at all.
From the arXiv source you yourself provided, Noels et al. define censorship as:
Censorship in this context can be defined as the deliberate restriction, modification, or suppression of certain outputs generated by the model.
Which is starkly different from the definition you yourself gave. I actually like their definition a whole lot more. Yours is problematic because it excludes a large set of behaviors that colloquially fall under "censorship" and that we would want to study.
Again, for the third time, that was not really the point, and I'm not interested in litigating the technical scope of "censorship" in this field, at least not in this discussion right here and now. It is irrelevant to the topic at hand.
I didn’t say he’s a nobody. What was that about a “respectable degree of charitable interpretation of others”? Seems like you’re the one putting words in mouths here.
Yeah, this blogger shows a fundamental misunderstanding of how LLMs work or how system prompts work. (emphasis mine)
In the context of this field of work and study, you effectively did call him a nobody, and the point I keep harping on, again and again, is that this is a false assertion. I did interpret you charitably. Don’t blame me because you said something wrong.
EDIT: And frankly, you clearly don’t understand how the work Willison’s career has covered is intimately related to ML and AI research. I don’t mean it as a dig, but you wouldn’t be drawing this arbitrary line to try and discredit him if you knew how work in Python, and on Django specifically, directly relates to many modern machine learning stacks.
nah, this is bullshit. i actually recently watched a Sarah Z video essay about exactly why this is bad, and it makes a pretty compelling case.
you can find it here
in short, humans are shit, and the basis on which most people get labeled a narcissist is more biased and tenuous than most would ever admit, which is a problem if we’re going to ruin people’s lives over their perceived faults.