The main use case for LLMs is writing text nobody wanted to read. The other use case is summarizing text nobody wanted to read. Except they don’t do that either. The Australian Securities and…
Clearly this post is about LLMs not succeeding at this task, but anecdotally I’ve seen them work OK and also fail. Just like humans, which are the benchmark, except LLMs are faster.
The entire point here is that they can’t?
humans are clearly faster at generating utterly banal shit, as proven by your posts in this thread