I can’t believe this got released and this is still happening.
This is RAG (retrieval-augmented generation) applied to search: the top N results of a search get fed into an LLM and reprioritized.
The LLM doesn’t understand the content, and it doesn’t even try to; it loads the content into its context window and then queries the context window.
It’s sensitive to initial prompts, base model biases, and whatever is in the article.
I’m surprised we don’t see more prompt injection attacks on this feature.
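The flow described above can be sketched in a few lines. Everything here (function names, prompt wording) is illustrative, not any vendor’s actual API:

```python
# Minimal sketch of the RAG flow described above. The top-N search
# results are pasted verbatim into the prompt (the "context window"),
# so satire, jokes, and injected instructions all go in with the same
# weight as everything else. All names here are hypothetical.

def build_prompt(query: str, results: list[str]) -> str:
    # Number each snippet and concatenate them into one context blob.
    context = "\n\n".join(
        f"[{i + 1}] {snippet}" for i, snippet in enumerate(results)
    )
    return (
        "Answer the question using only the sources below.\n\n"
        f"{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

# The model never vets these snippets; a satirical study ranks the
# same as a real one.
results = [
    "Parachute use did not reduce death or major injury (satirical BMJ trial).",
    "Blog post uncritically citing the BMJ parachute trial.",
]
prompt = build_prompt("Are parachutes effective?", results)
```

Nothing in that pipeline distinguishes a joke from a finding, which is also why injected instructions inside a result page are so dangerous here.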
A link to the mentioned BMJ publication.
https://www.bmj.com/content/363/bmj.k5094

That’s why you should always have your backpack when you board a plane (along with your towel, of course).
Thanks! The OP is a mess (at least when using Boost).
At first I was at least impressed that it came up with such a hilarious idea, but then of course it turns out (just as with the pizza glue) that it just stole the idea from somewhere else.
“AI” as we know it is fundamentally incapable of coming up with something new; everything it spits out is a recombination of someone else’s work.
The big problem with these AIs is not that the technology is fundamentally flawed. It’s that stupid humans just dump huge amounts of data into them without checking any of it for validity.
If you train a human on bullshit, of course they’re not going to know the difference between truth and lies; how could they possibly, if all they’re ever told is utter nonsense? I really would have thought better of people at Google. You’d have thought someone there would have had the intelligence to realize that not everything on the internet is solid gold.
This is why it’s never really bothered me that they were training on data from Reddit.
I thought this was fake, but I searched Google for “parachute effectiveness” and that satirical study is at the top, and literally every single link below it is a reference to that satirical study. I have scrolled for a good minute and found no genuine answer…
Presumably because the effectiveness of parachutes is pretty self-evident and no one has done a formal study on the subject because why would they.
It’s not like they’re going to find that parachutes actually aren’t needed and the humans just float gently to the ground on their own. Or that a sufficiently tall top hat can be just as effective.
Parachute effectiveness is a very reasonable thing to study; it’s pretty important to know how one parachute design performs compared to others, and the obvious baseline is no parachute. A lot of things that appear self-evident have been extensively studied; generally you don’t want to just assume you know how something works.
Though throwing people out of a plane at altitude with no parachute probably isn’t the most ethical way to study parachute effectiveness.
I think we know enough about aerodynamics that we could probably simulate it on a computer if we really cared to. My point is more that it’s probably never been studied at an academic level. I’m sure parachute manufacturers and various militaries have studied all sorts of things, but none of that would have made it into a research paper.
Here’s a study on cadavers to determine whether people have the same number of nose hairs in each nostril. In academia there is no such thing as too trivial.
There are plenty of studies on parachutes for spacecraft (e.g., here’s one on the aerodynamics of parachutes for Mars landings), so if you follow the references, somewhere down the line you’ll probably find studies on general parachute effectiveness.
You have to search using language that papers might actually use though. “Parachute effectiveness” means what the satirical paper is exploring, whether it prevents death or not. The only serious studies that might have used that language would be old WW2 studies that threw people out of planes with different parachutes to see how many survived.
If you want to know how to design an effective parachute, you should be looking at reference books like Parachute Recovery Systems instead.
This is a good point I didn’t think of. It’s possible that the answer is so obvious that the only articles written about it would be jokes.
Why waste money on parachutes when everyone has a backpack at home?!
&udm=14 🙏
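For anyone who hasn’t run into it: `udm=14` is the Google query parameter that jumps straight to the plain “Web” results tab, which skips the AI Overview. A quick sketch of building such a URL (the search query itself is just an example):

```python
from urllib.parse import urlencode

# Build a Google search URL with udm=14, which selects the plain
# "Web" results view (no AI Overview). The query is just an example.
params = {"q": "parachute effectiveness", "udm": 14}
url = "https://www.google.com/search?" + urlencode(params)
print(url)  # https://www.google.com/search?q=parachute+effectiveness&udm=14
```

Most browsers also let you save this as a custom search engine so every address-bar search gets the parameter for free.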
Better yet, don’t use Google for searching when possible.
Want to find stuff on the internet? Don’t! Follow for more tips like these!
Google search results have been inferior for a lot of things for a long time now.
Inferior to what? Past Google? That’s not available anymore. Hate on Google all you want; the alternatives are worse.
I use DuckDuckGo as my default, and more than half the time I have to go to Google to find what I’m looking for.
Yeah, Bing isn’t very useful.
It’s less sanitized which can be helpful under certain circumstances but for most searches that doesn’t come up.
Also, its interface is god-awful; it’s actually somehow worse than Google’s image search, and that’s pretty bad.
Yes, it is better for porn.
Inferior to other search engines. I’ve had more luck finding things I want using ddg when I haven’t been able to on Google. Maybe I’ve been lucky.
Contrariwise, I personally haven’t found Bing to be any better.
my younger brother has bing by default and whenever i try to search anything on his computer half the time i can’t find what i’m looking for
No, I’m not sure about that; I’ve never seen anything better. Would you like to give some examples? And don’t say Bing, because Bing is definitely not better.
Pretty much all the other search engines just use Bing in the background, so basically there’s no point using them.
Not an endorsement, because I’ve never used it, but Kagi is an alternative that I believe has its own index. It isn’t free, though.
There is also SearXNG, which is more of a meta-search that will post your query to a bunch of search engines at once.
I’ve heard of it, but unless it was profoundly better I have very little interest in paying for a search engine.
I just searched “do maple trees have a tap root”, ai overview says yes. Literally all other reputable sources say no 🙄
Representative study participant jumping from aircraft with an empty backpack. This individual did not incur death or major injury upon impact with the ground
Hah! I just told someone the other day that LLMs trick people into seeing intelligence basically by cold reading! At last, sweet validation!
At least in the actual Gemini chat, when asked about the parachute study’s effectiveness, it correctly notes that the study is satire.
Weird that it actually made note of that but didn’t put it in the summary.
It’s because it has no idea what is or isn’t important. It’s all just information to it, and it all carries the same weight, whether it’s “parachutes aren’t any more effective than empty backpacks” or “this study is a satire making fun of other studies that extrapolate information carelessly.” Even though that second bit essentially negates the first bit.
Bet the authors weren’t expecting their joke study to hit a second time like this, demonstrating that AI is just as bad at extrapolating, since it extrapolated “true” information from a false premise.
It’s reckless to use these AIs in searches. If someone jokes about pretending to be a doctor and suggests a stick of butter as the best treatment for a heart attack, and that joke makes it into the training set, how would an AI have any idea that it doesn’t carry the same weight as actual doctors discussing patterns they’ve noticed in heart attack patients?
LLMs indeed have a way of detecting satire. The same way humans do.
It’s just that now they’re like the equivalent of five year olds in terms of their ability to detect sarcasm.