JAMA Internal Medicine presents an opinion article: “Can Artificial Intelligence Speak for Incapacitated Patients at the End of Life?” The authors, three doctors from UCSF, don’t seem to have heard…
I mean, while this idea is obviously a stupid one, I have seen some suggestion that an AI could be used to help interpret the brain activity of patients who are capable of thought but not communication, and thus help them communicate with doctors, rather than trying to figure out what they might have said from prior history.
“could” is a word meaning “doesn’t”
I do not recommend using the word “AI” as if it refers to a single thing that encompasses all possible systems incorporating AI techniques. LLM guys don’t distinguish between things that could actually be built and “throwing an LLM at the problem” – you’re treating their lack-of-differentiation as valid and feeding them hype.
I used a term I’ve seen used before; I’m not familiar enough with the details of the tech to know what more technical term applies to this kind of device but not to other types, and especially not what term would be generally recognized as referring to such a thing. The hype guys are going to hype themselves up regardless in any case, seeing as that type tends to exist in an echo chamber as far as I can see.
maybe with blockchain,
🦀 THEY DID NEUROIMAGING ON A DEAD SALMON 🦀
As an autistic person who struggles with communication and organizing thoughts, I’ve found LLMs helpful for processing emotions and articulating things. Not perfectly in the way that you’d describe (hence I mostly don’t use LLM outputs themselves as replies), but my situation is much better than it was pre-November 2022.
It is a shame LLMs weren’t designed to be a common good for Disabled people, though. We’re just a happy use case accident for these companies and AI manufacturers. It’s tricky because this could be done just as well, I figure, with specifically designed LLMs instead of generic ones. @pavnilschanda @CarbonIceDragon
There are some efforts toward LLM use for disabled people, such as GoblinTools. And you’re very right about disabled people benefitting from LLMs being a happy use case accident. With that being the reality, it’s frustrating how so many people who blindly defend AI use disabled people as a shield against ethical concerns. Tech companies themselves like to use us to make themselves look good; see the “disability dongle” concept as a prime example.
Yep! Very familiar! I actually wrote about LLMs and blindness, as an example, here: https://robertkingett.com/posts/6593/ @pavnilschanda
deleted by creator
this remark demonstrates a stunning lack of any understanding of anything at all of any of the topics involved in this, amazing