The most colossal of all bad ideas: uploading medical diagnostic images to an AI
But a fuck-ton of people are doing it.
If you are a human being of a certain vintage, you probably watched Saturday Night Live when it was just linear television1. There was a series of fake ads for blue jeans called “Bad Idea Jeans.” David Spade, Phil Hartman, and Kevin Nealon are portrayed suiting up to play some B-ball, exchanging stories that are cringeworthy.
Why do I bring this up?
Mainly because earlier this week I found this beaut of an article in the NY Times:
I can’t express how colossally stupid this is. I mean, the richest fucking bastard in the mother-fucking universe — by far — just posts a query to his idolatrous fans on his dystopian social media shit-hole, and users fall all over themselves to take that MRI DVD and upload the image files to Twitter’s dollar-store AI tool.
I mean, what harm could that do?
Over the past few weeks, users on X have been submitting X-rays, MRIs, CT scans and other medical images to Grok, the platform’s artificial intelligence chatbot, asking for diagnoses. The reason: Elon Musk, X’s owner, suggested it.
First, this is just asking for unintended consequences. Leakage of private health information USED to be something that people tried to avoid. I mean, until the ACA, the norm was that one bad diagnosis could get you flagged as having a “pre-existing condition,” and you were uninsurable. Like NO FUCKING reputable insurer would touch you, even with Trump’s crooked, mushroom-shaped penis. No way, no how.
And now that Trump and his first (butt) buddy Elonia Musk are looking for “government efficiency,” the ACA has a huge target on it, to be either gutted or repealed outright.
And you know what that means?
Yeah, the mother-fucking “pre-existing conditions” will come back, and huge swaths of the population will become uninsurable except through the shitty, super-expensive coverage of high-risk pools.
So these Twitter users heeding Musky’s call to action to upload their data are not the brightest bulbs in the chandelier.
“This is still early stage, but it is already quite accurate and will become extremely good,” Musk said in a post. The hope is that if enough users feed the A.I. their scans, it will eventually get good at interpreting them accurately. Patients could get faster results without waiting for a portal message, or use Grok as a second opinion.
Look, I am as much of a Heinlein fan as anyone, but the fact that Elonia has usurped the term “Grok” as the name of the shitty Twitter AI agent bugs the fuck out of me. And there is no way I would ever give that South African-bred twat-waffle any of my personal information, medical or otherwise.
The decision to share information as sensitive as your colonoscopy results with an A.I. chatbot has alarmed some medical privacy experts.
“This is very personal information, and you don’t exactly know what Grok is going to do with it,” said Bradley Malin, a professor of biomedical informatics at Vanderbilt University who has studied machine learning in health care.
Ya think?
I will disagree: if there is anything Elonia can do to hurt the non-MAGAstanis using this data, he will totally do it. The ‘man’ is so fucking far up Trump’s ass that he gets a bird’s-eye view of the Mango Mussolini’s tonsils every time he opens that mouth anus.
But the article does a decent job of explaining the risks, and frankly, there is a reason that we have HIPAA regulations:
When you share your medical information with doctors or on a patient portal, it is guarded by the Health Insurance Portability and Accountability Act, or HIPAA, the federal law that protects your personal health information from being shared without your consent. But it only applies to certain entities, like doctors’ offices, hospitals and health insurers, as well as some companies they work with.
In other words, what you post on a social media account or elsewhere isn’t bound by HIPAA. It’s like telling your lawyer that you committed a crime versus telling your dog walker; one is bound by attorney-client privilege and the other can inform the whole neighborhood.
I really, really like the attorney/dog walker analogy.
When tech companies partner with a hospital to get data, by contrast, there are detailed agreements on how it is stored, shared and used, said Dr. Malin.
“Posting personal information to Grok is more like, ‘Wheee! Let’s throw this data out there, and hope the company is going to do what I want them to do,’” Dr. Malin said.
Wheee indeed.
Here is where I have some actual professional background. In the way-back time, I was the product manager for a communications software product that had a big presence in healthcare. We made fax servers, and besides the other vertical industries that relied on the technology (financial services and legal), healthcare was a huge consumer of it. We had to undergo audits on how we protected the faxed sheets. It was not easy, but it was sobering. It was something we took seriously, and we helped our customers sail through their own compliance audits, something that happened regularly.
And this is one place where people are playing with fire, around an open keg of gunpowder, by uploading their data to a billionaire whose maturity is locked at the level of the nerdy 14-year-old boy he still behaves like.
What could possibly go wrong?
1 For the youngs: this was when there was no internet, no streaming, no YouTube. If you wanted to watch something, you had to have your eyes glued to a CRT, and the images and sounds were broadcast over the air with radio waves and shit. Also, you likely used a VHS tape recorder so you wouldn’t have to be up until 1:00 AM to see SNL.
How awesome was that?
I’ve got a story. A friend’s medical record lives in a MyChart-type system, and 98% of her healthcare information can be shared with other doctors in that system and beyond. One day she was looking at her entries to check a dosage that her primary doctor had just increased, and she discovered that her medication list had indications attached: one drug, which is sometimes used by people who suffer from herpes, was recorded as treating a past herpes infection, and another, sometimes prescribed for severe depression, was recorded as treating a major depressive disorder. My friend has never had herpes, and she takes the second medication to help her sleep because she suffers from insomnia; she was never diagnosed with depression, much less a major depressive disorder.

She called her primary’s office and asked why this was in her chart, and was told it was probably just pulled from the general indications for those drugs. I told her that’s bullshit, because these records follow you forever. It took her quite a few visits to different doctors to get her record cleared of that, and I cautioned her to check it every time she sees a doctor, because one of my medical groups is now using AI to record doctors’ notes. Now that most people who use these systems are aware of this, my friend wonders if someone did it to her on purpose. It should be corrected any time you see a doctor who uses that system.

It’s scary to me that some douche could screw around with people’s records like that, so beware. I’m going to ask about it the next time I see the doctor who used AI to record anything in my files. My friend said the nurse who helped her didn’t have a clue why the entry wasn’t there before and then suddenly a diagnosis she never had was recorded. That’s scary.
I can’t imagine uploading my medical diagnostic tests to an AI, especially at the request of Muskrat; how stupid does one have to be to do this? I guess it proves one thing… stupidity cannot be fixed.