Must Read
OpenAI announced on Wednesday, January 7, that it is rolling out a health-focused version of ChatGPT called ChatGPT Health. Described as “a dedicated experience in ChatGPT designed for health and wellness,” the new feature lets users feed their personal health information to ChatGPT.
The company says the goal of ChatGPT Health is to help make sense of scattered health information people have from doctors, tests, and health monitoring apps, as well as to “understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the tradeoffs of different insurance options based on your healthcare patterns.”
I can see the initiative means well, but as someone who holds a healthy amount of skepticism toward AI initiatives, I had to ask myself: can ChatGPT offer real support to people who need actual medical care?
I dug through OpenAI’s documentation to resolve some of my own concerns about the upcoming rollout of ChatGPT Health.
While the feature aims to help users better advocate for themselves when it’s time to see a doctor, OpenAI’s announcement also explicitly says it is designed to “support, not replace medical care.” I took a look at the limitations the company lists that make it a helper, not a replacement for actual doctor’s orders.
According to the OpenAI support page for ChatGPT Health, US users over the age of 18 can connect their electronic medical or health records so ChatGPT can discuss health-related matters with them specifically in the ChatGPT Health subsection of the app. Users have to sign in to their providers to do this, which adds some necessary friction that helps keep health data private.
Apple Health data can also be connected to ChatGPT Health, along with data from a number of third-party applications, namely Peloton, MyFitnessPal, Function, Instacart, AllTrails, and Weight Watchers.
According to its support page, the information ChatGPT Health gets is siloed and used specifically for ChatGPT Health and no other functions.
In terms of data privacy, OpenAI appears to have taken great care to assuage hawks watching for egregious behavior.
People have to opt in before their data can be used by ChatGPT to help them. More notably, OpenAI spells out what a connected medical record may contain: “lab results, visit summaries, and clinical history.” It also suggests the feature only be used “if you are comfortable with this information being in ChatGPT.”
That said, OpenAI says the data you place in ChatGPT Health is not used to train its foundation models, and can be deleted or otherwise removed whenever you want. Further, ChatGPT Health conversations, connected apps, memory, and files are available only within that service.
While a regular ChatGPT conversation can become a ChatGPT Health conversation if necessary, that ChatGPT Health conversation does not flow back into ChatGPT proper, though you can share your conversations with others if you want to. The onus, it appears, is on your own actions.
Perhaps one big draw of ChatGPT Health for prospective users is OpenAI’s work with physicians to fine-tune the service over a two-year period. According to the press announcement, OpenAI worked with “more than 260 physicians who have practiced in 60 countries and dozens of specialties to understand what makes an answer to a health question helpful or potentially harmful — this group has now provided feedback on model outputs over 600,000 times across 30 areas of focus.”
The goal, it seems, is to encourage people to get medical help and to consolidate information beforehand: making it easier for patients to communicate how they’re feeling without oversimplifying any symptoms, helping surface patterns a patient may not notice over time, and prioritizing the safety of users above all.
OpenAI added, “This physician-led approach is built directly into the model that powers Health, which is evaluated against clinical standards using HealthBench, an assessment framework we created with input from our network of practicing physicians.” HealthBench is said to evaluate ChatGPT Health’s responses against physician-written rubrics meant to reflect quality in practice, prioritize user safety, communicate clearly, escalate care appropriately, and respect the context of the person using the chatbot.
For myself, as a Filipino who knows healthcare is expensive and who has type 2 diabetes, it’d likely be prudent for me to take any advantage I can get if I have a health issue requiring a diagnosis.
While I wouldn’t want my data to live on ChatGPT Health forever (thank goodness I can delete it, no questions asked), having it track my fitness and nudge me toward a doctor, whether to augment an exercise regimen or to book an appointment if something is amiss, might be useful.
That said, I worry about whether ChatGPT Health takes race into consideration, since there is ample literature documenting racial disparities in healthcare, albeit for varying reasons. If ChatGPT Health can help create equity in medical care by letting patients better advocate for themselves or communicate their worries, I’d be supportive of it.
In the meantime, I’ll let the US take the lead and see how it pans out as an actual service. – Rappler.com