The problem with using this technology to create the discharge summary is that the output is only as good as the input.
A physician advisor once had a terribly unfortunate incident in which a pregnant patient died. This initiated a mandatory investigation by the Ohio Department of Health (ODH).
This was one of those perfect storms: the request for records arrived after hours and was not seen by the Health Information Management (HIM) department in a timely fashion, and the rotating resident was not aware that the discharge summary was their responsibility. In addition to the delay in producing the records (which turned out to be incomplete), there was a critical inconsistency in the record, and a review of all discharge summaries in the institution was triggered.
Whereas the Joint Commission requires that discharge summaries be completed within 30 days, the ODH’s policy is more restrictive. We were given two weeks to get all pending discharge summaries completed, or we were in jeopardy of violating the Medicare Conditions of Participation. In other words, get them done – or you won’t get paid to take care of Medicare patients anymore.
As you might imagine, this set off a firestorm of activity to get all discharge summaries completed. When tallied, there were about 1,200 incomplete summaries. In a large academic institution, there is quite a bit of turnover of healthcare providers, so many of the older encounters no longer had an identifiable caregiver still in our employ. Although the medical staff bylaws called for an active member of the care team to generate the discharge summary, it was necessary to find an alternate methodology. Enter me.
When all was said and done, more than 200 discharge summaries had been dictated. Dusty paper charts were thumbed through in an attempt to sort out each hospital course. Needless to say, the short encounters were easier to detail than the long stays.
In the aftermath, designing an electronic discharge summary became a priority. Discrete fields were populated automatically from other documents that had to be completed prior to discharge. For instance, the medications were imported from the medication list, and there was a transition-of-care document that had other information like pending studies and follow-up appointments.
Two contributions were made to this process. Early on, I was appalled to see a “discharge” summary presented at our mortality review conference. If a patient had died, it was ridiculous to see recommended homegoing medications or follow-up appointments. If there were a negative outcome, one could envision this material in the record becoming quite hurtful and/or inflammatory to grieving family members.
This was the quickest implementation of a revision. Within days, IT had arranged it so if a patient’s disposition was deceased, all of the sections that were nonsensical under those circumstances grayed out and could not be edited. In essence, it converted to a death summary, and there were no patient instructions regarding discharge medications, activity, therapy orders, diet, pending studies, or medical follow-up.
The other contribution was to try to simplify the composition of the hospital course narrative. The Electronic Health Record (EHR) had a handoff tool that was intended to keep a running record of the encounter so residents could keep each other apprised when they were covering for one another. They would invest a few moments each day to keep the narrative current and accurate. On the day of discharge, they would just need to update the final instructions. There was a radio button to import that narrative into the hospital course section. Of course, the story is only as good as the effort that the provider puts in.
Which brings us to ChatGPT, the artificial intelligence (AI) chatbot that can produce diverse writings such as poetry, songs, essays, and informational materials. Recently, an article was published in The Lancet regarding the use of AI technology to generate discharge summaries. The sample they gave seemed rather generic, but it was for an uncomplicated total hip replacement, and one might imagine that it was reflective of a routine hospital course. It would have been interesting to see ChatGPT tackle a complicated ICU case.
The problem with using this technology to create the discharge summary is that the output is only as good as the input. If the entire hospital course is copied and pasted, the chatbot may have as much difficulty sorting out the salient features as we do when we are reading the medical record. And it should go without saying, but it can’t, that the provider would be obligated to read and edit the document to ensure accuracy. Unfortunately, this will be the downfall. It is too easy to let technology do the work – similar to the shortfalls of computer-assisted coding or clinical documentation integrity. If the human being doesn’t put in the time and effort to ensure correctness, it is often not correct or complete.
However, ChatGPT or a similar tool will soon be composing discharge summaries. Since it won’t think documentation is a burden, its discharge summaries will, on the whole, be as good as, if not better than, the majority of those crafted now by overworked and undermotivated providers. Medical staff bylaws are going to need a revision. The policy should specify that the provider who cares for the patient must review the discharge summary; they are still going to have to affix their signature to it. No disclaimer saying “please forgive any errors the chatbot made” is going to absolve them of their responsibility; they will still be held accountable.
Documentation is not merely a burden; it is a responsibility. Effective documentation contributes to excellent medical care by facilitating clinical communication. The issue is that providers are given precious little guidance as to how to craft accurate, reliable, and helpful documentation.
For More Information: https://icd10monitor.medlearn.com/is-chatgpt-appropriate-for-discharge-summaries/