NATURAL LANGUAGE PROCESSING is pretty much exactly what it sounds like. It’s an umbrella term that’s used to refer to the ability of machines to process and understand language as it’s written or spoken by human beings. While it would be nice to think that our languages make logical sense and follow basic rules of grammar and punctuation, we all know that’s not always the case. We use slang, proper nouns, abbreviations and acronyms, and not everyone can string a sentence together like Stephen King or J. K. Rowling.
That’s where natural language processing comes in. Loosely speaking, it’s a form of artificial intelligence that’s all about trying to analyze and understand either written or spoken language and the context that it’s being used in. A basic example of natural language processing in action is the predictive text that we see when we use our smartphones. As we type our messages, the phone’s operating system uses AI and natural language processing to try to guess at what we might be typing and which words might come next.
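The next-word guessing described above can be sketched with a deliberately tiny model. Real smartphone keyboards use far more sophisticated techniques, but counting which word most often follows the current one (a bigram model) captures the basic idea; the corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny stand-in for a user's message history (invented example data).
corpus = (
    "see you at the office tomorrow . "
    "see you at the game tonight . "
    "meet me at the office later ."
)

# For each word, count which words follow it in the corpus.
following = defaultdict(Counter)
tokens = corpus.split()
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def predict_next(word, k=2):
    """Return the k most frequent words seen after `word`."""
    return [w for w, _ in following[word].most_common(k)]

print(predict_next("at"))   # "the" follows "at" every time in this corpus
print(predict_next("the"))  # "office" appears twice, so it ranks first
```

Typing "at" would surface "the" as the top suggestion, and typing "the" would surface "office" ahead of "game", simply because those pairings occur more often in the user's history.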
Natural language processing isn’t perfect, as you’ll have seen if you’ve ever used a tool like Google Translate, but it is pretty powerful and getting better all the time. It can even be combined with machine learning to create a system that not only understands what’s being said but also continues to improve the more it’s used. A great example here is Google Assistant, which gets better at understanding and responding to the commands of individual users. This can come in useful if you have an accent or a speech impediment, and it’s all made possible by a combination of natural language processing and machine learning.
Natural Language Processing
Not many people are brave enough to tackle the complicated topic of natural language processing, but Dr. Robert Wachter has given it a good shot. In an article for The Hospital Leader (the official blog for the Society of Hospital Medicine), Wachter shared an amazing description of natural language processing and the history of doctor note-taking.
Wachter argues that the ability for software to “read” notes could be one of the most important breakthroughs in digital medicine. He says, “A recent study linked the use of proton pump inhibitors to subsequent heart attacks. It did this by plowing through 16 million notes in electronic health records.”
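The study Wachter describes used far more sophisticated methods than this, but the core idea of software "reading" free-text notes can be illustrated with a toy sketch: scanning invented note text for mentions of a proton pump inhibitor. The notes and the short drug list below are purely illustrative assumptions, not real records or an exhaustive drug class.

```python
import re

# Invented free-text notes standing in for electronic health records.
notes = [
    "Patient started on omeprazole 20 mg daily for reflux.",
    "No acid suppression therapy; continue aspirin.",
    "Switched from pantoprazole to omeprazole last month.",
]

# A few proton pump inhibitors to look for (illustrative, not exhaustive).
ppi_pattern = re.compile(r"\b(omeprazole|pantoprazole|esomeprazole)\b", re.I)

# Flag which notes mention a PPI at all.
ppi_notes = [i for i, note in enumerate(notes) if ppi_pattern.search(note)]
print(ppi_notes)  # the first and third notes mention a PPI
```

Multiply this kind of pass over 16 million notes, add statistical analysis of outcomes, and you get a rough sense of how researchers can surface a link like the one between proton pump inhibitors and heart attacks from text that was never entered as structured data.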
Back at the dawn of medicine in the time of Hippocrates, medical records were written in prose by the physician. Wachter cites medical historian Stanley Reiser, who describes the case of Apollonius of Abdera, who lived in the 5th century BCE. His doctor’s note survived the test of time and reads, “There were exacerbations of the fever; the bowels passed practically nothing of the food taken; the urine was thin and scanty. No sleep… About the fourteenth day from his taking to bed, after a rigor, he grew hot; wildly delirious, shouting, distress, much rambling, followed by calm; the coma came on at this time.”
Unfortunately, this story doesn’t have a happy ending. The final entry in Apollonius of Abdera’s medical record read, “Thirty-fourth day. Death.”
Of course, medical health records have changed a lot since then. As we learned more about the human body, we were able to add new data points such as the results of blood tests and the stability of vital signs. Wachter explains that this was all well and good, until two major forces took root. The first was the change in audience for the doctor’s note, which was suddenly of interest to a huge number of stakeholders including “government officials, regulators, accreditors, payers, quality measurers and malpractice attorneys.”