Thursday, October 19, 2017

Markets Are Great When They Work, But They Don't in Most Aspects of Medicine

One of the most maddening parts of the healthcare debate to me concerns the role of markets and the mythos that if we somehow let the free market work, a new era of low costs and high-quality care would be ushered in. I have written before that while I believe free markets are the best approach for optimizing the quantity and quality of most consumer goods, healthcare is inherently different. For the most part, we do not seek healthcare as a market good, but rather as something we must use when we are sick to make us better (or to prevent us from getting sick). When we are acutely ill, we have very little choice to exercise, and even when we are not sick, limits on information prevent us from making the “best” purchasing decision (where best may refer not only to money, but also to perceived quality and other aspects of care we value).

To say that healthcare will improve if we let markets operate is naive at best. While some healthcare organizations might perform better due to attention to cost and efficiency, at the end of the day, healthcare is not something we want to leave to pure market principles. Ironically, despite not operating as a free market, healthcare is profitable for many. I provided a number of examples in that posting a few years ago, and now some new information has come to the fore.

One is an interesting new book by Elisabeth Rosenthal, Editor-in-Chief of Kaiser Health News and a physician and former correspondent for The New York Times [1]. Dr. Rosenthal's book explores how all of the major players in healthcare - insurance companies, hospitals, physicians, pharmaceutical companies, medical device manufacturers, and even medical coders and researchers - operate under a set of “rules” of a highly dysfunctional market. These rules (with copious examples to back them up in the book and more elsewhere [2]) are, to quote:
  1. More treatment is always better. Default to the most expensive option.
  2. A lifetime of treatment is preferable to a cure.
  3. Amenities and marketing matter more than good care.
  4. As technologies age, prices can rise rather than fall.
  5. There is no free choice. Patients are stuck. And they’re stuck buying American.
  6. More competitors vying for business doesn’t mean better prices; it can drive prices up, not down.
  7. Economies of scale don’t translate to lower prices. With their market power, big providers can simply demand more.
  8. There is no such thing as a fixed price for a procedure or test. And the uninsured pay the highest prices of all.
  9. There are no standards for billing.
  10. Prices will rise to whatever the market will bear. The mother of all rules!
The last rule drives home the point of this posting. Even though the book is written from a somewhat liberal political bent, a political conservative could also find common cause with it in its demonstration of how the market is distorted by special interests that corrupt government attempts to regulate it.

More specific aspects of market dysfunction are documented in two recent papers, both authored by OHSU faculty. The first, by Prasad and Mailankody, calls into question the oft-stated high costs of drug development, which are used to justify the ever-increasing prices charged [3]. Some have been highly critical of their methodology [4], while others have noted that the costs are highly variable but still bear no connection to the prices charged [5]. There is no question that drug development remains expensive, and a pharmaceutical company may have many misses in between hits. But we need to be reasonable about using the cost of developing drugs to justify prices, especially in monopolistic or other situations where market-style choices are not available.

Another paper looks at repository corticotropin (rACTH) injection [6]. Although there is no evidence that this treatment is more effective for any indication than much cheaper synthetic corticosteroid drugs, its use has grown substantially, due both to intensive marketing efforts and to conflicts of interest among those who prescribe it most frequently. It is also one of a growing number of drugs whose price has risen substantially long after its development.

Other countries besides the US struggle with how to price drugs and other aspects of healthcare. The methods they employ, from negotiating at a national level to saying no to drugs that do not pass muster in cost-benefit analyses, are probably the only realistic solution when markets do not work and when government attempts to control them are subverted by special interests.

References
1. Rosenthal, E (2017). An American Sickness: How Healthcare Became Big Business and How You Can Take It Back. New York, NY, Penguin Press.
2. Rosenthal, E (2017). How Economic Incentives Have Created Our Dysfunctional US Medical Market. Medium. https://medium.com/@RosenthalHealth/how-economic-incentives-have-created-our-dysfunctional-us-medical-market-b681c51d6436.
3. Prasad, V and Mailankody, S (2017). Research and development spending to bring a single cancer drug to market and revenues after approval. JAMA Internal Medicine. Epub ahead of print.
4. Herper, M (2017). The Cost Of Developing Drugs Is Insane. That Paper That Says Otherwise Is Insanely Bad. Forbes, October 16, 2017. http://www.forbes.com/sites/matthewherper/2017/10/16/the-cost-of-developing-drugs-is-insane-a-paper-that-argued-otherwise-was-insanely-bad/.
5. Love, J (2017). Perspectives on Cancer Drug Development Costs in JAMA. Bill of Health. http://blogs.harvard.edu/billofhealth/2017/09/13/perspectives-on-cancer-drug-development-costs-in-jama/.
6. Hartung, DM, Johnston, K, et al. (2017). Trends and characteristics of US Medicare spending on repository corticotropin. JAMA Internal Medicine. Epub ahead of print.

Tuesday, October 17, 2017

The Still-Incomplete Answering of Questions About Physician Time With Computers

Another couple of studies have been published documenting the amount of time physicians spend with computers in primary care [1] and ophthalmology [2] clinics. Clearly these and other recent studies [3,4] show that physicians spend too much time with the electronic health record (EHR), especially when phrases like “pajama time” enter into the vernacular to refer to documentation that must take place after work at home because it could not be completed during the day.

But one aspect of these studies that has always concerned me is that there is no measure of the appropriate amount of time for physicians to spend outside the presence of the patient. This includes tasks like reviewing data that informs current decisions as well as entering data that other team members caring for the patient will use to inform their own decision-making. While some dispute the value of our current approaches to measuring the quality of care delivered [5], I believe that most physicians accept that there should be some measure of accountability for their decisions, especially given the high cost of care. This means that some physician time and effort must be devoted to measuring and improving the quality of care they deliver.

The newest time-motion study from primary care once again reiterates the large amount of time that the EHR consumes of the physician's day [1]. In this study, that time was found to be 5.9 hours of an 11.4-hour workday (about half) and 1.4 hours after hours. But if we look at the tasks on which this time was spent (Table 3 of the paper), we cannot deny that just about all of them are important to overall patient care, even if too much time is spent on them. Do we not want physicians to have some time for reviewing results, following up with patients, looking at their larger practice, and so on?

I have noted in the past that physicians have always spent a good deal of time not in the presence of patients. I have cited studies of this that predate the computer era, but someone recently pointed me to an even older study, from 1973 [6]. In this study of 103 physicians in a general medicine clinic, the physicians were found to spend 37.8% of their time charting, 5.3% consulting, 1.7% in other activities, and the remaining 55.2% with the patient. So even in the 1970s, ambulatory physicians spent only slightly more than half of their time in the presence of patients. As one who started his medical training in that era, I can certainly remember time spent trying to decipher unreadable handwriting as well as trying to track down paper charts and other missing information. I also remember caring for patients with no information except what the patient could recollect.

Clearly we have a great deal of work to do to make our current EHRs better, especially in streamlining both data entry and retrieval. We also need to be careful not to equate measures like clicks and screens with performance, as a study from our institution found that those who efficiently navigated the most information in the record achieved the best results in a simulation task [7]. What we really need are studies that measure the time taken for information-related activities in physician practice and determine which activities are most important to optimal patient care. Further research must also be done to optimize usability and workflow, including determining when other members of the team can contribute to the overall efficiency of the care process.

References

1. Arndt, BG, Beasley, JW, et al. (2017). Tethered to the EHR: primary care physician workload assessment using EHR event log data and time-motion observations. Annals of Family Medicine. 15: 419-426.
2. Read-Brown, S, Hribar, MR, et al. (2017). Time requirements for electronic health record use in an academic ophthalmology center. JAMA Ophthalmology. Epub ahead of print.
3. Sinsky, C, Colligan, L, et al. (2016). Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Annals of Internal Medicine. 165: 753-760.
4. Tai-Seale, M, Olson, CW, et al. (2017). Electronic health record logs indicate that physicians split time evenly between seeing patients and desktop medicine. Health Affairs. 36: 655-662.
5. Marcotte, BJ, Fildes, AG, et al. (2017). U.S. Health Care Reform Can’t Wait for Quality Measures to Be Perfect. Harvard Business Review, October 4, 2017. https://hbr.org/2017/10/u-s-health-care-reform-cant-wait-for-quality-measures-to-be-perfect.
6. Mamlin, JJ and Baker, DH (1973). Combined time-motion and work sampling study in a general medicine clinic. Medical Care. 11: 449-456.
7. March, CA, Steiger, D, et al. (2013). Use of simulation to assess electronic health record safety in the intensive care unit: a pilot study. BMJ Open. 3: e002549. http://bmjopen.bmj.com/content/3/4/e002549.long.

Tuesday, October 10, 2017

The Resurgence and Limitations of Artificial Intelligence in Medicine

I came of age in the biomedical informatics world in the late 1980s, which was near the end of the first era of artificial intelligence (AI). A good deal of work in what we called medical informatics at that time focused on developing “expert systems” that would aim to mimic, and perhaps someday replace, the cognition of physicians and others in healthcare.

But it was not to be, as excessive hype, stoked by misguided fears about losing out to Japan, led to the dreaded “AI winter.” Fortunately I had chosen to pursue research in information retrieval (search), which of course blossomed in the 1990s with the advent of the World Wide Web. The “decision support” aspect of AI did not go away, but rather was replaced with focused decision support that aimed to augment the cognition of physicians rather than replace it.

In recent years, it seemed that the term AI had almost disappeared from the vernacular. My only use of it came in my teaching, where I consider understanding the history of the informatics field essential to learning it.

But now the term is seeing a resurgence in use [1]. Furthermore, modern AI systems take different approaches. Rather than trying to represent the world and create algorithms that operate on those representations, AI has reemerged due to the convergence of large amounts of real-world data, increases in storage and computational capabilities of hardware, and new computation methods, especially in machine learning.

This has given rise to a new generation of applications that again try to outperform human experts in medical diagnosis and treatment recommendations. Most of these successful applications employ machine learning, sometimes the so-called “deep learning” variety (a minimal illustrative sketch follows the list below), and include:
  • Diagnosing skin lesions – keratinocyte carcinomas vs. benign seborrheic keratoses and malignant melanomas vs. benign nevi [2]
  • Classifying metastatic breast cancer on pathology slide images [3]
  • Predicting longevity from CT imaging [4]
  • Predicting cardiovascular risk factors from retinal fundus photographs [5]
  • Detecting arrhythmias comparable to cardiologists [6]
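To make “deep learning” a bit more concrete, here is a minimal, purely illustrative sketch of a small image classifier written in Python with the Keras API. To be clear, this is not the architecture of any of the systems cited above, which rely on much deeper (usually pretrained) networks and large sets of labeled clinical images; the tiny network and synthetic stand-in data below are only meant to show the general shape of the approach, namely convolutional layers that learn image features feeding a classifier that outputs a probability.

```python
# Minimal illustrative sketch only -- NOT the architecture of any cited study.
# It shows the general shape of a deep-learning image classifier: stacked
# convolutional layers that learn image features, followed by a small
# classifier head that outputs a probability.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in data: 200 random 64x64 RGB "images" with made-up labels
# (0 = benign, 1 = malignant in this hypothetical example).
X = np.random.rand(200, 64, 64, 3).astype("float32")
y = np.random.randint(0, 2, size=(200,))

model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),  # learn low-level image features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # learn higher-level features
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability of the positive class
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)  # trains, though random data teaches it nothing

print(model.predict(X[:1], verbose=0))  # predicted probability for one "image"
```

The key design point, and the contrast with the expert systems of the 1980s, is that nothing in this code encodes medical knowledge; the features are learned from labeled examples rather than represented explicitly.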
Unfortunately, the hype is building again too, perhaps exemplified by the IBM Watson system [7]. I recently came across an interesting article by MIT Emeritus Professor Rodney Brooks that puts a nice perspective on all of this and stimulated some of my own thinking [8].

From my perspective, the most interesting part of Brooks's piece concerns “performance vs. competence.” He warns that we must not confuse performance on a single task, such as making a diagnosis from an image, with the broader competence required, for example, to be a physician. As he states, “People hear that some robot or some AI system has performed some task. They then generalize from that performance to a competence that a person performing the same task could be expected to have. And they apply that generalization to the robot or AI system.”

I have no doubt that algorithmic accomplishments like the medical examples above will be used by physicians in the future, just as they now use automated interpretation of EKGs and other tests that comes, in part, from earlier AI work. But I have a hard time believing that the practice of medicine will evolve to patients submitting pictures or blood samples to computers to obtain an automated diagnosis and treatment plan. It will be a long time before computers can replace the larger perspective that an experienced physician brings to a patient’s condition, to say nothing of the emotional and other support that goes along with the context of the diagnosis and its treatment. Indeed, the doctors of Star Trek are augmented by automated tools but are, in the end, still compassionate individuals who diagnose and treat patients.

Somewhat tongue in cheek, I won’t say that machines replacing physicians is impossible, since there is a quote in a different part of the article, attributed to Arthur C. Clarke, aimed at people like myself: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” As someone who does not consider himself quite yet elderly, but who has worked in the field for several decades, I want to be careful not to say that something is “impossible.”

On the other hand, while I am certain that we will see growing numbers of tools to improve the practice of medicine based on machine learning and other analysis of data, it is very difficult for me to imagine a future without the empathetic physician who puts the findings in context and otherwise supports the patient whose diagnosis and treatment are augmented by AI.

References

1. Stockert, J (2017). Artificial intelligence is coming to medicine — don’t be afraid. STAT, August 18, 2017. https://www.statnews.com/2017/08/18/artificial-intelligence-medicine/.
2. Esteva, A, Kuprel, B, et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature. 542: 115-118.
3. Liu, Y, Gadepalli, K, et al. (2017). Detecting cancer metastases on gigapixel pathology images. arXiv.org: arXiv:1703.02442. https://arxiv.org/abs/1703.02442.
4. Oakden-Rayner, L, Carneiro, G, et al. (2017). Precision radiology: predicting longevity using feature engineering and deep learning methods in a radiomics framework. Scientific Reports. 7: 1648. https://www.nature.com/articles/s41598-017-01931-w.
5. Poplin, R, Varadarajan, AV, et al. (2017). Predicting cardiovascular risk factors from retinal fundus photographs using deep learning. arXiv.org: arXiv:1708.09843. https://arxiv.org/abs/1708.09843.
6. Rajpurkar, P, Hannun, AY, et al. (2017). Cardiologist-level arrhythmia detection with convolutional neural networks. arXiv.org: arXiv:1707.01836. https://arxiv.org/abs/1707.01836.
7. Ross, C and Swetlitz, I (2017). IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close. STAT, September 5, 2017. https://www.statnews.com/2017/09/05/watson-ibm-cancer/.
8. Brooks, R (2017). The Seven Deadly Sins of AI Predictions. MIT Technology Review, October 6, 2017. https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/.

Friday, October 6, 2017

HITECH Retrospective: Glass Half-Full or Half-Empty?

Last month, the New England Journal of Medicine published a pair of Perspective pieces about the Health Information Technology for Economic and Clinical Health (HITECH) Act (both available open access). The first was written by the current and three former Directors of the Office of the National Coordinator for Health IT (ONC) [1]. The second was written by two other national thought leaders who also have a wealth of implementation experience [2]. Both papers discuss accomplishments and challenges, with the Directors’ piece more positive (glass half-full) than that of the outside thought leaders (glass half-empty).

In the first piece, Washington et al. point to the accomplishments of the HITECH era, in which we have finally seen the digitization of the healthcare industry, one of the last major industries to digitize. The funding and other support provided by the HITECH Act have led to near-universal adoption of electronic health records (EHRs) in hospitals and substantial uptake in physician offices. They also point to a substantial body of evidence that supports the functionality required under the “meaningful use” program.

These authors also note the shortcomings of this rapid adoption, which occurred when not only people but also healthcare organizations and even EHR systems themselves were not ready for rapid uptake. They acknowledge that many healthcare providers are frustrated by poor usability and a lack of actionable information, which they attribute in part to proprietary standards and information blocking. They advocate moving forward with a push for interoperability, secure and seamless flow of data, engagement of patients, and development of a learning health system.

Halamka and Tripathi, on the other hand, take a somewhat more negative view. While acknowledging the gains in adoption that have occurred under HITECH, they note (my emphasis), “We lost the hearts and minds of clinicians. We overwhelmed them with confusing layers of regulations. We tried to drive cultural change with legislation. We expected interoperability without first building the enabling tools. In a sense, we gave clinicians suboptimal cars, didn’t build roads, and then blamed them for not driving.” They note that the process measures of achieving meaningful use have become an end in themselves, without attention to the larger picture of how to improve the quality, safety, and cost of healthcare. They do chart a path forward, calling for streamlining of requirements to ensure interoperability and a focused set of appropriate quality measures, with EHR certification centered on these as well. They also encourage more market-driven solutions, with government regulation focused on providing incentives and standards for desired outcomes.

Taking more of a glass half-full point of view, I wrote in this blog several months ago that EHR adoption has “failed to translate” into the benefits that have been borne out in practical research studies. I noted the success of some institutions, mostly integrated delivery systems, in adopting EHRs, and also the persistence in healthcare of the problems that motivated them, such as suboptimal quality and safety of care while costs continue to rise.

A few other recent pieces have charted a path forward. The trade journal Medical Economics interviewed several physician informatics experts to collate their thoughts on the features a highly useful EHR might have, especially in contrast to the systems that a majority of physicians complain about today [3]. The set of features amounts to little more than what we expect of all of our computer applications these days, yet their availability in EHRs continues to be elusive:
  • Make systems work together – achieve interoperability of data across systems
  • Make it easier and more intuitive – make systems easier to understand and use; reduce cognitive load
  • Add better analytics – add more capability to use data to coordinate and improve care
  • Support high-tech care delivery – be able to engage patients through video and asynchronous communication
  • Make EHRs smarter – systems anticipate user actions and provide reversible shortcuts
  • Become a virtual assistant – assist the clinician with all aspects of managing the delivery of care
A couple of other recent Perspective pieces in the New England Journal of Medicine provide some additional solutions. Two well-known informatics thought leaders from Boston Children’s Hospital lay out the case for an application programming interface (API) approach to the EHR based on standards and interoperability [4]. Although this piece has a different focus than the previous one, there is no question that the data normalization provided by FHIR Resources, the flexible apps that can be built using SMART, and the ease of developing it all via SMART on FHIR could make those goals achievable.
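To give a sense of why this approach is appealing, here is a minimal sketch (my own, not code from the cited paper, and not a production SMART app) of what a standards-based read looks like: a RESTful GET of a FHIR Patient resource that comes back as structured JSON any conforming application can parse. The base URL, patient ID, and access token are placeholders; a real SMART on FHIR app would first obtain an OAuth2 token through the SMART authorization flow and discover the EHR's FHIR endpoint.

```python
# Minimal sketch of a standards-based FHIR read; the endpoint, patient ID, and
# token below are placeholders, not a real server or credentials.
import requests

FHIR_BASE = "https://ehr.example.org/fhir"                  # placeholder FHIR endpoint
PATIENT_ID = "12345"                                        # placeholder patient ID
ACCESS_TOKEN = "<token from SMART on FHIR authorization>"   # placeholder token

response = requests.get(
    f"{FHIR_BASE}/Patient/{PATIENT_ID}",
    headers={
        "Accept": "application/fhir+json",
        "Authorization": f"Bearer {ACCESS_TOKEN}",
    },
    timeout=10,
)
response.raise_for_status()
patient = response.json()  # a FHIR Patient resource as structured JSON

print(patient["resourceType"])                     # "Patient"
print(patient.get("name", [{}])[0].get("family"))  # family name, if present
```

Because every conforming server exposes the same resources in the same way, an app written against this interface can, at least in principle, run against any EHR, which is precisely the kind of information economy the authors envision.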

In the second piece, a well-known leader in primary care medicine calls for delivering us from the current EHR purgatory [5]. His primary solutions focus on reforming the healthcare payment system, moving toward payment for outcomes rather than volume, i.e., value-based care.

I agree with just about all that these authors have to say. While the meaningful use program required some benchmarks to ensure the HITECH incentive money was appropriately spent, we are probably beyond the need to continue requiring large numbers of process measures. We need to focus on standards and interoperability that will open the door to doing more with the EHR than just documenting care, such as predictive analytics and research. Continuing to reform our payment system is a must, not only for better EHR usage but also to control costs and improve the health of the population.

There is also an important role for clinical informatics professionals and leaders, who must lead the way in righting the problems of the EHR and other information systems in healthcare. I have periodically reached back to a quote of my own from the unveiling of the HITECH Act: “This is a defining moment for the informatics field. Never before has such money and attention been lavished on it. HITECH provides a clear challenge for the field to 'get it right.' It will be interesting to look back on this time in the years ahead and see what worked and did not work. Whatever does happen, it is clear that informatics lives in a HITECH world now.” Informatics does live in this world now, and we must lead the way, not letting perfect get in the way of good, but making EHRs most useful for patients, clinicians, and all other participants in the healthcare system.

References

1. Washington, V, DeSalvo, K, et al. (2017). The HITECH era and the path forward. New England Journal of Medicine. 377: 904-906.
2. Halamka, JD and Tripathi, M (2017). The HITECH Era in Retrospect. New England Journal of Medicine. 377: 907-909.
3. Pratt, MK (2017). Physicians dream up a better EHR. Medical Economics, May 22, 2017. http://medicaleconomics.modernmedicine.com/medical-economics/news/physicians-dream-better-ehr.
4. Mandl, KD and Kohane, IS (2017). A 21st-century health IT system — creating a real-world information economy. New England Journal of Medicine. 376: 1905-1907.
5. Goroll, AH (2017). Emerging from EHR purgatory — moving from process to outcomes. New England Journal of Medicine. 376: 2004-2006.