“Listening in”: natural language processing in the clinical encounter

Resident Fellow Council, AAP
Jul 16, 2020

by Nicholas S. Race, M.D., Ph.D

“I just want to see patients.” Punctuated with an air of frustration, variants of this utterance routinely escape the lips of clinicians across every setting and profession within modern healthcare systems. Ire directed at documentation requirements in electronic health records (EHRs) often lies at the root of such dissatisfaction. In the years prior to their full-fledged arrival and implementation, EHRs were championed as the future of medicine. Promises included newfound efficiency alongside unprecedented access to information, accuracy of the historical record, error reduction, and integrated clinical decision support (CDS) tools. Some success has undoubtedly been achieved in these areas. However, on the front lines of clinical care, there are numerous perceived shortcomings in EHR implementation, efficiency, and accuracy [1, 2]. Look no further than CMS’s recent “Patients Over Paperwork” relaxation of federal documentation requirements in the wake of the ongoing COVID-19 pandemic for tangible evidence of the burden modern EHR documentation places on clinicians and the barriers it presents to patient care [3].

While process adherence and patient satisfaction have been reported to improve in the age of EHRs, efficiency benefits have not been realized to the same degree [4]. Indeed, it has been reported that nearly two-thirds of the work-day effort of practicing physicians is spent working with EHRs, compared to under one-third spent face-to-face with patients [2]. Other phenomena, such as copy-forwarding (and thus error-forwarding) and billing-driven documentation, have been reported to compromise both the utility and data integrity of EHRs [5–8]. Dedicated efficiency training has yielded mixed results; the net perceived benefits to clinicians, though statistically significant, are small and likely of little practical significance [9]. These problems affect not only practicing clinicians but also students and resident trainees, who have been reported to spend more than three times as many of their work hours on EHR systems as on direct patient care [10]. A collection of technologies falling under the umbrella of natural language processing (NLP) may hold promise in addressing some of the current shortcomings of, and perceived frustrations with, EHR systems by enhancing usability, functionality, efficiency, and accuracy.

Natural Language Processing: An Overview

NLP broadly refers to computer algorithms designed to interpret communication between humans. These tools sit at the intersection of artificial intelligence and linguistics. At a basic level, the computational building blocks of NLP include text preprocessing (tokenization, part-of-speech tagging, and syntactic parsing), named entity recognition, context extraction, and association and relation extraction [11]. For the clinician, it is less important to understand how NLP works than to understand what NLP does and can do. At its core, NLP is built on a foundation of identifying and classifying words or phrases that represent particular concepts, and then understanding and interpreting the relationships between the recognized concepts [11]. NLP research and application development in healthcare to date has been dominated by the syntactic analysis of unstructured data (“free text”) in clinician notes, transforming it into structured, standardized data fields for improved EHR data fidelity and process optimization [12]. For example, integration of NLP into specific clinical pathways in EHRs has enabled process optimizations including automated spelling/grammar correction with context-specific expansion of acronyms, automated reading and assignment of ICD codes to radiology reports, and automated combing of clinical notes and procedure reports to assess the status of colonoscopy inquiry/response/performance, among many more use cases [11]. NLP also includes optical character recognition to transform image files (e.g., PDF scans of outside hospital records) into text files that may then be parsed, searched, and so on [13].

More recently, clinical decision support (CDS) tools have grown in popularity, ranging from simple outpatient reminder systems for scheduled preventative or follow-up care to automated identification of patients with potential nosocomial infections and provision of appropriate antibiotic recommendations [11]. More specific NLP-augmented CDS use cases include reducing rates of medical imaging when not indicated [14,15], efforts to reduce adverse drug events [16–18], and improving glucose control in diabetics by modifying physician practice patterns [19]. IBM’s Watson has gained recognition by building CDS with integrated NLP for unstructured datasets (the medical literature at large) to gather, interpret, and summarize rapidly evolving clinical literature and provide personalized recommendations for patient care [20]. Watson has also been used to mine EHR data in search of new, previously undiscovered patterns in patients and their disease processes [11]. Such wide-ranging applications of NLP technology illustrate the diversity of opportunities for such tools in healthcare.
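
To make the text-processing building blocks from this overview concrete, the sketch below runs tokenization, part-of-speech tagging, and named entity recognition with the open-source spaCy library. This is a minimal illustration, not a clinical product: the model is a general-purpose English pipeline (real systems would use a domain-trained model such as scispaCy), and the example sentence is invented.

```python
# Minimal sketch of core NLP building blocks using spaCy.
# Assumes the general-purpose model is installed:
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# An invented example sentence; real pipelines would process clinician notes.
doc = nlp("Patient denies chest pain but reports shortness of breath since Tuesday.")

# Tokenization and part-of-speech tagging
for token in doc:
    print(token.text, token.pos_)

# Named entity recognition: spans the model classifies as known concept types
for ent in doc.ents:
    print(ent.text, ent.label_)
```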

Speech recognition, too, falls under the purview of NLP [12,13] and is used for dictation by many clinicians. Speech recognition (SR) within healthcare-oriented NLP efforts has to date largely been focused on dictation of clinical notes by healthcare professionals [12,13]. Some clinicians prefer dictating notes to typed entry, but the evidence does not support a clear perceived benefit from SR in terms of data entry efficiency, quality of care, documentation quality, or improved workflow [21]. This is understandable, as SR-mediated, physician-driven documentation still requires extra time set aside during or after the encounter, consuming over half of physicians’ working time and effort [2]. Given the current state of EHR use and the perceptions surrounding it, there remains considerable room for improvement, and opportunity exists for NLP advancement to play a role. One such area would be NLP/A.I.-augmented generation and interpretation of clinical documentation [12]. If further developed and implemented effectively, NLP-augmented EHRs and clinical encounters could enable a reversal of current trends and allow more of clinical personnel’s working time and effort to be spent face-to-face with patients. NLP thus possesses wide-ranging potential to improve healthcare professionals’ interactions with EHR systems, in ways ranging from streamlined workflows to next-generation CDS tools [13].
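
For a sense of what the dictation side looks like in code, here is a bare-bones sketch using the open-source SpeechRecognition package for Python. The file name is hypothetical, and the generic web recognizer is a stand-in for the domain-tuned engines that commercial clinical dictation products actually use.

```python
# Hedged sketch of dictation-style speech-to-text; "dictation.wav" is a
# hypothetical audio file, and the generic recognizer stands in for
# domain-tuned clinical SR engines.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("dictation.wav") as source:
    audio = recognizer.record(source)  # read the entire audio file

try:
    print(recognizer.recognize_google(audio))  # send audio to a generic web API
except sr.UnknownValueError:
    print("Speech was unintelligible to the recognizer.")
```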

Looking Forward: NLP Augmentation of the Clinician-Patient Encounter

One area which has received significantly less attention than dictation is the use of speech recognition, voice analytics, and NLP during the physician-patient encounter. Imagine an artificial intelligence system equipped with such tools “listening in” to live conversations between doctors and patients as they happen: a next-generation “scribe 2.0,” an A.I./NLP-enabled transcription system and CDS tool in one. One positive impact could be automated generation of encounter documentation (even if partial) to improve clinician workflow. Somewhat to this effect, human scribes have gained popularity in recent years to streamline workflow, with some success [22,23]. However, human scribes are expensive and can infringe on patient privacy and the doctor-patient relationship by introducing another human witness to the encounter [24]. A.I.-mediated data capture, of course, would obviate these hurdles by removing the additional human listener. Additional conceivable benefits of automated interview data capture for documentation generation include reduced dependence on fallible human memory, more complete record-keeping with audio backup files for medicolegal protection, and elimination of post-encounter documentation delays that slow communication with allied clinicians.

The next level of value provided by NLP in live analysis of the patient interview would extend beyond document generation into the realm of data analysis and interpretation. For example, it has been demonstrated that NLP technology can analyze free text across clinical notes to identify and risk-stratify patients [25,26]. The same could surely be done with clinical narratives extracted from raw recordings of clinician-patient encounters. Furthermore, much in the way clinician-generated notes are mined for patterns and new discoveries in modern bioinformatics-driven research projects (e.g., the association between PPI initiation and major cardiac event risk [27]), SR-transcribed free text of patient interviews could be mined as well. Perhaps raw patient stories would provide “cleaner” unstructured data for analysis than disjointed and disparate physician-driven documentation? This is an area ripe for exploration in research and development.
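
As a toy illustration of the risk-stratification idea, the sketch below trains a bag-of-words classifier over free-text narratives with scikit-learn. The notes, labels, and flagging task are invented for demonstration; a real system (such as the cirrhosis cohort work cited above) would be trained and validated on large, curated clinical corpora.

```python
# Toy sketch: risk-stratifying patients from free-text narratives.
# All notes and labels below are invented; this is not a validated model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "long-standing alcohol use, ascites noted on exam",
    "routine follow-up, no complaints, labs unremarkable",
    "jaundice and spider angiomata, history of hepatitis C",
    "annual physical, healthy adult, no significant history",
]
labels = [1, 0, 1, 0]  # 1 = flag for cirrhosis work-up (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

new_note = "exam shows ascites; patient reports heavy drinking"
print(model.predict_proba([new_note])[0][1])  # estimated probability of the flag
```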

Taken a step further, A.I. interpretation of the patient interview would permit analysis not only of what is said in the interview, but also of how it is said. NLP-driven vocal metric analysis of quantitative speech elements from clinician-patient interview audio has demonstrated some promising ability to identify clinically relevant information. For example, NLP of speech complexity using two simple markers (maximum phrase length and use of determiners such as “which”) has predicted emergent psychosis in at-risk youth with 100% accuracy, outperforming classification from clinical interviews by mental health professionals [28]. Similarly, NLP has been used to predict mental and physical health responses to bereavement [29], predict adjustment to cancer [30], classify suicide notes [31], discriminate between healthy age-related memory loss and mild cognitive impairment [32], and screen for drug abuse [33]. Specific to PM&R, imagine if passive recording and analysis of the speech patterns of brain injury or stroke patients could identify subtle changes with prognostic value for recovery potential or diagnostic value for the development of specific complications, or could direct and target pharmacologic or speech therapy interventions. More broadly, in nearly any acute rehabilitation patient, NLP-based assessment of patient adjustment to newfound disability and functional impairment could prove useful in the manner it already has for adjustment to cancer [30]. Ultimately, the hope would be for refinement and validation of these approaches, alongside development of novel NLP use cases, to be integrated seamlessly and productively into actionable CDS tools that aid physician decision-making in real time [34]. However, the number of tasks, pathway integrations, and barriers to adoption is vast. As such, the success of NLP endeavors will depend on coordinated efforts across the medical community at large if a practical, scalable impact is to be made.
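
To illustrate, the two speech-complexity markers described above (maximum phrase length and determiner use) could be approximated from a transcript in a few lines of Python. This is a simplified reconstruction for intuition only, not the cited study’s actual feature extraction, and it approximates a “phrase” as a sentence.

```python
# Simplified reconstruction of two speech-complexity markers; a "phrase" is
# approximated here as a sentence, which is an assumption of this sketch.
import spacy

nlp = spacy.load("en_core_web_sm")

def speech_markers(transcript: str) -> dict:
    doc = nlp(transcript)
    words = [t for t in doc if not t.is_punct]
    phrase_lengths = [sum(not t.is_punct for t in sent) for sent in doc.sents]
    return {
        "max_phrase_length": max(phrase_lengths, default=0),
        "determiner_rate": sum(t.pos_ == "DET" for t in words) / max(len(words), 1),
    }

print(speech_markers("I saw the bird which flew over the house. It was small."))
```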

Technical, Financial, and Systemic Barriers to NLP-augmented Documentation and Encounters

Barriers to widespread adoption and integration of A.I./NLP-augmented clinical encounter technologies are vast and include technical, financial, legal, ethical, and systemic challenges [12,13]. From a technical standpoint, speech recognition currently struggles even with the computationally simpler task of single-speaker dictation capture. Automated SR often requires human over-reads and performs worse than experienced human medical transcriptionists [21,35]. This owes to numerous factors, including medical jargon, overlapping abbreviations, colloquialisms, homophones/homonyms, and other complex language features that obfuscate automated syntactic interpretation [21,35]. Live NLP of clinical encounters would be further complicated by multiple speakers (physician and patient) and the stylistic speech variables of natural conversation (e.g., sarcasm, accents, figures of speech). Even the cutting-edge SR technology behind Siri and Alexa, built by our most powerful tech companies, struggles with such stylistic speech variables [36]. At the current rate of technological advancement, however, it is not hard to envision a near future in which these technical hurdles exist only in history books. In spite of looming technical and interpersonal hurdles, systemic and institutional barriers are perhaps the largest consideration for the proposed NLP tool development and implementation in the clinical encounter.
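
One concrete piece of the multi-speaker problem, separating “who spoke when” (speaker diarization), already has open-source tooling. The sketch below uses the pyannote.audio toolkit; the checkpoint name and audio file are assumptions, and pretrained pyannote pipelines additionally require a Hugging Face access token, so treat this as an outline rather than a turnkey solution.

```python
# Hedged sketch of speaker diarization for a recorded clinical encounter.
# "encounter.wav" is hypothetical; the pretrained pipeline requires a
# Hugging Face access token, and checkpoint names vary by pyannote version.
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization")
diarization = pipeline("encounter.wav")

# Print each speech turn with its anonymous speaker label
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s-{turn.end:.1f}s: {speaker}")
```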

The technical hurdles between the current state of the technology and a workable future product are likely best addressed by tech giants such as Amazon, Apple, and Google, owing to their nearly unlimited resources and the requisite internal expertise to develop next-generation NLP tools. Systemic and structural challenges within healthcare, and between the healthcare and tech industries, further complicate the path forward. The tech giants, powerful and resource-rich as they are, have not historically been intimately involved in the healthcare space beyond wearable monitoring devices. As such, tech companies are at present the furthest from the clinical encounter, and perhaps the least likely to be invited into the room by those who “own” it: physicians, patients, and healthcare systems. These “owners” are thus the gatekeepers of the training data (real clinical encounters) necessary to develop the NLP algorithms for the clinical encounter of the future. With this in mind, partnerships between the tech giants and existing, healthcare-oriented organizations already “in the room” (Dragon Dictation; Epic, Cerner, and Allscripts; etc.) may be more effective. These entities and healthcare-providing organizations need one another to succeed if development, integration, and adoption of NLP services in clinical encounters and EHRs are to be realized. However, a present lack of clearly aligned goals and incentives hinders collaborative development and progress. The relationships among these stakeholders will also be at the core of the financial concerns: who is paying for it, what is the incentive, and why? Will the development be paid for, or only the finished product? Undertaking development and implementation of next-generation NLP in the clinical encounter is a high-risk, high-cost endeavor with uncertain rewards at the end. Efficacy remains largely theoretical for now, though exciting data have been generated across numerous possible use cases, as described in the previous section. To successfully develop and implement the NLP-driven clinical interaction and documentation of the future, it is clear that the relationships between large tech companies and healthcare systems will need to be redefined and developed collaboratively.

Legal, Perceptual, and Practical Barriers to Adoption of NLP-augmented Clinical Encounters

Even if the technology existed, adoption would remain a concern. Privacy and data-sharing concerns, among others, would need to be addressed explicitly. Regulating how data are protected from, or shared with, insurers and the justice system would be critical and germane to the protection of both patients and healthcare practitioners. Interpersonal challenges include patient and physician distrust and the tendency of people who perceive they are being recorded to alter their natural behavior and speech [37–41]. Would the presence of a recording device fundamentally change the interaction between physicians and patients? Would patients be as forthcoming with information during clinical encounters, or would the recording create a perception of non-privacy, as in the case of human scribes? Would physicians become less clinically efficient by spending more time covering themselves medicolegally in every recorded conversation? Would the efficiency gains of NLP-augmented documentation offset the “efficiency losses” of more thorough clinical encounters, in a manner that restores working time to direct patient care? These practical questions are all unknowns at this stage and merit deeper exploration through research. All of these hurdles would undoubtedly hinder adoption in a healthcare industry that is historically slow to adopt new technologies, not to mention the litigious medicolegal culture of the US.

Final Thoughts

We are clearly far from the day when next-generation NLP technology revolutionizes clinical encounters, documentation, and decision-making. Many hurdles must be overcome, ranging from technical to financial to legal to ethical challenges. Frankly, the healthcare system at large is not yet ready for widespread integration of NLP into the clinical practice setting. The vision of what could be, however, if the power of NLP and A.I. in the clinical encounter can be realized to its full potential, is worth the chase. If we are to harness the power and productivity offered by refined NLP and A.I. tools in the clinical encounter, strategic alignment of stakeholders across the healthcare industry, insurers, the tech industry, government, and the general public is essential and must be the first step. Data describing the efficacy and practical use cases of NLP in the day-to-day clinical setting are scant at present and unlikely to sway investment of time, energy, and money toward rapid private-sector advancement. Perhaps the largest immediate barrier to cross is to demonstrate more convincingly the efficacy of NLP integration across metrics ranging from patient outcomes to documentation efficiency and accuracy to clinician wellness. The ability to ‘test-drive’ a functional product, paired with strong efficacy data from well-designed randomized trials against current standard processes, would be most likely to rapidly align stakeholders and drive toward an NLP/A.I.-augmented future clinical encounter.

Nick is a resident in Physical Medicine and Rehabilitation at the University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania. Follow him on Twitter @docdocrace.

References

[1] Holden RJ. Physicians’ beliefs about using EMR and CPOE: in pursuit of a contextualized understanding of health IT use behavior. International journal of medical informatics. 2010 Feb 1;79(2):71–80. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2821328/

[2] Sinsky C, Colligan L, Li L, Prgomet M, Reynolds S, Goeders L, Westbrook J, Tutty M, Blike G. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Annals of internal medicine. 2016 Dec 6;165(11):753–60. https://mfprac.com/web2018/07literature/literature/Misc/MD-Time_Sinsky.pdf

[3] Centers for Medicare and Medicaid Services (CMS). Physicians and Other Clinicians: CMS Flexibilities to Fight COVID-19. 2020 Mar 30. Online. https://www.cms.gov/files/document/covid-19-physicians-and-practitioners.pdf

[4] Adler‐Milstein J, Everson J, Lee SY. EHR adoption and hospital performance: time‐related effects. Health services research. 2015 Dec;50(6):1751–71. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4693851/

[5] Gagliardi JP, Turner DA. The electronic health record and education: Rethinking optimization. Journal of graduate medical education. 2016 Jul;8(3):325–7. http://www.jgme.org/doi/pdf/10.4300/JGME-D-15-00275.1?code=gmed-site

[6] Siegler EL, Adelman R. Copy and paste: a remediable hazard of electronic health records. The American journal of medicine. 2009 Jun 1;122(6):495–6. https://www.amjmed.com/article/S0002-9343(09)00157-0/fulltext

[7] Tsou AY, Lehmann CU, Michel J, Solomon R, Possanza L, Gandhi T. Safe practices for copy and paste in the EHR: Systematic review, recommendations, and novel model for Health IT collaboration. Applied clinical informatics. 2017;8(1):12. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5373750/

[8] Weis JM, Levy PC. Copy, paste, and cloned notes in electronic health records. Chest. 2014 Mar 1;145(3):632–8. https://www.sciencedirect.com/science/article/pii/S0012369215343786

[9] Dastagir MT, Chin HL, McNamara M, Poteraj K, Battaglini S, Alstot L. Advanced proficiency EHR training: effect on physicians’ EHR efficiency, EHR satisfaction and job satisfaction. In: AMIA Annual Symposium Proceedings 2012 (Vol. 2012, p. 136). American Medical Informatics Association. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3540432/

[10] Block L, Habicht R, Wu AW, Desai SV, Wang K, Silva KN, Niessen T, Oliver N, Feldman L. In the wake of the 2003 and 2011 duty hours regulations, how do internal medicine interns spend their time?. Journal of general internal medicine. 2013 Aug 1;28(8):1042–7. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3710392/

[11] Demner-Fushman D, Chapman WW, McDonald CJ. What can natural language processing do for clinical decision support?. Journal of biomedical informatics. 2009 Oct 1;42(5):760–72. https://www.sciencedirect.com/science/article/pii/S1532046409001087

[12] Nadkarni PM, Ohno-Machado L, Chapman WW. Natural language processing: an introduction. Journal of the American Medical Informatics Association. 2011 Sep 1;18(5):544–51. https://academic.oup.com/jamia/article/18/5/544/829676

[13] Bresnick J. What is the Role of Natural Language Processing in Healthcare? HealthIT Analytics. Online, Accessed 29 July 2018. https://healthitanalytics.com/features/what-is-the-role-of-natural-language-processing-in-healthcare

[14] Blackmore CC, Mecklenburg RS, Kaplan GS. Effectiveness of clinical decision support in controlling inappropriate imaging. Journal of the American College of Radiology. 2011 Jan 1;8(1):19–25. https://www.sciencedirect.com/science/article/pii/S1546144010003893

[15] Raja AS, Ip IK, Prevedello LM, Sodickson AD, Farkas C, Zane RD, Hanson R, Goldhaber SZ, Gill RR, Khorasani R. Effect of computerized clinical decision support on the use and yield of CT pulmonary angiography in the emergency department. Radiology. 2012 Feb;262(2):468–74. https://pubs.rsna.org/doi/pdf/10.1148/radiol.11110951

[16] Nuckols TK, Smith-Spangler C, Morton SC, Asch SM, Patel VM, Anderson LJ, Deichsel EL, Shekelle PG. The effectiveness of computerized order entry at reducing preventable adverse drug events and medication errors in hospital settings: a systematic review and meta-analysis. Systematic reviews. 2014 Dec;3(1):56. https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/2046-4053-3-56

[17] Ranji SR, Rennke S, Wachter RM. Computerised provider order entry combined with clinical decision support systems to improve medication safety: a narrative review. BMJ Qual Saf. 2014 Sep 1;23(9):773–80. https://qualitysafety.bmj.com/content/23/9/773.short

[18] Wolfstadt JI, Gurwitz JH, Field TS, Lee M, Kalkar S, Wu W, Rochon PA. The effect of computerized physician order entry with clinical decision support on the rates of adverse drug events: a systematic review. Journal of general internal medicine. 2008 Apr 1;23(4):451–8. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2359507/

[19] O’Connor PJ, Sperl-Hillen JM, Rush WA, Johnson PE, Amundson GH, Asche SE, Ekstrom HL, Gilmer TP. Impact of electronic health record clinical decision support on diabetes care: a randomized trial. The Annals of Family Medicine. 2011 Jan 1;9(1):12–21. http://www.annfammed.org/content/9/1/12.full.pdf

[20] Doyle-Lindrud S. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions. Clinical journal of oncology nursing. 2015 Feb 1;19(1):31. https://cjon.ons.org/sites/default/files/srTechSavvyFeb2015Page1_0.pdf

[21] Johnson M, Lapkin S, Long V, Sanchez P, Suominen H, Basilakis J, Dawson L. A systematic review of speech recognition technology in health care. BMC medical informatics and decision making. 2014 Dec;14(1):94. https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/1472-6947-14-94

[22] Yan C, Rose S, Rothberg MB, Mercer MB, Goodman K, Misra-Hebert AD. Physician, scribe, and patient perspectives on clinical scribes in primary care. Journal of general internal medicine. 2016 Sep 1;31(9):990–5. https://link.springer.com/article/10.1007/s11606-016-3719-x

[23] Shultz CG, Holmstrom HL. The use of medical scribes in health care settings: a systematic review and future directions. The Journal of the American Board of Family Medicine. 2015 May 1;28(3):371–81. http://www.jabfm.org/content/28/3/371.short

[24] Hafner K. A Busy Doctor’s Right Hand, Ever Ready to Type. The New York Times. 12 Jan 2014. https://www.nytimes.com/2014/01/14/health/a-busy-doctors-right-hand-ever-ready-to-type.html?_r=0

[25] Chang EK, Yu CY, Clarke R, Hackbarth A, Sanders T, Esrailian E, Hommes DW, Runyon BA. Defining a Patient Population With Cirrhosis. Journal of clinical gastroenterology. 2016 Nov 1;50(10):889–94. https://www.ncbi.nlm.nih.gov/pubmed/27348317

[26] Osborne JD, Wyatt M, Westfall AO, Willig J, Bethard S, Gordon G. Efficient identification of nationally mandated reportable cancer cases using natural language processing and machine learning. Journal of the American Medical Informatics Association. 2016 Mar 28;23(6):1077–84. https://academic.oup.com/jamia/article/23/6/1077/2399248

[27] Shah NH, LePendu P, Bauer-Mehren A, Ghebremariam YT, Iyer SV, Marcus J, Nead KT, Cooke JP, Leeper NJ. Proton pump inhibitor usage and the risk of myocardial infarction in the general population. PloS one. 2015 Jun 10;10(6):e0124653. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0124653

[28] Bedi G, Carrillo F, Cecchi GA, Slezak DF, Sigman M, Mota NB, Ribeiro S, Javitt DC, Copelli M, Corcoran CM. Automated analysis of free speech predicts psychosis onset in high-risk youths. npj Schizophrenia. 2015 Aug 26;1:15030. https://www.nature.com/articles/npjschz201530

[29] Pennebaker JW, Mayne TJ, Francis ME. Linguistic predictors of adaptive bereavement. Journal of personality and social psychology. 1997 Apr;72(4):863. https://www.researchgate.net/profile/Tracy_Mayne2/publication/14108836_Linguistic_predictors_of_adaptive_bereavement/links/5716867108aeefeb022c39a9.pdf

[30] Owen JE, Giese-Davis J, Cordova M, Kronenwetter C, Golant M, Spiegel D. Self-report and linguistic indicators of emotional expression in narratives as predictors of adjustment to cancer. Journal of Behavioral Medicine. 2006 Aug 1;29(4):335–45. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.573.3063&rep=rep1&type=pdf

[31] Pestian JP, Matykiewicz P, Grupp-Phelan J. Using natural language processing to classify suicide notes. In: Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing 2008 Jun 19 (pp. 96–97). Association for Computational Linguistics. https://aclanthology.info/pdf/W/W08/W08-0616.pdf

[32] Roark B, Mitchell M, Hollingshead K. Syntactic complexity measures for detecting mild cognitive impairment. In: Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing 2007 Jun 29 (pp. 1–8). Association for Computational Linguistics. http://www.aclweb.org/anthology/W07-1001

[33] Butler SF, Venuti SW, Benoit C, Beaulaurier RL, Houle B, Katz N. Internet surveillance: content analysis and monitoring of product-specific internet prescription opioid abuse-related postings. The Clinical journal of pain. 2007 Sep 1;23(7):619–28. http://beau.fiu.edu/writings/butler%20et%20al%202007.pdf

[34] Demner-Fushman D, Chapman WW, McDonald CJ. What can natural language processing do for clinical decision support?. Journal of biomedical informatics. 2009 Oct 1;42(5):760–72. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2757540/

[35] David GC, Garcia AC, Rawls AW, Chand D. Listening to what is said–transcribing what is heard: the impact of speech recognition technology (SRT) on the practice of medical transcription (MT). Sociology of health & illness. 2009 Sep;31(6):924–38. https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1467-9566.2009.01186.x

[36] Paul S. Voice is the next big platform, unless you have an accent. WIRED Magazine. 20 March 2017. https://www.wired.com/2017/03/voice-is-the-next-big-platform-unless-you-have-an-accent/

[37] Lamb R, Mahl GF. Manifest reactions of patients and interviewers to the use of sound recording in the psychiatric interview. American Journal of Psychiatry. 1956 Mar;112(9):731–7. https://ajp.psychiatryonline.org/doi/abs/10.1176/ajp.112.9.731

[38] Ariel B, Farrar WA, Sutherland A. The effect of police body-worn cameras on use of force and citizens’ complaints against the police: A randomized controlled trial. Journal of quantitative criminology. 2015 Sep 1;31(3):509–35. https://www.repository.cam.ac.uk/bitstream/handle/1810/246429/JQC11.pdf?sequence=1&isAllowed=n

[39] Ariel B, Sutherland A, Henstock D, Young J, Drover P, Sykes J, Megicks S, Henderson R. Wearing body cameras increases assaults against officers and does not reduce police use of force: Results from a global multi-site experiment. European Journal of Criminology. 2016 Nov;13(6):744–55. http://journals.sagepub.com/doi/abs/10.1177/1477370816643734

[40] Ariel B, Sutherland A, Henstock D, Young J, Drover P, Sykes J, Megicks S, Henderson R. “Contagious accountability” a global multisite randomized controlled trial on the effect of police body-worn cameras on citizens’ complaints against the police. Criminal justice and behavior. 2017 Feb;44(2):293–316. https://www.repository.cam.ac.uk/bitstream/handle/1810/260710/Ariel_et_al-Journal_of_Criminal_Justice_and_Behavior-AM.pdf?sequence=1

[41] Hedberg EC, Katz CM, Choate DE. Body-worn cameras and citizen interactions with police officers: Estimating plausible effects given varying compliance levels. Justice quarterly. 2017 Jun 7;34(4):627–51. https://www.researchgate.net/profile/Charles_Katz/publication/303988618_Body-Worn_Cameras_and_Citizen_Interactions_with_Police_Officers_Estimating_Plausible_Effects_Given_Varying_Compliance_Levels/links/59de4b90aca27247d7942e54/Body-Worn-Cameras-and-Citizen-Interactions-with-Police-Officers-Estimating-Plausible-Effects-Given-Varying-Compliance-Levels.pdf
