Developing Physician Leaders in Artificial Intelligence (AI) & Medicine

By KJ Lavan

RESEARCH STATEMENT



My passion for inspiring well-being through neuroplasticity & epigenetics has led me to this question:

What should medical students learn about artificial intelligence (AI) as it relates to well-being?

Starting from the concept of a “re-imagined medical school,” my continued research focuses on developing a pilot program through which physician-leaders become fluent in both AI & medicine. Given the complexities of a multidisciplinary, integrated approach to learning, it is critical to distinguish between what all physicians must know for everyday practice and what some physicians should know to drive innovation.

Medical schools play a critical role not only in helping students learn but also, just as importantly, in nurturing their academic interests while sowing the seeds of future leadership. In light of emerging AI innovations projected to have an increasingly significant impact on medical practice, there is growing interest in training current and future physicians in AI2.

Although the competencies required for clinical use of AI are, for the most part, similar to those for other novel technologies, there are important qualitative differences. Of critical importance are concerns centered on health equity, explainability, & data security3-5.

Drawing on my research & documented experiences at Texas A&M Health Science Center College of Medicine and MIT Critical Data’s “datathons,” I recommend a two-fold approach: data science foundations incorporated into the baseline health research curriculum, combined with extracurricular programs to cultivate leadership in this space. This recommendation can & should be tailored to the context and strengths of each medical school, its partnerships, and its student body.

Understanding AI in the Clinical Context

Physicians need to understand AI just as they need to understand any other technology that informs clinical decision-making. For instance, a physician utilizing MRI does not need to understand the nuclear spin physics differentiating T1- and T2-weighted scans, but they do need to be able to:

a) Utilize it – identify whether it is appropriate for a particular clinical context, and what essential inputs are needed to produce meaningful results

b) Interpret it – understand and interpret the results with a sensible degree of accuracy, remaining alert to potential sources of clinical irrelevance, bias, or error

c) Explain it – convey the processes underlying the results lucidly enough to be understood by others (e.g. allied health professionals and patients)

These skills take on particular nuances in the AI context. For (a) and (b), it is imperative that physicians appreciate the context-specific nature of AI: performance demonstrated in one restricted context may not transfer to another. Furthermore, physicians must be aware of factors that may reduce an algorithm’s performance for specific patient groups3.
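
To ground this point, below is a minimal sketch (in Python) of what a subgroup performance audit might look like before clinical deployment. The data, column names, and metric choice are entirely hypothetical placeholders; a real audit would also examine calibration, confidence intervals, and sample sizes.

```python
# Minimal sketch: auditing a model's discrimination by patient subgroup.
# All data and column names here are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical held-out test set: true outcomes, model risk scores, demographics
test = pd.DataFrame({
    "y_true":  [0, 1, 0, 1, 1, 0, 1, 0, 1, 0],
    "y_score": [0.2, 0.8, 0.3, 0.6, 0.9, 0.1, 0.4, 0.5, 0.7, 0.2],
    "sex":     ["F", "F", "F", "F", "M", "M", "M", "M", "M", "F"],
})

print(f"Overall AUROC: {roc_auc_score(test['y_true'], test['y_score']):.2f}")

# Strong overall performance can mask poor performance within a subgroup,
# which is the context-specificity concern raised above.
for sex, grp in test.groupby("sex"):
    auc = roc_auc_score(grp["y_true"], grp["y_score"])
    print(f"AUROC for sex={sex}: {auc:.2f} (n={len(grp)})")
```

A physician need not write such an audit themselves, but knowing that it exists, and asking whether it was done, is part of competencies (a) and (b).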

AI has frequently been criticized for the “black box” effect – the mechanism by which a model arrives at a decision may be indecipherable1. The absence of technical “explainability,” nevertheless, does not nullify the obligations of (c). To fulfill the requirements of informed consent and clinical collaboration, a physician may be asked to communicate their understanding of the genesis, nature, and justification of an algorithm’s results to patients, families, and colleagues.
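
As one hedged illustration: even when a model’s internals are opaque, model-agnostic tools can help a physician form the understanding that obligation (c) requires. The sketch below uses scikit-learn’s permutation importance on a synthetic example; the feature names are hypothetical, and such summaries complement rather than replace clinical judgment.

```python
# Minimal sketch: a model-agnostic summary of which inputs most influence
# a "black box" model, via permutation importance. Data are synthetic and
# feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["age", "creatinine", "systolic_bp", "heart_rate"]
X = rng.normal(size=(200, len(features)))
# Synthetic outcome driven mostly by the second feature
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the resulting drop in score;
# large drops indicate inputs the model leans on heavily.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name}: mean importance {score:.3f}")
```

Such a summary does not make the model transparent, but it gives a physician concrete, communicable footholds when discussing an algorithm’s output with patients and colleagues.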

Understanding AI in the Broader Professional Context

The professional obligations of physicians extend beyond the clinician role into leadership and health advocacy. The disruptive prospects of AI in healthcare raise significant operational and ethical challenges, and these challenges demand collective preparedness among physicians to ensure patient welfare.

Health equity is a major concern with respect to the impact of algorithmic clinical decision support. The concern involves the underrepresentation of minority populations in training datasets3, as well as existing biases being learned and perpetuated by algorithms4. Further, risks surrounding privacy and data security are becoming increasingly clear5. Yet AI itself has the potential to alleviate some of medicine’s current inequities and biases6. Both possibilities should be top of mind for physicians, including advocacy for the ethical and equitable development and deployment of these systems. Ultimately, physicians must hold themselves to account as responsible stewards of patient data, ensuring that the fundamental trust between provider and patient is not compromised.
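
As a small, hedged illustration of the underrepresentation concern, a training cohort’s demographic composition can be compared against a reference population before any model is built. The group labels and reference proportions below are placeholders, not real data.

```python
# Minimal sketch: comparing a training cohort's demographic mix against a
# reference population. Group labels and proportions are placeholders.
import pandas as pd

cohort = pd.Series(["A", "A", "A", "B", "A", "A", "C", "A", "A", "B"],
                   name="group")
cohort_pct = cohort.value_counts(normalize=True)

# Hypothetical reference population proportions (e.g. census estimates)
reference_pct = pd.Series({"A": 0.60, "B": 0.25, "C": 0.15})

comparison = pd.DataFrame({"cohort": cohort_pct, "reference": reference_pct})
comparison = comparison.fillna(0.0)
comparison["gap"] = comparison["cohort"] - comparison["reference"]
print(comparison.round(2))  # large negative gaps flag underrepresentation
```

A check this simple cannot establish equity on its own, but it illustrates the kind of question physician advocates should be asking of any dataset used to train clinical algorithms.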

What Should Be the Pedagogy for AI Learning?

Nurturing fluency in both AI and medicine must be carefully coordinated. This dual competence is vital because selecting targets for AI in medicine that are both clinically relevant and computationally feasible is a difficult undertaking. A siloed approach is dangerous: without information sharing, important clinical targets may be overlooked, and the result is often “solutions in search of problems”7. To avoid these pitfalls, a multidisciplinary, integrated approach to learning is necessary.

As stated earlier, it is critical to distinguish between what all physicians must know for everyday practice and what some physicians should know to drive innovation. The former should be addressed through curricular elements; the latter through robust extracurricular programs. Amid the inherent complexities of a multidisciplinary, integrated approach to learning, this distinction is a key precept. Both elements also advance the ongoing discussion of physician identity, now and into the future. This aligns with the concept of the “reimagined medical school” – establishing a core knowledge framework while supporting students who wish to pursue in-depth study of specific subject areas8.

My ongoing research is focused on designing and implementing a pilot that will be embraced by administration as an important part of the Faculty’s strategic plan8. Pre-clinical curriculum lectures would introduce all students to these concepts, while a computing-for-medicine program would provide clinical data science projects and practical programming skills to students with a specific interest9. I also envision a “Medicine AI” student interest group hosting extracurricular seminars on the subject while helping to build connections between medical students and a city’s larger AI ecosystem – academia and industry. As part of my research, I am compiling a list of potential Medicine AI offerings encompassing both pre-clinical curricular & extracurricular learning.

In the same vein, Harvard Medical School offers a clinical informatics training elective for medical students10. The elective pairs students with faculty mentors in their area of interest and combines instruction with hands-on learning about how informatics operates within health systems. In collaboration with the MIT Critical Data group, the school also offers a project-based course on data science in medicine11. In its quest to increase interest in AI, the MIT Critical Data group puts on “datathons” – short competitions in which clinicians and scientists collaborate to use data to solve clinical problems12. These collaborations demonstrate the potential of working partnerships with non-medical faculties to enhance the education of medical students.

The experiences described above offer insight into an array of important developments in both the curricular and extracurricular domains. I must stress the need for synergy between the learning objectives and their delivery, and for maintaining a learner-centered core that prioritizes student engagement over passive knowledge transfer. Furthermore, these concepts should be integrated with other features of the curriculum wherever relevant (e.g. including an AI case study in an ethical clinical decision-making workshop). This is a natural evolution, as the competencies necessary to work effectively with AI frequently intersect with those required to fulfill other fundamental aspects of the physician role: leadership, advocacy, and communication.

After Graduate Medical Education, What’s Next?

Although the scope of this research does not include postgraduate medical education (PGME) and continuing medical education (CME), it is imperative to understand that medical education is a life-long undertaking; attention must therefore be given to learners at later stages13. These competencies could be incorporated into PGME curricula within existing research or Quality Improvement (QI) blocks. Research training for medical or surgical trainees might involve technical areas such as data science or biomedical engineering, or extend into medical education, health services research, and ethics. QI would play the central role of assessing robust innovations and translating them into care. Although in-person CME workshops and online offerings give clinicians a career-long means to refresh competencies, more is needed: established practitioners must be endowed with the skills and knowledge to evolve alongside this field14. The Medicine AI offerings mentioned earlier, which I am currently compiling, will serve as a scalable resource to support learners at different stages of their careers.

The Action Value

The overarching objective of medical schools is to train physicians. That mission must now include AI, which is positioned to have an increasingly considerable impact on medicine. The success of this undertaking depends on students having both curricular and extracurricular learning experiences, and these experiences must account for the ethical considerations, technical constraints, and clinical usage of the tools at their disposal.

Given the significance and probable impact of this technology, action must be taken to establish foundational AI literacy among physicians at large, as well as to nurture the skills and interests of the future leaders who will drive innovation in this space.

REFERENCES

1. Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat. Med. 25, 44–56 (2019).

2. Wartman, S. A. The empirical challenge of 21st-century medical education. Acad. Med. 94, 1412–1415 (2019).

3. Adamson, A. S. & Smith, A. Machine learning and health care disparities in dermatology. JAMA Dermatol. 154, 1247–1248 (2018).

4. Parikh, R. B., Teeple, S. & Navathe, A. S. Addressing bias in artificial intelligence in health care. JAMA. http://jamanetwork.com/journals/jama/fullarticle/2756196. (2019).

5. Price, W. N. & Cohen, I. G. Privacy in the age of medical big data. Nat. Med. 25, 37–43 (2019).

6. Chen, I. Y., Joshi, S. & Ghassemi, M. Treating health disparities with artificial intelligence. Nat. Med. 26, 16–17 (2020).


7. Wiens, J. et al. Do no harm: a roadmap for responsible machine learning for health care. Nat. Med. 25, 1337–1340 (2019).

8. Prober, C. G. & Khan, S. Medical education reimagined: a call to action. Acad. Med. 88, 1407–1410 (2013).

9. Law, M., Veinot, P., Campbell, J., Craig, M. & Mylopoulos, M. Computing for medicine: can we prepare medical students for the future? Acad. Med. 94, 353 (2019).

10. Harvard Medical School Course Catalogue. PD530.7 Clinical Informatics. http://www.medcatalog.harvard.edu/coursedetails.aspx?cid=PD530.7&did=260&yid=2020 (2020).

11. MIT Critical Data. 2019.HST.953: Collaborative Data Science in Medicine. https://criticaldata.mit.edu/blog/2019/08/06/hst-953-2019/. (2020).

12. Aboab, J. et al. A “datathon” model to support cross-disciplinary collaboration. Sci. Transl. Med. 8, 333ps8 (2016).

13. Aschenbrener, C. A., Ast, C. & Kirch, D. G. Graduate medical education: its role in achieving a true medical education continuum. Acad. Med. 90, 1203–1209 (2015).

14. McMahon, G. T. The leadership case for investing in continuing professional development. Acad. Med. 92, 1075–1077 (2017).

15. Floridi, L. et al. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach. 28, 689–707 (2018).



