Editorial

Artificial Intelligence in Medicine and Medical Education: Current Applications, Challenges, and Future Directions


Manali Sarkar1, Mihnea-Alexandru Găman2, Juan C. Puyana3, Francisco J. Bonilla-Escobar4


doi: http://dx.doi.org/10.5195/ijms.2023.2626

Volume 12, Number 1: 9-13

The father of modern computing, Alan Turing, provided us with a basis for understanding intelligent machines through his famous “Turing test,” in which a machine is assessed on its capability to think like a human being.1 The term artificial intelligence (AI) was coined years later, in 1956, by John McCarthy.2 Until recently, most machines would fail the test, but current advances and leaps in technology have enabled programs such as ChatGPT (version 4) and Eugene Goostman to pass it.3,4 AI in medicine can be used via two interfaces: virtually, through big data applications, or physically, through robots, artificial neural networks, and prostheses. AI that works through big data applications is known as a large language model (LLM). LLMs are trained on large amounts of collected data so that they can generate responses to user queries. Convolutional neural networks (CNNs), on the other hand, are a type of AI-based neural network used to perform decision-making tasks with minimal human input through heuristics.5 This editorial aims to discuss the potential applications, current use, existing legal and regulatory frameworks, challenges, and future directions of AI in medicine and medical education, and issues calls to action for stakeholders.

AI: Potential and Current Use

AI is on the cusp of revolutionizing healthcare as we know it. It can improve medical education by personalizing education to individual students, increasing diagnostic precision, aiding healthcare professionals in decision-making, and reducing human error. Students at Duke and Stanford Universities are currently supporting studies on building AI-enhanced technologies to integrate into existing curricula and healthcare practices.6 The Human Diagnosis Project aims to improve diagnosis and provide it at accessible rates by combining machine learning with physician diagnosis.6 The Homer Stryker School of Medicine and “Resource Medical” have collaborated to provide an AI-based simulation center where medical students can train.6 AI can help educators better understand their students' weaknesses and develop tailor-made courses; when a student makes an error, the AI can detect the weak point and assist the student in overcoming it. The radiology department at the University of Florida is using AI to enhance mammography detection rates. Certain medical universities have introduced courses for their students to learn about upcoming technological healthcare innovations. To reduce human error during prescribing, AI tools such as MedAware, MedEye, and MedPass are being incorporated.6

The use of AI in research has exploded. AI, and especially generative AI, has been a useful aid in scholarly writing by improving vocabulary and grammar. Antiplagiarism programs use AI for plagiarism checks, and AI can assist with database searches and literature reviews. PubMed, a free database containing millions of articles, has updated its search engine with the Best Match (BM25) and LambdaMART learning-to-rank (L2R) AI algorithms: BM25 scores the search results, and L2R then reranks the first 500 of them. Query suggestion, query expansion, author name disambiguation, and automatic article indexing are also performed by AI.7 Despite these benefits, numerous obstacles must be overcome before AI becomes a staple in the medical field.
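Before turning to those obstacles, readers unfamiliar with this two-stage ranking may find a minimal sketch helpful. The example below scores a toy corpus with BM25 and then reranks the top hits with a placeholder function standing in for a learned model such as LambdaMART. It is an illustration only; the corpus, parameter values, and the rerank_score stand-in are assumptions, not PubMed's actual implementation.

```python
import math
from collections import Counter

# Toy corpus standing in for PubMed abstracts (assumption: not real data).
docs = [
    "artificial intelligence improves mammography detection rates",
    "convolutional neural networks diagnose skin lesions",
    "large language models assist scholarly writing and literature reviews",
    "medical education curricula integrate artificial intelligence ethics",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)
avgdl = sum(len(d) for d in tokenized) / N
df = Counter(term for d in tokenized for term in set(d))  # document frequency

def bm25_score(query, doc, k1=1.2, b=0.75):
    """First-stage relevance score, analogous to PubMed's Best Match (BM25)."""
    tf = Counter(doc)
    score = 0.0
    for term in query.split():
        if term not in tf:
            continue
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
    return score

def rerank_score(query, doc):
    """Placeholder for a learned reranker (e.g., LambdaMART); here a
    hypothetical proxy that simply rewards query-term coverage."""
    return sum(term in doc for term in query.split()) / len(query.split())

query = "artificial intelligence education"
# Stage 1: BM25 over the whole corpus (PubMed applies this to all matches).
stage1 = sorted(range(N), key=lambda i: bm25_score(query, tokenized[i]), reverse=True)
# Stage 2: rerank only the top results (PubMed reranks the first 500).
top_k = stage1[:500]
stage2 = sorted(top_k, key=lambda i: rerank_score(query, tokenized[i]), reverse=True)
print([docs[i] for i in stage2])
```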

Obstacles in AI Adoption

The four pillars of bioethics serve as a foundational framework for ethical decision-making in healthcare and are followed by all healthcare personnel.8 The pillars are autonomy, beneficence, non-maleficence, and justice. Hence, it stands to reason that the ever-evolving, dynamic sphere of AI should also be bound by the same pillars. Any examination of AI should first ask: is AI a moral agent, and if not, should AI be built to be such an agent?9

Manipulation and Bias

The foundation of a future clinician is the education they receive as an undergraduate. With the proliferation of AI and the use of LLMs, there is a worry that misuse by companies could manipulate users and impose hidden influences on them. Through algorithmic manipulation, AI can target and exploit the decision-making capacities of its users. The Cambridge Analytica scandal10 instilled hate among communities through targeted political advertisements, and with medicine so closely intertwined with politics, it stands to reason that psychological manipulation could be used to bypass users' autonomous will. More worrying is the concept of “garbage in, garbage out”: when AI is trained on pre-existing datasets that underrepresent minorities or contain racist and sexist stereotypes, its use to answer queries can reinforce those stereotypes.11 CNNs have been found to be less effective at diagnosing skin lesions in Black patients than in their White counterparts.12 This is especially worrying because Black patients have a higher mortality rate from melanoma, and early diagnosis is key to improving survival outcomes. Among equally sick Black and White patients, AI misclassified the Black patients as healthy and gave higher priority to the White patients.12 Prediction models for heart attacks misdiagnosed female patients; on further investigation, the models were found to have been trained exclusively on male datasets.12 Clinical trials and animal studies have historically favored male subjects, and when datasets generated from such studies are incorporated into AI, they can exacerbate gender bias in healthcare.
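One practical way to surface this kind of bias before deployment is to report a model's performance separately for each demographic subgroup rather than as a single aggregate figure. The sketch below computes per-group sensitivity from invented evaluation records; the group labels and numbers are assumptions for illustration, not data from the studies cited above.

```python
from collections import defaultdict

# Hypothetical evaluation records: (subgroup, true_label, predicted_label),
# where 1 = diseased and 0 = healthy. Values are invented for illustration.
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

def sensitivity_by_group(records):
    """Sensitivity (true-positive rate) per subgroup: of the truly diseased,
    how many did the model correctly flag?"""
    tp = defaultdict(int)   # true positives per group
    pos = defaultdict(int)  # truly diseased per group
    for group, truth, pred in records:
        if truth == 1:
            pos[group] += 1
            if pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos}

# A large gap between subgroups (here roughly 0.67 vs. 0.33) is the kind of
# disparity an audit should flag before clinical use.
print(sensitivity_by_group(records))
```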

Autonomy and Informed Consent

With the integration of AI into healthcare, it will not be long before physicians devise AI-driven treatment plans. Doctors have an ethical responsibility to inform patients, and patients have the right to be aware of the impact an AI system has on the success or failure of their treatment. Currently, most AI systems are built on black-box algorithms, and there is a lack of transparency about how the AI arrived at a treatment plan. If a treatment fails, who should be held responsible: the AI or the healthcare professional?13 There are currently no clear legal regulations, and this hinders patients' autonomy, as they do not have the full information needed to make an informed choice.

Data Privacy

Most AI developers are private organizations, and the integration of AI into medicine requires a significant amount of data to train algorithms to diagnose accurately and formulate treatment plans. Access to such data is where the issue of privacy comes into play, as there are no universal protocols for data encryption and sharing in AI projects. Furthermore, even if data were delinked, de-identification of photographs and radiographic images is impossible, especially if they contain unique or identifiable features. Search histories and data from smart appliances are not covered under the Health Insurance Portability and Accountability Act (HIPAA) and can be used to triangulate an individual's identity. Questions about the access to and security of AI-generated treatment plans remain unanswered.14 Stored data containing millions of patients' healthcare information could be sold on the dark web if adequate cybersecurity measures are not taken.

Legal Concerns

The legality of AI in medicine is murky: policymakers have been unable to formulate definitive laws, and the existing regulations (Table 1) are rudimentary at best.15

Table 1

Legal Regulations for Artificial Intelligence (AI) in the United States and Europe.15–19

Regulation: Description
Regulations in the United States
  Medical Devices: Regulated by the FDA under the US Federal Food, Drug, and Cosmetic Act (FDCA). Instruments, apparatus, implants, and other articles intended for diagnosis, treatment, or affecting bodily functions are defined as medical devices.
  21st Century Cures Act: Certain medical and decision-support software functions are exempted from FDA regulation if they relate to administrative support, lifestyle maintenance, patient records, or clinical data handling.
  Draft Guidance on Clinical Decision Support Software: The FDA has outlined criteria for exemption and emphasized the importance of transparency and user understanding in software recommendations.
  Software Pre-Cert Pilot Program: The FDA has launched a pilot program to pre-certify digital health developers based on criteria such as patient safety and product quality, allowing streamlined or exempted review processes for low-risk software devices.
  Proposed Regulatory Framework for AI/ML-based Software: The FDA has proposed a regulatory framework for AI software functioning as a medical device; it prioritizes optimizing performance and effectiveness while reducing risks.
  2023 AI Legislation: In 2023, 25 states introduced legislation on AI and 18 states adopted it. From February 1, 2024, Connecticut requires the State Department of Administrative Services to assess AI systems used by state agencies to rule out unlawful discrimination by AI.
Louisiana requested the Joint Committee on Technology and Cybersecurity to assess the impact of AI on policies and operations.
Maryland introduced a grant program assisting small and medium-sized manufacturers in implementing industrial AI.
North Dakota redefined “person” to exclude inanimate objects such as AI, as well as animals.
North Dakota, Puerto Rico, and West Virginia created councils to monitor AI used by state agencies.
Lawmakers and policymakers also debated measures to assess the impact of AI and specific privacy and ethical issues related to facial recognition software and autonomous cars.
  White House Executive Order on AI: Key highlights include the sharing of safety results by AI developers; the establishment of guidelines for safety, privacy, and algorithmic fairness; and protection against AI-enabled fraud.
  2024 Guidelines: By December 1, 2024, all agencies must implement AI safeguards or cease using the technology.
Regulations in Europe
  Classification Process under the Medical Device Regulation (MDR): The MDR introduces changes in the classification of medical devices, with software used for prediction or prognosis being classified as a medical device.
  Classification of Software under the MDR: AI software targeted at diagnosis or treatment is classified as class IIa, IIb, or III depending on its potential impact on health; other software is classified as class I.
  Conformity Assessment under the MDR: All medical devices undergo a conformity assessment before being released to the market. The assessment procedure varies with device classification and type.
  EU AI Act: The world's first comprehensive law governing AI. The Act enshrines a definition of AI, prohibits the use of AI for unlawful practices, and classifies AI systems into risk categories. National and EU officers are to be appointed to enforce the Act, and non-compliance can lead to fines.

Other Concerns

Future clinicians may become dependent on AI for the majority of their decisions and risk intellectual laziness and reduced critical thinking capacity compared with their current counterparts.16 AI applications, despite improving medication management, have been noted to overprescribe medications.17 All of these can lead to a loss of doctor-patient trust.18 AI-powered devices may also exacerbate existing disparities in medical outcomes among countries, as their use requires stable electricity and a stable internet connection with affordable data plans, which many low-income countries (LICs) and lower-middle-income countries (LMICs) lack. Certain AI applications have also been used to morph pictures and videos of female students in a multitude of ways: superimposing one face onto another, manipulating lip and facial movements to sync with a different audio track, and generating synthetic body movements. These morphed images and videos are then circulated on social media,19 and their exchange leads to psychological distress and damage to professional and personal relationships.

Integration of AI in Medical Curricula

In Medical Education

Previous studies have noted that medical students are eager to adopt AI and believe it will have a positive impact on medicine, but they perceive a need to transform the medical curriculum to incorporate it.20,21 The educational gaps noted by students were a lack of knowledge of and trust in AI applications, inadequate training to address AI-related ethical issues, and an inability to inform their patients about the features and risks of AI.20 Quinn et al.22 suggest a four-step framework for embedded AI ethics education. The first step is the formulation of new AI ethics lessons; the second, and crucial, step is aligning the newly formulated lessons with the existing curricula; the third is educating teachers in the prerequisite technical AI knowledge; and the final step is disseminating the knowledge to medical students through interactive visual aids or case-based discussions.

The feasibility of such a framework is questionable, as a major barrier remains deciding what AI ethics material should be incorporated into a medical student's already intensive and packed curriculum, and how. A second barrier is that medical colleges do not have data scientists or engineers on their faculty, so existing faculty, who may be ill-equipped, will have to serve as the primary instructors. Li et al.23 found that performance expectancy, hedonic motivation, and trust affect the use of AI by medical students and, on this basis, recommended enhanced awareness of AI and enrichment of the existing curriculum. To improve performance expectancy, medical educators should teach students how best to incorporate AI to improve their scholastic performance; for hedonic motivation, educators should use visual aids to deliver instruction on the use of AI; and for trust, students need to recognize that their educators are well versed in AI rather than faculty who have merely been burdened with teaching it.

In Ethical Education

There is a strong demand for a curriculum on ethics and law, but current ethical education is quite limited, with students reporting a self-perceived lack of knowledge.24,25 To fill this gap, student-led organizations (the Asian Medical Students Association of India, the Rotaract Club of Medicrew, the Rotaract Club of Caduceus, and the Medical Students Association of India) have founded their own bioethics units. These student-led units conceptualize, design, and implement projects to educate their members on ethics. They have been instrumental in fostering in medical students a hunger for knowledge, but they suffer from a lack of measurable outcomes and generalizability. The International Chair in Bioethics conducts a paid 26-week International Certificate Course in Bioethics and spearheads access to and the spread of bioethics education for all healthcare professionals.26 Its courses are taught by interdisciplinary experts. Regrettably, the discourse on and dissemination of AI ethics to medical students remain limited within their purview.

In Research

Research is another area where medical students use AI. Currently, there is wide discourse among journals on whether generative AI should be listed as an author,27 with major journals opposing the practice. Generative AI also produces fake citations to articles that do not exist, a phenomenon referred to as ‘artificial hallucinations'.28 Hence, AI-detection programs such as GPTZero29 and ZeroGPT have been devised to weed out generative AI content in manuscripts, with varying success.30 Medical students, as future authors and journal editors, should be educated about the judicious use of AI.

Future Directions

AI has ushered in a revolutionary age of unprecedented opportunities in medical diagnosis, treatment, and patient care, but deficits in AI training have the potential to create classes of future physicians ill-equipped to navigate the intricacies of a joint human-machine healthcare system. The urgency of attending to this gap cannot be overstated, and policymakers, medical educators, and stakeholders must take decisive action to strengthen and prioritize AI education. The current crisis stems from an absence of policies for the development and deployment of AI, which can be mitigated by establishing robust ethical frameworks. Medical institutions should overhaul their curricula and integrate AI ethics through dedicated coursework and interdisciplinary collaboration. Furthermore, we call for professional development and training opportunities to be provided to early-career as well as practicing physicians. A proactive approach is the need of the hour.

Role of the International Journal of Medical Students

As a journal for students and early-career physicians, the IJMS will serve as a global, non-judgmental platform for sharing perspectives, experiences, and empirical research on AI. Authors submitting manuscripts to the IJMS are advised not to use AI-generated content, as the Journal will scan for and reject articles found to contain it. The IJMS is committed to serving as an international forum for advancing knowledge and preparedness toward AI. In conclusion, current medical curricula lack a dedicated focus on AI; integrating AI education is imperative and should be achieved through a proactive, collaborative approach among stakeholders.

From this Issue at IJMS

The current issue of the IJMS brings forth seven original research articles, one narrative review and one systematic review, two case reports, and five experience articles, for a total of sixteen publications written by medical students and early-career scientists.

Mohamed et al. assessed the occurrence of generalized anxiety disorder (GAD) and its potential risk factors in nearly 400 students enrolled in a medical school in Sudan. Their data demonstrated that over 30% of the subjects suffered from GAD, with severe anxiety detected in 12.3% of the study population, which significantly impacted their daily activities. Moreover, they revealed that GAD was more likely to occur in females, students suffering from chronic illnesses, and final-year students.31 Similarly, Gul et al. investigated the occurrence of depression in medical students from Pakistan, pointing out, based on information collected from over 300 individuals, that nearly 20% of medical students suffer from depression and over a quarter are borderline cases. However, their assessment failed to identify risk factors for depression in the examined cohort, stressing the need for future research in the field of mental health amongst the future healthcare workforce.32 Moreover, Ozdemir et al. evaluated the interplay between sleep alterations and stress in medical students from Turkey, pointing out that healthcare undergraduates display poor sleep quality and life satisfaction, variables which seemed to be influenced by anthropometric indices, lifestyle, nutritional habits, and hormonal changes.33 Furthermore, Brown et al. investigated the presence of disorders of gut-brain interaction (DGBI) in medical students from the United Kingdom, highlighting that over three-quarters of the examined population experienced DGBI. The most affected individuals seemed to be those experiencing anxiety, depression, or somatic symptoms, or those who displayed poor nutritional habits, decreased physical or mental quality of life, or frequent medication use for various reasons. Worryingly, only a small proportion of the examined individuals sought help for DGBI management.34

On another note, Ganguli et al. evaluated the content of residency program websites and other websites used by applicants during the match process, revealing that data regarding the wellness of healthcare workers are often lacking despite being extremely relevant in the process of choosing a future specialty.35 In addition, Evensen-Martinez et al. examined the impact of pre-trip Spanish education on the outcome of a one-week medical trip to a Spanish-speaking region, stressing that a lack of previous knowledge of Spanish should not deter medical students from engaging in an international medical trip to a Spanish-speaking region.36

LeBron et al. investigated the appropriateness of empiric antibiotic management of uncomplicated cystitis in the emergency department, revealing no difference in prescribing between weekdays and weekend days, and that almost a third of antibiotic prescriptions were inappropriate for the disease.37

In a narrative review, Dailey examined the impact of psychosocial factors on the birth outcomes of women with substance use disorder, highlighting the relevance of socioeconomic status, maternal stress, and mental health.38 Rafiq et al. conducted a systematic review analyzing the influence of sociocultural variables on the study habits of medical students, revealing a potential impact of personal factors, behavior, and the environment on medical education.39

Several interesting case reports have been accepted for publication in the current issue. Saldaña-Ruiz et al. present the case of a female adolescent who required liver transplantation for fulminant hepatic failure and was eventually diagnosed with Hodgkin's lymphoma with liver involvement.40 González-Zuelgaray et al. present a case of acute renal failure-induced hyperkalemia and share valuable tips on the ECG changes that occur in this type of dyselectrolytemia.41

Medical students also have amazing experiences to share with the international community. One can follow discussions on public health issues encountered in Cambodia and the US, as well as work in a leprosy healthcare facility in Nigeria.42–45 Finally, you can explore Dr. Michael McGee's invaluable insights on medical socialization, drawing on his expertise in psychiatry.46 Read all about it in this issue's virtual pages!

Conflict of Interest Statement & Funding

The authors have no funding, financial relationships, or conflicts of interest to disclose. Dr. Juan C. Puyana's work is partially funded by the National Institutes of Health (NIH) of the United States through grant UH3HL151595. The opinions expressed in this article are the authors' own and do not reflect the views of the National Institutes of Health, the Department of Health and Human Services, or the United States government.

Cite as Sarkar M, Găman MA, Puyana JC, Bonilla-Escobar FJ. Artificial Intelligence in Medicine and Medical Education: Current Applications, Challenges, and Future Directions. Int J Med Stud. 2024 Jan-Mar;12(1):9-13.


References

1. Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Family Med Prim Care. 2019;8(7):2328–2331.

2. Bekbolatova M, Mayer J, Ong CW, Toma M. Transformative Potential of AI in Healthcare: Definitions, Applications, and Navigating the Ethical Landscape and Public Perspectives. Healthcare (Basel). 2024;12(2):125.

3. Warren A. Can AI really pass the Turing test? Wildfire PR. 2023 Jan 3. Accessed from: https://www.wildfirepr.com/blog/can-ai-really-pass-the-turing-test/.

4. Study finds ChatGPT's latest bot behaves like humans, only better. Stanford School of Humanities and Sciences. 2024 Feb 22. Accessed from: https://humsci.stanford.edu/feature/study-finds-chatgpts-latest-bot-behaves-humans-only-better.

5. Wong DJ, Gandomkar Z, Wu WJ, Zhang G, Gao W, He X, Wang Y, Reed W. Artificial intelligence and convolution neural networks assessing mammographic images: a narrative literature review. J Med Radiat Sci. 2020;67(2):134–142.

6. Mir MM, Mir GM, Raina NT, Mir SM, Mir SM, Miskeen E, Alharthi MH, Alamri MMS. Application of Artificial Intelligence in Medical Education: Current Scenario and Future Perspectives. J Adv Med Educ Prof. 2023;11(3):133–140.

7. Kiester L, Turp C. Artificial intelligence behind the scenes: PubMed's Best Match algorithm. J Med Libr Assoc. 2022;110(1):15–22.

8. Beauchamp TL, Childress JF. Principles of Biomedical Ethics. 7th ed. Oxford University Press; 2013.

9. Misselhorn C. Artificial Moral Agents: Conceptual Issues and Ethical Controversy. In p. 31–49.

10. Susser D, Roessler B, Nissenbaum H. Online Manipulation: Hidden Influences in a Digital World. Georgetown Law Technology Review. 2019;4(1):1–45.

11. Omiye JA, Lester JC, Spichak S, Rotemberg V, Daneshjou R. Large language models propagate race-based medicine. NPJ Digit Med. 2023;6(1):195.

12. Norori N, Hu Q, Aellen FM, Faraci FD, Tzovara A. Addressing bias in big data and AI for health care: A call for open science. Patterns (N Y). 2021;2(10):100347.

13. Farhud DD, Zokaei S. Ethical Issues of Artificial Intelligence in Medicine and Healthcare. Iran J Public Health. 2021;50(11):i–v.

14. Philibert RA, Terry N, Erwin C, Philibert WJ, Beach SR, Brody GH. Methylation array data can simultaneously identify individuals and convey protected health information: An unrecognized ethical concern. Clin Epigenetics. 2014;6:28.

15. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare. 2020:295–336.

16. Ahmad SF, Han H, Alam MM, Rehmat MK, Irshad M, Arraño-Muñoz M, et al. Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanit Soc Sci Commun. 2023;10(1):311.

17. Damiani G, Altamura G, Zedda M, Nurchis MC, Aulino G, Heidar Alizadeh A, Cazzato F, Della Morte G, Caputo M, Grassi S, Oliva A; D.3.2 group. Potentiality of algorithms and artificial intelligence adoption to improve medication management in primary care: a systematic review. BMJ Open. 2023;13(3):e065301.

18. Asan O, Bayrak AE, Choudhury A. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J Med Internet Res. 2020;22(6):e15154.

19. Vasireddy A. Deepfake Dangers: Unmasking the Impact of AI-Generated Image Manipulation on Girls and Society. Medium. 2023 Sep 27. https://medium.com/@anivasireddy/deepfake-dangers-unmasking-the-impact-of-ai-generated-image-manipulation-on-girls-and-society-bc6b10d58d1c

20. Civaner MM, Uncu Y, Bulut F, Chalil EG, Tatli A. Artificial intelligence in medical education: a cross-sectional needs assessment. BMC Med Educ. 2022;22(1):772.

21. Asan O, Bayrak AE, Choudhury A. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J Med Internet Res. 2020;22(6):e15154.

22. Quinn TP, Coghlan S. Readying medical students for medical AI: The need to embed AI ethics education. arXiv preprint

23. Li Q, Qin Y. AI in medical education: medical student perception, curriculum recommendations and design suggestions. BMC Med Educ. 2023;23(1):852.

24. Faihs L, Neumann-Opitz C, Kainberger F, Druml C. Ethics teaching in medical school: the perception of medical students. Wiener klinische Wochenschrift. 2024;136(5-6):129–136.

25. AlMahmoud T, Hashim MJ, Elzubeir MA, Branicki F. Ethics teaching in a medical education environment: preferences for diversity of learning and assessment methods. Med Educ Online. 2017;22(1):1328257.

26. UNESCO bioethics chair. Training courses. Accessed from: https://www.unescobiochair.org/training-courses/

27. Lee JY. Can an artificial intelligence chatbot be the author of a scholarly article? J Educ Eval Health Prof. 2023;20:6.

28. Kacena MA, Plotkin LI, Fehrenbacher JC. The Use of Artificial Intelligence in Writing Scientific Review Articles. Curr Osteoporos Rep. 2024;22(1):115–121.

29. Habibzadeh F. GPTZero Performance in Identifying Artificial Intelligence-Generated Medical Texts: A Preliminary Study. J Korean Med Sci. 2023;38(38):e319.

30. Bellini V, Semeraro F, Montomoli J, Cascella M, Bignami E. Between human and AI: assessing the reliability of AI text detection tools. Curr Med Res Opin. 2024;40(3):353–358.

31. Mohamed KO, Ahmed AA, Zaki EA, Soumit SM, Allam WA, Mohamed AM. Prevalence of Generalized Anxiety Disorder and Associated Risk Factors Among Medical Students in Sudan: A Cross-Sectional Study at Omdurman Islamic University. Int J Med Stud. 2024;12(1):14–21.

32. Gul N, Ali A, Rizwanullah, Khayam, Khan MS, Gul F, et al. Prioritizing Mental Health: A Cross-Sectional Investigation of Depression Prevalence and Risk Factors among Medical Students in Peshawar, Pakistan. Int J Med Stud. 2024;12(1):22–28.

33. Ozdemir E, Yazarkan Y, Pehlivanoglu B. Medical Students' Stress Levels Are Correlated with Their Sleep Quality and Life Satisfaction. Int J Med Stud. 2024;12(1):53–59.

34. Brown LC, Aziz I. Prevalence and Burden of Disorders of Gut-Brain Interaction Among UK Medical Students. Int J Med Stud. 2024;12(1):43–52.

35. Ganguli S, Chen SW, Maghami S, Corpodean F, Lin PP, Haywood YC, et al. Residency Program Website Content May Not Meet Applicant Needs. Int J Med Stud. 2024;12(1):60–68.

36. Evensen-Martinez M, Santiago M, Martinez R, Beck D, Trawick A, Zapata I, et al. The Influence of Pre-Trip Medical Spanish Education on a US-Based, Medical Student Service Trip: A Cohort Study. Int J Med Stud. 2024;12(1):35–42.

37. LeBron KA, Bielawski A, Popiel P, Shams S, Grimes CL. Antibiotic Appropriateness on Mondays vs. Fridays: Empiric Treatment of Simple Cystitis in the Emergency Department. Int J Med Stud. 2024;12(1):29–34.

38. Dailey AR. A Review of Psychosocial Factors on Birth Outcomes in Women with Substance Use Disorder in the United States: The Importance of Preventing Relapse During Sustained Remission. Int J Med Stud. 2024;12(1):69–82.

39. Rafiq HS, Blair E. Medical Students' Study Habits Through a Sociocultural Lens: A Systematic Literature Review. Int J Med Stud. 2024;12(1):83–91.

40. Saldaña-Ruiz MA, Ortiz-Alonso F, Sandoval-González AC, Tapia-Brito LS, Lozano-Galván LC, Ramírez-Pintor KM. Fulminant Hepatic Failure as the Initial Presentation of Hodgkin's Disease and Liver Transplantation: A Case Report. Int J Med Stud. 2024;12(1):92–95.

41. González-Zuelgaray J, Frangi PI, Longo DA, Tosoni LB, Baranchuk A. Severe Hyperkalemia: Electrocardiographic Tips for Early Recognition Based on a Case Report. Int J Med Stud. 2024;12(1):96–99.

42. Iba C, Namba M, Kaneda Y, Ando T. Public Health Outreach in Impoverished Areas of Cambodia: Addressing the Issues Related to Prescription Practices. Int J Med Stud. 2024;12(1):100–102.

43. Namba M, Shinohara M, Sela S, Khouch K, Kaneda Y, Haruyama R. Grassroots HPV Vaccine Education in Phnom Penh, Cambodia: A Personal Reflection. Int J Med Stud. 2024;12(1):103–105.

44. Akpegah GK. From Theory to Practice: Reflections of a Medical Student's Rural Posting in a Leprosy Hospital. Int J Med Stud. 2024;12(1):106–108.

45. Dubbaka S, Lentz T. The Importance of Understanding Social Determinants of Health as Medical Students: My Experience with the Cincinnati Homeless Coalition. Int J Med Stud. 2024 Jan–Mar;12(1):109–111.

46. McGee M. Becoming a Physician: A 40-year Retrospective on Medical Socialization. Int J Med Stud. 2024;12(1):112–119.


Manali Sarkar, 1 Intern, MGM Medical College, Navi Mumbai, India.

Mihnea-Alexandru Găman, 2 MD, PhD(c), Faculty of Medicine, “Carol Davila” University of Medicine and Pharmacy, 050474 Bucharest, Romania & Department of Hematology, Center of Hematology and Bone Marrow Transplantation, Fundeni Clinical Institute, 022328 Bucharest, Romania. Scientific Editor, IJMS.

Juan C. Puyana, 3 MD, FACS, School of Medicine, Department of Surgery, Professor of Surgery, Critical Care Medicine, and Clinical Translational Science, Director for Global Health-Surgery, University of Pittsburgh, Pittsburgh, PA, USA. O'Brien Professor of Global Surgery for Royal College of Surgeons in Ireland (RCSI). Editorial Board Member, IJMS.

Francisco J. Bonilla-Escobar, 4 MD, MSc, PhD, Department of Ophthalmology, University of Washington, Seattle, WA, USA. Fundación Somos Ciencia al Servicio de la Comunidad, Fundación SCISCO/Science to Serve the Community Foundation, SCISCO Foundation, Cali, Colombia. Grupo de investigación en Visión y Salud Ocular, VISOC, Universidad del Valle, Cali, Colombia. Editor in Chief, IJMS.

About the Author: Manali Sarkar is currently an intern at MGM Medical College, Navi Mumbai, India. She is also a recipient of the Indian Public Health Association Maharashtra Chapter Padvidhar Sanshodhan Prakalp Anudan 2022 grant.

Correspondence: Francisco J. Bonilla-Escobar. Address: 750 Republican St, University of Washington, Seattle, WA, USA. Email: editor.in.chief@ijms.info


Copyright © 2024 Manali Sarkar, Mihnea-Alexandru Găman, Juan C. Puyana, Francisco J. Bonilla-Escobar

This work is licensed under a Creative Commons Attribution 4.0 International License.



International Journal of Medical Students, VOLUME 12, NUMBER 1, April 2024