Medical Artificial Intelligence: a Potential Healthcare Revolution
By Adham Alkhadrawi, Postdoctoral Researcher, University of Hawaii at Manoa
News of AI advances has become commonplace; the field is progressing rapidly and touching nearly every aspect of life. In medicine in particular, numerous exciting applications have emerged, potentially heralding a revolution in healthcare. This article explores the most significant developments in AI within the medical field, examining current challenges and prospects.
Artificial intelligence has already proven its value in medical care
Medical imaging and radiology are among the most prominent applications of artificial intelligence in medicine. Current systems rely on well-established architectures such as convolutional neural networks (CNNs) and vision transformers (ViTs), which have demonstrated outstanding performance across a range of clinical tasks. Tumor detection and characterization have seen particularly remarkable advances: these models can identify and segment diverse lesions with exceptional precision. Perhaps one of the most significant recent achievements is the ability of AI algorithms to predict breast cancer risk up to five years in advance from a single mammogram (PMID: 39361281), a capability that has recently received FDA approval. The models achieve this by recognizing subtle tissue patterns invisible to the naked eye, offering a new approach to breast cancer prevention.
Another proven application is the ability of artificial intelligence to identify brain tumors with high efficiency in MRI scans (PMID: 41136620). Brain tumor segmentation algorithms can define tumor boundaries with an accuracy comparable to that of expert clinicians, providing crucial volumetric measurements for treatment planning and response assessment. Additionally, deep learning systems can now detect lung nodules in chest CT scans with a sensitivity approaching or exceeding that of radiologists (PMID: 38365831).
Whole-organ segmentation, once a laborious manual process, is now performed automatically with astonishing accuracy (PMID: 40767616). Modern models can segment the liver for surgical planning, measure kidney volume for transplant evaluation, and delineate the prostate before radiation therapy. Deep learning models achieve this with accuracy comparable to human performance while significantly reducing time and effort.
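The "accuracy comparable to human performance" in segmentation studies is typically quantified with the Dice similarity coefficient, which measures the overlap between a model's mask and an expert's contour. A minimal sketch (the tiny binary masks below are illustrative, not real imaging data):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for two binary masks.
    Returns 1.0 when both masks are empty (perfect trivial agreement)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Hypothetical 2x2 masks: one overlapping voxel, three labeled in total.
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(dice_coefficient(pred, truth))  # 2*1 / (2+1) ≈ 0.667
```

A Dice score above roughly 0.9 against expert contours is the usual bar for "human-comparable" organ segmentation, though the threshold varies by organ and study.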
In addition to oncology and structural imaging, artificial intelligence has enabled entirely new categories of quantitative biomarkers. Thyroid eye disease is a prime example: the volume of the extraocular muscles can now be measured precisely from CT scans using deep learning models (PMID: 38696188). These quantitative measurements allow specialists to diagnose cases more quickly and provide objective measures of disease severity and treatment response. Unlike human assessments, which depend on individual specialists' skill and suffer from inter-observer variability, AI solutions deliver standardized, more reliable measurements.
Similarly, fetal imaging has undergone a major transformation thanks to AI-based quantitative measurement techniques. AI can accurately measure amniotic fluid volume and fetal size in MRI scans (PMID: 40563043). Previous methods relied on ultrasound images, which provide approximate estimations rather than precise quantitative measurements. This advancement has reduced analysis time from 30 minutes to less than a minute while simultaneously improving the accuracy of the results.
These examples demonstrate that artificial intelligence has automated time-consuming tasks and enabled measurements that inform clinical decision-making. Here, AI has proven its ability to assist specialists rather than replace them, handling tedious calculations while physicians concentrate on interpreting results and caring for patients.
Current limitations of AI systems in healthcare
Despite all these remarkable achievements, there are some limitations that restrict the wider inclusion of artificial intelligence in medical practice.
Lack of interpretability
Despite significant advancements, deep learning models, particularly in medical imaging, remain "black boxes": clinicians feed in data and receive predictions, but gain no insight into the characteristics that led to those conclusions. This is a major obstacle to the wider integration of these technologies. When an algorithm diagnoses a cancerous lesion, specialists need to understand why: is it a genuine tumor, did the algorithm mistake an anatomical anomaly for one, or is it simply an image artifact?
This highlights the need for algorithms that explain the decisions of AI models, collectively known as explainable AI. Their importance is particularly evident in situations of uncertainty, where the AI model lacks confidence in its decisions. For example, Shapley Additive Explanations (SHAP), a method derived from game theory (https://arxiv.org/abs/1705.07874, PMID: 32607472), is now widely used to explain model predictions by calculating how much each input feature contributes to the final decision. Thanks to recent advances, SHAP values can be integrated seamlessly into most AI pipelines, from cancer genomics to CT scan segmentation.
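The game-theoretic idea behind SHAP can be shown with a brute-force Shapley computation: each feature's attribution is its average marginal contribution across all coalitions of the other features. This is a toy illustration of the principle only (the shap library uses far more efficient approximations); the two-feature "model" and its coefficients are hypothetical:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x against a baseline.
    Features outside a coalition are replaced by their baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                # Weight of this coalition in the Shapley formula.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear score: for a linear model, each feature's Shapley
# value equals its own standalone contribution.
model = lambda v: 2 * v[0] + 3 * v[1]
print(shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # [2.0, 3.0]
```

A useful property visible here is "efficiency": the attributions sum exactly to the difference between the model's output at x and at the baseline, which is what makes SHAP values interpretable as a complete decomposition of a prediction.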
The problem of generalization
AI models trained on data from a specific demographic group may show a significant drop in accuracy when tested on other demographic groups. Similarly, models trained on patient data from one institution may perform substantially worse on data from other institutions. This lack of generalization stems largely from limited data diversity: strict data-privacy restrictions make it very difficult to transfer patient data between research institutes, which in turn makes it difficult to train models on larger, more varied datasets.
To overcome this problem, dedicated protocols should be developed to streamline data transfer. Such protocols should minimize bureaucratic overhead without compromising patient privacy, allowing data to flow more smoothly across institutions so that models can learn from large, diverse datasets and generalize better.
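A complementary direction avoids moving patient data at all: in federated learning, each institution trains locally on its own records and only model weights are exchanged and averaged. A minimal federated-averaging sketch on a toy least-squares model (the site data, learning rate, and round count are illustrative):

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on one site's private data."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_average(sites, w, rounds=300):
    """FedAvg sketch: every site updates the shared weights on its own data;
    only the weight vectors (never patient records) leave the institution,
    and the server averages them weighted by each site's dataset size."""
    sizes = [len(y) for _, y in sites]
    for _ in range(rounds):
        local = [local_step(w, X, y) for X, y in sites]
        w = np.average(local, axis=0, weights=sizes)
    return w
```

Because each round averages per-site updates computed from the same shared weights, the pooled model benefits from every institution's data distribution without any raw records crossing institutional boundaries.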
Integration barrier
Even when AI models achieve high accuracy and generalize well, another challenge remains: integrating them so that specialists can use them easily. AI models are complex, difficult for non-specialists to operate, and demanding of computing power, which may limit their widespread adoption. Creating dedicated deployment platforms that allow easy integration of multiple models is therefore a crucial step toward the widespread adoption of these technologies. These platforms provide the infrastructure for running the models and a user interface that makes them easier for radiologists to use. This level of integration requires substantial investment in IT infrastructure and close collaboration between AI developers, radiologists, and hospital IT departments.
The next transformation: multimodal systems and agentic AI in healthcare
While current AI systems demonstrate impressive capabilities within narrow domains, the next wave of innovation promises something entirely different: large language models and multimodal AI systems that not only analyze images or predict outcomes, but also reason across multiple data streams to support clinical decision-making.
Multimodal systems
Recent advances in large language models (LLMs), particularly multimodal systems capable of processing both images and text, point to a radically different future for medical artificial intelligence (https://doi.org/10.1016/j.dsp.2025.105441). Multimodal language models demonstrate an unprecedented ability to analyze complex clinical scenarios by integrating information from diverse sources: radiographic images, pathology slides, laboratory values, clinical observations, and medical references.
Unlike current models, multimodal AI can link different information sources, enabling a deeper understanding of the clinical context and generating coherent medical reasoning. This also indirectly contributes to addressing the “black box” problem mentioned earlier. In this case, the decision is based on multiple modalities, allowing for enhanced diagnostic accuracy and explainability.
The reasoning capabilities of multimodal machine learning systems represent a quantum leap beyond current medical artificial intelligence. Instead of merely detecting nodules or segmenting organs, these systems can engage in clinical reasoning: considering multiple hypotheses, weighing supporting and conflicting evidence, acknowledging uncertainty, and explaining their logic in ways that mimic the reasoning processes of physicians.
Agentic AI: From Tools to Clinical Partners
The new horizon transcends the capabilities of individual models to encompass agentic AI systems capable of independently managing complex clinical workflows (PMID: 40705666). We can envision an AI agent managing an entire diagnostic process: reviewing sequential images to assess tumor progression, drawing relevant conclusions from previous radiology reports, comparing them with pathology findings, identifying discrepancies between imaging and clinical presentation, and determining cases requiring discussion by a multidisciplinary oncology team.
These systems can operate continuously, automating patient monitoring, linking medical histories to recent CT scans, and generating reports for radiologist review and approval. The system will gather all pertinent prior research in complicated or unclear cases, draw important conclusions from clinical observations, and produce a thorough case summary that facilitates efficient expert interpretation.
The key difference from current AI systems: rather than performing isolated tasks, these systems will actively participate in clinical workflows, handling routine cognitive work while referring complex cases to human experts.
The inevitability of transformation in healthcare thanks to artificial intelligence
The accelerating pace of AI development suggests that the overall transformation of healthcare is not a question of "will it happen?" but of "when will it happen?", and the answer appears to be measured in years, not decades.
Computing power is advancing rapidly. Training runs that once required supercomputers are now performed on high-performance graphics processing units, and this trajectory suggests that some of these workloads may eventually run on mobile processors. Data availability is also increasing dramatically: with the comprehensive digital transformation of healthcare systems and the maturation of data-exchange frameworks, the massive and diverse datasets needed to train robust medical artificial intelligence are becoming accessible. Standardized privacy-preserving training techniques now allow models to be trained across institutions without compromising patient privacy.
This shift seems increasingly inevitable. Technology is maturing, and regulatory pathways are becoming clearer. What remains is for us to integrate these technologies in a way that ensures their optimal use, supporting sound decision-making that ensures a real transformation of healthcare.

