The Generalist Machine: How Foundation Models Are Reshaping the Entire Imaging Chain

By the holder of the Dr. Anthony Noujaim Endowed Chair of Oncology and Director, Division of Oncologic Imaging and Radionuclide Therapy, Faculty of Medicine & Dentistry, University of Alberta

Part 1 left the story at the edge of a new era: task-specific deep learning tools that were accurate within their narrow lane but brittle beyond it. Part 2 picks up where the architecture changed.

For the better part of a decade, AI in radiology operated on a simple and somewhat exhausting premise: one problem, one model, one dataset, one validation study. Want to detect lung nodules? Train a nodule detector. Want to flag intracranial hemorrhage? Train a hemorrhage classifier. The tools worked, within their defined scope, but the economics of building, validating, and maintaining hundreds of single-purpose algorithms were becoming difficult to justify. And the clinical reality, in which a single scan might carry ten different findings across five organ systems, was making the whole approach look increasingly inadequate.

What changed was not a single breakthrough but a convergence: the arrival of foundation models and the maturation of the infrastructure (cloud computing, federated data networks, and GPU clusters) needed to train them at meaningful scale.

What a Foundation Model Actually Is

The term deserves a plain-language definition before it becomes another piece of industry wallpaper. A foundation model is a large AI system pre-trained on an exceptionally broad and diverse dataset, not optimized for any single task, but trained to develop deep, general representations of structure and pattern that can then be adapted to many specific applications with relatively modest additional training. [1]

In radiology, this means a single model that has seen enough chest CT scans, brain MRIs, mammograms, and abdominal ultrasounds to develop a generalist understanding of medical image anatomy and that can then be fine-tuned, with far less data and cost, to perform specific tasks in specific clinical contexts. The contrast with first-generation deep learning tools is not subtle: where those models were sharp instruments designed for one cut, foundation models are something closer to a trained clinical mind capable of context, capable of nuance, and capable of operating across a far wider range of presentations. [2]
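
To make the pre-train-then-adapt pattern concrete, the sketch below uses a generic pretrained vision backbone from PyTorch's torchvision library as a stand-in for a radiology foundation model; the three-class task head is hypothetical. The point is structural: the large generalist model stays frozen, and only a small task-specific layer is trained.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Load a backbone whose weights already encode broad, general visual
# representations (a stand-in for a radiology foundation model).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pretrained weights: the generalist layer stays fixed.
for param in backbone.parameters():
    param.requires_grad = False

# Swap in a small head for a hypothetical downstream task
# (here, an invented three-class characterization task).
backbone.fc = nn.Linear(backbone.fc.in_features, 3)

# Only the head is optimized, so adaptation needs far less data and
# compute than training the whole model from scratch.
optimizer = torch.optim.AdamW(backbone.fc.parameters(), lr=1e-4)
```

In practice, adaptation strategies range from full fine-tuning to parameter-efficient methods, but the economic logic is the same: the expensive generalist training happens once, and each clinical task reuses it.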

The Imaging Chain, Reimagined End to End

What makes this architecturally significant for healthcare executives is not just improved diagnostic performance; it is the potential to apply AI coherently across the entire imaging workflow rather than deploying isolated point solutions that do not communicate with each other.

Referral and appropriateness. The journey of a medical image does not begin at the scanner but at the moment a clinician decides to order one. For decades, that decision was governed more by habit and institutional culture than by evidence. The American College of Radiology’s (ACR) Appropriateness Criteria, a rigorously maintained evidence base matching clinical presentations to optimal imaging strategies, existed, but its translation into the ordering workflow was inconsistent at best. AI-assisted clinical decision support, now embedded in electronic ordering systems at leading institutions, changed that. When a referring physician orders an MRI for a clinical question that a targeted ultrasound would answer better, the system says so, in real time, at the point of order. Early deployments showed measurable reductions in low-value imaging, and the downstream savings in both cost and radiation exposure were not trivial. [3]
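
The mechanics need not be exotic. As a simplified illustration with an invented two-entry rule table (real systems encode the full ACR evidence base and integrate directly with the electronic ordering system), point-of-order decision support amounts to a lookup and a nudge:

```python
from typing import NamedTuple

class Advice(NamedTuple):
    rating: int               # 1-9 appropriateness scale, as used by the ACR
    alternative: str | None   # suggested substitute study, if any

# Hypothetical entries keyed by (clinical indication, ordered modality).
RULES = {
    ("suspected gallbladder disease", "MRI abdomen"): Advice(4, "ultrasound abdomen"),
    ("suspected gallbladder disease", "ultrasound abdomen"): Advice(9, None),
}

def check_order(indication: str, modality: str) -> str:
    advice = RULES.get((indication.lower(), modality))
    if advice is None:
        return "No guidance available for this order."
    if advice.alternative and advice.rating < 7:
        return (f"Ordered study rated {advice.rating}/9; "
                f"consider {advice.alternative} instead.")
    return f"Ordered study is appropriate (rated {advice.rating}/9)."

print(check_order("Suspected gallbladder disease", "MRI abdomen"))
```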


Protocol selection. Once a scan is ordered appropriately, the next decision (which imaging sequences to run, at what parameters, with what contrast approach) was traditionally a radiologist’s administrative task: time-consuming, repetitive, and entirely unsuited to someone with twelve years of medical training. Machine learning models that parse the clinical indication from the referral text and prescribe the correct protocol automatically have achieved accuracy above 95% in controlled deployments. That is not a marginal efficiency gain. It is the correction of a long-standing structural misallocation, one in which a physician-level resource was routinely consumed by a decision that an algorithm, once the evidence base matures, can make more consistently, more quickly, and at a fraction of the cost.
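
Under the hood, protocol selection is essentially text classification. A minimal sketch, assuming a toy set of referral snippets and invented protocol labels (production systems train on large archives of historical orders and their assigned protocols):

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data: referral text paired with the protocol assigned.
referrals = [
    "rule out MS, optic neuritis two weeks ago",
    "follow-up hepatic lesion seen on prior CT",
    "chronic headache, no red flags",
    "staging for newly diagnosed rectal cancer",
]
protocols = [
    "brain MRI with contrast",
    "liver MRI with contrast",
    "brain MRI without contrast",
    "rectal cancer MRI protocol",
]

# Bag-of-words features feeding a linear classifier; real deployments
# would use far richer models and clinical text preprocessing.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(referrals, protocols)

print(model.predict(["new rectal mass on colonoscopy, staging requested"]))
```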

Image reconstruction. Before a radiologist ever sees an image, the raw data from the scanner must be processed into a diagnostic picture. This reconstruction step, historically a physics-driven computation demanding significant processing time and, by extension, patient waiting, has been quietly transformed by deep learning. GE Healthcare’s TrueFidelity and Canon Medical’s AiCE use neural networks trained on high-quality reference scans to reconstruct diagnostic images from data acquired at significantly lower radiation doses; clinical evidence consistently demonstrates dose reductions of 38–71% without compromising image quality. [4]
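
TrueFidelity and AiCE are proprietary, so the following is only a generic sketch of the shared idea: a residual convolutional network trained on paired low-dose and high-quality reference images, learning to predict and subtract the noise. The architecture, dimensions, and data here are placeholders.

```python
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Small residual CNN: input minus predicted noise."""
    def __init__(self, channels: int = 1, width: int = 64, depth: int = 8):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual learning: predict the noise and subtract it.
        return x - self.body(x)

# One supervised training step on placeholder tensors standing in for
# a low-dose acquisition and its high-quality reference.
model = DenoisingCNN()
low_dose = torch.randn(2, 1, 128, 128)
reference = torch.randn(2, 1, 128, 128)
loss = nn.functional.mse_loss(model(low_dose), reference)
loss.backward()
```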

The gains, however, reach well beyond the dosimetry report. Faster reconstruction means shorter turnaround from scan completion to reading queue, directly reducing the bottlenecks that frustrate both patients and schedulers. Cleaner images at lower dose mean less diagnostic ambiguity at the workstation, fewer requests for repeat acquisitions, and reduced wear on tube components that represent some of the most expensive consumables in an imaging department. For oncology patients who may undergo dozens of follow-up CT scans across the arc of their disease, the cumulative radiation benefit is not academic. But for the CFO reviewing scanner utilization data, the operational case is equally compelling: more scans per shift, lower consumable cost per study, and fewer callbacks that consume radiologist time without generating additional revenue.

Detection and triage. This is the layer most familiar from the first generation of AI tools: automated flagging of time-critical findings for urgent review. What foundation models add is the ability to handle that triage across a far broader range of findings simultaneously, and to do so with a contextual understanding of the patient’s clinical history that single-purpose detectors could not access. An AI that has seen a patient’s prior scans, knows their oncology treatment timeline, and can compare current findings against a longitudinal baseline is doing something qualitatively different from a tool that reads one scan in isolation.
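
One way to picture the longitudinal piece: rather than scoring a single image, a triage layer can measure how far the current study’s feature representation drifts from the patient’s own baseline. The sketch below is purely illustrative; the embeddings, threshold, and priority labels are all invented.

```python
import torch
import torch.nn.functional as F

def triage_priority(current: torch.Tensor,
                    priors: list[torch.Tensor],
                    drift_threshold: float = 0.15) -> str:
    # Average the patient's prior-study embeddings into a baseline.
    baseline = torch.stack(priors).mean(dim=0)
    # Cosine distance from baseline as a crude "interval change" signal.
    drift = 1.0 - F.cosine_similarity(current, baseline, dim=0).item()
    return "urgent review" if drift > drift_threshold else "routine queue"

# Placeholder vectors standing in for foundation-model image features.
priors = [torch.randn(512) for _ in range(3)]
current = torch.randn(512)
print(triage_priority(current, priors))
```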

Structured reporting. At the end of the chain, AI-assisted report generation is moving from novelty to standard infrastructure at large imaging centers. The goal is not to replace the radiologist’s interpretive voice, which remains the medico-legal and clinical core of the profession, but to automate the structured, templated elements of reporting, pre-populate measurement data, and flag inconsistencies before the report leaves the department.
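
A simplified sketch of what pre-population and consistency checking can look like, with invented field names and an arbitrary 50% interval-change flag (real systems follow formal criteria such as RECIST and integrate with the reporting software):

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    label: str
    size_mm: float
    prior_mm: float | None = None  # None if the finding is new

def draft_findings(measurements: list[Measurement]) -> str:
    lines, warnings = [], []
    for m in measurements:
        if m.prior_mm is not None:
            delta = m.size_mm - m.prior_mm
            trend = "increased" if delta > 0 else "stable/decreased"
            lines.append(f"{m.label}: {m.size_mm:.1f} mm "
                         f"(prior {m.prior_mm:.1f} mm, {trend}).")
            # Flag large interval change for radiologist attention.
            if abs(delta) > 0.5 * m.prior_mm:
                warnings.append(f"Check {m.label}: >50% interval change.")
        else:
            lines.append(f"{m.label}: {m.size_mm:.1f} mm (new).")
    return "\n".join(lines + [f"FLAG: {w}" for w in warnings])

print(draft_findings([Measurement("Right upper lobe nodule", 9.2, 5.8)]))
```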

The Consolidation Signal: What RadNet and Gleamer Are Really Telling Us

In early March 2026, RadNet, the largest U.S. outpatient radiology network, performing over eleven million scans annually, completed its acquisition of Gleamer, a Paris-based clinical AI company whose FDA- and CE-cleared tools cover X-ray, mammography, lung disease, and trauma imaging and are deployed across more than 700 hospital and imaging-center contracts in 44 countries. [5]

The price and structure matter less than the strategic logic. RadNet is not buying a product; it is buying an integrated AI operating layer for a national imaging network. Combined with its existing DeepHealth platform, the merged entity now holds clearances across 75 or more clinical indications, projects annual recurring revenue (ARR) of approximately $140 million by the end of 2026, and claims the largest installed base of clinical radiology AI in the world.

For a healthcare executive watching from outside the radiology sector, the message is unambiguous: the era of buying AI point solutions and hoping they eventually talk to each other is ending. The organizations moving fastest are those that control the full stack, from referral decision to final report, and are building proprietary data moats from their own imaging volumes that no external vendor can replicate.

The Skills the Technology Cannot Replace

Every technology transition in medicine eventually meets the same question: what is left for the human? In radiology, that question deserves a more nuanced answer than the market usually provides.

Foundation models excel at pattern detection, consistency, and throughput. They do not yet handle the clinical conversation: the radiologist who calls a referring oncologist to discuss an ambiguous finding, recommends an alternative modality, or recognizes that a technically adequate scan is inadequate for the specific clinical question being asked. They do not carry the accountability that attaches to a signed report. And they do not, at present, generalize reliably to the long tail of rare presentations, the unusual tumor morphology or the incidental finding in an unexpected location, about which experienced radiologists accumulate judgment over their careers.

What they change is the distribution of radiologist time: less of it is spent on the routine and the repetitive, and more is available for the cases that genuinely benefit from deep clinical reasoning and accumulated experience. For workforce planning, that is a profound shift: not a replacement scenario, but a rebalancing of what the profession does with its hours.

For the executive translating this into return on investment, the arithmetic runs in several directions at once. The radiology department gains throughput without proportional headcount growth: more studies read, more consistently, with the same or leaner staffing. The oncologist gains a faster, more structured reporting cadence that reduces the ambiguity that currently slows treatment decisions: fewer calls chasing a result, fewer equivocal reports that delay a staging conversation, fewer tumor boards that open with a radiological question rather than a clinical one. The surgeon receives a pre-operative imaging report that has already been cross-referenced with prior imaging, marked for pertinent interval change, and organized for quick assimilation rather than linear reading.

And for the investor in radiology infrastructure specifically, the signal is unambiguous: the practices and networks that deploy AI as a workflow layer, not as a collection of point tools, will read more volume per radiologist, retain referring physicians through faster turnaround, and build the kind of longitudinal imaging dataset that becomes, over time, a proprietary clinical asset no competitor can simply purchase. In a specialty where capacity has been the binding constraint for a decade, that is not a feature. It is the business model.

What This Means for the C-Suite

Three decisions face every healthcare executive with imaging in their portfolio.

First: infrastructure before algorithms. The foundation model era rewards organizations with clean, interoperable, longitudinal imaging data. If your PACS is fragmented, your data governance is incomplete, or your imaging data is siloed by modality, no AI vendor can overcome that. The time to address it is before procurement, not after.

Second: integration architecture over point solutions. The RadNet-Gleamer model is instructive. The competitive advantage is not in owning one strong AI tool; it is in owning the workflow layer that connects all of them, and the dataset that keeps improving them. Executives who approach AI as a series of standalone purchases will find themselves managing a collection of tools that collectively underdeliver.

Third: governance alongside deployment. FDA clearance does not mean an algorithm is validated for your patient population, your scanner fleet, or your clinical protocols. The institutions that extract lasting value from foundation model AI are those that treat deployment as a clinical program, with monitoring, outcome measurement, and structured feedback, not as a software installation.

The imaging chain is being rebuilt, stage by stage, with AI woven through every step. The organizations that understand the architecture, not just the individual tools, are the ones that will lead.


References

  1. Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31–38.
  2. Fleishman KH, Sheridan TB, Sheridan MK. The case for artificial intelligence augmentation in radiology: implications for subspecialty training and practice. Acad Radiol. 2021;28(11):1513–1519.
  3. Giess CS, Ip IK, Schneider L, Hanson R, Imsirovic H, Khorasani R. Influence of patient-centered clinical decision support on appropriate imaging utilization. J Am Coll Radiol. 2014;11(7):677–683.
  4. Qian N. Deep learning image reconstruction at reduced radiation dose for oncology CT follow-up. Br J Radiol. 2023;96(1150):20220915.
  5. RadNet Inc. RadNet acquires Gleamer, making DeepHealth the largest provider of radiology clinical AI solutions worldwide [press release]. Los Angeles: RadNet; 2026 Mar 2.

Next in this series — Part 3: “Trust, Governance, and the Closed Loop: Leading the AI-Integrated Imaging Enterprise.”
