The Best AI Is The One You Don't Notice
One of the best AI tools I used last year had no "AI" logo or button.
I was in Brazil last year to give a talk. When I first got there, the driver who picked me up from the airport didn't speak any English. I can get by with some Spanish, but my Portuguese is nonexistent.
I anticipated this might happen, so I got the new AirPods Pro 3 before I left. Using the Live Translation feature, I had a full conversation with him during the two-and-a-half-hour ride from the airport to the hotel. São Paulo traffic gave us plenty of time.
Later that year, as I was giving a talk at EuSoMII 2025 in Crete, I retold that story with a blank slide as I took the AirPods out of my pocket. Holding the device in my hand, I pointed out how there's no "AI" button or flashy branding. When I was in that armored vehicle in São Paulo, I wasn't wondering whether it was using a large language model, a small language model, or a tiny language model. I wasn't thinking about whether the model ran on the device or in the cloud. It didn't matter because it just worked.
This is the same argument I've been making about radiology AI for years. Our goal shouldn't be to showcase the technology's complexity. It should be to simplify and improve the radiologist's workflow. The best AI is the AI you don't even notice.
Radiology is still working toward this.
We've all seen the pattern: an AI product that performs well on paper but struggles to gain traction in the reading room.
As an industry, we've spent a decade optimizing AUC and chasing sensitivity and specificity in top journals. That work matters, but too often, tools that score well in journals score poorly on the only metric that ultimately matters: whether they're actually useful.
The missing piece is not accuracy. It's integration.
The Attention Tax
A radiologist reading a chest X-ray on a PACS workstation is in a groove. Eyes on the image. Systematic search pattern. Findings form as the report takes shape in the next window over.
Now introduce a typical AI tool. It presents findings in its own interface. To use it, the radiologist must break that perceptual loop: shift gaze, shift context, shift mental model. Then shift back. That context switch has a cost.
This is the attention tax. If an AI tool costs more attention than it saves, it tends to get set aside — not because it was wrong, but because it was in the way.
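To make the tax concrete, here is a minimal back-of-the-envelope sketch. The numbers are illustrative assumptions, not measured data; the point is only that the switch cost is paid on every study, while the benefit arrives only on the studies where the flag is useful.

```python
# Back-of-the-envelope model of the attention tax (illustrative numbers only).
# An AI tool pays for itself only if the time it saves on useful flags
# outweighs the context-switch cost it charges on every study.

def net_seconds_saved_per_study(
    prevalence: float,             # fraction of studies where the AI flag is useful
    seconds_saved_per_hit: float,  # reading time saved on those studies
    switch_cost_seconds: float,    # gaze/context switch cost, paid on every study
) -> float:
    return prevalence * seconds_saved_per_hit - switch_cost_seconds

# Hypothetical example: 5% of studies benefit, 60 s saved each time,
# but a 10 s context switch on every study.
print(net_seconds_saved_per_study(0.05, 60.0, 10.0))
# Negative result: the flag saves time on hits, but the switch cost dominates.
```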
Calm Technology and the Periphery
Mark Weiser and John Seely Brown called this "calm technology" back in 1996: the best systems sit at the periphery of your attention. You don't stare at a thermostat. You notice it when it breaks. Radiology AI hasn't gotten there yet.
The most common deployment pattern has been the bolt-on overlay: a standalone application running in parallel to the clinical workflow. Separate viewer. Separate dashboard. Separate notification system. This approach made sense early on — it was the fastest path to getting AI into clinical use. But the tradeoff is that each new tool adds another tab, another icon, another place where attention must be directed. The radiologist ends up serving as the integration layer, manually shuttling context between systems.
What Northwestern Got Right
A study from Northwestern Medicine, published in JAMA Network Open, showed that an AI system generating draft reports for radiographic studies improved documentation efficiency by 15.5% with no decrease in accuracy on peer review. But the more important finding is buried in the methods section.
The system was "integrated with the institutional EHR software (Epic; Epic Systems) and reporting software (PowerScribe; Nuance Communications), minimizing disruptions to established clinical routines."
That sentence does more explanatory work than a results table would.
Here is what it means in practice: the AI-generated draft report appears as custom fields within a selectable template in PowerScribe. No new software. No new window. No new login. No new click. The radiologist opens the reporting template, and the draft is already there, pre-populated and ready to be edited.
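The pattern itself is easy to sketch. The code below is a hypothetical illustration of the idea only; none of these names are real PowerScribe or Epic APIs. What matters is where the AI output lands: in the same fields the radiologist was already going to edit.

```python
# Hypothetical sketch of the "draft in the template" pattern.
# The types and function here are invented for illustration; they are not
# PowerScribe or Epic interfaces. The AI draft fills the fields of the
# existing report template before the radiologist opens it.

from dataclasses import dataclass

@dataclass
class ReportTemplate:
    findings: str = ""
    impression: str = ""

def prepopulate(template: ReportTemplate, ai_draft: dict) -> ReportTemplate:
    # The draft lands in the fields the radiologist already edits:
    # no new window, no new login, no extra click.
    template.findings = ai_draft.get("findings", "")
    template.impression = ai_draft.get("impression", "")
    return template

draft = {"findings": "No focal consolidation.", "impression": "No acute disease."}
template = prepopulate(ReportTemplate(), draft)
# The radiologist now reviews, edits, and signs, as with a trainee's draft.
```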
The workflow mirrors how radiologists already handle trainee-produced reports. An attending opens a study, reviews the resident's draft, edits it, and signs it. That interaction pattern is decades old. Northwestern mapped AI output onto an existing mental model rather than inventing a new one. The radiologists did not need to learn a new tool. They needed to do what they already do — read and edit.
Why Integration Is So Hard to Prioritize
Industry partners sell accuracy because that's what gets published. Buyers evaluate AUC because that's what they're shown. Peer reviewers assess sensitivity and specificity because that's what they know how to measure. The entire value chain is organized around one question: Is the AI correct?
Correctness is necessary, but it's not sufficient. An AI tool that is 98% accurate but disruptive to use will struggle against one that is 92% accurate and invisible. The first generates publications. The second generates adoption.
The deeper issue is that integration quality is hard to measure and hard to publish. There is no "integration AUC." There is no standardized benchmark for cognitive friction. The closest proxy is the adoption rate, but it is rarely reported in evaluation studies — most of which are conducted under controlled conditions that don't reflect the pace of a real reading room.
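If adoption is the best proxy we have, it is at least cheap to compute from reporting logs. A minimal sketch, assuming each signed report records whether the AI draft was the starting point (the log schema below is assumed, not from any real system):

```python
# Minimal sketch: adoption rate from reporting logs.
# Assumes each signed report logs whether the AI draft was used as the
# starting point. The schema is hypothetical.

reports = [
    {"study_id": "A1", "ai_draft_used": True},
    {"study_id": "A2", "ai_draft_used": False},
    {"study_id": "A3", "ai_draft_used": True},
]

adoption_rate = sum(r["ai_draft_used"] for r in reports) / len(reports)
print(f"Adoption rate: {adoption_rate:.0%}")  # prints "Adoption rate: 67%"
```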
As I often say in my talks: clinical accuracy ≠ clinical efficiency ≠ clinical utility. As an industry, we optimize what we can measure. The opportunity now is to start measuring, and valuing, what we haven't.
The Disappearing Act
When the AI becomes a topic of conversation — when it requires training sessions, troubleshooting guides, user forums, and workarounds — it's still a foreign object in the workflow.
When radiologists stop mentioning it, the AI has succeeded.
References
- Weiser M, Brown JS. The Coming Age of Calm Technology. October 5, 1996. https://calmtech.com/papers/coming-age-calm-technology
- Huang J, Wittbrodt MT, Teague CN, et al. Efficiency and Quality of Generative AI-Assisted Radiograph Reporting. JAMA Netw Open. 2025;8(6):e2513921. doi:10.1001/jamanetworkopen.2025.13921