Hidden Work and Shadow AI Are Driving Health System AI Pilots, New Findings Show

Black Book Market Research finds fewer than 1 in 5 health systems report mature AI governance and most undercount staff time spent supervising and correcting AI output

NEW YORK, NY / ACCESS Newswire / December 17, 2025 / A new national study from Black Book Market Research reveals that while AI pilots are now ubiquitous across U.S. health systems, the real story is the hidden work and shadow AI sitting underneath the official transformation narrative.

In a study of 228 respondents including 92 enterprise and service-line leaders and 136 front-line professionals, Black Book finds that health systems are simultaneously all-in on AI pilots and underprepared to govern and operationalize them safely at scale.

"Health systems are deploying AI to reduce burden, but they're quietly shifting a massive amount of unpriced labor onto clinicians, staff, and governance teams," said Douglas Brown, President of Black Book Market Research. "Our data shows AI is everywhere but so is the hidden work of supervising it, correcting it, and cleaning up around it. That's the part nobody is putting on the slide deck."

AI is everywhere but stuck in pilot mode

According to the study, 84% of organizations report at least one AI pilot in the last 12 months, and AI is now being tested across nearly every major operational and clinical domain:

  • 71% are piloting or deploying ambient clinical documentation / AI scribes

  • 64% are using radiology or imaging AI tools

  • 59% are piloting AI in revenue cycle (coding, denials, prior authorization)

  • 47% are piloting AI in patient access / contact centers

Yet only 23% of leaders say any use case has reached broad, enterprise-level deployment.

"We're in a pilot-heavy, scale-light phase of healthcare AI," Brown noted. "Most organizations are simultaneously running many experiments and very few true production programs."

Governance lags behind the AI hype

Despite the breadth of AI experimentation, AI governance is still immature in most health systems:

  • Only 19% of leaders rate their AI governance as formal and organization-wide, with clear accountability and monitoring.

  • 35% describe their governance as low or ad hoc, making decisions case by case.

Even basic policy around public large language models (LLMs) is uneven:

  • Just 24% say they have a clear, documented, enforced staff policy on tools such as ChatGPT, Gemini, or Copilot.

  • 52% sit in a gray zone where a policy exists but awareness or enforcement is limited.

"AI governance isn't a nice-to-have," said Brown. "Without it, you don't actually know where your models live, what they're doing, or how much risk and work they're creating across the organization."

The hidden work: clinicians supervising algorithms

The study finds a large and largely invisible layer of human supervision propping up AI tools.

Among front-line respondents who use AI in their daily work:

  • 62% spend 30 minutes or more per day reviewing, correcting, or re-doing AI outputs.

  • 30% spend at least one hour per day supervising AI.

But when leaders were asked how often they explicitly include this supervision time in AI ROI models:

  • Only 13% said they "often" or "always" include staff review time.

  • Nearly half (49%) said they "rarely" or "never" price this effort in.

"The math simply doesn't add up," Brown said. "You cannot credibly claim AI-driven productivity gains while ignoring the hours your clinicians and staff spend verifying, editing, and double-checking algorithm output."

Even with this hidden work, AI is not a clear net negative. Respondents were split on its overall time impact:

  • 38% of AI users said their total documentation/clerical time is lower with AI.

  • 24% said their time burden is higher than before.

The rest reported no change or said it is too early to tell.

Shadow AI is now routine and risky

The study confirms that shadow AI, the unapproved or informal use of generic AI tools, has already become standard behavior for a large portion of the workforce.

Among front-line respondents:

  • 58% used generic AI tools (ChatGPT, Gemini, Copilot, etc.) for work-related tasks at least once in the last 30 days.

  • 39% used them weekly or more often.

Common shadow AI tasks include:

  • Drafting emails and internal communications

  • Creating patient education materials

  • Summarizing complex clinical information or inbox messages

  • Drafting patient letters and portal messages

The study also surfaces real protected health information (PHI) exposure risk:

  • Among generic AI users, 17% admit they sometimes or often include identifiable patient information when using public tools.

  • Another 27% say they "rarely, but it might slip," signaling weak guardrails in high-pressure workflows.

Leaders, meanwhile, are less worried than the numbers suggest:

  • 54% of leaders describe themselves as "very" or "somewhat" confident that AI use is mostly captured within official, approved channels, a direct contrast to front-line behavior.

"Shadow AI isn't a fringe problem," Brown emphasized. "It's a mainstream behavior emerging in environments where official tools don't keep up with real-world needs, and where policy is vague or absent."

AI, burnout, and the workforce: mixed impact

AI's relationship to burnout is not uniform, but the study suggests a modest protective effect among regular users:

  • Overall, 30% of respondents meet a "high burnout" threshold (feeling burned out a few times per week or more).

  • High burnout is 31% among regular AI users (at least weekly), versus 40% among non-users.

  • 52% of regular AI users agree AI "reduces my risk of burnout," compared with 19% of non-users.

  • At the same time, 37% of all respondents say AI tools "sometimes add stress," rising to 43% among regular users.

On retention, AI is becoming embedded in day-to-day work:

  • 38% say their job would be harder without AI in their workflow, and 23% say they would be more likely to leave if current AI tools were removed.

Governance changes the game

When the data is segmented by governance maturity, a clear pattern emerges.

In organizations with mature AI governance:

  • Front-line staff are 30% less likely to use generic AI tools weekly or more.

  • Staff are twice as likely to report that AI saves net time, even after review and correction.

  • High burnout rates are lower (25% vs 36%) compared to low-maturity organizations.

  • Staff are far more likely to know how to report AI concerns and believe their organization is proactive on AI safety and fairness.

"Governance is not bureaucracy, it is an enabler of safe, scalable AI," Brown concluded. "High-maturity organizations are already converting AI from shiny pilots into measurable value with less hidden work, less shadow use, and better workforce outcomes."

Call to action: From pilots to production, with eyes open

Black Book's findings suggest three immediate priorities for health system leaders:

  1. Quantify the hidden work.
    Explicitly model review and supervision time in AI ROI calculations and workforce planning.

  2. Surface and regulate shadow AI.
    Move from blanket bans or hand-waving policy to clear, pragmatic rules and approved safe alternatives.

  3. Invest in mature AI governance.
    Build a central AI governance council, maintain a live inventory of models, and implement standardized validation, monitoring, and reporting channels.
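
The first recommendation, pricing supervision time into AI ROI, can be sketched as simple arithmetic. The figures below are illustrative assumptions only (the study does not publish dollar values or tool costs); the 30-minutes-per-day review figure echoes the band reported by 62% of front-line users.

```python
# Hypothetical sketch of an AI ROI model that prices in staff review time,
# per the study's first recommendation. All dollar figures and staff counts
# are illustrative assumptions, not values from the Black Book study.

def net_daily_minutes_saved(gross_minutes_saved: float,
                            review_minutes: float) -> float:
    """Net time saved per staff member per day once AI supervision is counted."""
    return gross_minutes_saved - review_minutes

def annual_roi(net_minutes_per_day: float,
               staff_count: int,
               loaded_cost_per_minute: float,
               workdays_per_year: int = 240,
               annual_tool_cost: float = 0.0) -> float:
    """Annual dollar impact of an AI tool after hidden supervision work."""
    labor_value = (net_minutes_per_day * staff_count
                   * loaded_cost_per_minute * workdays_per_year)
    return labor_value - annual_tool_cost

# Example: a scribe tool claims 45 min/day saved, but staff spend 30 min/day
# reviewing its output -- the hidden work the study says leaders rarely model.
net = net_daily_minutes_saved(45, 30)   # 15 minutes of true daily savings
roi = annual_roi(net, staff_count=100, loaded_cost_per_minute=1.5,
                 annual_tool_cost=250_000)
print(net, round(roi))                  # 15 290000
```

In this hypothetical, ignoring review time would overstate the labor value by two thirds (45 vs. 15 net minutes), which is the gap Brown's "the math simply doesn't add up" remark points at.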

About Black Book

Black Book Market Research LLC is a full-service, international market research and public opinion research company. Recognized as the healthcare industry's source for unbiased, crowdsourced performance evaluations, Black Book conducts annual client experience and user satisfaction surveys across the healthcare technology and services landscape.

Methodology: This release is based on a national study of 228 respondents, including 92 enterprise and service-line leaders and 136 front-line professionals. Results are reported at a 95% confidence level with a ±6.5 percentage point margin of error for the full sample.
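
The quoted ±6.5-point margin of error is consistent with the full sample of 228 at 95% confidence, assuming the standard conservative formula for a proportion (p = 0.5, z ≈ 1.96):

```python
import math

# Conservative margin of error for a survey proportion at 95% confidence:
# MoE = z * sqrt(p * (1 - p) / n), with worst-case p = 0.5 and z = 1.96.
n = 228
z = 1.96
moe = z * math.sqrt(0.5 * 0.5 / n)
print(f"{moe * 100:.1f} percentage points")  # 6.5 percentage points
```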

For more information on this study or to request a custom briefing, please contact:
Email: research@blackbookmarketresearch.com

SOURCE: Black Book Research


