Make no mistake: the hype surrounding generative AI in the healthcare sector is still going strong. Overall, the industry is excited about the technology’s potential to alleviate burnout, increase operational efficiency and improve patient outcomes — but healthcare leaders still have a lot of work to do when it comes to putting the appropriate guardrails around such a novel form of technology.
When I asked Jason Hill, Ochsner Health’s innovation officer, about the state of AI governance in healthcare, he said that the issue weighs heavily on his mind.

“I go to sleep most nights and wake up most mornings worrying about that one thing,” he remarked during an interview last month at HLTH in Las Vegas.
In his view, providers and other healthcare organizations are in dire need of standardized frameworks they can adopt to ensure their AI tools are safe and perform well over time.
“If I had millions of dollars right now to make a startup, I would create a company that could provide a quality assurance framework for AI. In my mind, it’s not a matter of if it’s going to be regulated — it’s a matter of when it’s going to be regulated, and how. The first company to market that has an established system for that — when that regulation happens, which we don’t know when it will — will be the winner,” Hill explained.
He thinks that future AI regulations will encompass two categories: the technology side and the operations side.
On the technology side, Hill expects AI regulations to focus on whether generative AI models are hallucinating and whether those hallucinations are clinically relevant.
On the operational side, health systems will have to do a better job of making sure they aren’t infected with “new shiny thing syndrome,” he said.
“If cardiology comes to me and says, ‘Hey, look at this cool stethoscope thing — it actually detects valve stenosis and helps us get people into valvuloplasties,’ what I would then say to cardiology is, ‘Awesome. I need you to look at 50 of what that AI outputs a week, and then I need you to judge its effectiveness on a rating scale of 1-10.’ Then that’s going to be built into the contract — and if I don’t see those results for more than four weeks, we’re going to cancel the contract. Operational needs to have some skin in the game for if their thing works,” he explained.
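The contract clause Hill describes boils down to a simple rule that can be sketched in a few lines of code. This is purely an illustration, not Ochsner’s actual tooling: the thresholds (50 rated outputs per week, cancellation after more than four weeks without results) come from his quote, while the function and variable names are assumptions.

```python
REVIEWS_PER_WEEK = 50   # AI outputs clinicians must rate each week, per Hill's quote
MAX_MISSED_WEEKS = 4    # weeks without results tolerated before cancellation

def contract_should_continue(weekly_rating_counts: list[int]) -> bool:
    """Decide whether the vendor contract survives, given how many AI outputs
    were actually rated each week (oldest week first).

    Mirrors the clause Hill describes: if results have been missing for more
    than MAX_MISSED_WEEKS running, the contract is cancelled.
    """
    streak = 0  # trailing streak of under-reviewed weeks
    for count in reversed(weekly_rating_counts):
        if count >= REVIEWS_PER_WEEK:
            break
        streak += 1
    return streak <= MAX_MISSED_WEEKS
```

A department that keeps up its reviews (`[50, 50, 50]`) keeps its contract; one that goes silent for five straight weeks (`[50, 0, 0, 0, 0, 0]`) loses it. The point of the sketch is that the accountability mechanism is mechanical enough to write down — which is what makes it enforceable in a contract.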
Hill would like to see hospital leaders “harness some of the hype and turn it into a commitment.”
He believes that AI governance doesn’t just apply to the safety checks performed before a health system decides to put a tool into practice. To him, ongoing quality assurance is just as important.
Photo: SIphotography, Getty Images