If
someone
offers
a
solution
to
the
access-to-justice
crisis,
you
generally
should
say
yes.
The legal needs of millions currently receive the attention of a handful of attorneys, and beyond the profession’s long, abysmal record of meeting the needs of low-income folks, middle-class families increasingly find themselves priced out of legal help.
Or,
just
as
damaging,
they
feel
so
priced
out
that
they
don’t
even
bother
to
figure
out
if
they
need
legal
help
and
just
mainline
existential
dread
over
their
troubles.
Since
a
long-overdue
influx
of
resources
into
legal
aid
is
not
on
the
DOGE
agenda,
the
legal
tech
community
has
set
its
eyes
on
generative
AI
as
a
potential
vector
for
delivering
needed
legal
services.
That
is,
if
the
AI
is
carefully
and
competently
designed
to
provide
accurate
legal
help
and
not
a
psychedelic
journey
to
the
heart
of
the
Montreal
Convention.
But
what
if
the
better
the
AI
tool,
the
more
dangerous
it
becomes?
That
uneasy
question
lingered
after
this
morning’s
ABA
TECHSHOW
panel
where
Kara
Peterson,
co-founder
of
justice
tech
startup
descrybe.ai,
and
Jessica
Bednarz,
Director
of
Legal
Services
and
the
Profession
at
the
Institute
for
the
Advancement
of
the
American
Legal
System,
led
a
lively
discussion
on
the
promises
and
perils
of
AI
in
the
justice
gap
arena.
One
of
the
potential
benefits
of
AI that
Peterson
identified
is
“simplified
and/or
translated
explanations
of
legal
concepts
and
directions.”
The
first
step
to
solving
a
legal
problem
is
knowing
you
have
a
legal
problem.
Bednarz
noted
that
research
from
2021
revealed
that
most
people
seeking
legal
help
turned
to
internet
search.
But
today,
this
is
shifting
toward
GenAI
as
ChatGPT
takes
over
the
public
imagination.
Still,
Peterson
pointed
out
that
most
people
don’t
exactly
have
a
sixth
sense
for
spotting
“good”
vs.
“bad”
AI
advice.
Given
how
often
lawyers
screw
this
up,
calling
this
a
challenge
feels
like
an
understatement.
If
folks
in
need
of
legal
help
flock
to
big-name,
consumer-facing
AI
tools,
the
legal
advice
they
get
can
range
from
hilarious
to
disastrous.
Which
is
where
the
designers
of
bespoke
legal
tools
come
in.
Legal
professionals
understand
what
people
might
actually
need
from
an
AI
legal
information
product.
An
idiot
man-child
feeding
his
AI
“all
court
cases”
does
not.
But
the
next
challenge
is
finding
a
way
to
connect
these
non-lawyers
with
the
good
tools.
Because
right
now,
the
average
person
is
more
likely
to
consult
HallucinabotGPT
than
find
a
tailor-made
access-to-justice
tool
buried
deep
in
a
conference
vendor
list.
At
a
gathering
of
experts
intended
to
open
the
conversation
about
this
and
the
regulatory
environment
generally,
Bednarz
reported
that
the
group
settled
on
a
phased
approach
beginning
with
a
“soft
power”
campaign
based
on
guidance,
sandboxes,
and
spreading
the
message
that
AI
is
not
(necessarily)
the
unlicensed
practice
of
law.
A
full
report
on
the
group’s
conclusions
will
be
coming
in
a
few
months.
Damien
Riehl
of
vLex
took
the
unlicensed
practice
of
law
issue
further
and
questioned
whether
these
regulatory
regimes
amount
to
a
case
of
the
emperor’s
new
clothes.
State
regulators
aren’t
going
to
sue
OpenAI
even
though
ChatGPT
is
the
one
delivering
shitty
legal
advice
at
scale
to
the
have-nots.
But
when
a
specially
designed
legal
AI
tool
emerges,
suddenly
the
regulators
are
quick
to
fire
off
complaints.
Still,
one
questioner
flagged
a
downside.
Non-lawyers
know
to
take
Google
with
a
grain
of
salt.
But once an AI postures itself as a more authoritative, legally vetted product, its improved accuracy and value come bundled with an imprimatur of trust that makes the impact of any mistake more pronounced.
There’s
a
paradox
at
play.
Riehl
describes
a
spectrum
from
no
legal
services
to
hiring
counsel.
Essentially,
along
the
way
there’s
Google,
which
is
worse
than
consumer
AI,
which
is
worse
than
legal
AI,
which
is
worse
than
legal
AI
complemented
by
a
human
lawyer.
But
what
if
worse
is
better
in
some
contexts?
If
one
of
the
most
important
contributions
to
solving
access
to
justice
is
helping
people
realize
when
they
have
a
legal
problem
in
the
first
place,
to
what
extent
do
better
tools
erroneously
convince
them
that
they
have
problems
they
can
handle
themselves?
Maybe
a
clunky,
obviously
flawed
tool
is
less
dangerous
than
one
that
seems
polished
and
confident
(and
correct…
if
you
have
the
training
to
understand
the
nuance)
enough
to
inspire
false
confidence.
Consider
the
medical
profession,
where
arming
patients
with
more
“little
bits
of
knowledge”
spawned
a
population
of
morons
convinced
that
they
can
solve
measles
with
good
nutrition
because
they
conflate
the
fact
that
Vitamin
A
is
a
treatment
for
measles
with
the
idea
that
it’s
some
kind
of
vaccine.
More information, even information that is good on its face, can usher in a maelstrom of misinformation in the wrong hands.
Though
the
damage
may
already
be
done
by
the
range
of
consumer
AI
products.
Google
gave
people
options
to
consider…
AI
gives
them
an
“answer.”
And
the
thing
about
consumer
AI
is
that
it’s
going
to
ACT
like
it
knows
what
it’s
doing.
We
don’t
call
it
Mansplaining
As
A
Service
for
nothing.
If
the
cat’s
out
of
the
bag,
the only thing cracking down on artisanal legal AI accomplishes for clients is leaving
them
at
the
mercy
of
the
loudest,
drunkest
large
language
model
at
the
bar.
In
other
words,
maybe
selling
contact
voltage
detectors
at
Lowe’s
increases
the
risk
of
ill-considered
home
electrical
projects,
but many more folks were going to try it anyway, and I’d rather they have voltage testers when they do.
Home
insurance
carriers
agree.
All
of
which
is
to
say
that
the
ethical
issues
surrounding
AI’s
role
in
bridging
the
access-to-justice
gap
remain
thorny.
On
balance,
society
is
probably
better
off
with
a
regulatory
environment
that
encourages
more
legal-need-specific
AI
products
than
not.
But
developers
should
be
mindful
of
the
risks
involved
when
handing
legal
help
to
non-lawyers
and
spend
at
least
as
much
time
pondering
how
the
tech
can
be
misapplied
as
they
do
thinking
about
how
much
it
can
help.
Joe
Patrice is
a
senior
editor
at
Above
the
Law
and
co-host
of
Thinking
Like
A
Lawyer.
Feel
free
to email
any
tips,
questions,
or
comments.
Follow
him
on Twitter or
Bluesky
if
you’re
interested
in
law,
politics,
and
a
healthy
dose
of
college
sports
news.
Joe
also
serves
as
a
Managing
Director
at
RPN
Executive
Search.