Please, Please Stop Using ChatGPT If You’re Not Checking Cites

G’day mates. The latest installment of “Lawyers Royally Bungling with AI” features an Australian attorney who apparently decided that verifying case citations was just too 2024. After prompting the consumer-facing AI to do some research (bad idea) and putting that into a filing (worse idea), he decided to throw another shrimp on the barbie and submit the documents filled with references to phony cases to the court (worst idea).

A reminder that every time you think this story has been hyped enough that lawyers won’t do it again… they go ahead and do it again.

From The Guardian:

An Australian lawyer has been referred to a state legal complaints commission, after it was discovered he had used ChatGPT to write court filings in an immigration case and the artificial intelligence platform generated case citations that did not exist.

In a ruling by the federal circuit and family court on Friday, Justice Rania Skaros referred the lawyer, who had his name redacted from the ruling, to the Office of the NSW Legal Services Commissioner (OLSC) for consideration.

Presumably the judge said, “That’s not a case… this is a case.”

This legal eagle, whose name has been thoughtfully redacted to protect the technologically inept, was handling an immigration appeal when he submitted filings in October 2024. These documents were so compelling that they cited and even quoted from tribunal decisions that simply didn’t exist. When confronted, the lawyer confessed to using ChatGPT for research, admitting he didn’t bother to verify the AI-generated information.

Despite the temptation to turn this into a story about technology run amok, this is still fundamentally a matter of human laziness. Just as a lawyer shouldn’t mindlessly blockquote the memo the summer associate slapped together on their way to happy hour, any lawyer using generative AI retains the obligation to check the final product for accuracy. And, in the case of ChatGPT legal research, it’s more like the memo the summer associate slapped together on their way back from happy hour.

He attributed this lapse in judgment to time constraints and health issues. Maybe that’s true. But the most troubling detail of this story is that it happened in an immigration case. One of the more noble selling points for generative AI is the hope that it could expand access to justice by streamlining practices like immigration. Filevine, for instance, offers some really slick AI-assisted immigration tools, all of which are very different from turning over briefing to a free chatbot. Yet with this promise comes the risk that the practice areas serving the most vulnerable will be the most likely to get shortchanged out of attorney judgment. The paying client will get proper attention while the pro bono matter gets churned out by lightly checked AI.

Again, that might not have been the case here, but this is the new frontier for AI screw-ups. More lucrative practices are going to get lawyer attention and the benefit of high-quality AI tools crafted specifically for the legal profession, with all the safeguards that requires. And it’s going to be the lower-income practices that give rise to future embarrassing cases.

And, in matters like immigration, potentially tragic ones.

Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.