
‘My Dear Miss Glory, the Robots Are Not People,’ Says Judge In Yet Another Hallucinations Case


“My dear Miss Glory, the Robots are not people. Mechanically they are more perfect than we are; they have an enormously developed intelligence, but they have no soul.”

With that quote from R.U.R. (Rossum’s Universal Robots), a 1920 science fiction play by Czech writer Karel Čapek, U.S. District Judge Kai N. Scott, in the Eastern District of Pennsylvania, eases us into yet another instance of a lawyer in trouble for filing hallucinated cases.

It was just two days ago that I wrote about another such case, one in which the judge let the lawyer off the hook, calling his citation errors an “honest mistake.”

Not so in this case, where the judge, having concluded that the lawyer, Raja Rajan, “outsourced his job to an algorithm,” imposed a $2,500 sanction and ordered him to complete a CLE program on AI and legal ethics, all for violating Rule 11 of the Federal Rules of Civil Procedure, which requires lawyers to certify the veracity of their court filings.

“[U]nlike the cases Mr. Rajan cited, Rule 11 is not artificial; it imposes a real duty on lawyers, not on algorithms, to ‘Stop, Think, Investigate and Research’ before filing papers either to initiate a suit or to conduct the litigation,” the judge wrote, quoting a 1987 case from the 3rd U.S. Circuit Court of Appeals.

An All Too Familiar Story

The back story is by now familiar, thanks to a lengthening litany of similar cases. After Rajan filed two motions, the court “was perplexed” to find that two of the cases he cited could not be found in any legal research tool.

Rajan also cited two cases for propositions completely unrelated to the points of law they decided, and he cited another two cases that were no longer good law.

When Judge Scott ordered Rajan to show cause why he should not be disciplined, Rajan told the court that “never in [his] wildest dreams” would he have predicted that AI would provide artificial cases.

He told the court that, while he had previously used Casetext to help him review briefs, in this instance he used ChatGPT for the first time and never thought it would manufacture artificial cases to support the outcomes he desired.

“Far from reasonably inquiring into the legal contentions contained in his briefs, Mr. Rajan blindly trusted an algorithm he had never used before,” the judge wrote.

“He conducted no research into ChatGPT’s efficacy as a legal tool, no research into its reliability as compared to the Case Text (sic) program, and worst of all, no independent research into the legal cases that were cited.”

Added the judge: “This Court recognizes that technology is always evolving, and legal research tools are no exception. But if approached without prudential scrutiny, use of artificial intelligence can turn into outright negligence.”

In this case, the judge emphasized, the lawyer’s negligence was in his failure to verify the cases he cited.

“There is nothing in Rule 11 that specifically prohibits reliance on AI for research assistance, but Rule 11 does make clear that the signing attorney is the final auditor for all legal and factual claims contained in their motions.”

For these reasons, the judge sanctioned Rajan by ordering him to pay a penalty of $2,500 and complete a one-hour CLE-credited seminar or educational program related to both AI and legal ethics.


Full opinion: Bunce v. Visual Technology Innovations, Inc., E.D. Pa. 23-cv-01740 (Feb. 27, 2025).