The most charitable read one can give Sokolowski et al v. Digital Currency Group, Inc. et al, is that the federal fraud complaint trumpeted as “the first OpenAI o1 pro guided litigation” reads maybe 98.8 percent like a perfectly professionally prepared court filing. But like the 98.8 percent genetic similarity between chimps and humans, that 1.2 percent is pretty important, and it’s what transforms the complaint from a harbinger of a robot lawyer future into a dumpster fire begging to be dismissed.
Plaintiffs Stephen Sokolowski and Christopher Sokolowski brought this claim in the U.S. District Court for the Middle District of Pennsylvania seeking fraud damages arising from their decision to place “over ninety percent (90%) of their total net worth” with Genesis Global Trading, a crypto outfit that filed for Chapter 11.

Mike Dunford prepared an amusing thread of his live reaction to the case announcement and complaint that gets into the weeds; you should check it out.
According to their Reddit post announcing the filing, the Sokolowski boys decided to file this pro se after evaluating the potential of using artificial intelligence to manage the case:
Eventually though, Claude 3.5 Sonnet was released, and it was finally capable of evaluating the law (but it still made errors in interpreting the precedential value of cases in its training data.) Then, OpenAI changed all that with o1 pro.

OpenAI’s o1 pro is an artificial general intelligence (AGI) system that is smarter than any lawyer I’ve talked to.
You should really talk to more lawyers then. It’s also worth noting that o1 pro is not, in fact, “an artificial general intelligence,” which is a term for the Holy Grail of AI design where the algorithm will overtake human reasoning. Given that SkyNet has not driven the human race to extinction, it’s safe to say AGI hasn’t arrived.
When o1 was made available, we quickly signed up and compared it to Gemini Experimental 1206. We determined that both were acceptable for moving forward, but o1 was clearly superior in understanding case law and anticipating defenses.
Superior to… what? To ChatGPT? Sure. To an attorney? No. But this is the sort of brain-fried nonsense that prompts Elon Musk to say he can feed “all court cases” into an algorithm and replace the whole legal system. The “techbrogentsia” imagine law as a middle school essay, assuming a sufficiently developed model can take “the law,” apply a fact pattern, and get a result.
To some extent, this is the fault of the mainstream media treating hallucinations as the obstacle holding back AI lawyering, instead of treating AI as just a moderately helpful tool in the hands of dumb lawyers.
Hallucinations are inevitable in generative AI, since the whole purpose of the technology is to guess words that will make the user happy. But hallucinations are also unlikely to matter soon. Serious players in the legal AI game (read: not Elon) are spending massive resources to shield the end user from hallucinations. Hallucinations won’t be the problem; the problem will be how to parse through and select from accurate but not necessarily useful information… which is one of those 1.2 percent problems that a human with a JD has to handle.
And this complaint cries out for that JD-trained editor. It carries on and on, offering preemptive motion to dismiss responses that aren’t pertinent at all in an initial pleading.
But this is a product of the plaintiffs’ methodology, which asked the algorithm to review the initial AI complaint and prepare an AI motion to dismiss, and then to pretend to be a judge, evaluate that motion to dismiss against the complaint, and integrate it all in.
I ran this simulation many times, and the last “judge” denied the motion 0/10 times.
From context I think he meant to say “granted the motion,” but I will say that I also think the judge will deny the inevitable motion to dismiss “0/10 times.”

But along the way, the complaint highlights some… important facts.
Dunford asks, “10: Oh my god, these utter muppets are trying to reverse-pierce out of their own corporate veil” and answers…

See, now a lawyer might’ve had thoughts about this case based on that allegation. Or the related reason why they might want to reverse pierce their own veil…
Generative AI could well be a revolutionary technology for the legal industry, but it’s not going to do that by replacing core lawyer duties. Not just because that raises serious ethical concerns, but because AI is simply never going to be smart enough to do that.
What we see from AI right now is pretty much as good as it’s going to get. That doesn’t mean it won’t get better at executing tasks with refinement, but as the march of technology goes, we’re not talking about getting from Kitty Hawk to the moon; we’re talking about toilet paper being slightly softer than it was in the 50s.
A report prepared by Goldman Sachs revealed that even AI enthusiasts are admitting that linear improvements will require exponential increases in training investment. That’s not sustainable, and not a viable path to AI running complex litigation.
Without some exogenous advancement like quantum computing or viable fusion power to cure the energy drain, generative AI may get better at what it does, but it’s not going to do much more than it does now… which is still a massive, potentially indispensable time-saving tool for trained lawyers, but it’s not a replacement. Nor is it an access to justice tool that will give pro se litigants a free robot lawyer. Maybe for routine traffic infractions.
But the access to justice potential (for litigation) in generative AI isn’t in helping people deal with their legal problems; it’s in helping people realize that they have legal problems. A lot of injustice happens because people don’t know if they have a case and aren’t willing to spend money to find out. AI can tell someone wondering about their plight, “Yeah, actually, that might not be legal and you should feel confident calling someone about that.”
Unfortunately, until we square our expectations around what AI is actually capable of accomplishing, we’re going to see more of this mess in the courts.
Announcement of the first o1 pro guided Federal litigation [Reddit]
Earlier: Generative AI… What If This Is As Good As It Gets?
Elon Musk Feeds AI ‘All Court Cases,’ Promises It Will Replace Judges Because He’s An Idiot
For The Love Of All That Is Holy, Stop Blaming ChatGPT For This Bad Brief
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.