ChatGPT has been blamed for everything from embarrassing moments to having to argue to a judge why he shouldn’t impose sanctions on you. But it is a lot rarer to hear about ChatGPT being taken to court.
One AI hallucination error looks a little too much like accusing an innocent man of murder. The case centers on the importance of privacy laws in a rapidly moving, fact-check-light world. PC Gamer has coverage:
A Norwegian man called Arve Hjalmar Holmen recently struck up a conversation with ChatGPT to see what information OpenAI’s chatbot would offer when he typed in his own name. He was horrified when ChatGPT allegedly spun a yarn falsely claiming he’d killed his own sons and been sentenced to 21 years in prison. The creepiest aspect? Around the story of the made-up crime, ChatGPT included some accurate, identifiable details about Holmen’s personal life, such as the number and gender of his children, as well as the name of his home town.
Large language models are like autocorrect’s much beefier and better-at-data-aggregating cousins. Despite how often it can seem that ChatGPT or similar LLMs are correctly answering the questions posed to them, the results, much like with autocorrect, can also feel a lot like your computer correcting the word “duck” to “duck” when you really wanted to say “duck.”
The response to Holmen’s question may have just been an extremely low-probability fluke, but the fact that it looks like someone using personal information to make their defamatory claims look more realistic is terrifying.
It is bad enough that we have fake news coming from humans, but the prospect of easily accessible AI that can throw dirt on anyone’s name is a burden no one should have to bear.
The threat of litigation could be enough, at least initially, to shed some light on the black-box processes that spit out reputation-harming nonsense like this.
Noyb, a privacy rights group, took an interest in Holmen’s case and filed complaints to get the personal information that OpenAI may have used for Holmen scrubbed from its data reserves. But they’re doing so under the GDPR. Lefty states like California may have things like the California Consumer Privacy Act to protect their citizens’ information online, but many states are far behind when it comes to privacy protections and enforcement mechanisms.
Does it inspire tee-hees when Grok decides to name Elon Musk as the greatest spreader of misinformation and a Russian asset? Yes. But what if a quick re-figuring of the black box makes it easy for Musk or one of his big-ball lackeys to farm misinformation using AI without consequence? Not so tee-hee.

Chris Williams became a social media manager and assistant editor for Above the Law in June 2021. Prior to joining the staff, he moonlighted as a minor Memelord™ in the Facebook group Law School Memes for Edgy T14s. He endured Missouri long enough to graduate from Washington University in St. Louis School of Law. He is a former boatbuilder who cannot swim, a published author on critical race theory, philosophy, and humor, and has a love for cycling that occasionally annoys his peers. You can reach him by email at [email protected] and by tweet at @WritesForRent.