
Sex, Lies, And Deepfakes: CES Panel Paints A Scary Portrait

The 2025 CES trade show in Las Vegas. (Photo by Zhang Shuo/China News Service/VCG via Getty Images)


Lies. Scams. Disinformation. Misinformation. Voice cloning. Likeness cloning. Manipulated photographs. Manipulated videos. AI has exploded the possibilities of all these things to the point that it’s almost impossible to trust anything. Lack of trust has enormous implications for lawyers, judges, and the way we resolve disputes.


And if you believe a Thursday afternoon CES panel presentation entitled Fighting Deepfakes, Disinformation and Misinformation, it’s likely a problem that will only get worse and for which there are precious few solutions.


The Bad News

A year ago, it was relatively easy to tell if a photograph had been substantially manipulated. Today, according to the panelists, it’s next to impossible. In a year, the same will be true of manipulated or AI-generated fictitious video. Right now, it takes the bad guys about six seconds of audio to clone a voice so well that it’s hard to tell the difference, and that time will only get shorter.


The bad guys are only going to get better. Add to this the fact that, according to the panel, we are accustomed to assuming that a photograph, video, or even audio recording is what it purports to be. Camera, video, and audio companies have spent years convincing us this assumption is valid.


Finally, as we begin to use AI-generated avatars, digital twins, and even AI agents of and for ourselves, it will get worse: the bad guys won’t have to create a fake; we will do it for them.


What’s to Be Done?


The panel talked about solutions, none of which struck me as that great. First, there is detection. Sophisticated tools and analyses can be used to attempt, with varying success, to detect deepfakes. The problem, though, is similar to the one the cybersecurity world faces: the bad guys can figure out ways to avoid detection faster than we can figure out how to detect the fakes. Yes, tools to detect fakes do exist. But those tools will always lag behind the ability of deepfake producers to elude detection. In addition, forensic tools and experts are expensive, giving the bad guys more opportunity. And there are a lot more bad guys than forensic experts.


The second way to combat the problem is referred to as provenance. Provenance is a way to determine where the object in question came from and what data was used to create it. It informs and/or labels any object that may have been manipulated. Watermarks are perhaps a familiar example. The idea is to create something like the nutrition labels on foods.

But again, the panelists noted that provenance examination and labeling don’t always work, since the bad guys will always be a step ahead of the game and can erase or hide the information. Provenance doesn’t completely solve the problem in any event, particularly when, as in a court of law, accuracy counts. Provenance may tell you that a photo may have been manipulated, but it won’t necessarily tell you for sure whether it has been, or how. (Keep in mind that with photos, for example, some level of manipulation may be acceptable or even expected. The issue is when the process creates an altered or fictitious image.) So the question remains subject to debate.


Where did the panelists come down? Detection and provenance need to be used together to achieve the maximum chance of success. I didn’t get a warm and fuzzy feeling from this solution, though.


So What Are Lawyers to Do?


Deepfakes pose tough questions for lawyers, judges, and juries. For lawyers and judges, while we may want to believe what we are seeing, we now have to accept that we can’t. We can no longer assume that something is what it purports to be. We have to view evidence with new, more critical eyes. We have to be prepared to ask tougher evidentiary authentication questions. Authentication can’t be assumed. It is no longer the tail wagging the proverbial dog. It may be the dog.


One thing the panelists did agree on: you can’t determine if something is fake just by looking at it or listening to it. So we have to ask questions. We may have to use experts.


We have to keep abreast of the tools available to question authenticity; we have to keep abreast of the tools and strategies the bad guys are using.


The panelists offered some help, using what they called the “human firewall” to ferret out deepfakes. We need to ask questions like: Where did the object come from? What is the credibility of the source? What is the motive of the object’s provider? Does the object depict something that is consistent with the remaining evidence, or is it in stark contrast? Is the photograph consistent with other photographs from other sources?


In short, we have to treat those attempting to authenticate evidence the same way we treat substantive witnesses.


Judges, too, have a significant role. They need to understand the threat. They need to know that authenticity can’t be assumed and that it matters. They, too, have to keep abreast of what’s happening with AI and deepfakes and what the threats are in real time. They need to know that “letting the jury decide” is not a solution.


We need more and better rules for assessing evidentiary credibility. Just as Daubert was a watershed case for ensuring the credibility of expert witnesses and evidence, courts need definitive guidance in the rules as to how to assess deepfake issues.


The public from which juries are drawn needs to be constantly educated about the threat so that jurors, too, can take evidence that comes to them with a grain of salt if the court does not make the determination.


Is This Realistic?


Despite these potential solutions, it’s hard not to be pessimistic. Precious few resources are allocated to our court systems already. It’s hard to see legislatures providing the funds necessary to better educate judges on deepfake issues. The expense of experts and forensic analysis will place less well-heeled litigants at a disadvantage. It will be hard to convince people that they can’t believe what they see when they have been conditioned to do so.


And with today’s polarization of political beliefs and ideologies, it may be hard to convince people that something is fake if they want to believe the contrary. As lying and misinformation become more prevalent, litigants and even lawyers may be more and more tempted to use deepfakes to justify what they believe and want.


Put all this together, and I’m fearful of what technology may do to our cherished legal institutions. I’m generally an evangelist when it comes to technology. Sometimes, though, shiny new objects turn out to be nothing more than a bucket of shit.




Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.