I see trees of green, red roses too. I see them bloom for me and you. And I think to myself, what a wonderful world.
— Louis Armstrong
There is a school of thought that AI and its use have gone about as far as they can go for now. The theory is that future applications, especially in the workplace, will basically be minor iterations of what we have now.
In the legal world, this thinking morphs into the idea that AI won’t significantly impact what lawyers do because it can’t do the work that really matters to clients. Or that AI is nowhere near matching the intuition and gut instincts of experienced lawyers and won’t be any time soon.
Of course, not everyone agrees. Just yesterday, January 6, Sam Altman, the OpenAI CEO, stated in a blog post that OpenAI knows “how to build AGI (artificial general intelligence) as we have traditionally understood it.” Altman also predicts that AI agents capable of autonomously performing certain tasks may start to “materially change the output of companies” this year.
Jensen Huang Keynote
That same school of possibility thinking was expressed last night in the opening CES keynote by Jensen Huang, CEO of Nvidia. Nvidia is the world’s largest semiconductor company and a dominant AI hardware and software supplier.
Huang is a popular speaker mainly because he has mastered the ability to talk to a massive audience as if he were speaking to you across the kitchen table. (All the public speaking guides tell you to do that, but it’s easier said than done.)
Huang did it for over an hour last night, captivating the audience even though he sometimes talked in technical terms that were over my head.
Huang traced the evolution of AI and Gen AI, which now understands, translates, and generates images and text. He explained how neural networks and machine learning are advancing. He showed an AI-generated video that was completely realistic and indistinguishable from a real-life video shot with real people.
Huang told us the AI video was made by inputting a limited number of pixels, from which the AI program inferred the pixels that needed to be added to generate the finished content.
Huang explained how the amount of data available to AI programs will increase exponentially over the next few years. This increased data can then be used to better train the AI models, enabling them to do more and more.
Humans can also reinforce this increased learning, leading to even faster AI growth. And, Huang theorized, the AI program itself would learn how to improve itself.
“In the future [AI] is going to be thinking. It’s going to be internally reflecting, processing. … And it’s interacting; it’s taking the problem you gave it, breaking it down step by step.”
Huang believes we are just beginning to see what sophisticated AI can do. We are moving, he said, from Gen AI, where computers create content, to agentic AI, where AI agents can actually do things without being given detailed instructions.
These agents will become invaluable digital employees who will do things on our behalf with little prompting. They will serve as research assistants, create and act on sophisticated weather forecasts, analyze traffic and make decisions about it, and monitor manufacturing processes, for example. Huang did not say what the humans would be doing, by the way.
In the legal world, AI agents could do such things as better automate legal research, undertake drafting, or even formulate litigation strategy.
The next step, according to Huang, will be physical AI, where AI understands the physical world and how objects interact. He gave an example of a ball rolling off the table, and opined that physical AI would understand that the ball didn’t disappear but simply fell to the ground.
Developing this kind of AI requires massive video input and investment in training. But it will lead to considerable advances in robotics as machines understand the physical world much like we humans do now.
What Does All This Mean?
I’ve learned over the years to take much of what’s said at CES with a grain of salt. All too often, it’s wishful thinking designed to get attention more than reflect reality.
But Huang and Nvidia have an imposing track record that cannot be taken lightly. If they are right, where does it leave us?
In particular, where does it leave the legal profession, which for years has touted and relied upon human communication, persuasion, and interaction? Our business is believed to be a particularly human one, for better or worse.
A Wonderful World?
I worry about what will happen as the distinction between what is real and what is computer-created blurs. What happens when it really no longer matters whether something is real or not? What does that do to evidence? What does that do to fact-finding?
Are we prepared for a world where a computer can come up with a better solution than a human? Where an algorithm can reach a faster and better result than a human judge or jury? Where an AI program can construct a more compelling argument than a real-life lawyer?

What will happen if legal AI agents can do much of the work that keeps law firms’ personnel busy? What will lawyers be doing in five years? Ten?
As I sit at CES this year and listen to AI’s possibilities, I can’t help but think that legal is a bit like the proverbial ostrich sticking its head in the sand. We stew about AI. We try to demonstrate why it can’t and won’t work in legal. We try to convince ourselves that AI has gone as far as it can go so we don’t have to worry that our law cocoon might soon burst.
Maybe the naysayers are right. Perhaps change won’t happen. But if there is even the slightest chance that Huang and the other AI evangelists are right, the changes AI brings, and brings quickly, could upend our profession from top to bottom.
And other than folks like Cat Moon at Vanderbilt University, Andrew Perlman, dean of Suffolk University Law School, and David Wilkins at Harvard Law, there seem to be precious few thinking about what those changes might mean.
To paraphrase Steve Jobs, AI has no respect for the status quo. You can quote it, disagree with it, glorify or vilify it. But the only thing you can’t do is ignore it, because AI will change things.
Stephen Embry is a lawyer, speaker, blogger and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.