By: Practical Guidance
Corporate investment in generative artificial intelligence (Gen AI) technologies continues to accelerate. Average Gen AI budgets grew by 30% in 2024 and are expected to grow by roughly 60% over the next three years, soaring to 7.6% of total IT budgets by 2027, according to a report from Boston Consulting Group.
These investments are leading directly to rapid adoption of Gen AI tools in the workplace. A recent Gallup study of HR executives found that 93% of Fortune 500 companies have begun using AI tools and technologies to improve business practices, and nearly half (45%) of them say their organization’s operational efficiency has already improved because of AI.
There is one potential problem looming amid this exciting tech-driven trend: too often, the legal team is not in the loop. Law360 reported that human resources departments are using AI “while about half of their legal chiefs don’t even know about it,” noting that “these discrepancies among executives pose challenges for effective AI risk management.”
8 Tips for Creating a Comprehensive AI Policy
Legal experts caution that the decision to implement AI in the workplace should be a deliberate and careful one. The risks are too great to rush into adoption of any AI-powered technology simply because competitors are using it or customers are asking about it.
“While use of Gen AI can make it easier to enhance productivity and streamline processes, adopting and implementing such technologies can simultaneously add significant complexity to an organization’s operations, sales, manufacturing and human capital management operations,” wrote Eric Felsberg and Douglas Klein, principals at Jackson Lewis P.C., in a recent LexisNexis Practical Guidance practice note. “And with the emergence of Gen AI, many jurisdictions have issued regulations to guard against its misuse. Consequently, it is important that employers seek legal, ethical and regulatory guidance when implementing AI platforms in the workplace.”
The authors suggest that employers work with their legal team to create a comprehensive AI usage policy that sets the ground rules for the deployment of AI tools in their organizations. Here are eight specific tips to consider:
- Define AI: Clearly delineate exactly what is being addressed by the policy, avoiding overly technical language, so that users have a clear understanding of which specific AI tools are being covered.
- Approved AI Platforms and Use: Establish an approved list of AI platforms to mitigate the risk of employees leveraging any AI platform they come across to complete work tasks. Then create a vehicle through which employees may request the review, vetting and approval of new tools in the future.
- Importance of Confidentiality and Data Security: Include a prohibition against entering any confidential information into any AI platform unless that use is expressly authorized. To avoid the inadvertent disclosure of sensitive information, implement data security measures, including data encryption, access controls and data retention policies.
- Ensuring Accuracy Before Relying on AI Output: Require users to independently verify the output of the AI platform before relying on the content. In addition to the potential embarrassment and liability that could result from using “hallucinated” AI-generated output, there may be violations of applicable laws and ethical rules for certain industries, such as those governing lawyers.
- Intellectual Property Rights: The policy must alert users not to use Gen AI to produce content that may violate intellectual property (IP) rights belonging to the employer, clients or other third parties. Employees need to understand that Gen AI should be used as an “idea generator,” and they must alert others when it was used to generate work product.
- Monitoring for Bias: Employers should ensure that their policy outlines how any AI tools are used when helping make employment decisions about applicants for hire, as well as employees eligible for promotion or facing termination. They should then monitor any tool used for employee selection decisions for evidence of bias built into it (a minimal illustration of one common screening check follows these tips).
- Governance: Establish an internal function dedicated to reviewing, evaluating and approving AI tools; another function that monitors developing laws and regulations to assess their potential impact; and a third function that fields internal questions, requests to use additional AI platforms and reports of violations of the AI usage policy.
- Monitoring and Periodic Updates as AI Continues to Evolve: As technologies continue to evolve and regulations emerge, the AI usage policy should contain a provision advising AI users that the policy is subject to frequent updates. Users must be directed to consult the policy each time they embark on an AI-related project.
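To make the bias-monitoring tip more concrete: the practice note does not prescribe a particular statistical test, but one common first-pass screen is the “four-fifths rule” of thumb from the EEOC’s Uniform Guidelines, under which a selection rate for any group that falls below 80% of the highest group’s rate is generally treated as evidence of potential adverse impact. The sketch below is a minimal, hypothetical illustration of that check (the group names and counts are invented); it is not a substitute for a legal or statistical review of an AI selection tool.

```python
# Minimal sketch: screening AI-assisted selection outcomes for potential adverse
# impact using the "four-fifths rule" of thumb (each group's selection rate should
# be at least 80% of the highest group's rate). Counts below are hypothetical.

from typing import Dict, Tuple


def selection_rates(outcomes: Dict[str, Tuple[int, int]]) -> Dict[str, float]:
    """outcomes maps group -> (number selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}


def adverse_impact_flags(
    outcomes: Dict[str, Tuple[int, int]], threshold: float = 0.8
) -> Dict[str, float]:
    """Return each group's impact ratio vs. the highest-rate group,
    keeping only ratios below the threshold (flagged for closer review)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items() if rate / top < threshold}


# Hypothetical monitoring data for one hiring cycle
outcomes = {
    "group_a": (48, 120),  # 40% selection rate
    "group_b": (27, 100),  # 27% selection rate
}

for group, ratio in adverse_impact_flags(outcomes).items():
    print(f"{group}: impact ratio {ratio:.2f} is below 0.80, review the tool for bias")
```

In practice, a flagged ratio would trigger a deeper statistical and legal review of the tool and the underlying decisions, not an automatic conclusion that the tool is biased.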
Resources from LexisNexis Practical Guidance
The LexisNexis Practical Guidance team has published a Generative Artificial Intelligence Resource Kit, a comprehensive collection of information resources that examine the key legal issues related to the adoption and use of Gen AI technologies. Specific content includes:
- A training presentation with guidance on how AI is impacting employment law and the workplace;
- A practice note that provides an overview of key legal issues;
- A template that can be used to create a workplace policy governing the use of AI-driven tools in the workplace; and
- A tracker that provides weekly updates on federal legislation pertaining to the deployment of Gen AI technologies.
Click here for a free trial of LexisNexis Protégé.