AI art is generating a lot of controversy over its implications for working artists.
What will it do to employment prospects in the arts? What about the copyright implications?
What about all of the art that is used to train these models?
All of those questions are important things to think about.
I for one think that the fears of human artists being fully displaced by automation are a bit overstated.
I don't think it will displace artists as much as people worry; it will be just another tool used to create art.
It may reshuffle the deck a bit, putting some people out of business and bringing others in, but it won't be as much of a sea change in that regard as many think.
However, what worries me the most about the increasing role of AI tools is their closed nature.
As these increasingly sophisticated AI models do more and more, not just in the field of art but in every aspect of our lives, it's crucial that these tools are open and accessible to everyone.
Unfortunately, that is not the case with most of these tools. Currently, the AI models and their outputs and inputs are owned by just a few companies, leaving most users locked out.
I have a strong concern that this will concentrate the art market, displacing the decentralized infrastructure and ecosystem of small-business artists with a much more centralized art world, dominated by a few companies that provide tools that play an increasingly critical role in creating art in the modern world.
The majority of the significant recent generative AI models are proprietary, from AI music generators to tools like GPT and MidJourney.
These tools are not even available for use on your own computer; instead, you have to send your inputs to be processed on a cloud server owned and maintained by the authors of the AI model.
Even the few models that are source-available (and even marketed as open source), like Stable Diffusion, are not fully free and open source.
One reason these models are not free and open source is what some sources call "toxic candy models."
As per this memo by a contributor to the Debian Linux distribution, written in regard to determining which AI software should be included as FOSS, these are models where the algorithm's weights and other parts are a complete black box, and you only receive the final output of the model generation process without information on how it was generated.
This includes models based on data/input scraped from the internet.
This results in situations where the art used to create the final model is usually proprietary, and the legality of even doing this scraping is in dispute.
And of course the companies can't distribute that art to anyone who wants to modify or fully understand the model.
They can't provide a full list of every bit of art they used.
So you as a user, if you want to fundamentally modify what's fed into those models, or to see what's fed into them and figure out where the model gets what it gets from, you fundamentally can't under the current ecosystem.
Access to the data is necessary for users to fully understand, modify, and use the model, and to build their own versions based on it.
I think that this issue is adjacent to one of the more plausible arguments that the models should be considered a derivative work of the input art – but I am not sure if I endorse such an argument.
A concerning trend is for companies producing source-available AI models to release them under licenses that are not free and open source and that do not meet standard guidelines for open-source licensing, such as the FSF free software definition, the OSI open source definition, or the Debian Free Software Guidelines.
The most notable of these licenses is the Responsible AI License (RAIL), which imposes restrictions on how users can use the output generated by the tool.
This is similar to proprietary companies that claim a copyright interest in the output of their programs.
This is a departure from the open-source community's consensus that the software developer does not have ownership over what people use the software for – despite the fact that some of the companies involved still claim to be open-source friendly.
There is a movement in the software industry, particularly in the AI world, for developers to dictate what users can do with their software.
This mindset and movement asserts that the developer of the software has, in terms of both moral obligation and right, and in terms of the legal ability to enforce it, the duty and ability to dictate what users do with the software.
https://facctconference.org/static/pdfs_2022/facct22-63.pdf
(From the Abstract) A number of organizations have expressed concerns about the inappropriate
or irresponsible use of AI and have proposed ethical guidelines around the application of such systems.
While such guidelines can help set norms and shape policy, they are not easily enforceable.
In this paper, we advocate the use of licensing to enable legally enforceable behavioral
use conditions on software and code and provide several case studies that demonstrate the
feasibility of behavioral use licensing.
(From Pg 4) In this paper, we seek to encourage entities and individuals who create AI tools
and applications, to leverage the existing IP license approach to restrict the downstream
use of their tools and applications (i.e., their “IP”).
Specifically, IP licensors should allow others to use their IP only if such licensees agree
to use the IP in ways that are appropriate for the IP being licensed.
While contractual arrangements are not the only means to encourage appropriate behaviour,
it is a mechanism that exists today, is malleable to different circumstances and technologies,
and acts as a strong signaling mechanism that the IP owner takes their ethical responsibilities seriously.
This has the potential to spread beyond the AI world and impact the norms of the software industry as a whole.
This mindset is in blatant contradiction of not just the norms of the open-source community, but also the old norms of the software industry as a whole.
The expansion of copyright for AI technology is a big concern. The RAIL license, used by Stable Diffusion among others, is an interesting and notable case.
The developers behind this license believe it is necessary to prevent harmful and irresponsible uses of their products, and they believe that AI technology has a lot of potential for misuse.
They argue for the need to come up with a legally enforceable mechanism to limit potentially irresponsible uses.
https://www.licenses.ai/
Responsible AI Licenses (RAIL) empower developers to restrict the use of their AI technology
in order to prevent irresponsible and harmful applications.
These licenses include behavioral-use clauses which grant permissions for specific use-cases
and/or restrict certain use-cases.
In case a license permits derivative works, RAIL Licenses also require that the use of any
downstream derivatives (including use, modification, redistribution, repackaging) of the
licensed artifact must abide by the behavioral-use restrictions.
However, I do not agree with the use of copyright as a means to achieve this.
AI art generation may do different things than traditional art methods, but it's not as much of a game-changer as some people claim.
AI is just a buzzword for things that seem computationally practical based on everyday experiences, but where practical algorithms are new or nonexistent.
Today's AI techniques will become tomorrow's conventional art techniques, and software tools for modifying and creating art, such as Photoshop and GIMP, have existed for a long time.
These AI tools are just an extension of digital art.
Artistic controversies, such as whether or not something is real art, have arisen before with new forms of art, such as photography.
AI art is just another method of art that uses technology to probe and sample an extrinsic space outside of the artist's mind, similar to how photography creates art by sampling from the physical environment.
In both cases, the artist's creativity comes from knowing what to sample and how to sample it, making the creation of novel art possible.
Both conventional art methods and the new AI art have a lot of the same ethical issues.
For example, one that gets mentioned a lot is the ability of AI art to potentially create fake media.
Images that look like they're of a real person or of a real event, but aren't actually representative of the world.
However, traditional means for visual art also have lots of ways to be misleading, manipulated, edited, and staged in a way that doesn't reflect the real world.
People often overestimate how accurate visual arts, especially photographic arts, are at truly representing the world.
The new technology driving new ways to manipulate and generate imagery may reset the social environment around visual arts to something that's actually more healthy and representative of not just what AI art is, but what visual art has always been.
The extreme (but unlikely) case might be when fakes become so common that the only way to trust an image is to know where it came from, its history.
This would reduce visual imagery to how it was perceived before modern photography became widely available, when you had to trust the testimony of the artist or of whoever you got the image from.
Every medium of art or expression has the ability to mislead and be misused, and the mechanisms that society has to limit that misuse don't need to change with this new technology.
The needed legal mechanisms, such as defamation law, already exist to limit the use of faked images to lie about someone.
Attempting to bring copyright into what's been traditionally handled by defamation law is an attempt to rewrite the balance.
Copyright carries different, often more extreme, penalties than society has conventionally seen fit to impose for such harms.
And how society has deemed it proper to handle things like lying about people or deliberately misleading them has been constructed by the process of democracy and centuries of societal experience, to optimize various societal trade-offs.
It balances the negative social effects of disseminating potentially dangerous content, or of damaging people's reputations, against the importance of freedom of expression.
This type of rulemaking is fundamentally anti-democratic and technocratic, as it appoints those who write the license and push the rules as arbiters of how society should handle these risks.
It also doesn't take into account the ways in which humans can fail, sometimes more than machines can fail.
For example, traditional human forensic methodologies can also be very inaccurate, yet still entered as evidence.
The use of AI technology raises many important questions about its potential misuse and accountability.
But it is not necessarily true that AI technology is worse than humans in many cases often discussed.
For instance, consider the process of creating a sketch of a suspect.
A witness description could be interpreted by a human sketch artist or by an AI model; both produce interpretations, not the ground truth.
The AI system may even come up with an equal or better interpretation than the human.
It is crucial to have a wide social debate about the trade-offs of AI and where its limits lie.
When is AI better than humans, and when has society already gone too far in trusting human methods?
AI has many of the same limitations as humans, but it may demonstrate those limits in a way that prompts society to reconsider its past decisions and to be more responsible with both human and automated decision making.
There is also the issue of accountability, especially when it comes to the normal legal system.
A top-down institutional approach to limiting technology has much less accountability to the public and lacks a wide range of perspectives, leading to less legitimate and often worse results.
I believe this mindset could spread throughout the software industry, including to places where it would be very dangerous.
If this idea of the social responsibility of companies and developers to restrict their users becomes more widespread, it would rewrite the balance of power between software companies and consumers in favor of the companies.
Imagine if this mindset were applied to conventional tools.
Imagine a world in which Microsoft is treated, both in terms of legal power and in terms of generally perceived ethical responsibility, as responsible for what a writer does with Microsoft Word.
Or one in which Adobe is considered responsible in the same way for what an artist does with Photoshop or Illustrator, and so on.
It would no longer be a world where you can do what you want with a piece of software that runs on your computer.
Someone else, someone with limited accountability to you, would have a lot more power over what you can do on your own computer.
The companies who make the software you use would have more power over what you can do with their software, and this change could make the world a much worse place.
A point raised in the previously linked discussion of responsible AI licenses is the idea of authorial integrity over software.
The idea is that the developer or the company that produced the software holds a mindset and vision that should influence what users do with it.
It is contended that this artistic or authorial vision should also affect everyone downstream.
Under this view, using the software in a way that is not part of that vision essentially violates the rights of the author, the developer, or the company.
https://facctconference.org/static/pdfs_2022/facct22-63.pdf
(Pg 2) The context in which a model is applied can be far removed from that which the developers
had intended, a major point of concern from the perspective of human-centered machine learning
[31] … applications that may be of concern, such as large-scale surveillance or the creation of “fake”
media.
In some cases, the developers or technology creators may legitimately want to control the
use of their work due to concerns arising out of the data that it was trained on, the technology’s
underlying assumptions about deploy-time characteristics, or the lack of sufficient adversarial
testing and testing for bias.
This is especially true of AI models that are difficult or expensive to recreate.
For example, given that models such as GPT-3 [17] reportedly cost over $10 million (U.S.)
to train, very few organizations are positioned to train (and potentially, need to retrain) a model of similar size.
The mindset that the developer or the company retains control over how the software is used is incorrect.
There is a big difference between functional works and creative works, and software falls into the category of functional works.
Software is essentially a description of a process and a set of instructions, a tool that is used to guide a method.
It's like a recipe, or a textbook telling you how to mix the paints to get a color; it's not the painting that uses that color.
Control over the software used to make art is fundamentally control over a method, over a technique that's represented by that software.
A work of art is a final product that can stand on its own, a work that's enjoyed by itself.
In that case, an artist can have an actual creative vision that's put through into their art.
And I think that doesn't work for a tool like software.
The paper raises the cost of creating the software as a reason for preserving the vision, but I believe that considering the cost of software development moves things in the opposite direction.
In the art world, there is potential for substitutes, for other artists to come in and make a work of art that reflects their vision without necessarily needing to modify or use what another artist has done.
The resources available to make art are often common enough or inexpensive enough that many visions of what art should be can coexist with each other.
You can have many artists creating many works, and each of those works with their own vision.
But when you get a software program that costs tens of millions of dollars, even hundreds of millions of dollars, to produce, a normal person can't step into that competition; they can't step into the creative process around developing software.
With software, the cost of production is so high that a normal person cannot compete in the creative process of developing software, giving the developer or copyright holder a lot of power over society.
Once you include the case of interlinked supply chains, programs that are dependent on other programs, the entire tech stack would have to be rebuilt from the ground up to have a different vision, which is infeasible even for the wealthiest person on the planet.
This is why the freedom to use and modify software and expand upon it is important and critical.
Asserting that copyright holders, companies, or software developers have the right or the obligation to restrict how software is used is very dangerous.