
Policy Lab: Generative Artificial Intelligence and the Future of Creativity

Michael Lind Menna

04 Apr 2024

Introduction

Last October, the Movement for a Better Internet hosted a virtual policy lab on generative artificial intelligence (AI) and the future of creativity, featuring members and representatives from Organizing Partners Creative Commons, Internet Archive, and Public Knowledge. A policy lab is a process through which members of a movement come together to reflect on specific issues and develop policy proposals or campaigns. The goal of this lab was to find alignment on the most urgent possibilities, problems, and policy interventions surrounding AI and human creativity. More broadly, we also wanted to consider how civil society can help sustain the commons and make the internet safer, kinder, and more equitable in an AI-integrated age. This blog post reports on where and how participants reached consensus: on the opportunities and dangers that AI poses, the ideals that AI policy should pursue, and the interventions that might make a positive impact for those in need. It also lays out next steps, so that members of civil society know where and how to act and where to listen for new developments.

After warm introductions, the policy lab proceeded in two breakout sessions, in which attendees split into five separate groups for discussion. The first session staged conversations about the core values that should guide the ongoing expansion of domestic and global AI infrastructure, as well as the risks that a failure to cultivate those values might create. In the second breakout session, attendees prioritized and discussed practical actions and interventions that might help ensure that AI development proceeds with the public interest top of mind.

Breakout 1: Values and Risks

During the first set of discussions around the values and risks of generative AI, participants were asked to dwell not only on the advantages and disadvantages of this new and exciting technology, but also on who is positioned to benefit and who to suffer in the next phase of its development. In this sense, equity and pluralism were built-in considerations, and it is no surprise that all five groups touched in some way on the need to ensure that everyone can experience and benefit from generative AI on their own terms. By the same token, each group also raised concerns that AI economies might give way to high concentrations of power, in which the technology and its data resources are sustained and developed to serve a few select corporations at the expense of the broader population.

When asked about the values that should accompany the ongoing effort to build out and integrate generative AI infrastructure, the groups’ answers converged on three essential principles:

  1. we need to preserve the existing body of knowledge and people’s unfettered access to it;

  2. we need to ensure users’ freedom of speech and expression, as they mine and build out that body of knowledge; and

  3. we need to cultivate a reciprocal relationship to generative AI, wherein human communities and their creative capacities are enhanced and not replaced by the technology. 

These three principles are deeply interrelated, of course: generative AI is trained upon existing data, and it should not compound that data’s existing biases in the material it produces. By securing people’s ability to contribute meaningfully to the body of shared knowledge, in journalism, art, music, and so many other cultural industries, we also ensure that this assistive technology grows in a way that enhances human creativity as much as human efficiency.

Naturally, in conversations about the risks posed by generative AI, the groups’ answers tended to revolve around ways in which the status quo might beget an unregulated or improperly regulated technological infrastructure, limiting people’s freedom to see, use, and build on trustworthy information. One group described how popular media’s familiar doom scenarios often skip over the real-world processes that could bring those darker timelines about: namely, the concentration of market and capital power in a few wealthy corporations, whose incentives in maintaining or restricting generative AI could deviate from, or entirely contradict, the needs of the public. The consequences of these familiar imbalances are far-reaching and potentially devastating to the values outlined above. As currently wielded, regulatory structures like copyright could bias generative AI’s inputs and outputs if they are understood to disallow the technology’s training and expressive applications. This kind of prohibition may at first appear to benefit the human creators (and their corporate backers) who have raised alarms about generative AI’s threat to their livelihoods. Nevertheless, it rests on a narrow and draconian view of copyright that is not only out of line with historical understandings of fair use but also anathema to the project of letting humans freely create with the materials they need.

Breakout 2: Policy Interventions

In the second breakout session, policy lab attendees were asked to consider the regulatory interventions and practical actions that might improve the developing systems and habits around generative AI. Each breakout group was asked to consider one of five potential avenues of impact, as well as the parties best positioned to bring that change about without incurring unwanted threats or challenges. Each avenue bore on one or both of two processes, training AI on inputs and using it to generate outputs, and the five were: (1) new laws and regulations; (2) preference signaling; (3) legal tools; (4) technical improvements; and (5) stated ethical principles. Notably, participants seemed less interested in efforts to enact new laws and regulations than in the other avenues. This reluctance reflected understandable frustrations about Congressional gridlock, but there was also a sense that legislators, at this early stage, are more likely to focus on the wrong regulatory priorities instead of more general technology regulation (say, in data protection) that would do more to ameliorate concerns about AI.

Preference signaling refers to the possibility of giving the creators and rights holders of works used as AI inputs the opportunity either to opt into that use or to establish restrictions on how AI may train on their work. Members of this breakout group recognized that, while this sort of system might sound appealing, it raises several practical and ethical questions. First, participants expressed doubt about whether it is possible to ‘retrain’ an AI model to exclude particular works without starting from scratch. Second, even if engineers were able to create a standard by which AI could ‘skip’ certain inputs, this permissions-based culture could bias generative AI in favor of entrenched corporate stakeholders, at the expense of the fair use rights of any artist who studies another’s work as a training exercise.
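To make the opt-in/opt-out distinction concrete, here is a minimal sketch of how a training pipeline might filter works according to declared preferences. Everything in it, from the Work record to the training_preference field, is a hypothetical illustration rather than an existing standard; the default parameter captures the policy question at stake, namely whether unlabeled works are treated as opt-out (training allowed by default) or opt-in (training denied by default).

    # Minimal sketch of preference-aware filtering of AI training inputs.
    # All names here (Work, training_preference, filter_for_training) are
    # hypothetical illustrations, not an existing standard.
    from dataclasses import dataclass

    @dataclass
    class Work:
        url: str
        text: str
        training_preference: str = "unspecified"  # "allow", "deny", or "unspecified"

    def filter_for_training(corpus: list[Work], default: str = "allow") -> list[Work]:
        """Keep only works whose declared preference permits training.

        default="allow" treats silence as opt-out (creators must say no);
        default="deny" treats silence as opt-in (creators must say yes).
        """
        kept = []
        for work in corpus:
            preference = work.training_preference
            if preference == "unspecified":
                preference = default
            if preference == "allow":
                kept.append(work)
        return kept

    # The same corpus yields different training sets under each regime.
    corpus = [
        Work("https://example.org/a", "sample text", "allow"),
        Work("https://example.org/b", "sample text", "deny"),
        Work("https://example.org/c", "sample text"),  # no stated preference
    ]
    print(len(filter_for_training(corpus, default="allow")))  # 2 under opt-out
    print(len(filter_for_training(corpus, default="deny")))   # 1 under opt-in

Notably, a filter of this kind only helps before training begins, which is precisely the group’s first worry: removing a work from an already trained model is far harder than removing it from the corpus.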

Legal tools are one means of enforcing creators’ and rights holders’ stated preferences, and members of that breakout group discussed whether and how some legal design, whether in the form of a licensing agreement or platform terms of use, might implement those preferences in a desirable way. Setting aside the policy question of whether it would be a good or valid application of copyright to create this sort of system, this group noted that the benefit of legal tools lies in their flexible directionality. Just as CC licenses empowered creators who wanted their work to be shared openly, it may be possible to articulate a similar opt-in structure that allows creators to make their own works, AI-generated or otherwise, available for training and other public uses.

Technical improvements offer another means of implementing preferences, and of maximizing the value of generative AI as a whole. These could implement creator and user optionality, but, much more importantly, they may also help address other, more pressing concerns, such as labeling and tagging AI-generated products to ensure the reliability of information and greater transparency around data usage and privacy standards. Another possibility would be to train large language models on books that are now withheld due to copyright protections, restrictions that, again, may unduly create barriers to what in the material world would be considered fair use. Advocating these kinds of technical improvements may offer an opportunity to show lawmakers the need for open, unbiased data access in training generative AI.
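As one illustration of the labeling idea, the sketch below attaches a machine-readable provenance record to a generated output. The record structure and field names are assumptions made for the sake of example; real provenance standards, such as C2PA, define far richer, cryptographically signed manifests.

    # Minimal sketch of labeling AI-generated output with provenance metadata.
    # The record structure and field names are illustrative assumptions;
    # real standards (e.g., C2PA) define richer, cryptographically signed manifests.
    import json
    from datetime import datetime, timezone

    def label_output(content: str, generator: str) -> dict:
        """Wrap generated content in a machine-readable provenance record."""
        return {
            "content": content,
            "provenance": {
                "ai_generated": True,  # the core transparency flag
                "generator": generator,  # which system produced the content
                "created_at": datetime.now(timezone.utc).isoformat(),
            },
        }

    record = label_output("A short AI-written caption.", "example-model-v1")
    print(json.dumps(record, indent=2))

Some shared, verifiable format along these lines is what could let downstream platforms and readers distinguish AI-generated material at scale.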

Last but not least, another way of articulating shared values and improving current practices surrounding generative AI may be to promote and institute standards bodies, professional organizations, or some other set of reviewable ‘best practices’ oriented toward stated ethical principles. Participants in this breakout group noted that many industries, such as medicine and law, regulate themselves by establishing a set of practical and moral norms. Unfortunately, it was also noted during discussion that these guild-like models can be defensive and protectionist in ways that limit their benefits to society. For this reason, everyone agreed that regulators and civil society members should have a prominent seat at the table in any current and ongoing discussions about AI ‘best practices,’ whether regarding the training data fed into a model or the outputs users make with it.

Conclusion and Next Steps

This policy lab on generative AI and the future of creativity was a first for the Movement for a Better Internet and its Organizing Partners, and it yielded some key insights. First and foremost, participants were able to find meaningful overlap in their values and fears about generative AI and its current regulatory trajectory. Second, and unsurprisingly, they learned the extent to which this area’s many complex problems and proposed solutions demand more time and deliberation. Based on the insights from this policy lab, the three attending Organizing Partners are considering alternative methods for movement members to engage, including virtual “pop-up” formats on more bite-sized topics. These events may allow for more focused discussion, but participants should still refer to the values, risks, and possible legal interventions articulated here, in the Movement for a Better Internet’s first Policy Lab on Generative AI.