Generative AI: for a better understanding of our practices. A discussion with Arvind Narayanan

Arvind Narayanan, Professor of Computer Science at Princeton University, looks at the challenges of large-scale language models and artificial intelligence, considering cybersecurity, the impact on the cultural and creative industries as well as open source.

What are the major challenges that have emerged because of large-scale language models?

There are four categories of risks that I am particularly concerned about. The first one relates to the economic impact of large-scale language models. Only a few companies will benefit from the profits generated by these models, and the users who helped generate the data are not compensated.

The second set of concerns is about the outputs of generative AI, as on the one hand, malicious actors can use the models to cause harm, and on the other, people who are not informed of the limitations of these models can also harm themselves. For example, when using large language models in medical diagnostics, the user interface is convenient and the answers are very good most of the time. However, these models are not impervious to errors and have real shortcomings in terms of transparency. Several publications have also highlighted the impact of generative AI on access to knowledge and the forming of opinions. I'm thinking of a Stanford publication entitled Whose Opinions Do Language Models Reflect ? and another entitled Co-Writing with Opinionated Language Models Affects Users' Views, which shows that our interactions with language models can change our opinions on controversial subjects. When these models are capable of affecting billions of people, this seems to me to be a major issue we need to consider. For example, in China, language models have to conform to the Chinese political regime, and the Falcon 40B model developed by the Emirate of Abu Dhabi generates responses that are in favor of the government, sometimes to the detriment of the truth.

The third set of concerns that I have is about the power of these specific companies. In the US, OpenAI has been lobbying intensely for licensing. I think that licensing will entrench the power of these specific primarily US-based companies and make it hard for others to compete whether it’s open source, whether it’s international competitors. 

Cybersecurity is the fourth type of risk, which can be approached from two angles: on the one hand, the fact that AI itself can be hacked, and on the other, the fact that it can be used to hack sensitive software or infrastructures. I primarily worry about the first one, especially "prompt injections" (ed. a method of manipulating language models by providing them with malicious instructions), against which companies are generally poorly protected.

There are also many blind spots in the public discussion. A lack of attention is paid, for example, to the generation of non-consensual pornographic deepfakes. This problem affects many women, yet receives little media or academic attention. However, AI can also help address these issues. For the last 10 or 20 years, it has already been the case that software tools are better at finding vulnerabilities than humans are in many cases. In some cases, therefore, it seems possible to use AI against AI by using software to detect flaws in these models. On social networks, it can be used to perform automatic moderation functions to compensate for staff shortages. Regarding the reliability of large language models, we can think of foundation models (tels que ChatGPT) being designed to look up and summarize information when possible, rather than simply generating text, a technique called Retrieval Augmented Generation.

What are your observations on the impact on the creative and cultural industries?

I am not going to make predictions, I think the history of predicting the impact of automation on labor does not have a great  track record. So I'd rather focus on what's happening now.

Let's take the example of the creative and cultural industries. On the one hand, artists have existential concerns about the use of AI in their field. And on the other hand, other people are asserting that AI art is not real art. So why fear AI's takeover of the art market? Empirically, I cannot answer that question. On the contrary, some sectors of the creative and cultural industries are already suffering from automation. As a matter of example, there is a lot of evidence that demand for photo libraries and image banks has fallen since the emergence of image-generating AI applications for the general public.

We also need to be able to measure the real impact of the appropriation of work by AIs on authors, and consider that it does not have to be "all or nothing". AI is having a bigger effect on creators of visual media than textual media, and therefore, a fair solution might be to have distinct copyrights. While there is a risk for authors’ work to be used without their consent, there is a real demand for AI-generated content, with the users being aware that it is made by AI. Watermarking is not going to help with that problem. One good example of this would be the success of Heart on my sleeve, a duet with the voices of Drake and The Weeknd that was created from scratch by a user making use of AI.

What are the strengths that you identify in the forthcoming regulation of AI, particularly about the European AI Regulation, and what avenues of improvement would you suggest?

Regulation is the product of social and political values, and it's always difficult to make judgments about their relevance. Historically, the approach in the US for regulation is sectoral as opposed to a more European horizontal approach. It is very nice to have these two contrasting approaches, we get to generate data on whether or not they are effective. Whatever the approach of the regulation is, however, I don't think it is advisable to apply a principle of non-proliferation of the models, as this would run the risk of reinforcing the monopoly of the major players in artificial intelligence. Speculating on future developments could be more problematic than anything else.

I very much appreciate the EU AI Act because of its focus on transparency. The emphasis on audits is a good thing, but if we want to better understand the risks that these models might pose, audits are only half the answer. The other half is about the way people are using those models, whether or not false results recur as a result of users’ requests, how frequently they occur, the circulation of hateful content, and so on. For example, there are studies on gender bias in image and text generation models. For the most part, they are based on examples of invented prompts, whereas if you ask a model a question in a certain way, the model may produce a biased or toxic result. In contrast to an audit approach, which only provides information on how the model works, I believe it is vital to encourage greater transparency on how people use these models so that we can anticipate the problems we may face in the future. The transparency of use makes it possible to understand whether, in practice, the integration of racist, sexist, and gender biases into general-purpose AI is a real problem. In this way, it can inform regulators about what aspects need regulation, meanwhile challenging speculation about the risks.

Moreover, there are many actors in the AI supply chain, and therefore not all the responsibility has to focus on one single entity. Consequently, it is important to apply a layering logic for general-purpose AI systems. The possible risks of such systems will differ depending on whether the product is used to produce medical diagnoses, for example, or is dedicated to something else. I think it makes sense to consider different regulations for different uses of a product. 

Finally, we need to start thinking about how children use AI. They are tomorrow's users, and artificial intelligence will shape their world much more than it has shaped ours. For the time being, AI companies are refusing to accept any responsibility in this respect, simply stating that children must not use their products, which of course has no effect on their use in practice and exposes them to danger. I advocate for an approach where we encourage children's use of AI, as long as we impose requirements on providers to put guardrails in place. The goal is to empower tomorrow's users.

To what extent do you share the enthusiasm for open-source AI models?

Open-source generative AI models present risks as much as they can solve problems. In terms of risks, the democratization enabled by open source certainly makes it possible for a larger set of bad actors to get access to them. To go back to the example of pornographic deepfakes, open-source AI models are far more efficient to allow their proliferation than proprietary models.

There are two possible safeguards. Firstly, in the absence of licenses, it would be desirable to impose pre-deployment audits of the models. So if a team of researchers develops and trains a powerful open-source model for text and image generation, they should not be able to make it immediately available on GitHub. If threats emanating from the model are detected, a competent actor - state or private - could be involved in some form of coordination to address the risks.

Secondly, obligations could be imposed on providers who host models, because these models are not end products for users, but technologies. My intuition is that users will not use these models directly but will turn to the providers who host them. It is therefore necessary to impose obligations on the players who make them available to the public. We might consider several proposals, including the obligation to set up safeguards or a transparency requirement. In addition, rather than limiting access to software, it would be more relevant to improve the attack surface’s defense (ed. to protect all the weak points through which a malicious actor could penetrate software). The example of misinformation and disinformation would be relevant in this respect. Generative AI models have brought costs down and democratized the production of information news. However, the production of false information was never the difficult part. The hard part was the dissemination of false information to the public and the ability to persuade users of its veracity. Our response should therefore be to improve the means of identifying false information. I think legislators should focus on these means, which are the responsibility of the platforms. The platforms have been rather effective in combating false information but not effective enough.

Finally, while it can present risks and drawbacks, my ideal future would be to see governments join forces to foster and fund a 100 million euro university collaboration aimed at developing open-source models capable of competing with the major market players.

Partager