Generative AI, systems that can create content on their own, has sparked calls for regulation, especially after OpenAI released a new version of its ChatGPT tool. The European Union is leading regulation efforts with its proposed AI Act, which focuses on how high-risk applications are used. In the US, the Biden administration has presented a voluntary AI Bill of Rights, and some public agencies are already limiting the use of generative AI tools. Experts warn that generative AI could enable mass-produced disinformation and undermine public trust in what people encounter online. Some tech leaders caution against overly stringent regulation, while China is already moving to restrict generative AI.
The Need for Regulation
Generative AI, a type of artificial intelligence that produces content such as text or images on its own, has been in development for some time. But with OpenAI’s release of ChatGPT, a tool that can generate human-like text from prompts, the potential of this technology has come into sharp focus. Experts warn that generative AI carries risks for privacy, equity, and truth.
Janet Haven, executive director of Data & Society, a nonprofit research organization in New York, emphasizes the need for regulation. “The idea that tech companies get to build whatever they want and release it into the world and society scrambles to adjust and make way for that thing is backwards,” she says.
The European Union’s Proposed AI Act
The European Union has been at the forefront of efforts to regulate AI. Its Artificial Intelligence Act, first proposed in 2021 and still under debate, would impose strict safeguards on “high-risk” uses of the technology, such as employment decisions and some law enforcement operations, while leaving more room for experimentation with lower-risk applications. Some lawmakers want ChatGPT classified as a high-risk application; others disagree. As written, the bill regulates how technologies are used rather than the specific technologies themselves.
The US Approach
In the US, local, state, and federal officials have all taken steps toward developing rules for AI. Last fall, the Biden administration presented its blueprint for an “AI Bill of Rights,” which addresses issues such as discrimination, privacy, and users’ ability to opt out of automated systems. However, the guidelines are voluntary, and some experts warn that generative AI has already raised issues the blueprint doesn’t address, such as mass-produced disinformation that could make it harder for people to trust anything they encounter online.
Impact of Generative AI and Tech Companies’ Responsibility
Generative AI is already being used in a range of industries, from finance to entertainment. But companies must take responsibility for ensuring that their use of the technology addresses ethical concerns. “For me, the thing that will raise alarm bells is if organizations are driving towards commercializing without equally talking about how they are ensuring it’s being done in a responsible way,” says Steven Mills, chief AI ethics officer at Boston Consulting Group Inc.
Leaders at tech companies including Google, Microsoft, and OpenAI have been vocal about their commitment to addressing ethical concerns, but they also caution against overly stringent regulation. In a congressional hearing in March, former Google CEO Eric Schmidt argued that AI tools should reflect American values and that the government should primarily focus on “working on the edges where you have misuse.” The situation is complicated by China’s aggressive pursuit of AI, which could give the country a geopolitical advantage if the West overregulates.
The Future of Generative AI
Despite the potential risks of generative AI, experts believe its development will continue, with new applications emerging rapidly. OpenAI itself has just released a new, more advanced version of the technology that powers ChatGPT. Some public agencies in the US, such as the New York City Department of Education, are already limiting the use of generative AI tools, but more regulation is needed to ensure that this powerful technology is used responsibly and ethically.