By Dr. Gyan Pathak
Generative AI could revolutionize industries and societies and generate as much as $4.4 trillion a year, but it also carries significant risks. It poses critical societal and policy challenges that policymakers must confront: potential shifts in labour markets, copyright uncertainties, risks associated with the perpetuation of societal biases, and the potential for misuse in the creation of disinformation and manipulated content. The consequences could extend to the spread of mis- and disinformation, the perpetuation of discrimination, the distortion of public discourse and markets, and the incitement of violence.
In a new OECD working paper, “Initial Policy Considerations for Generative Artificial Intelligence”, the authors Philippe Lorenz, Karine Perset, and Jamie Berryhill have weighed the potential benefits and risks and lamented that policymakers around the globe are still grappling with its implications.
The path ahead is unclear and replete with differing perspectives. One extreme argues for a moratorium on experiments with generative AI more advanced than GPT-4, while the other believes that the supposed existential risks of AI are overhyped. Others—perhaps most—fall somewhere in between.
Regardless of one’s ideological stance on these issues, there is an urgent need for further research to prepare for different possible generative-AI future scenarios. Given the great uncertainty and the potentially large impact the technology could have at both micro and macro levels, policymakers must remain informed and prepared to take appropriate action through forward-looking AI policies.
Generative AI models that generate text, image, video, and audio (e.g., music, speech) content are advancing at breakneck speed. This opens up endless possibilities, demonstrated across a growing array of domains. However, the technology also poses numerous challenges and risks to individuals, companies, economies, societies, and policymaking around the globe, ranging from near-term labour-market disruption and disinformation to potential long-term challenges in controlling machine actions. The future trajectories of generative AI are difficult to predict, but governments must explore them in order to have a hand in shaping them, the paper emphasized.
It should be noted that investment banks, consulting firms, and researchers have reported that generative AI will have massive economic impacts in the coming years. Goldman Sachs estimates that generative AI could account for a 7 percent rise in global gross domestic product (GDP) over ten years. McKinsey & Company estimates that generative AI could add USD 2.6-4.4 trillion per year across 63 use cases, increasing AI’s total economic effects by 15-50 percent. Polaris estimates that the global generative-AI market will grow at a compound annual rate of 34.2 percent, from USD 10.6 billion in 2022 to USD 200.7 billion by 2032.
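As a rough arithmetic check, the compound annual growth rate implied by the Polaris projection can be recomputed from the 2022 and 2032 figures cited above (a minimal sketch in Python; the variable names are illustrative and not taken from the paper):

start_value = 10.6   # global generative-AI market, USD billions, 2022
end_value = 200.7    # projected market size, USD billions, 2032
years = 10

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # prints roughly 34.2%, consistent with the cited rate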
At present, generative AI is at an early developmental stage, requiring large investments in R&D and a skilled but scarce workforce to take it to the next stage of maturity. Further growth is expected to come from audio synthesis, data pre-processing, image compression, noise reduction in visual data, medical imaging, and image classification, especially in healthcare. Application areas are also widening and are expected to extend in a big way to chip and parts design, materials science, and entertainment in the near future.
There are both short-term and long-term risks and concerns, the paper says. Near-term issues, often rooted in present-day opportunities and challenges, which policymakers should consider due to their urgency and potential for impact include, but are not limited to: labour-market impacts, including job displacement, changing skills needs, labour-market inclusiveness, and promoting trustworthy use of AI in the workplace; and information pollution, including the reduced quality of generative AI outputs as exponentially growing volumes of AI-generated content are ingested as training data by other AI systems in a vicious cycle, and the consequent decreasing informational relevance of the Internet.
Other issues include: AI coding assistants enabling automated cyber-security attacks; generative AI’s role in mass surveillance and censorship; overreliance on and dependency on generative AI systems; copyright issues for new creations and from training on copyrighted works; academic and creative dishonesty, such as plagiarism; the concentration of AI resources (data, hardware, talent) among a few multinational tech companies and governments; disparate access to generative AI across societies, countries, and world regions; the need for stronger efforts to curate diverse, high-quality datasets; mis- and disinformation, hate speech, bias, and discrimination driven by increasingly powerful and realistic generative AI outputs; and generative AI’s ecological footprint and natural-resource consumption from the tremendous amounts of computing power required for deep learning.
The paper says that the risks from emerging model behaviours are also critical to address. It flags systemic, delayed harms and collective disempowerment. Systemic, delayed harms are non-immediate harms that can be “destructive, long-lasting, and hard to fix”, such as those from social-media recommender systems based on reinforcement learning. Such algorithms optimise for metrics that can “change or manipulate user’s internal states (e.g. preferences, beliefs, psychology)”. Collective disempowerment is the perceived danger that model capabilities will perform increasingly important functions in society, taking power away from humans. This could take the form of gradually ceding decision-making to generative AI systems. A second effect is the intensification of concentrations of power and of the ability to reap the benefits of AI, which is already a concern.
AI safety researchers are also explicitly studying another emerging behaviour of concern for the alignment of AI objectives with human preferences: power-seeking. Here, goals that provoke power-seeking are reinforced during training and then pursued more directly, and with novel strategies, during deployment, posing new and potentially severe threats to society.
Machine-learning (ML) systems demonstrate two emergent behaviours that could be catalysed by growing generative AI model capabilities. In reward hacking, a model finds unforeseen, and potentially harmful, ways of achieving a goal by exploiting the reward signal. In pursuing instrumental goals, a model seeks strategies to attain sub-objectives that help it reach an envisaged goal but that might go against the intent of its developers. Early evidence shows this can happen even without explicit instructions from model operators or designers.
For example, to solve a CAPTCHA during initial safety testing, GPT-4 misrepresented itself as a vision-impaired human and hired a gig-economy worker to solve the CAPTCHA for it.
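To make the notion of reward hacking more concrete, the following toy sketch (purely illustrative and not drawn from the OECD paper; all names and numbers are hypothetical) shows an agent that maximises a mis-specified proxy reward instead of achieving the designer’s intended goal:

def run_episode(policy, steps=20):
    """Simulate a walk over cells 0..5; cell 5 is the intended goal."""
    position = 0
    proxy_reward = 0
    reached_goal = False
    for _ in range(steps):
        position = policy(position)
        if position == 5:            # intended objective: reach the goal cell
            reached_goal = True
            break
        if position in (1, 2):       # mis-specified proxy: "bonus" cells pay reward
            proxy_reward += 1
    return proxy_reward, reached_goal

def intended_policy(position):
    # Walks straight towards the goal, as the designer intended.
    return position + 1

def reward_hacking_policy(position):
    # Exploits the proxy: oscillates between the two bonus cells forever.
    return 2 if position == 1 else 1

print(run_episode(intended_policy))        # low proxy reward, goal reached
print(run_episode(reward_hacking_policy))  # high proxy reward, goal never reached

The reward-maximising policy scores far higher on the proxy metric while never doing what was actually wanted, which is the essence of reward hacking.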
Researchers find that models trained with reinforcement learning from human feedback (RLHF) are more likely to exhibit behaviours such as persuading developers not to shut off the system, pretending to be human, and seeking resource acquisition, such as accruing wealth.
Future risks of generative AI, the paper says, could demand solutions on a larger, more systemic scale. These include regulation, ethics frameworks, technical AI standardisation, audits, and model release and access strategies, among others. (IPA Service)