Global Policy and AI Governance

As generative AI becomes embedded in daily life, from content creation to decision support, regulatory and governance frameworks have become essential to ensure its safe, fair, and transparent use. Without oversight, AI systems risk amplifying harm, escaping accountability, and undermining public trust. This chapter explores global efforts to regulate generative AI and set standards for responsible deployment.

Government Regulation

Governments around the world are recognizing that the transformative power of generative AI comes with significant risks, ranging from misinformation and deepfakes to labor displacement and legal ambiguity. As a result, several regulatory approaches have emerged.

European Union – EU AI Act

The EU AI Act is the world's first comprehensive legislative framework for AI. It classifies AI systems by risk level, ranging from minimal to unacceptable, and imposes dedicated transparency and documentation obligations on general-purpose generative models such as GPT and Stable Diffusion, with stricter duties for systems deemed "high-risk." Key obligations include:

  • Transparency requirements: developers must clearly disclose that content was generated by AI (e.g., via watermarks or metadata);

  • Documentation and risk management: developers must provide technical documentation outlining training data, potential risks, and mitigation strategies;

  • Limitations on use: certain applications (e.g., real-time biometric surveillance) are banned outright or strictly regulated.

Note

Under the AI Act, companies deploying generative models must assess and report on bias, misuse risks, and societal impacts before launch.
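One common way to meet disclosure obligations like these is to attach provenance metadata to every generated output. The sketch below is illustrative only: the field names and structure are assumptions for this example, not a format mandated by the AI Act.

```python
import json
from datetime import datetime, timezone

def label_ai_content(content: str, model_name: str) -> dict:
    """Wrap generated text with provenance metadata disclosing its AI origin."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,  # explicit disclosure flag for downstream consumers
            "model": model_name,   # which model produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_ai_content("A sunset over the Alps...", "example-model-v1")
print(json.dumps(record["provenance"], indent=2))
```

In practice, such metadata would be embedded in image EXIF fields, video containers, or signed content-credential manifests rather than a plain JSON sidecar, but the principle is the same: the disclosure travels with the content.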

United States – Sector-Specific and State-Level Initiatives

The U.S. has yet to adopt a unified federal AI law. However, various state-level laws and federal executive actions have emerged:

  • California’s AB 730 prohibits the use of deepfakes in political advertising during election periods;

  • Executive Order on AI (2023) calls for federal agencies to develop safety standards, support watermarking, and fund research into AI risk mitigation.

China – Mandatory Disclosure and Content Review

China has adopted strict rules requiring:

  • Real-name authentication for users interacting with AI-generated content;

  • Watermarking of synthetic media and human moderation of content involving politically sensitive subjects;

  • Algorithm registration: developers must register and disclose intent and capabilities for any model deployed publicly.

Note

The Cyberspace Administration of China mandates that providers label AI-generated content and ensure training data does not endanger national security.

Other Countries

  • Canada: proposed the Artificial Intelligence and Data Act (AIDA) to regulate high-impact AI systems;

  • UK: the government supports a "pro-innovation" regulatory approach with voluntary guidelines but no binding legislation yet;

  • Brazil and India: debating frameworks that blend consumer protection with incentives for innovation.

Voluntary Frameworks and Industry Initiatives

While regulation lags behind technological advances, industry players and international organizations have stepped in to establish ethical norms and best practices.

International Standards and Ethics Guidelines

  • OECD AI Principles: adopted by over 40 countries, these principles promote AI that is inclusive, transparent, and accountable;

  • UNESCO’s AI Ethics Framework: encourages human rights-based governance, including environmental sustainability and cultural diversity;

  • IEEE’s Ethically Aligned Design: offers a technical guide for developing AI that respects privacy, fairness, and autonomy.

Industry-Led Consortia

Companies are increasingly recognizing the need for self-regulation to maintain public trust and avoid more restrictive government intervention.

  • Partnership on AI: founded by OpenAI, Google, Microsoft, and others, it supports research on fairness, interpretability, and social impact;

  • Frontier Model Forum: a collaboration between OpenAI, Anthropic, Google DeepMind, and Cohere to promote:

    • Responsible model scaling;

    • External safety audits;

    • Best practices for high-stakes deployments;

    • Sharing of technical and safety documentation.

  • MLCommons and BigScience: open-source research communities working on transparency benchmarks and open model evaluations.

Note

Frontier AI developers have committed to working with governments to create pre-deployment risk evaluations for powerful models like GPT-5.

Future Outlook: What’s Coming Next?

The governance of generative AI is still in its early stages, and several key trends are shaping its future:

  • Model transparency: policies are likely to require developers to disclose how AI-generated content is created, and whether users are interacting with an AI system;

  • Synthetic content labeling: watermarking and invisible signatures may become mandatory for AI-generated images, videos, and text;

  • Audits and risk evaluations: independent audits of generative models will be critical, particularly for frontier models with emergent capabilities;

  • Global coordination: as models become more powerful, there's growing recognition that global agreements, similar to climate or nuclear accords, may be necessary;

  • Model registries: countries may require developers to register large-scale AI models along with safety evaluations and intended use cases.
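A model registry entry would likely pair a model identifier with its risk classification, safety evaluations, and intended uses. The record below is a hypothetical sketch; no jurisdiction has standardized these fields yet.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistryEntry:
    """Hypothetical registry record pairing a model with its safety documentation."""
    model_id: str
    developer: str
    risk_tier: str  # e.g. "minimal", "limited", "high" (illustrative tiers)
    intended_uses: list[str] = field(default_factory=list)
    safety_evaluations: list[str] = field(default_factory=list)

    def is_high_risk(self) -> bool:
        # A registry could gate deployment approval on this classification.
        return self.risk_tier == "high"

entry = ModelRegistryEntry(
    model_id="example-gen-model-1.0",
    developer="Example Labs",
    risk_tier="high",
    intended_uses=["text generation"],
    safety_evaluations=["bias audit 2024-05"],
)
print(entry.is_high_risk())  # prints: True
```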

1. What is one major requirement of the EU AI Act for generative AI systems?

2. What is the purpose of the Frontier Model Forum?

3. Which of the following is a likely future trend in AI governance?



Section 4. Chapter 5

