
AI Regulation News 2026 – Latest Global Policies and Laws Explained

Published on: 10 Apr 2026

Author: Shubham


Key Takeaways

  • The EU AI Act is now partially in force, with full compliance for high-risk AI systems required by August 2026, making it the world’s first comprehensive legal framework for AI Application oversight.
  • Over 75 countries are actively developing or tracking AI legislation, signaling that global AI governance is no longer optional for businesses using AI Platforms.
  • The United States shifted to a pro-innovation stance under Executive Order 14179 (January 2025), prioritizing deregulation to accelerate competitiveness in AI Application markets.
  • India is adopting a “soft law first, hard law where harm is evident” model, with MeitY’s seven-sutra AI governance framework guiding sectoral AI regulation rather than a single sweeping act.
  • China’s AI governance follows a vertical control model, mandating watermark labeling of AI-generated content from September 2025 and strengthening cybersecurity fines from January 2026.
  • Violations of the EU AI Act can cost enterprises up to 7% of global annual revenue, a penalty ceiling stricter than GDPR's, pushing compliance to the top of enterprise agendas.
  • Ethical concerns including bias, deepfake misuse, privacy violations, and autonomous weaponization are the primary forces driving AI regulation globally.
  • AI regulation directly impacts startups by increasing compliance costs, but also creates new markets for regulatory technology, AI auditing, and governance consulting.
  • The tension between innovation and regulation is shaping a fragmented global compliance landscape that businesses using AI Platforms must proactively monitor.
  • Companies that adopt a “compliance by design” approach from the earliest stage of AI Application development will gain a significant competitive advantage as regulations mature.

Introduction to AI Regulation News

Artificial intelligence is no longer a technology confined to research labs or experimental prototypes. It is embedded in hiring algorithms, healthcare diagnostics, financial credit scoring, autonomous vehicles, and the AI Platforms powering consumer applications used by billions daily. With that pervasiveness comes an urgent question that governments, regulators, businesses, and citizens are grappling with: who governs AI, and how? The answer is unfolding rapidly across courtrooms, parliaments, and boardrooms worldwide.

From the sweeping EU AI Act to India’s principle-driven governance guidelines, the regulatory landscape for AI Application development and deployment is undergoing a seismic transformation. Understanding these changes is not simply a legal exercise. It is a strategic imperative for every organization that builds, deploys, or simply uses AI in its operations.

Why AI Regulation Is Becoming a Global Priority

The explosive adoption of AI Platforms across every industry has outpaced policymakers’ ability to respond. According to global surveys, 78% of organizations reported actively using AI by 2024, up sharply from 55% just a year earlier. In the United States alone, federal agencies introduced 59 AI-related regulations in a single year, more than double the prior period. These numbers reflect not just enthusiasm but growing anxiety about AI’s societal footprint.

At a glance:

  • 78%: organizations actively using AI in 2024
  • 75+: countries tracking or developing AI legislation
  • 59: US federal AI regulations introduced in 2024
  • 7%: maximum EU AI Act penalty, as a share of global annual revenue

Governments are responding because AI errors are not just software bugs. When an AI Application makes a biased hiring decision, denies someone healthcare coverage, or enables mass surveillance, the consequences are deeply human. The acceleration of generative AI tools further exposed regulatory blind spots, forcing policymakers to act with unprecedented speed. The race to regulate AI is ultimately a race to preserve trust.

Recent Developments in AI Laws and Policies (2025–2026)

The two-year window from 2025 to 2026 has been the most active period in AI governance history. Key milestones have reshaped how AI Application providers and AI Platform operators must operate globally.

Timeline | Jurisdiction | Development | Status
Jan 2025 | United States | Executive Order 14179 revokes prior AI safety order; shifts to pro-innovation stance | Active
Jan 2025 | South Korea | AI Framework Act enacted, strengthening transparency and safety requirements; in force since January 2026 | Active
Feb 2025 | European Union | EU AI Act: “Unacceptable risk” AI bans enforced (social scoring, biometric surveillance) | Active
May 2025 | Japan | AI Promotion Act passed: innovation-first, non-punitive guidance model | Active
Aug 2025 | European Union | GPAI (General-Purpose AI) model obligations come into full effect | Active
Sep 2025 | China | AI-generated content labeling rules: visible watermarks and encrypted metadata mandated | Active
Nov 2025 | European Union | European AI Office launches with cross-member enforcement authority | Active
Dec 2025 | United States | White House EO signals federal intent to preempt conflicting state AI laws | Evolving
Jan 2026 | China | Amended Cybersecurity Law: immediate fines for data leaks, no warning period | Strict
Aug 2026 | European Union | Full compliance deadline for high-risk AI systems under the EU AI Act | Upcoming

Key Countries Leading AI Regulation Efforts

While dozens of nations are moving toward AI governance frameworks, a handful are setting the global standard. Their choices on how to regulate AI Application ecosystems and AI Platforms will shape international norms for years to come. The following comparison captures the dominant approaches.

Country | Regulatory Model | Key Law / Policy | Approach Style
European Union | Risk-based, comprehensive | EU AI Act (2024) | Strict & Proactive
United States | Sectoral; deregulatory federal, active states | EO 14179 + state laws | Innovation-First
China | Vertical state control model | Generative AI Measures, Labeling Rules | State-Controlled
India | Sectoral, principle-based | MeitY AI Guidelines + Digital India Act | Soft Law First
United Kingdom | Sector-specific, principles-led | AI Regulation White Paper | Flexible & Pro-Growth
South Korea | Comprehensive framework | AI Basic Act (in force Jan 2026) | Safety + Innovation
Japan | Cooperative, non-punitive | AI Promotion Act (May 2025) | Light Touch
Canada | Risk-based (in development) | AI and Data Act (AIDA) | Collaborative, Standards-Led

Overview of the European Union’s AI Act

The EU AI Act is the world’s first horizontal, binding legal framework specifically designed to govern AI Applications across all sectors. It entered into force on August 1, 2024 and is phasing in obligations through 2027. The law classifies every AI system into one of four risk tiers, and compliance obligations scale accordingly. For companies building or deploying AI Platforms within or directed at EU residents, this is the most consequential regulatory development in a generation.

Risk Category | Examples | Compliance Requirement | Timeline
Unacceptable Risk | Social credit scoring, mass biometric surveillance, emotion recognition at work | Banned outright | Since Feb 2025
High Risk | AI in hiring, credit scoring, education, critical infrastructure, healthcare diagnostics | Risk assessment, documentation, human oversight, testing | Aug 2026
Limited Risk | Chatbots, AI-generated content, deepfake generation | Transparency and disclosure obligations | Active
Minimal Risk | Spam filters, AI in video games, basic recommendation systems | No mandatory requirements | Ongoing
Important: Penalty Structure
Non-compliance with the EU AI Act can result in fines of up to €35 million or 7% of global annual turnover, whichever is higher. This exceeds GDPR penalty levels and signals the EU’s seriousness about enforcing AI governance at scale.
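
The “whichever is higher” rule matters in practice: for any company with more than €500 million in global turnover, the percentage branch dominates. A minimal sketch of the arithmetic (illustrative only, not legal guidance):

```python
def max_eu_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious EU AI Act violations:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A firm with EUR 1 billion in turnover: 7% (EUR 70M) exceeds the
# EUR 35M floor, so the ceiling is EUR 70M.
print(max_eu_ai_act_fine(1_000_000_000))  # 70000000.0
```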

The Act also introduces obligations for general-purpose AI models (GPAI), particularly those posing systemic risks. Providers must implement risk mitigation protocols, maintain transparency about training data sources, and comply with copyright standards. A Code of Practice for GPAI model providers was submitted to the European Commission in mid-2025 and is now an active compliance tool.

AI Regulation in the United States: Current Landscape

The United States presents a uniquely complex regulatory environment. At the federal level, comprehensive AI legislation remains absent. Instead, the landscape is shaped by executive orders, agency guidance, and an increasingly assertive patchwork of state laws. President Trump’s Executive Order 14179, signed in January 2025, fundamentally reoriented US AI policy by revoking Biden-era safety directives and emphasizing innovation, competitiveness, and deregulation as national priorities.

Yet below the federal level, states have moved decisively. California enacted an AI Safety Act effective January 2026, establishing whistleblower protections for employees who report AI safety risks. Colorado’s AI Act targets developers of high-risk AI systems with risk management and anti-discrimination obligations. New York’s pending RAISE Act would impose safety policies on large AI model developers. This creates a layered compliance challenge for AI Platforms operating nationally, as each state may impose distinct standards.

Key Insight: Fragmentation Risk
A December 2025 White House executive order explicitly signals federal intent to challenge or preempt state AI laws viewed as conflicting with national innovation goals. Legal battles over federal versus state AI jurisdiction are expected to intensify through 2026 and 2027.

The NIST AI Risk Management Framework has emerged as a de facto compliance standard adopted voluntarily by enterprises seeking to demonstrate responsible AI Application governance even in the absence of binding law. The Algorithmic Accountability Act and AI Foundation Model Transparency Act remain active in Congress and represent the most likely pathways to federal legislation by late 2026 or 2027.

India’s Approach to AI Governance and Policy

India’s approach to AI regulation reflects its identity as both a major AI talent hub and a nation acutely aware of the societal risks of unregulated technology. Rather than pursuing a single omnibus AI law, India’s Ministry of Electronics and Information Technology (MeitY) released AI Governance Guidelines built around seven foundational “sutras,” or principles: safety, inclusivity, accountability, privacy, transparency, responsibility, and trustworthiness.

This sectoral model means different industries, such as healthcare, finance, and media, will be governed by sector-specific AI rules rather than a one-size-fits-all framework. Where harm is clearly evident, particularly in synthetic media and deepfake content, India has proposed hard regulatory requirements. Draft amendments to the IT Rules would mandate visible labeling of AI-generated content, requiring the disclosure to cover at least 10% of the visual surface area or the first 10% of audio duration. This reflects India’s “soft law first, hard law where needed” philosophy.
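
To make the draft thresholds concrete, here is a small illustrative calculation of what the proposed 10% coverage rules would imply for a given asset (the function names are hypothetical, and the final rules may define coverage differently):

```python
def min_label_area_px(width_px: int, height_px: int) -> int:
    """Proposed rule: a visible AI label covering at least 10% of the
    visual surface area (illustrative reading of the draft)."""
    return (width_px * height_px) // 10

def min_audio_disclosure_s(duration_s: float) -> float:
    """Proposed rule: disclosure over the first 10% of audio duration."""
    return 0.10 * duration_s

print(min_label_area_px(1920, 1080))  # 207360 square pixels of a 1080p frame
print(min_audio_disclosure_s(120.0))  # 12.0 seconds of a 2-minute clip
```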

The Digital India Act, which aims to update India’s regulatory regime for cyberspace and address AI-generated content, is progressing through legislative consultation. India’s pattern signals a measured but deliberate trajectory toward formal AI governance as its AI Application sector continues to scale rapidly.

China’s AI Regulations and Strategic Control Measures

China’s regulatory strategy for AI is unlike any other jurisdiction’s. Rather than building a single comprehensive law, China has deployed a layered suite of targeted rules that collectively form what analysts call a “vertical control model.” The state’s priorities are clear: national security, content control, and maintaining political and social stability, all while fostering domestic AI innovation to compete globally.

China’s Generative AI Services Management Measures, in effect since 2023, require providers of generative AI services to register with regulators, perform security assessments, and ensure their outputs do not violate social order or national security interests. Since September 2025, the Measures for Labeling AI-Generated Content have required both visible watermarks and invisible encrypted metadata on all synthetic content, creating a comprehensive tracking system. The result is a closed loop in which AI content is never anonymous within China’s digital ecosystem.
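
Conceptually, the dual-labeling requirement means every piece of synthetic content carries both a human-visible disclosure and a machine-readable provenance record. A toy sketch of that pairing (the field names are hypothetical, not the regulatory schema; real implementations must follow the published technical standard):

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class LabeledContent:
    """Pairs a visible disclosure with machine-readable provenance,
    mirroring the visible-plus-invisible labeling described above."""
    payload: bytes  # the generated image/audio/text bytes
    visible_label: str = "AI-generated content"
    metadata: dict = field(default_factory=dict)

def attach_provenance(item: LabeledContent, provider: str, model: str) -> LabeledContent:
    # Hypothetical provenance record; the actual rules define their own format.
    item.metadata = {
        "provider": provider,
        "model": model,
        "content_sha256": hashlib.sha256(item.payload).hexdigest(),
    }
    return item

item = attach_provenance(LabeledContent(b"...synthetic image bytes..."), "ExampleCo", "gen-v1")
print(item.visible_label, item.metadata["content_sha256"][:12])
```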

China AI Regulatory Fact

From January 2026, amendments to China’s Cybersecurity Law removed the prior “warning shot” mechanism for violations. Enterprises now face immediate and severe financial penalties for any data leak or AI-related infrastructure failure, with no grace period for correction.

Chinese AI companies are simultaneously creating innovative open-model products that challenge US counterparts, while navigating disclosure requirements that are, in some ways, stricter than Western standards. China’s dual agenda, tight state control combined with aggressive AI capability building, makes it one of the most influential yet distinctive voices in global AI governance.

Ethical Concerns Driving AI Regulation

Behind every regulatory headline is a set of deeply human concerns. AI Platforms and AI Applications are not ethically neutral tools. They embed the values, biases, and blind spots of their creators, and when deployed at scale, those imperfections become systemic harms. Regulators worldwide are responding to these documented realities.

  • Algorithmic Bias: AI systems trained on historical data replicate and amplify historical inequalities in hiring, lending, criminal justice, and healthcare access.
  • Privacy Erosion: AI Platforms built on mass data collection frequently process sensitive personal information without genuine informed consent.
  • Deepfake Proliferation: Generative AI has made synthetic media cheap and convincing, enabling disinformation, fraud, and non-consensual intimate imagery.
  • Autonomous Weapons: AI-enabled lethal autonomous systems raise profound questions about accountability and international humanitarian law.
  • Opacity and Explainability: Complex AI Application models, particularly deep neural networks, cannot always explain their decisions, making accountability nearly impossible.
  • Labor Displacement: Automation powered by AI Platforms threatens entire job categories, raising policy questions about economic safety nets and workforce transition.
  • Concentration of Power: The majority of advanced AI capabilities are concentrated in a handful of large corporations, creating antitrust and democratic governance concerns.

These concerns are not speculative. They are documented in court cases, academic research, investigative journalism, and government audits. Their political resonance across ideological lines explains why AI regulation is one of the few areas generating bipartisan legislative momentum in otherwise divided governments.

Impact of AI Regulation on Tech Companies

The business consequences of AI regulation are substantial and growing. For large technology companies operating AI Platforms globally, compliance is now a significant line item. Anthropic, for example, has reported allocating approximately 15 to 20% of its model development resources to EU AI Act compliance activities. This is not a one-time cost but a recurring operational expense as regulations evolve and audits intensify.

Impact Area | Description | Severity
Compliance Costs | Risk assessments, documentation, audits, legal counsel, and dedicated compliance teams add operational overhead | High
Product Redesign | High-risk AI Applications must be rebuilt with documentation, oversight mechanisms, and transparency features from the ground up | Moderate to High
Market Access | Non-compliant AI Platforms may be excluded from the EU, South Korean, or other regulated markets | Critical
Data Management | Strict data provenance and training data transparency requirements increase data engineering costs | Moderate
Competitive Differentiation | Demonstrable compliance and ethical AI practices become brand assets and enterprise sales advantages | Opportunity
Liability Exposure | Clear accountability chains required by law expose companies to new litigation and regulatory penalties | High

Role of Governments vs Private Sector in AI Oversight

One of the central tensions in AI governance is how to divide responsibility between public institutions and the private companies actually building AI Platforms and AI Applications. Governments possess the authority to impose binding obligations and levy penalties, but often lack the technical depth to audit sophisticated AI systems effectively. The private sector has the expertise but faces obvious incentive conflicts when self-regulating.

Several governance models have emerged to navigate this tension. The EU’s conformity assessment model delegates technical certification to approved private bodies while retaining enforcement authority with public regulators. The US voluntary framework approach, exemplified by NIST’s AI Risk Management Framework, relies on industry adoption of best practices without statutory force. China’s model gives state agencies direct involvement in certifying and monitoring AI services. Each approach reflects deeper cultural and political values about the relationship between state power and commercial freedom.

Industry consortia and multi-stakeholder bodies are increasingly filling governance gaps where legislation is absent or immature. Standards bodies like ISO and NIST are developing technical specifications for AI reliability, robustness, and bias testing. The EU’s AI Code of Practice for GPAI models was itself developed through a multi-stakeholder process, offering a template for collaborative governance that may be replicated globally.

Data Privacy and Security in AI Regulations

Data is the fuel of every AI Application and AI Platform. Consequently, data protection and AI regulation are inseparable. The EU AI Act and GDPR are explicitly designed to work in tandem, creating a comprehensive data governance layer that covers not just how personal data is collected but how it is used to train and deploy AI models. The concept of “privacy by design” has evolved into “compliance by design,” requiring development teams to embed data governance at the architecture level, not as an afterthought.

Key data-related requirements appearing across multiple regulatory frameworks include training data transparency, under which providers must disclose the sources, scale, and processing methods of training datasets. Data minimization obligations require that AI systems process only the personal data strictly necessary for the task. Privacy impact assessments, analogous to GDPR’s Data Protection Impact Assessments, are required for high-risk AI Applications under the EU framework and under emerging frameworks globally.
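
Data minimization is the most directly codeable of these obligations. A toy sketch, assuming a hypothetical credit-scoring task where only three fields are deemed strictly necessary:

```python
# Hypothetical field whitelist; in practice this is derived from the
# documented purpose of the AI Application (the "strictly necessary" test).
REQUIRED_FIELDS = {"age_band", "income_band", "employment_status"}

def minimize(record: dict) -> dict:
    """Data minimization: drop every field not needed for the task."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

applicant = {
    "name": "Jane Doe",         # not needed for scoring -> dropped
    "age_band": "30-39",
    "income_band": "mid",
    "employment_status": "employed",
    "religion": "undisclosed",  # sensitive and unnecessary -> dropped
}
print(minimize(applicant))
# {'age_band': '30-39', 'income_band': 'mid', 'employment_status': 'employed'}
```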

Cross-Border Data Challenge
AI Application providers operating in multiple jurisdictions face compound compliance burdens where EU GDPR requirements, US state privacy laws, China’s Personal Information Protection Law, and India’s Digital Personal Data Protection Act may simultaneously apply to the same data pipeline, each with distinct requirements and penalties.

Challenges in Implementing AI Laws Globally

Despite strong political will, translating AI regulation from legislative text to real-world enforcement is enormously difficult. Several structural challenges are slowing implementation across jurisdictions.

1. Technical Complexity: AI systems are probabilistic and context-dependent, making deterministic compliance verification extremely difficult for traditional regulatory bodies.

2. Regulatory Capacity: Most national regulators lack the AI expertise needed to audit advanced models or assess whether an AI Application genuinely meets risk thresholds.

3. Definitional Gaps: Laws struggle to define precisely what constitutes an “AI system,” creating ambiguity that sophisticated organizations can exploit.

4. Speed Asymmetry: AI capabilities advance in months; legislation takes years. By the time a law is enforced, the technology it targets may have fundamentally changed.

5. Jurisdictional Conflicts: Fragmented national rules create compliance contradictions for AI Platforms operating globally, forcing costly legal navigation across dozens of regimes.

6. SME Disproportionality: Compliance costs are regressive. Small AI startups face the same legal obligations as trillion-dollar corporations, threatening innovation diversity.

The EU itself acknowledged these implementation challenges in its Digital Omnibus package of late 2025, which proposed delaying the high-risk AI compliance deadline to allow time for technical standards and guidance tools to be finalized. This pragmatic adjustment reflects the real-world gap between regulatory ambition and operational readiness.

How AI Regulation Affects Innovation and Startups

The relationship between AI regulation and innovation is nuanced. Poorly designed regulation can stifle experimentation, increase barriers to entry, and concentrate market power among incumbents who can absorb compliance costs. Well-designed regulation, on the other hand, creates the legal certainty that encourages long-term investment, protects against catastrophic failures that could set entire industries back, and opens new markets for compliance technology itself.

The EU has recognized the disproportionate impact of compliance costs on smaller players. Its Digital Omnibus proposal extended regulatory relief, previously available only to small businesses, to “small mid-caps” with up to 750 employees. Pilot programs offering subsidized compliance support for SMEs opened in March 2026. South Korea’s AI Open Innovation Hub provides a national infrastructure platform specifically designed to support AI startups in meeting governance requirements without prohibitive costs.

For AI Application startups, regulation is also a market signal. The growing demand for explainable AI, bias auditing tools, AI risk assessment platforms, and governance consulting represents a substantial emerging industry. Startups positioned at the intersection of AI capability and regulatory compliance are among the most attractive investment targets in the current climate. Regulation does not merely constrain the AI market; in many respects, it is actively creating new segments within it.

Industry Reactions to New AI Policies

The technology industry’s response to the evolving regulatory landscape is multifaceted. Large AI Platform providers like Google, Microsoft, and OpenAI have largely adopted a posture of constructive engagement with regulators, participating in Code of Practice development processes and investing in internal compliance teams. This reflects both genuine commitment to responsible development and pragmatic recognition that proactive engagement shapes more workable rules than adversarial resistance.

At the same time, significant lobbying activity is visible in Brussels, Washington, and New Delhi, where industry representatives advocate against overly prescriptive requirements, liability provisions they view as excessive, and definitions they consider technically unworkable. The tension between public interest advocacy and commercial interest is a permanent feature of AI governance processes.

Smaller AI Application companies and startups have been more vocal about the disproportionate compliance burden. Many point to the irony that the compliance infrastructure required by current rules overwhelmingly benefits larger incumbents, potentially entrenching market concentration rather than promoting the competitive innovation that regulators nominally seek to protect. These voices are increasingly influential in shaping the “proportionality” provisions appearing in second-generation AI laws globally.

Future Predictions for AI Regulation Worldwide

Looking beyond 2026, the arc of global AI governance points toward greater convergence on core principles even as national approaches retain distinctive characteristics. The risk-based framework pioneered by the EU AI Act has been adopted or adapted by South Korea, Brazil, Canada, and numerous other jurisdictions, suggesting it will become the dominant global template.

Artificial intelligence governance is likely to become a dimension of diplomatic relations. Trade agreements will increasingly include AI governance compatibility clauses. International organizations, including the OECD, the G7, and the United Nations, are advancing multilateral principles that, while non-binding, create normative pressure on outlier jurisdictions. The 2024 Seoul and 2025 Paris AI summits demonstrated that governments across political divides can find common ground on catastrophic risk prevention even when they disagree sharply on routine AI governance.

What to Watch in 2026 and Beyond
Federal AI legislation in the US remains the most consequential outstanding development. If Congress passes a comprehensive AI law modeled on NIST’s risk framework, it will trigger a global realignment comparable to what GDPR did for data privacy. Every AI Application and AI Platform operator globally will need to recalibrate compliance programs accordingly.

The emergence of real-time AI auditing technology, where models can be continuously monitored for compliance in production rather than assessed only at deployment, will fundamentally transform enforcement capability. Regulatory technology startups in this space are attracting significant venture capital and government procurement interest, pointing to a future where AI governance is automated and adaptive rather than static and documentary.
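
As a rough illustration of what continuous monitoring can look like, the sketch below tracks positive-outcome rates per demographic group in production and raises a flag when the gap between groups exceeds a tolerance. This is a simple demographic parity check; real audits use richer metrics and statistical tests, and the class here is hypothetical:

```python
from collections import defaultdict

class OutcomeMonitor:
    """Tracks positive-outcome rates per group and flags when the
    largest gap between groups exceeds a tolerance."""
    def __init__(self, tolerance: float = 0.10):
        self.tolerance = tolerance
        self.stats = defaultdict(lambda: [0, 0])  # group -> [positives, total]

    def record(self, group: str, positive: bool) -> None:
        self.stats[group][0] += int(positive)
        self.stats[group][1] += 1

    def parity_gap(self) -> float:
        rates = [p / t for p, t in self.stats.values() if t]
        return max(rates) - min(rates) if rates else 0.0

    def needs_review(self) -> bool:
        return self.parity_gap() > self.tolerance

monitor = OutcomeMonitor(tolerance=0.10)
for group, approved in [("A", True), ("A", True), ("B", True), ("B", False)]:
    monitor.record(group, approved)
print(monitor.parity_gap(), monitor.needs_review())  # 0.5 True
```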

What Businesses Should Know About AI Compliance

For any organization using, building, or procuring AI Applications or AI Platforms, proactive compliance is no longer a choice. The regulatory window for passive observation has closed. The following parameters define the compliance readiness that regulators expect and that enterprise customers increasingly require as a procurement condition.

Compliance Parameter | What It Requires | Priority Level
AI Inventory | Catalogue every AI system in use, including third-party AI Platforms and embedded models in enterprise software | Critical
Risk Classification | Assess each AI Application against applicable jurisdiction risk tiers (EU, US state, China, India) to determine obligation level | Critical
Technical Documentation | Maintain detailed records of model purpose, training data, testing results, performance metrics, and known limitations | High
Human Oversight Mechanisms | High-risk AI systems must have documented human review points, override capabilities, and escalation procedures | High
Bias Testing | Regular audits for discriminatory outcomes across protected characteristics, with documented remediation where bias is found | High
Transparency Notices | Users interacting with AI must be informed they are doing so, with chatbots, synthetic media, and AI decisions disclosed | Standard
Incident Response | Procedures for detecting, reporting, and responding to AI system failures, particularly those with adverse human impacts | High
Vendor Due Diligence | AI Platforms procured from third parties carry compliance obligations that flow to the deploying organization | Standard
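
The first two rows above, AI Inventory and Risk Classification, lend themselves to simple tooling. A minimal sketch of what an inventory record with risk tiers might look like (types and field names are illustrative, not drawn from any statute):

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):  # mirrors the EU AI Act's four tiers
    UNACCEPTABLE = "banned outright"
    HIGH = "full compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no mandatory requirements"

@dataclass
class AISystemRecord:
    """One entry per AI system in use, including third-party platforms."""
    name: str
    vendor: str
    use_case: str
    risk_tier: RiskTier
    human_oversight_documented: bool

inventory = [
    AISystemRecord("resume-screener", "VendorX", "hiring", RiskTier.HIGH, True),
    AISystemRecord("support-chatbot", "in-house", "customer service",
                   RiskTier.LIMITED, False),
]

# Flag every high-risk system that still lacks documented human oversight.
gaps = [s.name for s in inventory
        if s.risk_tier is RiskTier.HIGH and not s.human_oversight_documented]
print(gaps or "no oversight gaps")  # prints: no oversight gaps
```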

Ready to Build Compliant, Future-Proof AI Solutions?

Partner with Nadcab Labs to navigate AI regulations confidently and deploy AI Applications that meet global standards from day one.

Get Expert AI Compliance Guidance →

Conclusion: Balancing Innovation and Regulation in AI

The global movement to regulate AI Applications and AI Platforms is not a temporary compliance exercise. It is a structural transformation of how technology and society negotiate the terms of an increasingly consequential relationship. The frameworks being designed today, imperfect and evolving as they are, will shape the governance architecture for one of the most powerful technologies humanity has produced.

The central challenge is not choosing between innovation and safety. That is a false binary constructed by those who benefit from the absence of oversight. The real challenge is designing governance systems sophisticated enough to distinguish between AI Applications that genuinely pose systemic risks and those that deserve the freedom to experiment, fail, and iterate. The EU AI Act’s risk-based structure, for all its implementation difficulties, represents the most ambitious attempt yet to draw that line coherently.

Businesses that treat AI regulation as purely a compliance cost will be perpetually reactive, always playing catch-up with the next enforcement wave. Those that internalize the principles behind the regulations (transparency, accountability, human oversight, and fairness) will find that responsible AI design is not just legally necessary but commercially superior. Trust is increasingly the scarcest resource in digital markets, and no AI Platform can sustain scale without it.

 

Frequently Asked Questions

Q: Does the EU AI Act apply to companies outside Europe?
A: Yes. The EU AI Act follows an extraterritorial model similar to GDPR. If your AI Application or AI Platform is used by people in EU member states, or if its outputs affect EU residents, your organization is subject to the Act’s requirements regardless of where it is incorporated or headquartered. This means US, Indian, Chinese, and other non-EU companies serving European markets must meet EU AI Act compliance standards.

Q: What happens if my AI system is classified as "high risk" under the EU AI Act?
A: High-risk classification triggers a comprehensive set of obligations before your AI Application can be deployed. You must implement a formal risk management system, maintain detailed technical documentation, ensure data governance for training data, design human oversight mechanisms, conduct accuracy and robustness testing, and register your system in the EU’s AI database. Post-market monitoring is also required once the system is live. The full compliance deadline for most high-risk systems is August 2026.

Q: Is there a federal AI law in the United States right now?
A: No comprehensive federal AI law currently exists in the US. The regulatory landscape is shaped by executive orders, most notably EO 14179 from January 2025, alongside sector-specific agency guidance from bodies like the FDA, FTC, and banking regulators. However, more than a dozen states including California, Colorado, New York, Illinois, and Utah have enacted or proposed their own AI laws, creating a patchwork compliance environment that US-operating AI Platforms must navigate carefully.

Q: How is India's approach to AI governance different from the EU's?
A: India is pursuing a sectoral, principle-based model rather than the EU’s horizontal, risk-categorized statutory framework. MeitY’s AI Governance Guidelines establish seven broad principles that each sector is expected to apply contextually. India applies hard regulatory requirements only where specific harms are evident, such as deepfake labeling. This “soft law first” approach gives India’s AI Application ecosystem more flexibility while maintaining the ability to introduce targeted rules as specific risks emerge.

Q: Can my startup get any relief from EU AI Act compliance costs?
A: Yes. The EU has implemented SME relief provisions that reduce some compliance obligations for small and medium enterprises. The Digital Omnibus proposal further extended relief to “small mid-caps” with up to 750 employees. Additionally, the European Commission launched pilot programs in March 2026 offering subsidized compliance support for smaller businesses. The EU AI Office also publishes guidance documents designed to reduce interpretation uncertainty and lower compliance costs through standardization.

Q: What does China's AI content labeling rule actually require?
A: China’s Measures for Labeling AI-Generated Content, effective September 2025, require all AI-generated text, images, audio, and video to carry both visible labels such as watermarks or on-screen disclosures, and invisible technical markers such as encrypted metadata. This creates a trackable record of all synthetic content within China’s digital ecosystem. The rules apply to any service provider offering AI content generation capabilities to users within China, including foreign-operated AI Platforms accessible within the country.

Q: How do AI regulations affect companies that buy AI tools from third-party vendors?
A: In most regulatory frameworks, deployers of AI systems, meaning the companies using third-party AI Platforms or tools, bear significant compliance obligations even though they did not build the underlying model. This means your organization is responsible for understanding the risk classification of any AI Application you deploy, conducting due diligence on your vendors’ compliance status, maintaining records of how you use the system, and ensuring any required human oversight or transparency disclosures are in place. Contracts with AI vendors should explicitly address compliance responsibilities and liability allocation.

Q: What is the NIST AI Risk Management Framework and should we follow it?
A: The NIST AI RMF is a voluntary framework published by the US National Institute of Standards and Technology that provides structured guidance for identifying, assessing, and managing AI risks. While it is not legally binding at the federal level, it has become an industry standard widely adopted by enterprises and increasingly referenced in government procurement requirements, financial sector guidance, and state-level regulations. Aligning your AI Application governance with the NIST RMF is widely considered best practice and positions you well for the likely shape of future federal legislation.

Q: Are there penalties specifically for using AI in hiring without proper oversight?
A: Yes, and this is one of the most actively enforced areas of AI regulation. New York City’s Local Law 144 already requires employers using automated employment decision tools to conduct annual bias audits and notify candidates that AI is being used. Under the EU AI Act, AI systems used in hiring decisions are classified as high-risk, triggering the full compliance framework. The US Equal Employment Opportunity Commission has also issued guidance applying existing employment discrimination law to AI hiring tools. Penalties range from regulatory fines to civil litigation exposure.

Q: What is a General-Purpose AI model and why does it face special rules?
A: A General-Purpose AI model (GPAI) is a foundation model trained on vast datasets that can be adapted across a wide range of tasks and use cases, such as large language models or multimodal AI systems. They face special rules under the EU AI Act because their broad applicability means they can underpin thousands of different AI Applications, each with distinct risk profiles. GPAI providers must maintain transparency about training data, implement copyright safeguards, and conduct systemic risk assessments if their models exceed defined capability thresholds. These rules have been in effect in the EU since August 2025.

Reviewed & Edited By


Aman Vaths

Founder of Nadcab Labs

Aman Vaths is the Founder & CTO of Nadcab Labs, a global digital engineering company delivering enterprise-grade solutions across AI, Web3, Blockchain, Big Data, Cloud, Cybersecurity, and Modern Application Development. With deep technical leadership and product innovation experience, Aman has positioned Nadcab Labs as one of the most advanced engineering companies driving the next era of intelligent, secure, and scalable software systems. Under his leadership, Nadcab Labs has built 2,000+ global projects across sectors including fintech, banking, healthcare, real estate, logistics, gaming, manufacturing, and next-generation DePIN networks. Aman’s strength lies in architecting high-performance systems, end-to-end platform engineering, and designing enterprise solutions that operate at global scale.

