Artificial Intelligence & Machine Learning

META DESCRIPTION: Explore the latest in Artificial Intelligence & Machine Learning ethics and regulation: EU’s AI Code, U.S. state laws, and industry reactions from July 26–August 2, 2025.

AI Ethics & Regulation Weekly: How Europe’s New AI Code and U.S. State Laws Are Redrawing the Machine Learning Map


Introduction: The Week AI Regulation Got Real

If you thought the summer of 2025 would be a sleepy season for Artificial Intelligence & Machine Learning, think again. This week, the world’s biggest tech companies found themselves racing against the clock—not to launch the next viral chatbot, but to sign on the dotted line of Europe’s new AI Code of Practice. Meanwhile, across the Atlantic, U.S. states continued their legislative sprint, stacking up new rules for everything from deepfakes to AI-powered mental health chatbots.

Why does this matter? Because the rules of the AI game are being rewritten in real time, and the stakes are nothing less than how (and whether) these technologies will earn our trust. In the past seven days, we’ve seen:

  • OpenAI, Anthropic, and 24 other tech giants sign the EU’s General-Purpose AI Code of Practice, setting a new global bar for transparency, safety, and copyright compliance[1].
  • Fierce dissent from Meta and the Belgian government, who argue the Code is both rushed and riddled with loopholes[1].
  • The U.S. patchwork of state AI laws grow even more tangled, as Texas and New York roll out new rules targeting manipulative AI and mental health risks[2][3].

This isn’t just regulatory theater. These moves will shape the AI tools in your pocket, the algorithms in your workplace, and the digital assistants in your home. In this week’s roundup, we’ll unpack the drama behind the EU’s regulatory blitz, the U.S. states’ piecemeal approach, and what it all means for the future of AI ethics and compliance.


26 Tech Giants Sign the EU AI Code: A New Era for AI Ethics & Regulation

On August 1, 2025, the European Union’s AI Act moved from theory to practice as 26 of the world’s largest tech companies—including OpenAI, Anthropic, Google, Microsoft, and Amazon—signed the voluntary General-Purpose AI (GPAI) Code of Practice[1]. This Code, published by the European Commission on July 10, 2025, is the first comprehensive compliance roadmap for companies building or deploying general-purpose AI models in Europe[1].

What’s in the Code?
Think of it as the AI industry’s new rulebook, covering the following requirements (a brief illustrative sketch of how a provider might document them follows the list):

  • Transparency: Companies must disclose how their models work, what data they use, and how they make decisions.
  • Copyright Compliance: AI models must respect intellectual property rights, a nod to the growing chorus of artists and publishers demanding fair treatment.
  • Safety & Security: Firms are required to conduct systemic risk assessments, adversarial testing, and energy usage reporting—measures designed to prevent everything from algorithmic bias to catastrophic model failures[1].
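What might these obligations look like in practice? The Code does not prescribe a single format, but a provider could keep a machine-readable record of the required disclosures. The following is a minimal, hypothetical sketch in Python; the field names and values are illustrative assumptions, not the Code’s official documentation templates.

    # Hypothetical sketch of a machine-readable transparency record for a
    # general-purpose AI model. Field names are illustrative only and do not
    # reproduce the Code's actual documentation forms.
    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class ModelTransparencyRecord:
        model_name: str
        provider: str
        intended_uses: list[str]
        training_data_summary: str          # plain-language description of data sources
        copyright_policy_url: str           # how opt-outs / IP complaints are handled
        systemic_risk_assessed: bool        # was a systemic risk assessment performed?
        adversarial_testing_summary: str    # red-teaming / adversarial testing notes
        estimated_training_energy_kwh: float | None = None  # energy usage reporting
        known_limitations: list[str] = field(default_factory=list)

        def to_json(self) -> str:
            # Serialize the record so it can be published or shared with a regulator.
            return json.dumps(asdict(self), indent=2)

    # Example usage with made-up values:
    record = ModelTransparencyRecord(
        model_name="ExampleGPT-1",
        provider="Example AI Ltd.",
        intended_uses=["text summarization", "customer support drafting"],
        training_data_summary="Licensed text corpora plus filtered public web data.",
        copyright_policy_url="https://example.com/copyright-policy",
        systemic_risk_assessed=True,
        adversarial_testing_summary="Internal red-team exercises on jailbreaks and bias.",
        estimated_training_energy_kwh=1.2e6,
    )
    print(record.to_json())

Keeping such a record alongside the model makes it straightforward to publish the required transparency summary or hand it over on request, whatever documentation template ultimately applies.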

Why Now?
The EU AI Act, passed in 2024, set a deadline of August 2, 2025, for GPAI providers to comply with its new rules[1]. But the final developer guidance—the GPAI Code—wasn’t published until July 10, giving companies less than a month to adapt[1]. The result? A regulatory scramble, with some firms reportedly working around the clock to meet the deadline.

Industry Reactions: Applause and Alarm Bells
While many see the Code as a win for legal certainty and a competitive edge in the European market, not everyone is cheering. Meta and the Belgian government have voiced strong opposition, arguing that the Code’s rapid rollout and ambiguous obligations blur the line between best practice and legal requirement[1]. Belgian Minister for Digitalization Vanessa Matz called for stronger protections for journalists and content creators, warning, “This is not the end of the process”[1].

Real-World Impact:
For businesses, signing the Code means reduced administrative headaches and a presumption of compliance with the EU AI Act—a major incentive for anyone hoping to avoid regulatory whiplash[1]. For consumers, it promises more transparent, accountable AI systems, with clearer disclosures and safeguards against misuse.


U.S. State AI Laws: The Patchwork Grows

While Europe pushes for a unified approach, the United States is doubling down on its state-by-state patchwork. As of July 31, 2025, more than 150 state-level AI laws are on the books, each targeting specific risks and use cases[3].

Texas Takes the Lead
The Texas Responsible AI Governance Act (TRAIGA), signed into law in June 2025 and set to take effect on January 1, 2026, is one of the most comprehensive state AI laws yet. It bans intentionally manipulative and harmful AI, regulates government use of AI, and sets new standards for transparency and accountability[3].

New York’s Mental Health Mandate
New York, meanwhile, has zeroed in on AI companion chatbots, requiring clear disclosure when users are interacting with AI and mandating protocols for responding to suicidal ideation[3].
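What would compliance look like at the code level? The statute does not mandate any particular implementation, but a minimal, hypothetical sketch in Python might pair an up-front AI disclosure with a crisis-response hook. The phrase list, helpline text, and function names below are illustrative placeholders, not a production-grade safety system.

    # Purely illustrative sketch: an AI-use disclosure plus a naive crisis-response
    # hook of the kind New York's companion-chatbot rules contemplate. A real system
    # would use far more robust detection and clinically reviewed protocols.
    AI_DISCLOSURE = (
        "You are chatting with an AI companion, not a human. "
        "Responses are generated automatically."
    )

    # Placeholder phrases; real deployments rely on trained classifiers and human review.
    CRISIS_PHRASES = ("suicide", "kill myself", "end my life", "self-harm")

    CRISIS_RESPONSE = (
        "It sounds like you may be going through something serious. "
        "You deserve support from a real person. In the U.S. you can call or text 988 "
        "to reach the Suicide & Crisis Lifeline."
    )

    def start_session() -> str:
        # The disclosure is shown before any conversation begins.
        return AI_DISCLOSURE

    def generate_model_reply(user_message: str) -> str:
        # Stand-in for the actual chatbot model.
        return "This is where the companion chatbot's normal reply would go."

    def handle_message(user_message: str) -> str:
        # Route to the crisis protocol before any normal model response is generated.
        lowered = user_message.lower()
        if any(phrase in lowered for phrase in CRISIS_PHRASES):
            return CRISIS_RESPONSE
        return generate_model_reply(user_message)

In a real deployment the keyword check would be replaced by a properly evaluated classifier and clinically reviewed escalation protocols; the point here is simply that the disclosure and the crisis pathway sit in front of the normal model response.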

Federal Gridlock
The U.S. Senate briefly considered a moratorium on state-level AI law enforcement, but ultimately rejected it, leaving the current patchwork intact[1]. The result: a regulatory landscape where companies must navigate a maze of overlapping (and sometimes conflicting) rules, depending on where they operate.

Expert Perspective:
Legal analysts warn that this fragmented approach could stifle innovation and create compliance nightmares for startups and multinationals alike[3]. But for now, the state-by-state model is here to stay.


The EU’s Rushed Rollout: Dissent and Demands for Revision

The EU’s AI Code may be a landmark, but it’s also a lightning rod for criticism. The final guidance landed just weeks before the August 2 enforcement deadline, leaving companies scrambling to interpret and implement complex new requirements[1].

Key Points of Contention:

  • Ambiguous Obligations: Critics argue that the Code’s requirements for systemic risk mitigation, energy reporting, and adversarial testing are too vague, making it hard for companies to know if they’re truly compliant[1].
  • Copyright and Content Protections: Journalists, publishers, and content creators worry that the Code doesn’t go far enough to protect their rights, especially as generative AI models become more sophisticated[1].
  • Legal Uncertainty: Some firms fear that the voluntary Code could become a de facto legal standard, exposing them to liability even if they follow the rules in good faith[1].

Belgium’s Stand:
Belgium’s digital minister has called for a revision of the Code before the next legal review phase, signaling that the debate over AI governance in Europe is far from settled[1].

What’s Next?
The European Commission is expected to issue further guidelines on high-risk AI systems by February 2026, but for now, the industry is left to navigate a rapidly evolving—and sometimes confusing—regulatory landscape[1].


Analysis & Implications: The New Normal for AI Ethics & Regulation

This week’s developments mark a turning point in the global conversation about AI ethics and regulation. Here’s what’s emerging:

1. Europe Sets the Pace—But Not Without Friction

The EU’s AI Act and Code of Practice are setting a new global standard for AI governance, forcing even non-European companies to adapt if they want access to the world’s second-largest digital market[1]. But the rushed rollout and ongoing dissent highlight the challenges of regulating fast-moving technologies.

2. The U.S. Patchwork Persists

With federal action stalled, U.S. states are filling the void—sometimes in conflicting ways. This creates both opportunities and headaches for companies, who must tailor their compliance strategies to a growing mosaic of local laws[3].

3. Transparency and Accountability Are Now Table Stakes

Whether in Brussels or Austin, the message is clear: AI developers must be able to explain how their systems work, what data they use, and how they mitigate risks. Black-box models and “move fast and break things” are out; explainability and responsibility are in[1][3].

4. Real-World Impact: What This Means for You

  • For businesses: Expect more paperwork, but also more clarity on what’s required to operate in key markets.
  • For consumers: Look for clearer disclosures, more robust safety features, and (hopefully) fewer algorithmic surprises.
  • For developers: The days of “build first, ask permission later” are numbered. Compliance is now a core part of the AI development lifecycle.

Conclusion: The Future of AI Ethics—A Work in Progress

As the dust settles on this week’s regulatory fireworks, one thing is clear: the era of AI self-regulation is over. Whether you’re a developer, a business leader, or just someone who relies on AI-powered tools, the rules of engagement are changing fast.

Europe’s new AI Code of Practice is a bold experiment in balancing innovation with accountability, but its rushed rollout and vocal critics show that the path to ethical AI is anything but straightforward. Meanwhile, the U.S. continues to embrace its patchwork approach, leaving companies to navigate a maze of local laws and shifting expectations.

The big question for the months ahead: Can regulators, companies, and civil society find common ground before the next wave of AI breakthroughs arrives? Or will the race to regulate AI become as complex—and unpredictable—as the technology itself?

One thing’s for sure: In the world of Artificial Intelligence & Machine Learning, the only constant is change. Stay tuned.


References

[1] OpenAI, Anthropic, and 24 Other Tech Giants Sign EU AI Code of Practice. (2025, August 1). Nemko Digital. https://digital.nemko.com/news/openai-anthropic-signs-eu-ai-code
[2] From California to Kentucky: Tracking the Rise of State AI Laws in 2025. (2025, May 27). White & Case. https://www.whitecase.com/insight-alert/california-kentucky-tracking-rise-state-ai-laws-2025
[3] Artificial Intelligence (AI) Legislation. (2024, January 1). MultiState. https://www.multistate.ai/artificial-intelligence-ai-legislation

Editorial Oversight

Editorial oversight of our insights articles and analyses is provided by our chief editor, Dr. Alan K. — a Ph.D. educational technologist with more than 20 years of industry experience in software development and engineering.
