Canadian government demands safety changes from OpenAI
Summary
Canadian officials summoned OpenAI leaders to discuss safety concerns about ChatGPT after the company banned an account linked to a mass shooter without notifying authorities. The government is demanding changes to OpenAI's safety protocols amid ongoing scrutiny and previously failed legislation.
Key Insights
Why didn't OpenAI report the shooter's account to police if it had already banned the account?
OpenAI stated that while it banned Jesse Van Rootselaar's account in 2025 after detecting misuse of its models in furtherance of violent activities, the company determined the account did not meet its internal threshold for law enforcement notification. OpenAI's policy requires that an account pose an 'imminent and credible risk of serious physical harm to others' to warrant police contact. The company considered reporting the account but concluded this threshold was not met, even though the account had been flagged by systems designed to identify violent misuse.
Sources:
[1]
What specific actions is the Canadian government threatening if OpenAI doesn't improve safety protocols?
Canadian Justice Minister Sean Fraser stated that the government expects OpenAI to implement safety changes 'very quickly,' warning that if changes are not forthcoming, 'the government is going to be making changes.' This indicates the government is prepared to mandate safety improvements through legislation rather than rely on voluntary corporate action. The warning came after Canadian ministers summoned OpenAI's safety team for talks on February 25, 2026, following the February 10 shooting in British Columbia.
Sources:
[1]