Why Companies Can't Regulate Their Own AI
As Google's recent rollback of AI ethical principles demonstrates, we can't expect companies to consistently choose ethics over growth—but we can make ethical behavior the profitable choice.

Here's a pattern we've seen before: A tech company makes bold ethical commitments, then quietly walks them back when market or political pressures mount. Google's recent decision to remove its AI ethical principles isn't surprising – it's predictable. Just this week, Google eliminated key language promising not to pursue "technologies that cause or are likely to cause overall harm" and "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
Let's start with the numbers. Google plans to spend $75 billion on AI infrastructure in 2025, a massive jump from the $32.3 billion in capital expenditures it reported for 2023. Its cloud revenue grew 30% year-over-year to $12 billion, but that's a deceleration from 35% in the previous quarter and a miss against analysts' estimate of $12.2 billion. Microsoft and other competitors are pushing hard into AI. The market is demanding growth.
This is the core tension: Companies exist to grow and generate returns. When ethical principles conflict with growth imperatives, growth tends to win.
The Incentive Problem
Think about it this way: If you're a tech or product executive, you face quarterly pressure to show revenue growth, market share gains, and product launches. Your compensation is likely tied to these metrics. Your board evaluates you on them. The market values your company based on them – just look at how Alphabet's stock tumbled 8.3% when the company missed revenue expectations, shaving $169.5 billion off its market cap.
Now imagine you have two options:
- Deploy an AI feature that could drive significant revenue but has some ethical concerns
- Delay or limit the feature based on those concerns
In theory, option 2 might be better for long-term value. In practice, someone else will likely launch a similar feature anyway, and you'll be explaining to shareholders why you're falling behind.
This isn't because executives are bad people. It's because corporations are machines optimized for growth and profit. Expecting them to consistently choose ethics over growth is like expecting a calculator to sometimes get math wrong for moral reasons.
So What Actually Works?
If we accept that pure self-regulation is unlikely to hold up under pressure, what can actually be done in the current environment? The key is to work with corporate incentives rather than against them.
Make Ethics Expensive to Ignore
Google's recent experience is instructive. When they announced changes to their AI principles, their stock dropped. This suggests markets do care about governance – or at least about the risks of poor governance.
Companies respond to costs. If abandoning ethical principles becomes expensive enough – through market penalties, employee pushback, customer loss, or partner concerns – they'll maintain them. The goal should be to increase those costs.
Some practical approaches:
- Technical Architecture: Build ethical constraints directly into systems. Make them hard to remove without breaking things (see the sketch after this list).
- Legal Structures: Create binding commitments that survive quarterly pressure. Make governance changes require board approval.
- Market Mechanisms: Tie executive compensation to ethical metrics. Create certification systems partners rely on.
- Public Accountability: Document commitments publicly. Make changes visible and costly to reputation.
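To make the first idea concrete, here's a minimal sketch of what baked-in constraints can look like: a deployment function that refuses to ship a model unless a completed governance review travels with it, so removing the check means breaking every pipeline that calls it. The names here (`ReleaseReview`, `deploy_model`, the review fields) are hypothetical illustrations, not anything Google or any other vendor actually uses.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ReleaseReview:
    """Record that a governance review was completed for a release (hypothetical)."""
    model_id: str
    reviewer: str
    harms_assessment_done: bool
    dual_use_check_done: bool


class GovernanceError(RuntimeError):
    """Raised when a release is attempted without a passing review."""


def deploy_model(model_id: str, review: Optional[ReleaseReview]) -> None:
    """Refuse to ship unless a passing review for this exact model is attached.

    Every call site must supply a ReleaseReview, so stripping the check out
    means changing this function's contract and breaking the pipelines built on it.
    """
    if review is None or review.model_id != model_id:
        raise GovernanceError(f"No governance review on file for {model_id}")
    if not (review.harms_assessment_done and review.dual_use_check_done):
        raise GovernanceError(f"Governance review incomplete for {model_id}")
    print(f"Deploying {model_id} (reviewed by {review.reviewer})")


# A release only goes out when the review step has actually happened.
review = ReleaseReview("demo-model", "ai-review-board", True, True)
deploy_model("demo-model", review)
```

The specifics don't matter; the point is that the cost of quietly dropping the constraint is no longer zero.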
The European Factor
While U.S. regulation remains unlikely, the EU is moving forward with comprehensive AI rules. This creates a natural experiment. Companies will need strong governance systems for European markets regardless of U.S. requirements.
The contrast is stark: While Google is removing ethical constraints in the U.S., stating they'll now simply implement "appropriate human oversight, due diligence, and feedback mechanisms," they'll have no choice but to maintain stricter standards for European operations.
Smart companies will use this as an opportunity. Build robust governance now, use it as a competitive advantage, and be prepared for when regulations inevitably expand.
Practically Speaking
The reality is that we're stuck with corporate self-regulation for now, at least in the U.S. But that doesn't mean all approaches are equally effective.
The key is to be clear-eyed about incentives. Don't expect companies to consistently choose ethics over growth. Instead, make ethical behavior align with growth incentives. Make unethical behavior expensive.
For business leaders, this means:
- Build governance systems that are hard to dismantle
- Create accountability mechanisms with real teeth
- Align incentives toward responsible deployment
- Prepare for eventual regulation
For everyone else – developers, customers, and citizens – here's what actually matters:
- Follow the money, not the messaging: When Google announces a new AI model like Gemini 2.0, look at their capital expenditure plans ($75 billion for 2025) and where that money is actually going. Watch which AI principles get removed when they conflict with revenue opportunities.
- Vote with your wallet and your labor: The most effective pressure points are economic. Enterprise customers can demand governance requirements in contracts. Developers can choose which platforms to build on. Users can select products based on governance standards. Google's 8.3% stock drop shows markets do respond to governance concerns.
- Focus on systems, not statements: Instead of celebrating when companies publish AI principles, examine their implementation mechanisms. Does the oversight board have real power? Are there technical safeguards? Are there independent audits? Google's original 2018 principles had dedicated review teams – what happened to them?
- Build institutional memory: When companies revise their principles (like Google removing language about weapons and surveillance), document the changes. Share them. Reference them in future discussions. Don't let corporate PR rewrite history about what was promised and what was delivered – even a simple diff of archived snapshots, like the sketch below, goes a long way.
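Building that memory doesn't require anything fancy. Here's a small sketch that diffs two archived snapshots of a published AI-principles page; the file names are placeholders for copies you save yourself or pull from a web archive.

```python
import difflib
from pathlib import Path


def diff_principles(old_path: str, new_path: str) -> str:
    """Return a unified diff between two saved snapshots of a principles page,
    so removed commitments stay on the record."""
    old = Path(old_path).read_text(encoding="utf-8").splitlines()
    new = Path(new_path).read_text(encoding="utf-8").splitlines()
    return "\n".join(difflib.unified_diff(
        old, new, fromfile=old_path, tofile=new_path, lineterm="",
    ))


# Example: compare an older snapshot against the current page text
# (placeholder file names – use whatever you archived).
print(diff_principles("ai_principles_2018.txt", "ai_principles_2025.txt"))
```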
The Broader Implication
There's a lesson here that goes beyond AI governance. When we rely on corporate self-regulation for important social goods, we need to design systems that work with corporate incentives rather than hoping companies will consistently act against them. As one Google employee put it: "It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public."
The question isn't whether companies will face pressure to compromise their AI principles – they will. The question is whether we can make maintaining those principles the profitable choice.