This week, the American aviation giant pleaded guilty to fraud. Boeing had knowingly withheld information from regulators about a flawed flight control system on its 737 MAX; that system later caused two crashes that killed 346 people.
Boeing’s actions are a tragic example of how easily profit can trump safety. This tension between profit and safety is particularly relevant for large AI companies, which are competing to stay at the cutting edge of innovation. For these tech executives, it is not only profit but also personal legacy that’s on the line when they launch new products. It’s easy to see how they might get caught up in the race to Artificial General Intelligence (AGI) at the expense of safety.
While recent technological improvements, particularly in large language models (LLMs) and image generation, have been impressive, we want to remain clear-eyed about progress and not get swept away by the hype. This means acknowledging the real ethical and safety concerns associated with AI.
Because more data generally produces better models, AI companies have furiously snapped up any data they can get their hands on. This includes novels, social media posts, and Google Docs, raising questions of intellectual property and informed consent.
When it comes to safety, AI could be misused to scale cyberattacks on critical infrastructure or to assist in developing novel bioweapons. Even without malicious actors, these models are often black boxes, meaning we don’t always know why they behave the way they do. Many AI researchers also worry about misalignment, where AI acts against human values. Sam Altman himself has said that “thinking that the alignment problem is solved would be a very grave mistake indeed”.
And unlike aviation safety, AI safety is not legislated. The new AI Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST), is not a regulatory body; it seeks to set guidelines and standards. It can require companies to share the results of safety testing, but it cannot require them to actually conduct that testing. While the White House has introduced safety commitments for technology companies, these are voluntary and have no enforcement mechanism.
The problem is not that executives are evil and don’t care about safety. They very likely mean it when they publicly acknowledge their concerns about AI. The problem is that when launch dates and expectations loom, it’s very tempting to take shortcuts. What were intended to be one-time shortcuts can become habitual and shape the culture of an organization. When resigning from OpenAI, an alignment researcher warned that “safety culture and processes have taken a backseat to shiny products”.
The current US approach to AI safety relies on companies to self-regulate. As we’ve seen in aviation and are beginning to see in AI, corporate incentives do not ensure optimal outcomes for public safety. Without binding commitments and enforcement mechanisms, the consequences could be deadly.
For the US to lead on AI, the AISI needs resources and formal authorization from Congress.