Former NSA head joins OpenAI board and safety committee

Former head of the National Security Agency, retired Gen. Paul Nakasone, will join OpenAI’s board of directors, the AI company announced Thursday afternoon. He will also sit on the board’s “safety and security” subcommittee.

The high-profile addition is likely intended to satisfy critics who think that OpenAI is moving faster than is wise for its customers and possibly humanity, putting out models and services without adequately evaluating their risks or locking them down.

Nakasone brings decades of experience from the Army, U.S. Cyber Command and the NSA. Whatever one may feel about the practices and decision-making at these organizations, he certainly can’t be accused of a lack of expertise.

As OpenAI increasingly establishes itself as an AI provider not just to the tech industry but to government, defense and major enterprises, this kind of institutional knowledge is valuable both in itself and as a pacifier for worried shareholders. (No doubt the connections he brings within the state and military apparatus are also welcome.)

“OpenAI’s dedication to its mission aligns closely with my own values and experience in public service,” Nakasone said in a press release.

That certainly seems true: Nakasone and the NSA recently defended the practice of buying data of questionable provenance to feed its surveillance networks, arguing that there was no law against it. OpenAI, for its part, has simply taken, rather than bought, large swathes of data from the internet, arguing when it is caught that there is no law against it. They seem to be of one mind when it comes to asking forgiveness rather than permission, if indeed they ask either.

The OpenAI release also states:

Nakasone’s insights will also contribute to OpenAI’s efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats. We believe AI has the potential to deliver significant benefits in this area for many institutions frequently targeted by cyber attacks like hospitals, schools, and financial institutions.

So this is a new market play, as well.

Nakasone will join the board’s safety and security committee, which is “responsible for making recommendations to the full Board on critical safety and security decisions for OpenAI projects and operations.” What this newly created entity actually does and how it will operate is still unknown, as several of the senior people working on safety (in the sense of AI risk) have left the company, and the committee is itself in the middle of a 90-day evaluation of the company’s processes and safeguards.
