
Former Palantir CISO Dane Stuckey joins OpenAI to head security

Dane Stuckey, the former CISO of analytics firm Palantir, has joined OpenAI as its newest CISO, serving alongside OpenAI head of security Matt Knight.

Stuckey announced the move in a post on X Tuesday night.

“Security is germane to OpenAI’s mission,” he said. “It is critical we meet the highest standards for compliance, trust, and security to protect hundreds of millions of users of our products, enable democratic institutions to maximally benefit from these technologies, and drive the development of safe AGI for the world. I’m so excited for this next chapter, and can’t wait to help secure a future where AI benefits us all.”

Stuckey started at Palantir in 2014 on the information security team as a detection engineering and incident response lead. Prior to joining Palantir, Stuckey spent over a decade in various commercial, government, and intelligence community roles spanning digital forensics, incident detection and response, and security program development, according to his blog.

Stuckey’s work at Palantir, an AI company rich in government contracts, could help advance OpenAI’s ambitions in this area. Forbes reports that, through its partner Carahsoft, a government contractor, OpenAI is seeking to establish a closer relationship with the U.S. Department of Defense.

Since it lifted its ban on selling AI tech to the military in January, OpenAI has worked with the Pentagon on a number of software projects, including ones related to cybersecurity. It has also appointed the former head of the National Security Agency, retired Gen. Paul Nakasone, as a board member.

OpenAI has been beefing up the security side of its operation in recent months.

A few weeks ago, the company posted a job listing for a head of trusted compute and cryptography to lead a new team focused on building “secure AI infrastructure.” This infrastructure would entail capabilities to protect AI tech, security tool evaluations, and access controls “that advance AI security,” per the description.
