
OpenAI’s head of trust and safety steps down

STOCKHOLM (Reuters) - OpenAI’s head of trust and safety Dave Willner is leaving the company, he said in a LinkedIn post on Friday, citing the pressures of the job on his family life and saying he would be available for advisory work.

Chief Technology Officer Mira Murati will directly manage the team on an interim basis, and Willner will continue to advise through the end of the year, OpenAI said in a statement.

Trust and safety departments have taken on a high-profile role in technology companies such as OpenAI, Twitter, Alphabet and Meta as they seek to limit the spread of hate speech, misinformation and other harmful content on their platforms.

At the same time, concerns about the negative impact of AI have risen.

Describing Willner’s work as “foundational in operationalizing our commitment to the safe and responsible use of our technology,” the company said it is seeking a “technically-skilled lead” as his replacement.

Willner took over his role at OpenAI in February 2022 after working at Airbnb and Facebook. He attributed his decision to leave to the growing demands of the job on his family life.

“Anyone with young children and a super intense job can relate to that tension, I think, and these past few months have really crystallised for me that I was going to have to prioritise one or the other,” he said in the post.

“I’ve moved teaching the kids to swim and ride their bikes to the top of my OKRs (objectives and key results) this summer.”

Microsoft-backed OpenAI, whose AI chatbot ChatGPT became the fastest-growing consumer application in history earlier this year, has said it depends on its trust and safety team to build “the processes and capabilities to prevent misuse and abuse of AI technologies”.

(Reporting by Supantha Mukherjee in Stockholm and Fanny Potkin in Singapore; Editing by Barbara Lewis)

