Anthropic calls itself a “public benefit corporation dedicated to securing [AI’s] benefits and mitigating its risks”.
In particular, it has focused on preventing the risks it believes are posed by more advanced frontier systems, such as their becoming misaligned with human values, being misused in areas such as conflict, or growing too powerful.
It has released reports on the safety of its own products, including when it said its technology had been “weaponised” by hackers to carry out sophisticated cyber attacks.
But it has also come under scrutiny over its practices. In 2025, it agreed to pay $1.5bn (£1.1bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.
Like OpenAI, the firm also seeks to capitalise on the technology's benefits, including through its own AI products, such as its ChatGPT rival Claude.
It recently released a commercial that criticised OpenAI’s move to start running ads in ChatGPT.
OpenAI boss Sam Altman had previously said he hated ads and would use them as a “last resort”.
Last week, he hit back at the advert’s description of this as a “betrayal” – but was mocked for his lengthy post criticising Anthropic.
Writing in the New York Times on Wednesday, former OpenAI researcher Zoe Hitzig said she had “deep reservations about OpenAI’s strategy”.
“People tell chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife,” she wrote.
“Advertising built on that archive creates a potential for manipulating users in ways we don’t have the tools to understand, let alone prevent.”
Hitzig said a potential “erosion of OpenAI’s own principles to maximise engagement” might already be underway at the firm.
She said she feared this may accelerate if the company's approach to advertising does not reflect its stated mission to benefit humanity.
BBC News has approached OpenAI for a response.
