Anthropic Launches Internal Think Tank as Pentagon Blacklist Heads to Court

Anthropic merges three research teams under Jack Clark days after suing the Pentagon over its supply-chain risk designation.

Anthropic on Wednesday announced the Anthropic Institute, an internal think tank that merges three of the company's existing research teams under co-founder Jack Clark, starting with 30 people. The unit combines Anthropic's Frontier Red Team, its Societal Impacts group, and its Economic Research division to study how AI reshapes jobs, security, governance, and social values. Clark takes the new title of Head of Public Benefit, handing the public policy team he built and tripled in size during 2025 to Sarah Heck, formerly Anthropic's Head of External Affairs.

The announcement lands days after Anthropic sued the Trump administration over a supply-chain risk designation that bars Pentagon contractors from using its AI models, a label previously reserved for foreign adversaries.

Key Takeaways

  • Anthropic merges three research teams into the Anthropic Institute under co-founder Jack Clark, starting with 30 people
  • Clark shifts from policy chief to Head of Public Benefit; Sarah Heck takes over public policy
  • The launch comes days after Anthropic sued the Pentagon over a supply-chain risk designation that bars contractors from using its models
  • Founding hires include former Google DeepMind, Yale Law, and UVA economics talent


The journalist who became a policy chief

Clark arrived at AI through newsrooms, not labs. A British immigrant, he spent years as a technology reporter at Bloomberg and The Register, covering distributed systems, quantum computing, and early machine learning research. Journalism was method acting, he once said: covering databases meant learning SQL, and writing about chip companies meant teaching himself how fabs actually work. Clark brought that obsession to OpenAI in 2016, where he rose to policy director and grew fixated on the gap between what AI could do and what policymakers understood about it.

In 2021, Clark left with the Amodeis to co-found Anthropic. Over the five years since, he built the company's policy operation from scratch, testified before Congress on AI safety, and published Import AI, a weekly newsletter read by roughly 70,000 people. He was a founding member of Stanford's AI Index and served on the National Artificial Intelligence Advisory Committee under Biden.

Now he's stepping back from the policy team to run a think tank that sits inside the company but faces outward. The move had been in the works since November, Clark told The Verge, and the Pentagon conflict did not directly shape the research agenda. But the timing is hard to ignore.

Anthropic's annualized revenue pace has climbed to $19 billion from $14 billion just weeks ago. Its valuation sits near $350 billion. And it is fighting a legal battle that could cost it billions in government-adjacent revenue, according to court filings from CFO Krishna Rao. The company's public sector business was projected to reach multiple billions in annual recurring revenue within five years, Anthropic's head of public sector Thiyagu Ramasamy said in a filing tied to the lawsuit.

The Institute's stated mission reads like a response to the exact questions the Pentagon fight has forced into the open. What happens when AI models get embedded in military targeting systems? Nobody has settled who decides what those models can and cannot do, and Congress has barely tried.

"What we're experiencing with the last few weeks just sort of shows you how much hunger there is for a larger national conversation by the public about this technology," Clark told The Verge.

Three teams, one mandate

The Institute pulls together groups that already existed inside Anthropic but operated separately. The Frontier Red Team stress-tests AI models at their capability limits, probing for dangerous behaviors before deployment. Societal Impacts tracks how Claude gets used in practice across industries. Economic Research measures AI's effects on labor and output.

Clark's bet is that combining these functions under one roof produces insights none of them could generate alone. A red team finding that a model can discover zero-day cybersecurity vulnerabilities becomes more actionable when paired with economists who can model what happens if that capability proliferates. An impact study showing Claude replacing paralegal work means more when a forecasting team can project how fast displacement scales.


The founding hires make the bet concrete. Matt Botvinick spent years running research at Google DeepMind before a fellowship at Yale Law School. Now he'll lead the Institute's work on AI and the rule of law. Anton Korinek is on leave from the University of Virginia's economics department to study how AI could restructure economic activity at its foundations. And Zoë Hitzig walked away from OpenAI after the company started running ads in ChatGPT. She joins to connect economic research with model training decisions.

The contradiction nobody can resolve

Anthropic built its reputation on safety research. It also deployed Claude across all 18 U.S. intelligence agencies and partnered with Palantir to embed it in classified military systems, including the Maven targeting platform used in Iran operations.

The company draws a distinction between helping analysts process intelligence and giving AI autonomous control over weapons or domestic surveillance. The Pentagon wanted those restrictions removed. Anthropic refused. Defense Secretary Pete Hegseth accused the company of "arrogance and betrayal." DOD technology chief Emil Michael called Amodei a "liar" with a "God-complex." Anthropic found itself cornered from every side.

Hugging Face's chief ethics scientist Margaret Mitchell was less diplomatic in an interview with The Guardian. "It's not that they don't want to kill people. It's that they want to make sure to kill the right people." The government decides who qualifies, she said.

Clark's Institute doesn't resolve that tension. It puts it in writing. A think tank examining whether AI "makes us safer or introduces new dangers" is led by a company whose AI is right now embedded in a war it can't fully observe, inside classified systems it can't fully control.

What comes next

Clark wants to add teams that forecast AI progress and study how the technology collides with existing law. Both are already being staffed. Anthropic is also opening a Washington, D.C., office this spring, planting a bigger flag in the city where its lawsuit against the DOD is playing out alongside a separate case in San Francisco.

Michael told Bloomberg this week that talks with Anthropic are over. OpenAI signed its own Pentagon deal claiming the same red lines Anthropic was blacklisted for defending. You can study the societal implications of AI while your chatbot helps select bombing targets, apparently. Just not in the same contract.

Clark's new think tank will publish research on what powerful AI means for the world. He won't have to look far for the first case study. It's already in the courtrooms, the classified networks, and whatever targets Maven selects with Claude running underneath.

Frequently Asked Questions

What is the Anthropic Institute?

An internal think tank combining Anthropic's Frontier Red Team, Societal Impacts group, and Economic Research division. It studies how AI affects jobs, security, governance, and social values. Jack Clark leads it with 30 initial staff.

Why did Anthropic sue the Pentagon?

The Defense Department placed Anthropic on a supply-chain risk list, a designation previously reserved for foreign adversaries like Huawei. The label bars Pentagon contractors from using Anthropic's AI models, threatening billions in projected government revenue.

What is Jack Clark's background before Anthropic?

Clark worked as a technology reporter at Bloomberg and The Register covering distributed systems and machine learning. He joined OpenAI in 2016 as policy director, then co-founded Anthropic with the Amodeis in 2021.

Who are the Anthropic Institute's key hires?

Matt Botvinick, formerly of Google DeepMind and Yale Law School, leads AI and rule-of-law research. Anton Korinek from UVA studies economic restructuring. Zoë Hitzig left OpenAI to connect economic research with model training.

How does Claude currently work with the military?

Claude operates across all 18 U.S. intelligence agencies and runs inside Palantir's classified military systems, including the Maven targeting platform. Anthropic maintains restrictions against autonomous weapons control and domestic surveillance.
