Anthropic has just rolled out a set of custom AI models designed to tackle unique challenges in U.S. national security. Dubbed the ‘Claude Gov’ models, they were created with direct input from government experts to support everything from strategic planning and operational support to intelligence analysis.
These models are already in use at top U.S. national security agencies and are restricted to personnel with the appropriate clearance. They have undergone the same rigorous safety testing as other Claude models, equipping them to handle sensitive tasks.
In its effort to secure more stable revenue streams, Anthropic has strengthened its ties with U.S. government entities. Last November, the company partnered with Palantir and Amazon's AWS to offer its AI solutions to the defence sector. The Claude Gov models are optimised for processing classified data, showing a lower rate of refusals when handling sensitive material.
Built to understand key dialects and interpret complex cybersecurity data, these models are a practical asset for analysing challenging defence-related documents. While Anthropic leads this particular charge, it is not alone: OpenAI is also collaborating with the U.S. Defence Department, Meta is expanding its Llama models for defence use, Google is adapting its Gemini AI for classified applications, and Cohere is teaming up with Palantir on similar projects.