Gate News reports that on March 17, Anthropic posted a job opening on LinkedIn for a “Chemical Weapons and High-Explosive Policy Manager.” Applicants must have at least five years of experience in chemical weapons and/or explosives defense, along with knowledge of radiological dispersal devices (dirty bombs). Anthropic said the position is intended to prevent its AI tool Claude from being used for “catastrophic misuse,” citing concern that Claude could be exploited to obtain information on manufacturing chemical or radiological weapons; it is seeking experts to assess whether its existing safety measures are sufficient.

Another AI company posted a similar position on its careers page, seeking a Biological and Chemical Risk Researcher with an annual salary of up to $455,000.

In response, Dr. Stephanie Hare, a technology researcher and co-host of the BBC program “AI Decoded,” asked: “Is it really safe to let AI systems handle sensitive information about chemicals, explosives, and radiological weapons, even if the AI is instructed not to use this information?” She also pointed out that no international treaties or regulations currently govern such work or the combined use of AI with these types of weapons: “All of this is happening outside the public eye.”