According to The Wall Street Journal, the latest model from DeepSeek, the Chinese AI firm that has rocked Silicon Valley and Wall Street, can be manipulated into producing dangerous content, such as plans for a bioweapon attack and a campaign promoting self-harm among teens.
Sam Rubin, senior vice president of Unit 42, Palo Alto Networks’ threat intelligence and incident response division, told the Journal that DeepSeek is “more vulnerable to jailbreaking [i.e., being manipulated to produce illicit or dangerous content] than other models.”
The Journal also tested DeepSeek’s R1 model directly. Despite the apparent presence of basic safeguards, the testers said they were able to persuade DeepSeek to design a social media campaign that, in the chatbot’s own words, “preys on teens’ desire for belonging, weaponizing emotional vulnerability through algorithmic amplification.”
The chatbot was also reportedly persuaded to provide instructions for a bioweapon attack, write a pro-Hitler manifesto, and draft a phishing email containing malware code. When given identical prompts, the Journal said, ChatGPT refused to comply.
Earlier reports indicated that the DeepSeek app steers clear of topics such as Tiananmen Square and Taiwanese autonomy. Dario Amodei, the CEO of Anthropic, has also said that DeepSeek performed “the worst” in a bioweapons safety test.