From Prompt to Peace: IFIT Study Shows AI Isn’t Ready to Give Conflict Resolution Advice

30 July 2025 – A groundbreaking study by the Institute for Integrated Transitions (IFIT) has revealed that leading large language models (LLMs) provide dangerous conflict resolution advice without conducting the basic due diligence that any human mediator would consider essential.
IFIT tested six leading AI models (ChatGPT, Claude, DeepSeek, Google Gemini, Grok, and Mistral) on three real-world prompt scenarios from Syria, Sudan, and Mexico. Each LLM response, generated on 26 June 2025, was evaluated by two independent five-person teams of IFIT researchers across ten key dimensions drawn from well-established conflict resolution principles such as due diligence and risk disclosure. Each dimension was scored on a 0 to 10 scale to assess the quality of each model's advice.
A senior sounding board of IFIT conflict resolution experts from Afghanistan, Colombia, Mexico, Northern Ireland, Sudan, Syria, Uganda, the United States, Venezuela, and Zimbabwe then reviewed the findings to assess the implications for real-world practice.
Out of a possible 100 points, the average score across all six models was just 27. Google Gemini scored highest at 37.8/100, followed by Grok (32.1), ChatGPT (24.8), Mistral (23.3), Claude (22.3), and DeepSeek (20.7). Every score represents a failure to meet minimal professional conflict resolution standards and best practices.
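For readers who want to check how the headline average is derived, here is a minimal sketch in Python using only the per-model totals published above (the variable names are illustrative, not taken from the study):

```python
# Published per-model totals, each out of a possible 100 points
# (ten dimensions, each scored on a 0-10 scale).
scores = {
    "Google Gemini": 37.8,
    "Grok": 32.1,
    "ChatGPT": 24.8,
    "Mistral": 23.3,
    "Claude": 22.3,
    "DeepSeek": 20.7,
}

# Simple arithmetic mean across the six models.
average = sum(scores.values()) / len(scores)
print(f"Average score across all six models: {average:.1f}/100")  # ~26.8, reported as 27
```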
“In a world where LLMs are increasingly penetrating our daily lives, it’s crucial to identify where these models provide dangerous advice, and to encourage LLM providers to upgrade their system prompts,” IFIT founder and executive director Mark Freeman notes. “The reality is that LLMs are already being used for actionable advice in conflict zones and crisis situations, making it urgent to identify and fix key blind spots.”
Click here to read the report press release.
Click here to read the study methodology and detailed findings.
For speaking engagements and media requests, please contact Olivia Helvadjian at [email protected]