
Vice President JD Vance told world leaders in Paris that AI must “be free from ideological prejudices,” and that American technology is not a censorship tool. (Credit: Reuters)
A new report from the Anti-Defamation League (ADL) shows anti-Semitic and anti-Israel bias in major AI large language models (LLMs).
In the study, the ADL asked GPT-4o (OpenAI), Claude 3.5 Sonnet (Anthropic), Gemini 1.5 Pro (Google) and Llama 3-8B (Meta) to indicate their level of agreement with a series of statements. The researchers varied the prompts, attaching names to some and leaving others anonymous, to see whether the LLMs’ answers changed based on the user’s name or the lack of one.
Each LLM was asked to evaluate 8,600 statements, for a total of 34,400 responses, according to the ADL. The organization said it used 86 statements, each falling into one of six categories: bias against Jews; bias against Israel; the war in Gaza/Israel and Hamas; Jewish and Israeli conspiracy theories and tropes (excluding the Holocaust); Holocaust conspiracy theories and tropes; and non-Jewish conspiracy theories and tropes.

AI assistant apps on smartphones, such as OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude. (Getty Images)
The ADL said all of the LLMs showed “measurable anti-Semitic and anti-Israel bias,” with Llama’s bias being the “most pronounced.” According to the ADL, Meta’s Llama gave “completely false” answers to questions about Jews and Israel.
“Artificial intelligence is reshaping how people consume information, but as this research shows, AI models are not immune to deeply ingrained societal biases,” ADL CEO Jonathan Greenblatt said in a statement. “When LLMs amplify misinformation or refuse to acknowledge certain truths, it can distort public discourse and contribute to anti-Semitism. This report is an urgent call to AI developers to take responsibility for their products and implement stronger safeguards against bias.”
When the models were asked about the ongoing Israel-Hamas war, GPT and Claude were found to exhibit “significant” bias. Additionally, the ADL said, “LLMs refused to answer questions about Israel more frequently than other topics.”

Meta’s Oversight Board issued a ruling on the anti-Israel rallying cry “From the river to the sea.”
The ADL warned that the LLMs used in the report displayed a concerning inability to accurately reject anti-Semitic tropes and conspiracy theories. Furthermore, the ADL found that all the LLMs except GPT showed more bias when answering questions about Jewish conspiracy theories than about non-Jewish conspiracy theories, and that all of them showed more bias against Israel than against Jews.
A Meta spokesperson told Fox Business that the ADL’s research did not use the latest version of Meta AI. The company said it tested the same prompts the ADL used and found that the updated version of Meta AI gave different answers depending on whether the questions were multiple-choice or open-ended. According to Meta, users are far more likely to ask open-ended questions than questions formatted like the ADL’s prompts.
“People typically use AI tools to ask open-ended questions that allow for nuanced responses, not prompts that require choosing from a list of pre-selected multiple-choice answers,” the spokesperson said. “We are constantly improving our models to ensure they are fact-based and unbiased, but this report simply does not reflect how AI tools are generally used.”
Google raised a similar issue in its comments to Fox Business. The company said the version of Gemini used in the report was a developer model, not its consumer product.
Like Meta, Google took issue with how the ADL posed its questions to Gemini. According to Google, the statements do not reflect how users actually ask questions, and the answers real users receive would be more detailed.
Daniel Kelly, interim head of the ADL Center for Technology and Society, warned that these AI tools are already ubiquitous in schools, workplaces and on social media platforms.
“AI companies must take proactive steps to address these failures, from improving their training data to refining their content moderation policies,” Kelly said in a press release.

Pro-Palestinian protesters march ahead of the Democratic National Convention on August 18, 2024 in Chicago, Illinois. (Jim Vondruska/Getty Images)
The ADL offered several recommendations for addressing AI bias, directed at both developers and governments. First, the organization asks developers to partner with institutions such as government and academia to conduct pre-deployment testing.
Developers are also encouraged to consult the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and to consider the possible biases in their training data. Governments, meanwhile, are encouraged to promote AI safety efforts with a built-in focus on ensuring content and usage safety. The ADL also encourages governments to create regulatory frameworks for AI developers and to invest in AI safety research.
OpenAI and Anthropic did not immediately respond to Fox Business’ requests for comment.