Microsoft Says It Provided AI to Israel for Gaza War but Denies Use to Harm Palestinians

Microsoft acknowledged its deep involvement in the Gaza war for the first time, but did not directly address questions about precisely how the Israeli military is using its technologies

Microsoft acknowledged Thursday that it sold advanced artificial intelligence and cloud computing services to the Israeli military during the war in Gaza and aided in efforts to locate and rescue Israeli hostages. But the company also said it has found no evidence to date that its Azure platform and AI technologies were used to target or harm people in Gaza.

The unsigned blog post on Microsoft’s corporate website appears to be the company’s first public acknowledgement of its deep involvement in the war.

It comes nearly three months after an investigation by The Associated Press revealed previously unreported details about the American tech giant's close partnership with the Israeli Defense Ministry, including that the military's use of commercial AI products skyrocketed nearly 200-fold after the October 7 Hamas attack.

The AP reported that the Israeli military uses Azure to transcribe, translate and process intelligence gathered through mass surveillance, which can then be cross-checked with Israel’s in-house AI-enabled targeting systems and vice versa.

Meanwhile, human rights groups have raised concerns that AI systems, which can be flawed and prone to errors, are being used to help make decisions about who or what to target, resulting in the deaths of innocent people.

Microsoft said Thursday that employee concerns and media reports had prompted the company to launch an internal review and hire an external firm to undertake “additional fact-finding.” The statement did not identify the outside firm or provide a copy of its report.

The statement also did not directly address several questions about precisely how the Israeli military is using its technologies, and the company declined to comment further on Friday. Microsoft also did not answer written questions from The AP about how its AI models helped translate, sort and analyze intelligence used by the military to select targets for airstrikes.

The company’s statement said it had provided the Israeli military with software, professional services, Azure cloud storage and Azure AI services, including language translation, and had worked with the Israeli government to protect its national cyberspace against external threats.

Microsoft said it had also provided “special access to our technologies beyond the terms of our commercial agreements” and “limited emergency support” to Israel as part of the effort to help rescue the more than 250 hostages taken by Hamas on October 7.

“We provided this help with significant oversight and on a limited basis, including approval of some requests and denial of others,” Microsoft said. “We believe the company followed its principles on a considered and careful basis, to help save the lives of hostages while also honoring the privacy and other rights of civilians in Gaza.”

The company did not answer whether it or the outside firm it hired communicated or consulted with the Israeli military as part of its internal probe. It also did not respond to requests for additional details about the special assistance it provided to the Israeli military to recover hostages or the specific steps to safeguard the rights and privacy of Palestinians.

In its statement, the company also conceded that it “does not have visibility into how customers use our software on their own servers or other devices.” The company added that it could not know how its products might be used through other commercial cloud providers.

In addition to Microsoft, the Israeli military has extensive contracts for cloud or AI services with Google, Amazon, Palantir and several other major American tech firms.

Microsoft said the Israeli military, like any other customer, was bound to follow the company’s Acceptable Use Policy and AI Code of Conduct, which prohibit the use of products to inflict harm in any way prohibited by law. In its statement, the company said it had found “no evidence” that the Israeli military had violated those terms.

Emelia Probasco, a senior fellow at the Center for Security and Emerging Technology at Georgetown University, said the statement is noteworthy because few commercial technology companies have so clearly laid out standards for working globally with international governments.

“We are in a remarkable moment where a company, not a government, is dictating terms of use to a government that is actively engaged in a conflict,” she said. “It’s like a tank manufacturer telling a country you can only use our tanks for these specific reasons. That is a new world.”

No Azure for Apartheid, a group of current and former Microsoft employees, called on Friday for the company to publicly release a full copy of the investigative report.

“It’s very clear that their intention with this statement is not to actually address their worker concerns, but rather to make a PR stunt to whitewash their image that their relationship with the Israeli military has tarnished,” said Hossam Nasr, a former Microsoft worker fired in October after he helped organize an unauthorized vigil at the company’s headquarters for Palestinians killed in Gaza.

Cindy Cohn, executive director of the Electronic Frontier Foundation, applauded Microsoft Friday for taking a step toward transparency. But she said the statement raised many unanswered questions, including details about how the Israeli military was using Microsoft’s services and AI models on its own government servers.

“I’m glad there’s a little bit of transparency here,” said Cohn, who has long called on U.S. tech giants to be more open about their military contracts. “But it is hard to square that with what’s actually happening on the ground.”

  • Photo: Microsoft offices in Herzliya in central Israel in 2018. Credit: Seth Aronstam / Shutterstock.com