OpenAI Collaborates with US Military on AI Projects After Policy Update

OpenAI, the artificial intelligence company, has disclosed that it is working with the US military on several AI initiatives following a significant policy change. The shift, which now allows OpenAI to collaborate with the military, was revealed by one of the company’s executives during the World Economic Forum in Davos, as reported by Bloomberg.

OpenAI’s Evolving Stance on Military Involvement

Anna Makanju, Vice President of Global Affairs at OpenAI, explained that the company is developing “open-source cybersecurity software” and is in discussions with the US government on strategies to prevent suicides among military veterans. Although Ms. Makanju did not go into specifics on these projects, she clarified that the removal of the prior prohibition on using OpenAI’s technology for military and warfare applications is part of a broader policy update intended to accommodate new applications of the company’s AI technology.

She stated, “Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world.”

Restrictions Remain on Certain Uses

Despite the relaxation of the ban, Ms. Makanju emphasized that OpenAI continues to maintain strict prohibitions on the utilization of its technology for purposes that involve developing weapons, causing harm to people, or destroying property.

However, it’s worth noting that Microsoft, a major stakeholder in OpenAI, has no comparable prohibition on weapons development and has a long history of contracts with the US military and other government agencies.

OpenAI’s Expansion into Election Security

Also at Davos, CEO Sam Altman highlighted OpenAI’s expanding involvement in “election security.” Acknowledging the stakes, Mr. Altman stated, “Elections are a huge deal,” and added that it is essential to address concerns and anxieties related to the electoral process.

OpenAI is reportedly focusing on preventing the misuse of its generative AI tools for spreading “political disinformation.” This includes combating deepfakes and other artificially generated media depicting political candidates that could influence or manipulate voters ahead of the 2024 elections.

Legal Challenges Loom Over OpenAI

OpenAI and Microsoft are also facing legal challenges: the New York Times recently sued both companies for copyright infringement. The newspaper’s lawsuit alleges that OpenAI’s generative AI capabilities pose an existential threat to press freedom and constitute unfair competition, and it seeks “billions of dollars in statutory and actual damages” for what it terms “unlawful copying” and unauthorized use of the New York Times’ intellectual property.

Furthermore, the law firm Susman Godfrey, which represents the New York Times, previously proposed a class action lawsuit against both OpenAI and Microsoft. That suit alleges “rampant theft” of authors’ works, asserting that the companies used nonfiction authors’ writings without consent to “train” their highly publicized chatbot, ChatGPT.

In light of these developments, OpenAI’s evolving role in AI and its engagements with the military and election security continue to generate significant attention and debate. The company remains at the forefront of discussions surrounding the responsible use of AI technologies in various domains.
