The U.S. House has reportedly imposed a strict ban on the use of Microsoft Corp.'s (MSFT) Copilot AI chatbot by its staffers. The decision comes amid the federal government's ongoing efforts to manage its internal use of AI while it works on regulations for the emerging technology, msmestory reported.

The Chief Administrative Officer of the House, Catherine Szpindor, has communicated to congressional offices that Microsoft Copilot is “unauthorized for House use.” The Office of Cybersecurity has identified Copilot as a potential risk for leaking House data to non-House-approved cloud services.

As a result, Copilot will be removed from and blocked on all House Windows devices. Microsoft is working to address these concerns by rolling out a suite of government-oriented tools later this year, designed to meet federal security and compliance requirements.

Why It Matters: This move by the U.S. House is the latest in a series of actions by governmental bodies and major corporations to address the potential risks of AI-based chatbots. In 2023, Apple Inc. banned the internal use of OpenAI's ChatGPT and other AI tools, and Alphabet Inc.'s Google followed with a similar restriction in June of that year. These actions reflect growing concerns about data security and privacy surrounding AI-based chatbots, prompting organizations to reevaluate their use of these technologies.

Meanwhile, the White House has introduced a comprehensive policy to address the potential risks of artificial intelligence (AI) use across federal agencies. The new policy, released on Thursday, requires federal agencies to appoint a chief AI officer, disclose their AI usage, and implement protective measures.

Table of Contents

Background

The U.S. House has enforced a strict ban on the use of Microsoft Copilot, an AI chatbot developed by Microsoft Corp., due to concerns over data security and privacy.

Decision on Microsoft Copilot

The Chief Administrative Officer of the House, Catherine Szpindor, communicated the prohibition of Microsoft Copilot for House use, citing potential risks associated with leaking House data to unauthorized cloud services.

Identified Risks

The Office of Cybersecurity identified Microsoft Copilot as a potential risk for data leakage to non-approved cloud services, leading to its removal and blockage on all House Windows devices.

Microsoft’s Mitigation Steps

Microsoft is addressing these concerns by developing a suite of government-oriented tools to meet federal security and compliance requirements, scheduled for release later this year.

Growing Concerns

The ban on Microsoft Copilot is part of a broader trend where organizations like Apple and Google have restricted internal use of AI tools, reflecting mounting concerns about data security and privacy in AI applications.

White House Policy

The White House has introduced a new policy requiring federal agencies to appoint a chief AI officer, disclose AI usage, and implement protective measures to manage risks associated with AI technologies.
