DOGE might replace some government workers with AI

Published Mar 11th, 2025 11:15AM EDT
Image: Rafael Henrique/SOPA Images/LightRocket via Getty Images


Using artificial intelligence to replace part of the government workforce might be one way for the Department of Government Efficiency (DOGE) to compensate for the employees it has cut, assuming the plan it's hatching works. It might also become the latest controversy in the department's short history. We're still in the early days of AI, and chatbots can't reliably replace humans.

Still, DOGE will deploy a GSAi (or GSA Chat) program to as many as 1,500 federal workers, with plans to make it available to over 10,000 GSA employees, which accounts for the agency's entire workforce. The GSA is the General Services Administration, an independent US government agency that manages contracts and services worth over $100 billion.

The development of GSAi began last year under the Biden administration as a test program with specific guardrails in place. The Trump administration has already repealed Biden’s executive order on AI safety and aggressively accelerated the development and deployment of GSAi.

Word of DOGE's plans for GSAi got out after The Atlantic spoke with current and former GSA employees who have access to the program. The Atlantic also gained access to internal documents, recordings, and the GSAi code, which is available on GitHub.

GSAi started under the Biden administration. A small GSA technology team, known as 10X, designed the AI as a testbed for such programs. The program was known as “10x AI Sandbox” at the time and wasn’t supposed to be a chatbot like ChatGPT. Instead, it was supposed to be a cost-effective environment for federal employees to test AI and its use for their specific work.

Once the Trump administration took over, the AI project took off under DOGE, becoming the current GSAi chatbot. DOGE allies pushed to accelerate its development just as the department started firing employees or encouraging resignations.

The report notes that GSAi looks a lot like ChatGPT. It has a prompt box where users issue commands, and the chatbot responds. The AI can draft emails, write code, and “much more!” That’s according to an internal email from GSA’s chief AI officer, Zach Whitman.

GSAi supports models from Meta and Anthropic, but it’s not as advanced as some commercial AI programs. For example, you can’t upload documents to the chatbot now, but future versions might support it.

The Atlantic notes the GSAi might be used for more complex tasks in the future:

The program could conceivably be used to plan large-scale government projects, inform reductions in force, or query centralized repositories of federal data, the GSA worker told me.

The report also notes remarks from Thomas Shedd made in a recent meeting at the Technology Transformation Services (TTS), GSA’s IT division. A former Tesla engineer, Shedd is now the director of the TTS. He said the agency is pushing an “AI-first strategy,” where tech and automation can make up for the human workforce terminated by DOGE.

Shedd said that "coding agents" could be employed across the government, and AI could "run analysis on contracts." Software could be used to "automate" GSA's "finance function." This sounds feasible in principle, since coding agents do now exist, at least as commercial products from AI firms. But it's also wishful thinking at the moment, considering that GSAi can't be as advanced as ChatGPT, Claude, and other commercial AIs that have already received agentic features.

The report mentions another meeting where Shedd said the TTS itself might become “at least 50 percent smaller” within weeks. The TTS also houses the team that built GSAi. That could be a problem for the development of this internal AI tool.

The Atlantic also points out the Trump administration's desire to push out the software quickly without first determining whether it's suitable for government work. The program's initial purpose was to test AI products for government use, not to rely on them for every task.

A former GSA employee warned about some of the obvious risks with GSAi that DOGE doesn’t seem to be concerned with:

“They want to cull contract data into AI to analyze it for potential fraud, which is a great goal. And also, if we could do that, we’d be doing it already.” Using AI creates “a very high risk of flagging false positives,” the employee said, “and I don’t see anything being considered to serve as a check against that.”
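The false-positive worry is a well-known base-rate problem: when genuine fraud is rare, even a fairly accurate classifier will flood reviewers with false alarms. The sketch below illustrates the arithmetic with entirely made-up numbers (the fraud rate, sensitivity, and false-positive rate are hypothetical, not figures from the GSA or The Atlantic's report):

```python
# Hypothetical illustration of the base-rate problem the former
# employee describes: if real fraud is rare, most flagged contracts
# are false positives. All numbers below are invented for illustration.

def flagged_breakdown(contracts, fraud_rate, sensitivity, false_positive_rate):
    """Return (true_positives, false_positives) among flagged contracts."""
    fraudulent = contracts * fraud_rate
    legitimate = contracts - fraudulent
    true_positives = fraudulent * sensitivity            # real fraud caught
    false_positives = legitimate * false_positive_rate   # clean contracts flagged
    return true_positives, false_positives

# Say 100,000 contracts, 1% actually fraudulent, and a model that
# catches 90% of fraud but wrongly flags 5% of clean contracts.
tp, fp = flagged_breakdown(100_000, 0.01, 0.90, 0.05)
print(tp, fp)                      # 900.0 real cases vs. 4950.0 false alarms
print(round(tp / (tp + fp), 2))    # precision ~0.15: most flags are wrong
```

Under these assumed numbers, roughly 85% of flagged contracts would be clean ones, which is the "check against that" the employee says is missing.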

The report does note that a help page for early GSAi users provides warnings about AI hallucinations, “biased responses or perpetuated stereotypes,” and “privacy issues.” It also tells users not to include personal information or sensitive unclassified information in chats with the AI. It’s unclear, however, whether anyone is enforcing AI safety.

Given that it took years for programs like ChatGPT, Claude, Gemini, and DeepSeek to get where they are today, including early support for agentic behavior where AI can browse and code on its own in response to a prompt, it's unlikely that GSAi can display similar sophistication. It's a months-old chatbot rushed forward during tumultuous times. It'll take time and dedication to make it as powerful as ChatGPT.

Meanwhile, you can and should check out The Atlantic's full report.

Chris Smith Senior Writer

Chris Smith has been covering consumer electronics ever since the iPhone revolutionized the industry in 2007. When he’s not writing about the most recent tech news for BGR, he closely follows the events in Marvel’s Cinematic Universe and other blockbuster franchises.

Outside of work, you’ll catch him streaming new movies and TV shows, or training to run his next marathon.