Madhu Gottumukkala, now serving as interim director at CISA, is facing scrutiny after reportedly uploading government contracting records to a public version of ChatGPT last summer, an action that triggered automated alerts and an internal review of potential risks to federal operations.
The episode reportedly occurred even though Department of Homeland Security policy restricts use of public AI platforms without approval, and the files in question were meant to stay within internal government channels. That combination, a restricted platform and sensitive material, sits at the center of an accountability debate over how top officials adopt new tools while protecting critical information.
Gottumukkala joined the Cybersecurity and Infrastructure Security Agency in May 2025 as deputy director, appointed by DHS Secretary Kristi Noem, and later stepped into the interim director post. That elevation put him in charge of defending federal networks against state actors and cybercriminals, which makes any misstep especially consequential. The optics of a leader in that role running afoul of policy undermine confidence in the agency.
Soon after arriving, he requested an exception to use ChatGPT, a platform normally restricted by DHS policy unless specific approval is granted. A short-term authorization was issued to select personnel, and records show Gottumukkala last accessed the tool in mid-July 2025. The narrow permission was meant to balance experimentation with safeguards, but execution matters as much as intent.
Instead of sticking to approved internal AI systems, Gottumukkala uploaded agency contracting files labeled for official use only to OpenAI's public ChatGPT service. Public commercial platforms can retain user inputs and may use them to shape future outputs, so placing restricted government documents there risks exposure beyond the agency. That risk is exactly what DHS rules aim to prevent.
By early August 2025, CISA’s automated monitoring flagged the uploads and sent repeated alerts intended to stop unauthorized disclosures. The tools detected activity outside normal channels and triggered reviews to assess any exposure. When automated safeguards do their job, they should lead to clear consequences and fixes, not bureaucratic obfuscation.
Gottumukkala met with senior DHS leadership to walk through the matter, including then-acting General Counsel Joseph Mazzara and Chief Information Officer Antoine McCord. The review also involved CISA’s Chief Information Officer Robert Costello and Chief Counsel Spencer Fisher to determine how to handle the restricted documents. Those meetings focused on identifying what was shared, why it happened, and what the agency needed to do to secure its data.
DHS policy requires a full probe whenever protected information might have been exposed, with corrective steps ranging from retraining to potential disciplinary action depending on the seriousness of the breach. An internal assessment was launched to evaluate the risks, and senior leaders reviewed the incident. Officials have not publicly disclosed the final determination or any subsequent measures taken.
CISA spokesperson Marci McCarthy said Gottumukkala received approval to access ChatGPT under specific DHS safeguards, and she described the usage as temporary and limited in scope. Limited or not, uploading restricted files to a public tool that millions of people can access does not square with the government's obligation to safeguard information. McCarthy also said the agency continues to pursue artificial intelligence adoption consistent with President Donald Trump's directive to accelerate U.S. leadership in AI development.
Federal staff receive mandatory training on protecting sensitive materials, yet this incident suggests a lapse either in following the rules or in enforcing them at senior levels. DHS-approved AI solutions keep inputs inside federal systems; choosing an external commercial tool instead opens avoidable vulnerabilities. If leadership expects others to follow the rules, it must set the example and use the tools built to protect sensitive and restricted data.
The situation comes as Sean Plankey, the administration's nominee for permanent CISA director, awaits confirmation amid unrelated objections from Sen. Rick Scott of Florida. Interim leadership must maintain credibility during that gap, and incidents like this complicate the picture. Lawmakers and agency overseers should insist on transparency about the corrective steps taken.
This is not just a matter of one permission slip gone wrong. It speaks to how federal agencies adopt new technology while preserving operational security and public trust. Clear rules are only as good as the discipline to follow them, and when the protectors of our networks stumble, it invites tougher oversight and renewed emphasis on accountability.
