Technology policy experts from the federal government testified before the House Subcommittee on Cybersecurity, Information Technology, and Government Innovation about the government's continuing adoption of more sophisticated artificial intelligence systems for agency operations and the steps agencies are taking to put guardrails around the emerging technology.
Representatives from the White House Office of Science and Technology Policy, the Department of Homeland Security, and the Department of Defense debated the future of AI in the federal workforce. While specific applications differed among agencies, prudent AI deployment emerged as a recurring theme.
Speaking on Thursday, OSTP Director Arati Prabhakar said that, used appropriately, AI could help the government deliver better results and open up new possibilities for the American people.
Prabhakar went on to describe OSTP's function as a nerve center coordinating agency research and development, as well as federal government AI policy.
One of OSTP's key responsibilities, she said, is to be transparent with colleagues in the White House, the President, counterparts in departments and agencies, and other stakeholders about how the technology is developing, what challenges agencies will need to overcome, and what the big opportunities are.
Witnesses pointed to opportunities to use AI for sophisticated data analysis and to improve how government entities handle administrative duties and customer service.
These are areas where the new generation of language-based artificial intelligence can have tremendous benefits, Prabhakar said, but it must be used thoughtfully and carefully. Much of what the government does, she noted, involves interacting with citizens, both providing information and taking it in, and those interactions are where such tools would be applied.
According to Craig Martell, the Department of Defense's chief digital and artificial intelligence officer, AI can be applied to bringing high-quality data into defense operations and to supporting the national defense strategy.
Martell said his office is focused on comprehensively improving the quality of the data that underpins the majority of DOD use cases. It has also been concentrating on how to jointly develop models and assess their efficacy, and on how to share data effectively and in compliance with the rules.
DHS has found fighting crime to be an effective use case for AI. DHS Chief Information Officer Eric Hysen said his agency has used AI algorithms to solve cold cases and rescue victims from terrible living conditions. On the national security front, he said DHS has made it a top priority to employ large language models to thwart cyberattacks.
Hysen added that DHS is working with the Cybersecurity and Infrastructure Security Agency to help critical infrastructure organizations secure their use of AI and strengthen their cybersecurity practices more broadly to defend against emerging threats.
Martell emphasized that AI is not a "monolithic technology" that can serve as a general fix for every operational issue across the federal government's many use cases and applications. To prevent misuse, he said, his office is focused on precisely assessing an AI system's capabilities and its success in one use case versus another.
Training the various models that underpin each use case, Martell explained, requires different algorithms, different success criteria, and different data. Systems must be designed with humans integrated into them, he said, not ones that rely solely on the algorithm. It's actually a human-machine collaboration, so that a human can say, "Oh, no, they got it wrong."