A knowledge advantage can save lives, win wars, and avert disasters. At the Central Intelligence Agency, basic artificial intelligence (machine learning and algorithms) has long served its mission. Now, generative AI is joining the effort.
CIA Director William Burns has said that AI technology will augment humans, not replace them. Nand Mulchandani, the agency's first chief technology officer, is the one steering that effort. The urgency is real: adversaries are already spreading AI-generated deepfakes aimed at undermining U.S. interests.
Mulchandani, a former Silicon Valley CEO who led successful startups, was appointed to the position in 2022 after working at the Department of Defense's Joint Artificial Intelligence Center.
Projects he oversees include a generative AI application, similar to ChatGPT, that draws on open source data (meaning data that is unclassified, publicly available, or commercially available). It is used by thousands of analysts across the 18 agencies of the U.S. intelligence community. His other CIA projects involving large language models are, understandably, kept secret.
This Associated Press interview with Mulchandani has been edited for length and clarity.
Q: You recently said that generative AI should be treated like a “drunk, crazy friend.” Could you tell me more?
A: When these generative AI systems “hallucinate,” they can act like your drunk friend at a bar who blurts out something beyond the bounds of conventional thinking, and that can actually spark unconventional ideas. These systems are probabilistic by nature, so they are not precise and are prone to fabrication. That makes them great for creative work such as art, poetry, and painting. But you would not use them to do precise calculations or to design an airplane or a skyscraper; in those tasks, “close enough” just doesn't work. They can also be biased and narrowly focused, which I call the “rabbit hole” problem.
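To make the “probabilistic, not precise” point concrete, here is a minimal toy sketch (invented for illustration, not anything the CIA uses) of temperature-based token sampling: the same prompt can yield different continuations on different runs, which is useful for brainstorming and dangerous for exact answers.

```python
import random

# Toy next-token distribution a model might assign after the prompt
# "The bridge can safely carry ..." (the numbers are invented).
next_token_probs = {
    "10 tons": 0.40,
    "12 tons": 0.30,
    "15 tons": 0.20,
    "a poem":  0.10,  # the low-probability "drunk friend" continuation
}

def sample_token(probs, temperature=1.0):
    """Sample one token; higher temperature flattens the distribution,
    making unlikely (creative or wrong) continuations more frequent."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# The same question, asked five times, need not get the same answer.
for _ in range(5):
    print(sample_token(next_token_probs, temperature=1.5))
```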
Q: Currently, the only large language model I know of in use at the CIA on an enterprise scale is Osiris, a generative AI tool built on open source data for the entire intelligence community. Is that correct?
A: That is the only one we have made public. It has been a great success for us. But we need to expand the discussion beyond just LLMs. For example, we take in vast amounts of foreign-language content across many media types, including video, and we use other AI algorithms and tools to process it.
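As one illustration of what such foreign-language processing can look like with open-source tooling (the agency's actual pipeline is not public), the openai-whisper package can transcribe foreign-language speech and translate it to English in a single pass. The input file name here is a hypothetical placeholder.

```python
# Minimal sketch using the open-source "openai-whisper" package
# (pip install openai-whisper); illustrative only, not CIA tooling.
import whisper

model = whisper.load_model("base")  # small multilingual model

# task="translate" transcribes the speech and renders it in English;
# "broadcast_clip.mp3" is a hypothetical input file.
result = model.transcribe("broadcast_clip.mp3", task="translate")

print(result["language"])  # detected source language
print(result["text"])      # English rendering of the speech
```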
Q: The Special Competitive Studies Project, a powerful advisory group focused on AI in national security, has published a report saying U.S. intelligence agencies need to rapidly integrate generative AI given its disruptive potential. It sets out a two-year timeline for “deploying Gen AI tools at scale” beyond experiments and limited pilot projects. Do you agree?
A: The CIA is 100% committed to leveraging and expanding these technologies. We take this as seriously as perhaps any technology issue we face. We are already using Gen AI tools in production, so we believe we are well ahead of that schedule. The deeper answer is that we are at the start of a huge number of incremental changes, and the bulk of the work is integrating the technology more broadly into our applications and systems. It's still early days.
Q: Can you name the large language model vendors you are working with?
A: I'm not sure naming vendors is all that interesting at this point. There is an explosion of LLMs available on the market today. As a prudent customer, we do not intend to tie our ship to any particular set of LLMs or any particular set of vendors. We have evaluated and used nearly every major LLM out there, both commercial-grade and open source. We do not view the LLM market as one where a single lab is clearly better than the others. As the market has shown, models keep leapfrogging one another as new releases come out.
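His point about not tying the ship to one vendor is essentially an architectural one. A common pattern (my sketch, not CIA code) is to put a thin interface in front of interchangeable model backends, so vendors can be swapped or run side by side for evaluation; the backends below are hypothetical stand-ins, not real vendor SDK calls.

```python
# Sketch of a vendor-agnostic LLM interface; both backends here are
# hypothetical placeholders rather than real vendor APIs.
from typing import Protocol

class LLMBackend(Protocol):
    def complete(self, prompt: str) -> str: ...

class VendorA:
    def complete(self, prompt: str) -> str:
        return f"[vendor-a answer to: {prompt}]"

class OpenSourceModel:
    def complete(self, prompt: str) -> str:
        return f"[open-source answer to: {prompt}]"

def ask(backend: LLMBackend, prompt: str) -> str:
    # Callers depend only on the interface, so swapping vendors
    # (or comparing several side by side) stays cheap.
    return backend.complete(prompt)

for backend in (VendorA(), OpenSourceModel()):
    print(ask(backend, "Summarize today's open source reporting."))
```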
Q: What are the most important use cases for large language models at the CIA?
A: The first is summarization. It is impossible for CIA open source analysts to digest the vast amount of media and other information collected every day, so summarizing it for sentiment and global trends has been a milestone. Analysts then dig into the details. They must be able to annotate, with complete confidence, the data they cite and explain how they arrived at their conclusions. Those requirements haven't changed. What has changed is that analysts can now draw on both the classified and the open source material we collect, which gives them a broader perspective.
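For a sense of what summarization at scale can look like, here is a hedged sketch using the open-source Hugging Face transformers pipeline: each document is summarized individually and keeps its source ID attached, so any claim in the digest can be traced back to the underlying material. The model choice, document IDs, and texts are illustrative assumptions, not the agency's setup.

```python
# Sketch: summarize many documents while preserving source attribution,
# using the Hugging Face "transformers" summarization pipeline.
# Model and documents are illustrative placeholders.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

documents = [
    {"doc_id": "OSINT-0001", "text": "Long article text goes here ..."},
    {"doc_id": "OSINT-0002", "text": "Another long report goes here ..."},
]

digest = []
for doc in documents:
    summary = summarizer(doc["text"], max_length=60, min_length=10,
                         do_sample=False)[0]["summary_text"]
    # Keep the source ID with every summary so each claim can be
    # annotated and traced back to the document it came from.
    digest.append({"source": doc["doc_id"], "summary": summary})

for item in digest:
    print(f'[{item["source"]}] {item["summary"]}')
```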
Q: What are the biggest challenges in adopting generative AI in government?
A: There is not a lot of cultural resistance inside the agency. Our people work with AI every day in the race for competitive advantage, and clearly the whole world is seized with these new technologies and the amazing productivity gains they offer. The hard part is working within the constraints on how our information is partitioned and how our systems are built. Much of that separation of data exists for legal and privacy reasons, not just security. So how do we connect systems efficiently enough to reap the benefits of AI while keeping all of those controls fully in place? We have thought hard about that problem, about combining data in ways that preserve its encryption and privacy controls, and there are some really interesting emerging technologies that can help us do it.
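One toy illustration of “combining data while keeping the controls on it,” which is my assumption about the kind of technique he means rather than a description of CIA systems: records can stay encrypted at rest under per-compartment keys, with each reader holding keys only for the compartments it is cleared to see. The cryptography package's Fernet primitive is enough to show the idea.

```python
# Toy sketch of compartmented data: each compartment has its own key,
# records are pooled in encrypted form, and a reader can only decrypt
# the compartments it holds keys for. Uses the "cryptography" package.
from cryptography.fernet import Fernet

# One key per compartment (in practice these would live in a
# key-management system, not in the program).
keys = {"ALPHA": Fernet.generate_key(), "BRAVO": Fernet.generate_key()}

def store(compartment: str, plaintext: str) -> bytes:
    """Encrypt a record under its compartment's key before pooling it."""
    return Fernet(keys[compartment]).encrypt(plaintext.encode())

# A combined store can safely hold records from both compartments.
pooled = [
    ("ALPHA", store("ALPHA", "report from compartment alpha")),
    ("BRAVO", store("BRAVO", "report from compartment bravo")),
]

# A reader cleared only for ALPHA holds only the ALPHA key.
reader_keys = {"ALPHA": keys["ALPHA"]}

for compartment, blob in pooled:
    if compartment in reader_keys:
        print(Fernet(reader_keys[compartment]).decrypt(blob).decode())
    else:
        print(f"{compartment}: access denied (no key held)")
```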
Q: Generative AI right now is about as sophisticated as an elementary schooler, while espionage is a game for adults; it is all about trying to pierce an adversary's deception. How does Gen AI fit into that job?
A: Let me first emphasize that human analysts retain the advantage. We have world-leading experts in their fields. And much of the information that comes in requires a tremendous amount of human judgment to weigh its importance and significance, including judgments about the individual providing it. We are not having machines reproduce any of that, and we don't want computers doing the work of our domain experts.
The model we're looking at is the co-pilot model. We believe Gen AI can make a big difference in brainstorming, generating new ideas, and improving productivity and insight. We have to be very deliberate about how we use it, because these algorithms can be a force for good when leveraged properly, but used incorrectly they can really hurt you.