Interest and investment in artificial intelligence, particularly generative AI platforms and tools, continue to grow across nearly every industry. A Forrester report predicts a compound annual growth rate of 36 percent between now and 2030 for products and services such as OpenAI’s ChatGPT and Google Gemini.
A bipartisan group of U.S. lawmakers has also recommended that the federal government spend billions of dollars to develop generative AI technology.
Despite its promise, however, as generative AI permeates many aspects of business and government, it raises risks and security concerns. At the recent RSA Conference in San Francisco, FBI officials warned that they are responding to an increase in cyber threats targeting businesses and government agencies that use these technologies, The Wall Street Journal reported.
To bridge the gap between responsible development and understanding risk, the National Institute of Standards and Technology (NIST) has released four draft guidance documents aimed at “improving the safety, security, and reliability” of AI and generative AI platforms, the U.S. Department of Commerce, which oversees NIST, announced. The documents build on and extend the agency’s AI Risk Management Framework (AI RMF) and are designed to help manage the risks associated with generative AI.
The publication of these documents is part of the Biden administration’s initiative to oversee the development of AI technology and bring some regulation to it.
“These guidance documents not only inform software creators about these inherent risks, but also help them develop ways to reduce them while supporting innovation,” Under Secretary of Commerce for Standards and Technology and NIST Director Laurie E. Locascio said in a statement.
Many companies are still in the early testing stages of generative AI tools. However, as large-scale deployments become more common in the coming years, developers as well as security professionals will be required to understand the impact of AI technology.
Before that happens, technology professionals must begin absorbing the lessons NIST and other agencies have produced regarding generative AI. Doing so is also important for career development and for keeping skills sharp and current, especially as AI adoption becomes more widespread. Organizations, in turn, will look to these professionals for guidance in developing policies and safety guidelines.
“Organizations should not wait for widespread regulation to start writing their own rules around AI,” Nicole Carignan, vice president of strategic cyber AI at security firm Darktrace, recently told Dice. “Organizations can set nuanced and granular policies around the use of generative AI by assessing their specific business risks and opportunities. Industry collaboration is especially important at a time when AI regulation is not fully developed.”
As NIST circulated these documents for public comment ahead of their final release, several cybersecurity experts and insiders shared their views on AI, the risks associated with these platforms, and what tech professionals can learn to help advance their careers as interest in generative AI grows.
Understand the risks and benefits of generative AI
Each of the four NIST draft guides published on April 29 covers a different aspect of the technology, the risks associated with these platforms, and how best to mitigate cybersecurity issues. The papers include:

- NIST AI 600-1, “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile”
- NIST SP 800-218A, “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models”
- NIST AI 100-4, “Reducing Risks Posed by Synthetic Content”
- NIST AI 100-5, “A Plan for Global Engagement on AI Standards”
Each of these guides provides new insights into how enterprises (not just their IT and security teams) need to address the risks associated with generative AI technologies. For example, the NIST AI 600-1 document shows how to leverage generative AI to pre-empt AI-powered threats such as business email compromise (BEC), social engineering, and advanced phishing campaigns, said Stephen Kowski, field CTO at security company SlashNext.
At the same time, he added, NIST AI 100-4 provides critical guidance for leveraging generative AI to strengthen defenses against advanced threats such as BEC, business text compromise (BTC), and social engineering.
“We see opportunity in these standards: following the NIST AI 100-4 recommendations for labeling and identifying AI-generated content greatly enhances our ability to distinguish legitimate communications from malicious ones,” Kowski told Dice. “This not only strengthens defenses against traditional phishing, but also counters advanced generative AI threats involved in social engineering, BEC, and BTC attacks, turning generative AI into a powerful cybersecurity tool.”
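To make the idea concrete, here is a minimal Python sketch of where such a label check might sit in a mail pipeline. The “X-AI-Generated” header is a hypothetical stand-in: the draft discusses provenance techniques such as watermarking and metadata rather than prescribing a specific field.

```python
# A minimal sketch of the kind of content-labeling check NIST AI 100-4
# contemplates. The "X-AI-Generated" header is a hypothetical stand-in,
# not a field the draft specifies.
from email import message_from_string


def classify_message(raw_email: str) -> str:
    """Sort inbound mail by whether it discloses AI-generated content."""
    msg = message_from_string(raw_email)
    label = msg.get("X-AI-Generated", "").strip().lower()
    if label == "true":
        return "labeled-ai-generated"    # disclosed synthetic content
    if label == "false":
        return "labeled-human-authored"
    return "unlabeled"                   # route to heavier scrutiny


sample = "Subject: Invoice update\nX-AI-Generated: true\n\nPlease wire funds today."
print(classify_message(sample))  # -> labeled-ai-generated
```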
As technology professionals become more familiar with AI technology and absorb lessons from NIST and real-world threat analysis, it’s important that those learnings are communicated throughout the organization, said Craig Jones, vice president of security operations at Ontinue. This requires developing skills in building multi-layered security approaches and understanding critical issues such as data encryption at rest and in transit, strict access controls, and continuous monitoring for anomalies.
“When a breach occurs, we must take prompt response and remedial action in accordance with legal and regulatory requirements, with clear communication to affected stakeholders,” Jones told Dice. “Lessons learned from such incidents should be integrated into improving data security frameworks for future scenarios.”
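A compressed Python sketch of the layers Jones describes, assuming Fernet symmetric encryption for data at rest, a simple role table for access control, and a naive per-user counter for anomaly monitoring; real deployments would add a key management service, TLS for data in transit, and a SIEM for alerting:

```python
# A minimal sketch of layered controls: encryption at rest, role-based
# access control, and crude anomaly monitoring. Illustrative only.
from collections import Counter
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in production, fetch from a KMS
cipher = Fernet(key)
access_log = Counter()

ROLE_PERMISSIONS = {"analyst": {"read"}, "admin": {"read", "write"}}


def store_record(plaintext: str) -> bytes:
    """Encrypt a record before it touches disk (encryption at rest)."""
    return cipher.encrypt(plaintext.encode())


def read_record(token: bytes, user: str, role: str) -> str:
    """Decrypt only for authorized roles; count accesses per user."""
    if "read" not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"{user} ({role}) may not read this record")
    access_log[user] += 1
    if access_log[user] > 100:         # crude anomaly threshold
        print(f"ALERT: unusual access volume from {user}")
    return cipher.decrypt(token).decode()


blob = store_record("customer PII")
print(read_record(blob, user="alice", role="analyst"))
```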
Generate AI security buy-in across your organization
Although generative AI is still in its early stages of adoption, it is important to train employees across multiple business lines on how the technology works and the benefits and risks associated with these platforms.
Meanwhile, technical experts should work with departments considering implementing generative AI to ensure uniform policies are in place across the organization to reduce risk.
“Generative AI ultimately requires nuanced usage policies that help manage risk. Different stakeholders across the business, including risk and compliance teams, chief human resources officers, CIOs, CISOs, chief AI officers, data executives, and strategic leaders, need to work together to create and implement AI policies,” said Darktrace’s Carignan. “Each role brings a unique perspective to the problem, and collaboration allows us to safely and reliably realize the benefits of AI while managing and mitigating risk.”
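One way teams can turn such cross-functional agreements into something enforceable is policy as code. The sketch below is illustrative only; the policy fields, tool names, and patterns are assumptions for this example, not a format specified by NIST or Darktrace:

```python
# An illustrative "policy as code" gate for generative AI use. The
# policy fields, tool names, and patterns are assumptions for the
# sketch, not drawn from NIST guidance.
import re

GENAI_POLICY = {
    "allowed_tools": {"approved-internal-llm"},
    "block_patterns": [
        r"\b\d{3}-\d{2}-\d{4}\b",         # US SSN-like strings
        r"(?i)api[_-]?key\s*[:=]\s*\S+",  # embedded credentials
    ],
}


def check_prompt(tool: str, prompt: str) -> tuple[bool, str]:
    """Apply the organization-wide policy before a prompt leaves the network."""
    if tool not in GENAI_POLICY["allowed_tools"]:
        return False, f"tool '{tool}' is not approved"
    for pattern in GENAI_POLICY["block_patterns"]:
        if re.search(pattern, prompt):
            return False, "prompt appears to contain sensitive data"
    return True, "ok"


print(check_prompt("approved-internal-llm", "Summarize Q3 sales trends"))
print(check_prompt("shadow-ai-app", "Draft an email to a vendor"))
```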
Other experts, including Callie Guenther, senior manager of cyber threat research at Critical Start, also pointed out that organizations and their technology teams need clear guidelines for the use of generative AI, ones that maximize productivity gains while protecting sensitive data and maintaining compliance.
This includes developing policies, as well as skills for effective communication and collaboration among CISOs, security teams, and business leaders, to ensure that technology deployments are aligned with business goals and security requirements.
“Strategic investments and collaborations highlight an organization’s commitment to addressing the security concerns of generative AI, as does a commitment to rapid innovation in response to the evolving GenAI landscape,” Guenther told Dice. “Developing technology that can address emerging threats puts companies at the forefront of the data security industry. This approach helps organizations leverage the benefits of GenAI while minimizing risk.”