Although Artificial Intelligence (AI) has been around since the late 1950s, it remained largely out of the public eye. It wasn’t until late 2022, when OpenAI released ChatGPT for public use, that AI captured the public’s attention and renewed interest in the technology.
Bloomberg predicts the AI market will explode from $40 billion in 2022 to $1.3 trillion over the next 10 years. Businesses are compelled to embrace the technological advancements that AI brings, from streamlining operations to improving decision-making processes to gaining a competitive edge.
What might be even more exciting (and scary) is how individuals use AI. People are already using AI in their personal lives. Once comfortable with it, those same people will use it in their professional lives as well, often without the knowledge of their employers. Just take a look at the New York Times article on 35 Ways Real People Are Using A.I. Right Now.
Why Are AI Security Policies Important?
With the benefits of AI come new challenges. Among those challenges are safeguarding the security of sensitive data and maintaining regulatory compliance. With programs like ChatGPT, people are beginning to wonder what role AI plays in cybersecurity, what threats AI poses to their IT infrastructure, and how they can stay ahead of the risk. It is imperative that companies update their security policies to encompass the intricacies of AI. In this article, we will delve into the essential components that should be integrated into security policies, with a focus on aligning these AI security policies with the guidelines set forth by the American Institute of Certified Public Accountants (AICPA).
How to Address AI in the Risk Assessment & Mitigation Process
Before updating and revising security policies, companies should conduct a comprehensive information security risk assessment that addresses potential vulnerabilities and threats related to AI. Best practice is to include a full systems inventory, covering AI systems and services, in that assessment. Starting with a risk assessment, companies can identify, manage, and mitigate the specific risks stemming from the use and/or implementation of AI. The security policies, including the Risk Management Policy, can then be updated to reflect the changes needed to manage AI risk.
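To make the idea concrete, here is a minimal sketch of what an AI-aware risk register might look like in code. The systems, threats, and likelihood-times-impact scoring below are hypothetical illustrations, not a prescribed methodology:

```python
from dataclasses import dataclass

@dataclass
class SystemRisk:
    """One entry in the systems inventory used for the risk assessment."""
    system: str
    category: str          # e.g., "AI service", "internal application"
    threat: str
    likelihood: int        # 1 (rare) to 5 (almost certain)
    impact: int            # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring; real methodologies vary.
        return self.likelihood * self.impact

# Hypothetical inventory entries -- note that AI systems are listed
# alongside traditional systems rather than assessed separately.
inventory = [
    SystemRisk("ChatGPT (employee use)", "AI service",
               "Sensitive data pasted into prompts", likelihood=4, impact=4),
    SystemRisk("Vendor document-analysis AI", "AI service",
               "Unauthorized retention of contract data", likelihood=3, impact=4),
    SystemRisk("Payroll application", "internal application",
               "Credential compromise", likelihood=2, impact=5),
]

# Rank risks so the highest-scoring items drive policy updates first.
for risk in sorted(inventory, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.system}: {risk.threat}")
```

Ranking the inventory this way helps determine which policy updates and mitigations to prioritize.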
What Is the Impact of AI on Data Governance & Privacy?
AI systems collect and analyze immense amounts of data and information. AI is already being used to perform functions such as the following:
- Transfer data between systems.
- Replace lost credit/ATM cards.
- Reconcile financial records.
- Review and analyze documents (e.g., contracts) and data (e.g., credit card and health data) to extract summaries.
- Analyze and summarize data and documents to provide insight and analysis.
If not properly protected, AI systems could be vulnerable to breaches, misuse, and unauthorized access to data, or lead to privacy violations. Information security policies should be reviewed and updated to reflect guidelines for data collection, storage, transfer, and disposal.
The impact of AI systems and services should also be considered when reviewing and updating a company’s Privacy and Terms-of-Use policies. Evolving regulations and legal frameworks may not keep pace with AI advancements. Companies using AI must navigate compliance issues, which can be complex and subject to change.
How to Address the Transparency & Explainability of AI
A Company’s code of conduct is an essential tool for management to communicate its expectations of how employees behave in the workplace. Codes of conduct vary between organizations but often share common themes, including:
- An emphasis on compliance with laws and regulations, ethical standards, professional conduct, and behavior.
- The handling of confidential and sensitive data.
- The importance of data privacy and security.
AI systems are often treated as “black boxes” whose decision-making processes cannot be easily explained or understood. Just as with people, organizations should have similar expectations of the tools used in their operations. AI security policies should establish standards requiring AI systems to be transparent and exhibit integrity in their operations.
What Are the Ethical Considerations of AI?
As previously discussed, a Company’s code of conduct is fundamental in establishing management’s expectations of employee behavior. AI systems are known for perpetuating biases and presenting biased (and even incorrect) responses, leading to ethical concerns. Furthermore, AI can be used for immoral purposes. Codes of conduct and AI cybersecurity policies should address these issues and emphasize the importance of ethical AI development and use.
Do Your Vendors Use AI?
Many companies rely on third parties in their operations. Third parties may use AI in their own operations or may provide AI services to fulfill technological needs. These vendors introduce a layer of risk that must be managed. Vendor management policies should be assessed and revised to address the selection of vendors as well as the ongoing monitoring of such vendors. Like any other risk, vendor risk management should be included in the risk assessment and mitigation process.
Are Training & Awareness Programs Important for AI?
Employees play a pivotal role in protecting systems and ensuring security. AI is growing more sophisticated, with deepfakes and voice cloning making AI-driven attacks more difficult to spot. Personnel should undergo training, such as security awareness training, and be made aware of AI-enhanced threats so they can keep pace and are empowered to respond to them.
What is the Impact of AI on Incident Response?
The SEC recently adopted rules on cybersecurity risk management and incident disclosure that require public companies to report material cyber incidents within four business days of determining that an incident is material. With this new rule, how a company responds to security incidents is paramount. Incident response planning should include updating incident response policies to reflect the challenges and threats associated with AI.
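As a simple illustration of that timeline, the sketch below counts four business days forward from the date a company determines an incident is material. It skips weekends only; a real calculation would also need to account for market holidays and counsel’s interpretation of the rule:

```python
from datetime import date, timedelta

def disclosure_deadline(determination: date, business_days: int = 4) -> date:
    """Count forward the given number of business days, skipping weekends."""
    current = determination
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return current

# E.g., a materiality determination made on a Thursday:
print(disclosure_deadline(date(2024, 3, 7)))  # -> 2024-03-13 (Wednesday)
```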
Even with robust preventive measures, security breaches can still occur. With the complexities of security breaches, including AI threats, Companies are looking to AI tools to respond quickly and decisively. Incident response policies and management practices should be updated to reflect the use of AI tools within the security incident response process.
What Are the Audit & Compliance Considerations of AI?
AI will be used whether Companies allow it or not. Employees will use AI to write emails, review and summarize contracts, or analyze data. Companies should acknowledge this reality and accept that AI will be used within their environment. Audit procedures should include steps to look for the use of AI. Where AI is found, auditors should determine whether its use is in line with the Company’s code of conduct and security policies.
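As one illustrative procedure, an auditor might scan outbound proxy or DNS logs for traffic to known AI services. The log format and domain list below are hypothetical examples and would need to be tailored to the Company’s environment:

```python
import csv

# Illustrative list of domains associated with public AI services; a real
# audit would maintain and update this list as part of its procedures.
AI_SERVICE_DOMAINS = {"chat.openai.com", "api.openai.com",
                      "bard.google.com", "claude.ai"}

def find_ai_usage(proxy_log_path: str) -> list[dict]:
    """Return proxy-log rows whose destination matches a known AI domain.

    Assumes a CSV log with 'timestamp', 'user', and 'destination' columns;
    adjust the column names to your proxy's export format.
    """
    hits = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("destination", "").lower() in AI_SERVICE_DOMAINS:
                hits.append(row)
    return hits

# Matched rows can then be compared against the Company's code of conduct
# and acceptable-use policy to assess whether the use was authorized.
```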
AI security policies should be updated to provide guidance on audit frequency and scope of AI-related audits.
Conclusion
As Companies embrace the transformative potential of AI, it is important that security remains top of mind. Security policies are foundational to a Company’s attitude and stance on security, and the importance of robust AI security policies cannot be overstated. Best practice suggests that security policies be updated as changes occur and reviewed at least annually.
If your Company’s security policies have not been reviewed lately, take the time now to update them and include the threats and challenges AI brings to your environment. Policies should be comprehensive and aligned with industry standards to effectively address the unique challenges introduced by AI. With the proper implementation of AI security policies, companies can foster a culture of security, transparency, and compliance in their AI journey.
If you would like to learn more or if you are interested in engaging our services for your upcoming audit, please feel free to contact me and the team of audit professionals here at Linford & Co.
Ben Burkett is an experienced auditor for Linford & Co. Starting his career at KPMG in 2002, Ben has extensive experience in the business of Information Technology (IT). As an auditor, he drove IT risk management and compliance efforts. As the head of an IT Project Management Office and a Technology Business Management (TBM) function, he sought to drive and maximize the value of IT.