With the rise of organizations providing artificial intelligence (AI) or machine learning (ML) tools and services, one has to wonder about the risks associated with those services and, at the very least, the security of the data used for and created by them. Data considerations include the makeup of the data, such as personally identifiable information (PII), medical data, and confidential data. Of critical concern, too, is the interpretation of the results of the analytics run on the data.
Assumptions made during the coding and learning process can produce results that are expected or unexpected, interpreted correctly or misinterpreted, complete or incomplete. If the user of the results does not correctly understand the analytic process, improper and inappropriate interpretations of the analytic results can occur. In addition, if these assumptions and results are incorrect, the AI and ML models can compound the errors, continually producing incorrect results. Don’t forget, too, that the results of AI and ML activities can be put to improper and inappropriate uses.
Concerns over service organizations providing AI and ML services bring to mind the AICPA Trust Services SOC 2 audit and what such an audit should consider. When planning a SOC 2 audit of such a service organization, the auditor must take into account the specific offerings the service organization provides.
The remainder of this article discusses three AI and ML scenarios that a service organization could provide. Please note that these are just three examples of potential scenarios.
The Service Organization Only Provides the Tools
This is the simplest of the service offering combinations and the one with the least risk. In this case, the service organization makes available the tools and environment in which to perform the analytics, but the data used for the analytics, and the data created as a result, are not stored within the service organization’s environment. Only the analytics are performed in the environment, using the information technology resources it makes available.
AI and ML data processing is often resource intensive. Transitioning that processing to an analytics service organization may yield cost and resource savings for user organizations.
When deciding which AICPA trust services criteria to cover in the SOC 2 audit in addition to security, one might consider availability, since the service organization is providing the environment in which the AI and ML analytic processing is performed.
The Service Organization Provides the Tools & Data Resides in the Service Organization’s Environment
In this case, the tools and environment are made available in which to perform the analytics, and the user organizations’ data is allowed to reside within the service organization’s environment. The risk is higher because concerns over the data now need to be considered as part of the SOC 2 procedures. Questions to consider include:
- What type of raw data is allowed to reside in the service organization’s environment?
- What type of analytic results are produced and maintained in the service organization’s environment (e.g., individual records or aggregated results)?
- How is the data secured and who has access to the data (raw and analytic results)?
- What are the retention requirements of the data and how is it destroyed when no longer needed?
The SOC 2 audit testing would include security over the environment and the data, including segregation of client data and controls over access to client data. The service auditor will need to consider all the risks associated with data processed and maintained in the service organization’s environment, including the types of data involved, such as PII, protected health information (PHI), or other forms of confidential data. Unless controls are in place that prevent the ingestion and storage of prohibited types of data, the service auditor must assume that such data resides in the service organization’s environment. In addition to the AICPA’s security trust services criteria, consideration should also be given to the availability, confidentiality, and privacy trust services criteria.
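As one illustration of the kind of preventive control mentioned above, a service organization might screen inbound records for prohibited data types before they are ever stored. The sketch below is a minimal, hypothetical example (the pattern list and the `screen_record`/`ingest` names are assumptions, not features of any particular service); production controls would rely on far more robust detection.

```python
import re

# Illustrative sketch of a preventive ingestion control: records
# containing prohibited data types (here, SSN or email patterns)
# are rejected before they reach storage.
PROHIBITED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_record(record: str) -> list:
    """Return the names of prohibited data types detected in a record."""
    return [name for name, pat in PROHIBITED_PATTERNS.items()
            if pat.search(record)]

def ingest(record: str, store: list) -> bool:
    """Store the record only if it passes the screen; reject otherwise."""
    if screen_record(record):
        return False  # rejected: prohibited data detected
    store.append(record)
    return True
```

In a real environment this screen would sit in front of every ingestion path, and rejections would be logged so the auditor can test that the control operates consistently.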
The Service Organization Provides the Tools & Base Analytic Code or Query Development
In this case, in addition to providing the environment and the analytic tools, the service organization also provides base analytic code or query development for building and maintaining the AI and ML models. Risks now include the processes around the development and maintenance of the analytic code used by the AI and ML engines. Questions to consider include:
- What controls have been implemented to manage the development, testing, and maintenance of analytic code?
- Is there any type of oversight of the AI and ML engines to see that the learning of these engines is appropriate and in alignment with the original objective of the project or desired output?
- How is access to the code controlled so that unauthorized changes do not occur?
- What controls are in place to prevent an authorized person from making unapproved changes to the analytic code and models?
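One common control behind the last two questions is segregation of duties in change management: no change to analytic code deploys without approval from someone other than its author. The sketch below is a minimal, hypothetical illustration of that check (the `ChangeRequest` fields and `may_deploy` name are assumptions for illustration only).

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative segregation-of-duties check: a change to analytic code
# may deploy only when it has been approved, and the approver is not
# the same person who authored the change.
@dataclass
class ChangeRequest:
    change_id: str
    author: str
    approver: Optional[str] = None  # None until someone reviews the change

def may_deploy(change: ChangeRequest) -> bool:
    """Allow deployment only for changes approved by a second person."""
    return change.approver is not None and change.approver != change.author
```

The auditor could test such a control by sampling change records and verifying that each deployed change has a distinct, documented approver.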
When base analytic code or query development is provided, the AICPA trust services criteria of processing integrity, along with security and availability, may warrant coverage in the SOC 2 audit.
Should data be stored within the service organization’s environment, the same concerns previously discussed would apply as well.
AICPA Code of Professional Conduct
As with any SOC or attest engagement the service auditor performs, adherence to the AICPA Code of Professional Conduct is required. This means performing the engagement in alignment with the principles defined in the Code of Professional Conduct:
- Responsibilities Principle
- Public Interest Principle
- Integrity Principle
- Objectivity and Independence Principle
- Due Care Principle
- Scope and Nature of Services Principle
Keeping these critical principles top of mind while performing the SOC 2 audit, and exercising due diligence in the planning, fieldwork, and reporting stages, will help produce an audit that results in a fair and appropriate assessment of the defined audit scope.
In conclusion, the risks and considerations in performing a SOC 2 audit of a service organization that provides AI or ML services are determined by the services provided and by the usage and storage of the raw data and the analytic results. The scenarios described in this article are only a few examples of the combinations of services a service organization could offer.
The service auditor, as part of engagement acceptance, needs to consider what level of risk they and their organization are willing to take on based on the level of services provided by the service organization. The higher the level of risk, the more diligent and expansive the audit procedures performed by the service auditor will most likely be. Certain levels of risk may result in the service auditor declining to be engaged to perform a SOC 2 or other engagement over the service offering.
Lois started with Linford & Co., LLP in 2020. She began her career in 1990 and has spent her career working in public accounting at Ernst & Young and in the industry focusing on SOC 1 and SOC 2 and other audit activities, ethics & compliance, governance, and privacy. At Linford, Lois specializes in SOC 1 and SOC 2 audits. Lois’ goal is to collaboratively serve her clients to provide a valuable and accurate product that meets the needs of her clients and their customers all while adhering to professional standards.