Globally, the advent of AI systems and technologies is driving massive innovation. For example:
- The AI market in the U.S. was valued at $50.16 billion in 2024 and is projected to grow at a compound annual growth rate (CAGR) of 28.30%, reaching $223.70 billion by 2030.
- In 2023, investments in generative AI surged to $25.2 billion, nearly nine times the amount invested in 2022. This accounted for over a quarter of all AI-related private investments that year.
- AI is expected to contribute a net increase of 21% to the United States’ GDP by 2030, underscoring its significant role in economic growth.
With this influx of investment, research, and development come new challenges, new threats, and new attack vectors. To meet the need to secure the AI systems of the future, HITRUST has released the AI Security Assessment and Certification, conducted in alignment with the robust HITRUST assurance program that has served the needs of the industry since 2007.
Why is an AI Security Assessment & Certification Needed?
Conducting regular security audits of AI development and deployment systems is essential to validate data integrity, compliance, and trustworthiness while mitigating risks of exploitation. AI systems face distinct integrity and confidentiality challenges that demand equally distinct approaches to assessment and certification.
Common threats addressed through the HITRUST AI security assessment include:
- Availability attacks, such as denial of service attacks.
- Poisoning attacks, such as data or model poisoning.
- Supply chain attacks, including compromised third-party data sets, models, or code artifacts.
- Input-based attacks, including prompt injection as well as several of the attack types above.
- Advanced attacks and risks, including confabulation, sensitive information disclosure, excessive agency, and copyright concerns.
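To make the input-based category above concrete, here is a minimal, purely illustrative sketch of one common mitigation: screening user input for known prompt-injection phrasings before it reaches a model. The patterns and the `screen_prompt` function are hypothetical examples, not part of any HITRUST requirement; real deployments layer many controls (output filtering, least-privilege tool access, policy models) rather than relying on pattern matching alone.

```python
import re

# Illustrative patterns only -- a real control set would be far broader and
# maintained against current threat intelligence.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
    re.compile(r"reveal .*system prompt", re.IGNORECASE),
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input looks safe to forward to the model,
    False if it matches a known injection pattern and should be
    blocked or escalated for review."""
    return not any(p.search(user_input) for p in INJECTION_PATTERNS)
```

A guard like this would sit in front of the model call, with blocked inputs logged for audit, which is the kind of evidence an assessment of input-handling controls typically looks for.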
Development of the HITRUST AI Security Certification Assessment
HITRUST created this unique assurance solution by working closely with top AI experts and industry groups to understand AI risks and find ways to measure and reduce them. HITRUST considered AI-specific security threats from nearly 20 trusted sources, such as ISO, NIST, OWASP, and commercial AI security guides. Next, HITRUST compared these insights with its own HITRUST CSF framework and Cyber Threat Adaptive engine to develop a clear, detailed set of controls to address these risks. Finally, HITRUST opened the planned assessment requirements to public comment and feedback from the industry as a whole. Linford & Company engaged in this process and provided specific, actionable advice on how to improve the quality and integrity of the assessment process for the specified requirements.
How Can My Organization Complete the HITRUST AI Security Assessment?
The AI Security Assessment and Certification is designed as an add-on to the existing HITRUST assessment portfolio, which includes the e1, i1, and r2 validated assessments. As the foundation of the assessment, the organization’s underlying information security program is assessed at the level of assurance selected by the entity pursuing assessment and certification. In addition, the organization’s specific AI technologies are assessed for compliance with the AI security requirements specified by HITRUST.
Following this approach, organizations can complete a combined e1 and AI Security Validated Assessment and Certification engagement comprising fewer than 90 specific requirements. If added to an i1 assessment, approximately 226 requirements would be included. r2 assessments that include the AI security component begin at around 300 requirements.
Who Will Require HITRUST AI Assessment & Certification?
The HITRUST AI Security Assessment is intended to meet the needs of providers of AI solutions, and focuses on the security of the overall AI system, not just the usage of AI systems. Organizations building AI models or integrating AI functionality into existing products should seek to demonstrate a high level of assurance for the AI solutions being developed. The assessment and certification addresses AI security practices needed for all AI/ML deployments and contains additional practices specific to generative AI. Per ISO/IEC 22989:2022, there are two main types of AI providers to consider:
- AI platform providers that provide platforms enabling other organizations to deliver AI-enabled products.
- AI product providers that provide AI-enabled products (e.g., AI applications) directly usable by end-users/end-customers.
Prioritizing Secure AI Systems
A proactive approach to security and a culture of security are cornerstones of responsible AI development and deployment. Based on industry-driven authoritative sources and guidance, HITRUST has provided a mechanism for organizations to demonstrate a high level of commitment to securing the future of AI systems and AI-enabled solutions.
As a HITRUST-authorized external assessor organization and member of the HITRUST external assessor advisory council, Linford & Company is prepared to deliver AI security assessments through its experienced team of HITRUST assessors. We also offer comprehensive HITRUST Assessment & Certification Services to help organizations strengthen their AI security and compliance. Contact us today to learn how we can guide you through the HITRUST certification process.
Richard Rieben is a Partner and HITRUST practice lead at Linford & Co., where he leads audits and assessments covering various frameworks including HITRUST, SOC, CMMC, and NIST. With over 20 years of experience in IT and cybersecurity and various certifications including PMP, CISSP, CCSFP, GSNA, and CASP+, Richard is skilled in helping growing organizations achieve their information security and compliance goals. He holds a Bachelor of Science in Business Management and an MBA from Western Governors University.