A recent report from SecurityScorecard suggests that third-party attack vectors continue to be a major security concern. Third-party attacks are reportedly responsible for 29 percent of breaches, and as many as 98 percent of organizations have links with businesses that have experienced breaches.

These figures suggest that despite the business benefits, outsourcing can still carry significant risks. Consider, for instance, a scenario where a company outsources its customer service operations to a third-party provider. While this arrangement may streamline operations, it also exposes the company to risks if the third party mishandles sensitive customer data or fails to maintain adequate security measures. These risks can have a financial, operational, and reputational impact, as well as legal consequences if compliance measures are not adopted.

Third-party risk management is more important than ever, never more so than when it comes to emerging technologies like artificial intelligence (AI). AI refers to the capacity of machines or computers to emulate human cognitive functions such as learning, reasoning, and problem-solving. The technology works by analyzing vast datasets, identifying patterns, and making decisions based on that analysis. It’s an area with seemingly huge potential, but the unique risks associated with AI technologies mean that businesses must implement robust third-party risk management practices and ensure their AI providers adhere to stringent security and compliance standards.

This article provides an initial overview of best practices for managing third-party risk in AI.


Third-Party Risk Management and AI Security: Essential Perspectives

In simple terms, third-party risk management involves identifying, assessing, and mitigating the risks associated with a company’s suppliers, subcontractors, service providers, or other external partners. If a third party fails to implement adequate security measures or adhere to relevant standards and regulations, its customers can be exposed to significant risks, and you don’t have to look far to see the scale a data breach at a trusted third-party provider can reach. Just this year, France’s data protection authority CNIL said that recent data breaches at healthcare payment service providers Viamedis and Almerys affected upwards of 33 million people in the country, or around half its population.

Therefore, when managing third-party risks, it is crucial to ask two key questions: does the third party have access to our infrastructure or sensitive data, and does our company depend on its services?

Every third-party solution carries potential risks, but there are several that are more specific to AI systems. Much has been written about how AI systems can “hallucinate,” confidently producing incorrect or misleading results. Their lack of transparency can be a major risk; you might not know what data an AI system has been trained on or how its algorithm works. Attackers can even exploit this lack of transparency with a data poisoning attack, which attempts to compromise an AI’s training dataset and hence its eventual outputs. In a professional context, getting inaccurate results from an AI system can be serious, such as blocking the wrong financial transactions or offering incorrect medical diagnoses. This underscores the importance of carefully choosing the solution, especially in cases when you can’t manually verify the answer.
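
To make the data poisoning risk concrete, here is a minimal sketch in Python (the model, dataset, and 30 percent poisoning rate are all illustrative assumptions, using scikit-learn and synthetic data rather than any real system) showing how flipping a fraction of training labels can quietly degrade a model’s accuracy:

```python
# A minimal sketch of a label-flipping data poisoning attack on synthetic
# data; the model, dataset, and 30% poisoning rate are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set by flipping 30% of its labels.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned.predict(X_test)))
```

Real poisoning attacks are usually far subtler than blunt label flipping, which is why transparency about training data and supplier-side integrity controls matter so much.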

These AI-specific concerns come in addition to the kinds of risks posed by third-party solutions more generally. Third-party software may contain undetected flaws, hidden backdoors, or malicious components, thus opening the door to potential cyberattacks. Third-party hardware can be compromised by malicious firmware.

The risks associated with third-party involvement in the field of AI are numerous. Compromised system security, violated data confidentiality, and concerns about data and model quality and integrity are among the most significant. These risks can lead to regulatory violations, such as GDPR breaches, and in turn damage a company’s reputation.


Understanding Actors, Categories, and Deployment Models in Artificial Intelligence: Keys to Managing Third-Party Risks

Multiple actors play a part in the delivery of any third-party service, and artificial intelligence is no different. There’s the manufacturer, the supplier, the distributor, the importer, and the deployer. A clear understanding of each one’s responsibilities is important for conducting an adequate risk assessment.

The manufacturer, as the creator of the solution, is responsible for producing the documentation needed to comply with regional and local standards and regulations. The supplier, who may also be the manufacturer, ensures the availability of the solution by guaranteeing its compliance and the presence of appropriate documentation. The distributor acts as a link between the manufacturer or supplier and the consumer and must verify that the supplier has taken all necessary measures to secure the solution. The importer brings solutions from foreign manufacturers to the local market and must confirm their conformity before doing so. As for the deployer, they handle installation, configuration, and maintenance.

In addition to understanding the various actors, it is important to grasp the different types of AI solutions available. With the advent of AI, companies are increasingly exploring how the technology could be used to automate tasks, generate content, or anticipate customer needs. Among the categories of AI are chatbots, generative AI, virtual assistants, robotic process automation, computer vision systems, and many more. Because of this wide range of capabilities and use cases, the risks associated with AI systems can vary massively. The EU’s AI Act, for example, distinguishes between “unacceptable risk,” “high-risk,” “limited risk,” and “minimal risk” AI, and regulates each category differently. Therefore, it is essential to clearly define the intended use of AI and the expected outcomes in order to understand the associated requirements.

It is also important to consider the available deployment models. AI as a Service (AIaaS) offers capabilities in the cloud, allowing companies to access services via the internet without needing technical expertise or dedicated infrastructure. Off-the-shelf solutions come pre-configured and ready for direct deployment. These contrast with custom AI, where models are developed specifically to meet a company’s needs, and open-source AI, where developers can adapt a model to their company’s specific requirements.

Understanding the various actors, AI categories, and deployment models is essential for effectively managing third-party risks.


Risk Management Strategies for AI Solutions with Third-Party Involvement: Evaluation and Compliance

We have defined the risks, identified the actors, and categorized the different models. Let’s now move on to managing third-party risks in AI. It is easy to get lost in this complex landscape. For acquiring companies, however, the focus should be on ensuring compliance with requirements and the availability of comprehensive documentation, which offers much-needed transparency.

Before selecting any solution, it is essential to evaluate the supplier. The supplier must adhere to the company’s security requirements and compliance standards, including GDPR, the EU AI Act, and fundamental human rights. Compliance can be verified through certifications, CE marking, or assessments conducted by the purchaser or independent third parties.

The supplier must also provide comprehensive technical documentation, including details on algorithms, the design process, the tests conducted, and the security measures in place. These measures should encompass data protection, encryption, access controls based on the principle of least privilege (PoLP), regular audits and monitoring to detect anomalies, intrusion tests to identify vulnerabilities, and regular maintenance and updates. Suppliers should also ensure transparency regarding the data used, both in terms of type and sources, to prevent the use of unauthorized data. While large language models can often be a black box, the supplier must ensure that decision making by the AI solution is as transparent as possible.
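
As a small illustration of the PoLP point, here is a minimal sketch in Python of role-based permission checks; the roles, permissions, and user names are hypothetical, not drawn from any particular product:

```python
# A minimal sketch of least-privilege access checks; roles, permissions,
# and users here are hypothetical.
from dataclasses import dataclass

# Each role is granted only the permissions it strictly needs.
ROLE_PERMISSIONS = {
    "auditor": {"read:logs"},
    "analyst": {"read:predictions"},
    "ml_engineer": {"read:predictions", "write:model"},
}

@dataclass
class User:
    name: str
    role: str

def authorize(user: User, permission: str) -> None:
    """Raise if the user's role was not explicitly granted the permission."""
    if permission not in ROLE_PERMISSIONS.get(user.role, set()):
        raise PermissionError(f"{user.name} ({user.role}) lacks {permission}")

authorize(User("alice", "ml_engineer"), "write:model")  # allowed, no error
try:
    authorize(User("bob", "analyst"), "write:model")
except PermissionError as err:
    print("denied:", err)
```

The design choice worth noting is the default-deny posture: a role receives nothing unless a permission has been explicitly granted to it.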

Depending on the type of solution, it may be beneficial to verify the credibility and expertise of the supplier in the relevant sector.

For “high-risk” AI solutions, a thorough risk analysis is essential. This analysis, initially conducted by the supplier, should identify potential risks during design and development and propose mitigation measures. The purchaser, in turn, must ensure that risks related to the integration and use of the solution are clearly identified, so that its legal and ethical implications are fully understood. All stakeholders in the solution should be included in this risk management process. Close collaboration between the supplier and the purchaser is essential throughout, as risk management methods may differ between developers and users of an AI system.

It’s important to define contracts with the supplier carefully to maximize protection against risks. The contract should explicitly define the responsibilities of each party, specifying who is responsible for maintenance, incident management, measures to ensure business continuity, and data management, including deletion. It should be drafted precisely to safeguard reputation and clarify responsibilities, while remaining flexible enough to accommodate rapid technological and regulatory change, ensuring effective governance and risk management. These controls could also align with the recommendations of NIST AI RMF 1.0, including GOVERN 6.1 and 6.2 and MANAGE 3.1 and 3.2.


Business Use Case: A Banking AI Solution to Help You Evaluate Solvency

That’s the theory for how organizations should manage the risks of using third-party AI solutions, but what does this look like in practice?

Let’s imagine that you are a financial institution considering the acquisition of an artificial intelligence solution named SolvAI, designed to assist you in evaluating the solvency of your current and potential clients. Given the sensitive nature of the business and the financial data the solution handles, it’s important to stay compliant with current and forthcoming regulations in your market. While GDPR serves as a foundation, given the critical nature of your infrastructure, the EU AI Act, NIS2, and DORA must also be considered.

As a purchaser, you must ensure the following:

  • Ensure that the supplier can assist you in remaining compliant with regulations. DORA, for example, explicitly addresses third-party risk management, meaning that the solution provider must itself adhere to compliance standards.

  • Ensure compliance by conducting a thorough risk analysis of the supplier: request a comprehensive list of the functions and services provided, as well as the locations of service delivery and data storage, and evaluate the potential risks the supplier may pose to your organization.

Regulatory compliance aside, there is significant work to be done on risk analysis and management. The SolvAI solution is classified as “high-risk” under the EU AI Act, which carries heightened requirements.

You must establish a risk management process for the acquisition and integration of the AI solution:

  • Ensure that SolvAI aligns with your ultimate goals and establish governance for its use.
  • Integrate risk management into the core operations related to SolvAI. This may require modifications to existing internal processes or the creation of new processes to better manage the risks associated with AI.
  • Involve the supplier in these processes, consulting them for the identification of potential risks.

You should also continuously assess the performance and compliance of the supplier (a minimal monitoring sketch follows this list):

  • Regularly monitor its activities.
  • Collect data on past incidents and system performance.
  • Analyze compliance reports and conduct periodic audits.
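
As one hedged illustration of what such monitoring could look like in practice, here is a minimal sketch in Python that flags a supplier for review based on hypothetical incident records; the fields, severities, and threshold are assumptions to be replaced by whatever your contract actually defines:

```python
# A minimal sketch of continuous supplier monitoring based on hypothetical
# incident records; severities and the threshold are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class Incident:
    day: date
    severity: str  # "low", "medium", or "high"

# Hypothetical incident history collected from the supplier.
incidents = [
    Incident(date(2024, 1, 12), "low"),
    Incident(date(2024, 2, 3), "high"),
    Incident(date(2024, 2, 20), "medium"),
]

# Flag the supplier for review if any high-severity incident occurred or
# the total count exceeds a tolerance defined in the contract.
MAX_INCIDENTS = 2
needs_review = (
    any(i.severity == "high" for i in incidents) or len(incidents) > MAX_INCIDENTS
)
print("supplier flagged for review:", needs_review)
```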

It’s also essential to pay attention to ethical risks, particularly concerning the use of customer data and transparency in the supplier’s algorithm training process. Data privacy and security must be maintained throughout the lifecycle of SolvAI, from collection to destruction, to protect customer data and comply with privacy regulations.

  • Verify how the algorithm makes its decisions and what data those decisions are based on, ensuring the protection of customer data and managing supplier access.
  • Ensure the quality of the data used and be vigilant about applicable data rights when training the algorithm. For example, you cannot transfer customer data to the supplier without explicit consent, nor without notifying customers of the ongoing processing and specifying its purpose. Ideally, provide pseudonymized data or request the use of dummy data when building the algorithm; see the sketch after this list.
  • Once the algorithm is in place, it is also essential to limit the supplier’s access to the data and to the solution itself in order to prevent corruption risks.
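
Here is the minimal pseudonymization sketch in Python promised above; the record fields are hypothetical, and the keyed-hash approach is one common option rather than the only one:

```python
# A minimal sketch of pseudonymizing customer identifiers before sharing
# training data with a supplier; record fields are hypothetical.
import hashlib
import hmac
import secrets

# The key stays with the data controller so the supplier cannot reverse
# the pseudonyms; it is generated inline here purely for the example.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, hard-to-reverse token."""
    return hmac.new(PSEUDONYM_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "C-1042", "name": "Jane Doe", "income": 54000}
shared = {
    "customer_id": pseudonymize(record["customer_id"]),
    # Direct identifiers such as the name are dropped entirely.
    "income": record["income"],
}
print(shared)
```

Because the key never leaves the data controller, the supplier can link records consistently without being able to recover the original identifiers.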

You should implement all necessary security measures to mitigate the risk of attack. This includes implementing multi-factor authentication (MFA), configuring firewalls, encrypting data, and enforcing strict access management based on need-to-know and PoLP principles.
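
On the encryption point specifically, here is a minimal sketch in Python using the cryptography library’s Fernet recipe; the record is made up, and the inline key generation is a simplification, since production keys belong in a key management service:

```python
# A minimal sketch of encrypting a sensitive record at rest with the
# cryptography library's Fernet recipe; in production the key would come
# from a key management service rather than being generated inline.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustrative only; real keys live in a KMS
fernet = Fernet(key)

record = b'{"customer_id": "C-1042", "solvency_score": 0.82}'
token = fernet.encrypt(record)    # ciphertext safe to store
restored = fernet.decrypt(token)  # requires the same key

assert restored == record
```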

Finally, you must guard against service disruption risks, clearly define roles and responsibilities regarding SolvAI maintenance and business continuity management, and actively involve the supplier in these processes.

Also, make sure to assess the supplier’s cybersecurity posture, manage privileged access, supervise subcontractors, provide security training to everyone involved in or affected by the solution, establish a security incident management plan, and include clear contractual requirements for data security and regulatory compliance.

Third-Party Risk Management in an AI World: Best Practices

Even though the topics addressed in this article are complex and each would merit in-depth analysis of its own, there are five essential takeaways for any business hoping to manage third-party AI suppliers:

  1. Thorough Supplier Assessment: Before engaging an AI solution provider, conduct a comprehensive assessment to verify their compliance with security and compliance standards, as well as their reputation for transparency and expertise in the AI field.
  2. Development of Precise Contracts: Define detailed contracts that clearly specify each party’s responsibilities, including security measures and the management of incidents, business continuity, and data. Ensure these contracts are flexible enough to adapt to technological and regulatory changes.
  3. Continuous Risk Management: Adopt a continuous risk management approach throughout the lifecycle of the AI solution. This involves ongoing monitoring of the supplier’s activities, adaptation to emerging risks, and regular updates to contracts and security measures.
  4. Close Collaboration with the Supplier: Establish a close collaborative relationship with the supplier, fostering open and transparent communication. Actively involve yourself in risk management and ensure that the supplier’s risk management methods align with your expectations and needs.
  5. Transparency and Comprehensive Documentation: Require full transparency from the supplier regarding technical documentation, data used, design and testing processes, and security measures implemented. Ensure all relevant information is properly documented to ensure effective risk management.
