For the purposes of an acquirer conducting due diligence on a target company, several implications may arise where the target is using AI software. Trends suggest that, much as Queen and Oasis once placed synthesizer notices on their album sleeves, companies may in future be expected to include a clause addressing the use of AI in the share purchase agreement, precisely so that an acquirer can conduct due diligence on the target.
-
Implications of Using AI:
1.1 Reliability and data protection
Generative AI software is built on large language models (LLMs), which are trained on vast bodies of data and serve as the platform through which a generative AI tool can ascertain, compile and present information to the user. Generative AI tools can present this data in under a minute, allowing far more efficient sourcing of information. That said, EY have raised the concern that when the LLMs underlying generative AI software meet a gap in their information, their default response is to invent facts.[1]
Often referred to as “hallucinations”, this first became widely apparent when Google introduced its chatbot, which, in the words of the New York Times, ‘spewed nonsense about the James Webb telescope’. In the legal sphere, a lawyer who used a chatbot to prepare an application in an airline lawsuit submitted as many as six fake court precedents, invented by the chatbot, to a Manhattan federal judge. Researchers including Dr Simon Hughes of Vectara have found that chatbots may be prone to these hallucinations more often than many realise, with rates as high as 27%.[2] This is clearly less than ideal for an acquirer carrying out due diligence on a target company.
1.2 Accessibility of data
Data in virtual data rooms (VDRs) is usually well safeguarded, and the sell side is often unwilling to have its confidential information used for the purposes of AI training. Furthermore, the VDR provider is generally mandated by the sell side. As such, the development and training of AI for M&A due diligence may be limited in live commercial transactions. Areas such as medical records or employee data will naturally be off limits to AI, given that they are sensitive materials.
-
Artificial Warranties
Given the implications of using AI in corporate transactions, deals involving this software can give rise to risks, particularly around the reliability of generative AI. As such, buyers may ask for certain protections in order to mitigate potential risks during transactions. It is recommended that buyers conduct thorough due diligence on the AI technology and its limitations, as well as on the target company’s record of using the AI technology in research and development and its financial status. Doing so in the due diligence phase will identify any areas where the deal terms may need to be adjusted accordingly.
2.1 ‘No Synthesizers’
Queen famously placed a ‘no synthesizers’ note on the sleeves of their second to fifth albums, the rationale being that they wanted to highlight the skill of the recording technicians and the talent of Brian May. However, for their 1980 album ‘The Game’, they broke the rule, reflecting changing trends in popular music at the time and allowing them to stay current in an evolving industry. Similarly, companies may now opt to include a clause stating whether they have used AI software, given the risks identified at the due diligence stage. It is recommended that this be addressed in the share purchase agreement.
-
Types of Warranties
To ensure the AI software delivers consistent and accurate performance, defined parameters should be included in the share purchase agreement, outlining plans for monitoring and for human governance of higher-risk outputs.
3.1 Risk and Technical support
It is also recommended that technical support warranties be included, covering guarantees of regular software updates and repairs, along with technical support in the event of malfunctions or breakdowns of the software. This is essential to maintaining the function and output of AI systems, and it provides peace of mind to acquiring companies.
Risk warranties are recommended given the need to identify and mitigate potential risk. These warranties should include impact assessments, steps to rectify identified risks, and assurances against reputational damage arising from the use of AI. Their scope can be flexed according to how heavily the AI software was relied upon in carrying out functions, with the level of human involvement playing a key role.
3.2 Non-Discrimination clauses
There should be transparency and disclosure warranties providing insight into how the AI system arrives at its decisions, together with non-discrimination measures to ensure impartiality. As bias may be introduced when an AI tool is trained, obtaining information about the data used to train and validate AI models can help identify whether the target company considered bias issues at the input stage, and what steps were taken to address them in the output. Unchecked bias can have serious adverse effects, most notably when Amazon’s automated recruitment algorithm discriminated against women.[3] It is therefore a necessity for companies to include steps to mitigate this risk, as well as warranties covering the use of AI, as it can pose a grave reputational threat.
3.3 Intellectual property and privacy
It is also recommended that a company include an intellectual property clause to protect items such as trademarks or licences for IP rights. This may include third-party non-infringement clauses, disclosure of threatened claims relating to intellectual property, and a procedure for breach of intellectual property rights, all set out in the share purchase agreement.
This goes hand in hand with warranties covering data protection, privacy and governance, which are critical given the reliance of AI on data mining. These warranties should cover strong data protection measures as well as compliance with privacy laws. Furthermore, where a target company uses AI, sellers should likely include a statement addressing ethical considerations in the collection, storage and processing of data.
-
The AI Act 2024 and its impact on M&A transactions
4.1 Prohibited AI systems
The final draft of the AI Act 2024 was recently published and circulated among member states, with a view to it being voted on by COREPER on 2 February. Beyond the question of access to data, the final text outlines practices of AI systems which are prohibited outright. One such practice is the use of AI systems employing subliminal techniques, or practices which could be considered manipulative or deceptive, to distort behaviour or decision-making.
This provision arguably strengthens the credibility of a target company using AI, as it seeks to address issues such as the hallucinations that may be encountered when using a generative model. These are problems which could easily be regarded as deceptive behaviour by an AI programme, and can thus be sanctioned by way of a fine. If implemented, this provision arguably improves the outlook both for a target company using AI and for a buyer using it in the due diligence stage of the transaction.
4.2 High Risk Systems
The AI Act also draws a distinction between high- and low-risk systems. A system will not be considered high-risk if it does not ‘pose a significant harm to health, safety…. (or by) materially influencing the outcome of the decision making’, and where the AI system is not intended to replace or influence the original human assessment without human review. This ultimately reinforces the use of AI by a target company, helping to ensure that information it provides in the due diligence phase is materially correct.
4.3 Human Governance
On the topic of human oversight, the Act also states that governance measures will be commensurate with the risk level and the context of use of the AI system. Providers of high-risk generative AI systems must therefore ensure that natural persons are assigned to oversee those systems, and that such persons have the necessary competence, training and authority.
4.4 Third-Party Agreements
These third-party agreements reflect GDPR obligations with regard to data processing. The draft Act mandates that providers of high-risk AI systems have a written agreement with third parties supplying AI systems and services used in those high-risk systems. The agreement must clearly outline the information, capabilities and support necessary, based on current industry standards, to ensure the high-risk AI system provider can meet its obligations under the regulation.
4.5 Sanctions
The draft Act imposes harsh sanctions on companies using AI that fall foul of the above rules. Non-compliance with the prohibited-practice obligations is subject to a fine of up to €35,000,000 or, if higher, 7% of the company’s total worldwide annual turnover for the preceding financial year. Non-compliance with the requirements for high-risk systems is subject to administrative fines of up to €15,000,000. Finally, the supply of incorrect or misleading information is subject to a fine of up to €7,500,000 or, if the offender is a company, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
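The ‘whichever is higher’ mechanics of these tiers can be illustrated with a short sketch. This is a hypothetical helper for illustration only, not language from the Act itself; the thresholds used are those quoted above.

```python
def fine_ceiling(fixed_cap_eur: int, turnover_pct: float, annual_turnover_eur: int) -> float:
    """Applicable maximum fine under a 'fixed cap or percentage of
    worldwide annual turnover, whichever is higher' provision."""
    return max(fixed_cap_eur, annual_turnover_eur * turnover_pct / 100)

# Prohibited-practice tier: €35m or 7% of turnover, whichever is higher.
# For a company with €600m turnover, 7% (= €42m) exceeds the fixed cap:
fine_ceiling(35_000_000, 7.0, 600_000_000)   # 42,000,000.0

# For a company with €100m turnover, 7% (= €7m) is below it, so €35m applies:
fine_ceiling(35_000_000, 7.0, 100_000_000)   # 35,000,000
```

The practical point for due diligence is that the turnover-based cap, not the headline euro figure, is what binds for large acquirers, so the exposure scales with the size of the combined group.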
4.6 How this will affect M&A transactions
The AI Act, even at draft stage, arguably has implications for M&A deals in that a buyer can take reassurance from it at the due diligence stage. The harsh penalties for falling short of what is required of companies using AI technology will ultimately make the process more reliable and lead to smoother due diligence. That being said, acquiring companies will ultimately have to factor these compliance obligations into their assessment of the target.
-
Recommendations
5.1 Recommendations for target companies
It is recommended that target companies conduct rigorous evaluations of their current systems to ensure compliance with the prohibitions, especially in the area of ‘hallucinations’, given the prior cases in which they have proved fatal to litigation proceedings. Furthermore, thorough risk assessment processes should be implemented to manage risks in light of the new Act, given the gravity of the penalties companies may face should they fail to meet its criteria.
-
Recommendations and Implications for buy-side companies
6.1 Negotiating warranties
One way a buy-side company can make its position more robust is through adequately strengthened contract terms. While target companies may welcome the benefits and efficiencies of generative AI, they must also assume the corresponding risks that its use may bring, and must therefore make appropriate representations and warranties to customers and buyers. Whatever is identified in the due diligence phase should then be confirmed through appropriate warranties and indemnity clauses in the share purchase agreement with the target company at the acquisition stage.
It is recommended that companies include representations regarding intellectual property, infringement and the use of third-party content, as well as indemnity clauses in case of a misrepresentation based on an incorrect AI-generated result. Similarly, for a traditional acquisition agreement, whether for a merger, stock purchase or asset purchase, one must consider whether the IP warranties should be tailored to the business being acquired where the use of AI has been identified during the due diligence process. In summary, warranties provide added protection for a buy-side company purchasing a target where red flags have been identified, offering safety and indemnification and allowing the deal terms to be adjusted accordingly.
6.2 Human Governance
One aspect potential acquirers will have to be cognisant of in due diligence is whether or not the target company has adequately invested in human oversight. This may involve enhanced training to ensure effective oversight of the AI deployment. Another point of concern in acquiring a target company using AI is the need to establish thorough documentation and risk assessment processes. This is especially necessary in the context of general-purpose AI models, in order both to demonstrate compliance and to manage risks.
6.3 The AI Act
Data governance will also be a concern in terms of compliance with the new Act, given that data integrity must be maintained in line with both the GDPR and the new Act. Finally, this report recommends that acquirers conducting due diligence on target companies prepare adequately for the cost of compliance. As previously stated in this report, the fines for breaching the AI Act, while intended to deter malpractice, are of a large scale. A company conducting due diligence on a potential target must therefore be cognisant of this before committing to its purchase.
Contributed by Giacomo Pescatore (Avvocato) and Patrick Hannon (Paralegal).
[1] How AI will impact due diligence in M&A transactions
(https://www.ey.com/en_ch/strategy-transactions/how-ai-will-impact-due-diligence-in-m-and-a-transactions)
[2] Chatbots may ‘hallucinate’ more often than many realise. (https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html)
[3] Amazon scraps secret AI recruiting tool that showed bias against women. (https://www.reuters.com/article/idUSKCN1MK0AG/)