Artificial intelligence: advertising claims and due diligence
Cristina Mesa, partner in the Garrigues Intellectual Property Department
Following the tsunami caused by the launch of ChatGPT at the end of 2022, many companies have rushed headlong into offering services under the “artificial intelligence” label, knowing full well that it will attract the attention of potential buyers. The hype is undeniable, and with it has come a wave of products and services that have emerged in response to the popularity of so-called Large Language Models and the products built around them. Caution is advised.
Advertising products and services that use AI technology is, of course, lawful, but only if the claims used to promote them comply with applicable advertising legislation. In the United States, the Federal Trade Commission (FTC) has already warned of the risks of making misleading statements in advertising claims of this kind, and in Spain the legal framework also requires us to exercise caution.
The FTC and AI claims
The FTC Division of Advertising Practices warns that when claims are made regarding the use of AI, they must comply with general advertising principles. Specifically, the FTC emphasizes the following points:
Veracity
Although it may seem obvious, not all products that claim to use AI tools actually do so. For example, using AI tools to develop a product or service does not mean that the product or service can be sold as having AI built into it.
In this context, the FTC has filed a lawsuit against the company Automators AI for claiming that the use of its AI tools in online stores generated considerable profits. According to the FTC, the company argued that these substantial profits were obtained precisely through the use of AI tools, claiming that the company “integrates AI machine learning into the automation process, resulting in increased revenue and margins” and promising monthly net profits of between $4,000 and $6,000, and “$597,000 in six months” (article published by the FTC).
Exaggeration
The FTC will also look at whether advertising claims are accurate and true, or whether they exaggerate expected results or generalize them to any activity, thereby misleading buyers of these services, whether or not they are consumers. In short, the idea is not to create false expectations about what a product or service that uses AI can do. For example, however surprising AI-generated results may be, the truth is that they are not suitable for all types of uses. As an anecdote: it is surprising how some lawyers insist on asking ChatGPT to look for case law when it is widely known that it “creates” cases that do not exist but which would have been ideal for our case (had the case law been real). Following the commotion caused in the United States by the fine imposed on a lawyer who used non-existent case law to support a case, I hope the lesson has been learned.
Comparisons
Claims that certain products using AI are better than others that do not will also be scrutinized. It is not the comparison itself that is unlawful, but the manner in which it is carried out, so it is important to have scientific support for the claims made. Comparative advertising is a particularly complex field, since the requirements imposed by the Directive on misleading and comparative advertising, transposed in Spain in article 10 of the Unfair Competition Law, state that the comparison must be objective, relevant, verifiable and representative of the goods or services compared. Therefore, before disseminating a comparative message on the solutions provided by AI, we must be sure that we can defend the accuracy of the message (substantiation) and that it meets the requirements for comparative advertising.
As far as buyers of AI-based products or services are concerned, it is essential to bear in mind that when a technology becomes mainstream, there is a risk that insufficient precautions will be taken in acquiring those products or services. When adopting new technologies such as AI, due care must always be taken in the acquisition process, starting by not assuming that the advertising is accurate or complete.
The Department of Financial Protection and Innovation of the State of California (DFPI) has warned of a recent increase in financial scams, particularly in crypto environments, involving schemes that claim to use AI systems to generate “too-good-to-be-true” profits.
The scenario in Spain
In Spain we do not yet have guidelines or warnings regarding the misleading use of claims in connection with AI. However, although there are no specific regulations on the advertising of AI-based products and/or services, the principle of veracity will apply, which, in our opinion, is more than enough.
That said, not believing that additional legislation is necessary does not mean that the task ahead will be easy. The principle of veracity envisaged in articles 5 and 7 of the Unfair Competition Law requires advertising claims to be truthful. This includes not omitting information that is necessary to understand the message and not suggesting information that may lead the average recipient to draw an incorrect conclusion.
In conclusion
From the seller’s standpoint, when designing an advertising claim it is necessary to consider the different capabilities and limitations of supervised and unsupervised machine learning systems, reinforcement learning, or the new generative AI now available. In carrying out this task, before making the claim we must be in a position to provide documentary evidence of what we are saying. Technical and/or scientific reports can be used, but the statements made must be substantiated, since it is up to the advertiser to prove the truth of its claims. This is no easy task if we bear in mind, as we explained in a previous post, that there is no consensus on what an “AI system” actually is, let alone on its different types.
From the buyer’s standpoint, the first step is to verify that the service or product in question is adequate for the needs to be covered, since not all AI systems are the same or serve the same purpose. Trying to solve a classification problem (e.g., segmenting the users of an online store to optimize marketing campaigns) is not the same as trying to solve a regression problem (e.g., estimating the likely performance of an investment product) or attempting to create or edit new content in the audiovisual industry using generative AI. Once the type of system has been chosen, it is also necessary to analyze its performance metrics, such as precision, recall, the F1 score or the mean squared error, depending, once again, on the problem we are trying to solve. Here too, the ideal metrics will depend on the type of AI system involved.
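By way of illustration, the following is a minimal sketch (in Python, assuming the scikit-learn library, with purely invented labels and predictions) of how these metrics map onto the two problem types; none of the figures come from any real system:

```python
# Minimal sketch: checking a candidate AI system with metrics that match
# the problem type. All labels and predictions below are invented.
from sklearn.metrics import (
    precision_score, recall_score, f1_score, mean_squared_error,
)

# Classification (e.g., segmenting online-store users): precision, recall, F1.
y_true_cls = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred_cls = [1, 0, 1, 0, 0, 1, 1, 0]
print("precision:", precision_score(y_true_cls, y_pred_cls))  # correct share of predicted positives
print("recall:   ", recall_score(y_true_cls, y_pred_cls))     # share of actual positives found
print("F1 score: ", f1_score(y_true_cls, y_pred_cls))         # harmonic mean of precision and recall

# Regression (e.g., estimating an investment product's performance): MSE.
y_true_reg = [102.0, 98.5, 110.2, 105.0]
y_pred_reg = [100.5, 99.0, 108.0, 106.5]
print("MSE:", mean_squared_error(y_true_reg, y_pred_reg))     # average squared prediction error
```

A provider should be able to explain which of these metrics it reports, on what test data, and why that metric suits the problem at hand.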
When acquiring these products and services, and also from a compliance standpoint, numerous factors need to be borne in mind depending on the specific case and on the risks involved in the area in which the AI products or services are going to be used. For example, the risk level in medical diagnosis prediction systems is not the same as in predictive industrial maintenance systems, and the risk involved in a spam filter is not the same as in software aimed at managing human resources. In general, however, taking into account the following aspects can prove useful:
- Description of the model and whether it is in turn based on third-party models (and to what extent).
- The precision of the system.
- How its algorithm works (without seeking to access the provider’s trade secrets).
- Efforts to eliminate bias and the corrective mechanisms in place (see the sketch after this list).
- How the system can be integrated into the systems already in place at the company, including the possibility of tailor-made solutions.
- Provision of update and maintenance services.
- Description of training data and guarantees as to their lawfulness. Here it is essential to minimize risks in relation to possible breaches of intellectual property and privacy rights, especially in training processes.
- Confidentiality, with special reference to reuse of the company’s data (e.g. "prompts").
- The skills required to use the system.
- Rights in the results and possible limitations on their use.
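On the bias point, a buyer can ask for, or run, some very simple checks before signing. The sketch below, in Python with invented data, compares the system’s favourable-outcome rate across two hypothetical groups and computes a disparate impact ratio; the 0.8 threshold mentioned in the comment is only a common rule of thumb, not a legal standard in Spain:

```python
# Minimal sketch of one possible bias check: compare the rate of favourable
# outcomes across groups. Group labels and outcomes are invented.
from collections import defaultdict

def selection_rates(groups, outcomes):
    """Favourable-outcome rate per group (e.g., CVs shortlisted, loans approved)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, y in zip(groups, outcomes):
        totals[g] += 1
        positives[g] += y
    return {g: positives[g] / totals[g] for g in totals}

groups   = ["A", "A", "A", "B", "B", "B", "B", "A"]
outcomes = [1,   0,   1,   0,   0,   1,   0,   1]   # 1 = favourable decision

rates = selection_rates(groups, outcomes)
ratio = min(rates.values()) / max(rates.values())
print(rates, "disparate impact ratio:", round(ratio, 2))
# A ratio well below 1 (some frameworks use 0.8 as a rule of thumb) is a
# signal to ask the provider what corrective mechanisms are in place.
```

Checks of this kind do not replace a proper audit, but they help frame the questions to put to the provider.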
It may also be useful to request references from other users and to study specific use cases, in order to verify that we are indeed acquiring the right tool and assuming the expected level of risk in doing so.
Needless to say, in mass-adoption systems such as the commercial version of ChatGPT, the margin for negotiation can be nonexistent, since the relationship is mainly governed by standard terms and conditions. It is therefore necessary to analyze whether the system’s use is adequate and safe in view of the company’s activities and, where appropriate, to define adequate usage policies, which should always go hand in hand with the relevant training.
This post obviously does not intend to provide an exhaustive list of the precautions to be taken when acquiring and promoting an AI system, but it does seek to offer a few recommendations on aspects to bear in mind, both as regards the claims made for AI products and the care that should be taken when purchasing them.