Smart product liability: issues and challenges

Introduction

In 2023, where do we stand on liability for smart products?

The rules governing product liability set out in the Civil Code of Québec were introduced early in the 20th century in response to the industrial revolution and the growing number of workplace accidents attributable to tool failures.1 Needless to say, the legislator at the time could not have anticipated that, a century later, the tools to which this legislation applied would be equipped with self-learning capabilities enabling them to perform specific tasks autonomously.

These “smart products,” whether they are intangible or integrated into tangible products, are subject to the requirements of general law, at least for the time being.

For the purposes of our analysis, the term “smart products” refers to products that have:

  • Self-learning capabilities, meaning that they can perform specific tasks without being under a human being’s immediate control.
  • Interconnectivity capabilities, meaning that they can collect and analyze data from their surroundings.
  • Autonomy capabilities, meaning that they can adapt their behaviour to perform an assigned task more efficiently (optional criterion).2

These capabilities are specific to what is commonly referred to as artificial intelligence (hereinafter referred to as “AI”).

Applying general law rules of liability to smart products

Although Canada prides itself on being a “world leader in the field of artificial intelligence,”3 it has yet to enact its first AI law.

The regulation of smart products in Quebec is still in its infancy. To this day, apart from the regulatory framework that applies to autonomous vehicles, there is no legislation in force that provides for distinct civil liability rules governing disputes relating to the marketing and use of smart products.

Two factors have a major impact on the liability applicable to smart products, namely transparency and the apportionment of liability, and both should be considered in developing a regulatory framework for AI.4

But where does human accountability come in?

Lack of transparency in AI and product liability

When an autonomous product performs a task, it is not always possible for either the consumer or the manufacturer to know how the algorithm processed the information behind that task. This is what researchers refer to as “lack of transparency” or the “black box” problem associated with AI.5

The legislative framework governing product liability is set out in the Civil Code of Québec6 and the Consumer Protection Act.7 The provisions therein require distributors, professional sellers and manufacturers to guarantee that the products sold are free from latent defects. Under the rules governing product liability, the burden of proof is reversed, as manufacturers are presumed to have knowledge of any defects.8

Manufacturers have two means to absolve themselves from liability:9

    • A manufacturer may claim that a given defect is the result of superior force or a fault on the part of the consumer or a third party; or
    • A manufacturer may argue that, at the time that the product was brought to market, the existence of the defect could not have been known given the state of scientific knowledge.

This last means is specifically aimed at the risks inherent to technological innovation.10

That being said, although certain risks only become apparent after a product is brought to market, manufacturers have an ongoing duty to inform, and how this is applied depends on the evolution of knowledge about the risks associated with the product.11 As such, the lack of transparency in AI can make it difficult to assign liability.

Challenges in apportioning liability and human accountability

There are cases where the “smart” component is integrated into a product by one of the manufacturer’s subcontractors. In Venmar Ventilation,12 the Court of Appeal ruled that the manufacturer of an air exchanger could not be exempted from liability even though the defect in its product was directly related to a defect in the motor manufactured by a subcontractor.

In this context, it is reasonable to expect that products’ smart components will give rise to many similar calls in warranty, resulting in highly complex litigation and further complicating the apportionment of liability.

Moreover, while determining the identity of the person who has physical custody of a smart product seems obvious, determining the identity of the person who exercises actual control over it can be much more difficult, as custody and control do not necessarily belong to the same “person.”

There are two types of custodians of smart products:

      • The person who has the power of control, direction and supervision over a product at the time of its use (frontend custody);
      • The person who holds these powers over the algorithm that gives the product its autonomy (backend custody).13

Either one of these custodians could be held liable should it contribute to the harm through its own fault.

As such, apportioning liability between the human user and the custodians of the AI algorithm could be difficult. In the case of a chatbot, for example, determining whether the human user or the AI algorithm is responsible for defamatory or discriminatory comments may prove complex.

Bill C-27: Canadian bill on artificial intelligence

Canada’s first AI bill (“Bill C-27”) was introduced in the House of Commons on June 16, 2022.14 At the time of publication, the Standing Committee on Industry and Technology was still reviewing Bill C-27. Part 3 of Bill C-27 enacts the Artificial Intelligence and Data Act.

If adopted in its current form, the Act would apply to “high-impact AI systems” (“Systems”) used in the course of international and interprovincial trade.15

Although the government has not yet clearly defined the characteristics that distinguish high-impact AI from other forms of AI, it currently refers in particular to “Systems that can influence human behaviour at scale” and “Systems critical to health and safety.”16 This suggests that high-impact AI is the type of AI that poses a high risk to users’ fundamental rights.

In particular, Bill C-27 would make it possible to prohibit the conduct of a person who “makes available” a System that is likely to cause “serious harm” or “substantial damage.”17

Although the Bill does not specifically address civil liability, the broad principles it sets out reflect the best practices that apply to such technology. These best practices can provide manufacturers of AI technology with insight into how a prudent and diligent manufacturer would behave in similar circumstances. The Bill’s six main principles are set out in the list below.18

      • Transparency: Providing the public with information about mitigation measures, the intended use of the Systems and the “content that it is intended to generate”.
      • Oversight: Providing Systems over which human oversight can be exercised.
      • Fairness and equity: Bringing to market Systems that can limit the potential for discriminatory outcomes.
      • Safety: Proactively assessing Systems to prevent “reasonably foreseeable” harm.
      • Accountability: Putting governance measures in place to ensure compliance with legal obligations applicable to Systems.
      • Robustness: Ensuring that Systems operate as intended.

To this, we add the principle of risk mitigation, considering the legal obligation to “mitigate” the risks associated with the use of Systems.19

Conclusion

Each year, the Tortoise Global AI Index ranks countries according to their breakthroughs in AI.20 This year, Canada ranked fifth, ahead of many European Union countries.

That being said, current legislation clearly does not yet reflect the increasing prominence of this sector in our country.

Although Bill C-27 does provide guidelines for best practices in developing smart products, it will be interesting to see how they will be applied when civil liability issues arise.


    1. Jean-Louis Baudouin, Patrice Deslauriers and Benoît Moore, La responsabilité civile, Volume 1: Principes généraux, 9th edition, 2020, 1-931.
    2. Tara Qian Sun and Rony Medaglia, “Mapping the challenges of Artificial Intelligence in the public sector: Evidence from public healthcare”, Government Information Quarterly, 2019, 36(2), pp. 368–383, online; EUROPEAN PARLIAMENT, Civil Law Rules on Robotics, European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), available online at europa.eu.
    3. GOVERNMENT OF CANADA, The Artificial Intelligence and Data Act (AIDA) – Companion document, online.
    4. EUROPEAN COMMISSION, White Paper on Artificial Intelligence: A European approach to excellence and trust, COM(2020), p. 3.
    5. Madalina Busuioc, “Accountable Artificial Intelligence: Holding Algorithms to Account”, Public Administration Review, 2020, online.
    6. Civil Code of Québec, CQLR, c. CCQ-1991, art. 1726 et seq.
    7. Consumer Protection Act, CQLR c. P-40.1, s. 38.
    8. General Motors Products of Canada v. Kravitz, 1979 CanLII 22 (SCC), p. 801. See also: Brousseau c. Laboratoires Abbott limitée, 2019 QCCA 801, para. 89.
    9. Civil Code of Québec, CQLR, c. CCQ-1991, art. 1473; ABB Inc. v. Domtar Inc., 2007 SCC 50, para. 72.
    10. Brousseau, para. 100.
    11. Brousseau, para. 102.
    12. Desjardins Assurances générales inc. c. Venmar Ventilation inc., 2016 QCCA 1911, para. 19 et seq.
    13. Céline Mangematin, Droit de la responsabilité civile et l’intelligence artificielle, online: https://books.openedition.org/putc/15487?lang=fr#ftn24; see also Hélène Christodoulou, La responsabilité civile extracontractuelle à l’épreuve de l’intelligence artificielle, p. 4.
    14. Bill C-27, An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, Minister of Innovation, Science and Industry.
    15. Bill C-27, summary and s. 5(1).
    16. GOVERNMENT OF CANADA, The Artificial Intelligence and Data Act (AIDA) – Companion document, online: canada.ca.
    17. Bill C-27, s. 39(a).
    18. AIDA – Companion document.
    19. Bill C-27, s. 8.
    20. TORTOISE MEDIA, The Global AI Index 2023, available at tortoisemedia.com.