Information, Privacy and Defamation

Overview

At Lavery, we were the first major law firm to anticipate, almost 30 years ago, the cardinal importance that information was acquiring in our society. Whether it involves the right of access to government information, the protection of personal information, cross-border data flows, the use of information technology, respect for privacy, reputation and personal image, or the right to be forgotten, our seasoned lawyers in the information and privacy sector offer you a comprehensive perspective thanks to the depth and breadth of their expertise and the broad range of services they offer.

Over the years, Mr. Doray, Ad.E., and his team have represented numerous public and private organizations in matters relating to the confidential nature of documents, the validity of certain governmental decisions, reputation and privacy. They act as legal counsel for many large corporations, professional orders, public organizations and media companies in important cases involving administrative and constitutional law. Furthermore, they have represented various clients in defamation and invasion of privacy lawsuits.

Services

  • Legal opinions and dispute resolution: protection of privacy and personal information in the public and private sectors generally, and with regard to employer-employee privacy issues in particular (protection of personal information, video surveillance, use of information technology, and the collection, use and communication of personal information before and during employment, etc.)
  • Legal opinions and dispute resolution: access to provincial and federal government information
  • Legal opinions and dispute resolution: solicitor-client privilege and litigation privilege
  • Legal opinions and dispute resolution: freedom of the press, defamation, the right to privacy and to one's personal image, and protection of information sources
  • Compliance audits and risk management
  • Operational support in case of loss or theft of personal information
  • Representation before government and legislative bodies in matters relating to access to information and privacy policies and legislation
  • Relations with provincial and federal regulatory authorities, including the Commission d'accès à l'information, the Privacy Commissioner of Canada and the Office of the Information Commissioner of Canada
  • Protection of commercial, financial, technical and industrial information supplied to governments by businesses
  • Interpretation of the rules in matters relating to telemarketing and philanthropic solicitation
  • Class action suits in matters relating to the protection of personal information and privacy
  • Structuring online national and international commercial transactions
  • Management and organizational support for businesses and public bodies to ensure their compliance with the applicable rules governing the protection of personal information, particularly when designing computer systems and websites
  • Protection of privacy, personal image and reputation in connection with the use of new information technologies (Internet, social media and cloud computing)
  • Archiving and transferring documents electronically
  • Application of the new Canadian anti-spam legislation
  • Application of the Lobbying Transparency and Ethics Act and of the Lobbying Act
  • Application in Canada of European directives on personal data processing, the U.S. Helms-Burton Act, the Patriot Act and the ITAR
  • Advice relating to the right to be forgotten
  1. Businesses: Four tips to avoid dependency or vulnerability in your use of AI

    While the world is focused on how the tariff war is affecting various products, it may be overlooking the risks the war poses to information technology. Many businesses rely on artificial intelligence to provide their services, and many of these technologies are powered by large language models, such as the widely used ChatGPT. It is therefore fair to ask whether businesses should rely on purely US-based technology service providers. There is talk of using Chinese alternatives, such as DeepSeek, but their use raises questions about data security and control over information. Back in 2023, Professor Teresa Scassa wrote that, when it comes to artificial intelligence, sovereignty can take many forms, such as state sovereignty, community sovereignty over data and individual sovereignty.[1] Others have even suggested that AI will force a recalibration of international interests.[2] In this context, how can businesses protect themselves from the volatility caused by the actions of foreign governments? We believe that it is precisely by exercising a certain degree of sovereignty over their own affairs that businesses can guard against such volatility. A few tips:

    Understand intellectual property issues: The large language models underlying most artificial intelligence technologies are sometimes offered under open-source licenses, but others are distributed under restrictive commercial licenses. It is important to understand the limits imposed by the licenses under which these technologies are offered. Some language model owners reserve the right to alter or restrict the technology’s functionality without notice, whereas permissive open-source licenses allow a language model to be used without time restrictions. From a strategic standpoint, businesses should retain intellectual property rights over the data compilations they integrate into artificial intelligence solutions.

    Consider other options: Whenever technology is used to process personal information, the law requires a privacy impact assessment before such technology is acquired, developed or redesigned.[3] Even where an assessment is not legally required, it is prudent to evaluate the risks associated with technological choices. If you are dealing with a technology that your service provider integrates, check whether there are alternatives: would you be able to migrate quickly to one of them if problems arose? If you are dealing with a custom solution, check whether it is tied to a single large language model.

    Adopt a modular approach: When a business chooses an external service provider to supply a large language model, it is often because the provider offers a solution integrated with applications the business already uses, or an application programming interface developed specifically for the business. In making such a choice, determine whether the service provider can replace the language model or application if problems arise. If the technology is a fully integrated solution from a service provider, find out whether the provider offers sufficient guarantees that it could replace a language model that becomes unavailable. If it is a custom solution, find out whether the provider can, right from the design stage, allow for the possibility of replacing one language model with another.

    Make a proportionate choice: Not all applications require the most powerful language models. If your technological objective is middle-of-the-road, more options are open to you, including solutions hosted on local servers that use open-source language models. As a bonus, choosing a language model proportionate to your needs helps reduce the environmental footprint of these technologies in terms of energy consumption.

    These tips each require different steps to put into practice. Remember to take legal considerations, and not only technological constraints, into account. Licenses, intellectual property, privacy impact assessments and the limited liability clauses imposed by certain service providers are all aspects to consider before making any changes. This isn’t just about being prudent: it’s about seizing the opportunity our businesses have to show that they are technologically innovative and to exercise greater control over their futures.

    [1] Scassa, T. (2023). “Sovereignty and the governance of artificial intelligence.” 71 UCLA L. Rev. Disc. 214.
    [2] Xu, W., Wang, S., & Zuo, X. (2025). “Whose victory? A perspective on shifts in US-China cross-border data flow rules in the AI era.” The Pacific Review, 1–27.
    [3] See in particular the Act respecting the protection of personal information in the private sector, CQLR c. P-39.1, s. 3.3.
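
    The modular approach described above can be sketched in code as a thin abstraction layer that business logic depends on, rather than on any one provider. The class and function names below are hypothetical placeholders, not a reference to any real provider's API; a minimal sketch of the design principle only.

```python
from abc import ABC, abstractmethod

class LanguageModel(ABC):
    """The narrow interface the business code depends on,
    instead of depending on any single provider's SDK."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class HostedModel(LanguageModel):
    """Placeholder for an externally hosted commercial model."""
    def complete(self, prompt: str) -> str:
        return f"[hosted] answer to: {prompt}"

class LocalOpenSourceModel(LanguageModel):
    """Placeholder for an open-source model run on local servers."""
    def complete(self, prompt: str) -> str:
        return f"[local] answer to: {prompt}"

def answer_customer(question: str, model: LanguageModel) -> str:
    # Business logic is written against the interface, so replacing
    # the underlying model is a configuration change, not a rewrite.
    return model.complete(question)
```

    With this structure, migrating from a hosted model to a local open-source one means passing a different `LanguageModel` instance; none of the downstream business code changes.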

    Read more
  2. The forgotten aspects of AI: reflections on the laws governing information technology

    While lawmakers in Canada[1] and elsewhere[2] are endeavouring to regulate the development and use of technologies based on artificial intelligence (AI), it is important to bear in mind that these technologies are also part of the broader family of information technology (IT). In 2001, Quebec adopted a legal framework aimed at regulating IT. All too often forgotten, this legislation applies directly to the use of certain AI-based technologies.

    The very broad notion of “technology-based documents”

    The technology-based documents referred to in this legislation include any type of information that is “delimited, structured and intelligible.”[3] The Act lists a few examples of technology-based documents, including online forms, reports, photos and diagrams, and even electrocardiograms! It is therefore easy to see how this notion applies to the user interface forms used on various technological platforms.[4] Moreover, technology-based documents are not limited to personal information: they may also pertain to company or organization-related information stored on technological platforms. For instance, Quebec’s Superior Court recently cited the Act in recognizing the probative value of medical imaging practice guidelines and technical standards accessible on a website.[5] A less recent decision also recognized that the contents of electronic agendas were admissible as evidence.[6]

    Because their algorithms are so resource-intensive, various AI technologies are offered as software as a service (SaaS) or platform as a service (PaaS). In most cases, the information entered by user companies is transmitted to supplier-controlled servers, where it is processed by AI algorithms. This is often the case for advanced client relationship management (CRM) systems and electronic file analysis, as well as for a whole host of applications involving voice recognition, document translation and decision-making assistance for users’ employees. In the context of AI, technology-based documents in all likelihood encompass all documents that are transmitted, hosted and processed on remote servers.

    Reciprocal obligations

    The Act sets out specific obligations when information is placed in the custody of service providers, in particular IT platform providers. Section 26 of the Act reads as follows:

    26. Anyone who places a technology-based document in the custody of a service provider is required to inform the service provider beforehand as to the privacy protection required by the document according to the confidentiality of the information it contains, and as to the persons who are authorized to access the document.
    During the period the document is in the custody of the service provider, the service provider is required to see to it that the agreed technological means are in place to ensure its security and maintain its integrity and, if applicable, protect its confidentiality and prevent accessing by unauthorized persons. Similarly, the service provider must ensure compliance with any other obligation provided for by law as regards the retention of the document. (Our emphasis)

    This section therefore requires the company wishing to use a technological platform and the supplier of the platform to enter into a dialogue. On the one hand, the company using the platform must inform the supplier of the privacy protection required for the information stored on it. On the other hand, the supplier must put in place “technological means” to ensure security, integrity and confidentiality, in line with the protection requested by the user. The Act does not specify what technological means must be put in place, but they must be reasonable, commensurate with the sensitivity of the technology-based documents involved, as judged by someone with expertise in the field.

    Would a supplier offering a technological platform with outmoded modules or known security flaws be in compliance with its obligations under the Act? The question must be answered in light of the information transmitted by the platform’s user concerning the privacy protection required for its technology-based documents. In any event, the supplier must not conceal the security risks of its IT platform from the user, since doing so would violate the parties’ disclosure and good faith obligations.

    Are any individuals involved?

    These obligations must also be viewed in light of Quebec’s Charter of Human Rights and Freedoms, which also applies to private companies. Companies that process information on behalf of third parties must do so in accordance with the principles set out in the Charter whenever individuals are involved. For example, if a CRM platform supplier offers features that can be used to classify clients or to help companies respond to requests, the information processing must be free from bias based on race, colour, sex, gender identity or expression, pregnancy, sexual orientation, civil status, age except as provided by law, religion, political convictions, language, ethnic or national origin, social condition, a handicap or the use of any means to palliate a handicap.[7] Under no circumstances should an AI algorithm suggest that a merchant refuse to enter into a contract with an individual on any such discriminatory basis.[8] In addition, anyone who collects personal information by technological means that make it possible to profile individuals must notify them beforehand.[9]

    To recap, although the emerging world of AI is a far cry from the Wild West decried by some observers, AI must be used in accordance with existing legal frameworks. No doubt additional laws specifically pertaining to AI will be enacted in the future. If you have any questions on how these laws apply to your AI systems, please feel free to contact our professionals.

    [1] Bill C-27, Digital Charter Implementation Act, 2022.
    [2] In particular, the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 30, 2023.
    [3] Act to establish a legal framework for information technology, CQLR c. C-1.1, s. 3.
    [4] Ibid., s. 71.
    [5] Tessier v. Charland, 2023 QCCS 3355.
    [6] Lefebvre Frères ltée v. Giraldeau, 2009 QCCS 404.
    [7] Charter of Human Rights and Freedoms, s. 10.
    [8] Ibid., s. 12.
    [9] Act respecting the protection of personal information in the private sector, CQLR c. P-39.1, s. 8.1.
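
    The section 26 dialogue between depositor and custodian can be illustrated as a simple data structure: before placing a document in a provider's custody, the depositor records the required level of confidentiality and the persons authorized to access it. The class and field names below are hypothetical, offered only as a sketch of the kind of metadata such a notice might capture, not as a statement of what the Act requires in form.

```python
from dataclasses import dataclass, field

@dataclass
class CustodyNotice:
    """Hypothetical record of the notice contemplated by s. 26:
    the depositor tells the provider what privacy protection the
    document requires and who may access it."""
    document_id: str
    confidentiality: str                       # e.g. "public", "internal", "sensitive"
    authorized_persons: set[str] = field(default_factory=set)

    def may_access(self, person: str) -> bool:
        # A provider honouring the notice would run a check like this
        # before granting anyone access to the hosted document.
        return self.confidentiality == "public" or person in self.authorized_persons

notice = CustodyNotice("hr-2024-017", "sensitive", {"alice@example.com"})
```

    The point of the sketch is the handshake itself: the provider cannot put "the agreed technological means" in place unless the depositor has first communicated the required protection and the list of authorized persons.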

    Read more
  3. Artificial intelligence in business: managing the risks and reaping the benefits?

    At a time when some are demanding that research on artificial intelligence (AI) and the development of advanced systems be temporarily suspended, and others want to close Pandora’s box, it is appropriate to ask what effect chat technologies (ChatGPT, Bard and others) will have on businesses and workplaces. Some companies support their use, others prohibit it, but many have yet to take a stand. We believe that all companies should adopt a clear position and guide their employees in the use of such technology. Before deciding what position to take, a company must be aware of the various legal issues involved in using this type of artificial intelligence. If a company decides to allow its use, it must be able to provide a clear framework for it and, more importantly, for the ensuing results and applications. Clearly, such technological tools have both significant advantages likely to cause a stir (consider, for example, how quickly chatbots can provide information that is both surprising and interesting) and undeniable risks associated with the advances that may arise from them. This article outlines some of the risks that companies and their clients, employees and partners face in the very short term should they use these tools.

    Potential for error and liability

    The media has extensively reported on the shortcomings and inaccuracies of text-generating chatbots. There is even talk of “hallucinations” in cases where the chatbot invents a reality that doesn’t exist. This comes as no surprise: the technology feeds off the Internet, which is full of misinformation and inaccuracies, yet chatbots are expected to “create” new content. They lack, for the time being at least, the parameters needed to use this “creativity” appropriately. It is easy to imagine scenarios in which an employee uses such technology to create content that their employer then uses for commercial purposes. This poses a clear risk for the company if appropriate control measures are not implemented: the content could be inaccurate in a way that misleads the company’s clients. The risk would be particularly significant if the content were disseminated by being posted on the company’s website or used in an advertising campaign, for example. In such a case, the company could be liable for the harm caused by its employee, who relied on technology known to be faulty. The reliability of these tools, especially when used without proper guidance, remains one of the most troubling issues.

    Defamation

    Suppose such misinformation concerns a well-known individual or a rival company. From a legal standpoint, a company disseminating the content without parameters in place to ensure that proper verifications are made could be sued for defamation or misleading advertising. Adopting measures to ensure that any content derived from this technology is thoroughly validated before commercial use is therefore a must. Many authors have suggested that the results generated by such AI tools should be used as aids to analysis and decision-making rather than as final output. Companies will likely adopt these tools, and benefit from them for competitive purposes in particular, faster than good practices and regulations can be put in place to govern them.

    Intellectual property issues

    The new chatbots have been developed as extensions of web search engines such as Google and Bing. Content generated by chatbots may be based on existing copyrighted web content and may even reproduce substantial portions of it, which could constitute copyright infringement. Where users limit themselves to internal research, the risk is limited, as the law provides a fair dealing exception in such cases; infringement may occur, however, where the intention is to distribute the content for commercial purposes. The risk is especially real where chatbots generate content on a specific topic for which there are few references online. Another point that remains unclear is who will own the rights to the answers and results of such a tool, especially if those answers and results are adapted or modified in various ways before they are ultimately used.

    Confidentiality and privacy issues

    The terms and conditions of use of most chatbots do not appear to provide for confidential use. Trade secrets and confidential information should therefore never be disclosed to such tools. Furthermore, these technologies were not designed to receive or protect personal information in accordance with the laws and regulations applicable in the jurisdictions where they may be used, and their owners typically assume no liability in this regard.

    Other issues

    A few other important issues can already be foreseen. First, the possible discriminatory biases that some attribute to artificial intelligence tools, combined with the lack of regulation of these tools, may have significant consequences for various segments of the population. Second, the many ethical issues associated with artificial intelligence applications that will be developed in the medical, legal and political sectors, among others, must not be overlooked. The stakes are even higher when these same applications are used in jurisdictions with different laws, customs and economic, political and social cultures. Lastly, the risk of conflict must also be taken into consideration. Whether the conflict is between groups with different values, between organizations with different goals or even between nations, it is unclear whether (and how) advances in artificial intelligence will help resolve or mitigate such conflicts, or instead exacerbate them.

    Conclusion

    Chat technologies have great potential, but they also raise serious legal issues. In the short term, it seems unlikely that these tools could actually replace human judgment, which is itself imperfect. That being said, just as the industrial revolution did two centuries ago, the advent of these technologies will lead to significant and rapid changes in businesses. Putting policies in place now to govern the use of this type of technology in your company is key. Moreover, if your company intends to integrate such technology into its business, we recommend a careful study of the terms and conditions of use to ensure that they align with your company’s project and the objectives it seeks to achieve.
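
    One concrete safeguard a company policy might mandate is a pre-submission check that blocks prompts containing flagged confidential terms before they ever reach an external chatbot. The watch list and function below are a deliberately simple, hypothetical sketch; a real control program would combine such filters with employee training and human review.

```python
# Hypothetical watch list of terms the company has flagged as confidential.
CONFIDENTIAL_TERMS = {"project aurora", "client list", "acquisition target"}

def safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt mentions any flagged confidential term,
    so the prompt is never sent to an external chat tool."""
    lowered = prompt.lower()
    return not any(term in lowered for term in CONFIDENTIAL_TERMS)
```

    A guard of this kind operationalizes the rule stated above, that trade secrets and confidential information should never be disclosed to such tools, without relying solely on each employee's judgment.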

    Read more
  4. Bill C-18 (Online News Act): Canada looking to create a level playing field for news media

    Earlier this month, Canadian Heritage Minister Pablo Rodriguez introduced Bill C-18 (Online News Act) in Parliament. The bill, largely inspired by similar legislation in Australia, aims to reduce bargaining imbalances between online platforms and Canadian news outlets in terms of how these “digital news intermediaries” allow news content to be accessed and shared on their platforms. If passed, the Online News Act would, among other things, require digital platforms such as Google and Facebook to enter into fair commercial agreements with news organizations for the use and dissemination of news-related content on their platforms.

    Bill C-18, introduced on April 5, 2022, has a very broad scope and covers all Canadian journalistic organizations, regardless of the type of media (online, print, etc.), provided they meet certain eligibility criteria. As for the “digital news intermediaries” on which journalistic content is shared, Bill C-18 specifically targets online communication platforms, such as search engines and social media networks, through which news content is made available to Canadian users and which, owing to their size, have a significant bargaining imbalance with news media organizations. The bill proposes criteria for determining whether such an imbalance exists, including the size of the digital platform, whether the platform operates in a market that gives it a strategic advantage over news organizations, and whether the platform occupies a prominent position within its market. These are clearly very subjective criteria, which make it difficult to identify the “digital news intermediaries” with precision. Bill C-18 also currently provides that the intermediaries themselves will be required to notify the Canadian Radio-television and Telecommunications Commission (“CRTC”) that the Act applies to them.

    The mandatory negotiation process is really the heart of Bill C-18. If the bill is passed in its current form, digital platform operators will be required to negotiate in good faith with Canadian media organizations to reach fair revenue-sharing agreements. If the parties fail to reach an agreement at the end of the negotiation and mediation process provided for in the legislation, a panel of three arbitrators may be called upon to select the final offer made by one of the parties. For the purposes of enforceability, the arbitration panel’s decision is then deemed to constitute an agreement entered into by the parties.

    Finally, Bill C-18 provides digital platforms with the possibility of applying to the CRTC for an exemption from mandatory arbitration, provided that their revenue-sharing agreements meet the following criteria:

      • Provide fair compensation to news businesses for news content made available on their platforms;
      • Ensure that an appropriate portion of the compensation is used by the news businesses to support the production of local, regional and national news content;
      • Do not allow corporate influence to undermine the freedom of expression and journalistic independence enjoyed by news outlets;
      • Contribute to the sustainability of Canada’s digital news marketplace;
      • Ensure support for independent local news businesses, and ensure that a significant portion of independent local news businesses benefit from the deals; and
      • Reflect the diversity of the Canadian news marketplace, including diversity with respect to language, racialized groups, Indigenous communities, local news and business models.

    A bill of this scope will certainly be studied very closely by members of Parliament, and it would not be surprising if significant amendments were made during the process. We believe that some clarifications would be welcome, particularly as to which businesses will be considered “digital news intermediaries” for the purposes of the Online News Act.

    Read more
  1. Lavery assists Agendrix in obtaining two ISO certifications for data security and privacy

    On February 6, 2023, Agendrix, a workforce management software company, announced that it had achieved certification under two globally recognized data security and privacy standards, ISO/IEC 27001:2013 and ISO/IEC 27701:2019. This made it one of the first staff scheduling and time clock software providers in Canada to obtain these certifications. The company is proactively engaging in all matters related to the security and confidentiality of the data processed by its web and mobile applications.

    The ISO/IEC 27001:2013 standard sets out requirements for information security management systems. For Agendrix’s customers, this means its products comply with the highest information security standards. ISO/IEC 27701:2019 provides a framework for the management and handling of personal information and sensitive data. This certification confirms that Agendrix follows best practices and complies with applicable laws.

    A Lavery team composed of Eric Lavallée, Dave Bouchard, Ghiles Helli and Catherine Voyer supported Agendrix in obtaining these two certifications. More specifically, our professionals assisted Agendrix in reviewing its standard customer contract and in implementing the policies and various internal documents essential to the management of personal information and information security.

    Agendrix was founded in 2015, and the Sherbrooke-based company now has over 150,000 users in some 13,000 workplaces. Its personnel management software is a leader in Quebec in the field of work schedule management for small and medium-sized businesses. Agendrix’s mission is to make management more human-centred by developing software that simplifies the lives of front-line employees. Today, the company employs more than 45 people.

    Read more
  2. Marie-Hélène Jolicoeur speaks at the ACUQ

    This spring, Marie-Hélène Jolicoeur, a partner in the Labour and Employment Law group, delivered four four-hour training sessions to close to one hundred emergency call centre workers in Québec. The sessions were delivered as part of a skills improvement program created by the Association des centres d’urgence du Québec (ACUQ). Entitled Médias sociaux et confidentialité de l’information (Social Media and Information Confidentiality), the training reviewed the legal obligations by which emergency call centre workers are bound and analyzed the employment-related consequences of disclosing on social media confidential or other information obtained in the course of employment.

    Read more