Privacy & Generative AI Symposium Recap

December 2023
OPC Canada Symposium
8 min read

Figure 1. This Rothko-inspired abstract composition represents the Privacy & Generative AI Symposium. The deep blue foundation symbolizes foundational privacy principles and comprehensive policy depth, golden convergence areas represent expert insights and successful innovation opportunities, while balanced grey fields express academic objectivity and the governance frameworks needed to balance innovation with human rights.

Summary

This blog post summarizes key insights from the Privacy & Generative AI Symposium organized by the Office of the Privacy Commissioner of Canada on December 7, 2023, where I participated on behalf of The Future of Privacy Forum. The event featured a keynote by Gary Marcus examining AI's limitations and risks, followed by three panel discussions covering AI technology opportunities, governance challenges, and human rights considerations.

The symposium provided critical perspectives on AI's disconnect from reality, data leakage risks, and the misalignment between corporate profit motives and global interests. Discussions emphasized the need for transparent AI development practices, controlled access to AI technology, and the importance of including independent scientific voices in policy decisions to ensure responsible AI governance.

Key Takeaways:

  • AI systems lack true understanding and connection with reality, exhibiting traits akin to psychopathy
  • AI data leakage poses significant privacy risks due to extensive data absorption
  • Corporate profit motives may conflict with democratic and privacy interests
  • Companies should report detailed accounts of data use, testing, and bias incidents
  • Independent scientists must have stronger voices in AI policy discussions
  • Controlled access to AI technology, similar to licensing systems, is needed
  • Effective regulation can enhance safety without hindering innovation
  • AI governance requires balancing innovation with human rights protection

The Office of the Privacy Commissioner of Canada/Commissariat à la protection de la vie privée du Canada (OPC) organized the Privacy & Generative AI Symposium in Ottawa on December 7, 2023. I participated in the event on behalf of The Future of Privacy Forum. Here is a concise summary of the discussions.

Keynote: Gary Marcus's keynote provided an in-depth exploration of the challenges and considerations at the intersection of artificial intelligence and privacy. Dr. Marcus highlighted current AI systems' limitations and potential risks, emphasizing their lack of true understanding and connection with reality. He noted that AI cannot comprehend basic concepts like time and weight and may exhibit traits akin to psychopathy. He also warned about the danger of AI systems inadvertently leaking personal data as a result of their extensive data absorption.

Moreover, he raised concerns about the misuse of AI in creating disinformation and amplifying biases, and about deploying AI in critical areas like transportation and nuclear weaponry on the basis of overestimated capabilities. To give context to these concerns, he covered incidents where AI systems caused real-world impacts, such as tricking human workers into giving up their credentials or influencing stock markets with a viral image of an explosion. He then expressed concern over the misalignment of technologists' and nations' goals with global interests, emphasizing the need for responsible AI development and governance.

He advocated for transparency in AI development, urging companies to report detailed accounts of their data use, testing results, and incidents related to, e.g., bias, cybercrime, election interference, and data leaks. He highlighted the risks associated with the pursuit of profit in new, unreliable technologies like AI. He referenced a cartoon captioned "Yes, the planet got destroyed, but for a beautiful moment in time, we created a lot of value for shareholders" to underscore that while technology is being developed for financial gain, it could have serious, overlooked consequences for democracy and privacy.

Towards the end of the keynote, he expressed concern about big tech companies' disproportionate influence and lobbying power, and about the absence of independent voices in policy discussions. He emphasized the need for governments to include independent scientists in AI policy decisions, ensuring a more balanced and informed approach to the future of technology.

During a Q&A session, he addressed the challenges in AI development, particularly regarding large language models. He noted that despite efforts to train AI on extensive internet data, companies may be reaching data limits, which impacts AI improvement. He questioned the assumption that scaling AI models indefinitely leads to better performance, drawing a parallel with the end of Moore's Law.

He advocated for a combination of existing laws and new regulations for AI governance, primarily to protect artists' rights and to address biases in employment and housing. He also suggested creating national AI agencies to conduct comprehensive legislative reviews.

Regarding the openness of AI software and governance, he supported controlled access to AI technology, similar to a licensing system, while calling for transparent and inclusive governance processes.

Marcus also discussed the relationship between regulation and innovation. He argued that effective regulation could enhance safety without hindering innovation, drawing parallels with the aviation industry. He emphasized the importance of balanced rules that promote safety and technological innovation.

Panel 1 - Generative AI as a Technology: Opportunities and Risks: The first panel discussion highlighted AI's transformative role in various sectors, including healthcare, transportation, and gaming, while emphasizing responsible and safe usage and the criticality of data privacy. Ethical considerations, the novelty of Generative AI, and the importance of balanced opportunity-risk approaches were discussed. The panel also delved into AI's potential in areas like biodiversity and climate change management, the significance of national AI strategies, and the necessity of scaling up AI infrastructure and companies. The panelists:

Discussed AI's historical significance and its broad adoption in various sectors like healthcare. Emphasized responsible and safe usage of AI and the criticality of data privacy. Addressed Generative AI's novelty, ethical considerations, and the need for balanced opportunity-risk approaches.

Focused on AI's potential in medical research, intelligent transportation, and gaming. Raised concerns about deepfakes, data bias, and the misuse of AI in spreading misinformation and affecting freedom on the Internet.

Highlighted AI's transformative potential in biodiversity and climate change management. Stressed the importance of national AI strategies, adoption rates, AI infrastructure investment, and companies scaling up.

Representing the OECD, she emphasized the need to bridge the AI and privacy communities. Discussed the importance of data governance and the alignment of AI with privacy laws and principles.

Brought a historical perspective, focusing on the labor aspects of AI, content moderation, and the inferential nature of generative AI. Suggested considering the emotional impact and potentially deceptive nature of AI interactions.

Panel 2 - Generative AI Governance, Standards, and Regulation: The second panel focused on the rapid adoption of AI and its dual-use nature, particularly the governance challenges in democratic settings. It covered the legal intricacies of AI in copyright law and advocated for fair use policies. The importance of developing regulatory frameworks and standards for AI was emphasized, highlighting the role of regulations in building public trust and comparing AI regulations across different jurisdictions. The panelists:

Highlighted AI's rapid adoption and dual-use nature, focusing on the governance challenges in democratic settings.

Delved into the legal intricacies of AI in copyright law, advocating for fair use policies.

Emphasized the need for regulatory frameworks and standards in AI.

Discussed balancing AI's risks and opportunities, pointing out the role of regulations in building public trust.

Compared AI regulations across jurisdictions, focusing on interoperability and the impact of generative models on legislation.

Panel 3 - Innovation and Human Rights in the context of Generative AI: In the third panel, the transformative impact of AI on various industries was discussed, with a special focus on ethical considerations and the balance between AI innovation and human rights, particularly privacy, freedom of expression, and non-discrimination. The panel addressed the challenges of AI, including bias and privacy issues, and stressed the importance of responsible AI development, focusing on AI's impact on civil justice and human rights. The need for proactive measures to prevent discrimination and bias in AI systems was highlighted, alongside the importance of existing human rights frameworks in guiding AI development. The panelists:

Highlighted the transformative impact of AI on various industries and the importance of ethical considerations. Emphasized the need to balance AI innovation with human rights, focusing on privacy, freedom of expression, and non-discrimination.

Discussed the challenges of AI, including bias and privacy, and stressed the importance of responsible AI development.

Emphasized the potential of AI in affecting democratic rights and the necessity for best practices in AI development.

Focused on AI's impact on civil justice and human rights.

Spoke about the need for proactive measures to prevent discrimination and bias in AI systems and the importance of existing human rights frameworks in guiding AI development.

Highlighted the human rights implications of AI in high-impact settings like immigration and policing.

Stressed the risks of AI in surveillance and its potential to infringe on privacy and discussed the broader impact on societal norms and democratic processes.
