Public consultation on the Privacy Act – Submission – David Eaves and Naeha Rashid
Prepared by: David Eaves and Naeha Rashid
Table of Contents
- About the Authors
- Our Focus
- Context: The Importance of Unique Identifiers
- The benefits and dangers of UIs
- Recommendations for the Privacy Act
- Appendix: Privacy Threat Model
About the Authors
David Eaves is a lecturer in Public Policy at the Harvard Kennedy School, where he teaches digital transformation in government. In 2009, as an adviser to the Office of the Mayor of Vancouver, David proposed and helped draft the Open Motion, which created one of the first open government data portals in Canada and the world. He subsequently advised the Canadian government on its open data strategy, where his parliamentary committee testimony laid out the core policy structure that has guided multiple governments’ approach to the issue. He has gone on to work with numerous local, state, and national governments advising on technology and policy issues, including sitting on Ontario’s Open Government Engagement Team in 2014–2015 and training every cohort of Code for Canada, Code for America and White House Presidential Innovation Fellows.
Naeha Rashid has worked at the intersection of financial inclusion, social entrepreneurship, and technology for the last several years. She is passionate about leveraging technology solutions to improve the quality and character of people’s lives. While a 2019–20 digital governance fellow at Harvard Kennedy School’s Ash Center, Rashid analyzed issues related to technology and innovation in government. Previously, Rashid worked for CGAP—a member of the World Bank Group—leading her team’s work in Pakistan to catalyze innovation and scaling of digital financial services. She was also a core member of the start-up team for Karandaaz Pakistan, an organization funded by the Bill and Melinda Gates Foundation. Rashid holds a Master in Public Policy from Harvard Kennedy School and undergraduate dual degrees in International Development (honors) and Economics from McGill University.
Our Focus
We applaud the Canadian Government’s efforts to modernize the Canadian Privacy Act. The world has changed greatly since the Act first came into effect in 1983, particularly with the rise of digital technologies and the growth in the volume and complexity of the data and information Canadians share with each other, with companies, and with their governments.
The Act’s modernization comes at a critical juncture. In the past several years the federal government and many provincial governments have outlined ambitious goals to recreate and surpass the digital government successes seen elsewhere in the world. The ability to collect, associate, store and share information about individuals across government institutions, and even across provincial and federal jurisdictions, is likely essential to achieving this ambition.
The facilitation and management of this data sharing ability is explicitly acknowledged as a key goal of the Privacy Act revisions (“promoting the responsible use and sharing of data to advance government objectives in the public interest”). This response focuses on a critical emerging trend that the proposed revisions to the Act mention but do not engage with sufficiently: the need to regulate how governments will share data about individuals across ministries, and possibly across the federal, provincial and municipal divide.
To attain the advanced data sharing capabilities that the government aims to create, the proposed revisions to the Act should provide a stronger framework to guide those building digital era government services and institutions. Ultimately, the updated Act must maximize the state’s capacity to create public goods and benefits to increase public welfare, while simultaneously ensuring the safety of public service users and their trust in the government’s use of their private information.
Context: The Importance of Unique Identifiers
Over the past 20 years, government services have increasingly become digital by default. The current pandemic, in which virtually all government services have had to be made accessible online, has accelerated this trend. Even for services conducted in person or on paper, information and data about individuals are inevitably collected, stored, and managed using digital technologies.
The digitization of government services creates both new risks to privacy and security and new opportunities for the state to create public goods and better serve individuals. Government services do not operate in a vacuum from the private sector. Users of Canadian digital services [1] rightfully desire and expect the simplicity and ease of use often found in online private sector services. The architects of the Privacy Act’s reforms appear to be aware of this: in many places, the language used to sketch out the goals of the modernized Act hints at a similarly forward-thinking vision of a digital era government. This vision appears to be in line with the model of digital government that emerged in countries such as Estonia over the last decade and is now broadly championed in Europe, at the World Bank, and in numerous emerging markets such as India and Bangladesh.
Based on the experiences of Estonia and others, there is an emerging consensus that an essential element of a modern digital government is some form of unique identifier (UI). A unique identifier is a numeric or alphanumeric string that is guaranteed to be associated with a single entity (e.g. an individual or business) within a system [2]. This string is linked with a core set of descriptive attributes for the entity. While we can debate what form a UI should take (functional vs foundational ID, mandatory vs not, tokenized vs exposed, etc.), UIs are the critical foundation upon which the most successful examples of digital era government are built. From a privacy perspective, UIs pose a particularly complex challenge, as they are predicated upon “the inherent tension between ready access to data stores and the imperative to safeguard this data from unauthorized access” [3]. Interestingly, when it comes to the issue of “unauthorized access,” much of the media’s attention focuses on illegal data breaches by foreign and malicious actors, and effective security is clearly required to protect against such intrusions. However, the greatest threat to individuals, and the one the Privacy Act is best positioned to address, is the core concern that personal data shared with the government could in the future be used against them punitively and in unanticipated ways by the government or its agents. (For a more detailed explanation of the privacy threat model linked to government use of personal data, please see the Appendix.)
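To make this definition concrete, the sketch below (our illustration, not drawn from any particular national system) shows what a minimal UI record might look like: an opaque string guaranteed unique within the system, linked to a core set of descriptive attributes. Using a random rather than a derived string is one common way to keep the identifier itself from leaking personal information.

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class UniqueIdentifierRecord:
    """A minimal unique-identifier record: one opaque string per entity,
    linked to a core set of descriptive attributes."""
    uid: str                 # opaque string with no embedded meaning
    attributes: dict = field(default_factory=dict)

def issue_record(legal_name: str, date_of_birth: str) -> UniqueIdentifierRecord:
    # A random UUID is, with overwhelming probability, unique within the
    # system and reveals nothing about the person it is assigned to.
    return UniqueIdentifierRecord(
        uid=str(uuid.uuid4()),
        attributes={"legal_name": legal_name, "date_of_birth": date_of_birth},
    )
```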
Given the criticality of UIs to the future of government across all levels of jurisdiction, an updated Privacy Act needs to balance enabling the state to create UIs, and the ecosystem of services and public goods they allow for, with the need to protect against the dangers this new capability creates if abused by the state. If the rules are too tough, the Canadian public will not be able to fully leverage the many benefits that UIs can create for digital government and for the digital economy at large. If the rules are too loose, the significant dangers that accompany the emergence of UIs will be exacerbated, and there is a real risk that public trust in government services will erode. While the Act does not need to provide operational guidance on UIs, it should allow work on them to proceed within a system of sufficiently strong safeguards.
The benefits and dangers of UIs
A unique identifier creates several benefits for both users and governments. Some of the most significant include:
- Permitting identification solutions, which in turn unlock opportunities such as remote authentication and verification of identities.
  - This is critical to security and to the correct provisioning of government services in a digital era.
  - In many countries, a government-backed digital identity has been critical to increasing equitable access to the digital economy, as well as to increasing the efficiency and security of transactions.
- Allowing data about the same unique individual to be linked and shared across multiple government departments and ministries.
  - This ability creates the basis for advanced data sharing across government departments, allowing policies like “tell us once” to be implemented.
  - It can help the government predict what services individuals require in advance of any application, and create value-added services such as prequalification.
  - In many scenarios it could help mitigate issues of fraud.
- Coding service data against individuals, which likely facilitates future uses of artificial intelligence technologies to identify trends that could be used to improve or create new public goods (as well as the opposite).
Alongside the benefits of UIs, there are also significant risks, including the following:
- Creating a permanent and immutable record for individuals that follows them for life;
- New security vulnerabilities linked to the collection, storage and access of all data that is connected to a UI. Stringent controls are required to mitigate potential misuse and associated risks;
- New shared-fate risk: if data in one system were compromised, further and extensive data compromise could result within multiple (even unrelated) sensitive systems;
- Increased overuse risk, where a UI is increasingly required by different entities even when the UI provides a significantly higher level of assurance than needed;
- Enhancing the government’s capability to create, track, and monitor individuals’ data and use it to build comprehensive profiles of individuals, thus increasing the risk of malicious surveillance;
- Significant danger of function creep, meaning that the UI may be used for more and different purposes than were originally intended in the future;
- Finally, the more valuable and widely used a credential is, the greater the risk of theft or unauthorized use. UIs, and credentials built atop them, are likely to become very widely used and thus both highly valuable and in need of significant security.
BOX 1: Examining the benefits and dangers of UI through the lens of the BC Services Card
One Canadian case that begins to hint at the many benefits of operationalizing a unique identifier is that of the British Columbia Services Card. The BC Services Card is a provincial ID card that enables access to a range of government services. Each card has an embedded chip which contains a UI. This UI in turn provides the technical basis upon which residents’ personal information can be linked across disparate government databases.
Originally linked with driver licensing and created to enable eHealth in BC, the card is now a multipurpose e-identity that can be used to access 17 diverse services, such as student loans. As of 2020, 12 more use cases to be connected to the Services Card were in the pipeline. Notably, the Card model is interoperable with the federal system, meaning that it could be integrated into a wider Canadian identity system in the future. (It is already used to authenticate users with the Canada Revenue Agency.)
The BC Services Card has transformed the way the province collects, accesses and shares personal information amongst departments, agencies and even private contractors. As of August 2020, approximately 150,000 users had registered for the Card (up from 50,000 users in April 2020). The card’s rapid uptake – in part driven by COVID – shows users have come to expect the functionality the Services Card enables.
Despite the successes of the BC Services Card, some limitations and drawbacks remain. Most notable is that the card largely serves as a single sign-on solution; consequently, it does not yet allow more advanced benefits (e.g. predictive analytics and “tell us once”) to accrue to individuals or the state. Moreover, the BC Services Card is not governed by any UI-specific law, and is thus a source of concern for many privacy activists [4]. One major area of contention has been mission creep from the card’s original conception.
The privacy concerns come down to the fact that the BC Services Card operates in an environment that is not fit for purpose: the existing regulations did not envision the robustness of data sharing that the card currently enables and could build upon in the future. As a result, some perceive a lack of regulatory guardrails to sufficiently guide or curtail this work. A UI-specific regulation could help manage these problems.
A useful thought exercise for privacy professionals to engage in as they revise the Privacy Act is to consider how the updates would both advance and constrain the operation of the BC Services Card.
Recommendations for the Privacy Act
We have limited our recommendations to three primary areas:
- Discussing the content of specific clauses under proposal 6, “updating rules on the collection, use, disclosure, and retention of personal information.”
- Discussing the content of specific clauses under proposal 8, “introducing stronger accountability mechanisms in the Act would be supplemented by supporting regulations or Government policy.”
- Creating space for UI-specific regulations in the Canadian context [unaddressed by the proposed revisions]
Area 1: Discussing Proposal 6
- Updating the provisions that allow for the use and disclosure of personal information for other purposes: set out a list of authorized circumstances where personal information may be used or disclosed for a purpose other than that for which it was originally collected.
- Clarifying “consistent use” in the use and disclosure framework: the Privacy Act could define “consistent use” flexibly, in a way that aligns this Canadian concept with the European approach to “compatible uses”.
While we broadly agree with some of the goals of this proposal (constraining data re-use to purposes users have explicitly or implicitly consented to), the proposed approach is likely to be both rigid and fragile. No list can anticipate every condition, nor every new opportunity to create public goods, improve services, and reduce the administrative burden on citizens. There is a real risk that any such list will overly constrain the government’s ability to adapt to users’ needs in a timely fashion, and that subsequent updates to the list will trail the expectations of users. Finally, an “approved” list of activities will not only lack flexibility; it will create a simple rule, rather than a set of new norms, around how to adapt services while maintaining privacy in a digital era.
Given this, more adaptive approaches that protect users against the harms of an overreaching government while still allowing new uses of data to emerge would be advisable. One possibility would be establishing a set of principles to guide decision makers. Such principles might set a lower barrier for the government to re-use data when it can demonstrate the outcomes are beneficial to users (e.g. identifying and proactively suggesting a benefit a user is eligible for), whereas a higher standard of consent would be required for uses that may be punitive (e.g. identifying fraud).
An ideal set of principles would enable the beneficial outcomes but prevent punitive ones; examples of such outcomes can be seen in the following scenarios.
Scenario A: Beneficial
Many governments are interested in “moment of life” services and/or in lowering the administrative burden of provisioning users with public goods. It would be ideal if the government could proactively inform a user who has enrolled in specific benefits that they also qualify for certain tax deductions or exemptions. The data sharing required to create such services risks diverging from the definition of consistent or compatible use.
Scenario B: Punitive
In the UK, the NHS and the Immigration Enforcement arm of the Home Office have a data sharing agreement under which the NHS can report patients with invalid visas, including those seeking medical care for issues like infectious diseases, to immigration authorities. This type of data sharing not only creates harms for users, but also undermines the goals of the NHS by discouraging individuals from seeking life-saving medical support.
The Act might advocate for (or simply make space for) the creation of bodies modeled after the Institutional Review Boards (IRBs) found at universities. Such boards, which could exist at the ministerial level but with oversight from a central agency and the Privacy Commissioner, could be responsible for predicting harms and providing guidance as new scenarios for the use of user data emerge. This type of process would provide oversight and the flexibility to adapt to emerging scenarios, as well as establish norms and expectations that government officials can anticipate and rely on as they imagine new services and react to evolving citizen expectations.
- Strengthening accountability for information sharing under 8(2)(f): require that all information sharing agreements be documented in writing. The clauses to be included could cover the purpose, a description of the information, safeguards, limits, and penalties.
This clause acknowledges the possible emergence of multiple bilateral data sharing arrangements. However, it is unfair, and likely impossible, to expect the average user of public services to navigate and track what could be thousands of unique data sharing arrangements. Indeed, over time it may become difficult or impossible even for government officials to track and understand such a complex web of bilateral agreements. While it may be outside the remit of the Act, the legibility of how privacy works, to both users and administrators, would benefit from further standardization of the terms of data sharing agreements. Ultimately, the design of a small number of standardized agreements will be critical to their efficacy and usefulness.
The Act should also set a clearer floor of minimum capabilities the state must provide when collecting individuals’ information. One way to achieve more accountability is to provide users with a clear and accessible log of which organizations and/or individuals are accessing their data. This way users can themselves identify unauthorized access to their data, and/or monitor how their data is used so as to calibrate how and when they provide consent. This approach has been taken in Estonia. One note: while such a mechanism is necessary, it is not sufficient. The burden of policing how data is shared and used should not fall exclusively on the shoulders of users.
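As a minimal sketch of this pattern (our illustration of the general idea, not Estonia’s actual implementation), every read of a person’s record would append an entry to a log that the person can later inspect through a citizen-facing portal:

```python
import datetime
from dataclasses import dataclass

@dataclass
class AccessLogEntry:
    timestamp: str
    accessing_body: str  # which organization or official read the data
    purpose: str         # the stated legal basis or service context

# Keyed by the data subject's unique identifier.
ACCESS_LOG: dict[str, list[AccessLogEntry]] = {}

def record_access(subject_uid: str, accessing_body: str, purpose: str) -> None:
    """Append an entry every time personal data is read; entries are never edited."""
    ACCESS_LOG.setdefault(subject_uid, []).append(AccessLogEntry(
        timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
        accessing_body=accessing_body,
        purpose=purpose,
    ))

def my_access_history(subject_uid: str) -> list[AccessLogEntry]:
    """What a citizen-facing portal would display to the data subject."""
    return ACCESS_LOG.get(subject_uid, [])
```

In any real deployment the log itself would need to be tamper-evident; as noted above, the mechanism is necessary but not sufficient.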
Area 2: Discussing Proposal 8
- Clarifying which federal public body is accountable when multiple public bodies are involved: Indicate who is responsible when multiple bodies can access a dataset, or hold a shared dataset.
The existence of UIs makes it possible to radically increase the frequency, quantity and speed with which data is shared. This in turn could significantly alter how services (and related accountabilities) are structured in the federal government. For example, the United Kingdom is exploring creating “canonical” data registries of certain key pieces of information that different ministries will access (rather than collect from users) in service provision [5]. This approach has many benefits, but it raises critical questions about how responsibility and accountability ought to be distributed among the various entities that “own”, consume, share, and collect user data, given that to function effectively these registries must interoperate while necessarily spanning multiple ministries and agencies. Anticipating such changes, the Act should make room for new governance models to emerge.
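A minimal sketch of the canonical-registry pattern (the address registry and function names here are hypothetical, not the UK’s design): consuming ministries read from a single authoritative record rather than each collecting and storing its own copy, which is also what makes “tell us once” possible.

```python
# Hypothetical canonical address registry, keyed by unique identifier.
# Exactly one body "owns" (maintains) it; many ministries consume it.
ADDRESS_REGISTRY: dict[str, str] = {}

def update_address(uid: str, new_address: str) -> None:
    """'Tell us once': a single update becomes visible to every consumer."""
    ADDRESS_REGISTRY[uid] = new_address

def mail_tax_notice(uid: str) -> str:
    # The tax ministry reads the registry instead of keeping its own copy,
    # so it can never hold a stale or conflicting address. Accountability
    # for the record's accuracy sits with the registry's owner, not here.
    return f"Mailing notice to {ADDRESS_REGISTRY[uid]}"
```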
Additionally, it may be helpful to create a data hierarchy with associated access rules and risk levels. This could, for example, be structured around three data categories:
- program-specific data (which might be restricted to narrower use cases)
- de-identified data (which, with few restrictions, could support data driven policy); and
- core data registries (for example core “tombstone” data that must be secured, but is widely needed across agencies to deliver virtually any service)
The access rules would differ depending on which data set is involved, since the level of accountability associated with each category varies significantly.
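One illustrative way such a hierarchy could be encoded (the categories follow the list above; the toy access rules are our assumptions, not anything proposed in the Act):

```python
from enum import Enum, auto

class DataCategory(Enum):
    PROGRAM_SPECIFIC = auto()  # restricted to the collecting program's use cases
    DE_IDENTIFIED = auto()     # broadly usable for data driven policy
    CORE_REGISTRY = auto()     # "tombstone" data: widely needed, tightly secured

def may_access(category: DataCategory,
               requester_is_collecting_program: bool,
               requester_delivers_a_service: bool) -> bool:
    """Toy per-category rule; real rules would also attach the differing
    audit and accountability obligations described above."""
    if category is DataCategory.PROGRAM_SPECIFIC:
        return requester_is_collecting_program
    if category is DataCategory.DE_IDENTIFIED:
        return True  # few restrictions once identifiers are stripped
    if category is DataCategory.CORE_REGISTRY:
        return requester_delivers_a_service  # wide but logged, justified access
    return False
```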
Finally, when data is shared across multiple entities, it is often unclear to users how to identify issues and how to proceed if a problem is found. While users should be able to see how their data is being used, the system should not rely solely on them to report issues; it should have its own set of internal checks and balances. Moreover, users should know whom they can complain to and what the redress process is if they discover that their data is being inappropriately used.
Area 3: Creating room for UI
- Developing UI-specific regulations
There are several UI-specific rules that could be enumerated, especially given the expectation that UIs will become more ubiquitous as Canada’s digital economy expands. In what is likely to be an identifier-rich environment, both federal and provincial bodies will require guidance in this arena.
While the revisions to the Privacy Act have created some initial safeguards, the Act has not identified the enablers necessary for a complex identity system to safely take shape. These enablers could either be indicated in a separate section of the Privacy Act or, more reasonably, devolved into a separate UI-specific regulation entirely. Examples of the enablers that should be reflected in some kind of legal framework include the following (note that this list is by no means exhaustive [6]):
- Defining the scope and purpose of the UI system
- Laying out eligibility requirements
- Delineating system specifications
- Describing the form, role, and process of appointments of any governance mechanisms
- Defining the data sharing and transfer policies
- Establishing whether open standards and technology neutrality will be adhered to
While the enablers define the micro-level workings of any operational UI, there is also a need to define the standards that will govern a national, federated, and interoperable system. Key questions here include laying out the coordination and management mechanisms for multiple UIs across the federated system.
Finally, there is a real risk that any UI will be co-opted by the private sector. In many ways, Social Security Numbers, National IDs, and even cell phone numbers have inadvertently created identity standards that enable companies to track and share individuals’ data across an enterprise and/or the private sector at large. It should not be beyond the remit of the Privacy Act to establish, at a minimum, the principle that no unique ID infrastructure (or any government data) be used to create such a standard, intentionally or otherwise.
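One established technique for blunting this risk (used, for example, in Austria’s sector-specific personal identifiers) is never to expose the root UI directly: each sector receives a different derived pseudonym, so records cannot be joined across sectors without the issuing authority’s cooperation. A minimal sketch, assuming a secret key held only by the issuing authority:

```python
import hashlib
import hmac

# Illustrative placeholder; in practice this key never leaves the issuer.
ISSUER_KEY = b"held-only-by-the-issuing-authority"

def sector_pseudonym(root_uid: str, sector: str) -> str:
    """Derive a per-sector identifier from the root UI. The keyed, one-way
    derivation means a health-sector ID and a banking-sector ID for the same
    person cannot be linked without the issuer's key."""
    message = f"{root_uid}|{sector}".encode()
    return hmac.new(ISSUER_KEY, message, hashlib.sha256).hexdigest()

# The same person gets unlinkable identifiers in different sectors:
# sector_pseudonym(uid, "health") != sector_pseudonym(uid, "banking")
```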
Appendix: Privacy Threat Model
This appendix has been extracted from “Base Requirements and Threat Model for the Once-Only Policy”, an article in the 2020 State of Digital Transformation Report. For more information on this topic please refer to the complete Once-Only Guide.
There are five primary threats to privacy associated with the use of personal data by the government. We have broadly categorized these and briefly explained how failures to secure personal data against each threat could impact trust in government and service delivery. Note that the five threats do not constitute a comprehensive list and that the urgency of each risk may vary from country to country.
- Securing individual privacy from the state itself
This threat refers to protecting residents’ data from the government itself and/or from misuse by government agents. A significant example is a government employee accessing information they are not authorized to see (e.g. a librarian seeing a police report). If the government cannot adequately protect against this threat, the consequences may range from some erosion of trust in government to its complete destruction, though the impact on service delivery is likely to be low.
- Securing privacy from actors contracted by the state
This threat refers to protecting resident data from third-party government contractors who may have been given access to sensitive information. An example would be an app developer contracted by the health department to enhance the government’s COVID track-and-trace response. Failure to protect against this threat will likely result in some erosion of trust in government and a low to medium impact on service delivery.
- Protecting privacy against foreign state actors
Here we are particularly concerned with protecting resident data from military-grade incursions by malicious foreign states. This is likely to become a growing challenge for countries as the world becomes increasingly digital. If this threat comes to pass, it may result in lowered trust in government and will likely have a significant impact on service delivery. The worst case scenario would be a debilitating and devastating attack on a nation’s government and its residents.
- Protecting privacy against non-state actors
The idea here is to protect resident data from individuals and organizations (e.g. businesses and political parties) that seek to benefit from this data. It is uncertain what impact a failure to do so would have on citizen trust in government (ranging from erosion of trust to unlawful and insidious influence over people’s democratic choices), but the impact on service delivery is likely to be low.
- Protecting individuals’ privacy from people they know
This last threat refers to protecting individuals’ data from family and friends who are not authorized to access certain types of information. The clearest example is a child accessing a parent’s medical data without the parent’s consent. It is unclear what the impact of such actions on trust in government and service delivery would be.