The New York Privacy Act (NYPA) made quite a splash when introduced in the NY State Senate this past May. Like the summer’s first cannonball into the deep end, it sent waves through the data privacy world, with harbinger-of-doom headlines proclaiming it a “sweeping,” “even bolder,” “considerably tougher,” “stringent,” nay, the “strictest” data privacy bill to date. And the headlines weren’t wrong.
The bill stands to impact companies of any size (including non-profits) doing business in or even so much as producing “products or services that are intentionally targeted to residents” of New York. It grants consumers the right to file private lawsuits against companies for privacy violations. And it embraces the concept of businesses as “data fiduciaries”, which is a fancy way of saying that companies would be required to put consumer privacy before company profits. So, yeah, it’s a doozy, especially in terms of the economic impact it would have on companies large and small.
But there’s another way NYPA stands out for toughness that didn’t get attention in the media despite being poised to have an enormous impact on how developers, data controllers, and data processors work with data: its stance on automated decision-making.
First, let’s establish what the bill protects, who would need to comply, and how it compares to the already broad reach of CCPA, so you can better understand if it would impact your team.
NYPA’s definition of “personal data” is very similar to CCPA’s definition of “personal information” except that it’s injected here and there with little shots of PII on steroids, like “mother’s maiden name”, “retina or iris scan”, “voiceprint”, and “non-public communications”. Like CCPA, it casts as wide a net as possible, but going beyond CCPA, it also lines that net with targeted lures to make it unequivocally clear that it means to catch ALL THE DATA.
The law would apply to “legal entities that conduct business in New York state or produce products or services that are intentionally targeted to residents of New York state.” Again, this takes it several steps further than CCPA, which specifies applicability thresholds for businesses in terms of revenue, amount of user data processed annually, and profits derived from the sale of personal information. So if your business activities in CA are small enough, you may not be impacted come 2020. Under NYPA, on the other hand, it wouldn’t matter how small you are or how little data you collect: if you’re dealing with the data of New Yorkers, you’d have to comply. 😬 (Here’s looking at you, fellow start-ups and SMBs.)
How about the rights NYPA grants consumers? The IAPP has a great tool for keeping up to date on and comparing all the state privacy laws currently in consideration and passed. Of the 19 laws being tracked, only one has a little x under every single consumer right proposed so far: the NYPA. Along with the six rights commonly granted by the majority of the proposed laws, the NYPA stipulates additional consumer rights beyond those established by CCPA, chief among them the right to protection from automated decision-making.
And that right there is what makes this so much stronger in terms of cracking down on how companies work with data.
NYPA defines automated decision-making as “a decision based solely on profiling,” where “profiling” is “any form of automated processing of personal data consisting of the use of personal data…to analyze or predict…[a] natural person’s economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.”
With AI and ML businesses blossoming throughout Silicon Valley and beyond, the use of automated decision-making is steadily on the rise. It’s hard to resist taking the human (and labor hours) out of the equation when an algorithm can get the job done just as well. But NYPA’s argument is that algorithms cannot get the job done just as well and that such profiling can unfairly impact the consumers involved.
Though NYPA still has a ways to go before being signed into law, it isn’t the only one aiming to grant this right. Both Minnesota and Washington state include it in their privacy bills. Moreover, GDPR, the Grande Dame of privacy regulations herself, has already granted this right to European consumers. So it’s only a matter of time before similar legislation goes into effect for companies doing business from sea to shining sea.
The right is granted specifically against decision-making “based solely on profiling which produces legal…or similarly significant” effects, such as “denial of…financial and lending services, housing, insurance, education enrollment, criminal justice, employment opportunities, and health care services.” The UK’s Information Commissioner’s Office provides some helpful examples of what this may or may not look like, in view of GDPR. Yet even their explanation shows how “similarly significant” is purposefully open-ended.
In the lead-up to CCPA taking effect on January 1, 2020, companies doing business in the US have an opportunity to get ahead of the game: to revamp data privacy practices with an eye toward the future and the national trend toward ever stricter protections.
Like CCPA and GDPR, NYPA specifies that “personal data does not include de-identified data.” The simplest way to secure compliance with all of these laws is to work with data that is PII-free and to ensure your anonymization techniques are rock solid.
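To make that concrete, here’s a minimal sketch of one common building block, keyed pseudonymization, which replaces a direct identifier with an irreversible token while still letting you join records. The key name and record fields below are hypothetical, and note that under GDPR, pseudonymized data can still count as personal data if re-identification remains possible, so treat this as one layer, not a complete de-identification strategy.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice, store it in a secrets manager,
# separate from the data it protects. Without the key, the token
# cannot be reversed to recover the original identifier.
PSEUDONYM_KEY = b"rotate-me-and-store-in-a-vault"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier (e.g., an email address) with a
    keyed hash. Deterministic, so the same input always yields the
    same token and records can still be linked across datasets."""
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Example record with a direct identifier swapped for its token.
record = {"email": "jane@example.com", "zip": "10001"}
record["email"] = pseudonymize(record["email"])
```

A keyed HMAC is preferable to a plain hash here because an unkeyed hash of an email address can be reversed by brute force; quasi-identifiers like ZIP code would still need separate treatment (generalization, aggregation, and so on) before data could honestly be called de-identified.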
Indeed, the NY State Senate summarizes the bill as “[enacting] the NY privacy act to require companies to disclose their methods of de-identifying personal information, to place special safeguards around data sharing and to allow consumers to obtain the names of all entities with whom their information is shared.” The goal is not to stop companies from using consumer data but to compel companies to use data responsibly.
Is NYPA an over-the-top harbinger of doom for companies handling consumer data? Sure, you can look at it that way. Or you can look at it as a herald bringing news from just past the horizon, where comprehensive data privacy is the global standard and companies have fully equipped themselves to comply.
De-identify, anonymize, pseudonymize, obfuscate, mask, or synthesize. Just get ’er done. You’ll be glad you did. And we’ll be glad to help you along the way.