
Monday, March 5, 2018 | 9 | nylj.com

Cybersecurity

By Jessica B. Lee

The countdown to the enforcement date of the EU General Data Protection Regulation (GDPR) has begun, and it's becoming increasingly clear that many U.S. organizations are poised to be caught in its crosshairs. Organizations that offer goods or services in the EU (whether or not a payment is involved) or that monitor the behavior of individuals in the EU will be subject to the GDPR's requirements whether or not they have a presence in the EU. For U.S. organizations that are being exposed to the EU's regulatory regime for the first time, panic may be setting in (if it hasn't already). Requirements around honoring expanded data subject rights, maintaining records of processing, documenting the legal basis for such processing, and complying with the new security breach notification requirements, among others, may be particularly challenging for organizations that don't have well-developed data governance policies or centralized systems and databases.

The GDPR replaces the previous Data Protection Directive 95/46/EC (the Directive) as the governing privacy regulation in the EU. While key principles of data privacy addressed in the Directive remain largely the same, there are some significant policy changes, and, as a result, a fair amount of uncertainty about how the regulation will be enforced. With reports suggesting that many organizations won't be "fully compliant" by May 25, 2018 (the GDPR's enforcement date), the next year or two may prove instructive as the first round of enforcement begins.

Although some will find this uncertainty frustrating, there may be a silver lining. Where the Directive included an obligation to notify supervisory authorities about an organization's processing activities, the GDPR allows organizations to document their own processing activities, determine if they are compliant with the specific requirements, identify and mitigate any risks created by their data use, and ultimately hold themselves accountable for compliance. This emphasis on accountability and record keeping may actually help create the safety net needed to navigate the GDPR's grey areas. Organizations with a robust data governance program and a documented, considered approach to GDPR compliance are much less likely to be at the front lines of GDPR enforcement, and certainly should not be subject to the highest fines (up to €20 million or 4 percent of global annual turnover).

GDPR: Accountability For Risk-Based Approach

Article 5(2) of the GDPR introduces the accountability principle, which requires organizations that control the processing of personal data ("controllers") to demonstrate (read: document) compliance with the GDPR's principles relating to the processing of personal data (i.e., lawfulness, fairness and transparency; purpose limitation; data minimization; accuracy; storage limitation; and integrity and confidentiality). This notion of accountability is not new; it was included as a basic data protection principle in the OECD Guidelines in 1980 (and the most recent update in 2013) and has been incorporated in various forms in other international privacy regulations. However, previous iterations of the accountability principle were centered on assigning responsibility or fault for failures in privacy compliance. Under the GDPR, accountability is recast as an obligation to establish a systematic and ongoing approach to privacy. In effect, it codifies the obligation to create a data governance program that incorporates the principle of privacy by design, using tools like privacy impact assessments to routinize data protection within an organization. More than just a mandate to create policy documents, the GDPR creates a regulatory environment under which privacy and data governance are forced to become a standard element of an organization's operations.
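By way of illustration, the sketch below (in Python) shows one hypothetical shape a record of processing activities might take if an organization tracks its inventory in code. The field names and sample values are assumptions chosen for illustration, not language drawn from the regulation; an actual record would be designed with counsel.

    # Hypothetical record-of-processing entry; field names and values are
    # illustrative only, not a GDPR-mandated schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProcessingRecord:
        activity: str                # e.g., "email newsletter"
        purpose: str                 # why the personal data is processed
        legal_basis: str             # e.g., "consent", "legitimate interest"
        data_categories: List[str]   # what personal data is involved
        retention_period: str        # storage limitation
        recipients: List[str] = field(default_factory=list)
        security_measures: List[str] = field(default_factory=list)

    record = ProcessingRecord(
        activity="email newsletter",
        purpose="marketing communications to subscribers",
        legal_basis="consent",
        data_categories=["name", "email address"],
        retention_period="until consent is withdrawn",
        recipients=["email service provider"],
        security_measures=["encryption at rest", "role-based access"],
    )
    print(record.activity, "->", record.legal_basis)

Keeping records in a structured, queryable form of this kind makes it easier to show a supervisory authority, on request, what data is processed, why, and on what legal basis.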

This principle of accountability must be viewed in the context of the GDPR's risk-based approach to privacy. Under Article 24 of the GDPR, controllers are required to assess the nature, scope, context and purpose of processing, and based on the risks presented: (1) implement appropriate technical and organizational measures to ensure and be able to demonstrate that data pro-

Jessica B. Lee is a partner in the advanced media and technology practice at Loeb & Loeb.

The GDPR: A Silver Lining For Data Governance

» Page 11

By Ian G. DiBernardo and Jeffrey Mann

Cyberattacks have become commonplace over the last few years. No industry is immune to attacks, which have only increased in frequency and intensity as hackers and bad actors have become more sophisticated. One way a company can protect against cyberattacks is to have testing (which can mimic a hacker intrusion) performed on its computer systems and networks to uncover vulnerabilities. This is known as penetration or "pen" testing.

Although conducting pen testing is prudent and becoming common, it is also fraught with potential pitfalls. When embarking on such a project, a company should fully understand its scope and include certain contractual protections with the pen tester. This article discusses best practices related to pen testing, including relevant contractual provisions and precautions to take before, during and after embarking on this potentially risky endeavor.

Increased Need For Pen Testing

In recent years, requests for pen testing have increased dramatically. This is in part due to new privacy and security laws and frameworks that have been adopted in various jurisdictions and industries that either mandate or recommend pen testing as part of a company's data security program. For example, the National Institute of Standards and Technology (NIST), in its "Technical Guide to Information Security Testing and Assessment" (Special Publication 800-115), provided an overview of useful vulnerability validation techniques that can be utilized as part of a cybersecurity policy, which include pen testing.

Recently, New York enacted the State Department of Financial Services Cybersecurity Regulations for Financial Institutions (NYS-DFS 23 NYCRR 500), which became effective on March 1, 2017. Under the regulations, all "Covered Entities" are required to enact a cybersecurity program that "includes monitoring and testing … designed to assess the effectiveness of the Covered Entity's cybersecurity program." This monitoring and testing requires "continuous monitoring or periodic Penetration Testing and vulnerability assessments" (23 NYCRR 500.05). As the regulations state:

Absent effective continuous monitoring, or other systems to detect, on an ongoing basis, changes in Information Systems that may create or indicate vulnerabilities, Covered Entities shall conduct:
(a) annual Penetration Testing of the Covered Entity's Information Systems determined each given year based on relevant identified risks in accordance with the Risk Assessment; and
(b) bi-annual vulnerability assessments, including any systematic scans or reviews of Information Systems reasonably designed to identify publicly known cybersecurity vulnerabilities in the Covered Entity's Information Systems based on the Risk Assessment.

Id. (emphasis added). Compliance with these penetration testing requirements for Covered Entities is mandated as of March 1, 2018.
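For planning purposes only, the cadence above can be reduced to a simple reminder schedule. The Python sketch below assumes "bi-annual" means every six months and uses hypothetical dates; it is a scheduling aid, not legal advice on what the regulation requires.

    # Illustrative 23 NYCRR 500.05 testing calendar; dates are hypothetical
    # and "bi-annual" is assumed to mean every six months.
    from datetime import date, timedelta

    last_pen_test = date(2018, 3, 1)         # hypothetical last annual pen test
    last_vuln_assessment = date(2018, 3, 1)  # hypothetical last assessment

    next_pen_test = last_pen_test + timedelta(days=365)
    next_vuln_assessment = last_vuln_assessment + timedelta(days=182)

    print("Next penetration test due by:", next_pen_test)
    print("Next vulnerability assessment due by:", next_vuln_assessment)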

Based on these, and other, recommendations and requirements, companies are investing in pen testing, some for the first time and others with renewed focus.

Pre-Contract Diligence

Pen testers come in various forms, ranging from large, well-established consulting companies to emerging niche tech providers, and not all pen testers are created equal. Because of the sensitive nature of the work that is being performed, a company should investigate each potential pen tester before allowing it access to the company's systems and data. While each project is unique, a company should investigate a variety of factors before choosing the right provider, focusing on both the pen testing entity and its systems. For example, diligence should include considering whether the pen tester uses consultants or employees, as well as the tenure and experience of its team. Similarly, at the diligence phase, a company should understand the extent to which a pen tester utilizes any third-party systems or tools and, in any event, confirm such systems themselves have been tested and certified.

Moreover, by having the diligence memorialized as a standard "checklist," pen testers can be rigorously compared, and performing robust diligence early in the process will help avoid costly surprises and delays down the road.
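One hypothetical way to memorialize that checklist, sketched below in Python, is as a weighted scorecard so that candidate testers can be compared side by side. The criteria and weights are illustrative assumptions, not an industry standard.

    # Hypothetical vendor-diligence scorecard; criteria and weights are
    # illustrative assumptions only.
    CRITERIA = {
        "uses_employees_rather_than_subcontractors": 3,
        "average_team_tenure_over_three_years": 2,
        "third_party_tools_tested_and_certified": 3,
        "carries_cyber_liability_insurance": 2,
        "provides_reference_clients": 1,
    }

    def score_vendor(answers: dict) -> int:
        """Sum the weights of every criterion the vendor satisfies."""
        return sum(w for item, w in CRITERIA.items() if answers.get(item))

    vendor_a = {
        "uses_employees_rather_than_subcontractors": True,
        "third_party_tools_tested_and_certified": True,
    }
    print("Vendor A score:", score_vendor(vendor_a))  # prints 6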

Scope of Testing

In general, pen testing is implemented by a software vendor that undertakes certain actions (that may mimic actions that a hacker may take) to uncover vulnerabilities of a particular computer network. Because your computer network is only as safe as your least-protected link, it is important that tests cover a wide variety of software and hardware, while not losing sight of the forest for the trees. Therefore, prior to undertaking the testing, a company should, among other items, consider: Which applications and systems will be tested? What access points will be tested? Are there touch points with third-party providers (e.g., a cloud provider) that will need to be tested?
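Answering those questions in a written scope document before work begins keeps the engagement bounded. The sketch below shows one hypothetical way to encode that scope; every system, provider and window named is a placeholder, not a recommendation.

    # Hypothetical pen-test scope definition; all names are placeholders.
    PEN_TEST_SCOPE = {
        "applications": ["customer portal", "internal HR system"],
        "access_points": ["public web endpoints", "VPN gateway"],
        "third_party_touchpoints": ["cloud storage provider"],  # confirm the provider consents first
        "excluded_systems": ["production payment processor"],   # explicitly out of scope
        "testing_window": "weekends, 00:00-06:00 local time",
    }

    for category, items in PEN_TEST_SCOPE.items():
        print(f"{category}: {items}")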

Preparing for the pen testing can include the pen tester detecting information about network architecture, systems and applications, as well as profiling the company through publicly available information, in a manner similar to how a hacker might find such information. Alternatively, the company can provide this information to the tester.

Ian G. DiBernardo is the co-practice group leader of the intellectual property and technology group of Stroock & Stroock & Lavan. Jeffrey Mann is a special counsel in that group and a Certified Information Privacy Professional (CIPP/US). » Page 10

Pen Testing: The Good, The Bad and the Agreement

While confidentiality is important in many relationships, due to the unique role of a pen tester, the intricacies of the confidentiality provisions take on added significance.

The GDPR replaces the previous Data Protection Directive 95/46/EC as the governing privacy regulation in the EU.


Six Common Misconceptions About Cybersecurity

By Jed Davis

Interest in cybersecurity is escalating across the legal profession, reflecting the complex and potentially catastrophic threats that clients, particularly financial services firms, now face. The combined power, speed and baked-in vulnerabilities of information technology (IT) have given rise to previously unimaginable but now-endemic risks to organizations. Malicious actors can and do steal, lock or destroy confidential data, in bulk or in smaller but still-devastating caches, and then exploit the information's resale, extortion or spite value. Moreover, even accidental errors can cause confidential information to leak, with similarly costly regulatory, litigation and business fallout.

Because these risks are deep and potentially disastrous, lawyers are increasingly tasked with counseling clients about how to contain them. Frequently this requires dispelling clients' misconceptions about those risks and effective countermeasures. Below we explore each of six such misconceptions that often beset organizations. Avoiding these errors is essential to fulfilling the core functions of a cybersecurity program: (1) identifying cyber-risks; (2) protecting critical infrastructure using appropriate safeguards; (3) detecting incidents; (4) responding to them; and (5) recovering from them. National Institute of Standards and Technology, Framework for Improving Critical Infrastructure Cybersecurity (v. 1.0) (2014) at 7-8 (NIST Framework).

1. “We don’t face the same risks as [Name of Fortune 500 Victim of Massive Credit Card Hack].”

Got data? Then you have cyber risk. Yet, many organizations remain in denial about cyber exposure. For example, a broker-dealer that serves only institutional clients may incorrectly infer from its minimal holding of personally identifiable information (PII) that it has little to worry about. That business may not require the fortress-like protections eventually adopted by large, well-known victims of identity theft (e.g., card processors or big box stores). Even a small leak of SSNs or other PII, however, can trigger breach notification and/or remedial obligations under one or more state laws. Moreover, organizations of any size are vulnerable to an expanding array of cybercrimes, any of which can interrupt or destroy a business, including ransomware attacks, impersonation schemes to effect wire transfer frauds, and theft of inside information. Leadership needs to appreciate the severity of this new and dangerous reality. Unless and until it does, an organization is ill-prepared to develop and fulfill the core functions set forth in the NIST Framework.

Jed Davis is a partner in the New York office of Day Pitney and the co-head of its cybersecurity practice. He is a former federal cybercrimes prosecutor and also previously worked as a managing director at a global investigations firm.

2. "We can't afford new technology."

Leadership may also recognize that an organization is at substantial risk, but mistakenly assume that lack of budget » Page 10

Inside

10 | Regulatory Gap: Cybersecurity at K-12 Schools, by Nicole Della Ragione and Leora F. Ardizzone

11 | Examining Coverage for Cyber Risks Under Property and Liability Policies, by Eric B. Stern and Andrew A. Lipkowitz

Cybersecurity: Angela Turturro, Sections Editor | Rafał Pytel, Design

Got data? Then you have cyber risk. Yet, many organizations remain in denial about cyber exposure.


10 | Monday, March 5, 2018 | nylj.com

By Nicole Della Ragione and Leora F. Ardizzone

While data breaches at Equifax, Yahoo, Anthem and Target have made the national news, data breaches at school districts are not as widely publicized. Schools are a treasure trove of children's personally identifiable information (PII) (e.g., name, address, Social Security number) and protected health information (PHI), as well as the PII and PHI of faculty and payment card information (e.g., debit and credit card numbers) of parents. Schools are particularly vulnerable to attack because districts with scarce funds must devote them to education, and cannot always divert precious resources to cybersecurity. However, economizing on cybersecurity can be shortsighted. The Federal Trade Commission (FTC) reported that identity thieves often steal children's information from schools, noting that a child's identity is attractive to thieves because it is a "clean slate," enabling a thief to use a child's Social Security number to obtain employment, government benefits, or credit without detection until the child is of age to obtain credit. See Prepared Statement before the Subcommittee on Social Security of the House Committee on Ways and Means on Child Identity Theft, Field Hearing, Plano, Texas, Sept. 1, 2011.

Hackers have victimized a number of school districts. In 2013, Newsday reported that personal data of a number of students in the Sachem School District, in Suffolk County, was posted to an online forum, allegedly by a 17-year-old student in the district. Candice Rudd et al., "Holbrook teen pleads not guilty to hacking charge," Newsday (Nov. 23, 2013). USA Today reported on March 24, 2015, that the Swedesboro-Woolwich School District, in New Jersey, was held ransom by hackers. Although the school did not pay the ransom, it lost the use of its systems and had to delay certain web-based testing until its network was rebuilt. Carly Romalino, "Cyberattack disrupts school testing," USA Today (March 24, 2015).

In June 2017, the Miami Herald reported that hackers infiltrated four Florida school district networks in an effort to hack into other government agency systems, including state voter systems. Kyra Gurney, "Hack attacks highlight vulnerability of Florida schools to cyber crooks," Miami Herald (June 18, 2017). The Wall Street Journal reported on Oct. 23, 2017, that in the past year three dozen school systems in the country were hacked, resulting in the theft of paychecks and data. Even more distressing, school districts in Montana and Iowa were hacked by actors who accessed student information and sent threatening messages to school officials and parents, including threats to kill children. Tawnell Hobbs, "Hackers Target Nation's Schools," Wall St. J. (Oct. 23, 2017). On Jan. 31, 2018, the FBI issued a Private Industry Notification to the U.S. Department of Education's Office of Inspector General advising of cybercriminal threats directed at schools and students, specifically identifying "The Dark Overlord," which infected systems with ransomware and which may have stolen students' PII. "Private Industry Notification," Fed. Bureau of Investigation, Cyber Div. (Jan. 31, 2018).

While these stories are alarming, the regulatory landscape and enforcement of data breaches affecting schools is not as robust as it is in the health care, banking and retail industries. Schools that receive federal funding are subject to the Family Educational Rights and Privacy Act (FERPA), 20 U.S.C. §1232g. Generally, FERPA affords parents the right to access their children's educational records, request corrections to those records, and to limit disclosure of a child's educational record. See 20 U.S.C. §1232g; 34 C.F.R. Part 99.

While FERPA is designed to protect educational records from disclosure, there is not yet a mandate upon schools to adopt cybersecurity and privacy policies that keep pace with the trends in education toward more online teaching tools and curricula, digital record keeping and cybercrime. Moreover, even where a violation of FERPA occurs, there is no private right of action. A parent or student can file a complaint with the U.S. Department of Education (DOE) identifying an alleged violation of FERPA, but enforcement first seeks to obtain voluntary compliance; only after such efforts fail can the DOE seek to recover funds improperly spent, withhold payments or sue for enforcement. See 34 CFR §§99.63-67.

The FTC, an emerging player in cyber enforcement, has authority under the Children's Online Privacy Protection Act of 1998, 15 U.S.C. §6501 et seq. (COPPA). However, COPPA is limited to protecting children under 13, and only applies to commercial websites and online services directed to children under 13, which collect, use or disclose children's PII. A significant component of COPPA is the requirement that commercial sites directed to children under 13 receive verifiable parental consent prior to the collection of a child's PII. An important exception to the rule permits a school to stand in the shoes of a parent where web-based services are offered in schools, solely for the benefit of students and the school and so long as the child's PII will not be used for commercial purposes. "Complying with COPPA: Frequently Asked Questions," U.S. Fed. Trade Comm'n (March 20, 2015). Permitted applications include things like homework help lines, individualized online education modules, online research and web-based testing.

On Dec. 1, 2017, the FTC and DOE hosted a workshop designed to address trends in education technology, including student access to personal computing devices as well as the prevalence of online tools and curricula. Among other things, the workshop sought to clarify how the agencies can ensure student privacy is protected without interfering with education technology. Notably, a panelist, Amelia Vance, from the Education Policy Council at the Future of Privacy Forum, stated, "Schools will never be held liable under COPPA," apparently closing another avenue of mandating security and privacy policies on schools that use commercial applications in education. Transcript: FTC Workshop: Student Privacy and Ed Tech, at p. 15 (Dec. 1, 2017).

Not content to sit on the sidelines of the issue, Gov. Andrew Cuomo signed Education Law §§2-c and 2-d into law on March 31, 2014. These provisions of the Education Law apply, inter alia, to public schools and their third-party contractors. The law charges the Commissioner of the New York State Education Department (NYSED) with promulgating regulations to establish standards for data security and privacy policies to be adopted by New York schools. See N.Y.S. Ed. L. §2-d(5).

Although those regulations have not yet been promulgated, the statute requires schools to develop a provisional parents' bill of rights, and third-party contractors are currently subject to certain provisions of the law if they transmit or receive student data.

Specifically, contracts with third-party contractors must include, inter alia: (1) a data security and privacy plan (Plan) which outlines how state, federal and local data security and privacy requirements will be implemented; (2) a signed parent bill of rights; and (3) a prohibition on (a) the use of education records for any purpose other than as expressly provided in the contract, or (b) disclosure of any education records except with appropriate consents or in accordance with applicable law. See N.Y.S. Ed. L. §2-d(5)(f). Third-party contractors are also required to maintain reasonable administrative, technical and physical safeguards to protect students' PII, and to use encryption technology to protect data while stored or in transit. See N.Y.S. Ed. L. §2-d(5)(f)(4) and (5). The statute explicitly precludes any private right of action against any school, school district or the NYSED. See N.Y.S. Ed. L. §2-d(7).
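As a purely illustrative sketch of the kind of safeguard the statute contemplates for data at rest, the Python snippet below encrypts a fictitious student record with the third-party cryptography package; it is not a compliance standard, and real deployments would also address key management and data in transit.

    # Minimal illustration of encrypting a student record at rest using the
    # "cryptography" package (pip install cryptography). Record is fictitious.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice, keep the key in a key manager, not beside the data
    cipher = Fernet(key)

    student_record = b'{"name": "Jane Doe", "student_id": "000000"}'
    encrypted = cipher.encrypt(student_record)   # ciphertext safe to store on disk
    decrypted = cipher.decrypt(encrypted)        # recovering the record requires the key

    assert decrypted == student_record
    print(len(encrypted), "bytes of ciphertext")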

It is clear that while there is a duty on the part of schools to protect the privacy of students' educational records (including PII), there is no mandate to adopt and implement specific privacy and security standards as exists in health care, banking and retail. Despite the gap in the regulations, schools must be proactive in protecting students' data. The trend toward more online and web-based applications in education, and in electronic record retention, is not going to change. Therefore, the risk to children's PII and the integrity of schools' networks will only continue to increase. Schools must acknowledge that they are repositories of data desirable to hackers, and their portals to educational websites and data storage sites must be secured. School districts should start implementing best practices to meet their duties under FERPA and the NYS Education Law to protect the data of their students. Cybersecurity measures adopted by other industries are instructive, and at minimum, schools should immediately implement the best practices recommended by the FBI to the DOE. See Private Industry Notification, supra. Ignoring the lurking danger of a cybersecurity attack until government mandates are enacted is not only an avoidable mistake, but could well be a breach of a school's federal and state mandates to protect the privacy of their students' identity.

Nicole Della Ragione is an associate at Ruskin Moscou Faltischek and a member of its cybersecurity and data privacy practice group. Leora F. Ardizzone is of counsel at the firm and a member of the cybersecurity and data privacy, regulatory health law and healthcare professionals practice groups.

Regulatory Gap: Cybersecurity at K-12 Schools


The regulatory landscape and enforcement of data breaches affecting schools is not as robust as it is in the health care, banking and retail industries.

Pen Testing « Continued from page 9

Vulnerabilities that the tester may uncover can range from minor issues, like misconfigured servers or outdated program code, to major issues like compromised credentials or inadvertently exposed gateways that can leave proprietary or personal information open to attack. Regardless of the magnitude of the problem uncovered, the goal of these tests is to isolate any issues in a controlled way so that they can be rectified before a data breach or other security contravention occurs.

Confidentiality And Third-Party Concerns

It is also imperative when entering into an arrangement for penetration testing to negotiate an agreement that proactively protects the company in several key respects. This includes having robust confidentiality provisions. Beyond protecting information provided to the pen tester, confidentiality obligations should extend to information collected, even from public sources, by the pen tester during an initial investigatory phase of the testing, information obtained during the testing, results of the testing, including any reports, as well as any other information that a company would not want to become public.

Additionally, a company must also consider if any third-party information or systems are implicated in the pen testing. For example, a company undertaking pen testing should consider whether it has licensed any third-party software that is governed by an agreement prohibiting pen testing, either explicitly or implicitly. For some pen testing, the company may provide technical information pertaining to its systems and applications, and care must be taken to ensure such information does not include third-party confidential information and does not otherwise violate an obligation owed to the third party.

While confidentiality is important in many relationships, due to the unique role of a pen tester, the intricacies of the confidentiality provisions take on added significance.

Limits on Testing

In addition to choosing a pen testing company, it is important to stipulate what resources will be used and the specific parameters regarding where and by whom the testing will be performed. Preferably, a company should try to limit the testers to highly qualified individuals who are either employed by or consulting for the pen testing company, and who are bound by legal and contractual obligations to maintain the confidentiality of the company's information and the specifics of the pen testing program. It is also crucial to know if the testing will originate from the United States or a foreign country. If there are concerns about storage or access in certain foreign countries, the agreement should expressly exclude pen testing from those countries. A company should also understand whose resources will be utilized to perform the testing. For example, if hardware belonging to a third party is to be used, additional safeguards may need to be put in place.

Allocation of Risk/Liability

It would be fantastic if no vulnerabilities were found by the pen testing. However, the agreement must identify what happens if the pen testing identifies vulnerabilities or the testers are able to improperly access data or information on or about your system. Because the question of who bears the risk in the event something goes awry can be of great consequence to your company, it is wise to include robust warranties, covenants and indemnification provisions that can protect your company in the event of a future dispute. These can include service provider warranties, as well as warranties related to data accessed during the testing and afterwards. An additional covenant should require the tester to ensure the system is restored to its original level of operability (other than correction of vulnerabilities) once the testing is completed. In other words, the tester should guarantee that the testing will not harm the network or its operability following the testing. If the test damages the company's systems or makes them less secure, this could negate any benefits that pen testing is intended to generate.

For financial companies that are subject to additional regulations and regulatory oversight, there are added concerns. As part of any agreement, a financial company should seek to obtain the tester's agreement to be bound by the same data security and data privacy regulations that the financial company is bound by. The more robust this protection is in the agreement, the more helpful it will be in satisfying questions that regulators may raise regarding compliance. For example, the agreement can include indemnification for any third-party or regulatory claims that arise from the testing. Like other relationships, it is important to make sure that both sides are vested in the success and safety of the project. It can be better in an agreement to account for as many eventualities as possible, rather than trying to sort things out after an incident occurs.

Conclusion

Penetration testing is an important tool in a company's cybersecurity arsenal, and in some cases it is even required by regulators. However, to ensure the full benefit of pen testing, while avoiding additional risk, companies need to manage the project, including through diligence and thoughtful and robust contractual protections.

Misconceptions « Continued from page 9

to replace existing IT means that safety cannot be improved. This assumption perpetuates a fallacy that has fostered the prevailing unsafe state of things. Over the last four decades, layers of IT were designed and rapidly rolled out to favor connection, volume and speed. From a security perspective, this makes IT fundamentally flawed. It also means that new IT is unlikely to fix the underlying flaws because that new technology is retrofitted onto the existing, perilous structure. In these circumstances, there are lower-cost people and process improvements, which management should emphasize. For example:

• Analyze sensitive data holdings—and cut access to them (a minimal scanning sketch follows this list);

• Budget for security improvements, based on periodic penetration testing (a limited application of tech that is now affordable to most organizations); and

• Mandate yearly security awareness training of all managers and staff.
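As one low-cost illustration of the first item above, the sketch below walks a shared directory and flags files that appear to contain Social Security number patterns, so that holdings can be inventoried and access tightened. The directory path and the naive pattern are assumptions; a real data-discovery effort would sweep far more broadly.

    # Rough sketch of a sensitive-data inventory pass: flag files containing
    # SSN-like strings. Path and pattern are illustrative assumptions.
    import os
    import re

    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # naive; real scans need more care

    def find_ssn_files(root: str):
        """Yield paths of readable files under `root` that contain SSN-like strings."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "r", errors="ignore") as handle:
                        if SSN_PATTERN.search(handle.read()):
                            yield path
                except OSError:
                    continue  # skip unreadable files

    for flagged in find_ssn_files("./shared_drive"):  # hypothetical directory
        print("Review access controls for:", flagged)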

3. “Our IT director handles our cybersecurity.”

Over the last 40 years, it has also become commonplace to cabin IT management in a separate department or to outsource it to a vendor. As cyberattacks and accidents have surged, these arrangements put companies at increased peril. Cybersecurity is a multidisciplinary responsibility. As a threshold matter, technical expertise in IT and cybersecurity are not the same. IT personnel know which protocols and configurations are within expected parameters. By contrast, experts in cybersecurity know how to spot hidden intrusions and other abuse. Controlling cyber-risk can also require other expert assistance, including privileged advice from counsel, and (as most breaches occur due to human error) advice on corporate controls. Effective cybersecurity depends as well on an internal incident response team that complements IT professionals with a cross-section of troubleshooters from across the organization. That cross-section should include compliance and risk management, and also have input from ordinary employees who understand (sometimes better than anyone else) the particular risky ways that users perform that organization's work. With insights honed in realistic drills, that multidisciplinary team can develop the shared knowledge and collaborative process with which to navigate:

• The spectrum of regulatory and litigation concerns that arise in an actual or suspected breach,

• The identification and retention of outside counsel and other experts,

• Cyberliability insurance, including negotiation of coverage and issuing timely claims notice,

• Internal crisis communications, including briefing the board and senior management and obtaining their approval, and

• External communications, including addressing public or stakeholder concerns before and once a breach determination is made.

4. “We already have a detailed manual.”

In response to frequent headlines about data breaches, some financial service companies and other similarly situated businesses transpose earlier solutions to long-standing compliance regulations (e.g., the FCPA, AML laws, SEC and FINRA rules): they adopt cybersecurity manuals. While something is usually better than nothing, manuals can foster a false sense of security if they come directly and untailored from stock templates, whether supplied by counsel, a company's outside IT provider or, worse still, pulled straight off the Internet. Unless the organization thereafter applies findings about its specific risks to customize the manual, it is ill-suited to contain those risks. Moreover, in the midst of a suspected or actual breach, any manual (even a truly risk-based one) is, for reasons discussed above, cold comfort, unless complemented with rigorous drills to test and refine the company's incident response plan.

5. “We’ll need to change our approach if the SEC tightens cybersecurity rules.”

Such a change is already an imperative. The SEC has for several years been tightening requirements and appears likely to tighten up still more. Most recently, on Feb. 21, 2018, the SEC issued its (unanimous) "Commission Statement and Guidance on Public Company Cybersecurity Disclosures." Though cast as "reinforcing and expanding" a 2011 staff advisory, the new Guidance marks a new and demanding era, aimed at avoiding a recurrence of recent debacles at Yahoo, Equifax and elsewhere. Henceforth, public companies will need to file much more detailed public disclosures before, during and after actual and suspected security breaches and, concomitantly, to devote more resources to keeping such risks from ever unfolding. For example, the new Guidance emphasizes that companies must in periodic filings "provide timely and ongoing information" about "material cybersecurity risks and incidents," may need to revise prior disclosures in light of new findings, and need to evaluate continually whether their controls suffice to warn leadership in a timely fashion. Moreover, even before the Commission's recent raising of standards for public companies, the SEC staff increased its oversight of registered broker-dealers and investment advisers (BDs & IAs). Increasingly since 2014, the staff has leveraged the business continuity provisions of Regulation S-P (adopted 2004), the "red-flag" identity theft requirements of Regulation S-ID (adopted 2013) and the agency's plenary examination powers to import the criteria of the NIST Framework and prod cybersecurity upgrades at BDs & IAs. As stated and applied by the Office of Compliance Inspections and Examinations (OCIE) in Risk Alerts, "sweep" testing and in document requests and deficiency letters, the criteria used by the staff are often lifted verbatim from that Framework. As such, the SEC now contemplates that regulated entities will engage in periodic and detailed risk assessments, document existing controls and incidents, and prepare written plans for improvement. For larger BDs & IAs, these requirements are already standard and therefore no surprise. For small and mid-sized firms, however, the increased requirements are sometimes a slow-moving shock, revealed when OCIE makes its next examination visit.

6. “We’re not regulated by NYDFS, so its cyber regulations don’t matter.”

Last year, the New York State Department of Financial Services (NYDFS) promulgated the most sweeping cybersecurity regulations ever issued in the United States. Over a two-year phase-in ending March 2019, entities licensed by NYDFS are required to conduct an intensive risk assessment, implement a cybersecurity program and policies, and adopt specified mandatory security approaches, such as vulnerability and penetration testing and encryption. Moreover, NYDFS set an unprecedented 72-hour deadline for notifying the agency of cybersecurity events (currently, no other state specifies fewer than 30 days' notice). While it is possible that other jurisdictions will refrain from reaching as far as New York has, future restraint should not be assumed. Dismay over the Equifax data breach recently prompted NYDFS to propose expanding its cybersecurity regulations to govern the major credit bureaus. The public's continuing deep concern over data breaches nationally could well result in other states mandating compliance with the NIST Framework, if not necessarily in as much granular detail as NYDFS has in New York.



Monday, March 5, 2018 | 11 | nylj.com

tising injury” is typically defined as “injury, including consequential ‘bodily injury’, arising out of one or more of the following offenses: ... e. Oral or written publication, in any manner, of material that violates a person’s right of pri-vacy.” The argument raised by insureds in favor of coverage is typically that when a third-party hacker obtains personally identifi-able information the “publication” requirement of Coverage B has been satisfied. This, however, has not been a successful argument.

Nationally, courts have gener-ally rested their decisions regard-ing coverage for data breaches under a CGL policy on whether the insured was responsible for the act of “publication.” Recently, in Innovak Int’l v. Hanover Ins. Co., No. 8:16-CV-2453-MSS-JSS, 2017 U.S. Dist. LEXIS 191271 (M.D. Fla. Nov. 17, 2017), the insured was sued for damages resulting

from the release of the underly-ing claimants’ personal private information after the insured was the subject of a data breach. The District Court upheld the insur-er’s denial of coverage because there was no alleged publication of the personal information by the insured. The District Court explained that even if the hack-er’s actions in appropriating the personal information could be considered a “publication,” the policy required publication by the insured.

The Innovak holding followed that of the New York Supreme Court in Zurich American Insur-ance Company v. Sony Corpora-tion of America, No. 651982/2011, 2014 WL 8382554 (N.Y. Sup. Ct. Feb. 21, 2014), which arose out of the April 2011 hacking of Sony Corp.’s PlayStation online servic-es. The court held that there was no “publication” by the insured, rather, the only “publication” was perpetrated by the hackers, and therefore, because Coverage B was not triggered there was no coverage under the policy.

Conversely, in Travelers Indemnity Co. of America v. Por-tal Healthcare Solutions, 644 Fed.Appx. 245 (4th Cir. 2016), which arose out of a class-action wherein it was alleged that the insured

negligently permitted the class’s private medical records to be available to search engines on the Internet for more than four-months, the Fourth Circuit found a covered “publication” by the insured. There was coverage in this case because it was the insured’s act that published the medical records on the Internet. The Fourth Circuit rejected the insurer’s argument that its publi-cation was unintentional or that information was not published to a specific third party. The fact that the information was made publicly available by the insured over the Internet rendered it a covered publication.

The requirement that the act of “publication” be done by the insured, while not explicit in the policy language, is consistent with prior non-data breach case law. In Evanston Insurance Co. v. Gene by Gene, 155 F. Supp. 3d 706 (S.D. Tex. 2016), the allegations that the insured improperly pub-lished the plaintiff’s DNA results on its website triggered a duty to defend. However, in Penn-America Insurance Co. v. Tomei, No. 480 WDA 2015, 2016 WL 2990093 (Pa. Super. May 24, 2016), there was no covered publication where the insured was sued by plain-tiffs whose claims arose from the videotaping and publication by a third party of videos of patrons as they undressed during tanning sessions. The Pennsylvania court reasoned that because a third party made the video-tapes available, and not the insured, there was no publication by the insured.

The national trend is that a “publication” must be made by the insured in order to trigger cover-age under a standard CGL policy. This requirement, although not plain in the language of the stan-dard provision, is supported by the manner in which courts have historically applied the provision. Accordingly, absent the unusual circumstance where the insured publishes personal information itself, an insured is unlikely to be able to obtain coverage for third-party losses due to data breaches under their CGL policies.

Insureds who are concerned about coverage for data breaches and cyber attacks would be well-advised to purchase cyber policies and carefully review the coverage afforded therein and to make sure than any business property and crime policies are endorsed to provide coverage for cyber and electronic losses.

by eric b. stern anD anDrew a. Lipkowitz

O ver the past few years, data breaches have become more frequent and have

impacted an increasing number of people. As computer hacking and data breaches become more com-mon, an issue that is often raised is whether, and to what extent, damages resulting from these inci-dents fall within the coverage of the policies held by the corporate victims of the attacks. This article explores courts’ differing conclu-sions when faced with claims for cyber risks under different types of insurance policies, looks at some of the recent cyber-crimes and the direct financial and legal impact on businesses, and posits solutions to address insurance coverage for cyber-related risks.

A cyber-hacking or data breach event, such as the ones suffered recently by Equifax, Target, Yahoo, and Sonic, typically involves a third-party gaining unauthorized access to a company’s computer system, stealing customer infor-mation and then using that stolen information to apply for mort-gages, credit cards and student loans, and tapping into bank debit accounts, filing insurance claims and tax refunds, and racking up substantial debts. The theft of the personal financial information of their customers causes direct loss to the company itself, through lost records, reputational damage, business interruption, and costs to correct and repair the damage done by intruders, and may also subject the company to lawsuits from their customers.

Naturally, companies have sought coverage for these cyber-losses from their insurers. An insured seeking to protect itself from losses due to data breaches and cyber-attacks can procure specific first-party policies that will cover such loss. For example, certain property policies have been found to provide coverage for data breaches when the policy specifically defines covered property to include electronic data.

In NMS Services v. The Hartford, 62 Fed. Appx. 511 (4th Cir. 2003), the Fourth Circuit held that there was coverage under a business property policy for an insured's loss of business and costs to restore records lost when a former employee hacked into the insured's network. Similarly, in Lambrecht & Associates v. State Farm Lloyds, 119 S.W.3d 16 (Tex. App. Ct. 2003), the insured suffered direct losses due to a hack of its system. The Texas Court of Appeals found that the insurer could not prove as a matter of law that the damaged property was not covered under the insured's business property policy, which covered "accidental direct physical loss to business personal property." However, the court also denied the insured's motion for summary judgment, finding an issue of fact as to whether the insured's losses were "accidental."

Under certain circumstances, crime policies may also provide coverage for the insured's direct loss as a result of a data breach. In Retail Ventures v. National Union Fire Ins. Co. of Pittsburgh, Pa., 691 F.3d 821 (6th Cir. 2012), the insured incurred $6.8 million in losses arising from a data breach caused by a hacker that compromised customer credit card and checking account information. The insured was covered by a blanket crime policy, which contained a specific rider that covered computer fraud. As a result, the expenses related to the hack, including attorney fees associated with municipal investigations, were all found to be covered.

Although business property and crime policies may provide coverage for direct losses suffered by the insured as a result of a data breach, there is no coverage for liability to third parties under these policies. For example, in Camp's Grocery v. State Farm Fire & Cas. Co., No. 4:16-cv-0204-JEO, 2016 U.S. Dist. LEXIS 147361 (N.D. Ala. Oct. 25, 2016), the court rejected the insured's argument that an inland marine endorsement in the policy provided coverage for an underlying lawsuit arising from a data breach, holding that the endorsement only provided first-party coverage for certain computer-related losses and did not provide coverage against claims brought by third parties.

The policies available on the market that may provide coverage for liability due to data breaches are cyber-policies. However, cyber-policies vary; they are not held by all companies, and not all liabilities may be covered. For instance, in P.F. Chang's China Bistro v. Fed. Ins. Co., No. CV-15-01322-PHX-SMM, 2016 U.S. Dist. LEXIS 70749 (D. Ariz. May 26, 2016), the insured's credit card transactions were hacked by a third party. The insurer covered substantially all of the damages suffered directly by the insured, as well as the liability claims brought by the insured's customers. However, the district court found that there was no coverage for the fees the insured owed to its credit card service provider as a result of the breach. Unlike the customers, who suffered a covered "Privacy Injury," the service provider did not suffer any covered injury and, as a result, there was no coverage for the fees.

Insureds have also sought coverage for data breaches and cyber-attacks from their commercial general liability insurers. The oft-used theory for coverage for these lawsuits is that the data breach is a covered "publication" under Coverage B of the standard Commercial General Liability policy. While policies may differ, "personal and adver-

Examining Coverage for Cyber Risks Under Property and Liability Policies

ERIC B. STERN is a partner in Kaufman Dolowich & Voluck's Woodbury, N.Y., office, where he concentrates his practice on all aspects of insurance coverage litigation. ANDREW A. LIPKOWITZ is an associate in the office and primarily focuses his practice on insurance coverage litigation and monitoring.

GDPR « Continued from page 9

cessing is performed in accordance with the GDPR; and (2) review and update those measures where necessary. Organizations are directed to take into account "the state of the art and the costs of implementation" and "the nature, scope, context, and purposes of the processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons." The GDPR provides suggestions (although no mandates) for which measures might be considered "appropriate to the risk." The pseudonymization and encryption of personal data, the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services, the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident, and the creation of a process for regularly testing, assessing and evaluating the effectiveness of technical and organizational measures for ensuring the security of the processing will give organizations a good starting point for mapping out their compliance efforts.
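To make one of these suggested measures concrete, a common approach to pseudonymization is to replace direct identifiers with keyed hashes and store the key separately from the data set. The short Python sketch below is purely illustrative; the function name and key handling are our own assumptions, not terminology drawn from the GDPR or any regulator's guidance.

import hashlib
import hmac

# Keyed hashing (HMAC) makes it infeasible to reverse a pseudonym without
# the secret key, which should be stored separately from the pseudonymized
# data set (for example, in a key management system). Illustrative only.
SECRET_KEY = b"replace-with-a-key-held-outside-the-data-set"

def pseudonymize(identifier: str) -> str:
    # Return a stable pseudonym for a direct identifier such as an email address.
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The same input always produces the same pseudonym, so records can still be
# linked or de-duplicated without exposing the underlying identifier.
print(pseudonymize("jane.doe@example.com"))

Because the pseudonyms are stable, record-keeping and analytics can continue on the transformed data, while re-identification requires access to the separately stored key.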

DPIAs. Historically, national data protection authorities in Europe (DPAs) have recommended privacy impact assessments (PIAs), tools used to identify and mitigate privacy risks during the design phase of a project, as an element of privacy by design. Under Article 35 of the GDPR, data protection impact assessments (DPIAs)—a more robust version of the PIA—are now mandatory when an organization is engaging in activities that pose a high risk to an individual's rights and freedoms. The DPIA presents an opportunity to demonstrate that safeguards have (hopefully) been integrated into an organization's data processing activities and that the risks presented by a processing activity have been sufficiently mitigated.
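As a rough illustration of how a DPIA-screening step might be operationalized, the Python sketch below flags projects that match the high-risk indicators named in Article 35(3). The indicator names and the function are our own shorthand; a real process should also consult the screening lists published by the competent supervisory authority.

# Illustrative pre-screening for whether a DPIA may be required.
# The indicators paraphrase the high-risk processing examples in Article 35(3).
HIGH_RISK_INDICATORS = {
    "systematic_automated_profiling_with_legal_or_similar_effects",
    "large_scale_processing_of_special_category_data",
    "large_scale_systematic_monitoring_of_public_areas",
}

def dpia_recommended(project_characteristics: set) -> bool:
    # Flag the project for a DPIA if it matches any high-risk indicator.
    return bool(project_characteristics & HIGH_RISK_INDICATORS)

# Example: a proposed video-analytics roll-out across retail locations.
print(dpia_recommended({"large_scale_systematic_monitoring_of_public_areas"}))  # True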

While the risk analysis itself is largely left in the hands of each organization, determinations that are wildly off-base may not be defensible. However, if an organization can justify its position, relying on industry practice or other guidance, it may be able to avoid significant fines even if regulators ultimately determine that additional measures were required. Notably, the failure to complete a DPIA itself could result in fines of up to 10 million euros or up to 2 percent of total worldwide turnover for the preceding year.

Records of Processing. Under the Directive, organizations were obligated to notify and register processing activities with local DPAs. The GDPR eliminates this requirement and instead puts the burden on both controllers and processors to maintain an internal record of processing activities, which must be made available to DPAs upon request. These records must contain all of the following information: (1) the name and contact details of the controller and, where applicable, the data protection officer; (2) the purposes of the processing; (3) a description of the categories of data subjects and of the categories of personal data; (4) the categories of recipients to whom the personal data have been or will be disclosed, including recipients in third countries or international organizations; (5) the transfers of personal data to a third country or an international organization, including the documentation of suitable safeguards; (6) the envisaged time limits for erasure of the different categories of data; and (7) a general description of the applied technical and organizational security measures. Where processing activities take place across a variety of disconnected business units, organizing these records may be challenging. Organizations will need to audit each of their business units and their corresponding systems and processes to determine their processing activities, and consider moving to a more centralized system.
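For teams that keep these records electronically, the seven required elements map naturally onto a simple data structure. The Python sketch below is one possible starting point; the field names and example values are our own and are not prescribed by the GDPR.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ProcessingRecord:
    # One entry in an internal record of processing activities,
    # mirroring the seven elements listed above.
    controller_name: str
    controller_contact: str
    data_protection_officer: Optional[str]       # where applicable
    purposes: List[str]                           # purposes of the processing
    data_subject_categories: List[str]            # e.g., customers, employees
    personal_data_categories: List[str]           # e.g., contact details, payment data
    recipient_categories: List[str]               # including recipients in third countries
    third_country_transfers: List[str]            # destinations and documented safeguards
    erasure_time_limits: Dict[str, str]           # envisaged retention per data category
    security_measures: List[str] = field(default_factory=list)  # general description

record = ProcessingRecord(
    controller_name="Example Co.",
    controller_contact="privacy@example.com",
    data_protection_officer="Jane Doe",
    purposes=["email marketing"],
    data_subject_categories=["newsletter subscribers"],
    personal_data_categories=["name", "email address"],
    recipient_categories=["email service provider (U.S.)"],
    third_country_transfers=["United States -- standard contractual clauses"],
    erasure_time_limits={"subscriber data": "24 months after last interaction"},
    security_measures=["encryption at rest", "role-based access controls"],
)

Keeping entries like this in a central repository also makes it easier to produce the records on request and to spot business units whose processing has not yet been documented.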

Next Steps: Preparing For May 25th and Beyond

Between now and May 25th, organizations should be focused on creating the processes and documents that will help tell the story of their GDPR compliance:

• Investigate and document the flow of data through your organization. Understand the sources of data the organization has control over, the systems or databases that data is stored in, the controls in place to protect that data, and how and when it's transmitted to third parties.

• Create records of processing, along with a process for keeping those records up to date going forward.

• Audit vendors and update agreements to include GDPR-compliant provisions.

• Track the key requirements of the GDPR and document the data protection policies in place to address those obligations. Create procedures for data breach response, data retention, and responding to data subject requests.

• Create a DPIA process—including a system to determine when a DPIA is needed and the team in charge of completion.

• Create a schedule and process to periodically audit the effectiveness of your data governance program.

• Conduct annual privacy training for employees.

While the process of preparing for the GDPR may be lengthy and expensive, it may ultimately give information security and internal data governance teams the resources needed to manage an organization's data more effectively and strategically. And, as the GDPR creates affirmative obligations for controllers to vet third-party vendors for compliance with its requirements, being able to demonstrate compliance through a strong data governance program won't just be a regulatory obligation; it may be a selling point that distinguishes you as an organization that is safe to do business with.

