A substantial rise in schools’ use of online educational technology products has caused educators to become increasingly reliant on these products to develop their curricula, deliver materials to students in real time, and monitor students’ progress and learning habits through the collection of data by third-party cloud computing service providers. Unfortunately, with these advances come the data security concerns that go hand-in-hand with cloud computing—such as data breaches, hacking, spyware, and the potential misappropriation or misuse of sensitive personal information. With the Family Educational Rights and Privacy Act (FERPA)—federal legislation enacted to safeguard the privacy of student data—in place for four decades, the education sector is ripe for new standards and guidance on how to protect students’ personal information in the era of cloud computing. California has tackled this issue head-on, with its legislature’s passage of two education data privacy bills on August 30, 2014. Senate Bill 1177 and Assembly Bill 1442 (together, the Student Online Personal Information Protection Act (SOPIPA)) create privacy standards for K-12 school districts that rely on third parties to collect and analyze students’ data, and require that student data managed by outside companies remain the property of those school districts and remain within school district control.
California has long been an innovator in the realm of privacy law, having enacted the nation’s first data breach notification law in 2003 and, more recently, a 2013 law granting children and young adults the right to delete posted content from online services, mobile apps, or other digital services for which they are registered users. The passage of SOPIPA is a significant milestone in education data privacy law reform. Acting as a measure to fill in FERPA’s gaps, the bills place restrictions on companies that operate online sites or online applications, or that provide web-based services, to K-12 students.
Schools have always kept records of students to track their individual progress, as well as to create databases aggregating information such as test scores, attendance records and demographic data in order to meet benchmarks and develop curricula. Whereas teachers and school administrators previously aggregated student records themselves, it is now the norm for educators to outsource this task, as management of such databases can be more efficiently and systematically performed by privately-owned and operated education service providers, websites, and app makers. According to estimates provided by the Software and Information Industry Association, a U.S.-based software and information trade association, the market for education software for pre-K through 12th grade students was approximately $8 billion in the 2011-12 school year, up $500 million from only two years prior. One of the reasons for this increase is the fact that school districts often lack the technical expertise to create and manage these databases. The development of cloud-based computing and technology products that operate online has resulted in an increased number of third-party operators that collect and possess sensitive student data, including grades, disciplinary history, and demographic information.
The challenge with this practice is that third-party operators are not subject to the provisions of FERPA. Included among FERPA’s requirements is a mandate for schools that receive federal funding to: (i) allow parents access to their children’s files in order to request corrections; and (ii) obtain parents’ consent before sharing such information. Enacted in 1974, FERPA is ill-equipped to adequately safeguard against 21st century education data security concerns. Under the law, “an educational agency or institution may disclose personally identifiable information from an education record only on the condition that the party to whom the information is disclosed will not disclose the information to any other party without the prior consent of the parent or eligible student.” 34 CFR § 99.33(a)(1). Plainly stated, FERPA applies only to the schools themselves, not to third-party cloud computing or online service providers. If a school provides student data to such a service provider, the regulation allows the school to disclose that data without parental or student consent because a “contractor, consultant, volunteer or other party” to whom a school or school district has outsourced institutional functions “may be considered a school official” and thus is shielded from liability, even if the Department of Education (DOE) alleges a FERPA violation against the school or school district. 34 CFR § 99.31(a)(1)(i)(B)(1)-(3). Even more troubling, only the DOE can enforce FERPA against a school; parents and students have no private cause of action under the law.
Why is this an issue? Currently, information provided to a school about a child’s medical history, behavioral issues or academic performance—potentially damaging information—could be leaked, exposed by hackers, or more likely, sold to advertisers by the private companies hired by the schools themselves. In a December 2013 report entitled Privacy and Cloud Computing in Public Schools, Fordham Law School’s Center on Law and Information Privacy surveyed twenty school districts across the country and uncovered the following:
- 95% of the school districts surveyed rely on cloud computing for multiple functions, including monitoring student performance, providing support for classroom activities, data hosting and student guidance.
- Only 25% of the school districts inform parents of their use of cloud services.
- 20% of school districts do not have policies governing the use of their online services.
- Only 25% of the contracts between the school districts and cloud service providers give schools the right to audit and inspect the service provider’s practices with respect to the student data collected.
- Fewer than 7% of the contracts between the school districts and cloud service providers restrict the sale or marketing of student information by vendors.
- Only one contract required the cloud service provider to notify the school district in the event of a data security breach.
California’s law, if signed by Governor Jerry Brown, would fill this gap. Under SOPIPA, any operator of an online site, service, or application to which student data is provided would be prohibited from using, sharing, disclosing, or compiling personal information about a K-12 student for any purpose other than the K-12 school purpose. When an operator is no longer using the information for a legitimate educational purpose, when the student requests deletion, or when the student ceases to attend the school or school district using the operator’s services, the student’s personal information must be deleted. Finally, SOPIPA creates a private right of action for parents or students alleging that an online service provider has violated the statute. Under the law, a SOPIPA violation would be an unlawful business practice, allowing individuals as well as government entities to seek judicial remedies.
While it is expected that Governor Brown will sign SOPIPA into law soon, the fate of education data privacy across the rest of the U.S. remains to be determined. Although data privacy is a perennial hot-button issue, the dialogue surrounding protection of personal data against its appropriation or misuse tends to focus not on education data privacy, but on the interactions between businesses and consumers, and the ways in which technological advances have made both groups vulnerable to data breaches, hackers, malware and the like. The tide may be turning, however. In February 2014, the DOE released guidance on FERPA compliance aimed at providing educators and parents alike with information on how to protect student privacy while using online educational services. While well intentioned, the DOE’s guidance only went so far as to acknowledge that advances in technology have changed the way student data is used and that such use “raises new questions” about privacy protection. In addition, as of April 2014, 83 bills concerning education data security were being considered in 32 states, according to the Data Quality Campaign, a nonpartisan educational advocacy organization. Perhaps most promising is the Protecting Student Privacy Act (PSPA), bipartisan legislation introduced in late July by Senators Edward J. Markey (D-MA) and Orrin Hatch (R-UT). The PSPA would amend FERPA to mandate the withholding of federal funding from school districts that fail to put data security safeguards in place for sensitive student data held by third parties to whom educational functions are outsourced, and to give parents greater access to that data. In short, the PSPA would apply rules to the data that schools share with outside parties similar to the constraints imposed by FERPA on schools themselves, echoing the California legislation.
More than anything, these developments signal that educators and legislators must work together to strike a balance among student privacy, technological innovation and student data needs.
In a recent article published by Law360, Proskauer litigation associate Courtney Bowman outlines how companies can make inroads in the e-commerce market in the Middle East and North Africa (MENA). Although often overlooked, the region’s relative wealth and level of internet penetration make its more stable areas attractive markets for those companies willing to undertake the steps necessary to understand the region’s cultural nuances and customer preferences. Two of the most significant barriers to e-commerce growth in MENA are customers’ widespread reluctance to shop online, owing to fears about the security of online transactions, and the region’s low rate of credit card use. As the article notes, however, e-commerce companies willing to offer solutions tailored to address these concerns, such as cash cards and m-payment systems, may be poised to establish a potentially lucrative presence in this part of the world.
Corporate Counsel published an article authored by Nolan Goldberg, Senior Counsel, Intellectual Property and Technology, concerning the recent decision compelling Microsoft to produce e-mails located on foreign servers. The article, entitled “Is the Flap Over Microsoft Emails in Ireland Overblown?”, provides a counter-point to critics who believe that Judge Preska’s Order will have broad implications for the U.S. technology industry.
Capital One Financial Corp. (“Capital One”) and three collection agencies have agreed to pay one of the largest settlement amounts in history — $75.5 million — to end a consolidated class action lawsuit alleging that the companies used an automated dialer to call customers’ cellphones without consent in violation of the twenty-two-year-old Telephone Consumer Protection Act (“TCPA”). Judge Holderman of the Northern District of Illinois preliminarily approved the settlement in late July.
TCPA Allegations and the Proposed Settlement
In 2012, separate cases against each defendant were consolidated by the U.S. Panel on Multidistrict Litigation because the collection firms were collecting debts on behalf of Capital One. The allegations were largely the same: that the companies used autodialers and/or pre-recorded messages in calls to cell phones without the consumers’ express consent.
According to the settlement agreement, Capital One, without admitting any wrongdoing, will pay $73 million into the settlement fund, with the other three companies contributing approximately $2.5 million. According to estimates provided by class counsel in its July 14th court submission, the proposed agreement would provide between $20 and $40 to each member of the class, which is estimated to include about 21 million people. The class is defined to include all people in the United States who received a call to a cellphone from Capital One’s automatic telephone dialing system in an attempt to collect on a credit card debt between January 2008 and June 2014, as well as those who received such calls from the participating vendors between February 2009 and June 2014.
The TCPA provides redress for those who receive unsolicited telephone calls, texts or faxes, and includes statutory penalties of $500 per violation and $1,500 for willful violations. As such, commentators have noted what class counsel acknowledged to the court; i.e., the settlement fund “does not constitute the full measure of statutory damages potentially available to the class,” but counsel argued that this fact “should not weigh against preliminary approval.”
If the settlement is approved, up to 30 percent of the settlement amount (about $22.5 million) will be awarded to the consumers’ attorneys, and each of the five lead plaintiffs will receive no more than $5,000.
In addition to any monetary award, the settlement identifies the “core relief” as changes to Capital One’s business practices, and notes that Capital One already has “developed and implemented significant enhancements to its calling systems designed to prevent the calling of a cellular telephone with an autodialer unless the recipient of the call has provided express consent.”
In seeking preliminary approval of the proposed settlement, plaintiffs’ attorneys noted that the parties remained sharply divided on many issues, including “three critical ones.” The disagreements include the defenses that Capital One had raised, including: (i) whether the Capital One customer agreement provided the prior express consent to make automated calls to class members’ cell phones; (ii) whether the statute allows “prior express consent to be obtained after the transaction that resulted in the debt owed”; and (iii) whether the action could fairly be maintained as a class action given the predominance of individual issues.
It bears mentioning that the Capital One litigation was filed prior to the adoption of the October 2013 amendments to the regulations implementing the TCPA. Under the new rules now in effect, for-profit businesses must obtain “prior express written consent” before making any type of call or sending any text message to a cellphone using an autodialer or prerecorded voice.
The Capital One settlement is subject to formal approval, including notification to the class. The final hearing on the settlement is set for December 9, 2014.
In April, Microsoft tried to quash a search warrant from law enforcement agents in the United States (U.S.) that asked the technology company to produce the contents of one of its customer’s emails stored on a server located in Dublin, Ireland. The magistrate court denied Microsoft’s challenge, and Microsoft appealed. On July 31st, the software giant presented its case in the Southern District of New York where it was dealt another loss.
U.S. District Judge Loretta Preska, after two hours of oral argument, affirmed the magistrate court’s decision and ordered Microsoft to hand over the user data stored in Ireland in accordance with the original warrant. Microsoft argued that the warrant exceeded U.S. jurisdictional reach. However, the court explained that the decision turned on section 442(1)(a) of the Restatement (Third) of Foreign Relations Law. The provision says that a court can permit a U.S. agency “to order a person subject to its jurisdiction to produce documents, objects or other information relevant to an action or investigation, even if the information or the person in possession of the information is outside the United States.” Because Microsoft is located in the U.S., the information it controlled abroad could be subject to domestic jurisdiction.
Microsoft had the support of large U.S. technology companies, including Apple, AT&T and Verizon. The larger issue for these companies lies in the U.S. government’s power to seize data and content held in the cloud and stored in locations around the world. When a conflict arises between the data sharing laws of the country where the servers are located and U.S. law, these companies can be put in the difficult position of having to choose one country’s laws over the other’s.
Microsoft further argued that the ramifications for international policy are substantial. The company argued that compelling production of foreign stored information was an intrusion upon Irish sovereignty. It said that the decision could be interpreted by foreign countries as a green light to make similar invasions into data stored in the U.S. However, Judge Preska dismissed these concerns as diplomatic issues that were incidental and not of the court’s immediate concern.
The order has been stayed pending appeal.
On August 7, 2014, the PCI Security Standards Council issued new guidance to supplement PCI DSS version 3.0 and help organizations reduce the risks associated with entrusting third-party service providers (“TPSPs”) with consumer payment information. More and more merchants use TPSPs to store, process and transmit cardholder data or to manage components of their cardholder data environments. A number of studies have shown that breaches are increasingly tied to security vulnerabilities introduced by third parties. To combat such risk, a PCI special interest group made up of merchants, banks and TPSPs, together representing more than 160 organizations, created practical guidelines for how merchants and their business partners can work together to comply with the existing PCI standard and protect against breaches.
Below are some high-level recommendations found in the “Information Supplement: Third-Party Security Assurance”:
TPSP Due Diligence: Conduct due diligence and risk assessment when engaging TPSPs to determine whether the skills and experience of the TPSP are appropriately suited to the task. Ask:
- What technology and system components are used by the TPSP for the services?
- Does the TPSP use other third parties?
- What other core processes or services are housed in TPSP facilities?
- How many facilities does the TPSP have where cardholder data will be located?
- Consult with your “acquiring bank”, “merchant bank”, or “acquiring financial institution” (each an “Acquirer”) to ensure the TPSP services are approved.
- Review the participating payment card brand service-provider listings and websites as well as the PCI DSS validation documents.
- Perform a risk assessment on the TPSP based on industry-accepted methodology.
Engaging the TPSP: Implement a process for engaging TPSPs.
- Set forth the expectations of all parties involved and review expectations at least annually so as to keep a consistent and mutually agreed upon mode of operation.
- Assess scope of TPSP’s responsibility and consider including contractual provisions in documents with TPSPs that require evidence sharing.
- Establish a communication schedule so that changes are communicated to the appropriate people in a timely manner.
- Track how the TPSP’s services and products match up with the PCI DSS requirements.
Written Agreements, Policies and Procedures: Once a TPSP is chosen, the entity and the TPSP should memorialize the agreement in writing.
- If a TPSP claims its services are PCI DSS Compliant, consider documenting such compliance, the date of compliance assessment and any components that were excluded from the assessment.
- An entity should keep in mind all regional requirements that apply, such as state-specific requirements and all legislative considerations such as definitions of protected information and breach-notification thresholds.
- Review agreements with Acquirers to ensure TPSPs are meeting additional requirements.
- Review compliance programs for each payment card brand to make sure the TPSP is in compliance.
- Keep industry specific regulations in mind.
- Make the TPSP aware of the company’s incident response plan, its requirements and the allocation of responsibility in the case of a suspected data breach.
- Consider what requirements and responsibilities will continue to apply to the TPSP even after the engagement has formally ended (e.g., if a TPSP continues to store an entity’s cardholder data as part of a backup system).
Monitor Third-Party Service Provider Compliance Status: Develop a robust compliance monitoring program and document it.
- Make sure all resources involved in monitoring understand the scope of the cardholder data environment and establish a deliverable for the TPSP.
- Set forth a procedure for maintaining the TPSP list which includes information such as name and primary points of contact at the TPSP, specific services provided, last date of review, etc.
- Consider including the following in your TPSP monitoring procedure: a list of evidence and supporting documentation that will be collected from the TPSP, a detailed description of the PCI DSS compliance status, a report template, details describing how status review results are to be shared and approved, and policies for retention of monitoring program data.
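The TPSP list described above lends itself to a simple structured record. The sketch below is a hypothetical illustration only (the field names and review-age threshold are assumptions, not taken from the PCI guidance) of how a compliance team might model that list and flag providers whose last review falls outside the at-least-annual cadence the guidance recommends:

```python
# Hypothetical TPSP monitoring register -- illustrative field names based on
# the items the PCI guidance suggests tracking (provider name, primary point
# of contact, services provided, last date of review).
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class TPSPRecord:
    name: str
    primary_contact: str
    services_provided: list
    last_review: date
    pci_dss_compliant: bool = False
    evidence_collected: list = field(default_factory=list)

    def review_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag providers not reviewed within the assumed annual window."""
        return (today - self.last_review) > timedelta(days=max_age_days)

register = [
    TPSPRecord("Acme Payments", "jane@acme.example", ["card processing"],
               last_review=date(2013, 6, 1), pci_dss_compliant=True),
    TPSPRecord("CloudHost Co.", "ops@cloudhost.example", ["data hosting"],
               last_review=date(2014, 5, 15)),
]

today = date(2014, 8, 7)  # date the Council's guidance was issued
overdue = [r.name for r in register if r.review_overdue(today)]
print(overdue)  # ['Acme Payments']
```

In practice the register would also carry the evidence list, compliance-status details and retention policies enumerated in the bullet above; the point of the sketch is simply that keeping the list as structured data makes the annual-review check mechanical.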
By properly implementing a third-party assurance program, a company can help ensure that cardholder data is kept safe and handled in a compliant manner.
On July 23, 2014, the Massachusetts Attorney General announced a consent judgment with an out-of-state hospital, Women & Infants Hospital of Rhode Island (“WIH” or the “Hospital”), resolving a lawsuit against WIH for violations of federal and state information security and privacy laws involving the loss of over 12,000 Massachusetts residents’ sensitive patient health records. The regulations and laws at issue were Mass. G.L. c. 93A, Mass. G.L. c. 93H and its implementing regulations codified at 201 C.M.R. 17.00 et seq., as well as federal regulations under the Health Insurance Portability and Accountability Act (“HIPAA”).
Massachusetts’ data security regulations, 201 C.M.R. 17.00 et seq., are among the most comprehensive in the country. When the regulations first went into effect in March of 2010, many wondered whether the Massachusetts Attorney General would pursue actions against out-of-state enterprises, given the regulations’ unique reach to all “persons” or entities inside or outside of Massachusetts that own or license the personal information of Massachusetts residents. Since 2010, however, the Massachusetts Attorney General has predominantly focused its efforts on data breaches at Massachusetts-based businesses—launching enforcement proceedings against Massachusetts hospitals, a major Boston restaurant group, and a medical billing practice and its associated medical providers.
In 2011, WIH misplaced nineteen backup tapes from two prenatal centers—one in Providence, Rhode Island and one in New Bedford, Massachusetts. The tapes contained personal information and protected health information, including patients’ names, dates of birth, Social Security numbers, dates of medical examinations, physicians’ names and ultrasound images, for 12,127 Massachusetts residents and approximately 1,200 Rhode Island residents. The Massachusetts Attorney General’s Office cited “deficient employee training and internal policies” that prevented the breach from being discovered and reported in a timely manner. The Hospital did not discover that the tapes were missing until the spring of 2012 and did not report the breach to consumers and the Massachusetts Attorney General’s Office until the fall of 2012.
The consent agreement requires the Hospital to pay $150,000 to the Commonwealth of Massachusetts and to take steps to ensure compliance with state and federal security laws, including hiring an outside firm to perform audits and maintaining an up-to-date inventory of all locations, custodians, and descriptions of unencrypted electronic media and patient charts containing personal information. Unlike its Massachusetts counterpart, however, the Rhode Island Attorney General did not bring a civil suit against WIH, stating that under the Rhode Island identity theft protection law, the Attorney General was satisfied by the actions the hospital took to notify Rhode Island residents potentially impacted by the data breach and to offer them one year of credit monitoring. This contrast may be a sign of Massachusetts’ more aggressive approach to privacy and data security enforcement.
The case is significant because it represents one of the first Massachusetts enforcement actions against an out-of-state entity under both Massachusetts regulation 201 C.M.R. 17.00 and the new provisions of the Health Information Technology for Economic and Clinical Health (“HITECH”) Act. The HITECH Act provides state attorneys general with the authority to enforce out-of-state violations of HIPAA, including improper disclosure of Protected Health Information (“PHI”), on behalf of state residents. Thus, this case also represents the continued efforts of state attorneys general to use their relatively new enforcement power to enforce HIPAA under HITECH.
If this consent judgment is representative of future privacy enforcement proceedings launched by the Massachusetts Attorney General, then businesses outside the Commonwealth that hold the personal information of Massachusetts residents may be well-advised to broadly re-examine their data security procedures, including preventative measures, to avoid running afoul of Massachusetts’ strict data security regulations. Furthermore, any business entity that handles PHI protected by HIPAA and the HITECH Act may want to undergo a similar internal data security review, given the increasing frequency of enforcement proceedings by attorneys general nationwide.
As we’ve previously reported, cyber risks are an increasingly common risk facing businesses of all kinds. In a recent speech given at the New York Stock Exchange, SEC Commissioner Luis A. Aguilar emphasized that cybersecurity has grown to be a “top concern” of businesses and regulators alike and admonished companies, and more specifically their directors, to “take seriously their obligation to make sure that companies are appropriately addressing those risks.”
Commissioner Aguilar, in the speech delivered as part of the Cyber Risks and the Boardroom Conference hosted by the New York Stock Exchange’s Governance Services department on June 10, 2014, emphasized the responsibility of corporate directors to consider and address the risk of cyber-attacks. The commissioner focused heavily on the obligation of companies to implement cybersecurity measures to prevent attacks. He lauded companies for establishing board committees dedicated to risk management, noting that since 2008, the number of corporations with board-level risk committees responsible for security and privacy risks had increased from 8% to 48%. Commissioner Aguilar nevertheless lamented what he referred to as the “gap” between the magnitude of cyber-risk exposure faced by companies today and the steps companies are currently taking to address those risks. The commissioner referred companies to a federal framework for improving cybersecurity published earlier this year by the National Institute of Standards and Technology, which he noted may become a “baseline of best practices” to be used for legal, regulatory, or insurance purposes in assessing a company’s approach to cybersecurity.
Cyber-attack prevention is only half the battle, however. Commissioner Aguilar cautioned that, despite their efforts to prevent a cyber-attack, companies must prepare “for the inevitable cyber-attack and the resulting fallout.” An important part of any company’s cyber-risk management strategy is ensuring the company has adequate insurance coverage to respond to the costs of such an attack, including litigation and business disruption costs.
The insurance industry has responded to the increasing threat of cyber-attacks, such as data breaches, by issuing specific cyber insurance policies, while attempting to exclude coverage of these risks from their standard CGL policies. Commissioner Aguilar observed that the U.S. Department of Commerce has suggested that companies include cyber insurance as part of their cyber-risk management plan, but that many companies still choose to forego this coverage. While businesses without cyber insurance may have coverage under existing policies, insurers have relentlessly fought to cabin their responsibility for claims arising out of cyber-attacks. Additionally, Commissioner Aguilar’s speech emphasizes that cyber-risk management is a board-level obligation, which may subject directors and officers of companies to the threat of litigation after a cyber-attack, underscoring the importance of adequate D&O coverage.
The Commissioner’s speech offers yet another reminder that companies should seek professional advice in determining whether they are adequately covered for losses and D&O liability arising out of a cyber-attack, both in prospectively evaluating insurance needs and in reacting to a cyber-attack when the risk materializes.
Read Commissioner Aguilar’s full speech here.
Over the past decade, the EU has made significant technological and legal strides toward the widespread adoption of electronic identification cards. An electronic ID card, or e-ID, serves as a form of secure identification for online transactions – in other words, it provides sufficient verification of an individual’s identity to allow that person to electronically sign and submit sensitive documents such as tax returns and voting ballots over the Internet. Many people see e-IDs as the future of secure identification since they offer the potential to greatly facilitate cardholders’ personal and business transactions, and the EU Commission has recognized this potential by drafting regulations meant to eliminate transactional barriers currently hindering the cards’ cross-border reach. However, the increasingly widespread use of e-ID systems also gives rise to significant data security concerns.
Countries including Spain, Italy, Germany, and Belgium have already adopted e-ID systems, and the precise mechanics of the systems differ from country to country. In Estonia’s system, for example, each e-ID carries a chip with encrypted files that provide proof of identity when accessed by a card reader (which a cardholder may purchase and connect to his or her computer). Once the card is inserted into the reader, the user enters different PINs to access the appropriate database and electronically sign e-documents.
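Conceptually, the chip-and-PIN flow just described amounts to a PIN-gated digital signature: the private key never leaves the card, and the card will only sign once the correct PIN is entered. The toy Python sketch below illustrates the idea with deliberately tiny textbook RSA numbers (real cards use hardware-protected 2048-bit keys); every name and value here is an illustrative assumption, not a description of any actual national system:

```python
# Toy illustration (NOT real cryptography) of PIN-gated challenge-response
# signing. The small primes make the math readable; they offer no security.
import hashlib

# Hypothetical key pair (textbook RSA)
p, q = 61, 53
n = p * q                           # public modulus (3233)
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (would stay on the chip)

def chip_sign(challenge: bytes, pin_ok: bool) -> int:
    """The card signs a server challenge, but only after PIN verification."""
    if not pin_ok:
        raise PermissionError("PIN required before the chip will sign")
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)        # sign with the private key

def server_verify(challenge: bytes, signature: int) -> bool:
    """Anyone holding the public key (e, n) can check the signature."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = chip_sign(b"login-nonce-42", pin_ok=True)
print(server_verify(b"login-nonce-42", sig))  # True

# Without the PIN, the chip refuses to sign at all:
try:
    chip_sign(b"login-nonce-42", pin_ok=False)
except PermissionError as exc:
    print(exc)
```

The design point the sketch captures is separation of duties: possession of the card plus knowledge of the PIN is required to sign, while verification needs only the public key, which is why relying parties such as banks and tax authorities can check identities without holding any secrets.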
In fact, as recently detailed in The Economist, the small Baltic country of Estonia has one of Europe’s most highly developed e-ID systems and exemplifies the underlying potential of this technology. Around 1.1 million of the country’s 1.3 million residents have electronic ID cards, which they can use to take advantage of the country’s fairly advanced array of e-government offerings. Estonians can use their e-IDs to go online and securely file their taxes, vote in elections, log into their bank accounts, access governmental databases to check their medical records, and even set up businesses, among many other tasks. Estonia has even established an e-prescription system that permits doctors to order a refill by forwarding an online renewal notice to a national database, thereby allowing a patient to pick up a prescription from any pharmacy in the country simply by presenting his or her e-ID. The Estonian government also has announced a plan to start issuing cards to non-Estonians, so that citizens of other countries can easily set up businesses in Estonia or otherwise take advantage of that country’s many e-services. Estonia’s e-ID system thus illustrates how these cards can enhance convenience and save time that might otherwise be spent waiting in line to file documents in government offices, and it represents a significant step in the country’s efforts to brand itself as “e-Estonia.”
Naturally, the use of these cards to access such large quantities of personal data implicates important data security issues. Estonia assures its cardholders that their transactions are secure because each card’s files are protected by 2048-bit public key encryption, and because users need to enter multiple PINs to access and use certain online services. To date, Estonia’s e-ID system has not suffered a major data breach. Nevertheless, the security of the system has been called into question by researchers who claim that Estonia’s e-voting process is vulnerable to manipulation by skilled hackers.
So what other factors may hinder the deployment of this technology, beyond the large upfront costs of developing an e-ID system and distributing e-ID cards? As mentioned above, the e-ID system requires the adoption of extensive data security measures to ensure the confidentiality of personal data. Furthermore, systems like those established by Estonia are so efficient in part because they draw on personal data – including health information – held within government databases. Citizens of other countries, such as those with largely privatized medical systems like the United States, may be much more wary of government efforts to consolidate this type of personal information, even for the sake of efficiency. Other countries share similar concerns about governmental collection of personal information. When the U.K. government announced plans to issue ID cards linked to a national identity register, for example, opposition proved so fierce that the government abandoned the project. Denmark and Ireland also do not issue ID cards to their citizens.
Regardless of this opposition, the European Commission believes that e-IDs will facilitate business within the EU and is dedicated to removing many of the legal barriers hindering the implementation of this technology. As early as 1999, the Commission issued Directive 1999/93/EC, which provided a framework for the legal recognition of electronic signatures. And in 2012, the Commission issued its draft regulation on electronic identification and trust services for electronic transactions. The regulation set forth a mutual recognition scheme mandating that all member states recognize and accept electronic IDs issued in other member states for the purposes of accessing online services. The regulation would, for example, allow an Italian student attending a German university to pay her school fees online via the university’s German website by using her Italian e-ID.
In sum, e-IDs have the potential to simplify the lives of cardholders – but only if those issuing the cards are willing to take the appropriate security precautions and work to achieve mutual recognition of other countries’ IDs.
The CNIL’s report starts with what was the central issue in data protection throughout 2013: the U.S. PRISM program and, more generally, mass surveillance programs targeting European citizens by foreign entities. The CNIL created a working group on the related subject of long-arm foreign statutes, which allow foreign administrations to obtain personal data of French and European citizens. Such statutes have various purposes (combating money laundering, corruption, the financing of terrorism, etc.) and lead to the creation of black lists. In addition, the CNIL addresses these subjects with the other data protection agencies within the Article 29 Working Party.
Another important topic was the proposed creation in France of a centralized national register listing all consumer credit lines opened by an individual, intended to allow credit companies to verify an individual’s level of debt. Indeed, consumer credit lines are fairly easily granted in France, and some consumers accumulate credit lines beyond their payment capacities and ultimately default. The CNIL issued a negative opinion on this register, arguing that it breached the proportionality principle of the French data protection law: since only a small minority of credit users default, it considered that collecting and processing data on all credit users was disproportionate. The register was nevertheless approved by the Parliament, but was immediately struck down by the French Constitutional Court in 2014, which, like the CNIL, considered that the register breached the right to privacy.
With regard to the CNIL’s auditing and sanctions in 2013, the CNIL’s priorities remained training, promoting awareness of data protection and issuing guidance for companies. Imposing financial penalties remains the exception. Statistics on the CNIL’s auditing and sanctions activities in 2013 demonstrate this quite clearly:
5640 complaints: Complaints to the CNIL were stable in 2013. The CNIL attributes this stability to its new guidance available on its website. This guidance deals with common issues such as video surveillance and direct marketing, and helps companies to comply, thus stabilizing the number of complaints to the CNIL.
414 audits: 75% of the CNIL’s audits in 2013 were of private companies, and 25% were of public administration. Many audits occurred after a complaint was filed with the CNIL (33% of the audits), but audits were also conducted at the initiative of the CNIL (27%) or following a previous sanction to make sure that the companies were now compliant (16%). Finally, 24% of the audits were devoted to sectors chosen by the CNIL: in 2013, companies dealing with open data as well as surveys were audited, and the social services administration was also audited.
14 decisions with sanctions: This includes 7 warnings and only 7 financial penalties.
For 2014, the CNIL has identified four major topics: open data, health data, “digital death” and prisons. On open data, the CNIL will review the current legal framework and will propose improvements. The CNIL itself wishes to open its data (rendered anonymous) to the public. With regard to health data, the CNIL will investigate the privacy impact of apps and other tools (“quantified self”) that allow individuals to monitor their health and physical activity. The CNIL will also address “digital death”, in particular how to deal with the data of a deceased person. Finally, the CNIL will conduct audits of the penitentiary administration in order to verify whether prisoners’ privacy rights are respected.
In France, before implementing a whistleblowing process, a company must inform and consult with its employees’ representatives, inform its employees and notify the French Data Protection Agency (CNIL).
There are two possible ways to notify the CNIL of a whistleblowing system:
- request a formal authorization from the CNIL (this is quite burdensome and difficult to obtain), or
- opt for the standard whistleblowing authorization (AU-004).
The standard whistleblowing authorization (AU-004) was enacted by the French Data Protection Agency in 2005 in order to facilitate notifying the CNIL of whistleblowing systems. As long as the company undertakes to comply with the principles and scope of the standard authorization, it is automatically authorized to implement the whistleblowing system. As enacted in 2005, the types of wrongdoings that could be reported through a whistleblowing system under the standard authorization were quite broad. Companies were authorized to adopt whistleblowing systems for purposes of regulatory internal control requirements, to comply with French law requirements and the United States Sarbanes-Oxley Act, and to protect vital interests of the company or the physical or psychological integrity of its employees.
However, in 2010, the CNIL had to modify the scope of the wrongdoings which could be reported when using a standard whistleblowing authorization pursuant to a decision of the French Supreme Court dated December 8, 2009 (see our post of December 15th, 2010: http://privacylaw.proskauer.com/2010/12/articles/data-privacy-laws/french-data-protection-agency-restricts-the-scope-of-the-whistleblowing-procedures-multinational-companies-need-to-make-sure-they-are-compliant/). In order to comply with the French Supreme Court decision, the CNIL narrowed whistleblowing reporting under the standard authorization to the following types of wrongdoings:
- Wrongdoings falling within the scope of section 301(4) of the U.S. Sarbanes-Oxley Act; and
- Wrongdoings falling within the scope of the Japanese SOX of June 6, 2006.
The scope of the standard authorization was therefore very limited, requiring companies needing a broader scope of whistleblowing reporting to obtain a formal authorization from the CNIL and therefore to face the risk of a refusal.
From 2011 to 2013, given the scope limits of the standard authorization, the CNIL has had to process a high volume of filings for formal authorizations to implement whistleblowing systems.
Given the increased volume of requests from companies, on January 30, 2014, the CNIL decided to widen the scope of application of the standard whistleblowing authorization (AU-004) once again.
As a consequence, companies implementing whistleblowing systems in France within the following categories can benefit from the new standard authorization:
- Discrimination and bullying at work;
- Health and safety at work; and
- Environment protection.
In its updated standard whistleblowing authorization, the CNIL also stated its preference against anonymous whistleblowing. Anonymous whistleblowing is allowed only if:
- The facts are serious and the factual elements are sufficiently detailed; and
- The treatment of the alert is subject to particular precautions, such as prior vetting before the report is passed through the whistleblowing process.
On June 25, 2014, the Supreme Court unanimously ruled that police must first obtain a warrant before searching the cell phones of arrested individuals, except in “exigent circumstances.” Chief Justice John Roberts authored the opinion, which held that an individual’s Fourth Amendment right to privacy outweighs the interest of law enforcement in conducting searches of cell phones without a warrant. The decision resolved a split among state and federal courts on the search incident to arrest doctrine (which permits police to search an arrested individual without a warrant) as it applies to cell phones.
Riley v. California, as heard by the Supreme Court, consolidated two cases, one involving a smartphone and the other involving a flip phone. In the first case, Riley v. California, the police arrested David Leon Riley, searched his smartphone, and found photographs and videos potentially connecting him to gang activity and an earlier shooting. In the second case, United States v. Wurie, Brima Wurie was arrested for allegedly dealing drugs, and incoming calls on his flip phone helped lead the police to a house used to store drugs and guns.
Roberts wrote that neither of the two justifications for warrantless searches – protecting police officers and preventing the destruction of evidence – applies in the context of cell phones. According to the Court, the justification of protecting police officers falls flat since data on a cell phone cannot be used as a weapon. Roberts was also not persuaded by concerns that criminals could destroy evidence through remote wiping. He pointed out that police have alternatives to a warrantless search in order to prevent the destruction of evidence, including: turning the phone off, removing its battery, or placing the phone in a “Faraday bag,” an aluminum foil bag that blocks radio waves.
The Chief Justice focused on the differences between modern cell phones and other physical items found on arrested individuals to support his argument that modern cell phones “implicate privacy concerns far beyond those implicated by the search of a cigarette pack, a wallet, or a purse.” He cited modern cell phones’ huge storage capacity and how they function as “minicomputers that…could just as easily be called cameras, video players, rolodexes, calendars, tape recorders, libraries, diaries, albums, televisions, maps, or newspapers.” Roberts also noted that data viewed on a phone is frequently not stored on the device itself, but on remote servers, and that officers searching a phone generally do not know the location of data they are viewing.
However, Roberts maintained that exigent circumstances could still justify warrantless searches of cell phones on a case-by-case basis. Such circumstances include: preventing imminent destruction of evidence in individual cases, pursuing a fleeing suspect, and providing assistance to people who are seriously injured or are threatened with imminent injury.
Roberts’ opinion is in line with the Court’s stance in the 2012 case United States v. Jones, which held that installing a GPS device on a vehicle and using the device to track the vehicle constitutes a search under the Fourth Amendment.
Justice Samuel Alito concurred in the judgment and agreed with Roberts that the old rule should not be applied mechanically to modern cell phones. However, he made two points that diverged from Roberts’ opinion. First, he disagreed with the idea that the old rule on searches incident to arrest was primarily based on the two justifications of protecting police and preventing destruction of evidence. Second, he argued that if Congress or state legislatures pass future legislation on searching cell phones found on arrested individuals, the Court should defer to their judgment.
The Riley opinion recognizes the unique role that cell phones play in modern life (“such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude that they were an important feature of the human anatomy”) and that they “hold for many Americans ‘the privacies of life.’”
Special thanks to Tiffany Quach, 2014 summer associate, for her assistance in preparing this post.
On July 2, 2014, Singapore’s new Personal Data Protection Act (the “PDPA” or the “Act”) will go into force, requiring companies that have a physical presence in Singapore to comply with many new data protection obligations under the PDPA. Fortunately, in advance of the Act’s effective date, the Singapore Personal Data Protection Commission has recently promulgated the Personal Data Protection Regulations (2014) (the “Regulations”) to clarify companies’ obligations under the Act.
Under the PDPA, an individual may request access to, and correction of, the personal data that an organization subject to the Act holds about that individual. The Regulations clarify that the request must be made in writing and must include sufficient identifying information for the organization to process the request. The Regulations also specify that the request for access or correction should be made to the company’s Data Protection Officer (which companies are now required to appoint under the Act). Under the Regulations, an organization must respond to a request for access to personal data “as soon as practicable,” but if it anticipates that doing so will take longer than 30 days, it must inform the individual within that 30-day period.
The Regulations confirm that individuals are entitled to expansive access rights under the Act: a company must provide them with access to all personal data requested, as well as “use and disclosure information in documentary form”. If that is not possible, however, the organization can provide the applicant with a “reasonable opportunity to examine the personal data and use and disclosure information.”
Perhaps in an effort to reduce the burden and expense of complying with access requests, the Regulations provide that an organization may charge an individual a “reasonable fee” for responding to a request for access to the personal data it holds about that individual, provided it has previously communicated an estimate of the fee to the applicant.
The Regulations also contain a number of details regarding the transfer of personal data outside Singapore. Specifically, the Regulations clarify that before transferring personal data to another jurisdiction, the transferring organization in Singapore must ensure that the recipient is “legally bound by enforceable obligations… to provide to the transferred personal data a standard of protection that is at least comparable to the protection under the Act.”
“Enforceable obligations” under the PDPA are similar to those under European Union law, and include the existence of a comparable data protection law, a written contract that provides for sufficient protections, and “binding corporate rules.”
The Regulations (together with recently issued Advisory Guidelines On Key Concepts In The Personal Data Protection Act (revised on 16 May 2014)) now provide much needed guidance in helping companies comply with their new data protection obligations under the Act.
After a decision denying class certification last week, claims by Hulu users that their personal information was improperly disclosed to Facebook are limited to the individual named plaintiffs (at least for now, as the decision was without prejudice).
The plaintiffs alleged Hulu violated the federal Video Privacy Protection Act by configuring its website to include a Facebook “like” button. This functionality used cookies that disclosed users’ information to Facebook. But the U.S. District Court for the Northern District of California credited expert evidence presented by Hulu that three things could stop the cookies from transmitting information: 1) if the Facebook “keep me logged in” feature was not activated; 2) if the user manually cleared cookies after his or her Facebook and Hulu sessions; or 3) if the user used cookie-blocking or ad-blocking software.
In its decision, the court ruled that these methods of stopping disclosure of information rendered the proposed class insufficiently ascertainable. To maintain a class action, a class must be sufficiently ascertainable by reference to objective criteria. Plaintiffs argued that the class membership could be ascertained by submission of affidavits from each class member, but the court reasoned that individual subjective memories of computer usage may not be reliable, particularly given the availability of $2,500 statutory damages per person under the Video Privacy Protection Act. Thus, the court found plaintiffs had not defined an ascertainable class.
Additionally, because an individual inquiry into whether each member of the putative class used any one of the methods to block disclosure of information would be required, the court found individual issues predominated and would preclude certification of the class as defined. Hulu also argued the total potential damages were out of proportion to any actual damages suffered and thus violated due process, but the court noted that while the argument might have some merit, the issue was not ripe given the denial of class certification.
While the case, In re Hulu Privacy Litigation, can still continue on behalf of the named plaintiffs, the court left open the possibility of proceeding with subclasses or a narrower class definition. Consequently, additional class certification practice is likely in this closely watched case.
Last month, a federal district court in the Northern District of California issued an order that may affect the policies of any company that records telephone conversations with consumers.
The trouble began when plaintiff John Lofton began receiving calls from Collecto, Verizon’s third-party collections agency, on his cell phone. The calls were made in error – Lofton did not owe Verizon any money because he wasn’t even a Verizon customer – but Lofton decided to take action when he discovered that Collecto had been recording its conversations with him without prior notice. Lofton brought a class action against Verizon under California’s Invasion of Privacy Act, theorizing that Verizon was vicariously responsible for Collecto’s actions because Collecto was Verizon’s third-party vendor and because Verizon’s call-monitoring disclosure policy did not require the disclosure of recordings in certain situations. Verizon filed a motion to dismiss, arguing that the recordings did not invade Lofton’s privacy and therefore did not run afoul of the statute.
The court denied the motion to dismiss, holding that the statutory language of § 632.7 of the Invasion of Privacy Act banned the recording of all calls made to cell phones – not just confidential or private calls made to cell phones – without prior notice. The statute’s treatment of cell phones thus diverges from its treatment of landlines, as recordings of calls made to landlines only have to be disclosed via prior notice if the call is “confidential.”
Though the case is ongoing, this ruling indicates that Lofton v. Verizon Wireless (VAW) LLC ultimately may have a significant impact on how companies interact with consumers over the phone. First, the prevalence of cell phones means that companies should assume that § 632.7 applies to a large percentage of their calls with consumers – not only because it is highly likely that these consumers use cell phones instead of landlines, but because it may be difficult for a company to tell whether these consumers are in California and subject to § 632.7. Second, this recent order indicates that companies may be held responsible for their third-party vendors’ lack of disclosure, meaning that companies should change their policies to require their third-party vendors to refrain from recording phone conversations without prior notice, and should also monitor the vendors for compliance with this requirement. In sum, when it comes to providing prior notice of recordings to consumers, companies shouldn’t phone it in – they should ensure that they and any third-party vendors err on the side of disclosure to avoid legal hang-ups down the line.
http://www.agpd.es/portalwebAGPD/revista_prensa/revista_prensa/2013/notas_prensa/common/diciembre/131219_PR_AEPD_PRI_POL_GOOGLE.pdf More decisions are to be expected.
This was the first issue that the CNIL had to decide upon. The territorial scope of French law derives from the rules set out by the EC Directive n°95/46. Hence, French law is applicable either because 1) the data controller carries out his activity within an establishment in France, or 2) the data controller is not established in France nor in the EU, but uses “means of processing” of personal data located in France to collect data.
Google claimed that the French law did not apply because Google Inc. in California is solely responsible for data collection and processing, and that Google France is not involved in any activity related to the data processing performed by Google Inc.
The CNIL rejected this argument, reasoning that Google France is involved in the sale of targeted advertising, whose value is based on the collection of Internet users’ data. Hence, Google France is involved in the activity of personal data processing, even though it does not itself perform the technical processing of personal data. The CNIL’s argument is similar to the argument developed by the Advocate General in the case currently opposing Google and the Spanish DPA before the European Court of Justice (“ECJ”) (http://curia.europa.eu/juris/document/document.jsf?text=&docid=138782&pageIndex=0&doclang=EN&mode=lst&dir=&occ=first&part=1&cid=198456/). The ruling of the ECJ on this issue is eagerly awaited.
In addition, the CNIL ruled that Google Inc. placed cookies on the computers of French users, and that such cookies were “means of processing” of personal data located in France because they are used to collect data from the users’ computers. Therefore, even if Google Inc. were to be considered as the sole data controller, French law would nevertheless apply because of the location of the cookies in France.
Are all data collected by Google “personal data” within the meaning of French and EU Law?
One of the main issues is the distinction put forward by Google between “authenticated users”, who have registered an ID to use services such as Gmail; “unauthenticated users”, who use services that do not require identification, such as YouTube; and “passive users”, who visit a third-party website where Google has placed Analytics cookies for targeted advertising.
According to Google, it holds “personal data” only on “authenticated users” and not on “unauthenticated users” or “passive users”. The CNIL rejected this argument because the definition of personal data under French law includes information that indirectly identifies a person. The CNIL considered that, even if the name of the user is not collected, the collection of an IP address combined with precise and detailed information on the computer’s browsing history amounts to indirectly identifying a person, because it yields precise information about a person’s interests, daily life, lifestyle choices, etc.
Therefore, the CNIL considers all data collected by Google to be personal data.
The CNIL, following the findings of the Article 29 Working Party, found four breaches of French law on data protection.
Secondly, Google should have informed users and obtained their consent before placing advertising cookies on their terminals. Consent for cookies need not be opt-in, but before the cookies are placed on the terminal the user must be properly informed of their purposes and of how to refuse them. The CNIL found that, with regard to unauthenticated users, Google placed cookies before providing any information, in breach of French data protection law. In addition, the information provided to users is not sufficient. Only two Google services (Search and YouTube) have a banner with information on cookies. Moreover, little information is given regarding the purposes of the cookies: stating that cookies are meant “to ensure proper performance of the services” is not deemed sufficient information to obtain an “informed consent” from the user. With regard to “passive users” who visit third-party websites where Google placed its “Analytics” cookies, the CNIL considered that, since Google uses the data collected for its own activity (by producing statistics and improving its service), it acts as a data controller and is responsible for obtaining consent.
Thirdly, Google has not defined a retention period for the data it collects and has not implemented any automatic process for deleting data. For example, no information is available as to how long data is kept once an authenticated user has canceled his or her account.
Based on a December 3rd decision by the Second Circuit Court of Appeals, class actions under the Telephone Consumer Protection Act (TCPA) can now be brought in New York federal court. This decision marks a reversal of Second Circuit precedent, and will likely increase the number of TCPA class actions being filed in New York. Companies should review their telemarketing practices and procedures in light of the potential statutory penalties under the TCPA.
The Better Business Bureau (“BBB”) and the Direct Marketing Association (“DMA”) are in charge of enforcing the ad industry’s Self Regulatory Principles for Online Behavioral Advertising (“OBA Principles”), which regulate the online behavioral advertising activities of both advertisers and publishers (that is, web sites on which behaviorally-targeted ads are displayed or from which user data is collected and used to target ads elsewhere). Among other things, the OBA Principles provide consumers transparency about the collection and use of their Internet usage data for behavioral advertising purposes. Specifically, the “Transparency Principle” requires links to informational disclosures on both: (i) online behaviorally-targeted advertisements themselves, and (ii) webpages that display behaviorally-targeted ads or that collect data for use by non-affiliated third parties for behavioral advertising purposes. The “Consumer Control Principle” requires that consumers be given a means to opt-out of behavioral advertising.
Through its “Online Interest-Based Advertising Accountability Program”, the BBB recently enforced the OBA Principles in a series of actions—some with implications for publishers and some with implications for advertisers.
Last month, the BBB admonished publishers to heed these self-regulatory requirements when it issued its first Compliance Warning. The BBB encouraged the use of the AdChoices Icon, endorsed by the Digital Advertising Alliance (“DAA”), on any page from which a website operator allows a third party to collect user data for behavioral advertising or transfers such data to a third party for such purposes. This approach gives consumers “just-in-time” notice and enables them to decide whether to participate in behavioral advertising.
Recent inquiries by the BBB into the behavioral advertising activities of BMW of North America, LLC and Scottrade, Inc. illustrate the requirements of the OBA Principles on Web site publishers. In these inquiries, the BBB found that both companies had failed to provide enhanced notice and opt-out links on webpages where they allowed third-party data collection for behavioral advertising purposes. Both companies quickly achieved compliance in response.
The BBB warned that publishers that do not comply with the first party transparency requirements for third party data collection, but are otherwise in compliance with the OBA Principles, will face enforcement action by the BBB beginning on January 1, 2014.
The Transparency Principle requires third parties (e.g., advertisers and ad service providers) that engage in behavioral advertising on non-affiliated websites to provide enhanced notice to web users about behavioral advertising through a similar hyperlinked disclosure preferably containing the AdChoices Icon in or near the behaviorally-targeted advertisement itself. The disclosure should contain information about the types of data collected for behavioral advertising purposes, the use of such data, and (under the Consumer Control Principle) the choices available to users with respect to behavioral advertising. This requirement is designed to notify consumers that the ad they are viewing is based on interests inferred from a previous website visit.
Recently, the BBB found three companies in violation of the OBA Principles and on November 20th released decisions resolving their inquiries into the violations. The BBB visited a company website, and when it left the company site to browse other websites, found ads from the original company on the non-affiliated sites. However, the ads did not include enhanced notice, which, the BBB said, was in violation of the third party transparency requirements of the OBA Principles. According to the BBB, the party responsible for this omission was 3Q Digital, an ad agency. The agency used the self-service features of a demand side platform to serve targeted ads. The BBB found that when 3Q Digital used the self-service platform it stepped “into the shoes of companies explicitly covered by the OBA Principles” and assumed the compliance responsibilities of covered companies.
In response, 3Q Digital’s client included the AdChoices Icon in all its interest-based ads. 3Q Digital, in turn, took similar steps with all its other online campaigns.
The BBB’s recent enforcement activities emphasize the need for companies (whether they advertise online, display third party ads on their online properties, or simply contribute user data for use by others for behavioral advertising) to be vigilant of the specific requirements of the OBA Principles that are applicable to them based on how they participate in behavioral advertising. One motivation for the existence of the OBA Principles is to show regulators that legislation in this area is not necessary. To ward off legislation and avoid enforcement action by the BBB and the DMA, all parties involved should be mindful of the OBA Principles, and make them a part of their compliance programs.
An article published by Law360 last week quoted Jeremy Mittman, co-Chair of Proskauer’s International Privacy Group and a member of the firm’s International Labor Group, on the data protection reform legislation recently passed by European Parliament and the difficulties multinational companies face to comply with both EU and U.S. privacy laws.
Jeremy was again solicited to comment on the EU-U.S. Safe Harbor Program in an article published by Politico on November 7. The article mentions Jeremy’s experience drafting Safe Harbor certifications and EU model contracts.
The territorial scope of the current EU Directive 95/46 remains in dispute both before national courts and the European Court of Justice (ECJ). This issue may soon become moot with the adoption of the future data protection regulation, which may modify and expand the territorial scope of EU data privacy law, especially following the results of the recent vote of the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs. The following summarizes the current state of affairs regarding the territorial (and extraterritorial) scope of the future EU law following this vote of the European Parliament.
As the internet has allowed companies to easily provide services from a distance, the issue as to what laws are applicable to personal data has become more complex. This was not fully anticipated when the current EU Directive on personal data protection was adopted in 1995. Modifications to the rules regarding territorial scope set by Article 4 of the current EU Directive have been a highly debated issue in the EU.
An ongoing case before the ECJ highlights the complexity, and the legal uncertainty, surrounding the territorial scope of the current EU Directive. In this case, a Spanish citizen lodged a complaint against Google Spain and Google Inc. before the Spanish Data Protection Agency (“AEPD”) because Google refused to take down data that appeared when his name was entered in the search engine. As a defense, Google argued that Spanish law was not applicable because the processing of personal data relating to its search engine does not take place in Spain: Google Spain acts merely as a commercial representative, and the technical data processing takes place in California. According to Article 4.1(a) of the EU Directive, national law is applicable if “the processing is carried out in the context of the activities of an establishment of the controller on the territory of the Member State.” The ECJ will therefore have to determine whether Google Spain, “in the context of its activities,” may be considered as processing data, even though, as a commercial subsidiary, it does not technically process personal data.
The Advocate General answered that question in the affirmative in a non-binding Opinion delivered last summer. In the Opinion, he argues that since the business model of search engines relies on targeted advertising, the local establishment in charge of marketing such targeted advertising to the inhabitants of a particular country must be considered as processing personal data “in the context of its activities,” even though the technical operations are not performed there. The ECJ is expected to render its decision at the end of this year.
In the near future, the applicable law in such a situation may more easily be determined based on the draft Regulation proposed by the European Parliament.
- First, the European Parliament has proposed that Article 3.1 be amended to clarify that “this Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union whether the processing takes place in the Union or not.” (emphasis added). If the draft Regulation is adopted as such, EU law would therefore unequivocally apply to the activities of the EU-established subsidiaries of foreign companies, regardless of the actual place of data processing.
- Second, the European Parliament proposes to amend Article 3.2 of the EU Directive, which concerns the extraterritorial application of EU law (i.e., the situation where the data controller does not have any presence in the EU). The draft Regulation provides that EU law would nonetheless apply if the processing of data is related to “offering of goods or services” to data subjects in the European Union. In accordance with the Article 29 Working Party, which stated in its Opinion 01/2012 that the offering of goods or services should include free services, the European Parliament has proposed amending Article 3.2 to provide that the EU law would apply to any processing activity related to the offering of goods or services to data subjects in the EU, “irrespective of whether a payment of the data subject is required.”
The draft amended regulation will now be negotiated with the European Council (the governments of the EU Member States). The European Parliament is pushing for a vote on the regulation in spring 2014. However, such a timetable is far from assured, given the general “slow track” of the proposed legislation coupled with recent pronouncements by the leaders of several EU countries suggesting a timetable closer to 2015.