As a growing number of states pass legislation protecting individuals’ social media accounts from employer scrutiny, they have encountered a surprising adversary: FINRA and other securities regulators.
To date, at least six states have enacted social media employee privacy laws (previously covered on this blog), and upwards of thirty-five states have considered such legislation since the beginning of 2013. Washington State may soon join the ranks with SB 5211, a bill unanimously passed by both chambers of the Washington legislature on April 27, 2013, which now awaits the Governor’s signature. Social media password protection laws, although unique to each state, generally restrict employers from requesting or requiring that employees or applicants provide their social media user names, passwords, and account information. Supporters believe the laws are necessary to protect the privacy of employees and prospective employees and to guard against unlawful employer action in response to an employee’s social media use.
FINRA, the Financial Industry Regulatory Authority, fears that the new employee privacy laws may directly conflict with securities rules and threaten investor protection. With an increasing number of financial firms taking to Facebook and Twitter to interact with investors and give financial advice, FINRA has set forth various guidelines governing social media use. Under FINRA rules, securities firms must “adopt policies and procedures reasonably designed to ensure that their associated persons who participate in social media sites for business purposes are appropriately supervised,” and broker-dealers must be able to “retrieve and supervise business communications regardless of whether they are conducted from a device owned by the firm or by the associated person.” FINRA Regulatory Notice 11-39 (August 2011). According to FINRA, if the employee of a broker-dealer is engaging in business communications over a social networking site, the broker-dealer must have access to the account for general monitoring and for its records. Broker-dealers must also be able to freely follow up on red flags, such as misuse of an account. FINRA fears that the adoption of social media employee privacy laws may conflict with these monitoring and reporting requirements and could force some employers into a lose-lose situation: violate state law or violate a FINRA rule. FINRA worries that employers who choose the latter will increase investor risk and the potential for securities fraud.
FINRA has sent letters to lawmakers in approximately ten states seeking carve-outs from social media employee privacy laws for the financial services industry. Many of the laws already include narrow exemptions that allow employers to require disclosure if an employee’s alleged misconduct rises to a certain level. FINRA does not appear satisfied with these exemptions, which may be too limited for broker-dealers to remain in full compliance with monitoring, recording and supervision requirements. California has rejected FINRA’s request for an exception for the financial services industry, but it remains to be seen how the states will react in general.
FINRA is not alone in its concern that social media privacy laws are too broad. On May 6, 2013, Governor Christie of New Jersey conditionally vetoed a social media employee privacy bill, which he criticized for its overbreadth and for putting employers at increased risk.
While it is too soon to predict how this conflict between employee privacy interests and financial industry oversight will be resolved, what is apparent is the increasingly complex issue of handling privacy in the age of social media.
The U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) has published on its website a series of factsheets designed to educate consumers unfamiliar with their rights under the Health Insurance Portability and Accountability Act (HIPAA) Privacy and Security Rules. These four factsheets are described in detail below and are available in eight languages on OCR’s website at: www.hhs.gov/ocr/privacy/hipaa/understanding/consumers.
I. OCR Consumer Factsheet: “Your Health Information Privacy Rights”
- OCR tells consumers that HIPAA gives them rights over their health information, including the right to get a copy of their information, make sure it is correct, and know who has seen it.
- OCR says that in most cases consumers must be given a copy of their medical record and other health information within 30 days.
- Consumers can ask to correct any wrong information in their file or to add information if they believe something is missing or incomplete. OCR states, “Even if the hospital believes the test result is correct, you still have the right to have your disagreement noted in your file.”
- OCR summarizes how a consumer’s health information can be used and shared for specific reasons not directly related to the consumer’s care (i.e., “making sure doctors give good care, making sure nursing homes are clean and safe, reporting when the flu is in your area, or reporting as required by state or federal law”).
- OCR encourages consumers to learn how their health care providers and health insurers are using and sharing their health information.
- OCR encourages consumers to let their health care providers and health insurers know if there is information that they do not want to be shared.
- OCR also tells consumers that they can make reasonable requests to direct their health care provider to contact them at a different place or in a different manner. For example, if the doctor’s office usually sends a postcard with an appointment reminder, the consumer may request that the appointment reminder be sent in an envelope instead.
II. OCR Consumer Factsheet: “Privacy, Security, and Electronic Health Records”
- OCR explains that electronic health records (EHRs) are electronic versions of the consumer’s paper medical record and include health information such as medical history, notes, diagnoses, medications, lab results, and immunizations.
- OCR tells consumers that their privacy rights are the same whether the health information is stored as paper or in an electronic form.
- In the factsheet, OCR summarizes the benefits of health care providers using EHRs. Consumers should expect “improved quality of care,” “more efficient care,” and “more convenient care.”
- OCR summarizes certain protections that can safeguard EHR systems, including access controls such as passwords and PINs, encryption, and an audit trail feature.
- OCR describes for consumers the breach notification requirements that apply to health care providers.
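The audit trail feature that OCR mentions is, in essence, a tamper-evident log of who viewed or changed a record and when, so that a consumer's question "who has seen my information?" can actually be answered. A minimal illustration of the idea in Python follows; the field names and structure are hypothetical and are not drawn from any real EHR system:

```python
# Illustrative sketch of an EHR audit trail: every access to a patient
# record is logged with who acted, which record, what action, and when.
# All identifiers here are hypothetical, not any real EHR system's schema.

from datetime import datetime, timezone

audit_trail = []  # a real system would use append-only, tamper-evident storage

def log_access(user_id: str, patient_id: str, action: str) -> None:
    """Record a single access event in the audit trail."""
    audit_trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "patient": patient_id,
        "action": action,  # e.g., "view", "update", "print"
    })

def accesses_for_patient(patient_id: str) -> list:
    """Let a reviewer (or the patient) see who has accessed a record."""
    return [e for e in audit_trail if e["patient"] == patient_id]

log_access("dr_smith", "patient_42", "view")
log_access("nurse_jones", "patient_42", "update")
log_access("dr_smith", "patient_99", "view")
```

Combined with access controls (who may call `log_access`-protected operations at all) and encryption of the stored records, this is the trio of safeguards the factsheet describes.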
III. OCR Consumer Factsheet: “Understanding the HIPAA Notice”
- OCR provides a four-step process for consumers to follow to make sure that they understand the “Notice of Privacy Practices” and their rights under HIPAA.
- Step 1: OCR encourages consumers to “Get a Copy of the Notice of Privacy Practices”
- Step 2: OCR encourages consumers to “Read the Notice”
- The Notice explains how the health care provider or insurer is allowed to use or share their information
- Explains the consumers’ privacy rights
- Explains the doctor or insurer’s legal duties to protect consumers’ health information
- Provides contact information for the doctor’s or insurance company’s privacy policies.
- Step 3: OCR encourages consumers to “Ask Questions about the Notice or Your Rights”
- Step 4: OCR encourages consumers to “Know What You are Signing”
- HIPAA requires the consumer’s doctor, hospital, or other health care provider to ask the consumer to sign a written acknowledgment that he or she received the Notice of Privacy Practices.
- Consumers are not required to sign the acknowledgment of receipt; however, providers must keep a record that the consumer decided not to sign the form. Providers must still care for consumers who do not sign the acknowledgment of receipt.
IV. OCR Consumer Factsheet: “Sharing Health Information with Family Members and Friends”
- OCR summarizes and provides examples of when a health care provider or health plan may share relevant information with family members or friends involved in the consumer’s health care or payment for health care.
- OCR states that under HIPAA, a health care provider may share a consumer’s information face-to-face, over the phone, or in writing … if:
- The consumer gives the provider or plan permission to share the information.
- The consumer is present and does not object to sharing the information.
- The consumer is not present, and the provider determines based on professional judgment that it is in the consumer’s best interest.
- OCR provides frequently occurring examples for each of these scenarios.
- The consumer’s hospital may discuss the consumer’s bill with his or her daughter who is with the consumer and has a question about the charges, if the consumer does not object.
- The consumer’s doctor may discuss the drugs the consumer needs to take with the consumer’s health aide who has accompanied the consumer to his or her appointment.
- The consumer had emergency surgery and is still unconscious. The consumer’s surgeon may tell the consumer’s spouse about his or her condition, either in person or by phone, while the consumer is unconscious.
- A doctor may not tell a consumer’s friend about a past medical problem that is unrelated to the consumer’s current condition.
Finally, OCR provides information to consumers on who to contact if their HIPAA rights are being denied or their health information is not being protected.
The Securities and Exchange Commission (the “SEC”) and Commodity Futures Trading Commission (the “CFTC”) recently adopted rules requiring entities subject to their respective enforcement authorities to adopt and implement programs to detect and respond to indicators of possible identity theft, as required by the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (the “Dodd-Frank Act”). The SEC rules apply to entities such as broker-dealers, investment companies and investment advisers, while the CFTC’s rules apply to entities such as futures commission merchants, commodity trading advisors and commodity pool operators.
The Dodd-Frank Act requirement shifted rulemaking responsibility and enforcement authority for identity theft rules governing such entities to the SEC and CFTC from the six federal agencies that had jointly adopted identity theft rules under the Fair Credit Reporting Act in 2007.
The rules adopted by the SEC and the CFTC specify: (1) which financial institutions and creditors must develop and implement a written identity theft prevention program; (2) the objectives of such program; (3) the elements that the program must contain; and (4) the steps financial institutions and creditors need to take to administer the program. The rules do not contain any requirements that were not already in the rules established in 2007, nor do they expand the scope of those rules to include new categories of entities that the rules did not already cover. However, the rules and the related adopting release contain examples and minor language changes that are designed to help guide entities with compliance.
The rules will become effective 30 days after publication in the Federal Register, and the compliance date will be six months after that effective date.
Are social media companies based in the United States subject to European data privacy laws? Two recent judicial decisions – one in France and the other in Germany – arrived at different answers. The Civil Court of Paris held that Twitter, based in California, was obligated under the French Code of Civil Procedure to reveal the identity of its users in France who posted racist tweets. In Germany, on the other hand, an administrative court held that Facebook, also based in California, was not subject to a German law that would have prohibited Facebook from requiring users to register under their real names.
To be subject to French data protection laws, a data controller must be either established on French territory or use a means of processing data located on French territory. However, Article 145 of the French Code of Civil Procedure contains no such geographical limitation and allows parties, upon application to the court, to seek evidence before a case has been formally instituted where there is a legitimate reason to preserve or establish that evidence.
In October 2012, the Union of French Jewish Students (“UFJS”) filed a summary action under Article 145 to require Twitter to identify individuals who had posted anti-Semitic tweets (a criminal offense). On January 24, 2013, the court ruled that while Twitter was not subject to French data protection laws, it was nevertheless obligated to hand over the identities of users in France who post racist tweets.
Twitter has yet to hand over the authors’ identities, however, and the UFJS is taking further legal action against Twitter, claiming $50 million in damages. Twitter has appealed the decision.
In Germany, the Unabhängiges Landeszentrum für Datenschutz (ULD), the data protection authority of the German state of Schleswig-Holstein, issued orders against Facebook’s real-name policy, which requires users to register accounts under their real names and allows Facebook to remove fake accounts. This policy is central to Facebook’s business model, but the ULD argued that it violates users’ online privacy: German data protection law provides a right to anonymous use of social media.
Facebook appealed to a German administrative court, arguing that because it processes the relevant data in Ireland (which does not have a right to anonymous use of social media), and not in Germany, it was not subject to the German law. The court agreed, though the ULD has announced it plans to appeal.
These two decisions illustrate the patchwork of local laws that social media companies may face when conducting business in the EU. We will continue to monitor this issue, as well as these cases as they go up on appeal.
California Assembly Member Bonnie Lowenthal recently introduced the “Right to Know Act of 2013” (AB 1291), which would require any company that retains a California resident’s personal information to provide a copy of that information to the resident, free of charge, within 30 days of a request. The company would also have to disclose a list of all third parties with whom it has shared the resident’s data during the previous 12 months, the contact information of those third parties, and the types of personal information that were shared. In contrast to the existing Shine the Light Act, this legislation would not be limited to data sharing for direct marketing purposes and would not provide exceptions for companies that maintain an opt-in or opt-out policy for data sharing. Moreover, the legislation’s definition of “personal information” is broader and includes data such as online usage information. The legislation would also apply to businesses that have no direct relationship with the California resident, such as data aggregators and online ad networks. Additional requirements also exceed what is present in the existing law. If a company does not comply, California residents would be empowered to file a civil suit to force compliance. The law does not distinguish between brick-and-mortar businesses and online companies.
Although the Right to Know Act contains certain provisions intended to prevent abuse, it provides for an unprecedented level of data access for California residents. Under CA Civil Code § 1798.83 (better known as the Shine the Light Act), California residents may request from a company an accounting of disclosures made to third parties for direct marketing purposes, as well as general facts about the types of data disclosed. The Right to Know Act would allow California residents to learn all of the ways their personal information is being shared, including via online interactions, with the sole exception of data shared with service providers that are permitted to use the information only to provide services to the company.
The Right to Know Act provides that, in lieu of responding to individual requests, a company can provide a California resident with a notice of what data will be disclosed and to whom, prior to or immediately following the disclosure. In addition, a company would only have to provide each California resident with the required information once every 12 months.
The bill is expected to be debated by California legislators within the next few months.
On April 5, 2013, New Mexico joined six other states (including Utah, Maryland and California) in passing a new law prohibiting employers from requesting or requiring that a prospective employee provide access to his or her social networking accounts. Proskauer’s Labor & Employment group has discussed the new law on its blog.
National Enforcement Actions in Six EU Countries
Google decided not to implement the Article 29 Working Party’s recommendations.
Following a meeting with Google on March 19, 2013, the national Data Protection Authorities of six of the 27 EU Member States announced that each will launch investigations and enforcement proceedings against Google. Unlike the initial assessment phase, which was coordinated by the CNIL on behalf of the other EU authorities, these investigations and enforcement proceedings are not being jointly pursued. Indeed, each national Data Protection Authority has its own procedures, powers and sanctions.
Although the authorities have announced that they will cooperate, Google will nevertheless face six distinct national procedures, and should they result in divergent decisions, there is no system to reconcile them. One goal of EU data protection reform is to establish a new system of supervision when data processing has an EU-wide impact. Under the proposed EU data protection regulation put forward by the European Commission in January 2012 [http://ec.europa.eu/justice/data-protection/document/review2012/com_2012_11_en.pdf] and currently under review before the European Parliament, only the Data Protection Authority of the EU country where a company has its main establishment would be empowered to take legally binding decisions against a non-compliant company (a “one-stop shop”). In addition, mandatory cooperation between national authorities, as well as a consistency mechanism at the EU level, would be implemented to ensure consistency across investigations and enforcement procedures.
Following a growing trend among states, on March 26, 2013, the Utah legislature passed the Internet Employment Privacy Act, which prohibits employers from requesting that job applicants or employees disclose passwords protecting their personal internet accounts. Proskauer’s Labor & Employment group has discussed the new law on its blog.
In a recent ruling on certified questions in Tyler v. Michaels Stores, Inc., Civ. No. 11-10920-WGY (D. Mass. Jan. 6, 2012), the Massachusetts Supreme Judicial Court interpreted “personal identification information” under Mass. Gen. Laws ch. 93, § 105(a) (“Section 105(a)”) to include a consumer’s ZIP code and determined that collecting such personal information is a violation of state privacy law for which the consumer can sue (see slip opinion).
By way of background, the plaintiff, Tyler, alleged that she was making a credit card purchase at Michaels (an arts and crafts retailer) when a cashier asked her for her ZIP code, which she provided. Tyler alleged that Michaels later used her ZIP code to find her mailing address and telephone numbers and to send her unwanted and unsolicited marketing materials. Tyler filed a class action complaint against Michaels claiming unjust enrichment and seeking a declaratory judgment that collecting ZIP codes violates Section 105(a). The District Court found that Tyler’s complaint sufficiently alleged a violation of Section 105(a); however, it found that the complaint did not allege a cognizable injury under the statute (see our blog post here). Further, the district court judge opined that the statute was meant to protect against identity fraud and was not, as Tyler argued, meant to protect consumer privacy rights generally. At the judge’s invitation, Tyler filed a motion to certify the following three questions, each regarding the proper interpretation of Section 105(a), to the Massachusetts Supreme Judicial Court:
1) Under Section 105(a), is a ZIP code “personal identification information” because such a ZIP code may be required by the credit card issuer to complete the transaction? Yes, the Supreme Court answered, because regardless of whether the ZIP code was explicitly defined in the statute as personal identification information, it could be used to obtain such information.
2) Under Section 105(a), can a consumer bring an action for a privacy right violation even without identity fraud? Yes, the Supreme Court answered, disagreeing with the District Court judge, because it found no reason to limit the statute’s application to identity fraud, especially when the title of the statute is “Consumer Privacy in Commercial Transactions.”
3) Under Section 105(a), does the term “credit card transaction form” apply to both paper and electronic credit card transactions? Yes, the Supreme Court answered, because limiting the statute to only paper transactions would render the statute obsolete in the age of electronic transactions.
The Supreme Court also found that possible injuries resulting from a violation of Section 105(a), such as a merchant using the information for its own business purposes by sending the customer unwanted marketing materials or selling the information for profit, are injuries distinct from the statutory violation itself and are cognizable under the law.
So what’s next? This case will now be sent back to the District Court for further proceedings.
Given the Massachusetts Supreme Judicial Court ruling, more lawsuits in Massachusetts and in other states with similar consumer privacy statutes are likely to follow. Retailers and other businesses may therefore want to review their policies regarding credit card transactions and how their employees request consumer information. In particular, retailers and other businesses may want to reconsider whether their employees should request a ZIP code, even one provided by consumers voluntarily, when such information is not required by the credit card issuer.
As announced during the 2013 State of the Union Address, President Obama recently signed an Executive Order on cybersecurity. The primary goals of the Executive Order are to (a) improve communication between private companies and the federal government about emerging cyber threats and (b) safeguard the nation’s critical infrastructure against cyber attacks by developing and implementing baseline cybersecurity standards. Critical infrastructure refers to those systems and assets, both physical and virtual, so vital to our nation that any cyber attacks upon them would have a debilitating impact on national security, economic security, and/or public health or safety.
According to a report issued by the Department of Homeland Security (the “DHS”) in December 2012, there were 198 cyber attacks on the nation’s critical infrastructure last year, several of which were successful. One such successful attack involved highly sophisticated malware found on critical engineering workstations at a power generation facility. According to the DHS’ Industrial Control Systems Cyber Emergency Response Team Monitor, an “ineffective or failed cleanup would have significantly impaired” the power plant’s operations. Critical infrastructure systems ranging from air traffic control systems, highways, and hospitals to electrical grids, water systems, power plants and financial systems all have virtual components that are vulnerable to cyber attack. Over the past year, the need for stronger defenses against cyber attacks has gained traction in the public eye, as hackers have successfully targeted numerous high profile companies, including major newspapers, banks, and federal agencies.
President Obama’s Executive Order on cybersecurity comes in the wake of proposed cybersecurity legislation, which was stalled in Congress last year. The Executive Order relies heavily on a voluntary program that encourages private companies operating critical infrastructure to adopt baseline cybersecurity standards, which the federal government will develop with industry assistance.
The main points of the Executive Order are as follows:
- Cybersecurity Information Sharing: The government will increase the volume, timeliness, and quality of cyber threat information shared with private sector entities. This will enable private companies to better protect and defend themselves against cyber threats. Federal agencies will timely disseminate unclassified reports of cyber threats targeting specific entities to the targets and will distribute classified reports to those critical infrastructure entities authorized to receive them.
- Cybersecurity Framework: The National Institute of Standards and Technology, an agency of the Department of Commerce, will work with critical infrastructure operators to develop a framework of baseline standards designed to strengthen the digital security of the nation’s critical infrastructure (the “Framework”). Existing standards and industry best practices will be incorporated into the Framework to the fullest extent possible. To account for organizational differences and allow for technological innovation, the Framework will provide technology-neutral guidance.
- Voluntary Critical Infrastructure Cybersecurity Program: Federal agencies will establish a voluntary program to encourage critical infrastructure operators to adopt the Framework. The DHS will spearhead the effort, working with sector-specific agencies and industry councils to implement the Framework’s best-practice standards and to incentivize participation in the voluntary program. Various federal agencies will assess the effectiveness of these incentives and whether existing legislation provides sufficient authority to offer them.
- Privacy and Civil Liberties Protections: Federal agencies must ensure that privacy and civil liberties protections are incorporated into their activities under the Executive Order.
Some have lauded President Obama’s efforts, opining that the voluntary standards could become quasi-mandatory in practice, by essentially setting a new negligence bar for cybersecurity. Others have been more skeptical, arguing that without intervention by Congress, the Executive Order may have little practical effect. President Obama himself has emphasized the need for bipartisan action on the cybersecurity front, stating during the 2013 State of the Union Address that “[n]ow Congress must act as well, by passing legislation to give our government a greater capacity to secure our networks and deter attacks.”
“We know hackers steal people’s identities and infiltrate private e-mail. We know foreign countries and companies swipe our corporate secrets. Now our enemies are also seeking the ability to sabotage our power grid, our financial institutions and our air traffic control systems,” the President stated. Given the constant headlines about hackers from abroad gaining access to and disrupting the workings of large corporations and government agencies, the President’s Executive Order comes as a welcome first step towards strengthening the nation’s cybersecurity.
On December 28, 2012, the Standing Committee of China’s National People’s Congress, China’s legislative body, passed the “Decision on Strengthening Network Information Protection” (the “Decision”), which contains various principles for protecting, collecting and using electronic personal information in China. According to the Decision, these principles were passed in order to protect network information security, protect the lawful interests of citizens, legal persons and other organizations, and safeguard China’s security and social order.
The Decision provides legal protection for electronic information that is personally identifiable or involves personal privacy, and imposes various obligations on network service providers and other entities that collect and use the electronic personal information of Chinese citizens (collectively, “Network Service Providers”). Some of the significant obligations contained in the Decision include:
- Prohibition on stealing, illegally obtaining, selling or illegally providing electronic personal information;
- Requirement that Network Service Providers clearly and publicly indicate the objective, methods and scope for the collection and use of electronic personal information;
- Requirement that Network Service Providers obtain consent when collecting or using electronic personal information and keep such information confidential;
- Requirement that Network Service Providers adopt technological measures to ensure information security; and
- Prohibition on the sending of commercial electronic communications to fixed telephones, mobile telephones or to e-mail accounts without consent.
Network Service Providers must also improve their management of information disseminated by their users. When that information violates laws or regulations, Network Service Providers are required to take certain affirmative actions, including stopping the dissemination of the information, preserving the relevant records and informing the relevant government departments.
Further, the Decision requires any entity providing access to internet, fixed telephones or mobile telephones or providing information publication services (e.g., microblogging) to gather real identity information from users at the time of entering into agreements or confirming service provision with users.
Under the Decision, when citizens discover any network information that discloses their personal identity, invades their personal privacy or otherwise infringes their lawful rights or are being harassed by commercial electronic information, they have the ability to require Network Service Providers to delete the relevant information or adopt necessary measures to stop the infringing activity. Any individual or organization may report illegal or criminal acts against the Decision to the appropriate government department, and the infringed may also file a lawsuit against the infringers in accordance with law.
Penalties for violating the Decision include warnings, fines, confiscation of unlawful income, cancellation of permits, closure of websites and bans on engaging in web-related business in the future (violations may also be entered into social credit records and made public), as well as other civil, administrative or criminal penalties.
Taking effect as of the date of its publication (i.e., December 28, 2012), the Decision is a significant step forward for privacy protection in China. However, its provisions are very general and still need to be supplemented by more specific and detailed implementing rules, so the implementation and enforcement of the Decision remain to be tested in practice.
The California Supreme Court held on February 4, 2013 that the provision of the Song-Beverly Credit Card Act of 1971 (the “Act”) prohibiting retailers from requesting personally identifying information as a condition to processing credit card transactions does not apply to online purchases of electronically downloadable items. (Apple v. Super. Ct., S199384, Case No. B238097, available at http://www.courts.ca.gov/opinions/documents/S199384.PDF.) The Court agreed with Apple that online sales of electronically downloadable products fall outside the coverage of the Act. The Court’s reasoning emphasized that the collection of some personally identifying information is important in preventing online fraud. Although the Act does not apply to the transactions in question, the Court pointed out that online retailers are not given free rein because other state and federal laws do apply to place limits on the collection and use of personally identifying information.
Among the provisions of the Act, codified at California Civil Code section 1747 et seq., is a prohibition in section 1747.08 against retailers’ requesting or requiring a credit card holder’s personal identification information in order to process a credit card transaction. The Court had previously held that requesting and recording a Zip Code during a credit card transaction in a brick-and-mortar store is forbidden under the Act. Pineda v. Williams-Sonoma Stores, Inc., 51 Cal. 4th 524 (2011). The Court wrote in Apple that the plain meaning of the statute’s language was not decisive of the issue at hand, and that an analysis of the legislature’s statutory scheme as a whole was necessary. The Court also pointed out that section 1747.08 of the Act makes no reference to online transactions, which is unsurprising, given that the provision that later became section 1747.08 was enacted in 1990.
The plaintiff in the underlying trial court case alleged that Apple requested or required his address and telephone number in order to accept his credit card payment for electronically downloadable items. Apple demurred to the Complaint, arguing that online transactions fall outside the scope of the Act, and that holding otherwise would undermine the prevention of online identity theft and fraud. Although not addressed in the opinion, presumably Apple’s payment card processor cross-checks the address information provided by the customer against the payment card billing address as a method of verifying that the customer is the authorized cardholder.
The Court noted in its Apple decision various exceptions to the prohibition outlined in the Act, including where the retailer is contractually required to provide personally identifying information to complete the transaction, uses the Zip Code solely to prevent fraud, is obligated to collect information by a federal or state law, or collects the information for a purpose incidental but related to the credit card transaction (like shipping or delivery information). Furthermore, section 1747.08, subdivision (d) specifically states that the Act does not prohibit retailers from requiring safeguards, in the form of reasonable forms of positive identification, as a precondition to a credit card transaction.
The Court reasoned that because the law’s exceptions and its allowance for checking identification at the point of sale have no practical applicability to e-commerce transactions, the legislators could not have intended the law to apply to e-commerce transactions at all. The Court also appeared to be influenced by a desire to balance the protection of consumers from unwanted solicitation against the need to authenticate payment card purchasers who are not physically present to show an ID or sign a transaction form.
The Court did not identify what specific types of personally identifying information may permissibly be collected for authentication purposes. It held only that section 1747.08 cannot have been intended to apply to online sales of downloadable products, because holding otherwise would foreclose anti-fraud protections enabled by the collection of personal information during e-commerce transactions.
Data use and sharing disclosures on mobile devices need work, the FTC said in a staff report released last week. The report recommends ways that actors in the mobile marketplace—such as mobile operating system providers, application developers, advertising networks, and analytics companies—can inform consumers of data collection and sharing practices. While the FTC tailors recommendations for each group, the recommendations are essentially focused on providing consumers with timely and understandable data use disclosures. If such disclosures do not materialize, FTC Chairman Jon Leibowitz said to reporters in a teleconference discussing the report, the mobile industry may face regulatory or legislative mandates.
The report is in part the result of the FTC’s May 30, 2012 workshop, which brought together members of the mobile industry, trade associations, academia, and consumer privacy groups to discuss privacy issues presented by mobile devices. The report is also in response to increasing consumer concern about privacy on mobile devices.
While providing a wealth of benefits to consumers and players in the mobile marketplace, mobile devices have presented novel privacy issues because they are personal to the consumer and are used for numerous activities such as surfing the Internet and social networks, sending e-mails and messages, taking and sharing photographs, and simply making phone calls. Additionally, mobile devices are almost always turned on and are almost always with the user. All this facilitates new avenues and levels of data collection, but the space available for disclosures is limited to the size of the mobile device’s screen – often just a few inches.
While the report does not carry the force of law, it offers several suggestions for mobile privacy disclosures and provides a window into the FTC’s approach to mobile privacy. For instance, the report indicates that the FTC views adherence to a “strong privacy code” favorably and considers geolocation information to be “sensitive”—akin to financial, health, and children’s data.
The FTC report recommends the following with respect to specific actors in the mobile marketplace:
Operating System Providers:
- Provide disclosures and obtain consumers’ affirmative express consent before allowing apps to access data;
- Consider a one-stop “dashboard” approach and the use of icons to allow consumers to review the types of content accessed by apps and to depict the transmission of user data;
- Implement developer best practices that require developers to make privacy disclosures, enforce those requirements, and educate app developers;
- Provide clear disclosures about the extent to which the platform reviews apps before making them available for download; and
- Offer a Do Not Track function for mobile devices that allows consumers to prevent tracking by ad networks or other third parties.
App Developers:
- Provide layered disclosures and obtain affirmative express consent before collecting and sharing sensitive information (to the extent the platforms have not already done so);
- Coordinate with ad networks and other third parties such as analytics companies to better understand the third-party software and provide accurate disclosures to consumers;
- Participate in self-regulatory programs, trade associations, and industry organizations to develop uniform, short-form privacy disclosures.
Advertising Networks and Other Third Parties:
- Communicate with app developers to help them provide truthful disclosures;
- Work with platforms to ensure effective implementation of mobile Do Not Track.
Trade Associations, Academics, Experts and Researchers:
- Develop short-form disclosures for app developers;
- Promote standardized privacy policies that will enable consumers to compare data practices across apps;
- Educate app developers on privacy issues.
While the FTC has indicated that it will continue to monitor developments in the mobile marketplace and is open to further suggestions and proposals, it encourages actors in the mobile marketplace to implement the recommendations in the report. In the end, the FTC hopes the report will help build trust between businesses and consumers.
Two and a half years after initiating a review of the Children’s Online Privacy Protection Rule (the “Rule”), the Federal Trade Commission (FTC) announced on December 19, 2012 that the Rule will be amended (the “Amended Rule”) to clarify perceived ambiguities and to strengthen the Rule’s protections for children who engage in online activities, in light of significant technological changes in the online industry since the Rule went into effect more than 12 years ago.
The Amended Rule includes significant modifications, which are outlined below:
- The definition of “Personal information” – In a nod to new technologies and the prevalence of social networking, “personal information” has been expanded to include geolocation information and persistent identifiers that can be used to recognize a user over time and across different websites or online services, as well as certain usernames, photographs, videos and audio files. While this revision seems to nullify the ability to avoid falling under COPPA by keeping users anonymous through assignment of a unique number, persistent identifiers are covered by the Rule’s definition of personal information only if they can be used to track users across websites, and the FTC notes that parental notice and consent are not required when an operator collects a persistent identifier (like an IP address) solely to support the website’s or online service’s internal operations (e.g., contextual advertising, frequency capping, legal compliance, site analysis, and network communications). Acknowledging the likelihood of future technical innovation, the FTC has included in the Amended Rule a process whereby industry members may request that the FTC approve additional activities for inclusion within the definition of “support for internal operations.”
- The definition of “Operator” – The Amended Rule clarifies that a child-directed site or service that integrates outside services (such as plug-ins or advertising networks) which themselves collect personal information from the site’s or service’s visitors qualifies as an “operator” of those services. Thus, website operators are now responsible not only for their own compliance with the Amended Rule, but also for compliance by third parties who collect personal information on their websites using third-party services.
- The definition of “Website or online service directed to children” – The Amended Rule now explicitly provides that a “website or online service directed to children” also covers, for example, a plug-in or ad network when it has actual knowledge that it is collecting personal information directly from a user of a child-directed website or online service. However, exceptions from certain of the Rule’s requirements apply if certain conditions are not met. Additionally, the new Rule codifies an allowance previously articulated only in the FTC’s FAQ – the ability to differentiate child users from other users of a teen or general-audience site by asking the user his or her age.
- Parental Notice: The Amended Rule revises notice requirements so that privacy policies and direct notices to parents are concise (i.e., by removing extraneous information) and timely (i.e., through a “just-in-time” message).
- Consent Mechanisms: The Amended Rule allows for several new methods by which operators can obtain parental consent, including electronic scans of signed parental consent forms, video-conferencing, use of government-issued identification and alternative payment systems (i.e., debit cards and electronic payment systems). It also allows operators to petition the FTC or a safe harbor program to approve additional methods. The Amended Rule also slightly narrows the commonly used “multiple use” exception by adding a requirement that the contact information collected cannot be combined with any other information collected from the child. Finally, the FTC removed the little-used public key encryption exception.
- Confidentiality and Security Requirements: The Amended Rule provides that operators may release information subject to the Rule only to service providers and other third parties that are capable of adequately safeguarding and securing children’s data and that provide assurances that they will do so.
- Safe Harbor Audits: The Amended Rule requires that safe harbor programs (i.e., approved self-regulatory programs) audit their members annually and report the aggregated audit results to the FTC annually.
- Data Retention and Deletion: The Amended Rule now requires an operator of a website or online service to limit its retention of personal information to only as long as is reasonably necessary to fulfill the purpose for which the information was collected. The operator is also required to delete such information using reasonable measures to protect against unauthorized access to, or use of, the information in connection with its deletion.
Companies that operate websites or online services that are targeted to children under 13 years of age, or that knowingly collect personal information from them, should review their current data practices to ensure compliance with the Amended Rule, which is scheduled to go into effect on July 1, 2013.
Recently announced changes to the Health Insurance Portability and Accountability Act (HIPAA) Privacy and Security Rule represent one of the most significant developments in health care privacy law in the past 10 years. Known as the final omnibus rule, the changes were announced by the U.S. Department of Health and Human Services on January 17, 2013, and published in final form in the Federal Register on January 25. The rule greatly enhances patient privacy protections, provides individuals new rights to their health information and strengthens the government’s ability to enforce the law.
Click here to read a recent Client Alert that was published by Proskauer on the new rule. This alert explores the rule’s provisions and many of the new requirements it imposes on covered entities, business associates, and subcontractors engaged by business associates.
On January 10, 2013, President Obama signed into law H.R. 6671, an amendment to the Video Privacy Protection Act of 1988 (VPPA) codified at 18 U.S.C. § 2710, which will permit companies, such as Netflix, to obtain advance consent from consumers to automatically share their movie viewing history on social media sites. While Facebook users have been able to share on an ongoing basis articles that they have read or songs to which they have listened, the VPPA prevented a similar sharing of videos that they have watched. H.R. 6671 allows Netflix to bring its movie sharing Facebook app, available internationally for some time, to its United States subscribers.
As background, the VPPA was enacted in 1988 after a local newspaper released a list of the videos rented by controversial Supreme Court nominee Robert Bork. While the videos Bork favored were hardly salacious, legislators, reacting to what they viewed as an invasion of Bork’s privacy, quickly passed the VPPA. The VPPA prevents the disclosure of personally identifiable rental records by a “video tape service provider” without “written consent” from the viewer for each video or a warrant from the police.
Netflix and other online video companies believed that the VPPA, created in a pre-Internet world, was ambiguous in scope and in need of modernization. Digital distributors were unclear whether online consent qualified as written consent and whether the VPPA even applied to them since “video tape service provider” was defined as any person who dealt with “prerecorded video cassette tapes or similar audio visual material.”
The amendment to the VPPA gives companies the right to obtain advance consent, including electronic consent, from consumers for the ongoing sharing of their video rental information. Consent can be given (i) at the time disclosure is sought, or (ii) for a period not to exceed two years or until consent is withdrawn. The law also requires that service providers present the opportunity, “in a clear and conspicuous manner, for the consumer to withdraw on a case-by-case basis or to withdraw from ongoing disclosures.” H.R. 6671.
For those eager to effortlessly share their viewing tastes via social media, Netflix anticipates launching its social media apps in the United States within the year. As for consumers who would like to make their Facebook friends privy to their love of foreign films while keeping their Molly Ringwald viewing habit under wraps, the safeguards provided for in H.R. 6671 should allow for just that.
For the fourth time since the Massachusetts data security regulations took effect in March 2010, the Massachusetts Attorney General’s Office (“AGO”) has settled allegations that Massachusetts-based entities violated the regulations. On January 7, 2013, Suffolk Superior Court approved consent judgments pursuant to which five entities agreed to collectively pay $140,000 to settle allegations that they mishandled and improperly disposed of medical records containing personal information and protected health information. The settlement amount includes civil penalties, attorneys’ fees and an allocated amount for a data protection fund to support efforts to improve the security and privacy of sensitive health and financial information in Massachusetts. A copy of the complaint and corresponding consent judgments are attached here.
The medical records contained information relating to more than 67,000 residents, and included names, Social Security numbers, health insurance information and medical diagnoses that were not redacted or destroyed before they were discarded at a local transfer station. The five entities include Goldthwait Associates, which provided medical billing services, in addition to four pathology groups that worked with Massachusetts hospitals and medical centers.
The AGO alleged that Goldthwait Associates mishandled and disposed of medical records containing personal information and protected health information that it received from the pathology groups. In addition, the AGO alleged that the four pathology groups failed to have appropriate safeguards in place to protect the personal information they provided to Goldthwait Associates, and did not take reasonable steps to select and retain a service provider that would maintain appropriate security measures to protect such confidential information. The complaint alleged that Goldthwait Associates violated the Massachusetts Consumer Protection Act, M.G.L. c. 93A; the Massachusetts Data Disposal and Destruction Act, M.G.L. c. 93I; and the Massachusetts Security Breach Act and its corresponding regulations, M.G.L. c. 93H/201 CMR 17.00. In addition, the complaint alleged that the four pathology groups violated the Massachusetts Security Breach Act and its corresponding regulations, M.G.L. c. 93H/201 CMR 17.00; and HIPAA Privacy and Security Rules, 45 C.F.R. §§ 160 to 164.
Unlike the other data security violations prosecuted by the AGO where the settling entity was required to disclose a data breach to the AGO, this matter first became public in 2010 when a Boston Globe photographer was discarding his own garbage at the transfer station and noticed a large stack of paper which, upon closer inspection, he discovered to be medical records. It thereafter became apparent that the owners of Goldthwait Associates had recently retired and, in an effort to dispose of their records as cheaply and quickly as possible, had hired their son to discard the documents at a local transfer station. The complaint stated that Goldthwait’s “failure to institute and implement reasonable data security measures to protect the confidentiality of protected health and personal information entrusted to Goldthwait, and instead allow an untrained third-party to dispose of the documents at a dump, resulted in a serious violation of patient privacy and violations of state consumer protection and data security laws.”
Since the regulations went into effect in March 2010, the AGO has sent a consistent message of enforcement. In a statement announcing the January 7th settlement, Massachusetts Attorney General Martha Coakley stated: “Personal health information must be safeguarded as it passes from patients to doctors to medical billers and other third party contractors . . . . We believe this data breach put thousands of patients at risk, and it is the obligation of all parties involved to ensure that sensitive information is disposed of properly to prevent this from happening again.”
On January 17, 2013, U.S. Department of Health and Human Services Secretary Kathleen Sebelius announced the final omnibus rule that, among other things, (1) increases patient privacy protections; (2) provides individuals with new rights to receive a copy of their electronic medical record in an electronic form; and (3) provides individuals with the right to instruct their provider not to share information about their treatment with their health plan when they pay in cash. The new rule formally expands patient privacy and security requirements to business associates, contractors and subcontractors. The rule also strengthens the government’s ability to enforce the law, with increased penalties for noncompliance based on the level of negligence, up to a maximum penalty of $1.5 million per violation.
In announcing the new patient privacy protections, HHS Secretary Sebelius recognized that “[m]uch has changed in health care since HIPAA was enacted over fifteen years ago,” and stated that “[t]he new rule will help protect patient privacy and safeguard patients’ health information in an ever expanding digital age.”
More to follow as we dig deeper into the new privacy and security requirements.
Ever on the forefront of consumer privacy protection, California is again making news in the privacy world with the California Attorney General’s recent publication of “Privacy on the Go: Recommendations for the Mobile Ecosystem,” which includes privacy recommendations for app developers, app platform providers, mobile ad networks, makers of operating systems and mobile carriers. With this publication, California joins the FTC and the GSMA as entities that have published non-binding guidance with respect to mobile privacy (which we blogged about here and here, respectively).
In the publication, the Attorney General notes that these recommendations often “. . . offer greater protection than afforded by existing law, [and] are intended to encourage all players in the mobile marketplace to consider privacy implications at the outset of the design process.” The report outlines the following specific recommendations:
For App Developers:
- Start with a data checklist to review the personally identifiable data your app could collect and use it to make decisions on your privacy practices.
- Be transparent with respect to your privacy practices.
- Avoid or limit collecting or retaining personally identifiable data not needed for your app’s basic functionality.
- Give users access to personally identifiable data the app collects and retains about them.
- Use security safeguards.
- Be accountable for compliance with applicable laws.
- Use enhanced measures – “special notices” or the combination of a short privacy statement and privacy controls – to draw users’ attention to data practices that may be unexpected and to enable them to make meaningful choices.
For App Platform Providers:
- Make app privacy policies accessible from the app platform so that they may be reviewed before a user downloads an app.
- Use the platform to educate users on mobile privacy.
For Mobile Ad Networks:
- Avoid using out-of-app ads that are delivered by modifying browser settings or placing icons on the mobile desktop.
- Move away from the use of unchangeable device-specific identifiers and transition to app-specific or temporary device identifiers.
For Operating System Developers:
- Develop global privacy settings that allow users to control the data and device features accessible to apps.
For Mobile Carriers:
- Leverage your ongoing relationship with mobile customers to educate them on mobile privacy and particularly on children’s privacy.
While the California Attorney General acknowledges that the recommendations are just that – recommendations – it is clear that as “smart phones” become ubiquitous, more federal and state regulation will impact, in one way or another, all participants in the mobile ecosystem.
For the second year in a row, Proskauer has conducted a global survey, “Social Media in the Workplace Around the World 2.0,” which addresses the use of social media in the workplace. In 2012, Proskauer surveyed multinational businesses in 19 countries (Argentina, Brazil, Canada, China, the Czech Republic, France, Germany, Hong Kong, India, Ireland, Italy, Japan, Mexico, Singapore, South Africa, Spain, the Netherlands, the United Kingdom and the United States) in order to provide a worldwide perspective on workplace use of social media. The survey not only shed light on notable developments in the use of social media in the workplace, but also helped identify consistent traits across jurisdictions.
Despite legal and cultural differences, the survey revealed a surprising degree of commonality across jurisdictions as to best practices utilized by employers with respect to social media in the workplace. The five overarching best practices for companies are identified below.
Best Practice 1: Implement a policy dedicated to social media.
Employers should implement a policy dedicated to social media use that clearly sets out acceptable and unacceptable usage inside and outside the workplace as well as after employment comes to an end. The policy should comply with and be implemented in accordance with local requirements, including privacy laws.
The use of social media may have a detrimental impact on a company. If an employee discloses confidential information via social media, it can be particularly difficult for the employer to identify the individuals who have had access to that information. Unlike emails, where the list of recipients can usually be determined, the reach of an employee’s social media post depends on the confidentiality settings of that employee’s social media account.
Without clear policies, it can be difficult to lawfully sanction employees for misuse of social media. In Spain, the High Court of Justice of Madrid accepted offensive statements posted on Facebook by an employee as evidence towards the appropriateness of the employer’s disciplinary action. The High Court found the dismissal of the employee to be fair because the company’s code of conduct explicitly permitted disciplinary measures for offensive or defamatory remarks made by employees against the company (High Court of Justice of Madrid May 25, 2011).
Best Practice 2: If monitoring is implemented, be respectful of local requirements.
Employers who choose to monitor their employees’ social media use at work must have clear, express and well-communicated policies about the extent and nature of this monitoring. Employers must also comply with and implement their policies in accordance with local requirements.
For instance, in France, in order to implement a monitoring system, employers have to (i) inform and consult employee representatives, (ii) inform the employees who are impacted, and (iii) notify the French Data Protection Agency. Failure to comply with these steps may result in sanctions against the employer, suspension of the monitoring policy, and/or may make any evidence collected through the monitoring system inadmissible against the employee.
Best Practice 3: Implement a proportionate monitoring system.
Any monitoring should go no further than necessary to protect the employer’s business interests. Monitoring should be conducted only by designated employees who have been trained to understand the limits on permissible monitoring and to comply with local privacy requirements, including with respect to the safe storage, confidentiality and onward transfer of personal data.
Employers must balance the necessity to protect their corporate interests against the privacy rights of employees.
Best Practice 4: Be cautious when using information from social media to recruit or discipline employees.
Before relying on information from social media sites to make employment-related decisions, employers should proceed cautiously. Not only could the information be inaccurate, but making decisions based on such information creates the risk of unlawful discrimination, a breach of data privacy requirements and/or infringement of an employee’s rights to privacy.
In Italy, it is not permissible to refer to social media sites when recruiting and selecting candidates. In France, although not prohibited by law, recruiters abide by a Code of Conduct which stresses that selection of applicants should be based only on their professional skills and should exclude all elements pertaining to their private lives.
When contemplating disciplinary measures based on information gleaned from social media, employers must try to balance their employees’ rights to free speech and privacy against business considerations. Certain rights weigh more heavily in some countries and jurisdictions than in others, and may affect a particular employer’s decision.
A comparison of decisions by the Employment Tribunal in England and the French Court of Appeals highlights the country-specific nature of balancing employer and employee rights. In both cases an employee posted comments on Facebook that debased the reputation of the employee’s respective employer. In the United Kingdom, the Employment Tribunal ruled that the employee’s privacy settings, which limited access to his posts to his Facebook friends, did not give him a reasonable expectation of privacy over these posts. Since his employer had been harmed by his derogatory comments, his dismissal was found proper. However, the French Court of Appeals held that the termination of an identically situated employee was unlawful, and found that there was no evidence that the negative posts could be read by people other than the Facebook friends of the employee. Therefore the employee’s Facebook wall was considered private and dismissal of the employee was found to be improper.
Best Practice 5: Protect your company’s confidential information.
The misuse of confidential information by employees via social media is emerging as a major issue. Employers should not only address this concern through social media policies, but should also amend provisions dealing with misuse of confidential information to explicitly cover social media. This is particularly important for companies subject to specific regulations and professional confidentiality obligations like the banking and medical sectors.
As social media becomes more prevalent in the workplace, it is increasingly important to consider its global implications, especially for multinational companies.