
FTC v. Google 2012 – Misrepresentation of Compliance with NAI Code a Key Element

Posted by fgilbert on August 9th, 2012

Google was hit with a $22.5 million penalty as a result of an investigation by the Federal Trade Commission covering Google’s practices with users of the Safari browser. A very interesting aspect of this new case against Google (Google 2) is that it raises the issue of Google’s violation of the Self-Regulatory Code of Conduct of the Network Advertising Initiative (NAI Code). This is an interesting evolution in the history of FTC rulings. At first, the FTC focused on violations of privacy promises made in Privacy Statements; then it went on to pursue violations of the Safe Harbor Principles. In this new iteration, the FTC attacks misrepresentation of compliance with an industry standard.

Misrepresentation of user’s ability to control collection or use of personal data

Two elements distinguish this case (Google 2) from most of the prior enforcement actions of the FTC. One is that the large fine results, not directly from the actual violations of privacy promises made in Google’s privacy policy, but rather from the fact that Google’s activities are found to violate a prior settlement with the FTC, dated October 2011 (Google 1).

In Google 1, Google promised not to misrepresent:

  • (a) The purposes for which it collects and uses personal information;
  • (b) The extent to which users may exercise control over the collection, use and disclosure of personal information; and
  • (c) The extent to which it complies with, or participates in, a privacy, security, or other compliance program sponsored by the government or any other entity.

According to the FTC complaint in Google 2, Google represented to Safari users that it would not place third-party advertising cookies on the browsers of users who had not changed the default browser setting (which, by default, blocked third-party cookies), and that it would not collect or use information about users’ web-browsing activity. These representations were found to be false by the FTC, resulting in a violation of Google’s obligations under Google 1 (see paragraph (b) in the bulleted list above).

Misrepresentation of compliance with NAI Code

The second, and more interesting element of the Google 2 decision, is the FTC analysis of Google’s representation that it adheres to, or complies with the Self-Regulatory Code of Conduct of the Network Advertising Initiative (NAI Code). In the third count of the FTC Complaint in Google 2, the FTC focuses on Google’s alleged violation of the NAI Code.

This alleged violation allows the FTC to show that Google violated its obligation under Google 1 to not “misrepresent the extent to which it complies with, or participates in, a privacy, security, or other compliance program sponsored by the government or any other entity” (see the requirement under (c) in the bulleted list above). The FTC found that the representation of Google’s compliance with the NAI Code was false, and thus violated its obligation in Google 1 not to make any misrepresentation about following compliance programs.

Evolution of the FTC Common Law

Google 2 shows an interesting evolution of the FTC “Common Law.” In its prior cases, the FTC first focused on violations of companies’ privacy promises made in their public Privacy Statements. Then, in several consent orders published in 2011, including Google 1, the FTC expanded the scope of its enforcement actions to violations of the Safe Harbor framework of the US Department of Commerce and the EU Commission. Now, with Google 2, the FTC again expands the scope of its enforcement actions to include violations of industry standards such as the NAI Code.

What this means for businesses

The Google 2 Consent Order has significant implications for all businesses.

Companies often use their membership in industry groups as a way to show their values and to express their commitment to certain standards of practice. Beware which industry group or program you join, and understand its rules. As a member of that group or program, you must abide by its code of conduct, rules, or principles. Make sure that you do, and that all aspects of your business comply with these rules.

When a business publicizes its membership in an industry group or a self-regulatory program, it also publicly represents that it complies with the rules or principles of that group or program: for example, those of the Safe Harbor (as was the case under Google 1), those of the NAI (as was the case under Google 2), or others. Remember that these representations may have significant consequences, and may create a minefield if not attended to properly. To stay out of trouble, the company must also make sure that these representations are accurate, and that it abides by these promises at all times and with respect to all of its products.

When a company makes a public commitment to abide by certain rules, it must make sure that it does comply with these rules; otherwise, it is exposed to an unfair and deceptive practice action. Make sure that you periodically compare ALL promises your business makes, with what ALL of your products, services, applications, technologies, actually do.


Mobile App Privacy Webinar on April 19, 2012

Posted by fgilbert on April 17th, 2012

On Thursday, April 19, 2012, at 10am PT / 1pm ET, I will be moderating and presenting at a one-hour webinar organized by the Practising Law Institute: “A New Era for Mobile Apps? What Companies Should Know to Respond to Recent Mobile Privacy Initiatives”.

The webinar will start with an overview of the technologies and ecosystem that surround the operation and use of mobile applications, presented by Chris Conley, Technology and Civil Liberties Attorney, ACLU Northern California (San Francisco).

Patricia Poss, Chief, BCP Mobile Technology Unit, Federal Trade Commission (Washington DC), will then comment on the two reports recently published by the Federal Trade Commission: “Mobile Apps for Children” (February 2012) and the final report “Protecting Consumer Privacy in an Era of Rapid Change” (March 2012), both of which lay out a framework for mobile players.

I will follow with an overview of the recent agreement between the California State Attorney General and six major publishers of mobile apps, which sets up basic rules and structures for the publication and enforcement of mobile app privacy policies, and the Consumer Privacy Bill of Rights, which was unveiled by the White House in February 2012.  I will end with suggestions for implementing privacy principles in the mobile world.

To register for this webinar, please visit the PLI website.



Never too Small to Face an FTC COPPA Action

Posted by fgilbert on November 9th, 2011

Some companies think that they are small, can fly under the radar, and need not worry about compliance. They should rethink their analysis of their legal risks after the recent FTC action against a small social networking site.

On November 8, 2011, the FTC announced a proposed settlement with the social networking site, which had collected personal information from children without obtaining prior parental consent, in violation of COPPA, and had made false statements in its website privacy notice, in violation of the FTC Act.

In this case, the personal information of 5,600 children was illegally collected, far fewer records than in some of the recent FTC COPPA enforcement actions. For example, the 2006 action against Xanga revealed that Xanga had collected 1.7 million records; the 2008 action against Sony revealed that Sony had collected 30,000 records; and the 2011 action against W3 Innovations identified 50,000 illegally collected records.

The Problem

The social networking site Skid-e-kids targeted children ages 7-14 and allowed them to register, create and update profile information, create public posts, upload pictures and videos, send messages to other Skid-e-kids members, and “friend” them.

According to the FTC complaint, the website owner – a sole proprietor – was charged with:

  • Failing to provide sufficient notice of its personal data handling practices on its website;
  • Failing to provide direct notice to parents about these practices; and
  • Failing to obtain verifiable parental consent.

In addition, these practices were found to be misleading and deceptive, which in turn was deemed to violate Section 5 of the FTC Act.

The site’s online privacy statement claimed that the site requires child users to provide a parent’s valid email address in order to register on the website; that it uses this information to send the parent a message that can be used to activate the Skid-e-kids account; that it notifies the parent about its privacy practices; and that it can use the contact information to send the parent communications about features of the site.

According to the FTC, however, Skid-e-kids actually registered children on the website without collecting a parent’s email address or obtaining parental permission for the children to participate. Children who registered were able to provide personal information, including their date of birth, email address, first and last name, and city.

The Proposed Settlement

The proposed Consent Decree and Settlement Order is against Jones O. Godwin, the sole owner of the site. The proposed settlement would:

  • Bar Skid-e-Kids from future violations of COPPA and from misrepresentations about the collection and use of children’s information;
  • Require the deletion of all information collected from children in violation of the COPPA Rule;
  • Require that the site post a clear and conspicuous link to On Guard Online, the FTC site focusing on the protection of children’s privacy, and that the site privacy statement, as well as the privacy notice for parents, also contain a reference to the On Guard Online site;
  • Require that, for 5 years, the company engage qualified privacy professionals to conduct annual assessments of the effectiveness of its privacy controls, or become a member in good standing of a COPPA Safe Harbor program approved by the FTC;
  • Require that, for 8 years, records be kept to demonstrate compliance with the above.

A lenient fine … subject to probation

An interesting aspect of the proposed settlement is that it imposes, in effect, only a $1,000 fine on the defendant. The fine is to be paid within five days of the entry of the order. However, if Skid-e-Kids fails to comply with certain requirements of the settlement, it will have to pay the full $100,000 fine that is provided for in the settlement.

Specifically, the $100,000 fine will be assessed if:

  • The defendant fails (a) to have initial and annual privacy assessments (for a total of 5 annual assessments) conducted by a qualified professional approved by the FTC, identifying the privacy controls that have been implemented and how they have been implemented, and certifying that the controls are sufficiently effective; or (b) to become and remain a member in good standing of a COPPA Safe Harbor program approved by the FTC for 5 years; or
  • The disclosures made about the defendant’s financial condition are materially inaccurate or contain material misrepresentations.

The Lesson for Sites with Children’s Content

This new case is a reminder that the COPPA Rule contains specific requirements that must be followed, no matter the size of the site, when it intends to collect children’s personal information. The COPPA Rule defines procedures and processes that must be followed rigorously.

Among other things, the COPPA Rule requires websites that are directed to children, and general audience websites that have actual knowledge that they are collecting children’s information, to:

  • Place on their websites a conspicuous link to their privacy statement;
  • Provide specified information in the website privacy statement; describe in clear terms what personal information of children is collected and how it is used; and explain what rights children and parents have to review and delete this information;
  • Provide a notice directly to the parents, which must include the website privacy statement, and inform the parents that their consent is required for the collection and use of the children’s information by the site, and how their consent can be obtained;
  • Obtain verifiable consent from the parents before collecting or using the children’s information;
  • Give parents the option to agree to the collection and use of the children’s information without agreeing to the disclosure of this information to third parties.

In addition, we suggest also including, clearly and conspicuously, (a) in the website privacy statement; (b) in the notice to parents; and (c) at each location where personal information is collected, a notice that invites the user to visit the On Guard Online website of the Federal Trade Commission for tips on protecting children’s privacy online.

New EU Directive on Consumer Rights Affects Website Terms

Posted by fgilbert on November 9th, 2011

In late October 2011, the European Council of Ministers formally adopted the new EU Consumer Rights Directive. The new Directive will drastically affect the rules that apply to online shopping. Numerous provisions will also apply to both the online and the offline markets.

Scope of the Consumer Rights Directive

The Directive is intended to protect “consumers,” i.e., all natural persons who are acting for purposes that are outside their trade, business, craft, or profession. It creates new obligations for “traders,” a broad term that encompasses all categories of persons who sell products or services. The Directive defines the term “trader” as any natural or legal person who is acting, directly or indirectly, for purposes relating to his or its trade, business, craft, or profession in relation to contracts covered by the Directive. These contracts include: sales contracts, service contracts, distance contracts, off-premises contracts, and public auction contracts that are concluded between a trader and a consumer.

There are numerous exceptions, such as contracts for healthcare services, for financial services, for the construction of new buildings, for package travel, for passenger transport services, or contracts concluded by means of automatic vending machines.

Effect on US Companies

US companies that operate websites that sell to European customers, as well as their affiliates who make direct sales to EU consumers, must start evaluating the numerous consequences that the implementation of the Directive on Consumer Rights will have on their operations. The consequences include:

  • Practical consequences: The Directive introduces a new way of doing things. Thus, there will be a need to adapt the existing processes, procedures, and interactions with the customer to the new order. Forms and purchase orders will have to be revised.
  • Logistics: The Directive encourages returns. Under the new regime, customers will have 14 days to change their minds and return the purchased goods. Thus, the rate of returns will increase, and logistics will have to change to allow the company to handle a heavier rate of returns.
  • Financial consequences: Merchants and traders will have to bear more costs. For example, hotline services will be permitted to charge only the basic telephone rate for phone calls.
  • Rewrite of terms: Terms of sale will have to be clearer and more explicit. For example, additional charges must be clearly explained, or the customer will not bear these charges. Thus, new terms will have to be drafted in order to communicate better with customers.

Overview of the changes

The Directive will require extensive changes in the Consumer Protection Laws of the Member States, including changes to implement the following requirements:

  • Pre-ticked boxes on websites will be banned

Pre-ticked boxes will be banned, so that consumers do not inadvertently get charged for options or services that they did not intend to purchase. Currently, consumers are frequently forced to untick these boxes if they do not want extra services.

  • Price transparency will be increased

Consumers will not have to pay charges or other costs if they were not properly informed before they place an order. Traders will be required to disclose the total cost of the product or service, as well as any extra fees.

  • Hidden charges and costs on the Internet will be eliminated

Consumers will be required to explicitly confirm that they understand that they have to pay a price. This measure is expected to prevent the hidden charges and costs that arise when companies try to trick consumers into paying for “free services,” such as horoscopes or recipes.

  • Surcharges for the use of hotlines prohibited

Traders who operate telephone hotlines allowing the consumer to contact them in relation to the contract will not be able to charge more than the basic telephone rate for the telephone calls.

  • Surcharges for the use of credit cards prohibited

Traders will not be able to charge consumers more for paying by credit card (or other means of payment) than what it actually costs the trader to offer such means of payment.

  • Better consumer protection in relation to digital products

Information on digital content will have to be clearer, including about its compatibility with hardware and software and the application of any technical protection measures, for example digital rights management applications, which limit the right for the consumers to make copies of the content.

  • 14 Days to change one’s mind on a purchase

Consumers will be able to return the goods that they purchased if they change their minds within 14 calendar days. This change extends by 7 days the current period during which purchases can be returned. In addition, if a seller has not clearly informed the customer about the right to return the goods, the return period will be extended to a year.

The 14-day return period will start from the moment the consumer receives the goods. The rules will apply to Internet, phone, and mail order sales, sales outside shops (e.g. on the consumer’s doorstep, in the street, at a home party or during an excursion organized by the trader).

The right of withdrawal is extended to online auctions, such as eBay. However, the ability to return goods bought in auctions will be limited to goods bought from a professional seller. In the case of digital content, such as music or video downloads, consumers will have a right to withdraw from purchases of digital content only up until the moment the actual downloading process begins.

  • Better refund rights

Traders will be required to refund consumers for the product within 14 days of the withdrawal. This includes the costs of delivery. In general, the trader will bear the risk for any damage to goods during transportation, until the consumer takes possession of the goods.

  • Clearer information on who pays for returning goods must be provided

Traders who want the consumer to bear the cost of returning goods after a change of mind will be required to clearly inform consumers of this requirement beforehand. Otherwise, they will have to pay for the return themselves.

At a minimum, they will have to clearly give, before the purchase, an estimate of the maximum costs of returning bulky goods (e.g. a sofa) bought on the Internet or through mail order.

  • Common rules will apply throughout the European Union

A single set of rules for distance contracts (sales by phone, post or internet) and off-premises contracts (sales away from a company’s premises, such as in the street or the doorstep) will apply throughout the European Union. Standard forms will be used, such as a form to comply with the information requirements on the right of withdrawal.

Implementation in the national laws

The EU Member States will have two years to implement the Directive into their national laws. The deadline for implementation will be computed from the date of publication of the Directive in the Official Journal of the European Union.

Based on experience with the implementation of other directives, we can expect that several EU countries will have implemented the Consumer Rights Directive by the end of 2013, and that the remainder will follow during the following years. As always, the manner in which each country implements the Directive will be crucial. If the member states diverge in their interpretations of the Directive, websites that reach customers across borders will have to juggle these discrepancies.

Relations with existing directives

The Directive on Consumer Rights will replace the current Directive 97/7/EC on the protection of consumers in respect of distance contracts and the current Directive 85/577/EEC to protect consumers in respect of contracts negotiated away from business premises.

However, Directive 1999/44/EC on certain aspects of the sale of consumer goods and associated guarantees and Directive 93/13/EEC on unfair terms in consumer contracts will remain in force.


How to Build a Winning Privacy Program

Posted by fgilbert on October 27th, 2011

Many companies post on their websites a statement indicating that they care about the privacy of their customers or users, and then describe in general terms their policies with respect to certain categories of personal information. The golden rule for these privacy statements is “Say what you do, and do what you say you do.” Let’s assume that the company actually “said what it does”; that the disclosures in its privacy statement are accurate, complete, and up to date; and that they clearly describe the company’s commitment to protect personal information. How, then, does it ensure that it “does what it said it does”?

How can CEOs and Boards of Directors ensure that the company in their custody actually does what its privacy statement provides? Failure to act in accordance with the privacy statement could cause the company to be investigated by one or several of the federal or state enforcement agencies. These enforcement actions have often resulted in the investigated entity agreeing to be supervised by the enforcement agency for 20 years, as was recently the case for Google. Fines in the millions may have to be paid, as was the case for Sony, ChoicePoint, and others. The company could also become the target of a suit for fraud and misrepresentation, breach of contract, negligence, and much more. There, again, the disruption, damages, and lawyers’ fees could be crippling.

To ensure that it acts in accordance with its public commitment to protect the privacy of its users and customers, a company must have a “Privacy Program” that addresses as appropriate the different aspects of privacy protection that attach to the personal information that it collects, processes, or shares with third parties. In the recent settlement of the Federal Trade Commission investigation of Google, Inc., the FTC has provided its views and requirements for a “Privacy Program.” This excellent and concise description can serve as a blueprint for companies that understand that they must build a Privacy Program to implement and support their privacy statements.

According to the Federal Trade Commission, a Privacy Program intended to protect customer and third party information must meet the following requirements:

Design and Analysis

The Privacy Program must be reasonably designed to:

·   Address the privacy risks related to the development and management of new and existing products and services for consumers; and

·   Protect the privacy and confidentiality of personal information.

Meeting the Needs of the Company

The Program must contain privacy controls and procedures appropriate to the company’s size and complexity, the nature and scope of its activities, and the sensitivity of the personal information that it has committed to protect, or that it is required by law to protect.

Components of the Privacy Program

The Privacy Program must include at least the following:

·   A responsible person

The company must designate one or several individuals to coordinate and be responsible for the Privacy Program.

·   An analysis of needs

The Program must identify what personal information is to be protected according to the promises made in its Privacy Statement(s) and its other legal obligations. It must then identify the reasonably foreseeable, material risks, both internal and external, that could result in the company’s unauthorized collection, use, or disclosure of personal information.

·   An assessment of the risks

The program must include an assessment of the sufficiency of any safeguards in place to control the risks of unauthorized collection, use, or disclosure of personal information. This assessment should include consideration of risks in each area of relevant operation. At a minimum, this assessment should include an assessment of the design and development of products, and the management and training of employees.

·   Privacy Controls and Procedures

Reasonable privacy controls and procedures should be designed and implemented to address the risks identified through the privacy risk assessment.

·   Testing and Monitoring

The effectiveness of these privacy controls and procedures should be regularly tested and monitored. Infringers should be disciplined.

·   Control of Service Providers and Third Parties

Reasonable steps and measures should be developed and used to identify and retain service providers capable of appropriately protecting the privacy of personal information that these third parties receive from the company. Written contracts should require these service providers to implement and maintain appropriate privacy protections.

·   Evaluation and Adjustment

The Privacy Program should include a process that ensures that the Program is periodically evaluated and adjusted in light of the results of the testing and monitoring and of any material changes to the company’s operations or business arrangements, and any other circumstances that the company knows or has reason to know may have a material impact on the effectiveness of its Privacy Program.


The content and implementation of the Program must be documented in writing.

The program described above is intended to address the protection of customers, clients, and other individuals with whom a company interacts. A slightly different guidance would apply in the case of the collection and processing of employee personal information, since this information is usually collected in a different manner, held and used by different people, and is subject to different laws. However, all companies do have a legal obligation to protect the personal information of their employees, and they would equally benefit from taking the steps described above to ensure the proper protection of their employee personal information.

Action Item

It is not enough to make statements and representations in a document. A company or other entity that wants, or is required by law, to have a privacy policy must also adopt a plan or Privacy Program that identifies and implements the appropriate policies, procedures, processes, and measures – including discipline – needed to ensure that there is substance behind its privacy statement, and that the policy that these statements describe is actually implemented and followed.

Posted in Best Practices

Compliance By Design

Posted by fgilbert on October 15th, 2011

How do you build cloud applications that anticipate your customers’ legal constraints?

To succeed and gain market share, developers of cloud services and cloud-based applications must take into account the compliance needs of their prospective customers. For example, a cloud that offers services to the health professions must anticipate that its customers are required to comply with HIPAA, the HITECH Act, and the applicable state medical information laws. If it fails to do so, it will not be able to sign up customers. Similarly, a cloud that uses servers located throughout the world must be sensitive to the fact that foreign data protection laws will apply, and that these laws have stringent requirements that differ from those in effect in the United States. If you fail to address these obstacles, your potential customers will take their business elsewhere.

Understand the Legal Constraints that Govern your Customers

Companies that use cloud services or cloud-based applications remain responsible for fulfilling their legal obligations and compliance requirements. These restrictions and requirements may come from federal or state laws and their related regulations, may stem from standards or from preexisting contracts, or may result from foreign laws.

These companies will demand that their cloud service providers be aware of these requirements and design their applications and offerings in such a manner that they provide the customer with the necessary tools to comply with its own legal or contractual obligations.

A savvy cloud architect, designer, or developer will anticipate customers’ needs and design applications that facilitate the customers’ compliance efforts and help them fulfill their legal obligations.

Consider, for example, the following:

– Federal Laws

Numerous federal laws and their related regulations may apply to the specific category of data that are hosted in the cloud. Several laws and regulations, as well as orders issued by the Federal Trade Commission, require companies to adopt specific privacy and security measures to protect certain categories of data, and to pass along these requirements when entering into a contract with a third party such as a service provider or a licensee.

There are other requirements as well, such as ensuring the authenticity and integrity of financial records in order to comply with the Sarbanes-Oxley Act. On the marketing side, anti-spam and other laws limit the use of personal data for commercial purposes and require the use of exclusion databases to ensure that communications are made only to the appropriate parties.

– State Laws

Numerous state laws also create obligations for companies, and these obligations follow the data when the data are entrusted to third parties. For example, there are restrictions on the use of social security numbers or driver’s license numbers. If your application requires the processing of these data, it should include the technology required to mask the numbers from most users, and to block mailings that would disclose these protected numbers, when required by law.

Some state laws require that companies enter into written contracts with their service providers – including of course cloud providers – and these contracts must contain very specific provisions. If you are not prepared to sign these contracts and abide by the related requirements, do not waste time building a cloud application.

– Standards

Standards such as PCI DSS or ISO 27001 define specific information security requirements that apply to companies, and flow down to subcontractors, in a domino effect similar to that of federal or state laws.

– Foreign Laws

Cloud customers will also want to know in which country their data will be hosted, because the location of the data directly affects the choice of the law that will govern the data. If the data reside in a foreign country, it is likely that that country’s laws will govern at least some aspects of access to the servers where the data are hosted. For example, that country’s law may permit the local government to have unlimited access to the data stored in its territory, whereas you may be more familiar with the stricter restrictions on access to US-stored data by US law enforcement.

– Cross-Border Transfer Prohibitions

When servers are located abroad, there is also a significant obstacle: the prohibition against cross-border transfers of personal data. This is the case, for example, throughout the European Union, where the member states have implemented in their national laws the 1995 EU Data Protection Directive’s prohibition against transfers of personal data out of the European Economic Area to countries that do not offer an adequate level of protection for personal data and privacy rights.

As part of your Compliance by Design endeavor, you should anticipate that your customers might be concerned about where the personal data of their employees or clients will be hosted or located, because foreign data protection laws may impose restrictions on these data. And you should design your offering accordingly.

Ensure Personal Data Protection

A substantial amount of data that might be held in the cloud will be personal data. In the US and abroad, personal data are protected by a growing number of privacy and data protection laws. In general, these laws place on the entity that originally collected the data, and has become the custodian of these personal data, an obligation to protect the privacy rights of the individuals to whom the data pertain.

In a cloud environment, each entity or data steward must continue to be able to fulfill the legal requirements to which it is subject and to meet the promises and commitments that it made to the third parties from whom it collected the personal data. It must also ensure that individuals’ choices about their information continue to be respected, even when the data are processed in a cloud environment. For example, individuals may have agreed only to specific uses of their information. Data in the cloud must be used only for the purposes for which they were collected, whether the data were collected in or through the cloud, or otherwise.

Anticipate the Need to Provide for Access, Modification, and Deletion of Personal Data

In addition to the above, the applicable law or privacy notice may allow individual data subjects to have access to their personal data, and to have this information modified or deleted if inaccurate or illegally collected. In this case, the cloud service provider must design its application in anticipation of the fact that the application will have to allow, easily and conveniently, for the exercise of these access, modification and deletion rights to the same extent and within the same timeframes, as it would in an off-cloud relationship.
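As a rough illustration of what such a design might anticipate, the minimal Python sketch below dispatches access, modification, and deletion requests. The record names, fields, and in-memory store are all invented for illustration; a real cloud application would route these requests to its persistence layer, authenticate the requester, and audit each action.

```python
# Hypothetical in-memory store of personal data, keyed by data subject.
records = {"alice@example.com": {"name": "Alice", "city": "Paris"}}

def handle_request(action, subject, updates=None):
    """Dispatch a data-subject access / modify / delete request."""
    if action == "access":
        return dict(records.get(subject, {}))   # copy, not a live reference
    if action == "modify":
        records.setdefault(subject, {}).update(updates or {})
        return dict(records[subject])
    if action == "delete":
        return records.pop(subject, None)
    raise ValueError("unsupported action: %s" % action)
```

The point of sketching this early is the one made above: the ability to honor these rights easily, and within the legally required timeframes, should be built into the application rather than bolted on later.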

Ensure Adequate Information Security

You should also be prepared to address your customer’s security needs. All data entrusted to you will require a reasonable level of security, whether they are the photos of the company picnic, or the secret formula for that special product for which your customer is famous. In addition, many categories of data that might be hosted in the cloud, such as personal data, financial data, customer purchases and references, or R&D data are sufficiently sensitive to require being protected through more extensive security measures.

The obligation to provide adequate security for personal data stems from numerous privacy and data protection laws, regulations, standards, cases, and best practices. For some categories of data, such as personal data or company financial data, specific laws or security standards require the use of specific security measures to protect these data. These laws and standards include, among others, the Sarbanes-Oxley Act, GLBA, HIPAA, Data Protection Laws in Europe or Asia, as well as the PCI DSS and the ISO 27001 security standards. Further, the common law of information security created by the FTC or State Attorney General rulings also requires that adequate security measures be used to protect sensitive data. The obligation to maintain a reasonable level of security may also result from contracts or other binding documents in which the cloud customer has previously committed to a third party that it would use adequate security measures.

You should design the security foundation and architecture of your cloud offering to address the applicable security requirements of the market that you wish to reach. You should also be prepared to commit to your client that you will use specified information security measures to protect the personal data processed through your cloud application.

Be Prepared to Disclose Security Breaches

Security incidents are prone to occur. The US states and an increasing number of foreign countries have adopted security breach disclosure laws that require the custodian of specified categories of personal data to notify individuals whose data might have been compromised in a breach of security. Frequently, the local State Attorney General, Data Protection Supervisory Authority, or other government agency must be notified, as well.

If a security incident occurs in the cloud, the customer – who usually maintains the primary contact with the concerned individuals – expects to be informed of the incident, so that it can, in turn, notify the affected business contacts, employees or clients of the occurrence of the breach. To do so, the cloud customer must have been informed promptly of the occurrence, nature, and scope of the breach of security.

Thus, as a cloud service provider you should have in place the processes necessary to identify a security breach, and to promptly notify your customers of the occurrence of the breach. Just like your own customers, you should have in place a security incident response plan to address the security breach thoroughly and expeditiously, promptly stop any leakage of data, eliminate the cause of the breach of security, identify who and which category of data were or might have been affected, and interact with your customers to mitigate the effect and consequences of the breach.

Ensure Business Continuity

Your customers and prospects may also be required by law or by contract to ensure the continuity of their operations and uninterrupted access to their data. This is the case, for example, under the HIPAA Security Safeguards. A hospital that provides technology or medical information database services to the physicians on its staff must provide continued access to patient information in order to ensure proper patient care. This requirement applies as well to the business associates that provide services to the hospital. The PCI DSS standard also requires companies to have an incident response plan that includes business recovery and continuity procedures.

When these applications are hosted in a cloud, the customers or prospects will want to ensure that the cloud service provider has in place proper business continuity and disaster recovery capabilities because they are essential to ensure the viability of their own operations and in some cases because this is required by applicable law. Thus, if you design a cloud offering, be sure to plan and implement appropriate disaster recovery and business continuity measures, so that you can help your customers meet their own business continuity requirements.

Be Prepared to Assist your Client with its E-Discovery Obligations

If there is a civil suit in which the cloud service customer is a party, or if there is an investigation by a government agency, the cloud service provider is likely to receive a request for access to the information that it holds as the hosting entity. This request may come directly from the customer, for the benefit of the customer, or it may come from third parties who wish to have access to evidence against the customer.

You should anticipate your customers’ request for assistance in implementing a litigation hold or responding to a request for documents. You should be ready to respond to inquiries from your prospects or potential customers about how you will work and cooperate with them to address compliance with the requirements of the E-Discovery provisions of the Federal Rules of Civil Procedure and the State equivalents to these laws. You should plan and agree ahead of time on each other’s roles and responsibilities with respect to litigation holds, discovery searches, the provision of witnesses to testify on the authenticity of the data, and the provision of primary information, metadata, log files and related information.

Anticipate Requests for Due Diligence and Monitoring

Whether it is required to do so by law, by contract, or otherwise, your customer or prospect will also want to conduct due diligence before entering into the contract, and will also want to be able to periodically monitor the performance and security of your applications. Consider, for example, the monitoring and testing requirements under the Security Safeguards under HIPAA or GLBA, or those in the orders issued by the Federal Trade Commission or the State Attorneys General.

Be prepared to respond to these requests for due diligence, monitoring, or inspection, and provide for the cloud customer’s ability to conduct its investigation in a manner that satisfies the customer’s needs while not disrupting your operations. For example, develop a security program that is consistent with industry standards, provide easy-to-access logs of access to data, and put in place controls that prevent the modification of data.
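One common technical approach to controls that prevent the undetected modification of data is a tamper-evident access log, in which each entry incorporates a hash of the previous entry, so that any later alteration breaks the chain. The Python sketch below is illustrative only; the entry format and field names are assumptions.

```python
import hashlib
import json

def _digest(entry):
    # Hash the entry's content fields plus the previous entry's hash.
    payload = {k: entry[k] for k in ("user", "record", "op", "prev")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_entry(log, user, record_id, operation):
    """Append a hash-chained entry recording who accessed which record."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"user": user, "record": record_id, "op": operation, "prev": prev_hash}
    entry["hash"] = _digest(entry)
    log.append(entry)
    return log

def verify_chain(log):
    """Return True only if no entry has been modified or reordered."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev or _digest(entry) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

An auditor (or the customer itself) can then re-verify the chain at any time; a single edited or deleted entry causes verification to fail.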

Ensure a Smooth Termination

No one wants to lose a good customer. Be realistic, however, and accept that termination might occur. Do not be an obstacle to the termination of a contract, or your reputation will suffer. Show your prospective clients that they can trust you, and that they will not be held hostage if they want to move on.

Accept that, in case of termination of the contract, the cloud customers must be able to retrieve their data, or to have data that are no longer needed destroyed. Make it easy for them to do so; show respect for, and awareness of, your customers’ own constraints. Be prepared to respond to a customer’s request for the return, transfer, or destruction of the data, assess in advance the costs associated with it, and have in place the technology, processes and procedures to be used to address the special needs resulting from termination.

Planning for termination will reduce disputes and the resulting disruptions. If termination is not planned properly, problems might occur. The data might have been commingled with other customers’ data to save space or for technical reasons. This entanglement might make it difficult, time consuming, expensive, or perhaps impossible to disentangle the data.


If you want your cloud offering to be successful, put yourself in your customers’ shoes. Anticipate their needs. Help them comply with their obligations. Design a cloud offering that will allow them to continue to comply with their own obligations in the same way as they did when their data, files, trade secrets, and other crown jewels were in their direct control.

Comments Off on Compliance By Design

Hot issues in Privacy & Security

Posted by fgilbert on May 23rd, 2011

Top ten list of issues presented by Francoise Gilbert as part of her Conference Chair address, at the PLI Privacy & Security Conference in San Francisco, May 23-24, 2011.

# 10 –
In the US, numerous privacy and security bills in the pipeline
Greater compliance burden expected

# 9 –
Abroad, new data protection laws enacted

# 8 –
Security breach continues to be top concern in the US and
More security breach notice laws are developing abroad
Cost of breach expected to increase everywhere

# 7 –
EU data protection 2.0
Back to the drawing board with new rules

# 6 –
Tracking and profiling entering the red zone

# 5 –
Tempest in the EU cookie jar

# 4 –
Everything mobile
Geolocation major source of privacy issues

# 3 –
Cloud computing saves money
But brings new legal headaches

# 2 –
Privacy by design, Right to be forgotten, Smart grid
New legal constraints or technical opportunities?

# 1 –
Privacy and security fiascos becoming very expensive
Million-dollar damages in privacy or security suits and enforcement actions

A copy of the presentation is available here.

Comments Off on Hot issues in Privacy & Security

When Will Your CEO’s Social Media Postings End-Up in a Court Room?

Posted by fgilbert on October 7th, 2010

Social networks such as Facebook and MySpace allow members to create an online profile that may be accessed by other members.  Some social networks have privacy controls that allow members to choose who can view their profiles or contact them.  Others do not require pre-approval to gain access to a member’s profiles.

These materials are easy targets for trial or litigation attorneys who may wish to use them to impeach the opposing party or its witnesses.

According to two recent opinions of State Bar Associations regarding the ethics of accessing materials posted on a social networking site to gather information for pending litigation, it is appropriate to access and use this information so long as the information is publicly posted on the social network site.  If access is restricted, the lawyer may not employ deception – e.g., “friending” the targeted person under false pretenses – to access these materials.

These two opinions provide concrete examples of why it is important for companies to ensure the propriety and accuracy of their executives’ and employees’ postings on social media.  If litigation occurs, attorneys will not hesitate to look for material on social media, for use as evidence or in order to impeach litigants or witnesses.  So long as the postings are not protected by the appropriate privacy and security settings, it will be fair to use their content in litigation.

The New York State Bar Association opinion addresses access to the pages of a party to the litigation, while the Philadelphia opinion addresses access to non-public pages of a witness.

Access to the Public Pages of a Litigant

The New York State Bar Association (NYSBA) opinion was issued in response to an inquiry about accessing parts of a litigant’s postings on social networks that are accessible to all members of the network.

NYSBA opined that a lawyer who represents a client in a pending litigation, and who has access to the social network used by another party in litigation, may access and review the public social network pages of that party to search for potential impeachment material provided that the lawyer does not employ deception (including, for example, employing deception to become a member of the network).

As long as the lawyer does not “friend” the other party or direct a third person to do so, NYSBA considers that accessing the social network pages of the party would not violate the ethics rules prohibiting deceptive or misleading conduct and false statements of fact or law.

Obtaining information about a party available on social media is similar to obtaining information that is available in publicly accessible online or print media, or through a subscription research service such as Nexis.

Access to the Restricted Pages of a Litigant

NYSBA’s opinion pointed to the major difference between the facts above, and an attempt at friending someone in order to access information for which access is restricted.  In a footnote to its opinion above, NYSBA commented that an attempt to friend an unrepresented party would violate the rule that prohibits a lawyer from stating or implying that he or she is disinterested, and requires the lawyer to correct any misunderstanding as to the lawyer’s role.

Further, a lawyer’s attempt at friending a represented party in a pending litigation would violate the “no-contact” rule, which prohibits a lawyer from communicating with the represented party about the subject of the representation without the prior consent of the litigant’s lawyer.

Access to the Restricted Pages of a Witness

A few months earlier, the Philadelphia Bar Professional Guidance Committee addressed the propriety of friending an unrepresented adverse witness in a pending lawsuit to obtain potential impeachment material.   The lawyer wanted to cause a third party to access the social media pages maintained by a witness in order to obtain information that might be useful for impeaching the witness at trial. These pages were not generally accessible to the public, and required that the witness have previously allowed someone to friend her.

The Guidance Committee concluded that the proposed conduct would violate the Pennsylvania Rules of Ethics that (a) prohibit conduct involving dishonesty, fraud, deceit or misrepresentation, (b) prohibit false statements of fact or law to a third person, and (c) hold lawyers responsible for the acts of third parties under their supervision.  The proposed friending by a third party would constitute deception as well as a supervisory violation because the third party who would friend the witness would omit a material fact: that the third party would be seeking access to the witness’ social networking pages solely to obtain information for the lawyer to use in the pending lawsuit.

How Companies Can Reduce the Risk of Backlash from Postings on Social Media

The two opinions discussed above provide concrete examples of why it is very important to monitor managers and employees’ postings on social media.  If litigation occurs, opposing counsel will not hesitate to look for material on social media, in order to impeach the other party or its witnesses.  So long as the postings are not protected by the appropriate privacy settings, the two ethical opinions above indicate that it will be fair to use them in litigation.

Given that some social networks are notorious for changing privacy settings without prior clear and conspicuous notice to their members, and happily sharing and publishing more than what some of their members intended, it is essential to keep in mind  – or assume – that any posting on a social network is likely to become public.  The unwanted publication may result from the negligence of the author or of his “friends”, or from technical or procedural glitches caused by the social network host or its service providers and partners.

Appropriate training of employees and executives to raise their awareness of the consequences of their use of social networks is essential to help reduce the likelihood of mishaps.  Colorful descriptions of one’s day at the office may become critical evidence to impeach a witness or question the truthfulness of his statements.

In order to provide guidance, increase appreciation of the wonders and dangers of social media, and raise the awareness of employees and executives, consider the following:

  • Establish rules and guidelines about what may or may not be posted on social media, in blogs, in user forums and other public forums about activities of the company;
  • Update and refine these rules as incidents occur and lessons are learned;
  • Arrange for periodic reminders about these rules, such as brown bag lunches, newsletters, or formal training sessions;
  • Organize periodic situational training for executives and employees to ensure that they appreciate the nature and extent of the threats and risks to which information is exposed and the potential results if the materials were used against them or against the company;
  • Ensure that executives and employees understand the frailty of Internet walls and that any statement anywhere on the Internet is likely to become public at some point – most likely in a court room (and if not, on the first page of the paper or the news site);
  • Periodically monitor the postings by employees and executives in order to verify compliance;
  • Discipline the infringers accordingly.


New York State Bar Association – Opinion 843 (September 2010).

Philadelphia Bar – Opinion 2009-02 (March 2009).

Posted in Best Practices
Comments Off on When Will Your CEO’s Social Media Postings End-Up in a Court Room?

Google Engineer Fired for Accessing User Accounts

Posted by fgilbert on September 17th, 2010

Google fired a software engineer because he allegedly took advantage of his position as a member of an elite technical group at the company to access user accounts in violation of the company policy.  Accounts accessed included those of four minors whom he had encountered through a technology group, according to reports by CNN and Gawker.

While there is no allegation of sexual predatory behavior, the engineer appears to have spied on minors’ accounts, and accessed their contact lists and chat transcripts.

Given Google’s size, it is almost predictable that an incident such as this would happen. When a company has thousands of employees, it is just a matter of statistics and probability. If X% of the country’s population is immature, emotionally unstable, or has other personal problems, it is likely that these same characteristics will appear in the workforce of companies, despite the employers’ attempts at identifying problem employees and preventing the occurrence of any mishap.

Events similar to the Google firing have occurred in hospitals where employees have taken advantage of their access privileges to snoop into celebrities’ health records.  In those cases, patient records were copied or stolen for the purpose of selling them to the press. As a result, California enacted a law – California Health & Safety Code Section 1280.15 – that requires hospitals and clinics to prevent the unlawful or unauthorized access to patients’ medical information and to report these incidents. The law provides for significant fines for hospitals and clinics that fail to provide adequate protection for patients’ records.  Since the enactment of the law, several hospitals have been fined.

It is very difficult to predict and anticipate incidents such as the one that occurred at Google. Human behavior is too unpredictable. There are, however, a few things that a company can do to attempt to prevent this type of situation, or reduce the probability of their occurrence.

Reference checks

Before hiring or promoting an employee, adequate reference and background checks should be conducted. While most companies conduct a reference check when hiring a new employee, in many cases, the investigation is informal, and is limited to acquiring a better understanding of the person’s skills. These reference checks should be adapted to the nature of the position and the rights and responsibilities that the new hire will have.

Background checks

When an applicant’s responsibilities will give him access to sensitive information, such as personal data or company trade secrets, his background should be checked extensively. An in-depth evaluation might include conducting a criminal record investigation and interviewing character witnesses. This type of investigation is highly regulated, and requires significant precautions. While the administrative burden and financial cost of conducting these in-depth investigations are substantial, the cost is negligible when compared to the potential effect on the company’s reputation and market capitalization that a security or privacy incident might have.


Training

It is also crucial to train the employee (or contractor) appropriately. Initial and ongoing training, periodic reminders, and other education regarding privacy and awareness are essential to help reduce the probability of these occurrences. Young or immature employees, in particular, need appropriate, focused education and awareness sessions for them to acquire the right reflexes when confronted with the temptation to “play God” with a database.


Monitoring

In addition to education and awareness, it is important to ensure that the lessons learned during the training sessions are actually applied in practice.  In other words, the company should regularly monitor the employees’ activities. Companies have a responsibility to their clients and the other employees to ensure that the workforce abides by its rules of ethics and behavior. They also have an obligation to their shareholders to ensure that the company’s assets (including its intellectual property assets and its reputation) and market value are not jeopardized through the negligence, immaturity or other behavior of their employees. To this end, employee supervision and periodic monitoring of their activities are crucial for identifying derailments while they are still manageable. Many technologies are available for this purpose.


Hotlines

Companies can also supplement their monitoring through the use of whistleblowing hotlines and customer hotlines that allow employees and customers to report problems that they identify.  These hotlines must be administered in such a way as to ensure anonymity, when needed.  The information collected must be reviewed and the matter investigated promptly and with appropriate discretion to protect the individuals concerned.

A company or a group is only as good as its weakest link.  It is a daunting task – but a necessary one – to ensure at all times that all employees understand and abide by the rules.

Comments Off on Google Engineer Fired for Accessing User Accounts

Lessons from FTC v. Twitter

Posted by fgilbert on August 17th, 2010


Security is not just for credit card and social security numbers

The proliferation of security breach disclosure laws has brought companies’ attention to the need to protect financial information, social security numbers, and driver’s license numbers. Since most of these laws target only these categories of data, and most state laws that require the use of security measures have also focused on these categories, many companies have limited their information security efforts to the protection of a small amount of data: credit cards, social security and driver’s license numbers. As a result, other categories of data that have not been in the limelight or the subject of investigative reporting have been neglected.

The recent FTC action against Twitter provides a significant warning that information security measures must not be limited to a small set of data. Rather, companies that collect personal data must provide adequate security measures to all types of data in their custody, according to the nature and probability of the risks to which these data are exposed. Each category of data is to be protected with measures that are appropriate to the nature of these data, the risks to these data, and the promises made by the company to its users.

Series of Security Breaches

Not so long ago, Twitter was an early stage start-up with a tight budget. As such, the company had its own ways of doing business on a dime. The company grew very quickly to become a prominent social networking company with users on all continents.  However, in the course of this commercial expansion, it failed to adapt its security practices to the magnitude of its reputation and nature of its subscribers.

A succession of security breaches from January through May 2009 revealed significant deficiencies in Twitter’s information systems and networks. During this period, Twitter suffered security breaches that allowed hackers to access users’ accounts and non-public personal data, such as email addresses, IP addresses, and mobile phone numbers. The hackers were also able to reset passwords and send messages from user accounts. Among the widely reported hacks were fake tweets purportedly from sources such as then-President-elect Obama and Fox News.

Access to user accounts was possible due to inadequate administrative controls. According to the FTC complaint, hackers accessed Twitter’s administrative accounts by submitting “thousands of guesses” using a password guessing tool. It was not difficult to guess the passwords of the administrative accounts because many were dictionary words without numbers or special characters.

Failure to provide reasonable and appropriate security

In its privacy policy, Twitter claimed that it employed “administrative, physical, and electronic measures designed to protect” nonpublic user information from unauthorized access. It also stated on its website that direct messages “are not public; only author and recipient can view direct messages” and that if users did not want to keep their account public they could make their account private, which would give users control over who follows them and who can view their tweets.

The FTC investigation, however, revealed that for three years from July 2006 to July 2009, Twitter did not take reasonable and appropriate measures to prevent unauthorized administrative control of its system. Among the deficiencies, the FTC found that Twitter failed to:

  • Require administrative passwords to be complex;
  • Prohibit administrative passwords from being stored in plain text in personal email accounts;
  • Disable or suspend administrative accounts after a certain number of unsuccessful login attempts;
  • Provide an administrative login page exclusive to authorized persons and separate from the login webpage provided to other users;
  • Require and enforce that administrative passwords be changed periodically;
  • Restrict access to administrative controls to only those who need access;
  • Impose other reasonable restrictions on administrative access, such as by restricting access to specified IP addresses.
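Several of the deficiencies in this list translate directly into simple technical controls. The Python sketch below shows what enforcing password complexity and suspending accounts after repeated failed logins might look like; the thresholds and complexity rules are illustrative assumptions, not requirements taken from the FTC's order.

```python
import string

MAX_ATTEMPTS = 5  # illustrative lockout threshold

def is_complex(password):
    """Reject short or dictionary-style passwords lacking varied characters."""
    return (len(password) >= 12
            and any(c in string.digits for c in password)
            and any(c in string.punctuation for c in password)
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password))

failed_attempts = {}

def record_failed_login(account):
    """Count a failed login; return True when the account should be suspended."""
    failed_attempts[account] = failed_attempts.get(account, 0) + 1
    return failed_attempts[account] >= MAX_ATTEMPTS
```

A production system would add more: rate limiting, a separate administrative login page, and periodic forced password changes, as the FTC's list suggests.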

Consent Decree

The proposed consent decree, for which comments were to be sent by July 26, 2010, provides that Twitter, Inc. will enter into a consent agreement for its violation of Section 5 of the FTC Act. Under the terms of the settlement, Twitter is barred for 20 years from misleading its users about the extent to which it protects the security, privacy, and confidentiality of non-public consumer information. The agreement requires Twitter to establish, implement, and maintain a comprehensive information security program that is “reasonably designed to protect the security, privacy, confidentiality, and integrity” of nonpublic user information.

The program must be documented in writing and must contain appropriate administrative, technical, and physical safeguards. The safeguards must be appropriate to Twitter’s size and complexity, the nature and scope of its activities, and the sensitivity of the nonpublic user information. Among other things, Twitter must:

  • Designate an employee to be responsible for coordinating the information security program;
  • Identify reasonably foreseeable, internal and external material risks that could result in the unauthorized disclosure, misuse, loss, alteration, destruction, or compromise of nonpublic user information, and assess the adequacy of the safeguards in place to control these risks;
  • Design and implement reasonable safeguards to control the risks identified through risk assessment;
  • Regularly test and monitor the effectiveness of the safeguards’ key controls, systems, and procedures;
  • Take reasonable steps to select service providers capable of appropriately safeguarding nonpublic user information, and enter into contracts that require them to implement and maintain appropriate safeguards;
  • Periodically evaluate and adjust its information security program.

In addition, Twitter must obtain assessments and reports on the efficacy of its security program from a qualified, independent third-party professional every two years for 10 years. The assessment must include a review of the administrative, technical, and physical safeguards that Twitter has implemented and maintained during the reporting period; an explanation of how the safeguards are appropriate to Twitter; and an explanation of how the safeguards meet or exceed the requirements set out above.

Lessons from the Twitter Case

Since the late 1990s, the Federal Trade Commission has developed a common law of privacy and data protection based on FTC Act Section 5’s bar against unfair and deceptive trade practices. Numerous FTC enforcement actions have targeted companies that suffered a breach of security that compromised financial information, credit information, or credit card information.

In its first information security case against a social networking site, the FTC shifts into a higher gear, and reminds companies of the need to apply adequate security measures to all information, not just to credit card and social security numbers.

The significance of the Twitter case is not that it is the first case that targets a social networking company. What is more important is that the case focuses on the protection of data other than “the big four” (i.e., social security, driver’s license, financial, and credit card information). The Twitter case is an important reminder that a company’s information security plan must address all categories of personal data that the company collects or hosts, and provide each category with a level of protection reasonably adapted to the nature of the information and the risks to it.

Twitter has learned the hard way that its unique power to reach the world in a few seconds comes with a commensurate obligation to adequately protect the same information that is needed to launch a tweet. Like Twitter, each company has its own set of data, with its own unique vulnerabilities. It needs to address these vulnerabilities in accordance with the level of risk to each category of data, which is unique to the particular circumstances of the company.

Posted in FTC, Best Practices
Comments Off on Lessons from FTC v. Twitter