
CHAPTER III - Managing the Hidden Liabilities of Digital Assets

The Risks of Poorly Managed Data

While new types of digital assets offer many social and commercial benefits, they also come freighted with distinct liabilities. One session of the conference therefore focused on the responsibilities and trust associated with the management of personal data. A failure to meet either explicit legal responsibilities or implied social expectations can result in a serious loss of trust, harm to brand reputation and even lost revenues. Breaches in secure management of digital assets can also invite regulatory scrutiny and intervention, which may result in cumbersome and costly legal and government oversight.

The only way to overcome the challenge of poor stewardship of personal data, wrote tech commentator Om Malik on the day of the conference, is for “companies themselves [to] come up with a clear, coherent and transparent approach to data. Instead of an arcane Terms of Service, we need plain and simple Terms of Trust. To paraphrase Peter ‘Spiderman’ Parker’s Uncle Ben—with big data, comes big responsibility. The question is will the gatekeepers of the future rise to the challenge?”

In approaching this topic, Jerry Murdock, Co-Founder of Insight Venture Partners, pointed out that “identity” is not really a unitary concept, but rather a complicated idea with multiple use cases, which he identified as follows:

Authentication: Is this really you?
Authorization: Do I have your permission?
Representation: Here is who I claim I am.
Communication: How do I connect?
Personalization: What do you prefer?
Reputation: How do others regard you?

Vijay Sondhi, Senior Vice President of CyberSource and Authorize.net, both of Visa, Inc., gave a brief overview of the many liabilities associated with digital assets, most of them associated with security breaches and unauthorized disclosures of personal information. Sondhi stressed that in addition to trying to prevent breaches, another challenge is figuring out “what to do when breaches occur—because systems do get hacked, especially with organizations like merchants who are not otherwise in the business of security. Cyberattacks are hard to prevent.” Sondhi reported that thousands of attempted hacks occur at businesses every day. The point is that any software and management system must be designed in advance for resiliency so that in addition to preventing breaches, companies can also recover quickly and minimize the impact of events when they occur.

In recent years there has been a parade of significant data breaches among major retailers and companies, including eBay, Neiman Marcus, Adobe, Target and Home Depot. The Target data breach, first reported in December 2013, exposed 40 million credit card numbers along with addresses, phone numbers and other personal information. The episode was the largest hack of a retailer’s data up to that point. Following the breach, compounded by an unusually cold winter and losses at its Canadian stores, Target experienced a 16 percent decline in profit driven by slower in-store traffic. This in turn led to major upheavals within the company, including the departure of the CEO and CIO.

The odd thing about security breaches of data is that the repercussions for sales volume and company reputation can vary greatly, said Bill Coleman. Another significant hack of personal data—an 18-month attack on the retailer T.J. Maxx that ended in May 2007—had little lasting impact on either the reputation or sales of the company, he noted. Yet the Target hack six years later became a veritable landmark in the history of security breaches because of the significant public reaction.

All of this suggests that the ways in which retailers access people’s personal data—the social context and expectations of data collection— matter a great deal. It is common for website retailers such as Amazon to collect all sorts of data about a person’s browsing and shopping habits, and then to issue personalized recommendations about what to buy. The public may or may not realize such data collection is going on, but even when discovered, it is mostly accepted.

But the reaction was quite different when the retailer Nordstrom decided that it would like to try to collect similar data about in-store shoppers. In a pilot experiment, it tracked customer identities and movements within the store using low-powered Bluetooth technology. It even required people to log into fitting rooms and tracked what apparel items were tried on. The experiment provoked such outrage among shoppers that it was quickly shut down.

Sondhi concluded that such an episode shows how the notion of “brand permission” can be a very unpredictable thing. What is acceptable online may not be acceptable in physical stores. “People had no problem when online retailers told them, ‘You bought these ten things. Would you like these others?’ In fact, some consumers liked that.”

Leaving aside the social variables that define a perceived breach of personal trust, there is general agreement that the technologies for preventing breaches can and must improve, if only because the scale and intensity of hacker attacks is so great. Sondhi reported that “the number of attacks and instances of malware have increased hundreds of fold over the last ten years.”

The Target breach has catalyzed the adoption of a more secure “chip” technology for credit and debit cards in the U.S. Each card contains a small computer chip with a little operating system on it that generates a unique one-time code for each transaction. “When the system went into effect in the UK,” said Sondhi, “counterfeit fraud dropped significantly. This chip is extremely secure, so if card data is compromised in a breach, it cannot be used for counterfeit fraud.”
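The chip mechanism Sondhi describes can be illustrated with a minimal sketch. The code below is hypothetical: real EMV cards use issuer-specific cryptogram schemes rather than HMAC-SHA-256. The point is only that a secret key that never leaves the chip yields a fresh code for every transaction, so a code captured in a breach cannot be replayed.

```python
import hmac
import hashlib

def transaction_code(card_secret: bytes, counter: int, amount_cents: int) -> str:
    """Derive a one-time code from a chip-resident secret.

    Hypothetical sketch: binds the code to a transaction counter and
    amount, so every transaction produces a different value.
    """
    msg = counter.to_bytes(4, "big") + amount_cents.to_bytes(8, "big")
    return hmac.new(card_secret, msg, hashlib.sha256).hexdigest()[:8]

secret = b"chip-resident key, never exported"
code1 = transaction_code(secret, counter=1, amount_cents=2599)
code2 = transaction_code(secret, counter=2, amount_cents=2599)
assert code1 != code2  # each transaction yields a fresh code
```

Because the verifier recomputes the code from the same secret and counter, a stolen one-time code is useless for a second, counterfeit transaction.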

Yet card fraud was not eliminated; it simply moved to other arenas. In the UK, it simply moved online, at about the same percentage as previously occurred in retail stores. A similar dynamic occurred in Canada and Australia when the chip card was introduced there.

The U.S. continued to rely on a magnetic strip, rather than move to a chip card, because card-issuing banks did not want to pay the $10 needed to put the chip on each card. Retailers, for their part, were not eager to pay for new terminal readers to process chip-based cards. In the aftermath of the Target breach, such misgivings were swept away. Target will begin using chip cards in 2015, and in October 2014 President Obama ordered the use of chip cards for U.S. Government credit card payments by 2015 as well.

Innovative Technologies to Stop Data Fraud

Participants mentioned a number of technological innovations that are making it harder to hack into data systems, but they conceded that the problem amounts to an ongoing arms race. Each side is constantly trying to outmaneuver the innovations developed by its adversaries.

One new approach that is now being rolled out involves “tokenization.” Rather than allowing banks to retain all customer data, the Visa affiliate CyberSource will issue “network tokens,” employing a public/private key security system. By separating transaction data from a customer’s personal information, via tokenization, retailers can be protected from having to be the stewards of “too much information.”
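The tokenization idea can be sketched as follows. All names here are invented for illustration: a hypothetical vault, held by the network rather than the merchant, maps each card number to a random token, so the merchant stores only a value that reveals nothing about the card.

```python
import secrets

class TokenVault:
    """Toy token vault: the network keeps the card number (PAN);
    the merchant keeps only a random token."""

    def __init__(self):
        self._pan_by_token = {}

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)  # random; carries no card data
        self._pan_by_token[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the network can map a token back to the real card number.
        return self._pan_by_token[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert token != "4111111111111111"
assert vault.detokenize(token) == "4111111111111111"
```

If the merchant's systems are breached, the attacker obtains tokens rather than card numbers, which is precisely the "too much information" burden the scheme removes.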

Michael Fertik of Reputation.com reported that a variety of online businesses, such as TigerText and Whisper, are offering encrypted “ephemeral communications.” These communications are designed to self-destruct after a given period of time, on the theory that personal information that is not stored anywhere is far less vulnerable to unauthorized capture. There are also new forms of “homomorphic encryption” that allow computers to perform operations on data without looking at the actual data, said Fertik.
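The homomorphic property Fertik mentioned can be demonstrated with textbook Paillier encryption, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can add numbers it cannot read. The sketch below uses small primes and no padding, and is for illustration only, not for real security.

```python
from math import gcd
import secrets

# Textbook Paillier with small primes (illustration only, not secure).
p, q = 104723, 104729
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                            # valid because g = n + 1

def encrypt(m: int) -> int:
    while True:
        r = secrets.randbelow(n)
        if r > 1 and gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(41), encrypt(1)
assert decrypt((c1 * c2) % n2) == 42  # sum computed on ciphertexts alone
```

The holder of the private key sees only the final sum; the party doing the arithmetic never sees the data, which is the property Fertik described.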

A variety of sophisticated new authentication and cryptographic methods have also emerged in recent years, said John Clippinger of ID3. The more notable systems include OpenID Connect, OAuth 2, zero-knowledge proofs, Byzantine Fault Tolerance and Merkle trees. Such systems can allow a user to authenticate his or her identity without revealing a legal name or other attributes to the identity provider.
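The zero-knowledge flavor of authentication can be sketched with the classic Schnorr identification protocol, in which a prover demonstrates knowledge of a secret exponent without ever revealing it. The parameters below are toy-sized and illustrative only.

```python
import secrets

# Toy Schnorr identification: the prover knows x with y = g^x mod p,
# and convinces the verifier without revealing x. Tiny parameters.
q = 1019                 # prime order of the subgroup
p = 2 * q + 1            # 2039, also prime
g = 4                    # generator of the order-q subgroup

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # public identity

# One round of the protocol:
k = secrets.randbelow(q - 1) + 1
t = pow(g, k, p)                   # prover's commitment
c = secrets.randbelow(q)           # verifier's random challenge
s = (k + c * x) % q                # prover's response

assert pow(g, s, p) == (t * pow(y, c, p)) % p  # verifier accepts
```

The response s leaks nothing usable about x because k is fresh and random each round, yet the check g^s = t * y^c can only succeed if the prover really knows x.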

An interesting discovery is that the algorithms used by social networks are much richer and more reliable as predictors of a person’s identity than the more conventional fraud algorithms used by payment systems. So attempts are being made to incorporate some of these social data-points into security systems—even as hackers try to reverse-engineer the new approaches. The arms race continues.

Bill Coleman, Partner with Alsop Louie Partners, agreed that we need to find new mechanisms to separate authentication from authorization. “We have to enable the verification of someone’s identity without having to actually pass along personal information. Until we do that,” he said, “things are going to get worse, especially as we move into the Internet of Things,” where the identities of a vastly larger universe of devices will need to be securely established.

Another strategic shift in improving data security: the use of rapid and constant feedback loops from the network ecosystem so that a security method can improve itself through peer learning. Cory Ondrejka of Facebook noted that distributed groups of hackers often develop near-perfect communication links among themselves so that they can search for the weakest points in a computer system and share that knowledge. This approach was also used by guerrilla fighters in Iraq, who made rapid damage assessments of their roadside IED [improvised explosive device] attacks so that they could learn from the success and failure of each one.

Hacker groups that are networked may pose more of a threat than isolated groups, but they are also more vulnerable precisely because their multiple interconnections provide more points of detection. Ondrejka reported that Facebook had recently shut down a bot network consisting of 250,000 computers.

Can Law, Technology and Social Practice Be Blended?

John Clippinger agreed with Ondrejka that the architecture for secure authentication and identity must become more dynamic and evolutionary. He cited a famous 1881 book by Oliver Wendell Holmes, Jr., “The Common Law,” which “brilliantly describes a biological model for how the law should evolve—moving from custom to social norm to common law and statutes, and then back again. That is a vital, creative process that the technology should embrace.” One lesson that we can take from “The Common Law,” Clippinger said, is that statutes should focus on a very high, meta-level of public need so that social practice and custom can experiment and evolve and, in time, generate the most stable consensual forms of law.

He cited the core principles of documents like the Consumer Data Bill of Rights promulgated by the European Union. Such a document can be the structure for a range of specific implementations that conform to the principles, with dynamic social practices actually enacting and enforcing the principles. “Safe harbor” provisions in statutes can be used to sanction open-ended experimentation and innovation, in effect providing “meta-rules” for social and technology-based enforcement. In turn, companies that devise effective implementations can reap branding advantages from their systems, and groups of users can use contract law to validate social agreements among themselves in the use of digital assets.

Clippinger envisions the rise of “smart contracts” that are digitally encoded and enacted using computer scripting language. He reported that software algorithms can be technically designed to “negotiate” mutual understandings and self-execute them in virtual spaces, without resort to conventional paper contracts, lawyers and courts.
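A minimal sketch of such a self-executing agreement, with all names invented for illustration: once both parties signal approval, the code itself releases the funds, with no intermediary, lawyer or court involved.

```python
from dataclasses import dataclass, field

@dataclass
class Escrow:
    """Toy 'smart contract': terms are enforced by code rather than
    by a court. Hypothetical sketch only."""
    amount: int
    approvals: set = field(default_factory=set)
    released: bool = False

    def approve(self, party: str) -> None:
        self.approvals.add(party)
        self._maybe_execute()

    def _maybe_execute(self) -> None:
        # Self-executing clause: release only when both sides agree.
        if {"buyer", "seller"} <= self.approvals:
            self.released = True

deal = Escrow(amount=500)
deal.approve("buyer")
assert not deal.released      # one signature is not enough
deal.approve("seller")
assert deal.released          # contract executes itself
```

Real smart-contract platforms add tamper resistance and shared state, but the core idea is the same: the agreement and its enforcement are one artifact.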

A persistent undercurrent in so many debates about privacy and security is that nothing can be done about it and people do not really care. Another participant noted that despite publicity about data security breaches, people rarely take steps to improve or change their passwords.

Fertik strenuously disputed the idea that people do not care. “People believe they can reasonably rely on institutions and infrastructure after a data breach occurs. They may not be fully justified in their feeling. But the costs of breaches do get absorbed into the system and create change. The Target breach resulted in their adoption of the chip and PIN system. And there is always, always liability following a breach. If someone does not handle a breach well, the liability grows very quickly.”

Fertik cited how the costs of dealing with security and privacy threats in earlier generations of the Internet and computing technology “have been ingested” by leading tech companies and institutions, which have made needed changes in the infrastructure. Similarly, in earlier generations of payment systems, the merchant banks, retailers, credit card companies, and others have taken steps to prevent or diminish security harm.

While it may be difficult to roll back the open availability of personal data that now exists, Fertik emphasized that the real struggle is over the “next 9,000 data points. What should be required of the Visas and Facebooks and Googles to develop a new set of norms for the next 9,000 pieces of data?”

“What is missing from today’s discussion is the idea of insurance,” said Fertik. “If there is any industry that is just pregnant with opportunity, it is digital cyber-insurance risk. An intelligent insurance platform would allow everyone to socialize the risk.”

The problem with this idea, said Bill Coleman, is that “it is backward-looking and presumes a steady-state world.” In addition, he said, it is exceedingly difficult to quantify the value of harm to organizations when so much of their capitalization value is based on intangibles— intellectual property, brand, consumer goodwill, social trust and other non-physical assets.

Another participant pointed out that there are no reliable accounting protocols for putting a price on a company’s intangible values. The larger macro-economic liabilities of mismanaging data assets are not really known, either. It may be that we simply need a “black swan” catastrophe or significant loss experience in order to get a provisional grip on how to set prices for such damage, and how to set suitable insurance premium rates.

Perhaps some incentives to responsible stewardship of data could arise by assigning property rights to intangible digital assets, said Professor Thomas Malone.

There is an opportunity to do something analogous to what Creative Commons did for copyrighted works. We could establish different classes of data, and provide different classes of contracts for each one, specifying the rights and responsibilities that belong to each one. For example, you could say that certain data requires the consent of all participants in its creation if the asset is to be sold or transferred in some way. For other kinds of data, you might require only one-party consent. Another licensing term might be mandatory licensing. The contracts could include various kinds of penalties for breaches and exceptions for law enforcement. There could even be a whole library of these digital assets and transaction types, with different contract terms for each one.
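Malone's proposal can be sketched as a hypothetical library of data classes with different consent rules, checked in code before a transfer proceeds; every name below is invented for illustration.

```python
# Hypothetical data-licensing classes in the spirit of Creative Commons:
# each class specifies whose consent a sale or transfer requires.
ALL_PARTIES = "all-party consent"
ONE_PARTY = "one-party consent"

class DataAsset:
    def __init__(self, creators, license_class):
        self.creators = set(creators)
        self.license_class = license_class

    def may_transfer(self, consenting) -> bool:
        if self.license_class == ALL_PARTIES:
            # Every participant in the asset's creation must agree.
            return self.creators <= set(consenting)
        # Otherwise any single creator's consent suffices.
        return bool(self.creators & set(consenting))

photo = DataAsset(["alice", "bob"], ALL_PARTIES)
assert not photo.may_transfer(["alice"])
assert photo.may_transfer(["alice", "bob"])
```

A fuller version would add the penalty clauses and law-enforcement exceptions Malone mentions, but the essential move is encoding rights and responsibilities per class of data.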

Michael Fertik warned that similar “badging systems” to simplify websites’ terms of service have failed because they have been created by lawyers, who tend to be focused on static categories of behavior and complicated exceptions. There are also jurisdictional issues that complicate enforcement, and companies often legitimately need certain flexibility beyond the standard “badges.”
