CHAPTER SEVEN

LIBERTY AND PRIVACY

Escaping the Algorithmic Prison

IF SOME LATTER-DAY PATRICK HENRY were to stand up in Congress today and reprise his “Give me liberty or give me death” speech, the target of its invective would likely be the rulers of the Internet.

The liberation Henry demanded in 1775 was not just from England. After the Revolutionary War had been fought and won, he refused to attend the Constitutional Convention, lest a too-strong national government become as tyrannical as that of King George III. To counter the fears of Henry and other anti-Federalists, a Bill of Rights was appended to the Constitution that explicitly upheld individuals’ rights of worship, a free press, trial by jury, the right to petition the government, and so on.

In the world of the Autonomous Revolution, the most insidious threats to our liberties are posed not by governments (in democratic countries, at least), but by the commercial enterprises and groups with social, political, and belief agendas that are increasingly reading our files and our minds, predicting and seeking to influence our behaviors, and controlling our actions and access to information and opportunities.

When James Madison drafted the first ten amendments to the Constitution, incursions on human liberties occurred in physical space. In response, property lines could be drawn on maps. Doors had locks. Prison was a place that you were taken to in handcuffs. Trials took place in courtrooms, where defendants were represented by attorneys and judgments were rendered by juries of their peers.

It is fair to assume that if Madison had known about information and spatial equivalences, his Bill of Rights would have been more expansive. If he had been able to imagine virtual space—the space created on the Internet by institutions such as Facebook, Google, and Amazon—he would have demanded that liberty and freedom be guaranteed there, too. If he had anticipated that individuals would be tried and convicted in absentia by invisible algorithms, he would have searched for a way to prevent that. If he had understood the ability of commercial enterprises and groups with social, political, and belief agendas to influence and control human actions and behavior, he would have found ways to constrain the behavior of these commercial shadow governments.

Societal phase change has altered the forms of unreasonable search and commercial punishment, as well as the nature of imprisonment. It is almost impossible to get a fair trial in virtual space, and when it’s over, you might not even know the verdict or understand that you have been imprisoned. All that you will know is that many of the things that you are reaching for are always just beyond your grasp—such as that great new high-paying job that no one will interview you for.

And you may be completely guiltless: imagine a loan company that thinks you are late on a car payment when, in fact, the payment was lost in its system during a software upgrade. In response, the loan company remotely disables your ability to start your car. Picture the virtual hassle as you argue over the phone with an unsympathetic automaton.

In what follows, we will discuss these new threats to liberty and the new processes that are empowering them. We will explain why the currently proposed solutions to them are doomed to be ineffective. We will make the case that if we are serious about maintaining our privacy and freedom, we will have to consider adopting a different business model for the Internet. And we will describe how that business model should look.

Our discussion will focus on trends that are already occurring in the free market and that we believe pose the greatest threats to the liberties of individuals living in liberal democracies. For those who live under authoritarian regimes like those in China, Russia, and Iran, or in formerly democratic countries that have taken an authoritarian turn, such as Turkey and Hungary, the more pressing concern, obviously, is government surveillance via virtual technologies of the sort that Orwell feared. If you are a Uighur living in Xinjiang in northwestern China, where your ID card indicates your “reliability status” and CCTV cameras surround you and record your face and license plate wherever you walk or drive, we understand that our focus on commercial institutions may seem naïve.1

Even democratic governments are undertaking activities that should be raising alarms. The use of metadata to track our phone calls; the increasing use of automated number-plate recognition (ANPR) by local police departments to track our movements; the use of Cellebrite services by ten thousand law enforcement agencies to analyze what we do on our smartphones; artificial intelligence software that predicts where crimes might be committed and who might commit them; and numerous other spy tools are all deeply troubling.2

You would also be right to be concerned about the next advances in emotion detection, in which the same cameras used to track and recognize your face are enhanced to detect your emotions. By analyzing the narrowing and widening of eyes, a grimace, the tightening of the jaw, a smirk, or a smile, they will be able to tell whether someone at a party rally is a supporter of a totalitarian regime or a dissident.

You should also be deeply concerned that in 2014 Facebook filed for a patent on an “emotion detection” technology that will likely be used to influence and control your behavior. After all, leveraging the individual’s emotional state is the key to creating emotional contagions that create fashion trends, polarize countries, and empower authoritarian regimes.3

Fortunately, to date the misuse of these technologies has affected only a relatively small percentage of the citizens of democracies, and the protests against them have been appropriately vigorous. But the life of virtually every citizen in these democracies has already been affected by the misuse of personal information by private industry. We have been so numbed by the wondrous applications and so value the free services we are being offered that we have come to passively accept and ignore the massive exploitation of our daily lives.

The cry of a modern-day Patrick Henry protesting the commercial misuse of our personal information would be “Give me liberty or give me intolerable inconvenience.” For as we will see, if we do not do something radical to change the vector we are on, we will be condemned to a life of constantly erasing cookies to protect our privacy, discovering misleading information about ourselves that has been distributed across the vast Internet universe, and then engaging in frustrating, time-consuming, and often expensive exchanges, probably with machines, to get the information corrected.

When new tools made it frictionless to collect information about us and to parse it so as to learn not only what we are thinking now but also how we are likely to behave in the future, a critical tipping point was passed. Some of the factors involved were:

1. The dramatically reduced cost of collecting data on individuals over the Internet

2. Knowledge of customers’ locations, thanks to the low-cost GPS receivers integrated into smartphones

3. The ability to identify individuals, track their movements, and analyze their emotions using facial recognition software and low-cost cameras4

4. The willingness of individuals to freely post information about themselves on social media

5. The widespread access to public records

6. The development of artificial intelligence tools that empower the owners of big data to look into our minds, influence our thoughts, predict our behavior, and motivate us to act in specific ways

Nowadays, if you want to shop anonymously, you should probably wear a face mask. Otherwise, shopping malls and retail stores will track your every move around the store. They ostensibly do so for security reasons, but they also derive a commercial benefit from the intrusion.5 Fifty-nine percent of fashion retailers in the UK use video data to determine such factors as what items or point-of-sale advertisements best grab customer attention.6

When you visit Internet sites, cookies and tracking pixels are installed on your computer or phone. Using these pixels, advertisers can determine how many people see their ads and visit their websites, how long they spend looking at advertisements, and what motivates them to make a purchase.7 Companies using this technology are asked to voluntarily comply with the industry’s Online Interest-Based Advertising Accountability Program and inform users that they are being tracked. This is accomplished by displaying a small blue triangle on their ads.8 In theory, this gives the consumer a heads-up, so he or she can opt out of the program. But few consumers notice the blue triangle or know what it means, and fewer still know how to opt out—and companies are quite content to keep them ignorant.
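
Mechanically, a tracking pixel is nothing more than a tiny invisible image whose real purpose is the server request it triggers. The sketch below is purely illustrative (the port, query parameters, and cookie value are invented, and this is not any ad network's actual code): each time a browser renders a page that embeds the pixel, the tracker's server logs who asked for it and from which page, and sets a cookie so the same browser can be recognized on every other site that embeds the same pixel.

# A minimal, hypothetical sketch of how a tracking pixel works.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import datetime

class PixelHandler(BaseHTTPRequestHandler):
    """Logs every request for the 'pixel' and tags the browser with a cookie."""

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        # The embedding page passes identifiers in the query string, e.g.
        #   <img src="http://tracker.example:8080/pixel?ad=123&page=shoes" width="1" height="1">
        print(datetime.datetime.now().isoformat(),
              self.client_address[0],          # which browser asked
              params.get("ad"), params.get("page"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        # The cookie lets the tracker recognize this browser again on any
        # other site that embeds the same pixel.
        self.send_header("Set-Cookie", "uid=abc123; Max-Age=31536000")
        self.end_headers()
        # A real pixel would return a valid 1x1 transparent GIF body here.
        self.wfile.write(b"")

if __name__ == "__main__":
    HTTPServer(("", 8080), PixelHandler).serve_forever()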

Cookies have multiple purposes, many of them beneficial for the customer. They allow sites to recognize return visitors and customize what they are shown. They can also be used to track their users’ activities. At any given moment, a lot of sites can be watching. In one case, a reporter discovered that 105 different Internet companies tracked his behavior over a thirty-six-hour period.9

Some sites will aggressively load your browser with cookies. Dictionary.com, which claims to be the world’s leading online source for English definitions, has been known to cram two hundred tracking cookies onto a user’s browser just because that user wanted the definition of a single word.10 Even if the user employs DoNotTrackMe plug-ins, companies have found ways to create “flash cookies” in Adobe’s Flash player to overcome that defense.11

The Internet will get even more intrusive in the future. It is highly likely that Internet service providers such as Comcast and Verizon, which can already capture data on every site you visit on the Internet, will be allowed to sell your personal information without your permission.

And that is only part of the problem. Increasingly, you don’t even need to log onto the Internet to make your personal information available to the Web. Stanford faculty member and former Microsoft executive Mike Steep recently conducted a test to see what happened to his data. He placed a couple of posts on social networking sites, joined friends for dinner at a Palo Alto restaurant, and bought an item at a CVS drugstore. Then, using a special auditing tool, he tracked what happened.

Within the first ninety days, those four pieces of data proliferated into 500 records on corporate servers scattered around the world. The CVS visit, he found, had been scraped by Apple and then used by Facebook. Within six months, the footprint of these minor events had grown to 1,500 entries—“all items I never entered on an order form,” says Steep. “If you think there is privacy anymore, you are dreaming.”

NOWHERE TO HIDE

The erosion of our privacy has been growing for many years. It began to gather momentum as credit rating agencies—Experian, TransUnion, and Equifax—along with companies like Acxiom and ChoicePoint, grew their massive databases. Acxiom, for example, looks at 50 trillion data transactions per year and maintains a database on 500 million consumers worldwide, with about 1,500 pieces of information per person.12

When Reed Elsevier purchased ChoicePoint (LexisNexis Risk Solutions) in 2008, it became public that the service had 17 billion records and had sold information to 100,000 clients.13 Some of the information it had was of value to the government, but federal law didn’t allow government agencies to collect it themselves. No problem: it was perfectly legal for them to purchase that same information on the commercial market, which is one reason that seven thousand of ChoicePoint’s customers were government agencies.14 You can be sure all of those numbers have grown in the years since.

Which isn’t to say that the government has subcontracted all of its information-gathering; it still does plenty of surveillance itself. All of us have watched spy movies in which a bug is planted in someone’s phone or a camera is hidden in a chandelier—but who would have thought that those secret microphones have already been installed in countless homes and that the secret agents who did it were the homeowners? Many of us were surprised to learn that the CIA can and does hack into “endpoints” like smartphones and watch and listen to their owners through their microphones and cameras. Using a tool called Weeping Angel, a TV can be put into a fake “off” mode and turned into a listening device.

“Smart speakers,” such as Amazon’s Alexa-equipped Echo, are always listening for the next command and they can easily be hacked.15 As IoT devices that can see, hear, and monitor surrounding activity proliferate, the NSA’s job gets easier and easier.

We could go on for pages, but the key point is that it now costs just fractions of a penny to monitor our Internet behavior and track us in physical space. Using that information, companies, institutions, and the government can learn about our tastes, discover our thoughts, and predict our behavior. And they can use those same tools to shape our thoughts, manipulate our elections, control our behavior, motivate us to act, and deprive us of our freedoms.

WHEN CUSTOMERS BECOME PRODUCTS

Another critical tipping point that enables companies’ new business models turns traditional commerce upside down. Increasingly, users of media are no longer customers so much as they are products.

The transition point between customer and product is fuzzy, with no bright demarcation line. Think of it this way: in the past, people used to subscribe to newspapers and magazines. They paid modest amounts; subscription revenues accounted for only about 20 percent of newspaper companies’ income.16 Advertisements paid for the rest.

Of course, the subscribers were always part product, since advertisers paid publications for their subscribers’ attention. Publications also supplemented their advertising revenues by selling subscriber profiles and mailing lists. But the information about their customers that publications captured was typically very coarse and circumscribed. A media company could tell you that a customer lived in a particular zip code, that the average income level in that zip code was $100,000 per year, and that voters from that zip code were predominantly Republican. But that was about it. From that limited information, you would have to infer whether the individual in question was a likely customer. The poor quality of this kind of information led to the famous quote by John Wanamaker, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”17

This is very different from knowing, as is the case today, what the customer reads, where he shops, whether her friends also shop at a particular store, what his friends like and purchase, and that she has been searching the Internet for low prices on a particular brand of luxury car. Of particular import is that today’s technologies allow advertisers to glean granular data about the tastes and buying motivations of individual customers, not just the broad strokes of their demography.

Needless to say, the value of the information that businesses could collect increased dramatically when this transformation hit—at the same time that the cost of collecting it plummeted. That enabled businesses to sell valuable access to targeted customers at very attractive prices. For example, researchers from Ohio State University discovered that click-through rates increased by 670 percent when ads were behaviorally targeted. The targeted ads also appeared to have a positive effect on consumers’ psyches, making them more likely to purchase.18

As media businesses came to understand this, their consumers transitioned from being mostly customers and somewhat products to being mostly products and somewhat customers who could be sold to advertisers and other businesses at a steep profit. The result of the structural transformation was a brand-new phenomenon: widespread implementation of free or “freemium” business models,19 in which a service is provided for free, though the hidden price is the customers’ privacy.

The freemium model has, in fact, been around for a very long time. In an obvious example, commercial radio and broadcast television have depended upon it for years. Customers get to listen to and view programming for free, while making themselves into a captive audience for the advertisers who pay for it. But there is a big difference between the freemium business models of the past and those of today.

In the past, the quality of the information sold to advertisers was pretty bad. For that reason, it was not especially valuable. By comparison, the information sold today is very good and actionable, and is thus worth a great deal more. Also, the services and content provided to users in the past were good but not great. Today, you can still get all of those legacy services. But on top of that, you now get access to vast libraries of past content. On YouTube you get content packaged to match not just your demographic but also your specific needs and interests. To top it off, you get access to history’s largest indexed library. Virtually all human knowledge is at your fingertips. In short, the freemium business model went from being limited and of low quality to being as good as it can get.

The reader may feel that consumer information is the product being sold by these new companies … but we believe it is more accurate to say that the consumer himself or herself is the real product because, as a result of the sale of information, the consumer has surrendered his or her privacy—and with it, a piece of personal sovereignty.

In this new world, perhaps the best way to think of the customer is as a personal information production factory. By searching the Internet, posting on Facebook, clicking on websites, and driving a car to a specific location, the customer produces actionable and hence monetizable personal information product. To carry the metaphor further, the customer uses his or her own capital equipment (computer, smartphone, or automobile) to produce the tracking information being sold by commercial companies. In some cases (Facebook, Instagram, Twitter) they even produce the content themselves, in the form of their posts.

If I own a factory in the real world, I get to sell my product at the price I choose. If I put my product on the shipping dock and you steal it, I can have you put in jail.

Compare that to the freemium business model that now prevails in the virtual world: Internet companies take the valuable personal information product that users leave on their shipping docks and do not pay them anything for it beyond the services or content they have already used or seen. When they sell that information to a third party for a lot of money, the producer of that information does not get to share a penny.

The reason Facebook and Google are so profitable is that they are skillful at arbitraging the difference between the price they paid for the personal information product—zero—and the price at which they sell that product to advertisers. Because that later transaction is largely invisible, the information producers feel they are getting a good deal.

By turning the customer into the product, commercial enterprises have built massive databases with two basic objectives: first, to influence the choices that consumers make; second, to exert control over them and motivate them to take certain actions. When companies target ads at potential customers and decide what information to provide, they are using their massive databases to help influence consumer choices. When companies use the information in databases to decide not to sell you automobile insurance or not to tell you about a job you might want to apply for, they are using information to control consumers. If they decide to tell you about a certain product but not about a competing one, they are attempting to channel and control your actions.

If you are concerned about your liberty, you should be deeply concerned about the growing number of these databases and the ways they can be used to restrict your choices.

INVISIBLE PRISONS

Companies and the government use algorithms to make important decisions about us. Employing massive data files, they profile us and predict our tastes, spending habits, and even our creditworthiness and moral behavior—and take actions accordingly.

As a result, some of us can no longer get loans or have trouble cashing checks. Others are being offered only usurious credit card interest rates. Many have trouble finding employment or purchasing health insurance because of their Internet profiles. Others may have trouble purchasing property and insurance of all kinds. Algorithms also select some people for government audits, while subjecting others to gratuitous and degrading airport screenings. In fact, millions of Americans are now virtually incarcerated in what can only be characterized as algorithmic prisons. They are still able to move about in the world but can no longer fully participate in society. They are virtual prisoners.
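
To make the mechanics concrete, here is a toy sketch of such a gatekeeper. The fields, weights, and threshold are invented for illustration and do not describe any real lender's model; the structural point is that a profile goes in, a verdict comes out, and the applicant never sees the rule that produced it.

# A toy sketch of an algorithmic gatekeeper. All values are invented.
HIGH_RISK_ZIPS = {"60624", "48205"}   # arbitrary placeholder values

def risk_score(profile):
    score = 0.0
    score += 0.4 * profile.get("late_payments", 0)
    score += 0.3 * (1 if profile.get("zip_code") in HIGH_RISK_ZIPS else 0)
    score += 0.2 * profile.get("payday_loan_inquiries", 0)
    score -= 0.1 * profile.get("years_at_address", 0)
    return score

def decide(profile, threshold=0.5):
    # Above the threshold the application is quietly declined; the applicant
    # is never told which field tipped the score.
    return "decline" if risk_score(profile) > threshold else "approve"

applicant = {"late_payments": 2, "zip_code": "60624",
             "payday_loan_inquiries": 0, "years_at_address": 3}
print(decide(applicant))   # "decline" -- score 0.8 against a 0.5 threshold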

The FBI has 100,000 names on its no-fly list; about 1,000 of them are U.S. citizens.20 Thousands more are targeted for enhanced screening by the Transportation Security Administration (TSA). By using data, including “tax identification number, past travel itineraries, property records, physical characteristics, and law enforcement or intelligence information,” the TSA’s algorithm predicts how likely a passenger is to be dangerous.21 The agency then acts accordingly.

In the past, it was possible to sneak around the edges. Unfortunately, the Internet has perfect recall and X-ray vision. Most of us have done dumb things that we would never do again. Some of us have said things in private that we would never discuss in public. Some of us have made racist and misogynistic comments on chat boards. We might have been trolling, or maybe we were spurred on by other commenters on the site.

Then there are those nude photos we might have sent to a lover. Or the drunken debauchery of a frat party. We might have posted something regretful on Facebook the morning after we got overly aggressive while engaging in consensual sex. After a tough day at work, we might have railed against our boss and even mused about hurting him. If we lost our job, we might have visited an anti-capitalism website. Maybe we are a Muslim and spent time checking out an ISIS website. We might have even sent a message to a recruiter. Perhaps we did a lot of those things before we were twenty-five years old—and now years have gone by and we are looking for a job.

Suppose one of those algorithmic prisons has a lot of 100 percent factual information about us, including “made threatening comments about his boss,” “thought about joining ISIS,” “may have forced himself on a woman,” “may be alcoholic.” We suspect that information would put us at the bottom of the candidate list of any job opening for which we might apply.

Algorithms constrain our lives in virtual space as well, whether we have done anything regrettable or not. They analyze our interests and select the things we see. In doing so, they limit the range of things to which we might be exposed. As Eli Pariser puts it in his book The Filter Bubble, “you click on a link, which signals your interest in something, which means you are more likely to see articles about that topic” and then “you become trapped in a loop.”22 You are being shown a distorted view of the world. In a very tangible sense, you are the subject of discrimination.
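
Pariser's loop can be expressed in a few lines. The simulation below is purely illustrative (the topics, weights, and click behavior are invented): every click on a topic nudges the recommender's weights toward that topic, which makes the topic more likely to be shown again, which invites still more clicks.

# A purely illustrative simulation of the filter-bubble feedback loop.
import random
from collections import Counter

TOPICS = ["politics", "sports", "science", "travel", "cooking"]
weights = Counter({topic: 1.0 for topic in TOPICS})   # start with no preference

def recommend():
    # Pick a story in proportion to the current weights.
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]

def simulate(rounds=300, clicked_topic="politics"):
    for _ in range(rounds):
        shown = recommend()
        if shown == clicked_topic:     # the user clicks only on this topic...
            weights[shown] += 0.5      # ...so the algorithm leans further into it
    return weights

print(simulate())   # after a few hundred rounds, one topic dominates the feed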

If you’re having trouble finding a job as a software engineer, it may be because you got a low score from the Gild, a company that predicts the skills of programmers by evaluating the open-source code they have written, the language they use on LinkedIn, and how they answer questions on software social forums.23

Algorithmic prisons are not new. Even before the Internet, credit reporting and rating agencies were a power in our economy. Fitch, Moody’s, and Standard & Poor’s have been rating business credit for decades. Equifax, the oldest credit rating agency, was founded in 1899.24 But the new software, combined with the pervasiveness of the Internet and the latest data analysis tools, represents a whole new level of control.

When algorithms get it right (and in general they do a pretty good job), they provide extremely valuable services. They make our lives safer by identifying potential threats to our society. They make it easier to find the products and services we want, increasing the efficiency of businesses. For example, Amazon constantly alerts me to books it correctly predicts I will want to read. But when algorithms get it wrong, inconvenience and sometimes real suffering follows.

Most of us would not be concerned if ten or a hundred times too many people ended up on the TSA’s enhanced airport screening list, so long as an airplane hijacking was avoided. Similarly, in times when jobs are scarce and applicants many, most employers would opt for tighter algorithmic screening. After all, there are lots of candidates to survey, and more harm may be done by hiring a bad apple than by missing a potentially good new employee. Meanwhile, avoiding bad loans is key to the success of banks. Missing out on a few good clients in return for avoiding a big loss is a decent trade-off. That is, until the person who is inconvenienced or harmed is us. As the cost of surveillance gets cheaper and the tools more pervasive, the likelihood of that happening is increasing.
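
The economics behind that caution can be put in a single comparison: reject whenever the expected loss from accepting a risky candidate exceeds the expected loss from turning away a good one. A rough sketch with invented dollar figures shows how a modest suspicion is enough to screen someone out when the costs are lopsided.

# A rough sketch of the arithmetic behind cautious screening.
# The figures are invented; only their asymmetry matters.
cost_of_bad_hire    = 50_000   # losses from one "bad apple" who slips through
cost_of_missed_hire =  2_000   # value forgone by passing over a good candidate

def reject(prob_bad):
    # Reject whenever the expected loss from accepting exceeds the
    # expected loss from turning the candidate away.
    return prob_bad * cost_of_bad_hire > (1 - prob_bad) * cost_of_missed_hire

# Even a 5 percent suspicion is enough to screen someone out:
# 0.05 * 50,000 = 2,500 versus 0.95 * 2,000 = 1,900.
print(reject(0.05))   # True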

The federal Consumer Financial Protection Bureau lists more than forty consumer-reporting companies. These are services that provide reports to banks, check cashers, payday lenders, auto and property insurers, utilities, gambling establishments, rental companies, medical insurers, and companies wanting to check employment histories. The good news is that the Fair Credit Reporting Act requires those companies to give consumers annual access to their reports and allows a consumer to complain to the Consumer Financial Protection Bureau if he or she is being treated unfairly.25 But how many of us want to spend time regularly checking reports from two score companies and then filing paperwork with the Consumer Financial Protection Bureau to appeal an injustice?

Even if an algorithmic prisoner knows that he or she is in jail, that person may not know why or who the jailer is. Unable to get a loan because of a corrupted file at Experian or Equifax? Or could it be TransUnion? This person’s personal bank could even have its own algorithms to determine a consumer’s creditworthiness. Just think of the needle-in-a-haystack effort consumers must undertake if they are forced to investigate dozens of consumer-reporting companies, looking for the one that threw them behind algorithmic bars. Now imagine a future that might contain ten or maybe a hundred times as many algorithms that pass judgment upon you.

It is impossible to fathom all the implications of algorithmic prisons. Yet one thing is certain: even if they do have great economic value for businesses and make our country somewhat safer, many of us will be seriously harmed as algorithmic prisons continue to proliferate … and still more of us will experience great frustration.

The Fifth Amendment guarantees American citizens due process in physical space. But what due process standards apply to algorithms? The Fifth Amendment also ensures that we are not tried twice for the same offense. But if there are hundreds of sites out there, they could all be trying and punishing us for the same “crime.”

The Sixth Amendment guarantees citizens a fair trial. But most algorithms base their decisions on economic rationales. If mistakes are costly, then the algorithms will err on the side of caution, reversing one of the most fundamental principles of common law, Sir William Blackstone’s formulation: “It is better that ten guilty persons escape than that one innocent suffer.”26

Attempting to free yourself from algorithmic prisons could make you resemble the tragic protagonist of Kafka’s The Trial. How do you correct a credit report or get your name off an enhanced screening list? Each case is different. The appeal and pardon process may be very difficult—if there is one—and you might have to repeat it at each of the offending companies. And before any of that, you first need to realize that you are in prison at all.

Bruce Schneier’s recent book Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World addresses a host of potential protections for consumers, including protecting whistle-blowers, making vendors liable for privacy breaches, and passing laws that protect certain categories of personal data—financial data, health care information, student data, video rental records, and so on. Individuals should be entitled to know what data is being collected about them, he writes, and how the algorithms that pass judgment actually evaluate that data.27

Transparency is good in principle, but it doesn’t solve the problem on its own. Just envision yourself going through pages of disclosures every time you visit a website to determine what data is being collected on you and whether or not you agreed to that collection. Think of the time it would take merely to understand the algorithms. Then imagine the frustration that would occur as companies continually modify their policies.

Schneier does offer users practical advice on some of the steps they can take to deflect or foil surveillance—including configuring your browser to delete cookies every time you close it, closing your browser numerous times a day, and entering random information onto Internet forms to confuse Google profiling.
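
In the spirit of that last suggestion, here is a small, purely illustrative sketch of a “decoy traffic” generator. The vocabulary is invented and the script only prints the queries it would issue; wiring it to a real browser or search engine is deliberately left out. The idea, as with Schneier's advice, is to dilute any profile built from your activity with harmless noise.

# A sketch of decoy-query generation; it only prints what it would search for.
import random, time

NOUNS = ["garden hose", "sourdough starter", "tide tables", "bus schedule",
         "sheet music", "hiking boots", "crossword clues", "paint colors"]
VERBS = ["how to clean", "best price for", "history of", "reviews of",
         "instructions for", "alternatives to"]

def decoy_query():
    return f"{random.choice(VERBS)} {random.choice(NOUNS)}"

if __name__ == "__main__":
    for _ in range(5):
        print("would search:", decoy_query())
        time.sleep(random.uniform(0.5, 2.0))   # irregular timing looks more human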

These solutions offer an excellent early glimpse of the nature and magnitude of the threat. But most of them are hard for regular, trusting, law-abiding people to implement in the course of their daily lives. People want to use email and the Internet without having to worry constantly about guarding the gates to their lives. We need a practical solution that is simple and puts the user back in control.

DEFENDING PRIVACY

Throughout this book, we have argued that phase change creates new rules and new institutions. In many cases, we will need to establish new laws and regulations to control the behavior of those new institutions.

One example of this is the General Data Protection Regulation (GDPR) that the European Union put into effect on May 25, 2018.28 The regulation requires opt-in consent from users before data can be collected; that sites apply the highest possible privacy settings by default; and that personal data be made harder to tie back to an individual, either through pseudonymization or, more strongly, full anonymization. Users must be informed of how long their data will be retained and be given the right to be digitally “forgotten.” Controllers are also required to design data protection into all systems by default, reducing the chances of theft.
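
Pseudonymization, one of the techniques the regulation names, can be as simple as replacing direct identifiers with keyed hashes: the records remain linkable for analysis, but no one can tie them back to a person without the separately stored key. A minimal sketch follows; the key, fields, and values are invented.

# A minimal sketch of pseudonymization: replace the identifier with a keyed hash.
import hmac, hashlib

SECRET_KEY = b"keep-this-in-a-separate-vault"   # placeholder key

def pseudonymize(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane@example.com", "pages_viewed": 12, "cart_value": 87.50}

stored = {"user": pseudonymize(record["email"]),   # stable pseudonym, not the email
          "pages_viewed": record["pages_viewed"],
          "cart_value": record["cart_value"]}

print(stored)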

If the United States were to adopt its own version of the GDPR, it would certainly help. Until then, users will have to do a lot of work themselves to ensure they are protected. On a website run by Oath, the media firm created by the merger of AOL and Yahoo!, users are asked for consent to “use your … data to understand your interests and personalize and measure ads.” Those who do not agree are given links to follow … only to discover that, had they agreed, they would have granted Oath permission to share their data with more than one hundred ad networks.29

It is highly likely that the GDPR will be improved and strengthened in the coming years. After all, it superseded weaker legislation—the Data Protection Directive—implemented in 1998.30 So one approach to solving the problem is to chip away at it, every decade or so bringing out a new version of the GDPR. In all likelihood this will be the chosen solution.

Still, we would argue that the GDPR is an all-too-typical example of using Industrial Revolution rules to deal with Autonomous Revolution issues.

If we really want to solve the problem of Internet privacy, we have to engage in phase-change solution thinking. What is really broken here is the defective freemium business model of the Internet—and the way to solve that problem is to give the user complete ownership and control over his or her data.

One way to do this would be to create information fiduciaries that would hold individuals’ information. Think of the fiduciary as a personal information safety deposit box that can be unlocked only if the holder supplies a key or knowingly withdraws the information and sends it to the desired recipient.

These fiduciaries would have the right to collect all the personal information they can from legal sources. The owner, in turn, would have complete control over who can access the information stored with a fiduciary.

The fiduciary would organize the information by tiers or levels. The first level would be pretty innocuous stuff, while the highest level would be highly sensitive kinds of information that an owner would permit only a limited number of institutions to view.

The individual—the owner—would have the right to examine the information in his or her file at any time. For simplicity’s sake, the owner might opt to use only one fiduciary, and he or she would have the ability to work with that fiduciary to correct any misinformation. Should the owner choose to release information to an Internet site, it would be illegal for that site to sell it or provide it to a third party.

Here is the way such a system could work. Suppose an information owner wants to get free services from Google. He or she would agree to provide Google with First Level information in return for its services. Google might deem the owner’s First Level information not valuable enough to warrant those services and ask for Second Level information as well, or, if that’s not acceptable, propose a fee, say $5 a month.

Perhaps an auto insurance company wants access to the owner’s Fourth Level information to determine what price it will give her on her insurance. She could agree to give the company access, but it would not have the right to use that data for any other purpose.

We suspect the fiduciaries of the world would love this business model because they would charge information users for access to the information.
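
To make the scheme described above concrete, here is an illustrative sketch of how a fiduciary might enforce tiered access in software. The tier numbers, data fields, and requesting companies are all invented; the essential property is that nothing leaves the vault without an explicit, revocable grant from the owner.

# An illustrative sketch of the information-fiduciary idea. All names invented.
class Fiduciary:
    def __init__(self):
        self.vault = {}     # owner -> {tier: {field: value}}
        self.grants = {}    # (owner, requester) -> highest tier granted

    def deposit(self, owner, tier, data):
        self.vault.setdefault(owner, {}).setdefault(tier, {}).update(data)

    def grant(self, owner, requester, max_tier):
        # Only the owner can create a grant, and can revoke it at will.
        self.grants[(owner, requester)] = max_tier

    def request(self, owner, requester, tier):
        allowed = self.grants.get((owner, requester), 0)
        if tier > allowed:
            raise PermissionError(f"{requester} has no grant for tier {tier}")
        return dict(self.vault.get(owner, {}).get(tier, {}))

f = Fiduciary()
f.deposit("alice", 1, {"age_range": "35-44", "region": "Midwest"})
f.deposit("alice", 4, {"driving_record": "one speeding ticket, 2017"})

f.grant("alice", "SearchCo", 1)          # free services in exchange for Tier 1
f.grant("alice", "AcmeInsurance", 4)     # the insurer may see Tier 4, nothing more

print(f.request("alice", "SearchCo", 1))          # allowed
print(f.request("alice", "AcmeInsurance", 4))     # allowed
# f.request("alice", "SearchCo", 4) would raise PermissionError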

A NEW KIND OF PRIVACY SERVICE

What would it take to begin this transformation of the privacy model? The first step would be turning companies such as Equifax, Experian, TransUnion, Acxiom, and ChoicePoint into those fiduciaries. Even a company like Google or Facebook could establish and offer fiduciary services.

What about the algorithmic prisons? Suppose one of those prisons was very good at analyzing credit risk, and a lot of banks wanted to use its services. A bank could agree to pay the prison for analyzing a person’s private information. With the explicit permission of that person, the bank would get access to his or her information from their information fiduciary. Once the prison had performed this task for the bank, it would by law erase the information and not use it for any other purpose.

This concept of an information fiduciary would have an added benefit: it would offer a high level of protection against illegal search and seizure by the government. The government could not buy our information on the open market. Instead, it would be required to obtain a search warrant to look at the information stored with the fiduciary.

Of course, this is a radical idea. Many people—and institutions—will object to it. But think about it this way:

Our current situation just happened. No one really understood what was going to transpire, so we just sat back and watched. Now we are stuck with a system that has some very undesirable aspects. But suppose for a moment that, at the turn of the twenty-first century, the visionaries of the Internet had appeared before Congress and said:

As we go through the process of adapting to phase change, we have a choice between using our old rules to chip away at the problem or adopting a radical new approach that can cure it. We believe the right approach is the radical one, because our individual liberties are too precious to put at risk.

Had that occurred, had new institutions based on the new rules been instituted, the Internet would have evolved very differently from what it has become today. Had the freemium model never been allowed to emerge, many of our current threats to privacy might be all but unknown.

Purging the existing model as we have suggested above would be gut-wrenching and extremely difficult, but we believe it offers the best solution to the problem. Still, other approaches would also greatly improve the situation.

There is a lot of hard work to be done, but if we commit to aggressively attacking the problem we can have our privacy and our liberty for years to come.
