Steven Cherry: Hi, this is Steven Cherry for IEEE Spectrum’s “Techwise Conversations.”
Facebook bought Instagram a little over a year ago, a deal that Vanity Fair called “the ultimate Silicon Valley fairy tale,” a billion-dollar sale of a company less than two years old. Maybe even more remarkably, at the time of the sale, Instagram had exactly 13 employees.
How could a billion-dollar company have only 13 employees? In the words of my guest today, it’s not because those employees were so extraordinarily valuable. It’s because much of its value “comes from the millions of users who contribute to the network without being paid for it.” He wants, in brief, “to monetize more of what’s valuable from ordinary people,” and he has some ideas about how to do it.
Jaron Lanier is one of the world’s great polymaths. He’s a computer scientist, composer, visual artist, and the author of a new book, Who Owns the Future?, published last month [Editor’s note: Publication was on 7 May] by Simon & Schuster. He joins us by phone.
Jaron, welcome to the podcast.
Jaron Lanier: I am delighted to be doing it.
Steven Cherry: If I had to summarize your argument, it would be that we’ve embedded into the architecture of the Internet some ideas that are making us collectively poorer instead of richer, or at least less rich than we could be. We’re losing wealth because more and more of our economy involves information, and information is not being bought and sold properly. You use an example of baking bread, which is a great metaphor for money anyway. Tell us about local, remote, and robotic bread making.
Jaron Lanier: Okay, so let us hypothesize that in the future there would be robotic devices that would create bread, or perhaps bread could be 3-D printed, or perhaps bread might grow from some tiny seedling on its own, automatically. In a sense, it already does that, of course. But let us just suppose, at any rate, that there are technologies for making bread which require vastly less human labor than in our present times. And this might include the workers in the field who gather the grains and process the grains. The whole thing might become what we call more “automated,” right?
And so then, the usual line of thinking is that, “Well, it’s sad that all the people who might have had jobs making bread before or making the components for bread or transporting the bread, it’s sad that all those jobs went away. But we can count on new jobs coming about, or at least new paths to sustenance, because technology always creates new opportunities.” And that’s something that I think is true. However, it all really depends on how we think about the technologies.
If we think about the technologies purely in terms of a sort of artificial intelligence framework, where we say, “Well, if the machine does it, then it’s as if nobody has to do anything anymore,” then we create two problems that are utterly unnecessary. There’s a microeconomic problem, and there’s a macroeconomic problem. The microeconomic problem is that we’re pretending that the people who do the real work don’t exist anymore. But then the macroeconomic level also has to be considered. Automating the world is what happens when you make technology more advanced, and therefore there will be more and more use of these corpora driving artificial intelligence algorithms to do everything, including bread making. If we’re saying that the information that drives all this is supposed to be off the books—that it’s the free stuff, not part of the economy, only the sort of starter material or the promotional material or whatever ancillary thing it might be—if the core value is actually treated as an epiphenomenon, what will happen is the better technology gets, the smaller the economy will get, because more and more of the real value will be forced off the books. So the real economy will start to shrink. And it won’t just shrink uniformly; it’ll shrink around whoever has the biggest routing computers that manage that data.
Steven Cherry: A key notion in your argument is that of a siren server. The Greek Sirens were half-women island demigods who lured sailors and their ships onto rocky shores with their beautiful voices and songs. In your mythology, the siren servers are Facebook, Google, Apple, and their ilk, and I guess the ship is our economy and we’re the sailors. Your argument starts by distinguishing bell curve network distributions from power law ones, which you call “winner take all.” Maybe let’s start there.
Jaron Lanier: Okay, sure. So if we look at human affairs, we’ll notice that different kinds of organizing principles yield different distributions of outcomes for people. So if a human activity is organized as a contest with a small number of winners, then obviously the result will be a small number of winners. An example might be, oh, I don’t know, like some reality-TV show, like a singing contest or “Survivor” or something, where there are only a few people who—or one person who wins at the end. And then another sort of outcome is the bell curve, where there’s the preponderance of people kind of in the middle.
So in terms of networks, what happens is, if you have people all competing for their place to be sorted by a single central hub, then you get a power law. And that’s what happens to people who are competing in the app economy, through Apple’s App Store, for example. And it’s what happens on YouTube and in any other situation like that. If you have a thickly connected graph, where the nature of competition is broader—for instance, just how rich somebody gets, assuming that there are many different ways to get rich—you might get a bell curve.
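[Editor’s note: A toy simulation—ours, not anything from the book—can make the contrast concrete. In the sketch below, a hub-sorted contest where attention compounds on whoever already has it (preferential attachment) produces a heavily concentrated, winner-take-all-leaning distribution, while outcomes built from many small independent exchanges tend toward a bell curve.]

```python
import random

random.seed(42)
N_PEOPLE, ROUNDS = 1_000, 50_000

# Hub-sorted contest: the central hub routes each unit of attention to a
# contributor with probability proportional to what they already have
# (preferential attachment), so early winners keep winning.
hub_scores = [1.0] * N_PEOPLE
for _ in range(ROUNDS):
    winner = random.choices(range(N_PEOPLE), weights=hub_scores)[0]
    hub_scores[winner] += 1.0

# Thickly connected network: each person's outcome is the sum of many
# small, independent transactions, which tends toward a bell curve.
peer_scores = [sum(random.random() for _ in range(100)) for _ in range(N_PEOPLE)]

for label, scores in (("hub-sorted", hub_scores), ("peer-to-peer", peer_scores)):
    top = sorted(scores, reverse=True)
    top_share = sum(top[: N_PEOPLE // 100]) / sum(top)
    print(f"{label}: top 1% of people hold {top_share:.0%} of the total")
```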
If you look at the information profile of what people are exposed to on a thickly connected network like a Facebook, you do see a broader distribution; you don’t see a winner-take-all power law distribution. In other words, people are exposed to a wider variety of things and don’t concentrate only on stars on Facebook. And that’s a remarkable thing. The only problem is that right now the economics of richly connected networks is such that only the central server makes money, and the people don’t pay each other. That’s treated as free information.
Some other great examples to me are the free systems of Linux or the Wikipedia, which are supported by people who do think information should be free, which is a view I used to have but no longer do. But if you look at all the people who contribute to a big Wikipedia article or to, say, the Linux stack, what you see is a remarkably broad range. You do see some people who contribute a lot more than others, but you see a big hump in the middle of people who contribute a fair amount. And so there, once again, is this middle-class distribution showing up; it’s just in a nonmonetized way.
So this does get a little subtle, but the basic idea is that a siren server is forcing people into a power law distribution of outcomes, and that’s a somewhat artificial imposition. And so that’s fundamentally what a siren server is.
Steven Cherry: In situations where we have winner-take-all distributions, whether it’s the Internet with its Googles, or our economy as a whole with the 1 percent, we’ve traditionally had something that you call “levees.” What are they?
Jaron Lanier: Yeah, okay. I think the easiest way to describe this is to set up a little bit of a contrivance, which is a three-act play. So Act I is the 19th century, and the 19th century is completely consumed with anxiety that technology will throw people out of work, with some of the examples being the Luddite riots, early science fiction, the writing of Marx, “The Ballad of John Henry”—many other examples that I’ve gone into elsewhere. And so you have this tremendous anxiety.
Act II is the 20th century, and what happens in the 20th century is that that anxiety is resolved favorably, because it turns out that when there are better technologies, people can actually get better jobs. The transition from horse-drawn vehicles to motorized vehicles represented a kind of profound improvement in the experience of the operator, right, because dealing with horses is tremendously smelly, difficult, and somewhat dangerous work, because they kick you. I mean, I love horses, but the truth is that dealing with the feed and the waste from them, and the brushing, and the shoeing—it’s a lot. It’s a big deal. And operating a motor vehicle is so much easier in comparison that people pay to drive sports cars. I mean, people like driving. So the natural question to ask is, If technology has made operating a vehicle that much easier, why on earth should that person be paid? Why is it still a job worth paying for?
And the answer is actually twofold, from two different perspectives, which we could say from below and from above. From below, the reason was that the labor movement said it was and fought really hard to make that the answer.
But from above, there’s another very interesting idea, which is that markets can’t thrive without customers, so you need a middle class to be the customers. So, for instance, Henry Ford—who was a complete creep in other ways, it has to be said, but in this particular way was very enlightened—said, you know, I can’t just put a product out there. I have to create a whole ecosystem in which my product will have customers. And that means making sure that my own factory workers can afford to buy the product. So the pricing has to match what wages can handle. And it also means supporting the idea of industries that treat work as a monetized function, because otherwise it’ll never take off. And so there was this understanding at that time that you have to build a monetized ecosystem.
So the 21st century comes along, which will be our third act, and what we decide is, Hey, you know what? We’re going to renege on the wisdom of the 20th century. We’re going to reject it. We’re going to say, sure, maybe it was still okay to pay people when they were driving vehicles, but you know what? At this point it’s ridiculous. Life’s getting so easy, vehicles can drive themselves. It’s time to stop paying people. You know, this is the end of the line. Now things are too good.
And, of course, the problem with that is that the same logic that applied to the 20th century really does still apply to the 21st century. If, in order to bring people the fruits of technology, you have to undo the ecosystem of the economy, well, then that’s what you’re doing. There might be a heroin-like hit at first, because the initial phases feel really good: little tiny companies become incredibly valuable because they’ve become hubs for data, and people can get free treats online. So you have these benefits, and that feels really good. But in the long term, of course, you’re shrinking markets and, indeed, destroying the market economy without a clear alternative. And so that’s the problem with our third act so far, and we have to figure out a way to resolve it.
Steven Cherry: You blame networks in the broad sense for the 2008 financial crisis. And you compare Wall Street’s financial machinations to Facebook and Google. You say that they each radiated risk away from themselves as if we had an infinite capacity to absorb it. And you have a marvelous analogy to air-conditioning. Tell us about air-conditioning and risk.
Jaron Lanier: Let’s suppose that you want to be Maxwell’s demon. Maxwell’s demon is this character who operates a little tiny door, letting hot molecules pass one way and cold molecules pass the other way. And if you can operate this little guy, he could eventually, just by opening and closing this little door, separate the hot from the cold. And then at that point, he could open up another big door and let them mix again, running a turbine, and then repeat, and then you get perpetual motion. And, indeed, every perpetual motion machine boils down to an attempt to make a Maxwell’s demon.
And so if we ask, Why doesn’t this work? Why don’t we have perpetual motion? the answer is that the very act of computing, the act of discrimination, the act of measurement, the act of even the smallest manipulation in response to those things, the act of keeping track of it all so you know what you’re doing, all that stuff is real work. And it takes energy. It radiates its own waste heat. It’s real work. And you can never get ahead. There’s no free lunch. The work involved in doing that will always be greater than the work you can earn by doing it. So that’s an aspect of thermodynamics in a nutshell.
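[Editor’s note: The physics Lanier is gesturing at has a precise form. Landauer’s principle puts a floor under the demon’s bookkeeping costs: erasing one bit of information dissipates at least

$$E_{\text{bit}} \geq k_B T \ln 2,$$

where $k_B$ is Boltzmann’s constant and $T$ is the temperature—about $3 \times 10^{-21}$ joules per bit at room temperature. Because the demon must record, and eventually erase, a measurement for every molecule it sorts, its minimum operating cost keeps pace with the work it can extract, and the perpetual-motion scheme fails.]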
So what I’m proposing is that finance—and indeed consumer Internet companies and all kinds of other people using giant computers—are trying to become Maxwell’s demons in an information network. The easiest way to understand it is to think about an insurance company. An American health insurance company, before big computing came along, would hire actuaries to set rates. But the idea of attempting to decide, on a person-by-person basis, who should be in the plan, so that you could insure only the people who need it the least—that wasn’t really viable. But with big computing and the ability to compute huge correlations with big data, it becomes irresistible. And so what you do is you start to say—you’re like Maxwell’s demon with the little door—“I’m going to let the people who are cheap to insure through the door, and the people who are expensive to insure have to go the other way, until I’ve created this perfect system that’s statistically guaranteed to be highly profitable.”
And so what’s wrong with that is that you can’t ever really get ahead. What you’re really doing then is you’re radiating waste heat. I mean, for yourself you’ve created this perfect little business, but you’ve radiated all the risk, basically, to the society at large. And if the society was infinitely large and could absorb it, it would work. There’s nothing intrinsically faulty about your scheme except for the assumption that the society can absorb the risk. And so what we’ve seen with big computing in finance is a repeated occurrence of people using a big computer to radiate risk away from themselves until the society can’t absorb it. And then there’s some giant bailout and some huge breakage. And so it happened with Long-Term Capital [Management] in the ’90s. It happened with Enron, and we saw a repeat of it in the events leading to the Great Recession in the late aughts. And we’ll just see it happening again and again until it’s recognized that this pattern is just not sustainable.
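[Editor’s note: The insurance version of the demon is easy to simulate. In the hypothetical sketch below—ours, not the book’s—an insurer with perfect data admits only the people who are cheaper to insure than the premium. Its own book becomes reliably profitable, but the total expected claims in the society are unchanged; the risk has been radiated outward, not eliminated.]

```python
import random

random.seed(0)
POPULATION = 10_000
PREMIUM = 1.0  # annual premium per person, in arbitrary units

# Each person's expected annual claim cost, known perfectly to the insurer
# thanks to big-data correlations (the demon's "measurement").
expected_cost = [random.expovariate(1.0) for _ in range(POPULATION)]

# The demon's little door: admit only people cheaper to insure than the premium.
insured = [c for c in expected_cost if c < PREMIUM]
rejected = [c for c in expected_cost if c >= PREMIUM]

insurer_profit = PREMIUM * len(insured) - sum(insured)
radiated_risk = sum(rejected)  # claims somebody else must still absorb

print(f"insured {len(insured)} people; insurer's expected profit: {insurer_profit:,.0f}")
print(f"rejected {len(rejected)} people; risk radiated to society: {radiated_risk:,.0f}")
print(f"total expected claims in society (unchanged): {sum(expected_cost):,.0f}")
```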
Steven Cherry: You make another comparison of the Internet to Wall Street when you ask whether Facebook is becoming too big to fail in the way that Bank of America and AIG were in 2008.
Jaron Lanier: Right. Well, what happens with these things is that siren servers can become very large because of network effects and Metcalfe’s Law and such things. So once you have a siren server that becomes valuable because of the data it’s already gathered or the connections it’s already gathered, then its value can accelerate, gathering more and more data and more and more connections. So in the case of finance, this turns into “too big to fail.” But it happens with any big computer. And in a sense, Facebook might be at the point where it’s too big to fail. It might not be possible for it to simply go bankrupt and cease to exist, because it’s become too ingrained as a sort of infrastructure element for so many people.
And this is sort of a weird situation. You have a publicly traded company, which is sort of essential infrastructure, that’s controlled by one person. It’s kind of all things at once. And it’s only possible because of the incredibly amplified power that accrues to those computers that win the network effect game.
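[Editor’s note: Metcalfe’s Law, mentioned above, is the rule of thumb that a network’s value scales with the number of possible connections among its $n$ users:

$$V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}.$$

Because value grows roughly as the square of the user count while costs grow roughly linearly, the biggest hub tends to pull away from every rival—the accelerating accumulation Lanier describes.]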
Steven Cherry: Yeah. So you have an alternative in mind, and I guess there are two key points to it. One is that we should stop copying, and the other is two-way links. Let’s start with the first. What’s wrong with copying?
Jaron Lanier: Well, copying is a strange idea if you think about it from first principles. And, indeed, the first concept of digital networking, dating back to Ted Nelson’s work in the ’60s, didn’t include copying. And the reason why is, if you have a network, the original’s right there, so why would you copy it? I mean, you know, it’s really that simple. In the book I tell a story about visiting Xerox PARC for the first time as a kid in the ’70s and asking people, you know, “Why the hell are you copying files here, when you’ve created Ethernet?”
And it was sort of strange, because by rights copying should have been unnecessary—I mean, copying was only needed when computers weren’t connected, because what you’d do is copy the file to a disk or card or tape or whatever it was in the old days, move it to another computer, and reload it. So there was, like, a practical reason for it. But once you had computers connected, why on earth would you still copy files?
And the answer was really interesting. They said, “You know, we’re sponsored by a copying machine company, Xerox, and so we simply cannot say that even in the abstract copying will become obsolete.” So, in a sense, it was an anachronism used to please a sponsor.
But the problem with copying is—well, there are multiple problems. One is that it makes information less valuable, because it loses its context. Information is only information in context. So there’s a way in which copying intrinsically degrades the quality of information.
But economically, the problem is very simply that copying severs the link to where the information came from, so it creates this illusion that the information just came from the sky or from angels or sirens or some imaginary place. And that creates this economic falsehood that people didn’t really create it.
Any time you have a no-copy network, there are bidirectional links in the network, obviously, because you need to know where the thing was—you know, in order to have a local cache and a single logical copy, there has to be a back link as a matter of course.
So we severed those in standards like HTML, where you can just copy things, and there’s a one-way link, and there’s no way to really know who’s watching whom. So what does that mean? It means that companies like Google had to come along and scrape the entire global network constantly to reconstitute the back links, to try to contextualize things so they could do things like sort them to give search results that are meaningful.
Steven Cherry: So these two-way links, the idea for which you also credit to Ted Nelson in the book—these are the key to making sure that people get paid for their contributions. Is that right?
Jaron Lanier: Right. Well, it means that all information is contextualized in the sense of its history of origin. It means, therefore, that people can take responsibility for it and be rewarded for it. So it creates a higher-quality network, and it creates a possibility of an economy in an information society.
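[Editor’s note: The mechanism can be sketched in a few lines of code. The toy structure below is our illustration of the principle, not Nelson’s actual Xanadu design: every time a document links to another, the target automatically records a back link, so provenance never has to be reconstituted by scraping.]

```python
class Document:
    """A node in a toy two-way-link network (a sketch, not Nelson's design)."""

    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.links = []      # documents this one points to
        self.backlinks = []  # documents pointing here, maintained automatically

    def link_to(self, target):
        # The link is recorded on BOTH ends, so the target always knows who
        # references it -- no global scrape is needed to rebuild context.
        self.links.append(target)
        target.backlinks.append(self)

    def referencers(self):
        # With back links intact, attribution (and, in Lanier's scheme,
        # payment) can flow to the origin whenever someone builds on it.
        return [doc.author for doc in self.backlinks]


original = Document("alice", "a useful observation")
remix = Document("bob", "builds on alice's observation")
remix.link_to(original)  # one call, two directions
print(original.referencers())  # ['bob'] -- the origin knows its context
```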
Steven Cherry: So we could imagine IBM Watson–like medical robots that diagnose ailments and perform surgeries. And they’d be doing so based on the knowledge of hundreds, maybe thousands, of doctors and researchers, none of whom are getting paid now and who will eventually be out of work. And I guess you just want them to be paid for that knowledge.
Jaron Lanier: Right. And there’s a subtle point here, which is that these corpora have to gather a ton of data at first, and then only incremental data to keep them up-to-date later—presumably, in most cases, anyway. So, for instance, at first you have to gather tons of data about human ailments, or you have to get tons of examples of translations to create a statistical automated language translation service, or whatever the example is. And then years down the road, the basic thing is functioning, but you have to keep updating it, because reality isn’t fixed, right? Illnesses will shift, germs will evolve, or, in the case of language, slang will evolve, or specialized terms. And so you have to keep on updating the corpora.
And what I’m proposing is that the payment for data has to be kind of relative to its value in the market conditions at the time. I make a funny analogy about this where an action-movie star these days might be paid quite a lot, even though all he does is grunt once in a while and run around. And so you might say, “Well, this guy’s getting a million dollars per grunt, which seems pretty weird because a lot of people will grunt for free.” But nonetheless, he’s in the right place at the right time to provide just the right grunt, and so he gets a million bucks. That’s what the market says.
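[Editor’s note: The book argues for the principle rather than specifying an algorithm, but the accounting might look something like the hypothetical sketch below: each time a service answers a query, the contributors whose examples informed that answer split a royalty in proportion to how much their data was used, at whatever rate market conditions set.]

```python
from collections import defaultdict

def pay_contributors(used_examples, fee, ledger):
    """Split one query's royalty among the people whose data informed it.

    used_examples: (contributor, weight) pairs -- how much each stored
        example influenced this particular answer.
    fee: the royalty pool for this query, set by market conditions.
    ledger: running micropayment balances, keyed by contributor.
    """
    total_weight = sum(weight for _, weight in used_examples)
    for contributor, weight in used_examples:
        ledger[contributor] += fee * weight / total_weight

ledger = defaultdict(float)
# A translation query leans mostly on maria's example sentences today...
pay_contributors([("maria", 0.7), ("wei", 0.3)], fee=0.01, ledger=ledger)
# ...but slang shifts, and tomorrow newer examples earn more of the fee.
pay_contributors([("wei", 0.2), ("devon", 0.8)], fee=0.01, ledger=ledger)

for person, balance in sorted(ledger.items()):
    print(f"{person}: ${balance:.4f}")
```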
Steven Cherry: So you raise and answer a number of objections to your scheme. One specific argument you didn’t directly address in the book is one made by Clay Shirky, and it was in a seminal article about 13 years ago called “The Case Against Micropayments.” And let me just read one of his key points there: “Users want predictable and simple pricing. Micropayments, meanwhile, waste the users’ mental effort in order to conserve cheap resources by creating many tiny unpredictable transactions. Micropayments thus create in the mind of the user both anxiety and confusion, characteristics that users have not heretofore been known to actively seek out.”
Jaron Lanier: Yeah. I mean, I’m afraid Clay is going to be remembered as the Lysenko of economics someday. But, I mean, basically what he’s defining is a user-interface challenge. And we have to assess it in comparison to other challenges. So right now, your data is constantly being analyzed by unforeseen forces and having influences upon you that are unknowable but generally are reflected in vastly less accessible and more complicated experiences with health care and finance in general.
And so, I mean, this notion that what you see won’t hurt you because it’s simple is just profoundly stupid and antihuman. I just hardly even know how to begin to—like if that attitude was correct, then we should all stop doing good user-interface design and just let all the computers tell us what to do. I mean, it’s just an incredibly repulsive argument to me.
Steven Cherry: Yeah. Your system requires a single universal identity for each person. And it would be hard to be anonymous or even pseudonymous in your world, even if you were willing to forgo your share of micropayments.
Jaron Lanier: I rather disagree with you on that, although there are many details to work out. Right now, once again, we have this attitude that what you don’t see doesn’t matter. But, of course, what you don’t see does matter, because other people see it. So who is really anonymous? Who knows if they’re anonymous? I mean, this is like this crazy cat-and-mouse game. So the truth is that nobody knows whether they’re anonymous or not right now, and that’s a stupid situation.
In the world I’m talking about, the situation would be clearer. If you don’t want yourself tracked, you set the price of your information high, and if nobody can afford to buy it, you pay a price. I mean, it’s really that simple. The mechanism becomes much more transparent. And so I think the problem right now is that you’re comparing—I mean, in a sense it’s a very difficult thing, because my ideas are hypothetical since they haven’t been tested, but we’re comparing them against a false or phony or illusory sense of what has been tested.
So right now there’s an environment where people are being lied to—where they’re being told they’re anonymous when they’re only sometimes anonymous, or something. And obviously, I’m proposing a hypothetical future. I can’t prove it would be better, but at least the contest should be fair, and we should be honest about what we’re doing right now.
Steven Cherry: It’s hard not to think of the NSA and Verizon and the PRISM program this month in the context of your book. Facebook is going to have to pay you for the information that it culls from you, but presumably the NSA would not. You don’t discuss any…
Jaron Lanier: Oh, no, no, no, no. That’s a core idea in the book, is the NSA damn well will. Yes, absolutely, they will pay you. And this is absolutely a key idea.
Steven Cherry: All right. Fair enough. It’s hard to see them agreeing to that, but…
Jaron Lanier: No, no, no, no. Of course they—listen, here’s the deal. We don’t give the police—we don’t issue the police an arbitrary number of free batons and police cars and guns; they have to pay for them out of a budget. And that’s a critical idea that’s been a part of all democracies, and the reason why is the citizenry isn’t a citizenry unless it controls the power of the purse. And, you know, if we say that the government doesn’t have to pay for information, that’s the same as giving them a license for infinite spying, which eventually means infinite power as technology becomes more advanced. It’s an absolutely unviable alternative.
So, yes, they must pay. And the reason that’ll be enforced is because lawyers and accountants will be on their ass. And just to answer some obvious things, yeah, if they have a specific criminal investigation, they don’t have to tell the people in advance that they’re getting paid, because that would reveal it. Yeah, that would be under court order; that should be an exception, as it always has to be in a democracy. They will not be able to do omni spying anymore. They won’t be able to spy in advance without people knowing they’re being spied on, because the people will get money, and that’s proper. It is actually a totally reasonable solution.
And so you can’t have democracy in a highly evolved information society if information is free. You just can’t. I mean, because you’ll be giving the government an infinite spying license. And it might sound like an odd idea, but I hope once you roll it over in your brain, you’ll start to see that it’s just a very simple and sensible idea.
Steven Cherry: Yeah. I think it doesn’t go too far to say that you seem to think that the fate of democracy is up for grabs here. And I guess partly that’s a general argument about the need for a strong middle class, both economically and politically. But to some extent, it’s just the need to blunt the power of these siren servers.
Jaron Lanier: Right. Well, we have to reflect the idea that information processing is real—real work with real consequences. We like to enter into the fantasy that it isn’t, because we want to believe in something for nothing; we want to believe we can create artificial intelligences without human input and that sort of thing. But if we can give up the fantasy and accept that information processing is real, then we can apply that realism to make better democracies and better markets and better government. And that’s really all I’m proposing.
Steven Cherry: Well, Jaron, Odysseus saved his ship from the Sirens by plugging his sailors’ ears with wax so they couldn’t hear the Sirens’ songs. And I guess that’s the Luddite strategy. Your near namesake, Jason, saved the Argo by having Orpheus play even more beautiful music than the Sirens. I think you’ve written some really beautiful music in your new book, and we’ll see if it can be more attractive than Facebook and Google and Apple. Thanks for writing it, and thanks for joining us today.
Jaron Lanier: Okay. Okay. Well, thank you for this Homeric odyssey.
Steven Cherry: We’ve been speaking with computer scientist Jaron Lanier, author of Who Owns the Future?, about his view that we’re paradoxically concentrating wealth and power in the very networks we once expected would flatten the world’s playing field.