Artificial Intelligence: Treasure Map or Ethical Minefield for Venture Capital?

Scale-Up engages constantly with AI. Alex Lazovsky shares some thoughts that guide us.

Scale-Up VC blog
Oct 20, 2020

Ever since the first humans started using spears and clubs, the technology we invent has presented us with ethical decisions. Some decisions, like mandating lifeboats on cruise ships, are easy because the benefits are so obvious. Others, like banning nuclear weapons, are clear because the consequences are so atrocious. Some, however, are ambiguous with strong arguments for and against. Is greater mobility worth 1.35 million road traffic deaths per year? Good question.

Such technological revolutions and ethical conundrums arise seldom, perhaps just once a generation, if that. But we’re in the middle of one right now due to artificial intelligence. As I’ve argued before, AI is here, it’s huge, and our societies are just letting it happen rather than understanding what’s going on and making informed, reasonable, forward-thinking choices.

Today I’d like to make another contribution to the debate. As a venture capitalist in Silicon Valley, I make decisions about pursuing AI daily. Of course, technological development follows the capital. We’re engaging with the technology, as we must. Let me share a few ethical dimensions of AI’s development that my peers and I frequently consider and some constructive ways to approach them.

What AI is and isn’t

It’s easy to overlook the AI that has already penetrated our everyday lives because it doesn’t look like what pop culture has led us to expect. There are no androids, like Star Trek TNG’s Data or the Cylons from Battlestar Galactica. And thank goodness. Questions like whether such beings are alive, whether they can be guilty of crimes, whether they can be owned as property, whether damaging them is assault or vandalism, and so on are better left to jurists and theologians than to a humble venture capitalist.

Fortunately, it seems that we won’t have to worry about such problems for a while (though perhaps eventually). As distinguished scholars like Daniel Dennett and Yuval Noah Harari argue, artificial intelligence is very different from artificial consciousness. The former is already pervasive, but we needn’t expect the latter any time soon.

So how do we characterize the AI we do have, and how is it different from artificial consciousness? AI is just the application of algorithms — rule-based procedures in pursuit of some result — to computational problems. Every time a photo app recognizes a face, every time a bot recommends an ad or product, every time GPS directions automatically adapt to traffic conditions, every time a device collects data and uses it to improve its effectiveness, AI algorithms are at work.

However, these algorithms don’t “know” they’re improving, optimizing, and learning. There’s no self in them to do the knowing. Siri has no hopes, no memories, no personality. It’s (she’s?) just an equation in a box that can tweak itself. In that sense, AI is, at best, like the occupant of John Searle’s Chinese room, shuffling symbols it doesn’t understand, and nothing like a person.
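To make that concrete, here is a minimal sketch in Python (my own illustration, not anything Siri actually runs) of an “equation in a box” that tweaks itself:

```python
import numpy as np

# A toy self-tweaking equation: fit y = w * x by gradient descent.
# The parameter w gets better with every pass, but nothing in here
# knows, hopes, or remembers. It is arithmetic all the way down.
rng = np.random.default_rng(0)
x = rng.random(100)
y = 3.0 * x + rng.normal(0.0, 0.05, 100)  # data from a hidden rule, y ≈ 3x

w = 0.0                                    # initial guess
for _ in range(200):
    error = w * x - y                      # how wrong the current guess is
    gradient = 2 * (error * x).mean()      # slope of the mean squared error
    w -= 0.1 * gradient                    # "tweak itself" a little

print(f"learned w = {w:.2f}")              # ~3.00: it optimized, without a self
```

The loop recovers the hidden rule, and in that thin sense it “learns,” but there is plainly no one home.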

AI on the left; sci-fi on the right. (Images: Wikimedia & Jeff Hitchcock)

While excluding theology from our ethical considerations helps, we still need to tackle economics and politics, which are a VC’s home territory anyway.

Disruption of the labor market

AI is already disrupting the labor market, and the disruption is only going to intensify, with up to 14% of the global labor force being made redundant in the next decade. That’s a technical way of saying that around 500 million people are about to lose their jobs — a number higher than the combined labor forces of Russia, the EU, and the USA.
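(A quick back-of-the-envelope check, using my own rough figures rather than the study’s: the global labor force is somewhere around 3.5 billion people, and 14% of 3.5 billion is roughly 490 million, which is where a number of about half a billion comes from.)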

Disrupting labor on that scale has a few ethically relevant aspects:

  1. For the vast majority of people, who do not live off capital gains, work is how they feed and clothe themselves, save for retirement, and spoil their children.
  2. Work is important psychologically. Philosophers from Locke to Hegel have recognized that we change the world with our work, and we learn who we are by seeing ourselves reflected back in those changes. Work shapes the world, and that changed world shapes the workers.
  3. AI is moving the disruption up the value chain to affect workers with ever higher qualifications. People used to be able to make a living digging holes and stacking crates, but no more. Drivers and sailors are already endangered species. Actuaries, teachers, and call center operators are on borrowed time. As more qualified workers are displaced, it gets harder and harder to retrain them quickly enough, and to a high enough level, to keep them in the game.

On the other hand, is freeing people from labor not a longstanding dream of humanity? Does anyone ever imagine shovels in paradise? If AI can help deliver something like fully automated luxury capitalism, do we not have an ethical obligation to pursue it?

The problem here is probably less the outcome of AI’s disruption than the transition between here and there. Two generations from now, the entire world could live in glorious comfort and prosperity. Two decades from now might be chaos.

Managing that transition is going to require creative solutions. Universal basic income is a common suggestion; the basic idea is to redistribute the gains produced by AI from the winners to the losers. Another suggestion is a shortened work week, which similarly redistributes the remaining work among more workers, perhaps sacrificing efficiency for equality.

Both of these ideas seem too simple. Just moving money or hours from one pile to another might treat symptoms, but not causes. Idle hands remain the devil’s playthings. But Frithjof Bergmann, a German-American philosopher, might have a better, more nuanced idea. His New Work initiative combines working less, and thereby profiting from AI’s liberation of labor, with helping people use their new-found time for activities that are more meaningful to them.

From a VC perspective, it’s important to see the opportunity in every problem. Higher-value labor is going to be displaced, which is a rich opportunity for those displacing it. But we can also help to cushion that blow. There will certainly be demand for high-value vocational training, as the immediate buzz around Google’s Professional Certificate Training Program indicates. New Work also shows that people will probably seek other meaningful ways to occupy themselves as labor becomes increasingly superfluous. The labor disruption offers attentive VCs ways to generate returns as well as to help those affected move into other, more productive pursuits.

Your insurance agent, your kid’s teacher, and your personal assistant will soon be in here. Your travel agent already is. (Image: Nenad Stojkovic)

Distribution of benefits

There’s a flip side to half a billion people becoming redundant: a massive productivity increase. People apply AI to new and existing business models because it can help them to save money and earn more. Generally speaking, AI will aggregate the losers’ losses and remit them as profit to someone else.

Increasing efficiency and reaping the profits isn’t necessarily a bad thing. On the contrary, using technology to improve our lives, with the greatest rewards going to those who make the biggest improvements, is what capitalism is all about. But AI might be changing the calculus drastically. If there were, say, 50 day laborers for every medieval landowner and 500 factory workers for every industrial-era factory owner, AI might skew the ratio of have-nots to haves into the millions.

How does that look in numbers? According to one study, global inequality has increased by 15 percentage points over the last two hundred years, a period of much slower technological change than today. Not only can AI beat a steam engine’s productivity increase, AI can design a better steam engine and replace the engineer. The trend for the next two hundred years is going to be much steeper.

Inequality is a problem for two reasons. First, something just feels wrong when some people in the economic system have more wealth than they could ever spend while others scrounge and starve. As even the liberal philosopher John Rawls wrote, “The least advantaged are not, if all goes well, the unfortunate and unlucky — objects of our charity and compassion, much less our pity — but those to whom reciprocity is owed as a matter of basic justice.” Sure, VCs invest money to make more of it, but we’d rather help the economy grow and provide others with opportunities than have millions of listless, unoccupied, pitiable people around.

Second, inequality is correlated with social and political unrest. Such instability hurts people and makes it hard for them to plan for their lives and businesses. Investors are no different. Stochastic shocks like revolutions and terrorist attacks are toxic for the investment climate, beyond the tragedy of the casualties they cause.

As VCs, we can do two things. First, we need to take a long view and anticipate AI’s indirect effects in the longer term. Machines aren’t going to launch a revolution, but are some applications of AI more likely to start an uncontrollable chain reaction than others? Are they worth the risk?

Second, we can help by promoting technologies that let everyone profit from our era’s radical productivity gains. Robinhood is just one example: it lets anyone start investing in equities, and it’s a great example of a growing unicorn for VCs with deeper pockets. Of course, we’re looking for solutions too, but solutions suited to our skills and position.

And it’s important to note that AI need not be only the problem here. Not only can AI make everyone’s life more comfortable, it could perhaps help us to optimize the system overall. Perhaps stability, prosperity, and fairness can be quantified and fed into an actively learning algorithm that suggests social, legal, and economic optimization measures. That would make a compelling pitch.

Rights and control

Thinking about unrest as a reaction to AI’s disruptions and algorithmically formulated policy solutions to prevent it brings us to the next consideration: rights and control. Capitalism and democracy — the combination that has largely prevailed throughout the world over the last 200 years — are based on the idea of autonomous individuals deciding for themselves. Without that foundation, without free people deciding according to their own preferences, both become a farce. But algorithmic governance and algorithmic bias are here. AI is shaping our societies and our minds.

It’s no surprise that politicians welcome technical solutions to relieve them of making decisions. The habit of deferring responsibility to technical experts is why we have the term “technocrat.” It just so happens that the technocrat is now computational.

There are probably cases where algorithmic governance is beneficial. If, say, a government water agency uses AI to determine the optimal requirements and layout for water pipes given projected demographic changes, that’s super. It’s a technical decision that would probably be outsourced to a third-party expert anyway. If AI provides a cheaper, better solution to a technical question, fine. As a matter of fact, automating more mundane planning decisions with machine learning might also be a B2B growth area to watch.

But what about questions that affect people’s basic rights, like using AI to find and arrest the parents of refugee children? And what about predictive policing algorithms (I’m already nervous) that entrench racist practices because the data they were trained on was racially biased in the first place?
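To see the mechanism, consider a deliberately simplified sketch (a hypothetical of my own, not any vendor’s actual system). Two neighborhoods have identical offense rates, but one is patrolled three times as heavily, so its offenses are three times as likely to end up as arrest records, and a model trained on those records duly scores it as three times riskier:

```python
import numpy as np

# Toy setup: neighborhoods A and B have IDENTICAL true offense rates.
rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)        # 0 = neighborhood A, 1 = neighborhood B
offended = rng.random(n) < 0.05      # 5% offense rate everywhere

# Historical bias: B is patrolled 3x as heavily, so the same offense
# is 3x as likely to produce a recorded arrest there.
patrol = np.where(group == 1, 0.90, 0.30)
arrested = offended & (rng.random(n) < patrol)

# A "predictive" model trained only on arrest records just learns
# each neighborhood's arrest rate and calls it risk.
risk_a = arrested[group == 0].mean()
risk_b = arrested[group == 1].mean()
print(f"predicted risk  A: {risk_a:.2%}  B: {risk_b:.2%}")  # ~1.5% vs ~4.5%
```

Send more patrols wherever the model says “risky” and the skew compounds with every retraining, which is exactly the feedback loop that makes people (me included) nervous.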

The VC community is split over such cases. Some see them as growth industries and pursue the returns without pangs of conscience. Others see them as progressive investments that are good for society. Still others are sharply opposed.

Palantir, a recently listed company that provides big-data and AI security solutions to businesses and governments, is an excellent case in point. Many profited from its recent and successful IPO. Peter Thiel is notorious for supporting Donald Trump, whose administration is a frequent Palantir customer. Thiel sits on Palantir’s board and vocally supports the business. Others are so hostile to the threat they see in Palantir that the company has nearly been chased out of Silicon Valley.

Facial recognition is a valuable tool, but any tool is a weapon if you hold it right. (Image: Gerd Altmann via pixabay)

Generally speaking, VCs do not seek to become Bond villains (although Elon Musk is putting on a very convincing audition for the next Moonraker). We want to leave the world better than we found it, but — naturally — we all have our own ideas about what that means. When it comes to fundamental questions like security vs. rights or the efficiency of algorithms vs. the legitimacy of transparent democratic governance, there is no “VC perspective” any more than there is a “Russian perspective” or an “American perspective”. Each one of us is an individual, after all.

When it comes to fundamental questions of rights, the best we can do is to invest according to our conscience, balancing financial returns with ethical priorities. And we can share our ideas with others in articles like this one, to help them to see from our privileged vantage point and to make informed, autonomous decisions for themselves.
