
The case for technological optimism

Nov 29 2017 10:56
Johan Fourie

Johan Fourie is associate professor in economics at Stellenbosch University.


A few months ago, I visited the Computer History Museum in Mountain View, California. The museum, with more than 90 000 objects on display, is dedicated to the computer revolution, from its roots in the 20th century to self-driving cars today.

It’s remarkable to observe the profound change in technology over the past three decades. 

The mobile computing display, I thought, summarised this best, tracing the evolution from the first laptop computers of the 1980s to the modern-day iPhone.

What also became clear from the exhibitions was that those in the know at the start of the revolution were right about the transformational impact of computers, but almost certainly wrong about the way they would affect us.

We are now at the cusp of another revolution. 

Artificial intelligence (AI), led by remarkable innovations in machine-learning technology, is making rapid progress. It is already around us. 

Facebook’s image-recognition software, the voice recognition of Apple’s Siri and, probably most ambitiously, the self-driving ability of Tesla’s electric cars all rely on machine learning. 

Computer scientists are finding more applications every day, from financial markets (Michael Jordaan recently launched a machine-learning unit trust) to court judgments (a team of economists and computer scientists has shown that the quality of New York verdicts can be significantly improved with machine-learning technology).

Ask any technology optimist and they’ll tell you the next few years will see new applications that we currently cannot even imagine. But there is a paradox. 

A new NBER working paper by three economists, Erik Brynjolfsson, Chad Syverson and Daniel Rock, affiliated with MIT and the University of Chicago, shows something peculiar: a decline in labour productivity growth over the past decade.

Across both the developed and developing world, growth in labour productivity, meaning the amount of output per worker, is falling. 

Whereas one would expect rapid improvements in technology to increase total factor productivity, boosting investment and raising the ability of workers to build more stuff faster, we observe slower growth, and in some countries even stagnation.

Some, therefore, are pessimistic about the prospects of AI, and of technological innovation more generally.

Robert Gordon, in his The Rise and Fall of American Growth, argues that, despite an upward shift in productivity between 1995 and 2004, US productivity is on a long-run decline. 

Other notable economists, including Nicholas Bloom and William Nordhaus, are somewhat pessimistic about the ability of long-run productivity growth to return to earlier levels. 

Even the US Congressional Budget Office has reduced its 10-year forecast of annual labour productivity growth from 1.8% to 1.5%. Compounded over 10 years, that difference is equivalent to roughly $600bn of lost output (in 2017 dollars).
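A rough back-of-the-envelope calculation shows where a figure of that order comes from. The sketch below assumes a starting US GDP of about $19.5 trillion in 2017 (a figure not given in the article) and treats productivity growth as output growth with labour input held constant, which is a simplification; the result lands in the same ballpark as the $600bn cited.

```python
# Illustrative check of the CBO forecast revision (0.3 percentage points
# over 10 years). GDP_2017 is an assumed round figure, not from the article.

def output_after(gdp: float, growth: float, years: int) -> float:
    """Project output forward at a constant annual growth rate."""
    return gdp * (1 + growth) ** years

GDP_2017 = 19.5e12  # assumed US GDP in dollars, 2017

high = output_after(GDP_2017, 0.018, 10)  # earlier 1.8% forecast
low = output_after(GDP_2017, 0.015, 10)   # revised 1.5% forecast

gap = high - low
print(f"Output gap after 10 years: ${gap / 1e9:.0f}bn")
```

The exact number depends on the GDP base and on how labour input evolves, but the order of magnitude, several hundred billion dollars, is robust to those details.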

How is it possible, to paraphrase Robert Solow in 1987, that we see machine-learning applications everywhere but in the productivity statistics? 

The simplest explanation, of course, is that our optimism is misplaced. Has Siri or Facebook's image-recognition software really made us that much more productive? Some technologies never live up to the hype.

Brynjolfsson and co-authors, though, make a compelling case for technological optimism, offering three reasons for why “even a modest number of currently existing technologies could combine to substantially raise productivity growth and societal welfare”. 

One reason for the apparent paradox, the authors argue, is the mismeasurement of output and productivity. 

The slowdown in productivity in the last decade may simply be an illusion, as most new technologies – think of Google Maps’ accuracy in estimating our arrival time – involve no monetary cost. 

These “free” technologies significantly improve our living standards, but are not picked up by traditional estimates of GDP and productivity. 

A second reason: the benefits of the AI revolution are concentrated, with little improvement in productivity for the median worker. Google (now Alphabet), Apple and Facebook have seen their market value grow rapidly in comparison with other large firms.

Where AI has been adopted outside ICT, it has often been in zero-sum industries, like finance or advertising.

A third, and perhaps most likely, reason: it takes considerable time to harness new technologies fully.

This is especially true, the authors argue, “for those major new technologies that ultimately have an important effect on aggregate statistics and welfare”, also known as general purpose technologies (GPT). 

There are two reasons why it takes long for GPTs to show up in the statistics. First, it takes time to build up a stock of the new technology large enough to have an impact on the aggregate statistics.

While cellphones are everywhere, the applications that benefit from machine learning are still only a small part of our daily lives. 

Second, it takes time to identify the complementary technologies and make these investments. 

As Brynjolfsson and co-authors argue, even if we do not see AI technology in the productivity statistics yet, it is too early to be pessimistic. 

The high valuations of AI companies suggest that investors believe there is real value in those companies, and it is likely that the effects on living standards may be even larger than the benefits that investors hope to capture. 

Machine-learning technology in particular will shape our lives in many ways.

But much like those looking towards the future in the early 1990s and wondering how computers might affect our lives, we have little idea of the applications and complementary innovations that will determine the Googles and Facebooks of the next decade.

Let the Machine (Learning) Age begin! 


This article originally appeared in the 30 November edition of finweek

