TL;DR: On the impact of AI on society and the economy, and its potential to enable a Zeroth World with unprecedented economic output.

In this post I want to talk about the impact that artificial intelligence might have on society and the economy; not because of the “terminator scenario” but because of what it can already achieve right now. Over the last few months I have had many such discussions within industry, academia, and government, and this is a summary of what I think; as always, biased and incomplete.

Before delving into the actual discussion, I would like to clarify what I consider artificial intelligence (AI), as this is a very elusive term that has been overloaded several times to suit various narratives. When I talk about artificial intelligence (AI), what I mean is any technology, technology complex, or system that: 1) (Sensing) gathers information through direct input, sensors, etc., 2) (Learning) processes that information with the explicit or implicit aim of forming an evaluation of its environment, 3) (Deciding) decides on a course of action, and 4) (Acting) informs or implements that course of action.

For those familiar with it, this is quite similar to the OODA loop, an abstraction that captures dynamic decision-making with feedback. The (minor) difference here is that we (a) consider broader systems and (b) do not necessarily require feedback. In terms of (Acting) we also assume some form of autonomy; however, the action might be either only suggested by the system or directly executed. The purpose of this “definition” is not to add yet another definition to the mix but to make precise, for the purpose of this post, what we will be talking about. For simplicity, from now on we will refer to such systems as AI or AI systems. We will also refer to larger systems as AI systems if they contain such technology at their core.
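To make these four stages a bit more tangible, here is a minimal sketch in Python; all names and the toy “risk” evaluation are made up purely for illustration and do not refer to any particular system:

```python
# Minimal sketch of the sense/learn/decide/act loop described above.
# All names are hypothetical; a real system would plug in actual
# sensors, models, and actuators at the marked points.

class DummyEnvironment:
    """Stand-in for whatever the system observes."""
    def __init__(self, readings):
        self.readings = readings

    def read(self):
        return self.readings.pop(0) if self.readings else None


def run_loop(env, autonomous=False):
    while True:
        observation = env.read()                        # (Sensing)
        if observation is None:
            break
        evaluation = {"risk": observation["risk"]}      # (Learning) -- placeholder model
        action = "brake" if evaluation["risk"] > 0.5 else "cruise"  # (Deciding)
        if autonomous:                                  # (Acting) execute directly ...
            print(f"executing: {action}")
        else:                                           # ... or only suggest to a human
            print(f"suggested: {action}")


run_loop(DummyEnvironment([{"risk": 0.2}, {"risk": 0.8}]))
```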

Examples of where such AI systems are used or appear are:

  • Credit ratings
  • Amazon’s “people also bought”
  • Autonomous vehicles
  • Medical decision-support systems
  • Facial recognition

Also, note that I chose the term “AI systems” over many other equally fitting terms as it seems to be more “accessible” than some of the more technical ones, such as Machine Learning or Decision-Support Systems. Otherwise this choice is really arbitrary; let’s not make it about the choice of words.

Impact through Hybridization

A lot of the current discussion has been centered around the direct substitution of workers, technology, etc. by AI systems, as in robot-in-human-out. I believe, however, that this is not the likely scenario in the short to mid term as it would require a very high maturity level of current AI and machine learning technology that seems far away. Those wary of AI would argue that the singularity, where basically AI systems improve themselves, will drive maturity exponentially fast. Whether this is likely to happen I do not know, as predictions of this type are tough. Most of the voices wary of AI seem to argue from a utilitarian perspective à la Bernoulli and rather want to err on the safe side; from a risk management perspective not necessarily a bad approach. Most of those unconcerned argue that we have not figured out some very basic challenges and as such there is no real risk.

Comparison of different step size rules [Source: XKCD]

While this discourse might be important in its own right, I want to focus more on the (relatively) immediate, short-term impact: timelines on the order of 10-20 years, which is really short compared to the speed with which societies and economic systems adapt.

Scaling and Enabling through AI

For AI to have a disruptive impact on society, full maturity is not required; neither is explainability, although it might be desirable. The reason is that we can simply “pair up a human with an AI”, which I refer to as Hybridization, forming a symbiotic system in a more Xenoblade-esque fashion. The basic principle is that 90% of the basics can be performed efficiently and faster by an AI, and for the remaining 10% we have human override. This will (1) enable an individual to perform tasks that were out of reach at unprecedented speed and (2) allow an individual to aggressively scale up her/his operations by operating on a higher level, letting the AI take care of the basics.

While this sounds like Sci-Fi at first, a closer look reveals that we have been operating like this for many decades: we build tools to automate basic tasks (where “basic” is relative to the current level). This leads to an automate-and-elevate paradigm or cycle: automate the basics (e.g., via a machine or computer) and then go to the next level. A couple of examples:

  • Driver + Google Maps
  • Engineer + finite elements software
  • Vlogger + Camera + Final Cut Pro
  • MD + X-Ray

I am sure you can come up with hundreds of other examples. What all these examples have in common is (1) an enabling factor and (2) a scale-up factor. Take the “Engineer + finite elements software” example: the engineer can suddenly compute and test designs that were impossible to verify by himself before and that used to require a larger number of other people to be involved. With this tool, the number of involved people can be significantly reduced (the individual’s productivity skyrockets) and completely new, previously unthinkable things can suddenly be done.

What AI systems bring to the mix is that they suddenly allow us to (at least partially) tool and automate tasks that were out of reach so far because of “messy inputs”, i.e., these AI systems allow us to redefine what we consider “basic”.

An example

Let us consider the example of autonomous driving; not because I like it particularly but because most of us have a pretty good idea of what driving involves. Also, today’s cars already come with very basic automation, such as “cruise control” and “lane assist” systems, so the idea is not that foreign. Traditionally, a car has one driver. While AI for autonomous driving seems far from being completely there yet, we do not need it to be in order to achieve disruptive improvements. Here are two use cases:

Use case 1: Let the AI take care of the basic driving tasks. Whenever a situation is unclear, the controls are transferred to a centralized control center, where professional drivers take over for the duration of the “complex task”, and then the controls are passed back to the car. This might allow a single driver, together with AI subsystems, to operate 4-10 cars at a time; the range is arbitrary but seems reasonable: not correcting for correlation and tail risks, a 4x factor would require the AI to handle 75% of the driven miles autonomously and a 10x factor would require 90% of the driven miles to be handled autonomously. Current disengagement rates of Waymo seem to be far better than that.
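The back-of-the-envelope arithmetic behind these numbers is simple; here is a small sketch, under the same naive assumptions (no correlation, no tail risks, driver workload proportional to the non-autonomous miles):

```python
# Back-of-the-envelope relation between the fraction of miles the AI handles
# autonomously and how many cars one remote driver can cover, under the naive
# model from above (no correlation, no tail risks).

def cars_per_driver(autonomous_fraction):
    # a driver is only needed for the non-autonomous share of the miles
    return 1.0 / (1.0 - autonomous_fraction)

def required_autonomy(scaling_factor):
    # inverse question: what autonomy share does a given scaling factor need?
    return 1.0 - 1.0 / scaling_factor

for f in (0.75, 0.90):
    print(f"{f:.0%} autonomous miles -> {cars_per_driver(f):.0f} cars per driver")

for x in (4, 10):
    print(f"{x}x scaling requires {required_autonomy(x):.0%} autonomous miles")
```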

Use case 2: Long-haul trucking. Highway autonomy is much easier than intracity operations. Have truck drivers drive the truck to a “handover point” on a highway. The truck driver gets off, the truck drives autonomously via the highway network to the handover point close to its destination, and a human truck driver “picks up” the truck for last-mile intracity driving. If you now consider the ratio between the intracity portions and the highway portions of the trip, the number of required drivers can be reduced significantly; a 10x factor seems conservative. Moreover, rest times etc. can be cut as well.
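Again a small, purely illustrative sketch; the intracity shares below are made-up numbers, not measured data:

```python
# Rough sketch of the driver-reduction factor for the long-haul use case.
# The only input is the share of total driving time that remains human-driven
# (the intracity first/last mile); the shares below are purely illustrative.

def driver_reduction(intracity_share):
    # drivers are only needed for the intracity portions, so the number of
    # required drivers scales roughly with that share of total driving time
    return 1.0 / intracity_share

for share in (0.20, 0.10, 0.05):
    print(f"{share:.0%} intracity driving -> roughly {driver_reduction(share):.0f}x fewer drivers")
```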

Clearly, we can also combine use cases 1 and 2 for extra safety at minimal extra cost. What we see from this basic example, however, is that AI systems can scale up what a single human can do by significant multiples. Also, in the long-haul example from above, the quality of life of the drivers goes up, e.g., less time spent away from family (that is, for those who keep their jobs). However, the very important flip side of this hybridization is that it threatens to displace a huge fraction of jobs: at a scaling of 10x, about 90% of the jobs might be at risk; this is of course a naive estimate.

Other tasks that might become “basic” are:

  • Call center operations: We already have call systems handling large portions of the call before passing it to an operator. AI-based systems bring this to another level (see the sketch after this list). Think: Google Duplex
  • Checking NDAs and contracts: Time-consuming and not value-adding. There are several systems (I have not verified their accuracy) that offer automatic review, e.g., NDALynn, LawGeex (see also TechSpot).
  • Managing investment portfolios: Robo-advisors in the retail space deliver performance similar to or better than traditional and costly (and often subpar) investment advisors; after all, the hot shots mostly work for funds or UHNWIs. (see here and here)
  • Design of (simple) machine learning solutions: Google’s AutoML automates the creation of high-performance machine learning models. Upload your data and get a deployment-ready model with a REST API etc. No data scientist required.
  • I know of other large companies using AI systems to automate the RFP process by sifting through thousands of pages of specifications to determine a product offering.
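To illustrate the call center item above, here is a toy sketch of the escalation pattern (the AI handles routine requests, humans get the rest); the intents and the confidence threshold are entirely made up:

```python
# Toy sketch of the "AI handles the routine part, human takes over the rest"
# escalation pattern for a call center. Intents and threshold are illustrative.

def handle_request(intent, confidence, threshold=0.8):
    routine = {"opening_hours", "balance_inquiry", "password_reset"}
    if intent in routine and confidence >= threshold:
        return f"AI resolves '{intent}' automatically"
    return f"escalate '{intent}' to a human operator"

print(handle_request("opening_hours", 0.95))
print(handle_request("billing_dispute", 0.95))
print(handle_request("balance_inquiry", 0.55))
```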

Of course, just to be clear, all of the above also come with certain usage risks if not used properly or without the necessary expertise.

The bigger picture: learning rate and discovery rate

What all this might lead to is a Zeroth World whose advantage (broadly speaking, in terms of development: economic, educational, societal, etc.) over the First World might be as large as the advantage of the First World over the Third World.

GDP per employed person

A very skewed but still informative metric is GDP per person employed. It generally gives a good idea of the average productivity levels achieved. There are a couple of special cases, for example China with its extremely high variance. Nonetheless, in the graphic below, generated from Google’s dataset, you can see a strict separation between (some) First World countries and (some) Third World countries; note that the scale is logarithmic:

GDP per employed person [Source: Google’s dataset]

Now imagine that some countries, by leveraging AI systems, achieve a 10x gain in output per employed person. That would be the Zeroth World: people operating at 10x their First World productivity levels. Hard to imagine, but that is roughly the separation between the US and Ghana, for example.
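To see why a 10x gap is so striking on a logarithmic scale, here is a tiny illustration; the figures are rough, illustrative values, not exact data from the chart:

```python
import math

# Illustrative (not exact) GDP-per-employed-person figures, roughly in the
# ballpark of the chart above, to show that a 10x gap is one full unit on a
# log10 scale -- the same offset a hypothetical Zeroth World would add on top
# of today's First World.
gdp_per_employed = {
    "Ghana (illustrative)":      10_000,
    "US (illustrative)":        100_000,
    "Zeroth World (10x US)":  1_000_000,
}

for name, value in gdp_per_employed.items():
    print(f"{name:>25}: {value:>9,} USD  (log10 = {math.log10(value):.1f})")
```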

The graph above is very compatible with well-known trends, e.g., Singapore strongly investing in automation or China being the country with the largest number of industrial robots going online. JP Morgan estimates that automation could add up to $1.1 trillion to the global economy over the next 10-15 years. While this amounts to only a 1-1.5% boost in global GDP, in actuality the effect might be much more pronounced as it will be concentrated in a few countries, leading to a much stronger separation; even if the whole boost were attributed to the US alone, it would still be only about 5%. But AI systems go beyond mere manufacturing automation and it is hard to estimate the cumulative effect. To put things into context, in manufacturing an extreme shift happened around the 2000s when the first wave of strong automation kicked in. Over the last 30 or so years we roughly doubled manufacturing output while close to halving the number of people employed; see the graphic from Business Insider:

Manufacturing output vs. automation [Source: Business Insider]

That is 4x in about 30 years in a physical space, with large, tangible assets and, more generally, with lots of overall inertia in the system. It is quite likely that AI systems will have an even more pronounced effect because they are more widely deployable, so the 10x scenario is not that ambitious.
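As a quick sanity check of what that means in annual terms (pure arithmetic on the figures above, nothing more):

```python
import math

# What a ~4x productivity gain over ~30 years implies as an annual rate,
# and how long a 10x gain would take at that same pace. Pure arithmetic on
# the figures quoted above (2x output, ~0.5x headcount => ~4x per person).

years = 30
gain = 2 / 0.5                      # doubled output, roughly halved workforce

annual_rate = gain ** (1 / years) - 1
years_to_10x = math.log(10) / math.log(1 + annual_rate)

print(f"implied annual productivity growth: {annual_rate:.1%}")   # ~4.7%
print(f"years to reach 10x at that pace:    {years_to_10x:.0f}")  # ~50
```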

Learning rate vs discovery rate

To better understand what AI systems reasonably can and cannot do, without making strong predictions about the future, we need to differentiate between the learning rate and the discovery rate of a technology. In a nutshell, the learning rate captures how fast prices, required resources, etc. fall over time for an existing solution or product, e.g., by how much flying got cheaper over time. It captures the various improvements made over time in deploying a given technology. The learning rate makes no statement about new discoveries or overcoming fundamental roadblocks; that is exactly what the discovery rate captures. While the learning rate tends to be quite observable and often follows a relatively stable trend over time, the discovery rate is much more unpredictable (due to its nature), and that is where speculation about the future and its various scenarios often comes into play. I will not go there: the learning rate alone can provide us with some insights. Note that we refer to these two as “rates” as it is very insightful to consider the world on a logarithmic scale, e.g., measuring the time to double or halve. Let us consider the example of historical prices for GFlops:

Learning rate GFlops [Source: AIImpacts.org]

We can find a very similar trend in historical prices for storage:

Learning rate storage [Source: jcmit.net]

These two are probably pretty much expected as they roughly follow Moore’s law; however, there are many similar examples in other industries with different rates, for example historical prices for solar panels or flights. Now let us compare this to the recent increase in compute deployed for training AI systems:

Compute used for training AI systems [Source: OpenAI Blog]
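To get a feeling for what this chart implies, here is a quick comparison of an 18-month (Moore’s-law-style) doubling time with the roughly 3.5-month doubling time discussed below; the 5-year horizon is just an illustrative window, not a forecast:

```python
# How much faster a ~3.5-month doubling time is than Moore's-law-style
# ~18-month doubling, expressed as growth factors. The 5-year horizon is
# purely an illustrative window, not a prediction.

def growth_factor(doubling_time_months, horizon_months):
    return 2 ** (horizon_months / doubling_time_months)

for name, dt in [("Moore's law (~18-month doubling)", 18),
                 ("AI training compute (~3.5-month doubling)", 3.5)]:
    print(f"{name}: ~{growth_factor(dt, 12):.1f}x per year, "
          f"~{growth_factor(dt, 60):,.0f}x over 5 years")
```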

Compared to Moore’s law, with its doubling time of roughly 18 months, the compute deployed here has (so far) been doubling much faster, about every 3.5 months. Clearly, neither can continue forever at such aggressive rates; however, this example points at two things: (a) we are moving much faster than anything we have seen so far and (b) with the deployment of more compute usually comes a roughly similar increase in required data (the reason being that training algorithms, usually based on variants of stochastic gradient descent, can only make so many passes over the data before overfitting). Notably, the applications in the graph with the highest compute do not rely on labeled data (except for maybe Neural Machine Translation to some extent; not sure) but are reinforcement learning systems, where training data is generated through simulation and (self-)play. For more details see the AI and Compute post on OpenAI’s blog. The graph above is not exactly the learning rate as it lacks the relation to, e.g., price, but it clearly shows how fast we are progressing. It is not hard to imagine that with new hardware architectures, in a not too distant future, that type of power will be available on your cell phone.

So even without new discoveries, just following the natural learning rate of the industry and making the current state of the art cheaper will have profound impact. For example, just a few days ago Google’s DeepMind (not completely uncontroversially) won against pro players at StarCraft 2 (see also here). The training of this system required an enormous amount of computational resources. Even in light of the controversy, this is still an important achievement in terms of scaling technology, large-scale training with multiple agents, demonstrating that well-designed reinforcement learning systems can learn very complex tasks, and more generally in terms of “making it work”; whether reinforcement learning in general is the right approach to such problems is left for another discussion. In a few years we will teach building such integrated large-scale systems end-to-end at universities as a senior-design type of project, and a few years later you will be able to download such a bot from the App Store. Crazy? Think of neural style transfer a few years back. You can now get Prisma on your cell phone. Sure, it might offload the computation to the cloud (at least previous versions did so), but that is not the point. The point is that complex AI system designs at the cutting edge are made available to the broader public only a few years after their inception. Google Duplex, which makes restaurant etc. reservations for you, is another such example. To be clear, I am also very well aware of the limitations etc., but at the same time I fail to see a fundamental roadblock, and existing limitations might be removed quickly with good engineering and research.

Impact on society and economy

In a nutshell: we are moving very fast. In fact, so fast that the consequences are unclear. Forget about the “terminator scenario” as a threat to society; not because it might or might not happen, but because the current technology, just following its natural learning rate cycle, already poses a much more immediate challenge with the potential to lead to huge disruptions, both positive and negative.

One very critical impact to think about is the workforce. If AI enables people to be more productive, then either the economic output increases or the number of people required to achieve a given output level decreases; these are two sides of the same coin. The reality is that while there will (likely) be significant improvements in terms of economic output, there is only so much increase the “world” can absorb in a short period of time: at a world economic growth rate of about 2-3% per year, the time it takes to 10x the output is roughly 80-100 years; even with significantly improved efficiency due to AI systems, you can only push the output so far. What this means is that we might be facing a transitory period where efficiency improvements drastically impact employment levels, and it will take considerable time for the workforce to adjust to these changes. In light of this, one might actually contemplate whether populations in several developed countries are shrinking in early anticipation of the times ahead.
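The arithmetic behind the “roughly 80-100 years” figure is straightforward; the exact number depends on the assumed growth rate:

```python
import math

# Time needed to 10x world output at a given annual growth rate -- the
# back-of-the-envelope behind the figure quoted above (the exact number
# shifts with the assumed rate).

def years_to_multiply(factor, annual_growth):
    return math.log(factor) / math.log(1 + annual_growth)

for g in (0.02, 0.025, 0.03):
    print(f"at {g:.1%} growth: ~{years_to_multiply(10, g):.0f} years to 10x output")
```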

The other critical thing to think about is the concentration of power and wealth that might accompany these shifts. Already today we see tech companies accumulating wealth and capital at unprecedented rates, leveraging the network effects of the internet. Yet, being still somewhat tied to the physical world, e.g., through their users, there is still some limit to their growth. It is easily imaginable, however, that the next “category of scale” will be defined by AI companies, with an insane concentration of resources, wealth, and power that makes current concentration levels in the Valley pale in comparison.

We will likely also see the empowering of individuals beyond what we could imagine just a few years back, by (a) multiplying the sheer output of an individual through scaling and (b) enabling the individual to do new things leveraging AI support systems. The “best” will then dominate, and technology will enable that individual to act globally, removing the last of the geographic entry barriers. As a simple example, take the recent “vlog” phenomenon, where one-person video productions can achieve a level of professionalism that rivals that of large-scale productions, executed from any place in the world and distributed worldwide through YouTube. Moreover, the individual can directly “sell” to her/his target audience, cutting out the middleman. This might provide greater diversity and also a democratization of such disciplines, but at the same time it might remove a useful filter in some cases.

These shifts, brought about by AI systems and the resulting technology, come with a lot of (potential) positives and negatives, and the promises of AI systems are great. Riding high on the possibilities of this new paradigm, it is easy to forget that there might be severe unintended consequences with potentially critical impact on our societies and economies. In order to enable sustainable progress we need to not just be aware but to prepare for and actively shape the use of these new technologies.