Q&A with David Wood


Chair, London Futurists and Principal, Delta Wisdom




Q: Hi David, would you be able to introduce yourself and give us an overview of your role?

My background was in the smartphone and mobile computing industry, where I saw phenomenal change over a 25-year period. That change followed a recurring pattern: long stretches of slow progress, followed by bursts of rapid transformation. I believe the same pattern will be repeated in many other disruptive technology sectors in the next 5-10 years. Because multiple disruptions will be taking place in the same time period, the overall outcomes are very hard to foresee. That’s why more attention needs to be paid to these possibilities, to collectively improve our foresight capabilities.

As such, I nowadays provide consultancy on anticipating future disruptions, and I chair the London Futurists meetup. The mission of London Futurists is “serious analysis of radical scenarios for the next 3-40 years”.

Q: In a 2015 Bank of England study, it was revealed that up to 15m jobs in Britain are at risk of being lost to a “third machine age” where sophisticated robotics will replace human activity. To what extent would you agree with this, and why?

The figure of 15m jobs, as stated by Bank of England chief economist Andy Haldane, is in my view an underestimate of the number of jobs in Britain at risk from automation. Artificial intelligence is presently in the early phase of a second generation of capability. The first generation saw computers outperforming humans in calculation – manipulating data via pre-programmed algorithms. The new generation, known as machine learning, is seeing computers outperforming humans in insight – generating their own algorithms by observing data. In parallel, robot systems are swiftly improving in their ability both to sense and to manifest emotion. As a result, many of the jobs sometimes thought to be safe from automation – jobs involving creativity and/or human rapport – may be replaced by robots sooner than people expect.
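To make that contrast concrete, here is a minimal sketch in Python (the loan-approval scenario and all names are invented for illustration, not taken from the interview): the first function embodies a rule a human programmer wrote in advance; the second derives its own rule by observing labelled examples.

```python
# First generation: the human pre-programs the algorithm.
def approve_preprogrammed(net_income):
    """Hand-written rule: the decision logic is fixed by the programmer."""
    return net_income > 20_000

# Second generation (machine learning): the rule is generated from data.
def learn_rule(examples):
    """Infer a decision threshold from (net_income, approved) pairs:
    the midpoint between the average approved and rejected cases."""
    approved = [x for x, ok in examples if ok]
    rejected = [x for x, ok in examples if not ok]
    threshold = (sum(approved) / len(approved) +
                 sum(rejected) / len(rejected)) / 2
    return lambda net_income: net_income > threshold

# Usage: the machine "observes" four past decisions and extracts its own rule.
history = [(5_000, False), (15_000, False), (30_000, True), (50_000, True)]
approve_learned = learn_rule(history)
print(approve_learned(40_000))  # True – exceeds the learned threshold of 25,000
```

Real machine-learning systems fit far richer models than a single threshold, but the shift is the same: the algorithm’s parameters come from the data rather than from the programmer.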

The exact way this will happen is subject to many uncertainties. But I see it as around 50% likely that at least 90% of existing jobs will be replaced by automation by 2040. And most of the new jobs will be handled by automation too – because robots can learn new skills a lot faster than humans.

Q: How soon until companies – such as financial institutions – might feel a significant effect of this labour shift?

I agree with Amara’s Law: we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. In the short run, technology frequently fails to live up to its boosters’ predictions – often due to poor design, too many bugs, difficulty in setup, a lack of meaningful applications, or concerns over safety and cost. But when the associated ecosystem reaches sufficient maturity, a rapid flip can occur from sub-exponential growth to super-exponential growth.
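One standard way to picture this flip is the logistic (S-shaped) adoption curve, a common model of technology diffusion (my illustration, not one Wood cites): growth looks disappointingly slow for years, then steepens sharply once the ecosystem matures.

```python
import math

def adoption(t, capacity=100.0, steepness=1.0, midpoint=0.0):
    """Logistic S-curve: percentage adoption at time t."""
    return capacity / (1 + math.exp(-steepness * (t - midpoint)))

# Years relative to the ecosystem reaching maturity (t = 0):
for year in range(-6, 7, 2):
    print(f"year {year:+d}: {adoption(year):5.1f}% adopted")
# Early years crawl (0.2% -> 1.8% -> 11.9%), then the flip: 50%, 88%, 98%...
```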

Following this pattern, the community around machine learning is nowadays making rapid progress, after several decades of comparative stagnation. The growth is fuelled by huge investments from some of the world’s most capable technology companies, including Google, Facebook, Microsoft, Baidu, and IBM. Because the community generally publishes its research in the open, new breakthroughs from one company are quickly adopted – and further improved – by others.

We’re already seeing the impacts of labour shift. New jobs are still being created – as old ones are destroyed – but they tend to require fewer humans to carry them out. As a result, more and more people are transitioning from full-time to part-time, piecemeal work. This phenomenon is sometimes called the “winner takes all” economy. Productivity continues to increase, but the rewards of this productivity are shared by a smaller set of the workforce. Median household income is falling in real terms. This brings its own risks in terms of social alienation.

Whereas technological unemployment was hardly mentioned in the 2015 general election campaign, things will be very different by the next general election in 2020. One foretaste of this is the recent landmark publication by the RSA, “Creative citizen, creative state: The principled and pragmatic case for a Universal Basic Income”.

Q: For those of us left in work(!), what wearable tech are we likely to be able to utilise in our day-to-day lives? Will augmented reality (e.g. glasses) be the new smartphones?

In order to remain in work, in the wake of the impact of machine learning, we’ll need to learn how to “race with the machines” rather than “race against the machines” – in the phrase coined by MIT economist Erik Brynjolfsson. This requires a closer dovetailing of human intelligence with artificial intelligence. We therefore need to ride the curve by which computers have become increasingly miniaturised and increasingly mobile. This trend is seeing computers move from fixed to handheld to wearable. (Ultimately they’ll become “embeddable” and “insideable”.)

The wearable industry today is roughly where the smartphone industry was in 2004 – there’s lots of potential, but devices don’t yet deliver sufficient utility to mainstream users. However, with forthcoming improvements in both hardware and software, new generations of smartglasses are poised to replace smartphones as people’s preferred interface to the digital world. Smartglasses will judiciously provide real-time, relevant information in our visual field, about the tasks that we are undertaking. They’ll be adopted first in enterprise, then by travellers and games-players, and finally will become ubiquitous – potentially by 2023.

Q: Year-on-year, the threat of cybercrime increases and attacks on financial institutions become more widespread; never before has cyber security been so pertinent. How vulnerable are companies to such attacks moving forward? Is there anything that can be done to sufficiently future-proof against this?

The risk of cyberattack will inevitably grow as corporate systems become more connected and more valuable, and as cybercriminals become more capable. The worst vulnerabilities are often in unexpected places: a trusted third party may have access to a company’s IT system for one legitimate reason, yet hackers can exploit that connection to reach their intended target via a sequence of hops. Devices attached to IT systems as part of “the Internet of Things” and/or “Smart Homes” often lack world-class security; attention to security requirements is sacrificed in a competitive rush to “get products to customers quickly” as part of a “permanent beta” culture.
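That “sequence of hops” can be pictured as path-finding over a graph of trust relationships. A minimal sketch (the network topology below is invented for illustration): a link granted to a maintenance vendor for one purpose becomes, transitively, a route to the real target.

```python
from collections import deque

# Invented trust links: each grants access for a legitimate reason.
trust_links = {
    "internet": ["hvac_vendor"],          # vendor portal is internet-facing
    "hvac_vendor": ["building_network"],  # vendor maintains building systems
    "building_network": ["corporate_it"],
    "corporate_it": ["payment_system"],   # the attacker's real target
}

def attack_path(graph, source, target):
    """Breadth-first search: return one chain of hops, or None."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        for nxt in graph.get(path[-1], []):
            if nxt == target:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(attack_path(trust_links, "internet", "payment_system"))
# ['internet', 'hvac_vendor', 'building_network', 'corporate_it', 'payment_system']
```

The defensive lesson is that every edge in the trust graph is part of the attack surface, not just the perimeter.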

This situation is made worse when governments decide not to inform software vendors of vulnerabilities that are brought to their attention. Government agencies want these vulnerabilities to remain in place for their own backdoor surveillance purposes. But any such vulnerability is liable to exploitation by black-hat hackers as well as white-hat ones.

The first response needed by companies to this state of affairs is to avoid thinking that any one solution can be applied as a permanent “magic bullet” against cybercrime. There’s a fierce arms race underway: defences that are state-of-the-art one year can be overtaken the next.

The second response is to audit all suppliers and partners in terms of their awareness and adoption of cybersecurity best practice. This will introduce extra cost, but should result in longer-term benefit.

I believe governments need to step in too – to prevent software companies from disowning responsibility in their licensing terms for defects and security weaknesses.

Q: Any final thoughts?

The future is full of great promise as well as great threat. The task of futurists is to help society move beyond the instant “future shock” reactions of either ‘wow’ or ‘yuk’, which can prevent deeper analysis. A much greater share of national and international resources ought to be applied to figuring out the upsides and downsides of potential future scenarios, and to finding ways to increase the likelihood of desirable outcomes.

The single most important task of the next ten years is, arguably, to find better ways of cooperating. In an age of unprecedented crowds – both online and offline – the global human community urgently needs social mechanisms that will encourage the wisdom of crowds rather than the folly of crowds. Our existing methods of mutual coordination seem to produce more strife than harmony these days. We’re struggling to cope with ever larger tensions and disruptions on the shrinking world stage. The nation state, the multinational business firm, the free market, the non-governmental organisation, the various international bodies of global coordination set up after the Second World War – all find themselves deeply challenged by the myriad fast-evolving overlapping waves of stress of the early twenty-first century. A potential “democracy 2.0”, in which technology enables better collaborative decision-making, could, therefore, be technology’s greatest gift to the future.

What’s next?

You can hear David’s keynote presentation, “The Elephant on the Footpath – Anticipating Industry Disruption with Future Tech”, at the TSAM conference being held on 15th March 2016 in London.
