Justin Westcott is Edelman’s European Head of Technology and General Manager, London.
I spent two days last week in Amsterdam attending the World Summit AI. Now in its third year, the event had already outgrown its original home, moving out to a proper trade-show space housing a good few thousand more people.
A fitting place, I felt, for this year's discussion: home to the world's first "bubble", where hype ran away with reality in the "tulip mania" of 1637, the year that saw the rapid rise and then crash in the price of the then-new tulip bulb. I'm being overly facetious, but what's clear as the conversation of the last three years has progressed is that while there has been lots of progress, both academic and in applied use cases within business, it's extremely narrow. In essence we've just created much better software. Software that optimises: quicker, better, more accurate; software that can, in Amazon's words, seem like magic. We have not created anything we could really call intelligent.
AI, as a field, isn't going to stop. It may have some trust challenges to cross as a result of over-hype, but it will continue. Where it will end, who knows? But the future will be a hell of a lot different from today.
Quick-fire take-outs:
- Hype vs reality. Gary Marcus, promoting his new book Rebooting AI, demonstrated why he feels the technology's ability has been over-hyped, and why as a result we're in danger of over-trusting current AI, which he'd class as mediocre. It's why Teslas still can't drive themselves, why Facebook M failed, and why all the radiologists the industry thought would be displaced still have jobs. The technology has got really good at doing some very narrow things, but saying it's intelligent or has any cognition is quite far from the truth.
- Black box algorithms came up a lot: the issue that many current Machine Learning (ML) systems are so complicated that no one actually knows how they arrive at their answers. That's a real problem when it comes to questions of ethics and fairness. Emily Foges, CEO of Luminance, said that her firm views black box algorithms as a design flaw. When working in regulated industries like the financial or legal sector, you can no longer hide behind technology; there always needs to be an individual at the firm who is accountable. Not understanding how decisions are made and recommendations raised is a fundamental flaw.
- It's magic. Dr Werner Vogels, CTO of Amazon, gave a great presentation outlining just how pervasive ML is across the whole firm: from recommendations, to grab-and-go stores, to logistics, from vehicle fleets to soon-to-shadow-our-skies drones. For Amazon it's all about the consumer experience and trying to make it all seem like magic.
- What if we succeed? I love hearing Stuart Russell, Professor of Computer Science at Berkeley, speak. He was at the event to pose challenging questions, specifically: what happens when we do finally make a machine that's more intelligent than we are? It's a question posed since Turing's time, but one the AI industry seems to shrug its shoulders at and plough on with development nonetheless, inching ever nearer to this now seemingly inevitable future. He was there to make the case that we need to stop, reboot the industry, and think about how to control this technology before it's too late.
- Iron men and ladies. Businesses like Boeing, Sky, Lego, Shell and Mars took to the stage to demonstrate how AI is being embedded into their operations. For them, much of it is about improving processes and augmenting their people. The Chief Digital Officer of Mars put it well: they're looking to create suits of armour for their people, turning them all into superheroes. Better work, better experience, greater impact. It's not about replacing human effort (yet), but making it more effective.
- Rethink education. AI has the power to radically change our education systems if we just allow it. Squirrel AI, a Chinese ed-tech company, talked about how it is both democratising education at scale and personalising it, recognising that every student has a unique learning path. To get the most out of a student's potential, you first need to understand what they already know and how they learn, then tailor the material to their individual requirements. A software + AI + classroom teacher approach means we can finally do this at near-minimal marginal cost. China is now doing this at scale; why can't we?
- Our saviour? Humanity has got itself into quite the pickle, hasn't it? We have rapidly growing populations, we're living longer, in denser habitations, with increasing energy needs. We seem to have outlived our Malthusian moments, but providing energy, food and water at this scale is not something we can solve without this technology improving. We need big AI to solve these big challenges. Shell's head of AI made this point well: AI is being used to help them accelerate solutions to the energy conundrum. DeepMind, also presenting, reminded us of its beautifully simple purpose: solve intelligence. It's the solve for everything else.
- Pinterest, wow. That seemingly trivial consumer application people like to use when designing their home or planning their wedding has actually built one of the most useful data sets (for AI) in the world. Just think: all those images (200bn pins) have been, in some way, semantically sorted by its users. Images are put onto boards (4bn), associated with other images, and so on, giving Pinterest access to much greater meaning and context with which to train its algorithms. Impressive. Do use its camera search function and you'll see just how good it's become. Possible acquisition target next year? 100%.
- Energy killed the AI star. See "Our saviour?" above: AI could well be our saviour, but at what cost? Powering AI today is energy intensive, far less efficient than the supercomputer we carry around with us each day (not your phone, your brain). Qualcomm's Max Welling stated that we really need to be improving efficiency, aiming for more AI per joule, not just more AI. And new compute models are needed to do so.
- The game changer. Quantum computing has the power to change the game, and it's coming seemingly sooner than many thought. Without going into all the details here, it holds the potential for a paradigm shift in computation. Plug AI into this and reality may well get ahead of the hype.