The rise and fall (and rise) of Artificial Intelligence

Artificial intelligence has been with us nearly as long as humans have been able to think about themselves, about thought and about what they do. Empathy is wired into us – some more than others, but we are all capable of thinking from another’s point of view. This capacity leads us to anthropomorphize things that aren’t human, to imbue the objects in our daily lives with human qualities like moods, characteristics and personality. When we build puppets, robots and models that look sort of human, it is easy for us to assign them greater power, ability and promise than is really there. For marketers in other fields, having consumers attribute ‘magical’ properties to their products would be a dream come true, but for artificial intelligence it is a nightmare – one the industry has expended funds marketing against.

Artificial intelligence has delivered many great tools which today we take for granted. Our phones listen to us and understand our requests in the context of our calendar, our cameras recognise faces and social networks tell us who those faces belong to, machines translate words from one language to another (although don’t get the translations tattooed just yet), and the list goes on. We chuckle at the mistakes these learning and adaptive systems make, we see the huge strides and investment, and we expect a new human-like intelligence to emerge in the short term.

Around the middle of every decade since the 60s there has been a peak in excitement about AI, a frustration with its lack of progress, and a reduction in funding – the AI winters, as they are called. In the eighties it was LISP machines, in the nineties it was expert systems. Now in the twenty-tens (I thought it was the teenies, but that’s a kids’ show apparently) we are seeing a resurgence of AI: a blending of machine learning, predictive modelling and cognitive computing, along with self-driving cars. This raises some rare and interesting questions:
  • Are we headed for a new AI winter?
  • Or an AI apocalypse?
  • Also, will I still be cleaning my home in 2020?
It is certainly true that the set of tasks we can expect software and physical computing systems to do has vastly increased compared to just a decade ago, and massively so since the 60s. Doing all the things humans can do – living in our society, empathising with and understanding us in that broad context – is still well beyond computers, but engaging with us in specific, well-defined domains, such as our calendar or what we would like to buy from the shop, is well within their grasp today. Previously difficult tasks such as searching a database for information, reading it from one screen and keying it into another are now entirely possible – see the earlier blog post on bots. Having a drone fly itself around an obstacle to reach an objective is still very hard. Having a vehicle drive itself on the road is in fact easier, albeit most humans don’t benefit from lidar sensors, ultrasonics and eyes in the back of their head (alright, bumper).

It is good to see AI on the rise again – I have loved the topic ever since getting into programming and taking a cognitive psychology course some years ago. I recall writing an expert system in Pascal back in the 90s. I am concerned, though, as the insurance industry should be, about the prospect of a new AI winter.

Self-driving cars and vehicles have the potential to make the roads safer for all. When we see them, we will imbue them with more power than they have – this is human nature. We will, in the not too distant future, hear people say things like, “the car likes to give cyclists a lot of room on the road” or “the car prefers to take this corner at a fair speed” – imbuing a complex machine of sensors and programming with preferences, desires and likes: human qualities. When the first death comes we will ask how it could do such a thing. When an automated car is put in a position where it must decide between a set of actions, each leading to injury, we will hear people discuss why it chose to do what it did. People may say, “it did the best it could” or, worse, “no person would ever have done that; this is why machines shouldn’t be able to choose.” The latter of course reveals the human construct, an unspoken contract – our expectation that smart or intelligent systems will operate like us, share our values and our culture, and that we can predict their actions in our context.

This is the greatest threat to AI and always has been – the expectation, the contract, that the new intelligence will be like human intelligence. Some winters are due in part to that contract being broken, to these systems not living up to the expectation and making inhuman mistakes. There is a set of tools available now that are not intelligent, but they are smart and they are powerful. We would be remiss in our duty to our customers and shareholders if we did not leverage them. Manage expectations about these powerful tools and understand the very real limits that exist on them. If we can do this we may benefit from the AI boom and avoid another AI winter.

Will we see an AI apocalypse? Ironically, it is not the human-like intelligences that may be our greatest threat but the simpler ones. A human-like intelligence could empathise, could act in accordance with values and could be relatively predictable (in a human way). Science fiction is full of stories of smart robots that act like insects and replicate – that do nothing but make copies of themselves – and pose a great threat to any civilisation. They are not intelligent; they don’t want to kill off all life in the galaxy – they just turn all the available resources into copies of themselves, which would have that effect. Frankly, we are much closer to building that threat (with drones, 3D printers, etc.) than a super-intelligence that decides all human life is worthless. For now, though, I expect these things to stay firmly in the space of science fiction. I include this discussion here because it demonstrates a key difference between smart-with-unintended-consequences and ‘intelligent’ – a lesson worth bearing in mind for those adopting AI.

Finally – will we see robots cleaning our homes by 2020? Well, Roomba is out there and sort of does that. Stairs and steps are still a huge challenge for robots, and frankly, differentiating furniture, pets, clutter, magazines, rubbish, dust and recycling in a changing environment is still a very complex problem. As in insurance, I think smart things will make cleaning easier and assist those who invest, but there’ll be a role for human intelligence in ensuring the pets aren’t recycled and the customer ultimately gets the service they expect.

Robotics, bots and chocolate teapots

Increasingly in operational efficiency and automation circles we’re hearing about bots and robotics. As a software engineer in days past and a recovering enterprise architect, I have given up biting my tongue and now repeatedly note that “we have seen it all before.” I’ve written screen scrapers that pull data out of screens, written code to drive terminal applications and even hunted around user interfaces to find buttons to press – a sketch of the idea appears below the teapot. The early price comparison websites used these techniques over a decade ago to do the comparison. These techniques work for a while but are desperately fragile: someone changes the name of a button, a screen or a screen flow and they break. However, they can help. I recall a manager a while ago lamenting that ‘the solution’ was about as useful as a chocolate teapot. A useful ten minutes hunting down this video of a chocolate teapot holding boiling water for one whole pot of tea made the point for me. Sometimes all you need is one pot of tea.
Tea poured from a chocolate teapot

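By way of illustration, here is a minimal Python sketch (not from the original post) of the fixed-position screen scraping described above. The screen layout and field coordinates are invented, but real green-screen scrapers work much the same way – which is exactly why moving a field or renaming a label breaks them.

```python
# A toy terminal dump; the layout is invented for illustration.
SCREEN = (
    "POLICY: PX-10442   STATUS: ACTIVE \n"
    "HOLDER: J SMITH    RENEWAL: 2016-04-01\n"
)

def scrape(screen: str) -> dict:
    """Pull fields out of a terminal dump by row and column position."""
    rows = screen.splitlines()
    return {
        # Hard-coded coordinates: shift a label one column and this breaks.
        "policy": rows[0][8:16].strip(),
        "status": rows[0][27:34].strip(),
        "renewal": rows[1][28:38].strip(),
    }

print(scrape(SCREEN))
# {'policy': 'PX-10442', 'status': 'ACTIVE', 'renewal': '2016-04-01'}
```

The rekeying half of such a bot is simply the reverse: write the scraped values into another screen at equally hard-coded positions, and hope nobody redesigns either screen.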
So it’s not new, some bots may be fragile, and with my “efficiency of IT spend” hat on (the one typically worn by enterprise architects) stitching automation together by having software do what people do is an awful solution – but as a pragmatist, sometimes it’s good enough.

Things have moved on, though. Rather than a physical machine with a ghost apparently operating the mouse and keyboard, we have virtual machines, and the monitoring of them is a lot better than it used to be. Furthermore, machine learning and artificial intelligence libraries are now robust enough to contribute meaningfully smart or learning bots into the mix – bots that can do a bit more than rote button pressing and reading screens. In fact this is all reminiscent of the AI dream of multi-agent systems and distributed artificial intelligence, in which autonomous agents collaborated on learning and problem-solving tasks, amongst other things. The replacement of teams of humans working on tasks with teams of bots aligns directly with that early vision. The way these systems are now stitched together owes much to the recent work on service-oriented architecture, component orchestration and modern approaches to monitoring distributed Internet-scale applications.

For outsourcers it makes a great deal of sense. The legacy systems are controlled and unlikely to change, the benefits are quick, and if the bots do break, a single team looking after many bots across the estate can fix them swiftly. It may not be as elegant as SOA purists would like, but it helps them automate and achieve their objectives.

The language frustrates me, though, albeit ‘bots’ is better than ‘chocolate teapots’. I’ve heard ‘bot’ used for a chunk of code to run, a machine learning model and a virtual machine running the code. I’ve even heard discussion comparing the number of staff saved to the number of bots in play – I can well imagine operations leads in the future including bot efficiency in their KPIs. Personally, I’d rather we discussed them for what they are – virtual desktops, screen scraper components, regression models, decision trees, code, bits of SQL where appropriate, etc. – rather than bucket them together, but perhaps I’m too close to the technology. In short, ‘bots’ may not be a well-defined term, but the collection it describes is another useful set of tools, becoming increasingly robust, to add to the architect’s toolkit.
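To make the ‘learning bot’ idea concrete, here is a minimal sketch using scikit-learn’s decision tree classifier. Everything in it – the feature names, the data and the routing rule – is invented for illustration; a real deployment would train on historical case outcomes rather than a handful of made-up rows.

```python
# A toy triage bot: a decision tree that learns which queue a scraped
# case record belongs in, instead of rote button pressing.
from sklearn.tree import DecisionTreeClassifier

# Features per case: [claim_value_gbp, customer_tenure_years, documents_missing]
X = [
    [120,  1, 0],
    [90,   4, 0],
    [5400, 2, 1],
    [7800, 7, 1],
    [300,  3, 0],
    [6100, 1, 1],
]
# Target queue: 0 = straight-through automation, 1 = human review
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

new_case = [[4500, 2, 1]]
print("route to", "human review" if model.predict(new_case)[0] else "automation")
```

The point is the shape of the thing: a small, inspectable model bolted onto a screen-scraping bot lets it triage work rather than blindly rekey it – a step closer to the multi-agent vision than to a macro recorder.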