Sunday, 19 February 2017

The coming decades

This week's The World This Weekend began with the split, as Mark Mardell presented it, between Vice President Pence's reassuring, wholly warm words on NATO and President Trump's unsettling, only-partially warm words on the same subject, but much of it - thankfully - was spent on something completely different: the possible impacts of AI on our lives over the coming decades. 

The closing interview with Calum Chace and Richard Susskind was so thought-provoking (and important) that I thought I'd transcribe it for both you and posterity. (Give me a prize!)

MARK MARDELL: Well, to discuss some of the implications of Pepper and friends, and artificial intelligence more generally, I've brought together Calum Chace, author of The Economic Singularity, and, speaking first, Professor Richard Susskind, President of the Society for Computers and Law and author of The Future of the Professions. 
RICHARD SUSSKIND: I look at this from the point of view of the recipient of services. Now, many people say, well, of course it's going to put people out of work, and we could have a debate about that, but what I'm more interested in is this idea that we have health services that are creaking, educational services with worries about quality, and we have a set of technologies emerging that seem to me to be able to offer patients, students and clients what was hitherto the preserve of professionals: easy access to expertise, to guidance, basically to help people with their lives. And so we have here, in principle, one of the key solutions to the problems facing our health service, our legal system and our educational system. 
MARK MARDELL: Calum, are you worried about the jobs? 
CALUM CHACE: I am both excited and worried. AI will provide much better services - it will improve all the services that we need - but it will do that by replacing the humans who currently provide those services less well. So, in the future we need to figure out how to make sure those humans carry on receiving an income, and that is a big challenge. 
RICHARD SUSSKIND: I think we have to be clear about timescales here. I often say that for us the 2020s are going to be a decade not of unemployment but of redeployment, and by that I mean we're going to see many professionals - and I'm focusing here on doctors, lawyers, accountants and so forth - retraining to be involved not so much with competing with systems but actually with building these systems. And so we'll have a whole industry and, indeed, a great deal of employment in the 2020s devoted to human beings designing and building, engineering their knowledge into these systems. Once we get into the 30s and 40s I think it will be an entirely different ball game. 
MARK MARDELL: Calum, in every other industrial revolution or change in technology, whether it was the Great Industrial Revolution or the mechanisation in factories in the Thirties, people have predicted that there'll be mass unemployment, and yet there have been new jobs created. Won't that just simply happen again? 
CALUM CHACE: I don't think so, and it is entirely true that previous rounds of automation have not caused lasting unemployment. The question is, 'Is it different this time?', and I think it is, because past rounds of automation have, by and large, been mechanisation. Machines have replaced our muscle power. Now, that wasn't very good for the horse, because the horse had nothing to offer except its muscle power. What's coming is a wave of cognitive automation, where machines do the jobs that we currently do with our minds, and when they take those jobs it's not at all clear what we have left to offer. I partly agree with Richard that the 2020s aren't going to be the time when we have massive waves of unemployment, although I think there are some areas where we are going to see it - for instance, professional drivers, I think, are largely going to be rendered unemployed by self-driving vehicles. That's about a million people in the UK and about 5 million people in the US. It's not at all clear what other jobs they can do. 
MARK MARDELL: I suppose, as ever, who owns and controls the systems that are doing these things really matters?  
RICHARD SUSSKIND: This is a vital question. If you think of income coming from two sources today - from the labour of people on the one hand and from capital on the other - and you consider that to be a pie, what we're finding is that the labour slice is going to get smaller. In principle, one can see some very attractive and interesting ideas about the capital - by which we mean the intellectual capital, the systems and the data - very interesting ideas that it might actually be shared amongst us, on a Commons or Wikipedia-type basis. The reality, however, as things seem to be emerging, is that a very small number of very large and influential commercial organisations will both own and control the systems and data. What's at issue here - and you can see it emerging in this discussion - is not some changes at the periphery of our society. We're seeing some fundamental challenges to the way we organise ourselves, to the way we live, to the way we work, and this needs deep policy thinking as well as political activity. 
MARK MARDELL: And Calum, we heard from Sheffield about the attraction of the basic income. Do you think that's one solution? 
CALUM CHACE: Well, I think it might be a partial solution, but the thing about basic income is, first, that it's fantastically expensive and, secondly, that it's not enough. If you've been used to earning, say, £40,000 as a high-end professional driver you're not going to be at all happy to be told that in future you're going to be living forever on £10,000 a year. So UBI is only part of the solution. 
MARK MARDELL: Richard, are we getting somewhat carried away? I mean, are there some things that only humans will ever be able to do? 
RICHARD SUSSKIND: If you think about the basic human capacities that we have - our ability to think and reason and solve problems - we call that 'cognitive'. That's one. Our ability to move things around, to lift - our manual capabilities - that's a second. Thirdly, there are our emotional capabilities - to recognise and express emotions. And a fourth is our moral capacity, our ability to recognise what's right and wrong and also to take responsibility. It seems fairly clear that AI is impinging on the cognitive capability and robotics on the manual capability. The area of 'affective computing' - that's machines that can both express and detect emotions - is moving fairly rapidly into the emotional dimension, which may leave us with this question of our moral capacity. Do we believe machines can recognise what's right or wrong? But the bigger question is, do we want machines to take responsibility in the way that we as human beings do? Are we comfortable with the idea, for example, of a machine deciding to switch off a life support system, and doing so? Probably not. And so there's certainly a moral dimension that for many years yet we'll want to reserve for human beings. But this raises a fundamental question about the moral limits of machines, and the issues that Calum and I have been discussing today. Here's another one, again for policy makers and legislators to think about, and to think about now: even if machines in the future can undertake various tasks, do we want to draw a moral line at some place? 

5 comments:

  1. 'Basic income'? Welfare when it's at home, no?

    Mardell is right (the rarest of phrases) about technology creating more jobs than it kills, but there really is a problem with those jobs generally being created somewhere other than where the current workers live. Having said that, given his past socialist broadcasting, it's refreshing to hear him ask a basic devil's advocate question.

    As for AI taking over, have a look at this and despair for Japan:

    https://youtu.be/nkcKaNqfykg

    1. The top comment on that YouTube video is: "The day I buy this will be the lowest point in my life. Orders one ASAP".

      For a YouTube comment, that's surprisingly witty and, sadly, probably true.

    2. I don't agree about more jobs being created than are destroyed. Most of the jobs created in the first industrial revolution and the IT revolution from the 1980s onwards related to increased administrative complexity and then domestic services. Both are now threatened by robotics.

      Basic income is no more welfare than a share dividend is welfare.

    3. Other types of jobs are created by technology, but those jobs tend not to be in the location where the workers whose jobs are replaced live, and they certainly require skills those workers not only don't have but may not even be able to be trained to do.

      How is basic income not welfare? And if it is like a share dividend, how is that like welfare?

  2. I noted that Mardell referred to the computer as being white.
    We can now expect a cyber Abbott to accuse him of racism, and seeking equal rights for the black computers who are yet to be adequately rewarded after slavery issues.
    Mardell on The World at One earlier also said that the computer he saw in hospital had the face of a young boy, with dark hair too.
    How many BBC blokes will now be rushing up north to get into a Dalek suit and onto the ward?... Wouldn't Jimmy Savile be proud?

