Dr Phoebe V Moore was invited to speak on a panel organised by the German Commission for Occupational Health and Safety and Standardization (KAN) at the Human Computer Interaction conference 26 – 31 July 2019. Dr Moore’s trip was funded by Leicester University’s ESRC Impact Accelerator grant. Whilst there, Moore conducted some interviews for her own research, including the following.
Interview with keynote speaker Prof Richard H.R. Harper, School of Computing and Communications, Lancaster University, at Human Computer Interaction 2019, Florida
Interviewer Dr Phoebe V. Moore
31st July 2019
Phoebe: What are the risks workers face in the new age of AI?
Richard: One of the consequences of AI integration into workplaces is not merely that it might lead to automation, but that its effects may not be confined to the sections of the economy traditionally supported by trade unions, such as shop-floor factory workers. It may instead affect those who are more often supported by other types of affiliations, namely professional associations. Indeed, AI applications are already starting to displace lawyers, accountants and middle management workers, which could be seen as a paradox, given that automation is usually assumed to hit the lower skill tiers first.
That this is so points to something that is not always mentioned in regard to AI: its potentially enormous cost and the shape of investment that follows from it. The turn to AI will not be a gradual process of implementation but will require massive outlays in investment, and so such investments will occur only when the benefits of its introduction are seen to be very large. The cost reduction delivered by laying off professional staff might be large enough to justify AI as their replacement, but the wages of unskilled staff may not. So AI will be introduced in ways that distort the normal profile of investment, sucking resources from other areas towards AI. As a result, automation of more traditional means might be delayed as AI becomes the focus of attention.
Phoebe: Are there any benefits to AI and work?
Richard: The actual benefits of AI are still largely unproven, and it may be some time before AI reliably delivers them. The historical failure of computer technologies to deliver cost benefits at the outset of their introduction is well known. When businesses started buying computers in the late 80s and 90s, there was no evidence of greater productivity or efficiency, but instead an increase in total costs. This was described as the productivity paradox. It was only after IT investment was taken as a given rather than a new cost that these benefits started to be measurable. Will AI create a similar paradox? Will the costs of AI outweigh its benefits? Will AI be too expensive to deliver solutions in the short term? And does this mean it will in the longer term? How long will that be?
AI is, frankly, expensive. Thus, its integration into workplaces doesn't yet justify the process and business changes required at the lower end of the job market. This is the main reason it is not going to result in the replacement of low-cost employment, at least not immediately. Indeed, its costs are so great that even when it is introduced to create minor savings in employment levels, it has unexpected consequences elsewhere that are not always desired, even by those seeking savings. We already see the use of voice engines in many customer-facing financial services, for example, intended to enable 'intelligent' automated telephone interaction. But the costs of doing so can turn out to be so great that what was intended only to reduce telephone staff can lead to reductions in staff levels elsewhere, behind counters say, affecting customer services. AI on the telephone can mean no people face to face. The automation of one leads to the removal of all intelligence in the other.
Phoebe: Will new jobs be created?
Richard: A further consequence of AI in the workplace is that, to service the role of AI, a new cadre of highly paid, high-skilled workers will be necessary. Those who engineer AI applications will not only provide the learning samples but will also do the testing and fitting of AI pattern criteria to relevant case studies. These software engineers may end up using the very office spaces that their own technologies made vacant. The hours these staff work may also make them seem analogous to 19th-century mill workers: forced to do shifts so as to ensure that the enormous costs of the computer applications they service are justified. Just as in textile mills, surveillance will be essential, conspicuous and at times sinister; but here it will be outputs via the keyboard that will be measured and uploaded to the shared code repositories.
Phoebe: Will there be any other costs to workers?
Richard: Maintenance and monitoring of code work will be potentially very severe. Software engineers will be hired to quickly spot and correct bugs in code; the 'speed' with which they do so will be monitored and judged. In this sense, workers will be subject to the pressures of software velocity, a term for a type of performance-based pay that looks at how quickly a bug is solved and patched, and new code put into a code set. Indeed, a whole slew of new phrases such as 'software velocity' will be in circulation to discuss deskilled code workers' jobs, sitting alongside a process of the commodification of code writing as a skilled job.
Furthermore, code writing will become increasingly and conspicuously transnational. Code sharing and management tools are indifferent to where people work. Code exists in a virtual place across and beyond borders. This raises the question of what relationships coders, code writers and the janitors of code will have. What regime and political-cultural framework can exist when the outputs aren't only visible on-screen in some particular workplace, but consist of code in a cloud, seen from anywhere? What frameworks of work governance will they be subject to? Will workers be visible at all? The people judging the work output are invisible, but their power is enhanced by such commodification methods.
These days, the AI production line is distributed across the world. So when you become a code janitor and a 'commodified coder', it is important to ask questions about employee rights: about the experience of writing, the way you are managed, and the way your work is sustained and hopefully protected, given that AI work is inherently a transnational activity. Indeed, if employees are in different places and the workload is increased through new expectations of speed and efficiency, while they are facilitating the very AI that is also being implemented into workplaces in other forms such as people analytics, how will workers be protected, given the different local labour regimes under which they work, and when they have very little contact with one another?
Phoebe: Thank you for speaking to me, Richard Harper!