The Impact of Advanced Automation and the Cloud on Employment

Master's Thesis, 2015

89 Pages, Grade: 2.1


Table of Contents

1.1 Background and Context
1.2 Research Question
1.3 Value of Research
1.4 Scope and Boundaries of this Study
1.5 Chapter Roadmap

2.1 Introduction
2.2 Sources
2.3 The Cloud and Advanced Automation
2.3.1 Data Centre and the Cloud
2.4 Technological Unemployment
2.5 Impact of Automation on Employment
2.5.1 Outsourcing Vs. Automation
2.6 What is the Singularity?
2.7 Projected Timeline for the Singularity
2.7.1 Moore’s Law
2.7.2 Storage
2.7.3 Supercomputers
2.8 Conclusion

3.1 Introduction
3.2 Purpose of Research
3.3 Research Philosophy
3.3.1 Pragmatism
3.3.2 Positivism
3.3.3 Realism
3.3.4 Interpretivism
3.4 Research Strategy
3.4.1 Online Survey
3.4.2 Data Collection
3.4.3 Case Study
3.5 Survey Tool
3.6 Participant Demographic
3.7 Conclusions

4.1 Introduction
4.2 Data Analysis
4.3 Survey Results
4.3.1 Advanced Automation
4.3.2 Cloud Services
4.3.3 Data Centre
4.3.4 Employment in IT
4.4 Dediserve: A Case Study
4.4.1 Introduction
4.4.2 Employment and Automation
4.5 Summary of Findings

5. Conclusions
5.1 Introduction
5.2 Will Advanced Automation Impact Employment?
5.3 Occupations Under Threat
5.4 A Future with Advanced Automation
5.5 Limitations
5.6 Future Research Opportunities
5.7 Summary


Appendix 1: Ethics Application
Appendix 2: Information Page for Participants
Appendix 3: Informed Consent Form
Appendix 4: Survey

List of Figures and Diagrams

FIGURE 1: Global unemployment trends and projections

FIGURE 2: Adult Population with Advanced Education in Ireland (International Labour Organisation 2014)

FIGURE 3: High skilled jobs currently at risk from advanced automation (Deloitte 2014)

FIGURE 4: Job growth IT sector 2010 – 2020, U.S. Bureau of Labor Statistics

FIGURE 5: Moore’s Law versus Transistor sizes 2000-2020

FIGURE 6: Continued upward growth of storage capacity

FIGURE 7: Growth in Supercomputer Power 1980-2050

FIGURE 8: Survey Response Process, Groves et al. (2004)

FIGURE 9: Will advanced automation have a positive or negative impact on employment?

FIGURE 10: Should automation be feared or embraced?

FIGURE 11: Will automation affect employment in the IT sector?

FIGURE 12: Do you believe advanced automation will replace skilled jobs?

FIGURE 13: Is advanced automation used in your workplace?

FIGURE 14: Has advanced automation replaced jobs in your workplace?

FIGURE 15: Do you fear for your own job?

FIGURE 16: Do you currently use cloud services in your job?

FIGURE 17: Do you plan to increase your use of cloud services?

FIGURE 18: Does cloud reduce employment?

FIGURE 19: Does cloud make you more productive?

FIGURE 20: Are you working more unpaid hours due to the cloud?

FIGURE 21: Is your data secure on cloud platforms?

FIGURE 22: Word cloud highlighting key words from text question 15

FIGURE 23: Do you house your hardware in a data centre?

FIGURE 24: Is a data centre critical to your IT requirements?

FIGURE 25: Is the data centre that contains your data a concern?

FIGURE 26: Will job opportunities increase or decrease?

FIGURE 27: Is education important for future job opportunities?

FIGURE 28: Do you believe that coding is important for future employment?

FIGURE 29: Are you aware of anyone in the IT sector who has found it hard to find employment?

FIGURE 30: Geographical overview of dediserve customer base

FIGURE 31: Employee/Customer Ratio – dediserve Ltd

FIGURE 32: Percentage of businesses which expect automation to replace at least 5% of their workforce


I would like to express my sincere gratitude to my supervisor Diana Wilson for her invaluable support, guidance and encouragement throughout the process of completing this dissertation.

I would also like to thank all of the people who took part in this research. I am grateful for the time they took to participate and the information that they provided. Without them, this dissertation would not have been possible.

Finally, I owe my deepest gratitude to my wife Emer for all of her constructive comments, patience and support given to me during this research, and to whom I dedicate this dissertation.


One of the primary fears in the current global community is the exponential growth and continued sophistication of artificial intelligence. Fundamental to this concern is the wide-ranging impact that this growth will have, not only on the world as we currently know it, but on the place of humans in that world. This point has been termed 'the singularity': the point in time when machines will become self-learning and, more importantly, self-aware. It is at this point that machines and robotics will be elevated from current monotonous job operations to more highly skilled areas. This study examines the drive towards advanced automation and the increased sophistication of artificial intelligence, in conjunction with the cloud, and how this growth may eventually lead to technological unemployment. Some economists predict job losses of 50% or more. Predicting the future typically means extrapolating the past, and it often fails to anticipate breakthroughs; yet it is precisely those unpredictable breakthroughs in computing that could have the biggest impact on the workforce. Education and the upskilling of current workers will be the only way to ensure continued relevance within an automated workforce. A focus on education will ensure that people are best placed to take advantage of this new age of advanced automation. This dissertation concludes that innovation through creativity will ensure that employment opportunities continue to present themselves to those best prepared for such changes.


This thesis examines the sustained rise of advanced automation within a global context, and its impact on employment and society in general. As Managing Director of a cloud infrastructure company, the author finds this subject of significant relevance to the development of the IT industry. The advent of artificial general intelligence, as defined by 'the singularity', is of fundamental importance to this study in assessing its influence on advanced automation. The following section presents a brief overview of contemporary views on this topic, as a means of highlighting the need for further research in this area.

1.1 Background and Context

The recent global recession (2007-2008) brought about a vast reduction in employment across all industries around the globe. According to International Labour Organization (ILO) estimates, unemployment increased from 178 million in 2007 to 197 million in 2012, having peaked at 212 million in 2009 (Aridas and Pasquali, 2013). However, global unemployment in 2013 remained at 202 million, and is projected to grow to 211 million by 2017 (see Figure 1), even with most countries now out of recession and experiencing GDP (gross domestic product) growth (ILO, 2013). The red line illustrates the unemployment percentage during this time, while the orange band shows the worst-case projection going forward:

illustration not visible in this excerpt

FIGURE 1: Global unemployment trends and projections

This continued growth of unemployment through a new wave of prosperity is considered by commentators such as Brynjolfsson, McAfee and Manning to reflect many employers' decisions to replace human employees with more cost-effective automated robotic solutions, rather than the current economic climate (Brynjolfsson and McAfee, 2011; MGI, 2011; Manning, 2013). This is a contributing factor in ensuring that unemployment continues to rise even in a more stable economic setting. Coupled with the eventuality of the 'singularity' (the moment in time when robots become self-aware and self-learning), it has led to doomsday predictions from leading economists and global IT figures envisaging job losses of 47% or more by 2034 (Frey and Osborne, 2013).

Edward E. Leamer, director of the UCLA Anderson Forecast, states:

If you have nothing to offer the job market that cannot be supplied better and cheaper by robots, far-away foreigners, recent immigrants or microprocessors, expect it to be exceedingly difficult to find the job to which you aspire, and plan on doing low-wage service work at the end of a long and painful road of diminished aspirations, no matter what your diploma may suggest (Semuls, 2010).

However, previous technological leaps have shown such statements to be inaccurate. One example may be seen in retail employment in the US, where 10% of the country's population worked in retail in 1995. Since then, the Internet has completely changed the face of retail and the manner in which purchasing is done, yet by 2012 the percentage of the population working in retail had increased to 12.7% (Bureau of Labor Statistics). This development demonstrates the capacity of technology to create jobs rather than destroy them.

1.2 Research Question

The primary research question posed by this dissertation is:

Will the drive towards advanced automation and the increased sophistication of artificial intelligence eventually lead to technological unemployment?

1.3 Value of Research

Employment and work-status are crucial aspects of social identity, situating people's role and purpose within the existential framework of life. While work enables us to secure basic necessities such as food, accommodation and clothing, its inherent value has a deeper historical and philosophical significance. We gain stimulation from work, and through it experience a wide range of emotions: exhilaration, exuberance, joy, regret, anger and despair (Holbeche and Springett, 2004). The fabric of society would change irrevocably should employment no longer exist. Artificial intelligence would dominate every aspect of civilisation, leaving us with little engagement or sense of place. However, great technological leaps have been encountered before.

The communications-energy mix of the First Industrial Revolution involved the printing press, the rise of coal and the steam engine. The Second Industrial Revolution was fuelled by telegraphy, telephony and oil. Now, in the midst of the Third Industrial Revolution, economist and activist Jeremy Rifkin argues that the next transformation will be based on the interaction between the Internet and large-scale renewable energy production; it too will entail not only economic changes, but profound social and philosophical revolutions (Rifkin, 2011). Nevertheless, none of the previous changes resulted in widespread job loss: job creation simply evolved with the changing environment towards more creative work. By furthering our understanding of previous technological leaps, this thesis provides a more comprehensive assessment of the future impact of the singularity on employment.

1.4 Scope and Boundaries of this Study

This study investigates whether automation is driving unemployment, and whether employment will continue to grow as the sophistication of artificial intelligence increases. The primary focus is the predicted timeline of the singularity and its ramifications for employment, based on previous technological breakthroughs.

1.5 Chapter Roadmap

- Chapter 1

This introductory chapter provides a summary of the context and relevance of the research question.

- Chapter 2

The literature review examines: current scholarship related to the singularity, its timelines, and its predicted impact on employment; previous technological leaps and their influence on employment; the growth of automation; and the current global destruction of jobs through the growth of artificial intelligence.

- Chapter 3

Here, an overview of the various methodological approaches available for this study is presented. The reasons behind the chosen methodology for this research project are explained, and the limitations and strengths of the research project are also discussed.

- Chapter 4

This chapter describes how the data collected in the study was analysed and interpreted in a rigorous manner. The resultant findings determine the impact on employment of the growth of automation and the singularity.

- Chapter 5

The concluding chapter puts forward the findings and conclusions derived from the research, demonstrating that the research question has been answered fully. The strengths and limitations of the research are also highlighted, and areas of further investigation are proposed.


2.1 Introduction

This chapter outlines the body of extant literature relating to the growth of artificial intelligence and its continuing place of importance within modern day society. Fundamental to this field of research is the eventuality of the ‘singularity’, and its inherent impact on employment. In order to provide a comprehensive assessment of the writings on this subject, it is necessary to first define ‘the singularity’. A series of hypotheses regarding the projected timeline for the singularity will follow, leading to a discussion on the impact of automation on employment, and the role of the cloud within this development. The information gleaned from this review demonstrates the need for further research in this area.

2.2 Sources

The library of Trinity College, Dublin has served as the primary source of research for this dissertation. A variety of databases have been consulted throughout. These include: Academic Search Complete; ISI Web of Knowledge; JSTOR; and Stella Search. Further papers were identified through Google Scholar, as well as web-based online articles sourced via Internet search engines.

2.3 The Cloud and Advanced Automation

We are now entering a new age that could prove the 'Luddite fallacy' incorrect: the age of cloud computing. Cloud computing is one of the most important technological advances of the last decade, and has the potential to revolutionise the delivery of IT services to consumers (Brynjolfsson and Jordan, 2010). It has changed the face of global business, data storage, information sharing and how we work. It is no longer necessary for smaller businesses to maintain their own server environment in-house. Marston et al. propose that the use of the cloud 'dramatically reduces the upfront costs of computing that deter many organisations from deploying many cutting-edge IT services' (Marston et al., 2011).

One major driver for the adoption of cloud computing in SMEs is the reduction in costs and operational overheads associated with the support and maintenance of internal company-owned IT infrastructure and services (Chen and Lin, 2012; Sultan, 2011; ENISA, 2009). However, by removing in-house IT services and effectively outsourcing their internal IT departments to 'the cloud', companies either no longer employ, or require a drastically reduced workforce for, existing IT work within their organisations. This is one of the most fundamentally overlooked points in the promotion of cloud computing within organisations: it is often promoted by the very IT staff that cloud adoption will potentially replace. If a business no longer has work for its internal IT staff, why would it retain the staff who implemented the solution?

According to the EU's Small Business Act (SBA), Ireland's SME sector lost 15% of its total workforce between 2007 and 2010, a period which coincides with 79% of all Irish SMEs now using some form of cloud computing services in their organisation; this jumps to 84% if email is included (Carcary et al., 2014). The challenge for IT leaders is to reduce their spending on sustaining the business, and invest more in innovative ways that drive business growth and support strategic business goals (Gartner, 2006).

The cloud is the enabler for industry's drive, in convergence with advanced automation, towards increased automation of both high- and low-skilled jobs. Advanced automation in IT is also leading to the deployment of production management functions in off-site data centres for use by multiple manufacturing sites. Examples include Starbucks, who are doubling the number of Clover coffee-brewing machines in operation, which connect to the cloud and track customer preferences. This technology allows recipes to be digitally updated, and helps staff remotely monitor a coffee maker's performance (Kharif, 2013). Starbucks also plan to use connected fridges that indicate when a carton of milk has spoiled, for real-time stock control (Kharif, 2013).

2.3.1 Data Centre and the Cloud

A data centre is defined by Gartner as:

The department in an enterprise that houses and maintains back-end information technology (IT) systems and data stores — its mainframes, servers and databases. In the days of large, centralized IT operations, this department and all the systems resided in one physical place, hence the name data center (Gartner, 2013).

Data centres play a key role in the delivery of cloud services. Cloud service providers use data centres to house cloud services and cloud-based resources. For cloud-hosting purposes, vendors also often own multiple data centres in several geographic locations to safeguard data availability during outages and other data centre failures. Providers such as Facebook, Google and Microsoft continually build new data centres geographically closer to their end users to increase speed; this physical proximity ensures real-time usability, removing any need for physical infrastructure in the workplace. 'As cloud becomes a significant enabler, enterprises are getting out of the datacentre business in droves,' says Tim Crawford, CIO strategic advisor at AVOA. 'But datacentres are not dying; Cloud is just enabling more enterprises to use third-party IT services' (Venkatraman, 2014).

2.4 Technological Unemployment

The convergence of cloud and advanced automation is believed to be a contributor to 'technological unemployment', the term used to describe the displacement of labour from automated to non-automated industries. In 2014 Pew Research canvassed 1,896 technology professionals and economists, and found a split of opinion: 48% of respondents believed that new technologies would displace more jobs than they would create by the year 2025, while 52% maintained that they would not do so (Smith and Anderson, 2014).

According to Say's law, technological unemployment is only ever temporary. Named after French economist Jean-Baptiste Say (1767-1832), the law argues that supply creates its own demand. This mechanism exists in the economic system, and guarantees the automatic reabsorption of any technologically-displaced labour (Luas, 1978). Gregory Woirol considers four major theoretical arguments that technological change could lead to a net rise in unemployment (Woirol, 1996):

- That there may be a lack of markets for the increased output.
- That there may be a lack of capital to employ released labour.
- That the rise in purchasing power from technological change hypothesised by Say's law compensation theory would not occur.
- That technological change leads to a constantly decreasing ratio of circulating to fixed capital.

2.5 Impact of Automation on Employment

Frey and Osborne postulate that '47% of all jobs could be automated in the next 20 years' (Frey and Osborne, 2013). 'This wave of technological disruption to the job market has only just started' (The Economist, 2014). Andrew McAfee, of MIT, observes that 'automation in the manufacturing industry is a net job destroyer' (Zeilzer, 2013). From statements such as these, the subject of current technological growth appears to invite an air of caution in mainstream media, with doomsday headlines such as 'The Machines are Going to Steal our Jobs' (Worstall, 2014) and 'Robots on the Rise: Is your Job at Risk?' (Goodkind, 2013). However, in order to understand the future effects of automation and the rise of the singularity, it is necessary to look back at the influence of previous advancements on both employment opportunities and job growth. From the Luddites onwards, technological change has posed a continuous challenge to employment.

The Luddites were a group of English textile workers engaged in the violent breaking up of machines from 1811 onwards (Palmer, 1998). Such vandalism was premised on the fear of new machines taking their jobs and livelihoods. Against the backdrop of the economic hardship following the Napoleonic wars, new automated looms meant clothing could be made with fewer lower-skilled workers, and as the new machines were more productive, some workers lost their relatively highly paid jobs as a result. 'Luddite', named after the mythical English folk hero Ned 'King' Ludd, is a term used (usually pejoratively) to describe people who oppose the introduction of new technology, while the 'Luddite fallacy' is the simple observation that new technology does not lead to higher overall unemployment in the economy (Ford, 2009).

It is argued that new technology does not destroy jobs; it only changes the composition of jobs in the economy (Ford, 2009). However, with the continued sophistication and growth of artificial intelligence, and the progressive reliance we place on automation, we could now be at the tipping point of the Luddite fallacy. Martin Ford asserts that 'if we automate even more, the economy cannot absorb the newly unemployed due to automation in other sectors, and hence it would reduce the purchasing power of the people' (Ford, 2009). Whether the claim that new technology does not destroy jobs continues to hold remains to be seen. Since 2001, with the aid of computers, telecommunication advances and ever more efficient plant operations, U.S. manufacturing productivity, or the amount of goods or services a worker produces in an hour, has increased by 24% (Huether, 2006).

In 1924 the American Congress passed a resolution asking the Secretary of Labor to draw up a list of labour-saving devices (Bureau of Labor Statistics, 1924-26), together with a parallel estimate of the number of people who had been left unemployed as a result of their use. The general consensus was that the central result of the introduction of machinery was a reduction in total labour requirements, and from this it was assumed that all saved labour would translate into an equal amount of unemployment. This line of reasoning did not match reality, however, as productivity in the United States more than quadrupled from 1870 to 2000. The study also fails to take into account displaced labour, as it focuses more on job growth during this particular epoch.

The same labour input produced over four times as much value in goods and services, yet employment increased more than six-fold, from 10 million to over 65 million. This can be accounted for by the increase in population during this period; however, such an explanation overlooks the fact that population increased six-fold while productivity increased only four-fold, even with the assistance of improved and more advanced technology.

It was also shown that most of the jobs held by workers in industries such as car manufacturing, which required the building of roads and thus created additional US jobs, would not have existed were it not for advanced technology (Buckingham, 1962). Germany stands as a model example of the benefits of automation through robotics. From 1998 onwards, the German government invested in advanced manufacturing and sophisticated automation. As a result, its trade deficit was reversed into a large surplus following the introduction of over a million robots in the manufacturing industry, leading to the creation of close to three million jobs (Gorle, 2011).

One of the most comprehensive reports to date on the impact of automation on employment is 'The Impacts of Automation on Employment, 1963-2000, Final Report' (Leontief and Duchin, 1984). This effort resulted in a detailed model of the probable effects of automation on the demand for labour services in fifty-three occupations. According to this model, the intensive use of automation over the following twenty years would make it possible to conserve about 10% of the labour that would otherwise have been required to produce the same bills of goods. Consequently, the direct displacement of production workers by specific items of automated equipment would, at least in the initial stages, be offset by increased investment demand for capital goods, ensuring that production workers could be expected to maintain their share of the labour force.

Leontief and Duchin explain that:

The impacts are specific to different types of work and will involve a significant increase in professionals as a proportion of the labour force and a steep decline in the relative number of clerical workers. Production workers can be expected to maintain their share of the labour force. Computations that assume the full utilisation of the projected future labour force suggest that personal and government consumption will be able to increase about 2% a year in real terms through the 1980s and between 0.5 and 1.1% through the 1990s due to the adoption of computer-based automation (in the absence of other structural changes). Whether or not the smooth transition from the old to the newer technology can actually be realised will depend to a large extent on whether the necessary changes in the skill structure of the labour force and its distribution between different sectors of the economy (and geographic locations) can be effectively carried out. This study projects the direction and magnitude of these changes in the structure of the labour force and of the educational and training efforts needed to carry them out (Leontief and Duchin, 1984).

The findings of this report have since been proven incorrect, with the lost manufacturing work being removed entirely rather than displaced. The increased demand for capital goods has been offset by cheaper credit (Federal Reserve, 2001), allowing even low-income families to drive consumer demand; this has been cited as a primary reason for the economic crash in 2007 (International Monetary Fund Report, 2009).

Nonetheless, the historical record demonstrates that more jobs are created on average even as automation increases, and that the labour force simply moves towards a better educated and more creative workforce. Brynjolfsson and McAfee reinforce this idea, arguing that 'acquiring an excellent education is the best way to not be left behind as technology races ahead; motivated students and modern technologies are a formidable combination' (Brynjolfsson and McAfee, 2011). This is illustrated in Figure 2, which reveals the increased importance of advanced education in Ireland, and the clear upward growth curve of people in higher education from the economic collapse in 2008 to 2013 (statistics from the International Labour Organisation).

illustration not visible in this excerpt

FIGURE 2: Adult Population with Advanced Education in Ireland (International Labour Organisation 2014)

The most recent ILO estimates show that the world in 2008 had a labour force of 3.1 billion, nearly 2.1 billion more than that of 1980. The growth of the world's labour force has been decelerating since 1980: while the average annual growth rate was 2.1% over 1980-1990, it dropped to 1.6% during 1990-2000, and then to 1.5% from 2000-2007 (Ghose et al., 2008). This suggests that even though the population is expanding, the rate of labour force growth is continuing to drop, as advanced automation becomes more commonplace in supplanting not only minimal physical tasks but more advanced, skilled intellectual labour as well.

David Autor, an economist at MIT who has studied the connections between jobs and technology, disagrees with the possibility that technology could account for such an abrupt change in total employment. Whilst acknowledging 'a great sag in employment beginning in 2000', Autor claims that 'something did change', but 'no one knows the cause'. Computer technologies are undoubtedly changing the types of jobs currently available, and those changes 'are not always for the good'. Autor notes that since the 1980s, computers have increasingly taken over such tasks as bookkeeping, clerical work, and repetitive production jobs in manufacturing, all of which typically provided middle-class pay. At the same time, higher-paying jobs requiring creativity and problem-solving skills, often aided by computers, have proliferated (Rotman, 2013). This has been confirmed by a 2014 joint report from Deloitte, one of the Big Four accountancy firms, and the University of Oxford (Tovey, 2014), as seen in Figure 3:

illustration not visible in this excerpt

FIGURE 3: High skilled jobs currently at risk from advanced automation (Deloitte 2014)

McAfee (2011) concurs with these statistics, stating that ‘new technologies are encroaching into human skills in a way that is completely unprecedented’. Many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected. ‘The top and bottom are clearly getting farther apart.’ While technology might be only one factor, says McAfee, it has been an ‘underappreciated’ one, and it is likely to become increasingly significant.

According to the latest figures from the U.S. Bureau of Labor Statistics (BLS), IT jobs are expected to grow by 22% by 2020 due to increased demand for software developers in the health IT and mobile network sectors, as shown in Figure 4:

illustration not visible in this excerpt

FIGURE 4: Job growth IT sector 2010 – 2020, U.S. Bureau of Labor Statistics

Ron Hira, an associate professor of public policy at the Rochester Institute of Technology, argues that BLS IT forecasts have been wildly wrong in the past. He claims that 'volatile occupations tend to be subject to bad forecasts, and it's clear that computer occupation employment levels are very hard to forecast' (Thibodeau, 2012); 'the forecasts are biased toward the most recent history in the occupation'. Hira places more stock in growth projections for predictable professions, citing primary school teachers as an example: there, he conjectures, the BLS can estimate the number of births during the decade and factor in teacher-student ratios to reach an estimate of employment growth.

The BLS has 'no methodology to estimate technological disruptions that can increase demand for computer occupations,' states Hira, listing the rapid increases in the use of the Internet and ERP systems as examples of IT disruptions that cannot be measured (Thibodeau, 2012). David Foote, CEO of Foote Partners, an IT labour research firm, agrees with Hira: in light of 'current market volatility and uncertainty, which is unprecedented,' anyone who makes a 10-year IT employment projection 'is kidding themselves' (Thibodeau, 2012). While there is ongoing disagreement about the driving forces behind persistently high unemployment rates, a number of scholars have pointed to computer-controlled equipment as a possible explanation for recent jobless growth (Brynjolfsson and McAfee, 2011).

2.5.1 Outsourcing Vs. Automation

Traditionally, and pre-automation, the fear of job loss centred on outsourcing to less affluent but well educated workforces in India, Poland and China. However, the growth of artificial intelligence and the increasing sophistication of automation are now becoming a major disruption even in low-cost economies, with Yahoo laying off 400 employees in its Indian office (Lunden, 2014), IBM cutting 2,000 jobs (Rai, 2014), and Cisco also reducing its headcount in these regions. With labour still cheap in these countries, and with these companies not moving the employment to other countries, it becomes obvious that automation is not just impacting high-cost first-world countries, but is now having a real effect on low-cost societies. A 2012 report by HFS Research suggests that 'robotic automation has the potential to wreak some dramatic, painful changes on the Indian outsourcers who are the current bulwark of the industry' (Slaby, 2012).

Discussions on the topic of automation are becoming much more common in the technology industry. According to Mark Muro, an economist at the Brookings Institution, technology jobs in Asia have the potential to be on the chopping block: ‘technology is another platform putting another pressure on developing countries’ (Neal, 2014). He proposes that ‘cheap labour isn't as cheap as it was’, and that companies are finding automated replacements to be ‘good enough’ (Neal, 2014). On the other hand, not every IT or business process is suitable for automation. James Slaby of HFS Research remarks that ‘a business process requiring human perception, or nuanced human judgement based on years of experience is less suitable’, ‘which, of course, is a good thing for those of us humans who still want to contribute valuable work to our employers’ (Slaby, 2012).

A recent Oxfam report shows that just 85 people now command as much wealth as the poorest half of the world (Oxfam, 2014). This concentration of wealth is being accelerated by the unprecedented gains of a privileged few: by using automation to reduce employment while still increasing output, the owners and facilitators of advanced automation are able to grow their wealth more rapidly than at any previous point in history. The prosperity unleashed by the digital revolution has accrued overwhelmingly to the owners of capital and the highest-skilled workers (Ford, 2009). Nonetheless, even the employment of high-skilled workers will be at risk once the singularity arrives.

Over the past three decades, labour’s share of output has shrunk globally from 64% to 59%. Meanwhile, the share of income going to the top 1% in America has risen from 9% in the 1970s to 22% today (Sharon, 2014). Unemployment has reached alarming levels in much of the first world, and not just for cyclical reasons or because of the recent recession. In 2000, 65% of working-age Americans were in employment; since then the proportion has fallen, during good years as well as bad, to the current level of 59%. ‘The wealth of the 1% richest people in the world amounts to $110tn (£60.88tn), or 65 times as much as the poorest half of the world’ (Oxfam, 2014). Both they and their corporations are building robots whose net effect will be to concentrate even more of that capital in their hands (Frey et al.).

Even though much of the research so far has focused on the possibility that technology could increasingly replace human labour, displacing jobs and creating unemployment, the historical progression of technology makes clear that this is not always the case. Such thinking is textbook Luddism, resting on the ‘lump-of-labour’ fallacy: the idea that there is a fixed amount of work to be done. The counter-argument to a finite supply of work comes from economist Milton Friedman, who claims that ‘human wants and needs are infinite, which means there is always more to do’ (Friedman, 2004). The argument is simple: human beings by nature have infinite desires, so as technology advances and satisfies our current desires, we simply move on to new wants and needs (Friedman, 2004). While it is certainly true that technological change displaces current work and jobs, it is equally true, and important, that each such change also produces a step-function increase in consumer standards of living. As consumers, we never resist technological change that provides us with better products and services, even at the cost of jobs (Higbie, 2014).

2.6 What is the Singularity?

Famed mathematician John von Neumann (1903-1957) first coined the term ‘the singularity’ during a conversation with Stanislaw Ulam (1909-1984), a renowned mathematician in his own right. The term describes the point in time at which, von Neumann believed, technology would become self-aware and exceed human intellectual capacity and control. Two of the most noted proponents of this concept are Ray Kurzweil, currently director of engineering at Google, and Vernor Vinge, a former professor of mathematics at San Diego State University. Kurzweil believes the singularity will occur around 2045 (Kurzweil, 2005), whereas Vinge proposes an earlier timeframe of 2023, by which point ‘we will have the technological means to create superhuman intelligence’ (Vinge, 1993). According to Stephen Hawking, ‘computers are likely to overtake humans in intelligence at some point in the next hundred years’ (ABC, 2006). Advances in the computer software and hardware necessary for artificial intelligence, along with research in genetics and nanotechnology, are leading towards a technological singularity in which the intelligence of machines will outperform human intelligence. The commonly held belief is therefore not a matter of ‘if’ it will happen, but rather of ‘when’.

According to István S. N. Berkeley, the field of artificial intelligence is the ‘study of man-made machines or computers which exhibit some form of human intelligence’ (Berkeley, 1997). The world became more aware of the singularity and of the true growth of artificial intelligence in 1997, when IBM’s Deep Blue beat world champion Garry Kasparov, considered by many the greatest chess player of all time (Barden, 2008; IBM, 1997). With the capacity to process 200 million positions per second, Deep Blue made it apparent that computers matching, and eventually superseding, human ability both mentally and physically was possible.

Estimates of general rates of technological progress are always imprecise, but it is fair to say that progress came about more slowly in the past. The historian Henry Brooks Adams (1838-1918) measured technological progress by the power generated from coal, and estimated that power output doubled every ten years between 1840 and 1900, a compounded rate of progress of about 7% per year (Adams, 1946). By contrast, progress today comes at a far more rapid pace than Adams could ever have predicted. Information storage density in computer memory increased by a factor of five million between 1960 and 2003, at times progressing at a rate of 60% per year. Over the same period, and in line with Moore’s Law, semiconductor technology has been progressing at roughly 40% per year for more than fifty years. These rates of progress are the catalyst for the creation of intelligent machines, from robots to automobiles to drones, which will soon dominate the global economy and, in the process, drive down the intrinsic value of human labour over the coming decades.
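The annual rates quoted above follow from the total growth factors by simple compounding. The short sketch below (an illustration, not part of the thesis) recovers the compound annual growth rate implied by each historical figure:

```python
# Compound annual growth rate (CAGR) implied by a total growth factor
# over a period. The historical figures are those quoted in the text;
# the helper itself is purely illustrative.

def cagr(growth_factor: float, years: float) -> float:
    """Annual rate implied by `growth_factor` total growth over `years`."""
    return growth_factor ** (1 / years) - 1

# Adams: coal-power output doubling every ten years, 1840-1900
print(f"Coal power:      {cagr(2, 10):.1%} per year")

# Storage density: a factor of five million between 1960 and 2003
print(f"Storage density: {cagr(5_000_000, 2003 - 1960):.1%} per year")
```

Doubling every decade works out to about 7.2% per year, matching Adams's estimate, while a five-million-fold increase over 43 years averages roughly 43% per year, consistent with peaks of 60% in some years.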

Research fellow Eliezer Yudkowsky, of the Singularity Institute for Artificial Intelligence, California, views the singularity not as a single definable process but through three distinct schools of thought. The first, ‘accelerating change’, holds that existing knowledge allows us to observe and extrapolate the current rate of technological change with considerable precision. The second, ‘event horizon’, centres on Vernor Vinge’s superhuman-intelligence breakthrough, beyond which the future would become impossible to forecast. The third, ‘intelligence explosion’, refers to a situation in which humans in partnership with machines increase the speed of human intelligence exponentially, without overtaking human imagination and control (Yudkowsky, 2007).

2.7 Projected Timeline for the Singularity

A number of prominent researchers in this field (Good, 1965; Solomonoff, 1985; Vinge, 1993; Moravec, 1999; Kurzweil, 2005; Sandberg, 2009; Baum and Goertzel, 2010; Chalmers, 2010) have argued that at some point in this century humanity will develop artificial intelligence programs capable of substituting for human performance in almost every field, including artificial intelligence research itself. This will greatly accelerate technological progress as AIs design their successors. Dr Stuart Armstrong (Oxford University) conducted a study of Artificial General Intelligence (AGI) predictions as part of the 2012 Singularity Summit. He found the median prediction to be 2040, and concluded: ‘It's not fully formalised, but my current 80% estimate is something like five to one hundred years’ (Armstrong, 2012). Singularity theory has its roots, and its projected timeline, firmly grounded in Moore’s Law: the observation that steady technological improvements in miniaturisation lead to a doubling of the density of transistors on new integrated circuits every 18 months (Moore, 1965).
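The 18-month doubling period above compounds quickly, which is why singularity timelines lean on it so heavily. A minimal sketch (illustrative only; the nominal starting density of 1.0 is an assumption) shows how a decade of doublings plays out:

```python
# Moore's Law as stated above: transistor density doubles every 18 months.
# Projects the growth factor after a given number of months.

def moore_density(base: float, months: float, doubling_period: float = 18) -> float:
    """Density after `months`, assuming one doubling per `doubling_period` months."""
    return base * 2 ** (months / doubling_period)

# From a nominal density of 1.0, ten years (120 months) of 18-month
# doublings yields roughly a 100-fold increase.
print(moore_density(1.0, 120))  # about 101.6
```

At that pace, a single decade delivers two orders of magnitude of improvement, which is the exponential engine behind the hardware-driven singularity projections cited in this section.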

In 1950, the mathematician Alan Turing theorised about machines becoming more intelligent than humans and developed what became known as the ‘Turing Test’. In this test, one person sits at a computer terminal; at the other end sit another person and an AI program. The operator at the terminal, unaware of which respondent is which, asks a series of questions in order to ascertain which is the AI and which is the human. Turing argued that if the operator were unable to differentiate between them, the AI would be a successful equivalent to human intelligence (Turing, 1950). As of January 2015, no AI has passed the Turing Test.
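The test procedure described above can be sketched as a toy simulation. The two respondents below are trivial stand-ins invented for illustration (no real AI is involved): when a machine's answers are indistinguishable from a human's, the interrogator can do no better than guess, which is Turing's criterion for passing.

```python
# A toy rendition of the imitation game: an interrogator questions two
# hidden respondents and must decide which one is the machine.
import random

def human_respondent(question: str) -> str:
    return "I'd have to think about that."

def machine_respondent(question: str) -> str:
    # A hypothetical machine that imitates the human answer perfectly.
    return "I'd have to think about that."

def run_round(question: str) -> bool:
    """Return True if the interrogator correctly identifies the machine."""
    respondents = [("human", human_respondent), ("machine", machine_respondent)]
    random.shuffle(respondents)  # hide which terminal is which
    answers = [(label, ask(question)) for label, ask in respondents]
    # The answers are identical, so the interrogator can only guess.
    guess = random.choice([label for label, _ in answers])
    return guess == "machine"

# Over many rounds, a perfect imitator drives identification accuracy
# toward chance (50%): the machine passes the test.
wins = sum(run_round("What is your favourite poem?") for _ in range(10_000))
print(wins / 10_000)  # close to 0.5
```

A real administration of the test replaces the canned respondents with a live conversation, but the structure, hidden identities, free-form questioning, and a pass threshold at chance-level discrimination, is the same.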


The Impact of Advanced Automation and the Cloud on Employment
Trinity College Dublin - The University of Dublin
Aidan Mc Carron (Author), 2015, The Impact of Advanced Automation and the Cloud on Employment, Munich, GRIN Verlag.