The Prospects of Intelligent Technologies for Strategic Decision Making: A Theoretical Thesis


Master's Thesis, 2016

56 pages, grade: 12


Excerpt


TABLE OF CONTENTS

ABSTRACT

LIST OF TABLES

INTRODUCTION
Problem Statement
Structure of Paper

STRATEGIC DECISION MAKING
Introduction to Strategy Process Research and Process Models
The Nature of Strategic Decision Making
Successful Strategic Decision Making
Managerial Cognition and the Role of Management
Heuristics and Biases

INTELLIGENT TECHNOLOGIES
The Emergence of Intelligent Computing and Artificial Intelligence
Introducing IBM Watson
Man-Computer Symbiosis
Companion Systems to Foster the Future Man-Computer Symbiosis

PROSPECTS FOR STRATEGIC DECISION MAKING
The Application of Intelligent Technologies
Managerial Use of Intelligent Technologies
Possible Impact on the Managerial Role
Addressing Limitations and Future Research Possibilities

CONCLUSION

BIBLIOGRAPHY

ABSTRACT

The man-computer symbiosis theory favors a future in which computers empower humans. Following this perspective, the prospects of an intellectual partnership between intelligent systems and managers in decision making are evaluated. The review of strategic decision making research reveals that managers are restricted by their individual cognitive capacities and can at most approximate rationality, whereas so-called intelligent systems can increasingly simulate perceptual and cognitive skills. The emergence of intelligent systems is assumed to affect how strategic decisions are made in the future. This thesis proposes that intelligent systems offer managers the possibility to transcend their cognitive frames and partly overcome their cognitive limits. Furthermore, based on that cooperation, managers become able to make better-informed and timelier decisions.

LIST OF TABLES

Table 1: Literature Review on Successful Strategic Decision Making

Table 2: Selected Heuristics and Biases

Table 3: Central Attributes of IBM Watson

INTRODUCTION

The dominant goal of artificial intelligence has almost always been to create machines superior at making decisions (Russell, 2015). In 1997 IBM's Deep Blue computer vanquished the world champion in chess. In 2011 IBM's supercomputer Watson defeated the record champions in the quiz show Jeopardy. In 2016 Google's algorithm AlphaGo subdued the reigning champion in the Chinese board game Go. The recent technological progress in artificial intelligence has been extensive and has exceeded the expectations of many professionals and scientists. Brynjolfsson & McAfee (2016) found comfort in the fact that they were not the only researchers who failed to anticipate how quickly substantial progress in artificial intelligence would occur. Go is considered to be the most complex game in the world (Marr, 2016). Yet, Brynjolfsson & McAfee (2016: 90) state that the recent achievements "[…] are not the crowning achievements of the computer era. They're the warm-up acts", smoothing the way for artificial intelligence to solve novel challenges and to scale up its field of application, potentially affecting our daily routines, the economy, or even management. "People have been making machines more intelligent" (Salomon, Perkins, & Globerson, 1991: 2), but can machines also make managers smarter? Mintzberg's (1994) answer is a definite no, since in his view the promises of artificial intelligence at the strategy level were never fulfilled. He attributed this to the shortcomings of established formal systems, which, lacking any ability to think or learn, failed to support strategists in dealing with the information overload confronting the human brain. Now, in an age of technologies claiming to be able to think and learn, and beating human contestants, Mintzberg's claim seems to be falsifiable, first and foremost because management itself appears to be on the verge of transformation. There are growing critical voices arguing that managers could actually be replaced by software or computer systems (Fidler, 2015; Guerrini, 2015). As the practicing manager is the ultimate consumer of strategy process research (Fredrickson, 1983), the prospects of intelligent technologies for better decision making have to be explored. According to Schwenk (1988), strategy process research provides the foundation for improving strategic decision making. Understanding how managers make decisions, and what prevents them from making unboundedly rational decisions, identifies the potential for supporting managers in making better decisions. Artificial intelligence might be a means to overcome the human constraints in strategic decision making. Mainly with the help of IBM Watson as an artificial intelligence use case, the effects of technology on management are examined in terms of prospects, the changing role of management, potential intellectual partnerships, and the overcoming of tradeoffs.

Problem Statement

Nearly two decades have passed since Mintzberg (1994) criticized the promises of artificial intelligence for failing to materialize at the strategy level. Since then, promising progress has been made both in research and by technology ventures, raising once again the question of whether modern intelligent technologies meet the expectations of the strategic management field.

Research Question: What are the prospects of intelligent technologies for managers in strategic decision making?

On the one hand, the bounded rationality of managers might lead to systematic decision biases (Schwenk, 1985), representing a departure from optimality in decision making. On the other hand, increasingly intelligent technologies are outperforming human champions in chess, quiz shows and board games. It is for these reasons that intelligent technologies are assumed to provide an opportunity for managers to make better decisions. In addition, intelligent technologies are assumed to have a disruptive influence on the field of strategic management and to change the role of management.

Structure of Paper

Not all organizations have formal planning systems, but what they have in common is that they all make strategic decisions (Fredrickson, 1983). This thesis therefore follows a decision-based view of strategy, emphasizing the role of management as well as the cognitive processes and limitations of the individual manager in order to evaluate the prospects of intelligent technologies and the functional requirements for a supporting system. The thesis consists of three main sections. The first two sections are dedicated to strategy process research and intelligent technologies respectively. The third section brings the insights from the previous sections together and discusses the prospects of intelligent technologies for strategic decision making. In the first section the results of strategy process research concerning strategic decision making are reviewed. After defining what strategic decisions are, successful decision making is discussed, focusing on two process dimensions: behavioral and cognitive. The roles of the two dimensions are analyzed and potential tradeoffs pointed out. The major focus, however, lies on the cognitive process dimension. The role of cognition is assumed to be crucial for understanding the role of the individual manager in strategic decision making, and the limitations that keep a manager from pure rationality are presented. The second section introduces the domain of intelligent technologies, which claim to be able to think or learn. After a short account of why this development is happening now, the foundations of computing are briefly described. Under the notion of intelligent technologies, the concept and research domain of artificial intelligence and subcomponents such as machine learning are defined and introduced. On the basis of IBM's supercomputer Watson, the specifics and capabilities of intelligent technologies are discussed; Watson is one of the intelligent systems most heavily marketed, commercialized, researched and documented in the news. In the context of theories about the relationship between humans and computers, the potential prospects and drawbacks are discussed and subsequently explored in detail with the help of a specific Watson use case. Finally, the emerging paradigm of companion systems and their superiority over existing executive systems is presented in order to understand the future of the interaction between humans and technology. In the third section the insights from the previous chapters are combined to point out the relevance and prospects of intelligent technologies for strategic management. The benefits for managers in decision making are proposed, followed by propositions about the development and usage of intelligent systems in the future. Ultimately, the potential impact on the changing role of management is projected. The proposed perspective has not been tested in a systematic or empirical manner.

STRATEGIC DECISION MAKING

Introduction to Strategy Process Research and Process Models

It is commonly asserted that the field of strategic management is fragmented and lacks a coherent identity (Nag, Hambrick, & Chen, 2007). The leading differentiation within strategic management is between strategy content and strategy process (Andrews, 1971; Ansoff, 1965; Chandler, 1962). The usefulness of this differentiation has been doubted and criticized by several researchers for separating interrelated elements in explaining firm performance (Huff & Reger, 1987; Pettigrew, 1992). Nevertheless, the majority of publications in strategic management adhere to the classical differentiation between content and process research (Lechner & Müller-Stewens, 2000). Strategy content research focuses primarily on the subject matter of the strategic decisions of a corporation or one of its business units (Fahey & Christensen, 1986). It can, for example, be concerned with generic strategies, competitive strategies or the relationships between a corporation and its environment (Herbert & Deresky, 1987; McDougall, Covin, Robinson, & Herron, 1994). Strategy process research, in contrast, focuses primarily on strategically relevant events and procedures within a corporate unit. The underlying question is whether and how the strategies of a corporation are constituted (Lechner & Müller-Stewens, 2000).

The classical process model divides the strategy process into two subsequent phases: formulation and implementation (Andrews, 1971). Formulation is about strategic decision making, while implementation focuses on transferring decisions into actions. The strategy process is conceptualized as a sequential series of clearly defined phases in which strategy making has to be explicit (Tilles, 1963) and formulation occurs before implementation (Andrews, 1980). Strategy formation is therefore reduced to the decision making process, which is initiated and driven by top management (Andrews, 1971). Hence, the strategy process is prescriptive: there are detailed steps of what has to be done, such as the systematic formulation of specific strategies before implementation. Strategic planning is thus the logical consequence, as acknowledged by the majority of strategy process researchers. Hutzschenreuter & Kleindienst (2006) state in their strategy process research review that the topic of strategy formulation is dominated by studies about strategic planning. Yet, this view of the strategy process seems restrictive and inconsistent (Mintzberg & McHugh, 1985).

There are multiple theoretical perspectives on strategy formation through which the process can be considered or researched. Or, as Hutzschenreuter & Kleindienst (2006) point out, there is a proliferation of concepts and frameworks in which it seems easy to get lost. This proliferation is driven by strategy process researchers challenging the classical assumption of a strategy process that progresses in phases. According to these scholars this assumption cannot stand in practice (Burgelman, 1991; Hart, 1992; Mintzberg, 1978), which has resulted in the analysis and proposition of alternative process models of strategy formation. For example, Mintzberg (1978) differentiates between deliberate and emergent strategies; Noda & Bower (1996) conceptualize strategy formation as an iterated process of resource allocation; Burgelman (1991) acknowledges that strategic decisions are made incrementally and by numerous people; and Quinn's (1995) process pattern framework states that top management is no longer the driving force in strategy formation. As a consequence, there are multiple theoretical perspectives from different traditions which can be used for the analysis of strategy processes. Symptomatically, Mintzberg (1990) captured this multiplicity of thought in his framework of ten schools of strategy formation.

However, shaped by the predominant influence of the classical process model, the majority of research papers are based on four phases: agenda building, decision, implementation and control (Lechner & Müller-Stewens, 2000). On the one hand, this represents a simplification of the strategy process, in which the process follows a specified sequence between decision and action and formulation always precedes implementation. On the other hand, it is based on the first strategy process model, which had a very strong influence on the overall strategic management field. The unit of analysis is the decision, and the driver of strategy is assumed to be the top management team. Alternative models, in contrast, emphasize the roles of strategy types or strategic initiatives.

This paper abstains from exhaustively discussing the different conceptualizations of the strategy process. Instead, the focus lies on strategy formation and the strategic decisions behind it, as strategy formation can be conceived as a decision making process (Fredrickson, 1983). The classical process model is therefore used as an orientation aid to look more closely at the phase of strategic decision making. The majority of research on decisions within strategic management seems to be driven by the classical model. For this paper it is assumed that strategic decisions are made by members of the management team. The characteristics of these strategic decisions and the role of the manager in decision making are of relevance.

The Nature of Strategic Decision Making

A decision is considered to be the emergent result of complex, multilevel information processing (Corner, Kinicki, & Keats, 1994). Decision making is a basic cognitive process in which, on the basis of given criteria, a favored course of action is selected from among multiple alternatives (Wang & Ruhe, 2007). This coincides with the synthetic general model of the decision making process proposed by Schwenk & Thomas (1983): problem recognition, problem formulation, alternatives generation, and alternatives selection.
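
To make the criteria-based selection described above concrete, the following minimal sketch scores a set of strategic alternatives against weighted decision criteria and selects the highest-scoring course of action. The alternatives, criteria and weights are purely hypothetical illustrations of the general model, not taken from the cited literature.

```python
# Minimal sketch of criteria-based selection from alternatives.
# All alternatives, criteria and weights are hypothetical illustrations.

criteria_weights = {"expected_return": 0.5, "strategic_fit": 0.3, "risk": -0.2}

alternatives = {
    "enter_new_market": {"expected_return": 0.8, "strategic_fit": 0.6, "risk": 0.7},
    "acquire_competitor": {"expected_return": 0.6, "strategic_fit": 0.9, "risk": 0.5},
    "expand_product_line": {"expected_return": 0.5, "strategic_fit": 0.7, "risk": 0.2},
}

def score(option: dict) -> float:
    """Weighted sum of criterion values; 'risk' carries a negative weight."""
    return sum(criteria_weights[c] * option[c] for c in criteria_weights)

# Alternatives selection: pick the course of action with the highest score.
best = max(alternatives, key=lambda name: score(alternatives[name]))
print(best, round(score(alternatives[best]), 3))
```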

Strategic decision processes are characterized by novelty, complexity, and open-endedness (Mintzberg, Raisinghani, & Theoret, 1976). One central feature of strategic decisions is their lack of structure (Mintzberg et al., 1976). The complexity of strategic problems causes this lack of structure in strategic decision making (Mason & Mitroff, 1981). Complex problems involve uncertainty and ambiguity for the decision makers. Hence, strategic decision making can be viewed as a special kind of decision making under uncertainty (Schwenk, 1984). Consequently, a strategic problem or issue cannot be clearly formulated, which makes it difficult to precisely define the underlying problems. Furthermore, it becomes problematic to determine the selection criteria by which the strategic decision is made. A strategic decision can be defined as significant in relation to the actions taken, the resources committed or the precedents set (Mintzberg et al., 1976). Four characteristics of a strategic decision are suggested by Eisenhardt (1989):

- Involves strategic positioning
- Has high stakes
- Involves many of the firm’s functions
- Is representative of the process to make major decisions

Eisenhardt & Zbaracki (1992) add that strategic decisions are those fundamental decisions which shape the course of the firm, decisions made by the top leaders of an organization that critically affect organizational health and survival. On the other hand, there are scholars claiming that a single decision can hardly be identified as the sole driver of organizational outcomes; long-term outcomes are rather the result of the entire history of an organization (Janczak, 2005; Stacey, 1995).

Nevertheless, strategic decisions are in general conceived as decisions that have the potential to influence an entire organization as well as its long-term performance. The three fundamentals of decision making are decision goals, alternative choices and selection criteria. Research on decision making is of interest in multiple disciplines, with each discipline emphasizing a special aspect of decision making (Wang & Ruhe, 2007). Decision theories can be categorized into two paradigms: those assuming a rational decision maker (normative) and those based on empirical observation and experimental studies of choice behavior (descriptive) (Gordon, 2008). Experimental studies showed that the human brain's cognitive processes share similar and recursive characteristics as well as mechanisms, even though the cognitive capacities of decision makers vary (Wang, 2003). As it is the strategist who makes decisions, individual-related attributes influence the decision making process and its characteristics. Therefore, the characteristics of strategic decision making and the role of the strategist in decision making are emphasized in the following. Before the role of cognition is reviewed, successful strategic decision making in the organizational context is discussed.

Successful Strategic Decision Making

There is broad agreement amongst many scholars that efficient decision making and effective implementation lead to successful firm performance (Bourgeois & Eisenhardt, 1988; Janis, 1989; Nutt, 1993). Efficient decision making refers to a process that proceeds evenly and in which the course of action is selected by managers in a prompt manner (Eisenhardt, 1989; Harrison, 1999; Mintzberg et al., 1976). An organization's opportunities for advancement and learning are limited by inefficient decision making, which also allows competitors to gain a first mover advantage (Eisenhardt, 1989). Efficient decision making, in contrast, facilitates adaptation to a dynamic organizational environment and enhances a firm's performance (Roberto, 2004). However, a single decision can hardly be identified as the sole driver of organizational outcomes; rather, the entire history of an organization accounts for long-term outcomes. Decisions are products of organizational and political processes (Janczak, 2005). This parallels earlier research suggesting that strategy formation is conceptually not limited to the chief executive or the top management team but is rather an organization-wide phenomenon (Hart, 1992). It is not the result of a deliberate choice but of the organizational process that generates the output. In strategic decision making, the battle of choice is won by power (Eisenhardt & Zbaracki, 1992). Therefore, efficient decision making on its own does not necessarily lead to successful organizational performance.

Effective implementation is the execution of the elected approach as well as the meeting of the objectives set in the decision process (Andrews, 1987; Dean & Sharfman, 1996). For the successful implementation of decisions, management needs to build general understanding and commitment, also called consensus (Andrews, 1987; Bourgeois, 1980). Consensus increases performance by strengthening an organization's capabilities to implement decisions (Andrews, 1987; Wooldridge & Floyd, 1990).

In conclusion, managers have to make decisions in an efficient way while at the same time building management consensus to facilitate implementation in order to strive for successful firm performance. There thus appear to be two process dimensions concerned with strategic decision making. On the one hand, the cognitive process dimension is concerned with how alternatives are generated and evaluated in order to achieve high levels of efficiency. On the other hand, the behavioral process dimension is concerned with how participation, conflict and politics can enhance the legitimacy of decision making in order to build high levels of consensus for effective decision implementation.

Eisenhardt & Zbaracki (1992) describe strategic decision making as a mixture of political and boundedly rational processes. The political processes, or behavioral process dimension, are broadly concerned with management consensus; the cognitive dimension is concerned with efficient decision making and is aligned with the synthetic general model of decision making. Research has shifted from what managers are like to how managers make decisions. For example, Dean & Sharfman (1996) interviewed senior managers to analyze process effectiveness, and their results indicate that decision processes have an influence on decision success. Dean & Sharfman (1996) imply that information collection and analysis lead to better decisions: in their study, managers who collected information and used analytical techniques made better decisions than those who did not. Eisenhardt (1989) and Stacey (1995) examined managers in dynamic environments, where they are faced with uncertainty and discontinuous change, to find out how they maintain rationality in decision making processes. The findings suggest that managers in successful firms used a variety of tactics, from consultation to alternative searching to frequent information evaluation cycles. A winning recipe could not be defined or made responsible for success; instead, factors having a negative impact were identified.

Table 1 summarizes the findings and indicates the level of debate and uncertainty regarding decision making in relation to the two process dimensions. Besides indicating the level of debate, table 1 also demonstrates that it is difficult for managers to accomplish both process efficiency and management consensus together. In general, there are contradictory findings and propositions.

Table 1: Literature Review on Successful Strategic Decision Making

Figure not included in this excerpt

Note: Author's creation

On the one hand, generating and evaluating an extensive set of alternatives for consideration in the decision making process (the breadth approach) enhances the efficiency and speed of decision making processes (Eisenhardt, 1989; Nutt, 2004). Eisenhardt & Zbaracki (1992) found in their study that faster decision makers developed many alternatives but analyzed them only thinly. On the other hand, other studies showed that the breadth approach decreases management consensus (Eisenhardt, 1989) and slows speed (Fredrickson & Mitchell, 1984; Janis, 1972). Furthermore, Eisenhardt (1989) also suggests that the mode of alternative evaluation is more important than the number of options. Performing an in-depth evaluation of a small set of the most attractive alternatives can support management consensus (Amason, 1996; George, 1980).

Formal analysis slows down decision making processes and inhibits efficiency (Fredrickson & Mitchell, 1984). According to Eisenhardt (1989), slower decision makers relied on data from formal systems instead of real-time data gathered by scanning the environment on their own. Thus, formal analysis may decrease process efficiency, but it facilitates building management consensus (Amason, 1996; George, 1980) when used to justify and lobby for a decision (Roberto, 2004). Formal analysis and exhaustive alternative assessment seem to promote management consensus but decrease process efficiency.

Several researchers have argued and demonstrated that conflict in the form of debates decreases process efficiency (George, 1980; Janis, 1972), as it interrupts the process and causes delays (Mintzberg et al., 1976). Amason (1996) differentiates between two forms of conflict, highlighting that, in contrast to interpersonal (affective) conflict, task-oriented (cognitive) conflict does not hinder but rather increases management consensus. Other scholars, like Kim & Mauborgne (1997), showed that consensus benefits from controversy. Their research revealed that even when an employee disagreed with a manager's decision, he committed to it when the decision process seemed fair. Employee involvement, idea sharing and decision process transparency increased the trustworthiness of management and created organization-wide commitment. Participation in decision making processes tends to decrease efficiency (George, 1980; Janis, 1972) while supporting consensus (Kim & Mauborgne, 1993; Wooldridge & Floyd, 1990).

Bourgeois & Eisenhardt (1988) identified the importance of power and conflict in group decision making. In accordance with other prominent studies, they conclude that politics has a negative impact on process efficiency (Janis, 1989; Mintzberg et al., 1976). Politics can be conceived as intra-organizational attempts at power that take time, energy and effort to act on (Pfeffer, 1992). Yet politics in the form of lobbying others and seeking allies might be constructive for building management consensus (Pettigrew, 1973; Pfeffer, 1992).

The results in table 1 suggest that high levels of process efficiency and management consensus cannot be accomplished together. Attempts to enhance efficiency decrease consensus and vice versa (Amason, 1996; George, 1980; Janis, 1972). Efforts to improve one element of the decision making process seem to weaken enhancements of other process elements. Or, as Roberto (2004) puts it, research argues that managers should achieve high levels of both but has not yet delivered an explanation of how this can be accomplished. The empirical findings of Feldman & March (1981) suggest that in order to achieve high levels of efficiency and consensus, managers need to overcome two obstacles of different natures: substantive (making strategic decisions more manageable and avoiding overwhelming decision makers and their cognitive capacity) and symbolic (enhancing the legitimacy of decision processes through signals and symbols). The research of Dean & Sharfman (1996) notes that managers in practice do not struggle with identifying strategic decisions, but generalizable rules for successful decision making have yet to be established. Although decision characteristics are important, there are recurring interaction patterns among executives that also profoundly influence strategic decision making (Eisenhardt, 1989). Therefore, it is necessary to take a closer look at the role of managers to understand how they make decisions and what the obstacles are.

Managerial Cognition and the Role of Management

Managers are the keystone of the strategy making process; individual managers need to take responsibility for formulating strategies (Hill, Jones, & Schilling, 2013). In general it is recognized that managers think, and economists assume managers to be rational in line with the theory of rational choice (Stubbart, 1989). However, this does not explain how decisions are made in a real world characterized by uncertainty and subjectivity. Managers can be seen as continuous processors within their information environment (Sproull, 1984). Accordingly, the managerial cognition perspective assumes that managers are information workers: they absorb, process and disseminate information about issues, opportunities and problems (Walsh, 1995). The fundamental underlying challenge for managers is to cope with their complex and dynamic information world in order to make decisions and solve problems. Stubbart (1989) argues that the missing link between environmental conditions and strategic actions can be explained by cognitive science. Cognitive science can help to understand how managers think, how they make decisions, and what their natural flaws are.

According to Lechner & Müller-Stewens (2000), there are two assumptions underlying the increased attention that the topic of cognition has received from strategy process research. Firstly, process and content phenomena can be better understood through a focus on perception and cognition processes. Schwenk (1988) attributes the growing interest in strategic cognition to an elevated awareness of its role in diagnosing strategic issues and formulating problems. Secondly, a correlation between cognition and decision is assumed. Research within the cognitive perspective of strategy process research offers considerations for explaining individual behavior and the subjective nature of decisions. It might also explain the disappointing practical experience with strategic planning, in which cognition has so far been ignored (Stubbart, 1989).

The classical strategy process model is based on the principles of rational decision making (Fredrickson, 1983). Yet, the cognitive limitations of the rational model have been revealed by several empirical studies. The cognition perspective therefore challenges the rational view of decision making. By acknowledging that decisions are not the result of rational considerations but instead reflect a decision maker's cognitive model and the context-specific nature of a decision (Hutzschenreuter & Kleindienst, 2006), the cognition perspective can explain why humans fail at rational decision making and thereby impede the process efficiency of decision making.

There are multiple factors influencing strategic decision making and the quality of strategic decisions. It has become clear that decisions are not the result of rational considerations. In this context Schwenk (1995: 475) defines rationality as the "extent to which a decision maker follows a systematic process in reaching carefully thought-out goals". Within strategic management, the assumption is that the cognitive limitations of strategists affect strategic decision making (Steiner & Miner, 1977). These arguments are rooted in Simon's notion of bounded rationality. Simon (1976) argues that decision makers must construct simplified mental models when dealing with complex problems. The cognitive limitations of decision makers are the reason that complex tasks may overwhelm individuals and groups (Simon, 1976; Weick, 1984); in their attempt to solve these problems they can only approximate rationality. Cognitive limits cause decision makers to adopt simplified models of the world, to limit search behavior to incrementally different options, and to accept the first satisfactory outcome (Hart, 1992). March & Simon (1958) contend that these cognitive limitations impede the use of rational analysis for many decisions. Eisenhardt & Zbaracki (1992) likewise find that decision makers are boundedly rational. Theories of bounded or limited rationality and their variations show that humans are not able to match the cognitive ideal and seek possibilities to improve rationality.

Managerial cognition is the broad term for research that takes a cognitive approach to understanding how organizations and individuals construe their environments (Jenkins, 1998). Research aims at analyzing the general cognitive structures and processes (Schwenk, 1988) which are shared amongst individuals. The focus for analyzing individual cognition is the level of top management, since its members are assumed to be the dominant protagonists (Forbes & Milliken, 1999). Examining how mental models determine which stimuli are noticed and interpreted, or how decision making is influenced by mental models, is the topic of research in managerial cognition (Stimpert & Duhaime, 2008). The complexity of strategic decision making seems to be almost infinite, yet human information processing capacity is limited (Schwenk, 1984). The aim of research on managerial cognition is to obtain insights into how decision makers comprehend and solve very complex strategic problems with limited cognitive capabilities.

Decision making is challenging because decision makers are faced with information overload, uncertainty and discontinuities. Russo & Schoemaker (2002) add that there is often little historical experience, conflicting goals and continuous change in a fast-paced environment. This makes it impossible for a decision maker to take everything into account or to satisfy all related domains. Especially when it comes to strategic decisions, where high stakes are at risk, decision makers should be aware of their limits. From the psychological perspective, three limits of human capacity (substantive obstacles) have been identified by Feltovich, Prietula, & Anders (2006). The relevant research streams in managerial cognition concerning decision making can be mapped to these psychological dimensions.

Limited ability to concentrate: Feltovich et al. (2006) point out that human beings cannot perceive and pay attention to all of the stimuli they are exposed to. Selective attention is one possible way to deal with cognitive overload. Strategic issues compete for managerial attention, which has led to research in managerial cognition on selectivity and agenda setting (Dutton & Duncan, 1987; Dutton, Fahey, & Narayanan, 1983; Kiesler & Sproull, 1982). The domains that seem most relevant receive a manager's attention, which may cause selective ignorance of other domains (Hambrick & Mason, 1984). Attention is perceived as the allocation of information processing capacity (Sproull, 1984).

Limited working memory capacity & limited long-term memory access: Humans' ability to solve problems is limited by their capacity to keep information in short-term memory and by the limited extent of information they can retrieve from long-term memory (Feltovich et al., 2006). Simon (1976) argued that decision makers must construct simplified mental models when dealing with complex problems. This has led to further research on managers' limitations in information processing (Dutton, Walton, & Abrahamson, 1989; Walsh, 1988). Hence, Walsh (1995) argues that the information overload challenge can be met by employing knowledge structures to facilitate information processing as well as decision making. As it is impossible for managers to scan every aspect of an organization and its environment (Hambrick & Mason, 1984), knowledge structures are a means of simplification. Cognitive simplifications are attempts to solve complex problems and result in a departure from rationality. Simon acknowledged the processing limitations of the human mind and discussed simplifying heuristics to cope with these limitations (Gilovich & Griffin, 2002). Simon did not reject the rational models overall and instead conceived of people as rational within their constraints. Tversky & Kahneman (1974) developed their own view of bounded rationality, coming from the psychology perspective and focusing on intuitive judgement and the heuristics underlying it. They demonstrated that heuristics underlie human judgement, resulting in biases that reflect departures from rationality. Therefore, Tversky & Kahneman are seen as the founding fathers of research on heuristics and biases challenging rational information processing theories (Gilovich & Griffin, 2002). Subsequently, the research of economic scholars has been concerned with simplification models and biases in decision making (Busenitz & Barney, 1997; Eisenhardt & Zbaracki, 1992; Schwenk, 1988) or, more specifically, with cognitive biases in strategy formation (Duhaime & Schwenk, 1985; Schwenk, 1984). Finding out how managers can avoid faulty reasoning and enhance the quality of their strategic decisions is of interest to multiple researchers (Bourgeois & Eisenhardt, 1988; Janis, 1972; Russo & Schoemaker, 2002). For Schwenk (1988), the topic of cognitive heuristics and biases is one of the most potentially useful in managerial cognition research for understanding how decision makers comprehend and solve strategic problems.

Heuristics and Biases

Total rationality is the perfection of decision making and leads to process efficiency. A rational actor assesses the probability and utility of all possible outcomes and bases his decision on the optimal combination (Gilovich & Griffin, 2002); this benchmark is formalized in the sketch following the list below. But as the rationalist metaphor of a computer-like human can no longer stand, empirical research concerns itself with the biases and shortcomings of the human mind, that is, with how real people make decisions (Tversky & Kahneman, 1974). Simon (1976) pointed out that the model of a rational human decision maker is unrealistic and does not represent human judgement. Schwenk (1988) explains these deviations from rationality in decision making in terms of biases and heuristics. "Biases and heuristics are decision rules, cognitive mechanisms, and subjective opinions people use to assist in making decisions" (Busenitz & Barney, 1997: 12). They are used by individuals as simplification strategies for making predominantly uncertain and complex decisions. Biases and heuristics belong to the most important models explaining deviations from rational decision making. By studying the heuristics of strategic decisions, a more realistic view of cognition can be achieved (Eisenhardt & Zbaracki, 1992). Previous research suggests that heuristics are important for senior managers (Eisenhardt, 1989; Fredrickson, 1985) as they provide a more realistic view of rationality. Understanding biases and heuristics can help to improve judgements and decisions (Tversky & Kahneman, 1974). Busenitz & Barney (1997) note that the factors preventing purely rational decision making have been identified and cited several times:

- High cost of decision making efforts (Simon, 1979)
- Decision maker’s information-processing limits (Simon, 1976)
- Variation in adopted decision-making procedures (Shafer, 1986)
- Different values of decision makers (Payne, Bettman, & Johnson, 1992)
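
For reference, the rational-actor benchmark described at the beginning of this chapter can be written compactly as an expected-utility rule. The notation below is a standard textbook formalization added only for illustration; it is not a formula taken from the cited sources.

```latex
% The rational actor weighs every outcome o of an action a by its probability
% and utility, and then chooses the action with the highest expected utility.
\[
  EU(a) = \sum_{o \in O} P(o \mid a)\, U(o),
  \qquad
  a^{*} = \arg\max_{a \in A} EU(a)
\]
```

Heuristics and biases, discussed in the remainder of this chapter, describe the systematic ways in which real decision makers depart from this ideal.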

It should be acknowledged that not all decision makers are subject to the same degree of biases and heuristics in decision making (Busenitz & Barney, 1997). However, heuristics and biases in human judgement result in many departures from optimality (Schwenk, 1984). According to Stubbart (1989: 339), these findings might be the reason behind strategic decision errors because they imply the reliance of strategists on "simple, flawed and biased inferential heuristics in making strategic decisions". There are already extensive lists of identified heuristics and biases; for example, Hogarth (1981) identified and described 29 biases. Those biases most likely to affect strategic decisions have been filtered out by Schwenk (1988: 44) and are shown in table 2.

Table 2: Selected Heuristics and Biases

Figure not included in this excerpt

Note: Adapted from Schwenk (1988: 44)

Schwenk (1988) argues that for the biases presented in table 2 there is some empirical evidence, mainly from laboratory environments. He suggests that these identified biases may restrict the consideration of strategic alternatives and the information used for the evaluation of strategic decisions. In psychological research, scholars like Fiedler & von Sydow (2015) see the level of precision, refinement and progress in research on heuristics and biases at the theoretical level as disenchanting and disappointing. Their argument is that the lack of clearly formulated theories and precise specifications of heuristics has prevented systematic attempts at evaluation. "Empirical and theoretical research at the level of sober cognitive research turns out to be hardly available" (Fiedler & von Sydow, 2015: 155). The lack of cohesion and of a common typology of biases and heuristics is also criticized by Gigerenzer (1991) and Hilbert (2012). The defined heuristics and biases are often too vague, argues Gigerenzer (1991). The lack of cohesion and consensus contributes to conflicting perceptions of the constitutive aspects of decision making, concludes Hilbert (2012). Therefore, Gigerenzer (1996) calls for a rethinking of research strategies and a break with the reliance on heuristics that explain everything and yet nothing. Klein (1997) illustrates the limitations of research on heuristics and biases by citing examples where biases did not compromise decision quality outside of the laboratory. The majority of evidence has been discovered in controlled laboratory settings; it remains a challenge to prove the influence under field conditions (Klein, 1997). In agreement with Gigerenzer, Klein (2008) concludes that current research is likely to fail in mirroring how people make decisions in the real world and contributes little insight into improving decisions.

To sum this chapter up, "economists prefer a vision of rational utility-maximizing managers" (Stubbart, 1989: 325-326). This assumes homogeneity amongst managers in reasoning, in noticing threats and opportunities, and in the knowledge they possess. Managers would all think in the same way and follow a defined analytical strategic management process, as prescribed for example by Schendel & Hofer (1979). Nevertheless, Schendel & Hofer (1979) noted that rationality seems to be an ideal rather than an empirical fact. The vision of managers as rational agents fails to explain how economic decisions are made in a real world full of uncertainty, subjectivity and limitations (Smircich & Stubbart, 1985; Tversky & Kahneman, 1974). Arguments against unbounded rationality are predicated on the work of Simon (1957) and March & Simon (1958), proclaiming that the cognitive abilities of managers are sequential and finite in their capacity. Research in psychology and within managerial cognition has delivered abundant empirical evidence for the concept of bounded rationality and for the limits of human cognition and decision making. Experimental findings might be able to explain strategic decision errors. However, they are also a subject of discussion. It is uncertain whether these limits are important to economics at all (Conlisk, 1996) or whether they might solely represent an approximation of human decision making. In conclusion, Nooraie (2012: 423) states that "despite the literature, our knowledge of strategic decision making process is limited". Considerable research on the factors affecting strategic decision making and processes has been done (Rajagopalan, Rasheed, & Datta, 1993). However, this research has either been limited or has produced conflicting results.

INTELLIGENT TECHNOLOGIES

The Emergence of Intelligent Computing and Artificial Intelligence

Levy & Murnane (2005) argued in their book The New Division of Labor that information processing tasks which draw on human pattern recognition cannot be transformed into rules or algorithms. In addition, complex communication was also assumed to remain dominated by humans in the future. Yet, in 2011 the IBM supercomputer Watson combined pattern recognition and natural language processing to beat human players in the quiz show Jeopardy. One of the human contestants gave a remarkable statement: "Brad and I were the first knowledge-industry workers put out of work by the new generation of thinking machines" (Brynjolfsson & McAfee, 2016: 27). The continuous development and progress of digital technology enable machines to complete cognitive tasks. Today, there are digital machines demonstrating broad abilities in domains like pattern recognition or complex communication which used to be exclusively human (Brynjolfsson & McAfee, 2014). To understand why these digital machines exist now, a short and necessarily incomplete account of the development of computing is given to convey a basic understanding.

The foundation of computing is formed by algorithms (Introna & Wood, 2002), which determine how a computer processes data. An algorithm is a method for calculating a function, expressed in a well-defined formal language (Rogers, 1987). An algorithm can also be defined as a sequence of operations which is accurately defined by a set of rules (Stone, 1973). Algorithms can thus be conceived as mathematical instructions following a defined set of rules in order to solve a specific problem; a common analogy is a cook following a recipe for a particular dish. Usually, software consists of programmed (coded) and linked algorithms that execute the given processes on the underlying hardware to achieve the desired outcome (Ceyhan, 2012). IBM's Watson, for example, uses clever algorithms, but without the necessary computer hardware behind them it would have been uncompetitive in Jeopardy (Brynjolfsson & McAfee, 2016).
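
To illustrate the recipe analogy, the short sketch below shows an algorithm in the above sense: a precisely defined sequence of rule-based steps that turns an input into a desired outcome. The example (finding the largest value in a list) is a generic textbook illustration and is not taken from the cited sources.

```python
# A minimal illustration of an algorithm: a fixed sequence of precisely
# defined steps ("the recipe") that solves one specific problem.
def largest_value(numbers):
    """Return the largest number in a non-empty list."""
    best = numbers[0]          # Step 1: take the first element as the current best.
    for n in numbers[1:]:      # Step 2: walk through the remaining elements.
        if n > best:           # Step 3: keep whichever value is larger.
            best = n
    return best                # Step 4: the surviving value is the result.

print(largest_value([3, 17, 9, 42, 8]))  # -> 42
```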

The recent achievements in computing are due to increasing computational power. IBM's Deep Blue became the first computer able to beat a human chess champion once computers had become fast enough (McCorduck, 2004). Not only the speed but also the capacity of data processing increased, for which Moore's Law offers an explanation. Based on the observation of a yearly doubling in digital electronics (Thackray, Brock, & Jones, 2015), Moore suggested a steady, exponential improvement in information technology at simultaneously decreasing costs. It may have been a simple observation, but it has predicted the past growth and pace of innovation quite accurately (Schaller, 1997). With Moore's Law comes the expectation that new information technology continuously becomes cheaper, better and faster. Paired with progressing digitization, which is creating a digital network connecting humanity (Brynjolfsson & McAfee, 2016), this results in exponential growth in the creation and use of digital information. Information is the raw material for computer processing as well as for decision making.
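
The practical meaning of such exponential growth can be illustrated with a short calculation. The doubling periods and time horizons below are assumptions chosen only for illustration (a yearly doubling as in the observation cited above, and the two-year period that is also commonly quoted), not figures taken from the cited sources.

```python
# Illustration of exponential growth under a Moore's-Law-style doubling rule.
# The doubling periods and the ten-year horizon are assumptions for illustration.
def capacity_factor(years: float, doubling_period_years: float) -> float:
    """Factor by which capacity grows after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

print(capacity_factor(10, doubling_period_years=1))  # yearly doubling: ~1024x in a decade
print(capacity_factor(10, doubling_period_years=2))  # two-year doubling: ~32x in a decade
```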

The availability of and access to large amounts of data, paired with fast computers and advanced statistical techniques, build the foundation of the kind of computing often associated with the terms thinking machines, intelligent computing or artificial intelligence. Historically, Alan Turing is broadly regarded as the father of artificial intelligence (Beavers, 2013) and of the emerging domain of intelligent computing. Turing (1950) defined the intelligent behavior of a computer as the ability to achieve human-level performance in a cognitive task. The Turing test provides a benchmark standard for machine intelligence: machine intelligence is demonstrated when a machine's actions cannot be distinguished from those of a human being (Khanna & Awad, 2015). If a machine is sophisticated enough in cognitive tasks to fool a questioner, it is assumed to be intelligent. Turing's test is based on the suggestion to ignore the verbal issues surrounding the terms think and intelligent and instead adopt a simple test in order to concentrate on building and observing the machine itself (Haugeland, 1989). The crux of the test is that a machine has to sound and talk like a person (Haugeland, 1989), which explains the test's informal name, the imitation game.

Artificial intelligence (referred to as AI from now on) is an interdisciplinary field of study concerned with creating computers that are capable of intelligent behavior. Intelligence is for the most part concerned with rational action, in the sense of taking the best possible action in a given situation (Russell & Norvig, 1995). Therefore, the ideal concept of intelligence can be called rationality. AI is a universal research field encompassing a diversity of subfields. It deals with general purpose areas like perception and logical reasoning but also with specific tasks like playing chess, writing poetry or diagnosing diseases (Russell & Norvig, 2014). Haugeland (1989: 2) describes AI as an "effort to make computers think". According to Russell & Norvig (1995), the working assumption for AI efforts is that human intelligence can be described to the extent that a machine can simulate it. Whitby (2012) points out that AI and related research have always had multiple goals. The scientific goal is to understand the principles that make intelligent behavior possible, whereas the engineering goal is the design and synthesis of intelligent artifacts (Poole & Mackworth, 2010). Needless to say, there are different perceptions of AI. Russell & Norvig (1995) categorized the definitions of eight textbooks into four categories:

- Systems that think like humans
- Systems that act like humans
- Systems that think rationally
- Systems that act rationally

In short, the perceptions of and assumptions about AI vary, especially from a philosophical perspective. Russell & Norvig (1995) identified the variation of AI definitions along two main dimensions: definitions are concerned either with thinking or with behavior, and success is measured either in terms of human performance or in terms of pure rationality. This paper considers a machine (artificially) intelligent when the machine is able to perform cognitive functions, like pattern recognition or learning, which are intuitively associated with human minds. It adopts the view of Russell & Norvig (1995) that artificial intelligence is mainly concerned with rational action, embodied in the notion of an intelligent agent.

AI is the established name for the research field, but the term itself is a subject of much confusion (Poole & Mackworth, 2010), as it can be understood as the opposite of natural intelligence. "Natural means occurring in nature and artificial means made by people" (Poole & Mackworth, 2010: 5). AI can thus be seen as a form of intelligence created by humans that aims at the automation of intellectual tasks. People tend to split into at least two groups when it comes to AI: on the one hand, those who find the whole idea preposterous; on the other hand, those who are convinced of AI and see it only as a matter of time until it reaches its full potential. Yet both sides are surprisingly self-confident in their opinions and attitudes (Haugeland, 1989).

One subset of AI are computer algorithms which can autonomously improve through experience (Russell & Norvig, 2014) also called machine learning. Machine learning refers to methods and algorithms enabling computers to autonomously learn from data (Bishop, 2006). The first use of the term machine learning goes back to Samuel (1959) who defined machine learning as a field of study with the aim of giving computers the capability to learn without being explicitly programmed. While there are also other definitions they all have one thing in common. They all share the notion that machine learning enables computers to wisely perform tasks by learning the encircling setting from repeated examples instead of number crunching (Naqa & Murphy, 2015). In this context estimating dependencies from data (Cherkassky & Mulier, 2007) is characterized as learning. Making it obsolete to explicitly program computers because they can change and improve their algorithms by themselves (Marr, 2016). It may be noted that machine learning algorithms incorporate different information using techniques like neural artificial or Bayesian networks (Bishop, 2006). Logically, there are different classifications of machine learning algorithms. This paper abstains from discussing the different types of machine learning algorithms. Basic understanding of the nature of machine learning and how it breaks with traditional approaches is adequate to understand the potential use of it. Naqa & Murphy (2015: 6) define machine learning as: “The ability to learn through input from the surrounding environment [ … ] is the main key to developing a successful machine learning application”. For example, Google’s self-driving cars heavily depend on machine learning technology. Machine learning and data mining is used to process all the sensor data (Tailor, 2015). By processing and using different data sources like speed limits, traffic light patterns, distances between objects etc. the car can act in different situations without human interventions. As a prerequisite the machine must be able to interpret pictures and input from video cameras in real time to act accordingly.
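
The break with explicit programming can be made concrete with a minimal supervised-learning sketch. Instead of coding braking rules by hand, a model infers them from labeled examples. The scenario, the features and all numbers below are hypothetical, and the scikit-learn library is assumed to be available; the sketch only illustrates the general idea of learning from examples.

```python
# A minimal sketch of supervised machine learning (hypothetical traffic scenario).
# Requires scikit-learn; features and labels are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each example: [distance_to_object_m, own_speed_kmh]; label: 1 = brake, 0 = keep going.
X = [[5, 30], [50, 30], [10, 80], [120, 50], [8, 20], [90, 100]]
y = [1, 0, 1, 0, 1, 1]

model = LogisticRegression()
model.fit(X, y)            # the algorithm adjusts its parameters from the examples

print(model.predict([[15, 60]]))        # predicted action for a new, unseen situation
print(model.predict_proba([[15, 60]]))  # probability the model attaches to each action
```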

Introducing IBM Watson

IBM defines cognitive computing as systems which learn at scale, reason with purpose and interact with humans naturally (Kelly III, 2015). Cognitive systems are described as probabilistic. AI applications have the ability to reason with uncertain knowledge when using probabilistic logic (Nilsson, 1986). This means that they not only answer numerical problems but can also create hypotheses, reasoned arguments and recommendations. Such systems are designed to make sense of complex and unstructured data by weighing information from multiple sources and giving ranked answers with a confidence level assigned. Programmable systems, in contrast, are based solely on rules and follow a predetermined series of processes to arrive at outcomes; this rigidity limits their usefulness. With cognitive computing, IBM refers to Watson, the lead brand of its cognitive technology. Watson has had a big echo in the media since Jeopardy and has been aggressively marketed by IBM as cognitive technology, supporting the creation of a marketing hype around cognitive computing. So far, IBM has been the only company commercializing a cognitive computing platform (Dalton, Mallow, & Kruglewicz, 2015).

For Salomon et al. (1991), it makes sense to call computer tools that offer an intellectual partnership cognitive tools or technologies of mind, assuming they allow a learner to function at a level that transcends the limitations of his own cognitive system. Wang (2009: 2) delivers a more academic definition of cognitive computing: "Cognitive computing is an emerging paradigm of intelligent computing methodologies and systems that implements computational intelligence by autonomous inferences and perceptions mimicking the mechanisms of the brain". In other words, cognitive computing can be defined as technologies mimicking the mechanisms of the human brain and refers to intelligent computing methodologies and systems (Wang, 2003). According to IBM, the technology it proclaims as cognitive has attributes in common with the field of AI but at the same time differentiates itself by the complex interplay of its various components (Kelly III & Hamm, 2013). However, the description on the IBM Watson website states: "IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data" (IBM Corporation, 2016a). Based on this description, Watson sounds like an application of AI rather than a breakthrough in applying cognition or cognitive science to computing. It remains unclear whether cognitive computing refers to a new era of computing, which IBM markets as the cognitive era or future, or whether it is part of a marketing strategy to facilitate the commercialization of IBM's Watson technology. Cook (2011) interviewed the journalist Stephen Baker, who spent one year with the Watson research team. He observed that the IBM research team, while programming Watson, paid little to no attention to the functions of the human brain; the programming was not aimed at mimicking brain mechanisms. In conclusion, for Baker any parallels between Watson and the human mind "are superficial, and only the result of chance" (Brynjolfsson & McAfee, 2016: 255). The parallels in thinking between Watson and humans do not come from sharing the same design but from tackling the same problems (Cook, 2011).

Watson is a question answering system with natural language processing capabilities. From a technical point of view, Watson combines real-time computing power with machine learning and natural language processing (Dalton et al., 2015). As a result, Watson can process and thereby understand human language. Through hypothesis generation and scoring methods, the most probable answers are retrieved. Watson is programmed for uncertainty and weighs information from multiple sources to give ranked answers with a confidence level assigned. An overview of the strengths and capabilities of Watson is condensed and clearly presented in table 3.

Table 3: Central Attributes of IBM Watson

Figure not included in this excerpt

Note: Author's creation based on IBM Corporation (2016); Kelly III (2015); Kelly III & Hamm (2013).

Understanding

The amount of digital data created globally doubles every two years (Gantz & Reinsel, 2011), yet only about 0.5% of this data is ever analyzed (Gantz & Reinsel, 2012), largely due to the predominance of unstructured data. Unstructured data, unlike structured data, do not conform to a data model or database (Inmon & Nesavich, 2007); unstructured data therefore have no identifiable internal structure (Tujetsch, 2015). Without structure, the information cannot be reasonably collected and analyzed, since its "contents are not organized into arrays of attributes or values" (Berman, 2013: 2). For instance, audio, video, image, social media or sensor data are considered unstructured data, but so are PDFs, text files, emails and many more. All these types of data can be stored and managed without the computer system understanding their contents, yet they can hardly be retrieved by applications for processing or interpretation (EMC Education Services, 2009). It is generally assumed that between 80 and 90 percent of all potentially usable business information is hidden in unstructured data (Grimmes, 2008). Ultimately, unstructured data matter. Watson knows that not all data are created equal (Burrus, 2015). Systems like Watson focus on the realm of unstructured data and the need to manage the so-called four Vs: volume, velocity, variety and veracity. Watson has been designed to mine and understand extensive amounts of unstructured data (Dalton et al., 2015). Human speech or audio data in multiple languages can be transformed into text (IBM, 2016a). Natural language processing is used to understand grammar and context, evaluate all possible meanings and determine what is being asked (IBM Corporation, 2016a). Image or video data can be identified by the subjects and objects they contain, based on classifiers, and organized into categories (IBM, 2016b). Put simply, Watson dismantles every piece of unstructured data to learn which facts it contains. The system can thus absorb bodies of acquired knowledge quickly, either through human input or by accessing publicly available data. Figuratively speaking, Watson is able to read, see and hear without constraint and to do so in context and with meaning.
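
A toy example may clarify what it means to extract usable structure from unstructured text. The sketch below turns a free-text snippet into a small structured record by counting domain keywords; it is a deliberately simplified stand-in for the natural language processing described above, and both the text and the keyword list are invented.

```python
# Toy illustration of deriving structure from unstructured text.
# The snippet and the keyword list are invented for illustration only.
import re
from collections import Counter

document = (
    "The board discussed the acquisition of a competitor. Analysts see high "
    "risk in the acquisition, but the expected revenue growth is substantial."
)

keywords = ["acquisition", "risk", "revenue", "growth", "competitor"]

tokens = re.findall(r"[a-z]+", document.lower())      # crude tokenization
counts = Counter(t for t in tokens if t in keywords)  # structured attribute/value pairs

print(dict(counts))  # e.g. {'acquisition': 2, 'competitor': 1, 'risk': 1, ...}
```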

Reasoning

Watson is not a programmed, deterministic system based on rules and structured data (Kelly III, 2015). Instead, it can understand and act on all types of data, especially unstructured data. Similar to human reasoning, Watson uses probabilistic reasoning. Questions are analyzed as input, from which a set of features and hypotheses is generated (Lee, 2014). By using multiple reasoning algorithms running in parallel, the most likely responses to a question are evaluated by means of contextually relevant scores (Ferrucci, Levas, Bagchi, Gondek, & Mueller, 2013). Hypothetical answers are built and the hypotheses are tested against the quality of the associated evidence (Higgins, 2013). By merging the scores of all possible answers, a confidence level is assigned to each answer or insight. In the case of a tricky question, besides presenting a set of possible results, Watson can also ask clarifying questions (Higgins, 2013). Based on the human input and further information, alternatives can be ruled out or the confidence level in the presented answers increased. Without detailed consideration of the technical interplay and mode of operation, Watson can be regarded as an intelligent system designed to make sense of complex and unstructured data by weighing information from multiple sources and giving ranked answers with an assigned confidence level.
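The core idea of scoring competing hypotheses against weighted evidence and merging the scores into a confidence ranking can be illustrated with a minimal sketch. This is not Watson's actual algorithm; the candidate answers, evidence sources, weights and scores below are invented for illustration.

```python
# Minimal illustrative sketch (not Watson's actual algorithms): candidate answers
# are scored against several evidence sources, the scores are merged, and the
# result is a ranked list with a rough confidence value.
candidate_answers = ["Answer A", "Answer B", "Answer C"]

# Hypothetical evidence scores per source (0 = no support, 1 = strong support).
evidence_scores = {
    "Answer A": {"journal": 0.9, "database": 0.7, "report": 0.6},
    "Answer B": {"journal": 0.4, "database": 0.8, "report": 0.3},
    "Answer C": {"journal": 0.2, "database": 0.1, "report": 0.5},
}

# Some sources are considered more reliable than others.
source_weights = {"journal": 0.5, "database": 0.3, "report": 0.2}

def merged_score(answer):
    scores = evidence_scores[answer]
    return sum(source_weights[s] * scores[s] for s in scores)

total = sum(merged_score(a) for a in candidate_answers)
ranking = sorted(candidate_answers, key=merged_score, reverse=True)

for answer in ranking:
    confidence = merged_score(answer) / total  # normalized to a crude confidence
    print(f"{answer}: confidence {confidence:.2f}")
```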

Learning

For Jeopardy, Watson was trained with question and answer pairs, shown what a correct response looks like and then corrected when a given answer was wrong (Best, 2013). Watson is thus trained through interaction with humans and evolves from these dynamic interactions. Thereby, it can act on all information humans create, which Uzzi & Ferrucci (2015) describe as learning from the crowd and Brynjolfsson & McAfee (2016) as using the interconnected, digital network of humankind. Watson can also be told which information sources should be given more weight than others. In the case of contradictory facts the most reliable source is favored, and the most decisive and important content within a text is used (Higgins, 2013). For example, medical journals from the twenty-first century are preferred over those from the twentieth century (Higgins, 2013). Training is an ongoing process and essential for improving the ability to make expedient recommendations (Best, 2013). Training determines the quality of the machine learning results. Once the ground truths are defined, machine learning begins to handle question and answer pairs on its own and modifies its algorithms accordingly. Watson continuously accumulates data and derives insights with every interaction. Watson is not programmed but trained for new applications, using the human understanding of the topic and fitting data. Human oversight will be needed to develop, train and customize such intelligent systems in the foreseeable future (Dalton et al., 2015). Human guidance is fundamental to making computer systems smarter.
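The principle of training with question and answer pairs can be sketched with standard supervised learning tools. The example below uses scikit-learn; the toy questions, answer labels and the specific model are assumptions for illustration and do not reflect Watson's actual training pipeline.

```python
# Minimal illustrative sketch of supervised training with question-answer pairs
# (not Watson's training pipeline); questions, labels and model are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training pairs: question text -> correct answer label.
questions = [
    "Which planet is known as the red planet?",
    "Which planet is closest to the sun?",
    "Which planet has prominent rings?",
    "Which planet do humans live on?",
]
answers = ["Mars", "Mercury", "Saturn", "Earth"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(questions, answers)

# The trained model returns ranked answers with probabilities, analogous to
# confidence levels; a wrong answer would be corrected by adding further
# labeled pairs and retraining.
probs = model.predict_proba(["Which planet is the red one?"])[0]
for label, p in sorted(zip(model.classes_, probs), key=lambda x: -x[1]):
    print(label, round(p, 2))
```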

Man-Computer Symbiosis

A computer beating human contestants in a quiz show and the technical possibility to automate cognitive skills stir up fear in people about job losses. Newspaper headlines like “YOUR job won’t exist in 20 years: Robots and AI to eliminate ALL human workers by 2036” (Brown, 2016) are not a rarity. Keynes (1963) mentioned the term technological unemployment for the first time in a future-oriented essay written in 1930, arguing that automation could lead to permanent unemployment if more and more things are automated. Leontief (1983) supported Keynes’s argument by predicting the fading role of humans as the most important factor of production. However, Brynjolfsson & McAfee (2014) point out that the discipline of economics is dominated by the view that technological progress and automation in total create more jobs than they destroy. Probably the most well-known supporter of this view is Joseph Carl Robnett Licklider. Licklider (1960) is convinced of a future of cooperation between humans and computers. He perceives computers as a new medium of expression, making information available and having the potential to inspire human creativity. Man-computer symbiosis refers to a future in which computers empower humans instead of artificial intelligence replacing human beings. One of his predictions was that computers would execute the routinized work used to prepare insights and decisions. Information technology should be used to augment human intelligence by extending the information processing capabilities of the human mind, so that man and computer cooperate in making decisions and controlling complex situations. Machines may complement humans when the individual strengths of both parties are combined. This process is also more likely to foster innovations which could never have been generated by either machines or humans on their own (Brynjolfsson & McAfee, 2016).

IBM applied the Watson technology to the healthcare industry, where the clinical expertise and judgement of physicians is augmented but not replaced by Watson. From an academic perspective, expert systems in the clinical area aim at providing real-time visual guidance and automation of tasks (Wood et al., 2007), assuming that expert systems improve decision making in dynamic workspaces by enhancing situation awareness (Endsley, 1995a). Knowledge of the factors influencing decision making in complex environments is a necessity for developing clinical expert systems (Klein, Orasanu, & Calderwood, 1993). Therefore, the requirements and cognitive processes within the workflow have to be accurately analyzed when developing such a system (Jalote-Parmar, Badke-Schaub, Ali, & Samset, 2010). Accordingly, decision makers’ cognitive processes have to be understood in order to provide appropriate decision support at the right time and in the right manner, referring to the theory of situation awareness: perception of critical factors, comprehension of their meaning and projection of their status into action (Endsley, 1995b). Hence, any such system should be able to accompany the clinician in his daily work, support him in developing accurate situation awareness and assist him in making decisions.

Clinicians think about diagnosing their patients in the same way Watson approached Jeopardy (Graham, 2015): they take the most probable answers into account. One application of Watson in healthcare is clinical decision support in oncology and genomics. For example, the Memorial Sloan Kettering Cancer Center is partnering with IBM to train Watson in interpreting patients’ relevant cancer information to identify treatment options (Haswell & Hickey, 2012). Existing health record systems are helpful in storing health record data but cannot summarize the data and consolidate it with the notes of doctors and nurses (Darrow, 2015). Through the combination of specialized cancer knowledge with the analytical speed of Watson, an intellectual partnership between clinicians and technology is formed. The ultimate goal is to expand clinicians’ knowledge base, deepen their expertise and improve their productivity (IBM Corporation, 2016b). Watson is a means to leverage decades of specific cancer knowledge and research. Within seconds a patient’s symptoms can be compared with the medical literature and health records to identify the most likely condition the patient has. In addition, by reading and analyzing all relevant information such as health records, medical journals, doctors’ notes or clinical trials, the clinician can be provided with the most probable treatment recommendations and the medical evidence behind them. Watson augments the clinician’s judgement and supports him in treating cases individually and keeping up with the information overload. Watson for Oncology can be conceived of as a research assistant: it ensures that when doctors make diagnoses or suggest treatments they consider all available research, clinical and other information, by doing the required background research for them.
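In highly simplified form, the ranking of likely conditions against a body of knowledge can be sketched as follows. This is not Watson for Oncology; the conditions, symptoms and the simple overlap score are hypothetical placeholders for the weighted evidence a real system would use.

```python
# Minimal illustrative sketch: a patient's symptoms are compared against a
# small, invented knowledge base and conditions are ranked by the share of
# their typical symptoms that match. All entries are hypothetical placeholders.
knowledge_base = {
    "Condition A": {"fever", "cough", "fatigue"},
    "Condition B": {"rash", "fever", "headache"},
    "Condition C": {"cough", "shortness of breath"},
}

patient_symptoms = {"fever", "cough"}

def match_score(condition_symptoms):
    # Fraction of the condition's typical symptoms observed in the patient.
    return len(condition_symptoms & patient_symptoms) / len(condition_symptoms)

ranked = sorted(knowledge_base.items(), key=lambda kv: match_score(kv[1]), reverse=True)
for condition, symptoms in ranked:
    print(f"{condition}: score {match_score(symptoms):.2f}")
# A real system would weigh evidence from literature, health records and trials
# rather than simple set overlap.
```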

In the development of supporting expert systems, the healthcare discipline seems to be ahead of the management discipline. Most executive information systems focus on the delivery of information to executives. By clicking on icons or command buttons, executives can browse through a series of screens of tabular or graphical information organized in a hierarchical structure (Chen, 1995). Executive information systems are considered a specialized form of decision support systems (DSS) (Power, 2002). DSS aim at helping managers select alternatives for a problem by utilizing decision models, rules and connected databases (Tripathi, 2011). They help to screen data from formal analyses and sort the display of data according to criteria defined by the user (Houdeshel, Watson, & Rainer, 1992). Shim et al. (2002) outlined the need to develop these systems further with intelligent systems technology, to cope with the overwhelming flow of data executives are faced with and to support a more effective use of the systems. A more effective use is usually associated with a personalized experience. Liang, Lai, & Ku (2006) found evidence that personalized executive systems can increase user satisfaction when the recommended content fits the executive’s interests and curtails the information overload.

Companion Systems to Foster the Future Man-Computer Symbiosis

Accompanying a person and assisting him or her in making better decisions is the promise of a rational agent. The term companion in relation to technical systems has been used in various contexts. For example, there are the so-called COMPANIONS projects funded by the European Union, focused on developing conversational agents that give human users companionship over a long period of time (Wilks, 2010). Another European initiative focuses on robotic companions with the aim of assisting humans in their daily activities (Paolo, 2016). Nevertheless, there is no definitive definition of companion technology (Biundo, Höller, Schattenberg, & Bercher, 2016). Biundo et al. (2016) claim that the first attempt at a systematized definition was made with the establishment of the Transregional Collaborative Research Centre “Companion-Technology for Cognitive Technical Systems”. The research centre is a cross-disciplinary endeavor exploring cognitive capabilities and their implementation within a technical system, driven by a consortium consisting of Ulm University, Otto-von-Guericke University Magdeburg and the Leibniz Institute for Neurobiology Magdeburg (Biundo & Wendemuth, 2015). Companion technology is seen as a field of research between artificial intelligence, informatics, engineering sciences and life sciences (Biundo et al., 2016). In recognition of its innovative development, the Centre received an award from the Germany - Land of Ideas initiative in 2015 (Deutschland - Land der Ideen, 2016).

Companion systems are defined as “cognitive technical systems showing particular characteristics: competence, individuality, adaptability, availability, cooperativeness and trustworthiness” (Biundo et al., 2016). The research program assumes companion systems to be the future of technical systems: cognitive technical systems with individualized functionality (Biundo-Stephan & Wendemuth, 2009). This follows the vision that future technical systems will be customized to the individual user, driven by user-tailored human-technology interaction (Wendemuth & Biundo, 2012). “The central cognitive processes of planning, reasoning, and decision making are the basis of action and interaction between users and technical systems” (Wendemuth & Biundo, 2012: 90). They enable companion systems to support human users in their actions and decisions. The investigation and implementation of cognitive abilities, as well as their orchestrated interplay, facilitates the companion-like behavior. These technical systems either assist as a cooperative agent in specific tasks or give companionship to the human user in general (Biundo et al., 2016). Companion systems can be characterized as cooperative, reliable, competent and trustworthy partners (Biundo & Wendemuth, 2010).

Application areas currently lie mainly within robotics, driver assistance and other assistance systems. However, the range of use is being and will be further expanded. For instance, in May 2016 Google introduced the Google Assistant, a comparable companion system for personal life. It is a personal AI assistant, or butler, based on the existing voice recognition software Google Now. There are other personal AI assistants based on speech recognition, for instance Amazon's Alexa, Apple's Siri and Microsoft's Cortana, but none is able to perform on a level comparable to Google (Bohn, 2016). Comparisons of the Google Assistant with the AI system Jarvis from the Iron Man films have already been made (Rajawat, 2016). The Assistant is not perceived as that advanced, but as considerably close; basically, it works on the same principles (Rajawat, 2016). It attempts to impersonate human interaction, is capable of reciprocal dialogues (Lynley, 2016) and can access all of Google’s digital content (Statt, 2016). Theoretically, it may be used to fulfill any need that can be accomplished by an internet service. Basic functions are the comprehension of single and multiple questions, the submission of suggestions and the capacity to act. It can, for instance, coordinate schedules, make matching reservations, book the corresponding tickets, remind the user when it is time to leave and propose the best route in relation to the current traffic load. In doing so, it continually learns about a person’s habits and personal life and personalizes its interaction and services. Using these kinds of technologies or services is a voluntary choice. Yet, the full risks and consequences for the protection of privacy are not to be underestimated and have to be considered.

In condensed form, companion technology can be seen as cross-disciplinary research originating from the field of AI. It examines the future of human-technology interaction. Central to its research is how intelligent and user-tailored system behavior can be achieved. Every user has individual preconceptions and expectations of the system’s functions in a particular situation, which determine his behavioral strategy (Rösner et al., 2011). Understanding users’ functional requirements for pursuing their goals, as well as their behavior, is needed. Apart from academic research, technology ventures in particular have made practical progress in building comparable virtual companions based on AI technology. These may not meet the academic requirements for a companion system one-to-one, but they have many characteristics in common with it. Possibly both sides can benefit from each other.

Companion systems are able to adapt to the individual user and his current situation. Their characterization as cooperative and trustworthy partners is in line with the man-computer symbiosis proposed by Licklider (1960). There is an evolving relationship between humans and machines. Therefore, one needs to look beyond the narrow perspective of a future in which machines and computers replace humans. By observing early adopters’ usage of emerging technologies, Rivera & Van der Meulen (2013) identified three main trends: machines replacing humans, technology augmenting humans, and humans and machines working alongside each other. In the next chapter the prospects of technology working alongside managers will be explored. Biundo et al. (2016) are of the opinion that the user assistance of companion systems can be beneficial in any situation involving great risks or high information loads. Hypothetically speaking, they are therefore also relevant for strategic decision making. The potential benefits of companion systems for strategists in decision making will be explored, propositions for the development and usage of such systems will be made, and finally the possible impact on the role of management will be prognosticated.

PROSPECTS FOR STRATEGIC DECISION MAKING

The Application of Intelligent Technologies

Ever since the first machines were reported to have passed the Turing Test, a growing list of prominent AI experts, Stephen Hawking among them (Cellan-Jones, 2014), have warned that AI could become able to think beyond human intelligence. Singularity is the point at which machines start acting autonomously because they have become more intelligent than humans (Guerrini, 2015). For Kirkland (2014), software and systems have already outpaced even the best managers when it comes to finding answers to a problem. Yet, considering the current state of AI development, singularity seems a long way off. The impression given by artificial intelligence and digital technologies can be misleading. They may exhibit cognitive or human-like skills and seem to become human-like, but they are not; so far they are solely artificial resemblances of human intelligence (Brynjolfsson & McAfee, 2016). Intelligent technologies may not necessarily be limited to artificial intelligence (Salomon et al., 1991), but they undertake significant cognitive processing on behalf of the user. A technological system might not be able to think in human terms, but an increasing number of tasks which require cognitive skills, such as planning, reasoning and learning, can be automated (Schatsky, Muraskin, & Gurumurthy, 2015). This is potentially the explanation behind the emergence of cognitive computing, as seen in the example of IBM and its Watson technology.

Generally, the range of application for intelligent technologies falls into three main categories: product, process and insight (Dalton et al., 2015). The product category is concerned with the possibilities to generate new products and services; use cases aim at providing new end-customer benefits. The process category mainly deals with the improvement or automation of operations. Schatsky et al. (2015) note that automation tends to be internally focused and is mainly implemented in two distinct ways: either by replacing or by augmenting human workers with technology. The insight category is about applying technology to uncover insights in massive amounts of data, aiming at informing operational and strategic decisions with the created insights.

Companion systems are one possibility to combine these ranges of application in a single system or interface. They are a new product empowering users on an individualized level, for example by automating information search and analysis to provide new insights for informed decisions. Companion systems might not necessarily be a new technology; rather, they can be conceived of as the ideal interface for using intelligent systems or for technology working alongside humans. By definition, a companion system is based on an intelligent or cognitive system which in addition shows particular characteristics like individuality and cooperativeness.

Proposition #1: Attributes of companion systems should be applied to intelligent systems like IBM Watson. They are means not only to personalize but to individualize and to establish an intellectual partnership between humans and technological systems.

As found by Liang et al. (2006), the personalization of executive systems increases usage and user satisfaction. This is in alignment with the vision of a future of customized human-technology interaction as proposed by Wendemuth & Biundo (2012).

Proposition #2: Applying attributes of companion systems increases the acceptance and use of executive or decision support systems.

As discussed in section one, a manager is an information worker and a continuous processor of his environment. He needs to absorb, process and disseminate all kinds of information to be aware of the situation. Managerial information processing is dominated by brief, oral communication (Sproull, 1984). Natural language processing capabilities make it more natural to use any system, but also make it possible for the system to understand documents or even conversations. It is conceivable that companion systems will be utilized to listen to conversations, transcribe them and memorize them. Theoretically, companion systems gather all potentially pertinent information across different sources, consolidate it, put it into context, and assess and display it according to the user's preferences.

Proposition #3: Companion systems redefine the relationship between a manager and his digital environment by providing a new level of context awareness and insights.

Context awareness is helpful in reducing the complexity of strategic decisions. Companion or intelligent systems absorb or retrieve knowledge from accessible sources faster than any manager, thereby unlocking the value of data by making sense of great volumes of all data types. Hence, such systems are able to act on the roughly 80 percent of data which have not been part of any formal analysis before. This increases information insight and accessibility and has a positive impact on decision making (Glazer, Steckel, & Winer, 1992), making strategic problems more manageable and thereby reducing uncertainty. It becomes easier for the manager to define and formulate strategic problems.

Proposition #4: The context awareness of companion systems assists a manager with the information overload and positively relates to decision making.

The promise of intelligent technologies is that the predominant tradeoffs between speed, cost and quality can be broken (Schatsky et al., 2015). When these technologies work alongside humans, the benefits of both sides can be accessed: on the one hand the productivity and rapidity of machines, and on the other hand the emotional intelligence of humans and their ability to handle uncertainty as well as ambiguity (Rivera & Van der Meulen, 2013). The cooperation between humans and technology inspires human creativity (Licklider, 1960) and might empower a manager to act beyond his cognitive limitations.

Proposition #5: Intelligent technologies might enable a manager to make better informed and timelier decisions.

For businesses, intelligent technologies are an emerging source of competitive advantage (Schatsky & Schwartz, 2015). They can boost productivity and break prevailing tradeoffs. Similarly, their emerging use will also affect a manager's competitive advantage. Knowledge may become more like a commodity when particular knowledge becomes widely accessible, analyzed and reprocessed. Intelligent systems make expert knowledge and evidence available at one's fingertips.

Proposition #6: Intelligent technologies weaken the competitive advantage of knowledge possession.

Intelligent systems empower a manager to act and decide on comprehensive insights from various, and hypothetically from all, sources of knowledge. They can therefore boost human productivity, but they require human oversight. Even intelligent systems produce partly imperfect results (Schatsky et al., 2015). Machine learning requires upfront training and configuration performed by humans to approximate perfect results. Human expertise is then leveraged by the analytical speed of technology. Once adequate algorithms are defined or have been trained, they can be leveraged without constraint (Brynjolfsson & McAfee, 2016).

Proposition #7: Intelligent systems require upfront investments of time, in training and user interaction to add value for a manager.

Proposition #8: Instead of knowledge possession the holding of suitable algorithms might become the competitive advantage for organizations and strategists.

Managerial Use of Intelligent Technologies

A manager’s attention allocates his information processing capacity. Ideally, all stimuli or strategic issues are considered and weighed up; afterwards the information processing capacity is allocated amongst the most important stimuli. However, the constraints of the human brain, above all its capacity limitations, limit a manager’s attention. Hence, managers often apply simplified cognitive models to cope with the stimulus satiation, resulting in limited search behavior and in selective and unconscious ignorance of strategic issues. Strategic issues begin to compete for managerial attention, as indicated amongst others by Hambrick & Mason (1984).

Proposition #9: An effective manager trains an intelligent system to continuously scan for strategic issues to reduce his selective ignorance.

A machine is not tied to working hours, breaks or concentration; it is able to scan around the clock without pause. The individual manager can specify the particular stimuli to be scanned for. In addition, the scanning can, for instance, be trained on the basis of research results on managerial attention and a manager's self-evaluation, in order to minimize the factors influencing managerial selectivity and attention allocation.
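A minimal sketch of such continuous scanning, assuming a manager-defined set of strategic stimuli and a generic stream of incoming items (both invented here), might look as follows; a real system would of course use richer language understanding than simple keyword matching.

```python
# Minimal, hypothetical sketch of Proposition #9: a manager specifies strategic
# stimuli of interest and an assistant continuously screens an incoming stream
# of items for them. The keywords and items are invented for illustration.
watched_stimuli = {"new entrant", "regulation", "patent", "acquisition"}

incoming_items = [
    "Competitor files patent for low-cost sensor",
    "Quarterly staff newsletter published",
    "Regulator announces draft regulation for data sharing",
]

def flag_strategic_issues(items, stimuli):
    flagged = []
    for item in items:
        hits = [s for s in stimuli if s in item.lower()]
        if hits:
            flagged.append((item, hits))
    return flagged

for item, hits in flag_strategic_issues(incoming_items, watched_stimuli):
    print(f"Flagged: {item!r} (matched: {', '.join(hits)})")
```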

Just as with scanning, the human capacity to retain and retrieve information for analysis is limited. A manager might use heuristics to deal with complex problems. Applied heuristics may help to simplify or rationalize decision making, but they most likely result in biases, reflecting departures from rationality. Unbounded rationality accounts for high levels of decision process efficiency. Hence, a manager's cognitive limitations impede rational analysis and process efficiency. In their search for support in finding and understanding the information needed to improve their decision making, managers routinely seek out people assumed to be confident or to have expert knowledge as a source (Sproull, 1984). However, in finding answers to problems, software has already surpassed even the best managers (Kirkland, 2014). Intelligent systems can act on all digitally available information, learn from the digital crowd and have outstanding pattern recognition abilities, thereby processing not only great volumes of data but doing so at great speed. For example, Watson reads 40 million documents in 15 seconds (IBM Corporation, 2016b). Like a rational agent, Watson bases its decisions on the combination of the probability and the utility of every possible alternative, possibly eliminating biases in information processing and interpretation.
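The rational-agent logic of combining probabilities and utilities can be made explicit with a small worked example. The alternatives, probabilities and utilities below are invented for illustration; they merely show the calculation a system of this kind performs at scale.

```python
# Minimal illustrative sketch of the rational-agent idea mentioned above:
# each alternative is evaluated by combining outcome probabilities with
# utilities, and the alternative with the highest expected utility is chosen.
alternatives = {
    # alternative: list of (probability, utility) pairs over possible outcomes
    "Enter new market":   [(0.3, 100), (0.7, -20)],
    "Expand core market": [(0.8, 40), (0.2, -5)],
    "Do nothing":         [(1.0, 0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
for name, outcomes in alternatives.items():
    print(f"{name}: expected utility {expected_utility(outcomes):.1f}")
print("Rational choice:", best)
```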

Proposition #10: An effective manager uses the analytical speed and accuracy of intelligent systems to reduce biases in information processing and interpretation.

Within their frame, computers are outstanding at pattern recognition but much less sophisticated outside it. Humans' frames are naturally broader because of their multiple senses (Brynjolfsson & McAfee, 2014). For example, Google's AlphaGo is a learning algorithm trained on databases containing millions of moves made in the past by human players of the Chinese board game Go (Johnson, 2016). The algorithm identifies good patterns of play and learns to reinforce its behavior based on these patterns to gradually improve its abilities (Nielsen, 2016). Nevertheless, AlphaGo only recognizes patterns in relation to Go game boards and has no ability to generalize beyond them (Johnson, 2016). Games often offer perfect information and clear rules. In the real world, in contrast, the rules are often ambiguous, behaviors unpredictable and the variables infinite. There might not always be a right or wrong.

Proposition #11: An effective manager understands how intelligent systems process information, acts on its insights but also leaves room for ambiguity.

Burrus (2015) is confident that intelligent systems like Watson will come to know science better than humans do. Yet, we should not expect digital technologies to solve problems which humans are not able to solve (Wang, 2003). Dewhurst & Willmott (2014) compared examples of machine learning applications and found that outstanding results in pattern recognition are achieved when the input and knowledge base are of high quality. Intelligent systems require upfront training and configuration performed by managers to approximate perfect results. Before a manager's expertise can be leveraged by the analytical speed of machines, he needs to determine the problem to be solved. In addition, useful datasets, information and other input factors for the machine to draw conclusions from are needed. Afterwards, the machine is able to learn and adapt from the interaction with the manager.

Proposition #12: An effective manager learns how problems are solved with machine learning.

Obviously, managers have to make tradeoffs among competing objectives. The factors which promote process efficiency tend to reduce management consensus and vice versa (Roberto, 2004). Decision quality, consensus and efficiency are competing objectives among which managers have to make these tradeoffs (Janis, 1989). It can be argued that effective managers are able to overcome existing tradeoffs. Eisenhardt & Zbaracki (1992) call for further studies investigating how all process outcomes (decision quality, speed and implementation) can be achieved simultaneously. So far, research implies that in order to achieve efficiency and consensus managers have to make tradeoffs. Potential tradeoffs are acknowledged, but the premise that high levels of efficiency and consensus are achieved by successful firms stays the same.

Fast decision making is considered to lead to better performance in dynamic environments (Eisenhardt, 1989), but organizational and top management team characteristics influence the pace of decisions (Wally & Baum, 1994). Personal and structural determinants seem to dictate decision pace. Baum & Wally (2003) propose that the greater a manager's cognitive ability, the faster his decision evaluation becomes. Similarly, Eisenhardt (1989) assumes that when managers accelerate their cognitive processes they make faster decisions.

Proposition #13: Intelligent systems are a means to accelerate a strategist's cognitive processes and positively influence decision pace.

Formal analysis slows decision pace (Fredrickson & Mitchell, 1984) but increases management consensus (Amason, 1996); when managers rely on data from formal systems, they cannot decide on the basis of the freshest decision-supporting information. The greater the number of alternatives to be analyzed, the slower the decision pace is assumed to be. Yet, an expanded search and the consideration of multiple alternatives tend to make decisions more successful (Nutt, 2004).

Proposition #14: Intelligent systems can perform formal analyses in real-time at scale, positively influencing decision pace, management consensus and process efficiency.

Roberto (2004) conducted a study to find out whether groups might be able to achieve both efficiency and consensus. His research is based on a sample of 10 strategic decisions and interviews with 78 participants in the decision processes from across different organizational levels. According to the findings, efficiency and consensus are positively related to implementation success (Roberto, 2004), in line with preceding research. However, three of the groups were able to achieve high levels of efficiency and consensus at the same time. This result is relevant because it breaks with the paradigm that enhancing efficiency comes at the expense of consensus and vice versa. The groups choosing an incremental approach outperformed those focusing completely on a definite selection of the final course of action. Greater efficiency and consensus were achieved when groups made a series of small but critical choices (decision criteria, elimination of options, contingent choices) along the decision process. Furthermore, this approach helped to structure complex problems by making transitional choices about individual decision elements and helped to sustain the legitimacy of the process. Three strategies for efficient strategic decision making and achieving management consensus have been proposed.

Firstly, effective managers should choose well-defined decision criteria before doing the analysis. Clearly defined criteria help to avoid decisions driven by loyalties or biases towards people and to resolve disputes which might arise during the decision process. By making strategic issues more manageable, well-established decision criteria facilitate efficiency and consensus. Roberto's (2004) results show higher levels of efficiency and consensus when decision criteria were well established.

Proposition #15: Effective managers define transparent, comprehensible decision criteria to guide the analyses of intelligent systems.

Secondly, effective managers should not choose directly from a definite set of options. Subsets of possible options should be eliminated over time, which helps to break complex decisions into smaller and more manageable pieces, increasing transparency and enhancing understanding.

Proposition #16: Effective managers use intelligent systems to systematically filter all possible options in order to evaluate the most probable alternatives based on defined criteria.
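A minimal sketch of such criteria-guided filtering follows, combining Propositions #15 and #16: options are first screened against a defined threshold and the remaining alternatives are ranked by a weighted score. The criteria, weights, threshold and option data are hypothetical.

```python
# Minimal illustrative sketch: options are screened against transparent,
# predefined criteria and the surviving alternatives are ranked by a weighted
# score. Criteria, weights, thresholds and option data are invented.
criteria_weights = {"strategic_fit": 0.5, "expected_return": 0.3, "feasibility": 0.2}

options = {
    "Option A": {"strategic_fit": 0.9, "expected_return": 0.6, "feasibility": 0.7},
    "Option B": {"strategic_fit": 0.4, "expected_return": 0.9, "feasibility": 0.5},
    "Option C": {"strategic_fit": 0.8, "expected_return": 0.3, "feasibility": 0.2},
}

def score(option_values):
    return sum(criteria_weights[c] * option_values[c] for c in criteria_weights)

# Step 1: eliminate options failing a minimum-fit threshold (an interim choice).
shortlist = {name: vals for name, vals in options.items() if vals["strategic_fit"] >= 0.5}

# Step 2: rank the remaining alternatives on the weighted criteria.
for name, vals in sorted(shortlist.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: weighted score {score(vals):.2f}")
```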

Thirdly, effective managers identify contingencies before implementation, making choices contingent upon specific events. Efficient strategic decisions need to be positioned within the organization, which is best achieved by building management consensus, because even the best analysis and decision can be neutralized by politics (Yu, 2002). Decisions have to be adapted to the organizational context and its development.

Proposition #17: Effective managers use intelligent systems to simulate if-then scenarios.

Possible Impact on the Managerial Role

Not much is left of the days when Drucker (1967) described the computer as a moron that makes no decisions and only carries out orders. Technological progress such as machine learning enables computers to replace skilled practitioners or to drive cars autonomously. For example, a Hong Kong based investment firm even appointed a decision-making algorithm to its board of directors (Dewhurst & Willmott, 2014). Another example is iCEO, a software designed to replace management, introduced by Fidler (2015). It is a virtual management software able to divide a complex task into small individual tasks autonomously and to assign single work packages to human workers if needed. Hofmann (2015), though, doubts the software's usefulness; in his view the software can at most account for micro-management. In addition, the manager would have to know all the answers and steps required beforehand in order for the software to work.

Nevertheless, intelligent technologies can and will partly automate and therefore eliminate jobs, according to Schatsky & Schwartz (2015) within the next five years. The more positive view favors humans working alongside smart machines. It is not new that simple and monotonous labor is automated, driven by management. But now, in an era where it becomes possible to also simulate and automate knowledge workers' perceptual and cognitive capabilities, it is also the manager who faces the edge of automation. The futurist Kelly (2012) predicts that information-intensive jobs, or any job dealing with paperwork, will be automated by the end of this century, starting with routine tasks like document analysis. Those tasks which cannot be automated by intelligent systems will be supplemented by them (Autor, 2014). Skills that cannot yet be automated, such as emotional intelligence, empathy and creativity, will become more important for the manager to have. So far there is no technology able to negotiate effectively, inspire, motivate or lead people, and intelligent technologies are currently not able to grasp social situations (Kirkland, 2014). Therefore, it is mainly the skills for interpersonal interaction that seem to become more important for managers.

Management is on the verge of transformation. Yet, the field of management is not likely to be disrupted right away. It takes a lot of time, effort, human expertise and monetary investment for so-called intelligent systems to achieve an outcome comparable to human performance, or to reach the point where they improve the flawed performance of managers. IBM's Watson for Oncology is a good example of how an intelligent system complements human skills and enhances human performance. Watson has not been taught by specifying how to recognize differences but by being given examples. The main task in machine learning is to give the system input features and determine the output features (Uzzi & Ferrucci, 2015). From the given examples the computer learns the relationships and can discover patterns and variations. Therefore, managers need to understand and learn how problems can be solved the machine learning way. Generally, to solve a machine learning problem, somebody needs to specify the problem to be solved and be able to identify data sets useful for solving it (Kirkland, 2014). Hence, the task of management is to define solvable problems and to combine technical skills and domain knowledge to engage in solving problems through machine learning. When Watson was applied to its first use case in healthcare, significant time and effort for pre-processing, post-processing and adapting the core engine from Jeopardy were required (Best, 2013). In total, three processes were needed for the new application in healthcare, which can be generalized to the adaptation of intelligent machines to new use cases:

1. Content adaptation: supply problem-specific data and information input
2. Training adaptation: weight the input according to problem pertinence
3. Functional adaptation: review problem-specific questions to correct the output if needed

Once the input factors are defined, Watson is shown what the right answer looks like; afterwards, it merely needs to be corrected when it is wrong. Indeed, managers are perceived as the differentiating factor in the age of intelligent machines. On the one hand, they frame and guide what the machines are doing and what kind of answers they will provide. On the other hand, they have to do what machines cannot:

- Thinking in creative ways and outside the box
- Asking the right questions
- Knowing where to locate adequate knowledge from domain experts
- Dealing with exceptions
- Tolerating ambiguity
- Acting humanly
- Using soft skills

The emergence of intelligent systems will impact how decisions are made in the future. They create a new level of context transparency, erase the problem of not knowing and reveal previously unknown insights. Managers are in charge of steering the transformation and guiding the organization through it. In this context, tough strategic decisions have to be made without the help of intelligent systems; in the end, managers are deciding about their own fate. There will not be a single predominant way of including smart machines. Ultimately, managers have to think about what they want these machines to do, how they should behave and how to work alongside them. Providing answers remains the computer's strength, as opposed to asking questions. Managers will direct and guide intelligent systems in their attempt to achieve the given objectives, while not neglecting to inspire the organization and to develop and train the right skills in order to learn to work effectively together with this kind of technology.

Addressing Limitations and Future Research Possibilities

The perspective provided in this thesis has not been tested systematically; accordingly, empirical proof is missing. A more profound and comprehensive evaluation would require an interdisciplinary approach. Deep domain knowledge of multiple research streams is needed in order to avoid rough assumptions about artificial intelligence or psychological processes. Therefore, the limitations of this work are recognized. Simplified views of technological, psychological and cognitive processes have been taken. In addition, the entirety of supposed influencing factors on strategic decision making could not be covered; for instance, the varying values of managers have not been considered. One might argue that the use of intelligent systems to actively support the practicing manager in his daily activities is wishful thinking. There are constraints on application in terms of willingness, time, effort and investment. However, an increasing number of technology ventures have developed intelligent systems which are in a position to partly replace or support practicing managers. Ultimately, the effects of such technologies will influence how practicing managers make decisions. In tasks like document or quantitative analyses, technology already outperforms humans. Further research on how managers utilize these possibilities is required. Too often managers have too little say in the development of decision support or information systems (Alter, 1976), which might be an explanation for the failure of AI to take effect at the strategy level. Research on managerial activities, time allocation and cognition identifies the areas where managers have the most flaws. In collaboration with, amongst others, system design and AI research, the functional requirements and scope of application for manager-supporting information systems should be defined. Hereby, it might be necessary to differentiate between the different roles of managers within an organization.

CONCLUSION

Although Drucker (1967) described the computer as a moron back in 1967, he also predicted that computers would make it possible to replace parts of labor and management. Nearly fifty years later, the prospect of substitution is becoming more real than ever. Perceptual and cognitive skills can be simulated by so-called intelligent systems. These systems are assumed either to replace humans or to work alongside them in the future. The man-computer symbiosis theory favors a future in which computers empower humans. Following this perspective, the prospects of an intellectual partnership between intelligent systems and managers are relevant. Research in managerial cognition revealed multiple factors influencing rationality in decision making. Managers apply simplified cognitive models to cope with the stimulus satiation and information overload of their environment. The outcome is selective ignorance of strategic issues as well as biases in decision making. As a result, human cognition compromises optimality in strategic decision making. Managers are restricted by their individual cognitive capacities and at most try to approximate rationality. However, intelligent systems bring with them the possibility for managers to compensate for their cognitive limitations. What is called an intellectual partnership means the ability for managers to transcend their cognitive frames and partly overcome cognitive limits. Furthermore, based on that cooperation, managers become able to make better informed and timelier decisions. Nevertheless, it would be naïve to trust machines blindly; it needs to be understood how they work, and their limits must also be known. An intelligent system like IBM's Watson proved its value in Jeopardy as well as in oncology, but to use it in the domain of strategic decision making a great amount of preparatory work is required. It takes a lot of time, effort, human expertise and monetary investment for so-called intelligent systems to achieve an outcome comparable to human performance. The most impressive results are achieved when the input or knowledge used is of high quality. But once the systems are adequately trained, they can be replicated and continuously improved. It is up to the human decision maker to shape the help to be received. At the current stage of development these intelligent machines are not going to outsmart humans. It might be a philosophical debate whether these machines are able to think, but at least they can perform formerly human cognitive tasks. However, managers are perceived as the differentiating factor in the age of smart machines. On the one hand, they frame and guide what the machines are doing and what kind of answers they will provide. On the other hand, they will concentrate on their strengths concerning social interactions and do what even intelligent machines are not able to do. Yet, “being better at making decisions is not the same as making better decisions” (Russell, 2015). Pure intelligence and rationality do not necessarily lead to the right decision. A purely rational agent might aim at maximizing profit at all costs. There are still morals, ethics and values preventing humanity from making totally rational decisions. Therefore, it is not just about making decision making more efficient but also about making the right decisions. Despite having a rather conservative opinion towards formal systems and artificial intelligence, Mintzberg (1994: 114) also appealed to managers “to think about the future in creative ways”.
On these grounds, the practicing manager should consider engaging in the development or use of intelligent support systems. It takes time, effort and the right knowledge to get the machines to a level where they add value for decision makers. Nevertheless, the promises and possibilities of intelligent systems seem to be worth the investment. As a saying often attributed to Albert Einstein puts it, “it has become appallingly obvious that our technology has exceeded our humanity” (Joy Palmer & Ian Richards, 1999: 193).

BIBLIOGRAPHY

Alter, S. L. 1976, November 1. How Effective Managers Use Information Systems. Harvard Business Review. https://hbr.org/1976/11/how-effective-managers-use-information-systems.

Amason, A. C. 1996. Distinguishing the Effects of Functional and Dysfunctional Conflict on Strategic Decision Making: Resolving a Paradox for Top Management Teams. Academy of Management Journal, 39(1): 123-148.

Andrews, K. R. 1971. The Concept of Corporate Strategy. New York: Dow Jones-Irwin.

Andrews, K. R. 1980. Directors’ responsibility for corporate strategy. Harvard Business Review, 30.

Andrews, K. R. 1987. The Concept of Corporate Strategy (3rd ed.). Homewood: Irwin.

Ansoff, H. I. 1965. Corporate strategy: an analytic approach to business policy for growth and expansion. New York: McGraw-Hill.

Autor, D. 2014. Polanyi’s Paradox and the Shape of Employment Growth. NBER Working Paper no. 20485, National Bureau of Economic Research. http://www.nber.org/papers/w20485.

Baum, R. J., & Wally, S. 2003. Strategic decision speed and firm performance. Strategic Management Journal, 24(11): 1107-1129.

Beavers, A. 2013. Alan Turing: Mathematical Mechanist. In S. B. Cooper & J. van Leeuwen (Eds.), Alan Turing: His Work and Impact: 481-485. Waltham: Elsevier.

Berman, J. J. 2013. Principles of Big Data: Preparing, Sharing, and Analyzing Complex Information. Boston: Morgan Kaufmann.

Best, J. 2013. IBM Watson: The inside story of how the Jeopardy-winning supercomputer was born, and what it wants to do next. TechRepublic. http://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/.

Bishop, C. 2006. Pattern Recognition and Machine Learning. New York: Springer.

Biundo, S., Höller, D., Schattenberg, B., & Bercher, P. 2016. Companion-Technology: An Overview. KI - Künstliche Intelligenz, 30(1): 11-20.

Biundo-Stephan, S., & Wendemuth, A. 2009. Sonderforschungsbereich zur Innovation im Mensch-Technik Dialog: Companion-Systeme. SFB/Transregio 62. https://www.google.de/search?q=Sonderforschungsbereich+zur+Innovation+im+Mensch-Technik+Dialog:+Companion-Systeme&ie=utf-8&oe=utf-8&gws_rd=cr&ei=7zkWV5eTH-TRgAbr-6e4AQ.

Biundo, S., & Wendemuth, A. 2010. Von kognitiven technischen Systemen zu Companion-Systemen. KI - Künstliche Intelligenz, 24(4): 335-339.

Biundo, S., & Wendemuth, A. 2015. Companion-Technology for Cognitive Technical Systems. KI - Künstliche Intelligenz, 30(1): 71-75.

Bohn, D. 2016, May 18. Google is making its assistant “conversational” in two new ways. The Verge. http://www.theverge.com/2016/5/18/11672938/google-assistant-chatbot-virtual-assistant-io-2016.

Bourgeois, L. J. 1980. Performance and consensus. Strategic Management Journal, 1(3): 227-248.

Bourgeois, L. J., & Eisenhardt, K. M. 1988. Strategic decision processes in high velocity environments: Four cases in the microcomputer industry. Management Science, 34(7): 816-835.

Brown, A. 2016, May 2. YOUR job won’t exist in 20 years: Robots and AI to “eliminate” ALL human workers by 2036. Express.co.uk. http://www.express.co.uk/life-style/science-technology/640744/Jobless-Future-Robots-Artificial-Intelligence-Vivek-Wadhwa.

Brynjolfsson, E., & McAfee, A. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: Norton & Company.

Brynjolfsson, E., & McAfee, A. 2016. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (Reprint). London: Norton & Company.

Burgelman, R. A. 1991. Intraorganizational ecology of strategy making and organizational adaptation: Theory and field research. Organization Science, 2(3): 239-262.

Burrus, D. 2015. What Can Watson Do for Your Company? WIRED. http://www.wired.com/insights/2015/02/what-can-watson-do-for-your-company/.

Busenitz, L. W., & Barney, J. B. 1997. Differences between entrepreneurs and managers in large organizations: Biases and heuristics in strategic decision-making. Journal of Business Venturing, 12(1): 9-30.

Cellan-Jones, R. 2014, December 2. Stephen Hawking warns artificial intelligence could end mankind. BBC News. http://www.bbc.com/news/technology-30290540.

Ceyhan, A. 2012. Surveillance as biopower. In D. Lyon, K. Ball, & K. D. Haggerty (Eds.), Routledge Handbook of Surveillance Studies: 38-46. New York: Routledge.

Chandler, A. D. 1962. Strategy and structure: chapters in the history of the industrial enterprise. Cambridge: M.I.T. Press.

Chen, M. 1995. A Model-Driven Approach to Accessing Managerial Information: The Development of a Repository-Based Executive Information System. Journal of Management Information Systems, 11(4): 33-63.

Cherkassky, V., & Mulier, F. M. 2007. Learning from Data: Concepts, Theory, and Methods, vol. 2. New Jersey: John Wiley & Sons.

Conlisk, J. 1996. Why Bounded Rationality? Journal of Economic Literature, 34(2): 669-700.

Cook, G. 2011, March 1. Watson, the Computer Jeopardy! Champion, and the Future of Artificial Intelligence. Scientific American. http://www.scientificamerican.com/article/watson-the-computer-jeopa/.

Corner, P. D., Kinicki, A. J., & Keats, B. W. 1994. Integrating organizational and individual information processing perspectives on choice. Organization Science, 5(3): 294-308.

Dalton, R., Mallow, C., & Kruglewicz, S. 2015. Disruption ahead - Deloitte’s point of view on IBM Watson. Deloitte Development LCC. http://www2.deloitte.com/us/en/pages/about-deloitte/solutions/cognitive-computing-and-ibm-watson.html.

Darrow, B. 2015. IBM sets up Watson Health unit in Cambridge. Fortune. http://fortune.com/2015/09/10/ibm-watson-health/.

Dean, J. W., & Sharfman, M. P. 1996. Does Decision Process Matter? A Study of Strategic Decision-Making Effectiveness. The Academy of Management Journal, 39(2): 368-396.

Deutschland - Land der Ideen. 2016. Companion technology for cognitive technical systems - individual digital assistants. Deutschland Land der Ideen. https://www.land-der-ideen.de/node/63191.

Dewhurst, M., & Willmott, P. 2014. Manager and machine: The new leadership equation. McKinsey&Company. http://www.mckinsey.com/global-themes/leadership/manager-and-machine.

Drucker, P. F. 1967. The manager and the moron. McKinsey&Company. http://www.mckinsey.com/business-functions/organization/our-insights/the-manager-and-the-moron.

Duhaime, I. M., & Schwenk, C. R. 1985. Conjectures on Cognitive Simplification in Acquisition and Divestment Decision Making. Academy of Management Review, 10(2): 287-295.

Dutton, J. E., & Duncan, R. B. 1987. The influence of the strategic planning process on strategic change. Strategic Management Journal, 8(2): 103-116.

Dutton, J. E., Fahey, L., & Narayanan, V. K. 1983. Toward understanding strategic issue diagnosis. Strategic Management Journal, 4(4): 307-323.

Dutton, J. E., Walton, E. J., & Abrahamson, E. 1989. Important Dimensions of Strategic Issues: Separating the Wheat from the Chaff. Journal of Management Studies, 26(4): 379-396.

Eisenhardt, K. M. 1989. Making fast strategic decisions in high-velocity environments. Academy of Management Journal, 32(3): 543-576.

Eisenhardt, K. M., & Zbaracki, M. J. 1992. Strategic decision making. Strategic Management Journal, 13(S2): 17-37.

EMC Education Services. 2009. Information Storage and Management: Storing, Managing, and Protecting Digital Information. Indianapolis: Wiley.

Endsley, M. R. 1995a. Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 37(1): 32-64.

Endsley, M. R. 1995b. Measurement of Situation Awareness in Dynamic Systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, 37(1): 65-84.

Fahey, L., & Christensen, H. K. 1986. Evaluating the Research on Strategy Content. Journal of Management, 12(2): 167-183.

Feldman, M. S., & March, J. G. 1981. Information in Organizations as Signal and Symbol. Administrative Science Quarterly, 26(2): 171-186.

Feltovich, P. J., Prietula, M. J., & Anders, K. 2006. Studies of Expertise from Psychological Perspectives. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance: 41-67. New York: Cambridge University Press.

Ferrucci, D., Levas, A., Bagchi, S., Gondek, D., & Mueller, E. T. 2013. Watson: Beyond Jeopardy! Artificial Intelligence, 199-200: 93-105.

Fidler, D. 2015, April 21. Here’s How Managers Can Be Replaced by Software. Harvard Business Review. https://hbr.org/2015/04/heres-how-managers-can-be-replaced-by-software.

Fiedler, K., & von Sydow, M. 2015. Heuristics and Biases: Beyond Tversky and Kahneman’s (1974) Judgement under Uncertainty. In M. W. Eysenck & D. Groome (Eds.), Cognitive Psychology: Revisiting the Classic Studies: 146-161. London: SAGE.

Forbes, D. P., & Milliken, F. J. 1999. Cognition and corporate governance: Understanding boards of directors as strategic decision-making groups. Academy of Management Review, 24(3): 489-505.

Fredrickson, J. W. 1983. Strategic Process Research: Questions and Recommendations. Academy of Management Review, 8(4): 565-575.

Fredrickson, J. W. 1985. Effects of Decision Motive and Organizational Performance Level on Strategic Decision Processes. Academy of Management Journal, 28(4): 821-843.

Fredrickson, J. W., & Mitchell, T. R. 1984. Strategic Decision Processes: Comprehensiveness and Performance in an Industry with an Unstable Environment. Academy of Management Journal, 27(2): 399-423.

Gantz, J., & Reinsel, D. 2012. The digital universe in 2020: Big data, bigger digital shadows, and biggest growth in the far east. IDC iView, 2007: 1-16.

George, A. L. 1980. Presidential decisionmaking in foreign policy: the effective use of information and advice. Boulder: Westview Press.

Gigerenzer, G. 1991. How to Make Cognitive Illusions Disappear: Beyond “Heuristics and Biases.” European Review of Social Psychology, 2(1): 83-115.

Gigerenzer, G. 1996. On narrow norms and vague heuristics: A reply to Kahneman and Tversky. Psychological Review, 592-596.

Gilovich, T., & Griffin, D. 2002. Introduction - Heuristics and Biases: Then and Now. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment: 1-18. Cambridge: Cambridge University Press.

Glazer, R., Steckel, J. H., & Winer, R. S. 1992. Locally Rational Decision Making: The Distracting Effect of Information on Managerial Performance. Management Science, 38(2): 212-226.

Gordon, H., M. 2008. Selected Readings on Strategic Information Systems. New York: IGI Global.

Graham, M. 2015, December 4. IBM’s Watson using data to transform health care. ChicagoTribune. http://www.chicagotribune.com/bluesky/originals/ct-watson-deborah-disanzo-ibm-health-bsi- 20151204-story.html.

Grimmes, S. 2008, August 1. Unstructured Data and the 80 Percent Rule. Breakthrough Analysis. https://breakthroughanalysis.com/2008/08/01/unstructured-data-and-the-80-percent-rule/.

Guerrini, F. 2015, August 3. How Artificial Intelligence Could Eliminate (Or Reduce) The Need For Managers. Forbes. http://www.forbes.com/sites/federicoguerrini/2015/08/03/managers-beware-from-smart-contracts-to-the-autonomous-ceo-ai-is-coming-for-your-job-as-well/.

Hambrick, D. C., & Mason, P. A. 1984. Upper Echelons: The Organization as a Reflection of Its Top Managers. Academy of Management Review, 9(2): 193-206.

Harrison, E. F. 1999. The Managerial Decision-Making Process (5th edition). Boston: South-Western College Pub.

Hart, S. L. 1992. An Integrative Framework for Strategy-Making Processes. Academy of Management Review, 17(2): 327-351.

Haswell, H., & Hickey, C. 2012, March 22. Memorial Sloan-Kettering Cancer Center, IBM to Collaborate in Applying Watson Technology to Help Oncologists. IBM News room. https://www-03.ibm.com/press/us/en/pressrelease/37235.wss.

Haugeland, J. 1989. Artificial Intelligence: The Very Idea. Cambridge: MIT Press.

Herbert, T. T., & Deresky, H. 1987. Generic strategies: An empirical investigation of typology validity and strategy content. Strategic Management Journal, 8(2): 135-147.

Higgins, C. 2013, July 10. 5 Ways IBM Watson Changes Computing. Mental Floss. http://mentalfloss.com/article/51546/5-ways-ibm-watson-changes-computing.

Hilbert, M. 2012. Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making. Psychological Bulletin, 138(2): 211-237.

Hill, C. W. L., Jones, G. R., & Schilling, M. A. 2013. Strategic Management: Theory: An Integrated Approach (11th ed.). Stamford: Cengage Learning.

Hofmann, R. 2015, May 12. No, managers cannot be replaced by software. Global Peter Drucker Forum. http://www.druckerforum.org/blog/?p=841.

Hogarth, R. M. 1981. Beyond discrete biases: Functional and dysfunctional aspects of judgmental heuristics. Psychological Bulletin, 90(2): 197-217.

Houdeshel, G., Watson, H. J., & Rainer, R. K. 1992. Executive Information Systems: Emergence, Development, Impact. New York: John Wiley & Sons.

Huff, A. S., & Reger, R. K. 1987. A review of strategic process research. Journal of Management, 13(2): 211-236.

Hutzschenreuter, T., & Kleindienst, I. 2006. Strategy-Process Research: What Have We Learned and What Is Still to Be Explored. Journal of Management, 32(5): 673-720.

IBM. 2016a. Speech to Text. IBM Watson Developer Cloud. https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/speech-to-text.html.

IBM. 2016b. Visual Recognition. IBM Watson Developer Cloud. https://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/visual-recognition.html.

IBM Corporation. 2016a. What is IBM Watson? IBM Watson. http://www.ibm.com/smarterplanet/us/en/ibmwatson/what-is-watson.html.

IBM Corporation. 2016b. IBM Watson Health: Welcome to the New Era of Cognitive Healthcare. IBM Watson Health. http://www.ibm.com/smarterplanet/us/en/ibmwatson/health/.

Inmon, W. H., & Nesavich, A. 2007. Tapping into Unstructured Data: Integrating Unstructured Data and Textual Analytics into Business Intelligence. Boston: Pearson Education.

Introna, L., & Wood, D. 2002. Picturing Algorithmic Surveillance: The Politics of Facial Recognition Systems. Surveillance & Society, 2(2/3). http://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/3373.

Jalote-Parmar, A., Badke-Schaub, P., Ali, W., & Samset, E. 2010. Cognitive processes as integrative component for developing expert decision-making systems: A workflow centered framework. Journal of Biomedical Informatics, 43(1): 60-74.

Janczak, S. 2005. The strategic decision-making process in organizations. Problems and Perspectives in Management, 3(1): 58-70.

Janis, I. L. 1972. Victims of groupthink: A psychological study of foreign-policy decisions and fiascoes. Oxford: Houghton Mifflin.

Janis, I. L. 1989. Crucial Decisions: Leadership in Policymaking and Crisis Management. New York: Free Press.

Jenkins, M. 1998. The theory and practice of comparing causal maps. In C. Eden & J.-C. Spender (Eds.), Managerial and Organizational Cognition: Theory, Methods and Research: 231-250. London: SAGE.

Johnson, G. 2016, April 4. To Beat Go Champion, Google’s Program Needed a Human Army. The New York Times. http://www.nytimes.com/2016/04/05/science/google-alphago-artificial-intelligence.html.

Palmer, J., & Richards, I. 1999. Get knetted: network behaviour in the new economy. Journal of Knowledge Management, 3(3): 191-202.

Kelly III, J. E. 2015. Computing, cognition and the future of knowing. Somers, New York: IBM Corporation.

Kelly III, J. E., & Hamm, S. 2013. Smart Machines: IBM’s Watson and the Era of Cognitive Computing. New York: Columbia University Press.

Kelly, K. 2012, December 24. Better Than Human: Why Robots Will — And Must — Take Our Jobs. WIRED. http://www.wired.com/2012/12/ff-robots-will-take-our-jobs/.

Keynes, J. M. 1963. Economic possibilities for our grandchildren. Essays in persuasion: 358-373. New York: Norton & Company.

Khanna, R., & Awad, M. 2015. Efficient Learning Machines: Theories, Concepts, and Applications for Engineers and System Designers. Berkeley: Apress.

Kiesler, S., & Sproull, L. 1982. Managerial Response to Changing Environments: Perspectives on Problem Sensing from Social Cognition. Administrative Science Quarterly, 27(4): 548-570.

Kim, W. C., & Mauborgne, R. 1997. Fair process: managing in the knowledge economy. Harvard Business Review, 75(4): 65-75.

Kim, W. C., & Mauborgne, R. A. 1993. Effectively Conceiving and Executing Multinationals’ Worldwide Strategies. Journal of International Business Studies, 24(3): 419-448.

Kirkland, R. 2014. Artificial intelligence meets the C-suite. McKinsey & Company. http://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/artificial-intelligence-meets-the-c-suite.

Klein, G. 1997. Developing Expertise in Decision Making. Thinking & Reasoning, 3(4): 337-352.

Klein, G. 2008. Naturalistic Decision Making. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50(3): 456-460.

Klein, G. A., Orasanu, J., & Calderwood, R. 1993. Decision Making in Action: Models and Methods. Norwood, New Jersey: Ablex Publishing.

Lechner, C., & Müller-Stewens, G. 2000. Strategy process research: What do we know, what should we know. The Current State of Business Disciplines, 4: 1863-1893.

Lee, H. 2014. Paging Dr. Watson: IBM’s Watson Supercomputer Now Being Used in Healthcare. Journal of AHIMA, 85(5): 44-47.

Leontief, W. 1983. Technological Advance, Economic Growth, and the Distribution of Income. Population and Development Review, 9(3): 403-410.

Levy, F., & Murnane, R. J. 2005. The New Division of Labor: How Computers Are Creating the Next Job Market. New Jersey: Princeton University Press.

Liang, T.-P., Lai, H.-J., & Ku, Y.-C. 2006. Personalized Content Recommendation and User Satisfaction: Theoretical Synthesis and Empirical Findings. Journal of Management Information Systems, 23(3): 45-70.

Licklider, J. C. R. 1960. Man-Computer Symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1: 4-11.

Lynley, M. 2016, May 18. Google unveils Google Assistant, a virtual assistant that’s a big upgrade to Google Now. TechCrunch. http://social.techcrunch.com/2016/05/18/google-unveils-google-assistant-a-big-upgrade-to-google-now/.

March, J. G., & Simon, H. A. 1958. Organizations. Oxford: Wiley.

Marr, B. 2016. A Short History of Machine Learning -- Every Manager Should Read. Forbes. http://www.forbes.com/sites/bernardmarr/2016/02/19/a-short-history-of-machine-learning-every-manager-should-read/.

Mason, R. O., & Mitroff, I. I. 1981. Challenging Strategic Planning Assumptions: Theory, Cases, and Techniques. New York: Wiley.

McCorduck, P. 2004. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (2nd ed.). Natick, Massachusetts: A K Peters.

McDougall, P. P., Covin, J. G., Robinson, R. B., & Herron, L. 1994. The effects of industry growth and strategic breadth on new venture performance and strategy content. Strategic Management Journal, 15(7): 537-554.

Mintzberg, H. 1978. Patterns in Strategy Formation. Management Science, 24(9): 934-948.

Mintzberg, H. 1990. The design school: reconsidering the basic premises of strategic management. Strategic Management Journal, 11(3): 171-195.

Mintzberg, H. 1994. The fall and rise of strategic planning. Harvard Business Review, 72(1): 107-114.

Mintzberg, H., & McHugh, A. 1985. Strategy Formation in an Adhocracy. Administrative Science Quarterly, 30(2): 160-197.

Mintzberg, H., Raisinghani, D., & Theoret, A. 1976. The structure of “unstructured” decision processes. Administrative Science Quarterly, 21(2): 246-275.

Nag, R., Hambrick, D. C., & Chen, M.-J. 2007. What is strategic management, really? Inductive derivation of a consensus definition of the field. Strategic Management Journal, 28(9): 935-955.

Naqa, I. E., & Murphy, M. J. 2015. What Is Machine Learning? In I. E. Naqa, R. Li, & M. J. Murphy (Eds.), Machine Learning in Radiation Oncology: Theory and Applications: 3-12. Heidelberg: Springer.

Nielsen, M. 2016. Is AlphaGo Really Such a Big Deal? Quanta Magazine. https://www.quantamagazine.org/20160329-why-alphago-is-really-such-a-big-deal/.

Nilsson, N. J. 1986. Probabilistic logic. Artificial Intelligence, 28(1): 71-87.

Noda, T., & Bower, J. L. 1996. Strategy making as iterated processes of resource allocation. Strategic Management Journal, 17(S1): 159-192.

Nooraie, M. 2012. Factors influencing strategic decision-making processes. International Journal of Academic Research in Business and Social Sciences, 2(7): 405.

Nutt, P. C. 1993. The Formulation Processes and Tactics Used in Organizational Decision Making. Organization Science, 4(2): 226-251.

Nutt, P. C. 2004. Expanding the search for alternatives during strategic decision-making. The Academy of Management Executive, 18(4): 13-28.

Paolo, D. 2016. Robot Companions for Citizens. http://www.robotcompanions.eu/.

Payne, J. W., Bettman, J. R., & Johnson, E. J. 1992. Behavioral Decision Research: A Constructive Processing Perspective. Annual Review of Psychology, 43(1): 87-131.

Pettigrew, A. M. 1973. The Politics of Organizational Decision-Making. London: Tavistock.

Pettigrew, A. M. 1992. The character and significance of strategy process research. Strategic Management Journal, 13(S2): 5-16.

Pfeffer, J. 1992. Managing with Power: Politics and Influence in Organizations. Boston: Harvard Business School Press.

Poole, D. L., & Mackworth, A. K. 2010. Artificial Intelligence: Foundations of Computational Agents. New York: Cambridge University Press.

Power, D. J. 2002. Decision Support Systems: Concepts and Resources for Managers. London: Greenwood Publishing Group.

Quinn, J. B. 1995. Strategic change: logical incrementalism. In H. Mintzberg, S. Ghoshal, & J. B. Quinn (Eds.), The Strategy Process: 105-114. London: Prentice Hall.

Rajagopalan, N., Rasheed, A. M. A., & Datta, D. K. 1993. Strategic Decision Processes: Critical Review and Future Directions. Journal of Management, 19(2): 349-384.

Rajawat, D. 2016, May 19. For Google it’s all about voice! Smartprix Blog. http://blog.smartprix.com/will-google-assistant-make-a-difference/.

Rivera, J., & Van der Meulen, R. 2013. Gartner’s 2013 Hype Cycle for Emerging Technologies Maps Out Evolving Relationship Between Humans and Machines. http://www.gartner.com/newsroom/id/2575515.

Roberto, M. A. 2004. Strategic decision-making processes beyond the efficiency-consensus trade-off. Group & Organization Management, 29(6): 625-658.

Rogers, H. 1987. Theory of Recursive Functions and Effective Computability. Cambridge: MIT Press.

Rösner, D., Friesen, R., Otto, M., Lange, J., Haase, M., et al. 2011. Intentionality in Interacting with Companion Systems - An Empirical Approach. In J. A. Jacko (Ed.), Human-Computer Interaction. Towards Mobile and Intelligent Interaction Environments, vol. 6763: 593-602. Berlin: Springer.

Russell, S. 2015. 2015: What do you think about machines that think? edge.org. https://www.edge.org/response-detail/26157.

Russell, S. J., & Norvig, P. 1995. Artificial Intelligence: A Modern Approach. New Jersey: Prentice Hall International.

Russell, S., & Norvig, P. 2014. Artificial Intelligence: A Modern Approach (3rd ed.). Essex: Pearson.

Russo, J. E., & Schoemaker, P. J. H. 2002. Winning Decisions: Getting It Right the First Time. New York: Random House.

Salomon, G., Perkins, D. N., & Globerson, T. 1991. Partners in cognition: Extending human intelligence with intelligent technologies. Educational Researcher, 20(3): 2-9.

Samuel, A. L. 1959. Some Studies in Machine Learning Using the Game of Checkers. IBM Journal of Research and Development, 3(3): 210-229.

Schaller, R. R. 1997. Moore’s law: past, present and future. IEEE Spectrum, 34(6): 52-59.

Schatsky, D., Muraskin, C., & Gurumurthy, R. 2015. Cognitive technologies: The real opportunities for business. Deloitte Review, (16): 114-129.

Schatsky, D., & Schwartz, J. 2015. Redesigning work in an era of cognitive technologies. Deloitte Review, (17): 5-21.

Schendel, D. E., & Hofer, C. W. (Eds.). 1979. Strategic Management: A New View of Business Policy and Planning. Boston: Little, Brown & Company.

Schwenk, C. R. 1984. Cognitive simplification processes in strategic decision-making. Strategic Management Journal, 5(2): 111-128.

Schwenk, C. R. 1985. Management illusions and biases: Their impact on strategic decisions. Long Range Planning, 18(5): 74-80.

Schwenk, C. R. 1988. The Cognitive Perspective on Strategic Decision Making. Journal of Management Studies, 25(1): 41-55.

Schwenk, C. R. 1995. Strategic Decision Making. Journal of Management, 21(3): 471-493.

Schwenk, C., & Thomas, H. 1983. Formulating the mess: The role of decision aids in problem formulation. Omega, 11(3): 239-252.

Shafer, G. 1986. Savage Revisited. Statistical Science, 1(4): 463-485.

Shim, J. P., Warkentin, M., Courtney, J. F., Power, D. J., Sharda, R., et al. 2002. Past, present, and future of decision support technology. Decision Support Systems, 33(2): 111-126.

Simon, H. A. 1957. Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization. London: Macmillan.

Simon, H. A. 1976. Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization (3rd ed.). New York: Free Press.

Simon, H. A. 1979. Information Processing Models of Cognition. Annual Review of Psychology, 30(1): 363-396.

Smircich, L., & Stubbart, C. 1985. Strategic Management in an Enacted World. Academy of Management Review, 10(4): 724-736.

Sproull, L. 1984. The nature of managerial attention. Advances in Information Processing in Organizations, vol. 1: 9-27. Greenwich: JAI Press.

Stacey, R. D. 1995. The science of complexity: An alternative perspective for strategic change processes. Strategic Management Journal, 16(6): 477-495.

Statt, N. 2016, May 20. Why Google’s fancy new AI assistant is just called “Google.” The Verge. http://www.theverge.com/2016/5/20/11721278/google-ai-assistant-name-vs-alexa-siri.

Steiner, G. A., & Miner, J. B. 1977. Management Policy and Strategy. New York: Macmillan.

Stimpert, J. L., & Duhaime, I. 2008. Managerial Cognition and Strategic Decision Making in Diversified Firms. SSRN Scholarly Paper no. ID 1095462, Rochester, New York: Social Science Research Network. http://papers.ssrn.com/abstract=1095462.

Stone, H. S. 1973. An efficient parallel algorithm for the solution of a tridiagonal linear system of equations. Journal of the ACM (JACM), 20(1): 27-38.

Stubbart, C. I. 1989. Managerial Cognition: A Missing Link in Strategic Management Research. Journal of Management Studies, 26(4): 325-347.

Tailor, K. 2015. The Patient Revolution: How Big Data and Analytics Are Transforming the Health Care Experience. New Jersey: John Wiley & Sons.

Thackray, A., Brock, D., & Jones, R. 2015. Moore’s Law: The Life of Gordon Moore, Silicon Valley’s Quiet Revolutionary. New York: Basic Books.

Tilles, S. 1963, July 1. How to Evaluate Corporate Strategy. Harvard Business Review. https://hbr.org/1963/07/how-to-evaluate-corporate-strategy.

Tripathi, K. P. 2011. Decision support system is a tool for making better decisions in the organization. Indian Journal of Computer Science and Engineering, 2(1): 112-117.

Tujetsch, J. 2015, February 20. What’s the Difference Between Structured and Unstructured Data? documentmedia.com. http://documentmedia.com/article-permalink-1573.html.

Turing, A. M. 1950. Computing Machinery and Intelligence. Mind, 59(236): 433-460.

Tversky, A., & Kahneman, D. 1974. Judgment under uncertainty: Heuristics and biases. Science, 185(4157): 1124-1131.

Uzzi, B., & Ferrucci, D. 2015. Can Computers Make Us Better Thinkers? Kellogg Insight. http://insight.kellogg.northwestern.edu/article/can-computers-make-us-better-thinkers.

Wally, S., & Baum, J. R. 1994. Personal and Structural Determinants of the Pace of Strategic Decision Making. Academy of Management Journal, 37(4): 932-956.

Walsh, J. P. 1988. Selectivity and Selective Perception: An Investigation of Managers’ Belief Structures and Information Processing. Academy of Management Journal, 31(4): 873-896.

Walsh, J. P. 1995. Managerial and Organizational Cognition: Notes from a Trip Down Memory Lane. Organization Science, 6(3): 280-321.

Wang, Y. 2003. On Cognitive Informatics. Brain and Mind, 4(2): 151-167.

Wang, Y. 2009. On Cognitive Computing. International Journal of Software Science and Computational Intelligence, 1(3): 1-15.

Wang, Y., & Ruhe, G. 2007. The Cognitive Process of Decision Making. International Journal of Cognitive Informatics and Natural Intelligence, 1(2): 73-85.

Weick, K. E. 1984. Small wins: Redefining the scale of social problems. American Psychologist, 39(1): 40-49.

Wendemuth, A., & Biundo, S. 2012. A Companion Technology for Cognitive Technical Systems. In A. Esposito, A. M. Esposito, A. Vinciarelli, R. Hoffmann, & V. C. Müller (Eds.), Cognitive Behavioural Systems: 89-103. Berlin: Springer.

Whitby, B. 2012. Artificial Intelligence: A Beginner’s Guide. Oxford: Oneworld Publications.

Wilks, Y. 2010. Introducing artificial Companions. In Y. Wilks (Ed.), Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues: 11-22. Amsterdam: John Benjamins Publishing.

Wood, B. J., Locklin, J. K., Viswanathan, A., Kruecker, J., Haemmerich, D., et al. 2007. Technologies for guidance of radiofrequency ablation in the multimodality interventional suite of the future. Journal of Vascular and Interventional Radiology, 18(1): 9-24.

Wooldridge, B., & Floyd, S. W. 1990. The strategy process, middle management involvement, and organizational performance. Strategic Management Journal, 11(3): 231-241.

Yu, L. 2002. The principles of decision making: Walking the fine edge between efficiency and consensus. MIT Sloan Management Review, 43(3): 15-16.
