How to Make Better Decisions. Examples from Exploration


Scientific Essay, 2014



Excerpt


ABSTRACT: Good decision-making creates value for a company. The decision to drill in the right place (or not) has strong economic repercussions. Moreover, the human approach to decision-making can be flawed. Many of the common flaws in decision-making are related to heuristic factors, that is, to human ways of thinking and drawing conclusions. These factors can negatively impact the outcomes of an exploration program.

This paper gives examples of cases in which geoscientific reasoning can lead to pitfalls, and explores why this is so. The paper then discusses the merits of the multiple working hypotheses (MWH) concept as an aid to harnessing seemingly contradictory data relating to several, often mutually exclusive, interpretations or scenarios. A simple method using a spreadsheet and decision-tree analysis is proposed to quantify the outcomes of multiple scenarios. The method is illustrated by a subsurface example in which two possible interpretations, reef versus volcano, are considered. An outlook on Bayesian statistics and other methods is included.

Key words: decision-making, biased decisions, heuristics, prospect evaluation, ontology, risk, multiple working hypotheses, decision-tree, score-table, Bayesian probability.

1.0 Introduction

Decision-making is the process of choosing between several alternative possibilities. That implies that there is a preferred outcome. The decision is often based on incomplete knowledge. All aspects leading to the choice should be considered.

Science is “another form of knowing” (Fischhoff, 2013), as opposed to information, knowledge, or belief. In the following, I will give examples where scientific knowledge as the basis for decision-making is flawed by “human-style” decision-making (next section).

Science is a systematic endeavour to build and organize knowledge in the form of testable explanations and predictions about matters of interest. The scope of science is continually expanding through the introduction of newly proposed hypotheses and through attempts to test such hypotheses using experiments or observations. At their core, tests of scientific theories are repeatable, so that the confirmation or refutation of a hypothesis can be independently verified.

Scientific reasoning requires that all aspects of a hypothesis to be tested, and all properties of the subject under evaluation, be considered before drawing conclusions (see also Wegener, 1929, p. 2). The essential method of science consists of formulating a hypothesis and then testing it by means of an experiment or by comparing it against observations. In geoscience, a great deal of hypothesis testing is conducted using observational rather than experimental evidence.

In practice, it is unrealistic, if not impossible, to know all of the properties of a system. The scientist must therefore select a set of properties that is both sufficient and specific, and so clearly describes the range of possible cases. However, even that may be nearly impossible. Fischhoff (2013) asked, “When deciding what to do, how much difference would it make to learn X, Y or Z?”; that is, how much would the added knowledge change the outcome of the decision? For example, seismic attributes, mineral content, pore geometry, and many other parameters may, or may not, be relevant from an exploration point of view (i.e., may or may not have a bearing on the probability of finding hydrocarbons), and therefore may, or may not, be important to know.¹
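Fischhoff's question, how much difference it would make to learn X, Y, or Z, has a formal counterpart in decision analysis: the expected value of information. The following sketch illustrates the idea with a drill/no-drill decision; all probabilities, costs, and payoffs are purely illustrative assumptions, not values from this paper.

```python
# Hypothetical drill/no-drill decision; every number is an illustrative assumption.
p_success = 0.25        # prior probability that the prospect holds hydrocarbons
value_success = 100.0   # payoff if we drill and succeed (arbitrary money units)
cost_dry = -20.0        # loss if we drill a dry hole
value_walk_away = 0.0   # not drilling costs (and earns) nothing

# Expected monetary value (EMV) of each action given only the prior.
emv_drill = p_success * value_success + (1 - p_success) * cost_dry
emv_prior = max(emv_drill, value_walk_away)

# With a perfect indicator we would drill only in the success cases.
emv_perfect = p_success * value_success + (1 - p_success) * value_walk_away

# Expected value of perfect information: an upper bound on what any
# additional parameter (seismic attribute, pore geometry, ...) can be worth.
evpi = emv_perfect - emv_prior
print(f"EMV(drill) = {emv_drill:.1f}, EVPI = {evpi:.1f}")  # 10.0 and 15.0 here
```

A parameter that cannot shift the decision, or whose acquisition costs more than the value it can add, “may not be important to know” in exactly this sense.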

The concept of ontology can provide a way out of this dilemma. An ontology is “a specification of a representational vocabulary for a shared domain of discourse—definitions of classes, relations, functions, and other objects” (Gruber, 1993). From a descriptive point of view, an ontology is “a set of classes (this implies a hierarchy of some kind) with types and objects, their relations, attributes, rules and restrictions” (ibid.). Many conceptual relationships exist, or lie hidden, among geological and geophysical dimensions; these relationships go largely unnoticed, yet they carry immense value in terms of the knowledge they embrace and are thus of great significance for geological interpretations (Nimmagadda et al., 2012). Therefore, when preparing a decision, or communicating the information that serves as its basis, we strive for completeness of information (the ontology) and describe the system itself, the basis for the decision.²
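Gruber's definition can be made concrete in a few lines of code. The fragment below is a toy illustration only; the class names, attributes, and relations are assumptions made for this sketch, not a published petroleum ontology. It shows how classes (in a hierarchy), attributes, restrictions, and relations pin down what can be said about a domain.

```python
from dataclasses import dataclass

# A minimal ontology fragment: classes in a small hierarchy, attributes with
# implied units, and a rule (restriction) enforced on one attribute.
@dataclass
class GeologicUnit:
    name: str
    age_ma: float              # attribute with an implied unit (million years)

@dataclass
class SourceRock(GeologicUnit):
    toc_percent: float         # total organic carbon, an attribute of this class

@dataclass
class Reservoir(GeologicUnit):
    porosity: float            # restriction: must be a fraction in [0, 1]

    def __post_init__(self):
        if not 0.0 <= self.porosity <= 1.0:
            raise ValueError("porosity must be a fraction between 0 and 1")

# Relations between objects are triples: (subject, predicate, object).
source = SourceRock("example source shale", 30.0, toc_percent=2.5)
reservoir = Reservoir("example platform carbonate", 20.0, porosity=0.18)
relations = [(source, "charges", reservoir)]
```

Two specialists would populate such a vocabulary with different classes and relations; neither would be wrong, which is precisely the situation described next.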

[Figure not included in this excerpt]

From a geological viewpoint, more than one ontology (and even overlapping ontologies) can exist. For example, a petrophysicist has a different way of looking at a formation than a palaeontologist, because (s)he has a different ontological framework. Both views are correct within the conceptual worlds in which they exist, but their perspectives differ. The Indian parable of the six blind men and the elephant comes to mind (illustrated in Figure 1). In this context, we can consider the ontology of an oil or gas field or, more accurately, the ontology of all formal explicit specifications of conceptualisation in a working petroleum system (see Magoon and Dow, 1994, for a summary of what constitutes a petroleum system).

The test of a hypothesis (or the confirmation of an observation) in petroleum geology is a costly exercise. The test usually requires drilling an exploration well, which must be properly sampled and logged. The cost of a drilling operation is of the order of millions of US dollars. Therefore, considering all possible aspects (“the ontology”) of a situation that may influence the outcome of the test is of key importance to the success of the exploration project. Evaluating and ranking the probability of success of a given drilling prospect is part of the regular workflow of an exploration office. This paper will later demonstrate that multiple scenarios (MWHs) can be evaluated concurrently.
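The concurrent evaluation of multiple scenarios reduces to a small decision tree. In the sketch below, the reef-versus-volcano structure mirrors the example treated later in the paper, but every probability and payoff is an illustrative assumption.

```python
# Multiple working hypotheses evaluated concurrently in one decision tree.
# Scenario weights and payoffs are illustrative assumptions only.
scenarios = {
    # hypothesis: (prior probability, outcome value if we drill, money units)
    "reef, charged and porous": (0.30, 120.0),
    "reef, tight or breached":  (0.20, -25.0),   # dry-hole cost
    "volcano":                  (0.50, -25.0),   # no reservoir at all
}

# The hypotheses are mutually exclusive and exhaustive, so weights sum to 1.
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

# Expected monetary value of drilling: probability-weighted sum over scenarios.
emv = sum(p * v for p, v in scenarios.values())
for name, (p, v) in scenarios.items():
    print(f"{name:26s} p={p:.2f} value={v:+7.1f}")
print(f"EMV(drill) = {emv:+.1f} -> {'drill' if emv > 0 else 'do not drill'}")
```

In a spreadsheet this is one probability column, one value column, and a sum-product; the benefit of keeping all hypotheses in the table is that none of them is discarded prematurely.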

[Figure not included in this excerpt]

Figure 1: An illustration of the old Indian parable of the six blind men and the elephant. “Six blind men were asked to determine what an elephant looked like by feeling different parts of the elephant's body. The blind man who feels a leg says the elephant is like a pillar; the one who feels the tail says the elephant is like a rope; the one who feels the trunk says the elephant is like a tree branch; the one who feels the ear says the elephant is like a hand fan; the one who feels the belly says the elephant is like a wall; and the one who feels the tusk says the elephant is like a solid pipe. - A king explains to them: All of you are right. The reason every one of you is telling it differently is because each one of you touched a different part of the elephant. So, actually the elephant has all the features you mentioned.”

2.0 Where Do We Go Wrong? The Human Side of Decision-Making

Human cognition has been optimized by evolution for survival, not necessarily for addressing scientific problems in an adequate fashion. Human assessment of uncertainty is usually not scientific, and is often based on unconscious and biased beliefs. The concepts presented in this section draw on ideas proposed by Tversky and Kahneman (1974) and Kahneman and Tversky (1979, 1984), where a more complete treatment of bias in decision-making can be found. The following sections illustrate biased decision-making processes.

2.1 Traps in Thinking, Logic and Perception

For our ancestors, the simplification of observations and conclusions was an important and often life-preserving strategy. For example, “I see a tiger, therefore I had better run away” was a valid life-preserving thought flow, and weighing the option of not following this immediate reflex was subordinate to the potential consequences of inaction. Human minds are optimized to solve human day-to-day problems, and therefore tend to jump to conclusions without a proper evaluation of the actual sequence of arguments, counter-arguments, and resulting conclusions. Nature, evolution, and cultural contexts have equipped humans with a set of methods (“congenital teachers”, Lorenz, 1973) to master the usual challenges of the human environment and to react appropriately. This mind-set is optimized to enhance the chance of survival; at the same time, it can be manipulated and is prone to prejudice. Nature has not given us the tools to use statistical methods to solve our problems, and at times statistical reasoning even appears counter-intuitive.

[Figure not included in this excerpt]

Figure 2: Decision-making for survival: Fight or flight?

Unfortunately, research into how the human mind makes decisions is conducted mostly in the fields of economics, psychology, and other academic disciplines, but not necessarily in those areas that are common scientific pastures for the office-going variety of petroleum explorationists. For example, Daniel Kahneman, who received the Nobel Prize in Economics in 2002, is a psychologist working in the field of economics, as was Amos Tversky; Konrad Lorenz (Nobel Prize, 1973) was a zoologist by trade.

Heuristic devices (those based on experience), which are often applied as shortcuts in decision-making, are commonly referred to as “educated guesses”, “rules of thumb”, “eighty-twenty rules”, or other stereotypical responses. They promise answers that are “approximately right rather than exactly wrong”, and can therefore serve as orientation in the absence of more reliable information. However, these heuristic devices often lead to wrong conclusions, and they are highly problematic in proper decision-making because they cannot be argued or refuted on scientific or statistical grounds; rather, they take the form of axioms, matters of belief, or “generally assumed truths”. Such non-scientific methods of decision-making should be avoided at all costs in petroleum exploration because, ultimately, they do not add value.

Some of the main decision-making pitfalls—examples in which human decision-making goes awry—are listed and explained below. Some of the points below are re­lated and may overlap to some degree.

2.1.1. Expectation

The expectation of structure and the expectation of meaning both reflect the assumption that “it should make sense”. It is human nature to try to find meaning in patterns and structures, even when they are meaningless.

[Figure not included in this excerpt]

Figure 3: An example of an ink blot used in the Rorschach test.

Consider the ink-blot test of Rorschach (Blum, 1934, Figure 3) used in psychology.

This skill of pattern recognition is useful when interpreting maps, seismic sections, and other geological and geophysical data. In this case, the brain of a skilled interpreter can identify patterns that would often evade computer pattern-recognition algorithms. Unfortunately, there is also a dangerous tendency to over-interpret data, and to see patterns and structures where only random data exist.


Experiments have been conducted in the geosciences to show how humans tend to impose structure and meaning upon patterns that are actually random:

- Bally (1983, volume 1, page 25; material contributed by Howard and Danbom, Conoco) showed seismic panels generated from purely random noise (Figure 4) to interpreters who were not aware of the random nature of the signals. When confronted with the task of interpreting such a section, they immediately began to impose geological interpretations on the random data. Some even said, “I have interpreted worse data”. What is the relevance of such an interpretation?
- Miall (1992, “An event for every occasion?”) challenged a prevailing sequence-stratigraphy chart and later (2010, page 400 ff.) generated randomly modified sequences resembling the Exxon sequence chart of Haq et al. (1987). Even then, geologists were able to match these random sequences to geological data and impose sense on a random data set.
- In an older study, Zeller (1964) devised an experiment similar to Miall's to test whether geologists would over-interpret the meaning of lithological profiles.

[Figure not included in this excerpt]

Figure 4: Random seismic data (Bally, 1983). Left: CDP gather made from random noise; center: final stack; right: final stack with trace mixing (partial summation of adjacent traces).
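Bally's demonstration is easy to reproduce. The sketch below is a loose imitation, not the original Conoco processing flow: it fills a panel with pure noise, applies a simple five-trace mix, and measures how much lateral coherence the mixing alone creates.

```python
import numpy as np

rng = np.random.default_rng(42)
n_traces, n_samples = 60, 300

# A "seismic section" of pure noise: every trace is independent random data.
panel = rng.standard_normal((n_traces, n_samples))

# Trace mixing: partial summation of adjacent traces (cf. Figure 4, right panel).
mixed = np.zeros_like(panel)
for i in range(n_traces):
    lo, hi = max(0, i - 2), min(n_traces, i + 3)
    mixed[i] = panel[lo:hi].mean(axis=0)    # 5-trace running mean

# Average correlation between neighbouring traces, before and after mixing.
def neighbour_corr(p):
    return np.mean([np.corrcoef(p[i], p[i + 1])[0, 1] for i in range(len(p) - 1)])

print(f"raw noise : {neighbour_corr(panel):+.2f}")   # close to zero
print(f"after mix : {neighbour_corr(mixed):+.2f}")   # strongly correlated
```

The mixed panel shows laterally continuous, event-like coherence even though not a single sample carries geological information; this is the “structure” that interpreters then explain geologically.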

Geoscientists are often overconfident in their interpretations, and they fall for the illusion that they “understand what's going on”, or at least could understand “what was going on” if only more and better data were available. This pitfall of overconfidence becomes even more dangerous when a geoscientist has worked in a particular area or on a specific problem for a long period of time (which is also related to the problem of complacency, discussed later); in such cases, (s)he tends to ignore the stochastic, non-linear, and unpredictable nature of geological processes.

The popular saying, “when you hear hoofbeats behind you, think horses, not zebras”, does not apply here. The most plausible explanation may not be the right one. The ‘zebras’ of exploration are successful prospects, which may have only a small probability of occurrence. Exploration is the search for the anomaly or the exception, not the rule.

2.1.2. Bias due to the Expectation of Uniqueness or Simplicity

Naive or linear cause-and-effect thinking also constitutes an expectation. Not every set of correlated parameters represents a cause-and-effect relationship: the fact that two or more parameters correlate at a significant level does not in itself establish causation. For example, a correlation exists between a country's per-capita chocolate consumption and its number of Nobel laureates (Messerli, 2012). However, a cause-and-effect relationship in this case has not been established (ibid.).
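How easily unrelated quantities correlate can be checked numerically. The sketch below generates two independent random walks; by construction there is no causal link between them, yet such trending series routinely produce large sample correlations.

```python
import numpy as np

rng = np.random.default_rng(7)

# Two independent random walks: cumulative sums of unrelated random steps.
x = np.cumsum(rng.standard_normal(500))
y = np.cumsum(rng.standard_normal(500))

# The correlation coefficient is often large in magnitude despite the complete
# absence of any causal connection between x and y.
r = np.corrcoef(x, y)[0, 1]
print(f"correlation of two independent random walks: r = {r:+.2f}")
```

Rerunning with different seeds spreads r widely around zero, which is the point: a single large correlation, by itself, establishes nothing about cause and effect.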

The belief in a single cause convinces us that we have found the answer to a problem, even if the solution we have discovered is only one of several possible solutions. Mark Twain (among others) has been credited with the statement, “when the only tool you have is a hammer, you tend to see every problem as a nail”.

The expectation of the uniqueness of a solution precludes the search for other competing, complementary, or better-fitting solutions. Unfortunately, we have been conditioned by real life, from school through higher education, to think as if only one right answer to a problem were possible. Therefore, we often stop looking for other possible solutions once we have found a single one. In reality, some effects can be caused by any one of several possible mechanisms; for example, a ball can be moved by kicking it, by throwing it, by hitting it with a cricket bat, or by any number of other causes.

Other effects can only be produced by two or more causes acting in tandem; such causes represent Boolean ‘and’ relationships. For geoscientists, a petroleum system is a good example in which several causes (here called ‘elements’) must act together to produce a hydrocarbon accumulation. Clearly, focusing on one single element of the petroleum system is wrong and will yield an incorrect understanding of the system.
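In standard prospect risking, this Boolean ‘and’ is expressed as a product: if the elements are treated as independent, the chance of success is the product of the individual element probabilities. The numbers below are illustrative assumptions only.

```python
import math

# Petroleum-system elements combined with a Boolean 'and': all must work.
elements = {
    "source rock present and mature": 0.7,
    "reservoir present":              0.6,
    "trap and seal effective":        0.5,
    "charge and migration timing":    0.6,
}

# Assuming independence, P(success) is the product of the element chances.
pos = math.prod(elements.values())
print(f"chance of success = {pos:.3f}")   # 0.7 * 0.6 * 0.5 * 0.6 = 0.126
```

The product form makes the argument quantitative: four moderately likely elements already combine into an unlikely event, and no single element can carry the evaluation on its own.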

Parsimony is a principle commonly applied in logic and problem-solving to select between competing hypotheses. According to the parsimony principle, the hypothesis with the fewest assumptions is selected and considered correct. Parsimony is often invoked to shorten a discussion or argument. The parsimony principle often seems compelling, and supersedes proper logic, because of its succinctness and apparent elegance; but why should that be?

[Figure not included in this excerpt]

In the natural sciences, parsimony is not considered an irrefutable principle of logic or a foregone scientific result, but merely one that satisfies our human proclivity for simplicity. The sentence “It is easier for the world to accept a simple lie than a complex truth” has been attributed to Alexis-Charles-Henri Clérel de Tocqueville, a French politician and thinker of the 19th century. Again, simplicity and succinctness do not translate into logical correctness.

2.1.3. Complacency and Belief Bias

Complacency, defined as “self-satisfaction accompanied by unawareness of actual dangers or deficiencies”, is another common problem related to expectation. Poor science is often characterized by an over-reliance on confirmation rather than the testing and refutation of competing hypotheses. In practice, we hear that “we have always done it like that and it has worked fine...”. This attitude is rooted in the evolution of efficient biological learning and the reinforcement of successful behaviours.

As stated before, the human mind has a tendency to simplify complex situations by means of substitution or belief (belief bias). “...beliefs and knowledge are intrinsically cognitive processes in that each involves an individual's claim regarding reality. In the case of beliefs, however, the statement is a subjective proposition...” (Hindman, 2012, ref. 9, quoted in Eveland and Cooper, 2013).

There is a conceptual overlap between knowledge and belief in science. Imagine, for instance, the beliefs of an astronomer in 1500 versus an astronomer in 2014. In 1500, the revolution of the Sun around the Earth would have been considered knowledge; today, it would be considered an errant belief (after Eveland and Cooper, 2013).

These tendencies of our mind to simplify and to rely on what seems established (established knowledge, belief) are a consequence of the fact that our cognitive tools are designed to handle familiar situations quickly and with a minimum of cognitive effort, so as to respond quickly and decisively in survival situations. Doing so has conferred an evolutionary advantage related to survival, but such an approach is not appropriate for hydrocarbon exploration, as oil and gas fields are not similar to one another: they are anomalies, or exceptions to the rule.

2.1.4. Intuition

In science, inductive reasoning is used to support many laws, such as the law of gravity, largely because the patterns that support these laws have been observed to hold countless times and counterexamples are absent. While this process is based in part on intuition, its consequences can also be strongly counter-intuitive. Hempel's raven paradox (Hempel, 1945a, b) is an example of a situation in which intuition leaves us out in the cold. Hempel describes the paradox using two logically equivalent statements:

(1) All ravens are black.
(2) Everything that isn't black isn't a raven.

If one were to observe many ravens and find that they are all black, one's belief in the statement that all ravens are black would increase.

But if one were to observe many red apples, each an instance confirming that everything that isn't black isn't a raven, one would intuitively not feel any more confident that all ravens are black, even though statements (1) and (2) are logically equivalent, so any evidence for (2) is also evidence for (1).

The conclusion seems absurd because, intuitively, there is no way to discover the colour of ravens without observing a raven.
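One standard resolution, not developed in this paper, comes from Bayesian confirmation theory: the red apple does confirm “all ravens are black”, but only by a vanishingly small amount, because non-black non-ravens are so numerous. The toy calculation below makes this explicit; the world sizes, the prior, and the specific alternative hypothesis (exactly one non-black raven) are all illustrative assumptions.

```python
# Hempel's raven paradox in Bayesian terms. Toy world sizes are assumptions:
R = 100            # ravens in the world
M = 1_000_000      # non-black non-ravens (red apples and the like)
prior = 0.5        # prior probability of H1: "all ravens are black"
# Alternative hypothesis H2: exactly one raven is non-black.

def posterior(prior, lr):
    """Update P(H1) given likelihood ratio lr = P(evidence|H1) / P(evidence|H2)."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

# Experiment A: sample a random raven and see that it is black.
lr_raven = 1.0 / ((R - 1) / R)     # = R / (R - 1), noticeably above 1
# Experiment B: sample a random non-black object and see it is not a raven.
lr_apple = (M + 1) / M             # barely above 1

print(f"after one black raven: P(H1) = {posterior(prior, lr_raven):.6f}")
print(f"after one red apple  : P(H1) = {posterior(prior, lr_apple):.6f}")
```

Both observations raise P(H1), as the logical equivalence of (1) and (2) demands, but the apple moves it only in the seventh decimal place. Intuition is thus right that apples are nearly worthless evidence about ravens, and wrong only in treating them as strictly worthless.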

[Figure not included in this excerpt]

Figure 5: A raven, probably blissfully unaware of Hempel's raven paradox.

But how do we argue in geoscience? Do we, at times, apply the same logic? Imagine a case in which we argue about whether seismic data should be interpreted as evidence for a reef or a volcano.

We could make the statement “(1) all reefs cause a velocity anomaly”, and then “(2) if there is no velocity anomaly, it is not a reef”. The interpreter might then observe a large number of cases, both reefs and non-reefs, that do not exhibit velocity anomalies, and may conclude that none of them are reefs. Hence, (s)he might throw out a couple of valid leads and prospects that should be in the company's portfolio and considered for exploration drilling.

[Figure not included in this excerpt]

2.1.5. Bias due to Lack of Examples or Imagination

[Figure not included in this excerpt]

Figure 6: Lack of imagination is arguably the biggest obstacle in exploration.

The opposite of intuition is bias due to an absence of examples or a lack of imagination, also called the illusion of validity, bias due to retrievability, or bias of imaginability. A classic case is Wegener's continental drift: his contemporaries could not imagine how the concept could be valid. Consequently, they did not even consider testing Wegener's hypothesis.

For example, an explorationist may give more weight to data that can be readily retrieved from a scouting-service subscription or an Internet search than to data retrieved from a paper source that may require formatting and editing.

Likewise, case studies from previous work or nearby locations may be weighted too highly, while conclusions drawn from other, more remote geographical areas may be too easily discarded as inapplicable to the study in question. By the same mechanism, people and organisations alike have a strong tendency to be all too certain about things that proved successful at one time. Such traps reflect the tendency toward inductive thinking, plausibility conclusions (instead of logical conclusions), and an overly strong consideration of known facts at the expense of possibly unknown facts (confirmation bias).

2.1.6. Risk Aversion

Risk aversion, or loss aversion, and its opposite, the overweighting of certainty (Kahneman and Tversky, 1979, 1984), are problems related to expectation. The avoidance of risk and loss is ingrained in human nature. In economics and decision theory, loss aversion refers to the tendency to strongly prefer avoiding losses over acquiring gains. “The individual will experience regret—the painful sensation of recognising that ‘what is’ compares unfavourably with ‘what might have been’. Conversely, if ‘what is’ compares favourably with ‘what might have been’, the individual will experience a pleasurable sensation, which we have called rejoicing.

We assume that when making a choice between two actions, the individual can foresee the various experiences of regret and rejoicing to which each action might lead, and that this foresight can influence the choice that the individual makes” (Sugden, 1985). Expectations of loss appear to be felt about twice as powerfully, psychologically, as expectations of gain. While risky actions may yield benefits in business, given opportunity costs, people generally prefer the certainty of a bad outcome to the uncertainty of not knowing the outcome.
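The “felt twice as powerfully” observation has a quantitative counterpart in Kahneman and Tversky's prospect-theory value function. The sketch below uses the parameter estimates commonly cited from their later work (alpha = beta = 0.88, lambda = 2.25) to show the asymmetry around the reference point.

```python
# Prospect-theory value function with commonly cited parameter estimates
# (alpha = beta = 0.88, loss-aversion coefficient lambda = 2.25).
ALPHA, BETA, LAM = 0.88, 0.88, 2.25

def subjective_value(x: float) -> float:
    """Felt value of a gain (x > 0) or loss (x < 0) relative to the status quo."""
    return x ** ALPHA if x >= 0 else -LAM * ((-x) ** BETA)

for outcome in (100.0, -100.0):
    print(f"outcome {outcome:+6.1f} -> felt value {subjective_value(outcome):+7.1f}")
# A loss of 100 units is felt about 2.25 times as strongly as a gain of 100.
```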

Loss aversion also leads to greater regret over actions taken than over actions not taken; more regret is experienced when a decision changes the status quo than when it maintains the status quo. Together, these forces of aversion give the status quo an advantage: people and companies alike are motivated to do nothing, or to maintain their current course, rather than to make a change.

[...]


¹ A similar version of this paper was presented in the Proceedings of the 38th Annual IPA Convention & Exhibition, Jakarta, May 2014, under the title “How to make good decisions - Examples from Exploration” (digital paper IPA14-B006). In addition to the published version, this PDF includes some of the cartoons used to present the paper during the convention. Minor changes and edits have been made to the body text and the title. Please note also the copyright note on pages 18 and 20.

² PT. PetroPEP Nusantara, Geoscience Consulting, Jakarta (www.petropep.com).
