Taxonomy of a Fast Data Business Model in the Mobility Market


Thesis (Bachelor), 2016

65 Pages, Grade: 1.00

Excerpt


CONTENTS

1 Introduction
1.1 Problem Statement
1.2 Motivation
1.3 Background
1.4 Methodology
1.5 Scope and Outline of this Thesis

2 From Big to Fast Data
2.1 Defining Big Data
2.2 Big Data Related Challenges
2.3 Fast Data

3 Decision Tree Algorithms
3.1 A Short Introduction to Decision Trees
3.2 Classification Trees
3.2.1 ID3
3.2.2 C4.5
3.2.3 CART
3.3 Decision Trees and Fast Data
3.3.1 Hoeffding Tree
3.3.2 VFDT
3.3.3 CVFDT
3.3.4 UCVFDT

4 Preliminary Business Model
4.1 Brief Introduction to Open Innovation
4.2 Step-By-Step to the Business Model Concept

5 Explorative Research
5.1 Case Studies
5.2 Expert Interviews
5.3 Interview - Oskar Dohrau
5.4 Interview - Jurgen Gotzenauer

6 Descriptive Research

7 Business Model Adaption

8 Conclusion and Future Work

References
Bibliography
Webography

List of Figures

List of Tables

9 Acronyms

Appendices

A Glossary

B Questionnaire

C Interview I

D Interview II

ABSTRACT

Today, the market around mobility is undergoing heavy change. The industry is preparing the shift from fossil fuels to their alternatives. Large funds are being raised to build new products based on electricity, either with rechargeable battery packs or solar energy. Another field of research and development is autonomous driving. Google and Tesla are prominent examples of companies developing technologies and prototypes of future autonomous cars. The information technology used in such cars employs data analysis and artificial intelligence to observe, measure, predict and control traffic situations. This technology is influenced by one of the trending technologies of the early 21st century, Big Data. The basic research in data analysis had its crucial test in many Big Data related projects and products, such as Facebook and Twitter. Part of the data analysis process are classification algorithms, used to classify the input data for targeted further processing. Decision trees are one class of classification algorithms; they are very precise but tend to overfit. This thesis introduces the taxonomy of a business model, related to Open Innovation, for developing a new decision tree algorithm for the mobility market. To that end, the data analysis process up to classification algorithms is introduced. The major decision tree algorithms are discussed in detail to build up the characteristics of the new decision tree algorithm. Furthermore, the Open Innovation model is introduced to work out the advantages and drawbacks of this model. Based on that theoretical information, the taxonomy of a preliminary business model is developed and visualized using the business model canvas. To prove this concept, expert interviews and questionnaires were used to gather feedback and new ideas. The collected information is used to modify the taxonomy of the business model. The resulting business model canvas shows the possibility of implementation.

ZUSAMMENFASSUNG

At the end of the second decade of the 21st century, the automotive market is undergoing major change. The industry is preparing for the end of fossil fuels and is eagerly investigating alternatives. Large investments are being made to create new products based on electricity, powered either by solar energy or by highly developed battery packs. Another field of investigation is autonomous driving. Companies such as Google and Tesla are developing technologies and prototypes that may represent the future generation of the automobile. The information technology used is based on artificial intelligence on the one hand and on strict data analysis algorithms on the other, both of which enable the autonomous system to measure, predict and control the current traffic situation. These technologies go back to the algorithms used in Big Data, which have celebrated successes at companies such as Facebook and Twitter. Decision trees are part of these algorithms; they are used for the precise classification of occurring phenomena. This thesis develops the concept of a business model, based on the Business Model Canvas, that enables the development of such decision trees for the automotive market with the help of Open Innovation. The most important decision tree algorithms are explained, as are the processes of Open Innovation. Both feed into the creation of the concept, which is subsequently revised using information collected from case studies, expert interviews and an online survey. The resulting concept shows that such a business model could be implemented, but points out that further investigation and the creation of a complete business model should be considered.

1 INTRODUCTION

In 2016, the annual global internet traffic will exceed the magic zettabyte mark and will double by 2020. Per-second internet traffic has grown from 100 GB per day in 1992, to 100 GBps in 2002, and to more than 20,000 GBps in 2015.1

1.1 Problem Statement

The rapid growth of global internet traffic is accelerated by the trends of Internet of Things (IoT) and Machine-to-Machine (M2M) communication. They transmit data from various sensors, usage information and other data to their service endpoints. Additionally, autonomous transportation and other artificial intelligence systems are a major part of future data exchange applications.

Illustration not included in this excerpt

Figure 1: Emerging Technology Hype Cycle, source: Gartner (2015a)

Because the amount of generated and stored data is rising worldwide, the number of data scientists has to grow likewise. The high demand for data scientists is reflected in Gartner's 2015 hype cycle2 of emerging technologies, as shown in figure 1. The global demand for data science skills can be met either by educating more people in the theory and practice of data science, or by developing fully automated software for multidisciplinary data processing.3

Combining several branches of learning in software-based artificial intelligence systems requires a technological transition from traditional data processing to modern streaming data processing systems. Traditional data processing relies on historic data and time-intensive processing pipelines, which are not usable for real-time decision-making processes. Modern data processing systems deal with real-time streaming data. They depend heavily on trained algorithms that provide decisions within milliseconds.

Research and Technical Development (RTD) engineers need access to trained decision networks to include them in their automated data processing software development projects. Because of the broad area of operations of such software, a decision network needs to be trained for any possible situation in which the software could request a decision. The challenges of creating such a decision network can be grouped into the following two categories:

- Challenges relating to the development and training of the decision network, and
- Challenges related to cross industry knowledge transfer.

The focus of this thesis is the mobility sector, which includes transportation systems such as bikes, cars or trains, together with their coordination and management systems and related applications. As Google is pushing for autonomous cars without a steering wheel by 2020, the software-based services need to be intelligent and autonomous as well.4 Such software relies on a decision-making algorithm or even a decision-making network.

Based on that information, the following research question can be posed: What does the taxonomy of an open innovation based business model for developing decision tree algorithms for use in generic real-time data analysis in the mobility sector look like?

1.2 Motivation

There are a large number of RTD facilities where data streams are used for real-time data analysis. For example:

- Pedestrian detection at very high frame rates.5
- Semantic image filtering and object detection.6
- Detection of the transportation mode.7
- Implementation of fast decision trees for real-time fault detection.8
- Damping prediction of power systems with renewable energy generation.9

1.3 Background

Illustration not included in this excerpt

Figure 2: Research scenario, source: Own diagram

To build up the taxonomy of a business model in real-time data analysis, this thesis uses a simple RTD scenario as shown in figure 2. The sensor network generates multiple data streams which should be analyzed and classified by the decision network for further processing.

According to that scenario, this thesis will try to build and evaluate a business model for developing decision-making services that can be integrated into such data processing pipelines.

1.4 Methodology


Illustration not included in this excerpt

Table 1: Research outline, based on: March and Smith (1995)

Table 1 illustrates the research activities and research outputs of the March and Smith (1995) framework that are covered by this thesis. The build column covers the quest for the basic concept of the business model; the evaluate column covers evaluating the completeness and understandability of the concept. The theorize and justify columns are not covered in this work; nevertheless, they are addressed in chapter 8.

Illustration not included in this excerpt

Table 2: Research methodologies retained for this thesis, based on: Palvia et al. (2004)

To select an adequate method mix for the research objectives, I analyzed a study on research methodologies by Palvia et al. (2004). The study outlines fourteen different methodologies. From those methodologies I retain the ones that fit well with the research objectives defined previously: speculation/commentary, frameworks and conceptual models, library research, literature analysis, case study, interview and secondary data (see table 2).


Illustration not included in this excerpt

Table 3: Research method mix, based on: March and Smith (1995); Palvia et al. (2004)

The category speculation/commentary refers to articles and research based on the knowledge and experience of the respective authors. They are not based on hard evidence, but they are indicators of new trends and technological directions.10 Thus, this thesis uses speculation/commentary as one of the contributors to building constructs.

Library research (which is also part of most of the other methodologies) summarizes and combines historic research. This methodology forms the basis for the design of the business model, relying on extensive library and literature research on big data, decision trees and open innovation.

Palvia et al. define frameworks and conceptual models as useful for researchers in Management Information Systems (MIS).11 In the case of this thesis, the artifact takes the form of a conceptual business model in real-time data analysis.

A case study generally refers to the in-depth study of a single phenomenon (e.g., one application, one technology) over time in a single organization.12 This thesis uses case studies as a method to prove and validate the business model concept, which is essentially based on frameworks.

Interviews are a separate category of data collection.13 In this thesis, interviews are essentially used to evaluate the business model concept through people who would use such a construct, such as managers, consultants and academics.

1.5 Scope and Outline of this Thesis

This thesis will introduce a preliminary taxonomy of a business model for an organization developing a decision making service for industries in the mobility sector. The target audience for this thesis is primarily innovation and business managers who implement a business or business unit, or who architect software systems in this sector.

A glossary is provided in appendix A to help fill in gaps in background or vocabulary.

Illustration not included in this excerpt

Figure 3: Graphical frame of reference, source: Own diagram

The thesis is structured in eight parts (see figure 3):

Chapter 1 presents the problem definition, motivations and background of this thesis, and the research methodology with which the research objectives shall be achieved.

Chapter 2 introduces the origins, the terms and concepts of big data. It defines what is meant by real-time data processing and fast data.

Chapter 3 gives an overview of decision trees used for classification and selection in real-time data analysis.

Chapter 4 introduces the major contribution of this thesis: the concept of the business model. In this part of the thesis, the fundamentals of open innovation are explained and described in detail.

Chapter 5 outlines the research done using case studies and interviews.

Chapter 6 summarizes the results of a questionnaire addressed to managers and employees of local RTD industries.

Chapter 7 is about the adaption of the business model concept. The adaption builds on a set of interviews, case studies and a questionnaire.

Chapter 8 presents the conclusions and gives an outlook on possible future research.

The structure of the questionnaire is provided in appendix B, and the transcripts of the expert interviews are attached in appendix C and appendix D.

2 FROM BIG TO FAST DATA

In recent years, there has been an increasing emphasis on data analysis. Through business analytics, smart cities and autonomous cars, organizations are examining how large datasets can be used to create and capture value for individuals, businesses, communities and governments.14

Against the vast increase of global data, the term big data refers to datasets of enormous size that typical database software is unable to process. Big data typically consists of masses of unstructured data, and the term describes the storage and analysis of such massive datasets. The technologies used include data compression, data reduction and machine learning.15

A traditional data analyst identifies challenges such as well-designed algorithms that do not scale to large datasets, and the possibility of known and unknown issues that have not yet been resolved.16

2.1 Defining Big Data

Illustration not included in this excerpt (figure 4; panel text includes "Data in many forms" and "Managing the reliability and predictability of inherently imprecise data types")

The most cited definition of Big Data is a three-part definition. It states volume, velocity and variety as its main elements.17 These three V's are defined as follows:18

- Volume: the huge size of the collected data that needs to be processed and analyzed is often the first aspect that comes to mind. Because Big Data deals with datasets much larger than traditional storage systems can handle, the units in question are often exabytes rather than gigabytes. An exabyte is one quintillion bytes, or one million terabytes. This parameter is constantly moving: what is huge in 2016 may not be so a few years ahead.
- Velocity: this term refers to the speed of data generation, transmission and input to data processing systems. It requires algorithms that process data and generate results with less time and fewer resources.
- Variety: the diversity of data sources, producing structured, semi-structured or unstructured data. From sensor networks to social media and mobile applications, the value of the data lies in the diversity of its sources.

In addition to this three V model, IBM added a fourth V, called veracity. It refers to the quality of the data and its plausibility. Some data sources, such as social media or the weather, produce unpredictable data fragments. This uncertainty has become a layer of Big Data in its own right, as shown in figure 4.19

There are many existing definitions of Big Data. As a result of research across several definitions in Big Data related papers, a tag cloud of key terms has been created, as shown in figure 5. The concept behind Big Data can be described as follows:20

Illustration not included in this excerpt

Figure 5: Key terms of Big Data related papers, source: De Mauro et al. (2015): p. 98

2.2 Big Data Related Challenges

The continuous development of the Internet has formed new data types that did not exist before. Therefore, new technologies were needed to deal with them, and a set of challenges has been identified:21

- Graph Mining: Graphs are ever-present and are used to describe networks such as computer networks, mobile and telecommunication networks, road and air traffic networks, pipe networks, electrical power grids, biological networks, or social networks. Graph mining is used to identify irregularities in graphs and to use them for further decision making.
- Social Network Mining: The concept of Social Network Analysis is to identify people's social interactions and group dynamics. The main tasks of mining social networks are identifying groups, the purpose of groups, and group evolution in terms of group value and group relationships.
- Data Stream Mining: Encapsulates data processing in dynamic environments where data is continuously generated at high rates. Such environments include sensor networks, process monitoring, or traffic monitoring and management. The learning system for such data must be ready to be applied at any time between the arrivals of two data samples, regardless of the order of the samples. In contrast to prior data mining paradigms, mining data streams imposes limited learning time and immediate processing of data samples.
- Unstructured or Semi-Structured Data Mining: The majority of collected data is unstructured, as in the case of images or videos, or semi-structured, as in HTML documents. This data is transformed into another representation in order to apply existing data mining algorithms. In text mining, Natural Language Processing (NLP) is used to find such representations.
- Spatio-Temporal Data Mining: Mining data with spatial and temporal characteristics is challenging because the datasets lie in continuous spaces and, in contrast to classical data mining, the focus is set on local pattern recognition and on geometric and temporal computations. Typical data is generated by sensors of earthquake detectors or by mobile and vehicle navigation tracking.
- Distributed Platforms and Parallel Processing: Google created a programming model called MapReduce to process parallelizable problems by distributing them to a processing network (see the sketch after this list). Other companies emulated that architecture in open source frameworks such as Apache Hadoop.
- Parallelization of Existing Algorithms: Many algorithms used in data mining platforms are available as open source packages, and a number of machine learning algorithms have been parallelized and optimized for distributed environments.
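To make the MapReduce model named above concrete, the following minimal Python sketch simulates the map and reduce phases of a word count on a single machine. In a real deployment, a framework such as Hadoop would distribute these phases across a cluster; the function names and the in-memory shuffle here are purely illustrative.

from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Emit a (word, 1) pair for every word in one document."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Sum the counts emitted for each distinct word."""
    counts = defaultdict(int)
    for word, value in pairs:
        counts[word] += value
    return dict(counts)

if __name__ == "__main__":
    documents = [
        "fast data needs fast algorithms",
        "big data needs scalable algorithms",
    ]
    # In MapReduce the map tasks run in parallel on different nodes;
    # here they are simply applied document by document.
    mapped = chain.from_iterable(map_phase(doc) for doc in documents)
    print(reduce_phase(mapped))

The point of the model is that the map phase is embarrassingly parallel, while the reduce phase only requires the emitted pairs to be grouped by key.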

However, to rise to the challenge of solving the problems of today and tomorrow, data processing techniques and algorithms need to be developed that solve the right problem. Especially when dealing with data streams, typical data streaming models are designed to cover a variety of domains. This missing specialization leads to issues such as misleading interpretation and prediction. When building predictive algorithms that incorporate multi-domain knowledge, the right optimization criteria have to be chosen, and under usual conditions multiple criteria have to be optimized simultaneously. New algorithms need to be developed that intrinsically and simultaneously address memory consumption, predictive performance, automatic self-monitoring and tuning, as well as auto-adaptation. Therefore, models should be simple and depend on carefully tuned parameters. Meeting these criteria makes it possible to process big and fast data.22

2.3 Fast Data

Technologies usually acknowledge that data exists on a time continuum and is therefore not stationary. The classical way of dumping data into large databases is giving way to real-time or near real-time data processing. The fact that data is highly interactive and of greatest value at the time of its creation opens up new opportunities for performing high-velocity actions on newly created or incoming data. This is the beginning of the data management pipeline, at which, for example, a trade is placed, a recommendation is made or an ad is served. Shortly after data is put into the pipeline, it can be investigated relative to other data that arrived earlier. As data begins to age, its value shifts towards its historical context.23


Illustration not included in this excerpt

Figure 6: Fast Data represents the velocity of Big Data, source: Jarr (2015): p. 5

Fast Data models deal with data in three dimensions: accuracy, resources, and processing time. These dimensions interact: a reduction in time and resources may affect accuracy, caching more information may reduce processing time at the expense of resources, and processing less input data reduces execution time and resource usage at the cost of accuracy.24

To minimize the interdependent issues of the dimensions of Fast Data processing models, a few design principles can be formalized:25

- Batch Processing: This principle describes a paradigm of processing large datasets in parallel by dividing them into smaller pieces. The processing usually requires a complete dataset before starting, and the results are synchronized by blocking the output until all processing tasks have finished.
- Incremental Processing: As the opposite of batch processing, the incremental processing paradigm does not rely on the size or completeness of the input data. The data fragments are consumed at the time of arrival, without any blocking operations or synchronization needed (a minimal sketch follows this list).
- (Near) Real-Time Latency Constraints: The user-defined requirements of Fast Data processing can be composed of different attributes restricting the behavior of the processing pipeline within the three dimensions.
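As an illustration of the difference between the first two principles, the following sketch computes the mean of a data stream twice, using nothing beyond the Python standard library: the batch variant needs the complete dataset before it can produce a result, while the incremental variant updates its state with every arriving sample and can be queried at any time. The sensor values are invented for illustration.

def batch_mean(samples):
    """Batch processing: requires the complete dataset before producing a result."""
    return sum(samples) / len(samples)

class IncrementalMean:
    """Incremental processing: consumes one sample at a time, no blocking or buffering."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, value):
        self.count += 1
        # Standard online update: shift the mean by a fraction of the new error.
        self.mean += (value - self.mean) / self.count
        return self.mean

if __name__ == "__main__":
    stream = [42.0, 40.5, 43.2, 41.1]      # stand-in for an unbounded sensor stream
    print(batch_mean(stream))               # available only after all samples have arrived

    online = IncrementalMean()
    for sample in stream:
        current = online.update(sample)     # a result is available after every sample
    print(current)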

Fast Data comprises the streaming computational model for processing and analyzing gigantic datasets. Data stream processing enables decision-making in real time. A data stream is defined as follows.26

Illustration not included in this excerpt

Definition 2.1 outlines the complexity of processing real-time data. In data stream processing, the learning effort of the decision-making process is crucial. The combination of offline and online learning might improve the generated value. Online learning is very fast in terms of processing time and adaptation rate; it processes data serially and builds models incrementally. Offline learning allows the use of more complex data mining algorithms at the cost of processing time and resources. The former is able to process Fast Data, the latter allows the processing of Big Data. The combination of both can enhance the decision-making process: for example, the decision which action should be taken in the current traffic situation is made online, while the context can be preprocessed offline.27

A proposed method combines offline and online learning by using preprocessed and trained models for processing real-time data streams. Figure 7 depicts the machine learning pipeline, with the traditional way on the top path and an optimized path, using a compiled model, at the bottom. Because the same model is run over and over again, the proposed method makes use of a specialized predictor.28 A minimal sketch of this combination is given after figure 7.

Illustration not included in this excerpt

Figure 7: The machine learning pipeline, source: Bernstein, Dixon, and Levy (2010): p. 2
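The following sketch illustrates the combination of offline and online learning described above, assuming scikit-learn is available: a decision tree classifier is trained offline on historical, labelled data, and the resulting model is then only applied to each arriving sample, keeping the per-sample latency low. The feature layout (speed and distance to an obstacle) and the brake/keep-going labels are invented for illustration and are not taken from the thesis.

from sklearn.tree import DecisionTreeClassifier

# --- Offline phase: train on historical, labelled data (slow, resource-intensive) ---
# Hypothetical features: [speed_kmh, distance_to_obstacle_m]; label: 1 = brake, 0 = keep going
X_history = [[30, 50], [80, 20], [50, 5], [20, 100], [90, 10], [40, 60]]
y_history = [0, 1, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=3)
model.fit(X_history, y_history)

# --- Online phase: score each arriving sample with the pre-trained model (fast) ---
def incoming_samples():
    """Stand-in for a real-time sensor stream."""
    yield [70, 15]
    yield [25, 80]

for sample in incoming_samples():
    decision = model.predict([sample])[0]   # milliseconds, no retraining on the hot path
    print(sample, "->", "brake" if decision == 1 else "keep going")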

3 DECISION TREE ALGORITHMS

There exists a huge number of methods used in data mining. Figure 8 illustrates this taxonomy. Decision tree methods are located at the end of the path from discovery methods via prediction methods to classification methods. At the same level are Neural Networks (NNs), Bayesian Networks, Support Vector Machines (SVMs), and instance-based methods.29

Illustration not included in this excerpt

Figure 8: The taxonomy of data mining methods, source: Rokach and Maimon (2014): p. 8

This chapter is based on literature research that has revealed the following standard books:

- Artificial Intelligence: A Modern Approach: Russell and Norvig (2010)
- Artificial Intelligence: Foundations of Computational Agents: Poole and Mackworth (2010)
- An Introduction to Machine Learning: Kubat (2015)
- Data Mining: Concepts and Techniques: Jiawei et al. (2012)
- Data Mining With Decision Trees: Theory and Applications: Rokach and Maimon (2014)

3.1 A Short Introduction to Decision Trees

A decision tree is an algorithm describing a function that takes a vector of input attribute values and outputs a single value, called the decision. The input and output values can be either discrete or continuous. This introduction focuses on decision trees with discrete input values and exactly two possible output values. This is called a Boolean classification, classifying a single input as either true or false. To reach a decision, a decision tree evaluates a sequence of tests on the input. Each single test, called a node, checks a specific input attribute. The leaf nodes represent the possible return values of the decision tree.30

The following example from Russell and Norvig (2010) illustrates the building process of a decision tree. The example is built on the decision whether to wait for a table at a restaurant. The defined goal is called WillWait. The attributes of the input that the decision tree should consider are listed next:31

- Alternate: is there an alternative restaurant adjoining;
- Bar: is there a bar to wait in;
- Fri/Sat: is it Friday or Saturday;
- Hungry: are we hungry;
- Patrons: how many guests are in the restaurant, possible values are: None, Some, and Full;
- Price: the price tag of the restaurant, possible values are: $, $$, and $$$;
- Raining: is it raining outside;
- Reservation: have we made a reservation;
- Type: the kind of the restaurant, possible values are: Local, International, and Asian;
- WaitEstimate: the estimated wait time by the host in minutes, possible values are: 0-10 minutes, 10-30, 30-60, and >60.

The Boolean decision tree can be expressed by the logical statement Goal ⇔ (Path1 ∨ Path2 ∨ ...). For the goal to be true, only a single path has to lead to a true leaf. A path is a conjunction of nodes checking attributes of the input along that path, such as Path = (Patrons = Full ∧ WaitEstimate = 0-10), shown as the rightmost path in figure 9.32 A code sketch of evaluating such a path is given after figure 9.

Illustration not included in this excerpt

Figure 9: An example of a decision tree for deciding whether to wait for a table at a restaurant, source: Russell and Norvig (2010): p. 699
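To show how such paths are evaluated, the following sketch encodes a fragment of the WillWait tree as nested Python dictionaries and follows the attribute tests down to a Boolean leaf. Only the Patrons and WaitEstimate tests of the rightmost path are reproduced; the remaining branches are simplified placeholders, not the full tree from figure 9.

# A node tests one attribute; each branch leads to a subtree or a Boolean leaf.
WILL_WAIT_FRAGMENT = {
    "attribute": "Patrons",
    "branches": {
        "None": False,                  # nobody there: do not wait
        "Some": True,                   # free tables: wait
        "Full": {                       # full: the wait estimate decides
            "attribute": "WaitEstimate",
            "branches": {
                ">60": False,
                "30-60": True,          # illustrative placeholder for a deeper subtree
                "10-30": True,          # illustrative placeholder for a deeper subtree
                "0-10": True,           # rightmost path: Patrons = Full AND WaitEstimate = 0-10
            },
        },
    },
}

def decide(tree, example):
    """Follow the attribute tests until a Boolean leaf is reached."""
    while isinstance(tree, dict):
        tree = tree["branches"][example[tree["attribute"]]]
    return tree

example = {"Patrons": "Full", "WaitEstimate": "0-10"}
print(decide(WILL_WAIT_FRAGMENT, example))   # True -> WillWait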

While in operations research decision trees are commonly used as hierarchical models of decisions and their results, in the context of Fast Data decision trees can be classified into two main categories:33

- Classification Trees: operate on a classifier model, where the result of the tree is the predefined class to which the input belongs, for example the WillWait decision tree of figure 9;
- Regression Trees: operate on a regression model. The result is a real value, for example the total toll of a planned trip or the price of a hotel.

This work focuses on classification trees.

3.2 Classification Trees

The classification of data is a two-step approach, where the first step is the learning step and the second the classification step. Figure 10 illustrates this process. During the learning step, a classification model describing the classes and concepts of the data is built from a learning set of input data, called the training set. A classification algorithm analyzes the training data and builds a classifier containing a set of classification rules. The second step, the classification step, applies this rule set to a test dataset to evaluate its accuracy and, if the accuracy is acceptable, the classifier executes its rule set on new input data.34 A minimal sketch of this two-step process is given after figure 10.

Illustration not included in this excerpt

Figure 10: The data classification process of (a) Learning and (b) Classification, source: Jiawei et al. (2012): p. 329
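A minimal sketch of the two-step process, assuming scikit-learn and its bundled iris dataset: the learning step builds the classifier from the training portion of the data, the classification step estimates its accuracy on a held-out test set and, if the accuracy is acceptable, applies the rule set to a new, unseen sample. The acceptance threshold and the new sample are chosen for illustration only.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Step 1 (learning): build the classifier, i.e. the set of classification rules,
# from the training portion of the data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
classifier = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2 (classification): estimate the accuracy on the held-out test data ...
accuracy = accuracy_score(y_test, classifier.predict(X_test))
print(f"estimated accuracy: {accuracy:.2f}")

# ... and, if acceptable, apply the rule set to new, unseen input data.
if accuracy >= 0.9:
    new_sample = [[5.8, 2.7, 4.1, 1.0]]   # illustrative measurements
    print("predicted class:", classifier.predict(new_sample)[0])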

The classification process can also be executed incrementally. In that case, for each new input sample, the learning procedure and the classification procedure are applied in sequence.31

The machine learning algorithm Iterative Dichotomiser 3 (ID3) was developed at the end of the 1970s. The C4.5 algorithm was later introduced as the successor of ID3. Another decision tree algorithm, the Classification And Regression Tree (CART), was developed independently during the early 1980s. These are the classic decision tree algorithms. All three implement a greedy approach to inducing decision trees in a top-down, recursive, divide-and-conquer strategy.32

Illustration not included in this excerpt

Algorithm 1: Generic decision tree induction algorithm, source: Aggarwal (2015): p. 296

Algorithm 1 illustrates the generic strategy for inducing decision trees. The algorithm begins with the full training set at the root node of the decision tree and recursively partitions the data into lower-level nodes according to a predefined split criterion. Some decision tree algorithms implement a special criterion to abort the induction, called the stopping criterion. A simple stopping criterion is met when all training samples at a node belong to a single class, which then becomes a leaf node. This may lead to overfitting, where the decision tree does not generalize to new, unseen samples. To avoid this issue, some algorithms include methods for removing such leaf nodes from the tree, known as pruning.33
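Since Algorithm 1 itself is not reproduced in this excerpt, the following sketch outlines the generic top-down, divide-and-conquer strategy it describes: start with the full training set at the root, turn pure nodes into leaves (the simple stopping criterion mentioned above), otherwise select a split attribute via a pluggable criterion and recurse on the resulting partitions. The function and argument names are illustrative, and pruning is omitted.

def induce_tree(samples, labels, attributes, choose_split):
    """Generic top-down, divide-and-conquer decision tree induction.

    `samples` are dicts mapping attribute names to values; `choose_split` is the
    pluggable split criterion (e.g. information gain in ID3) and returns the
    attribute to test at this node.
    """
    # Stopping criterion: all samples at this node belong to one class -> leaf node.
    if len(set(labels)) == 1:
        return {"leaf": labels[0]}
    # No attribute left to test -> leaf with the majority class.
    if not attributes:
        return {"leaf": max(set(labels), key=labels.count)}

    attribute = choose_split(samples, labels, attributes)
    node = {"attribute": attribute, "branches": {}}

    # Partition the data by the values of the chosen attribute and recurse.
    for value in {sample[attribute] for sample in samples}:
        subset = [(s, l) for s, l in zip(samples, labels) if s[attribute] == value]
        sub_samples = [s for s, _ in subset]
        sub_labels = [l for _, l in subset]
        remaining = [a for a in attributes if a != attribute]
        node["branches"][value] = induce_tree(sub_samples, sub_labels, remaining, choose_split)
    return node

if __name__ == "__main__":
    samples = [{"Patrons": "Some"}, {"Patrons": "Full"}, {"Patrons": "None"}]
    labels = ["Yes", "No", "No"]
    first_attribute = lambda s, l, attrs: attrs[0]   # trivial split criterion for the demo
    print(induce_tree(samples, labels, ["Patrons"], first_attribute))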

3.2.1 ID3

The ID3 algorithm is a very simple decision tree algorithm and is commonly used for teaching purposes. It uses information gain as its splitting criterion and does not apply any pruning methods.34
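To make the information gain criterion concrete, the following sketch computes the entropy of a label set and the gain obtained by splitting on a single attribute; at each node, ID3 selects the attribute with the highest gain. The tiny WillWait-style dataset is invented for illustration.

from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(samples, labels, attribute):
    """Entropy reduction achieved by splitting the samples on `attribute`."""
    total = len(labels)
    remainder = 0.0
    for value in {s[attribute] for s in samples}:
        subset = [l for s, l in zip(samples, labels) if s[attribute] == value]
        remainder += len(subset) / total * entropy(subset)
    return entropy(labels) - remainder

# Illustrative training set: should we wait for a table?
samples = [
    {"Patrons": "Some", "Hungry": "Yes"},
    {"Patrons": "Full", "Hungry": "Yes"},
    {"Patrons": "None", "Hungry": "No"},
    {"Patrons": "Some", "Hungry": "No"},
]
labels = ["Yes", "No", "No", "Yes"]

print(information_gain(samples, labels, "Patrons"))  # 1.0: Patrons separates the classes perfectly
print(information_gain(samples, labels, "Hungry"))   # 0.0: Hungry carries no information here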

[...]


1 Cf. Cisco (2016).

2 Cf. Gartner (2016a).

3 Cf. Gartner (2015b).

4 Cf. Forbes (2015).

5 Cf. Benenson et al. (2012).

6 Cf. Costea and Nedevschi (2016).

7 Cf. Manzoni et al. (2011).

8 Cf. Lee, Alena, and Robinson (2005).

9 Cf. Wang et al. (2015).

10 Cf. Palvia et al. (2004).

11 Cf. Palvia et al. (2004).

12 Cf. Palvia et al. (2004).

13 Cf. Palvia et al. (2004).

14 Cf. McKinsey & Company (2011): pp. 1-25.

15 Cf. Lake and Drake (2014): pp. 1-3.

16 Cf. Japkowicz and Stefanowski (2016): p. 13.

17 Cf. Gartner (2016b).

18 Cf. Brynjolfsson (2012): pp. 5-6.

19 Cf. Schroeck et al. (2012): pp. 6-7.

20 Cf. De Mauro et al. (2015): pp. 100-103.

21 Cf. Japkowicz and Stefanowski (2016): pp. 8-15.

22 Cf. Krempl et al. (2014): p. 8.

23 Cf. Jarr (2015): pp. 5-7.

24 Cf. Vega-Oliveros and Berton (2015): pp. 13-14.

25 Cf. Li (2015): pp. 12-13.

26 Cf. Tran (2013): p. 18.

29 Cf. Rokach and Maimon (2014): p. 8.

30 Cf. Russell and Norvig (2010): p. 698.

31 Cf. Russell and Norvig (2010): p. 698.

32 Cf. Russell and Norvig (2010): pp. 698-699.

31 Cf. Poole and Mackworth (2010): p. 299.

32 Cf. Jiawei et al. (2012): p. 322.

33 Cf. Aggarwal (2015): p. 294.

34 Cf. Rokach and Maimon (2014): p. 77.

End of excerpt from 65 pages

Details

Title
Taxonomy of a Fast Data Business Model in the Mobility Market
University
Campus02 University of Applied Sciences Graz
Grade
1.00
Author
Year
2016
Pages
65
Catalogue Number
V541425
ISBN (eBook)
9783346176776
ISBN (Book)
9783346176783
Language
English
Keywords
Decision Tree, Open Innovation, Business Model Canvas, Serious Gaming, Automotive
Cite this work
Andreas Landgraf (Author), 2016, Taxonomy of a Fast Data Business Model in the Mobility Market, Munich, GRIN Verlag, https://www.grin.com/document/541425
