How do AI startups build trust in their smart service systems?

Bachelor Thesis, 2017

43 Pages, Grade: 1,3



List of abbreviations

List of figures

1 Introduction

2 Theoretical Background
2.1 Trust
2.1.1 Definition and conceptualization
2.1.2 Initial trust and continuous trust
2.2 Why AI startups have a hard time building trust in their smart service systems
2.2.1 Service Systems
2.2.2 Smart Service Systems
2.3 Strategic spheres of activity and hypotheses
2.3.1 “Transparency”
2.3.2 “Recognition by third parties”
2.3.3 “Points of contact”

3 Method & Data
3.1 Method: An abductive approach drawing on QCA
3.2 Framework
3.3 Coding
3.4 Sample

4 Discussion of results
4.1 Description of results
4.2 Discussion of hypotheses
4.3 Additional findings
4.4 Limitations

5 Conclusion and outlook

6 References

Appendix A

Affidavit (Eidesstattliche Erklärung)

List of abbreviations

[Illustration not included in this excerpt]

List of figures

Table 1: Recalibrated framework with all trust-building tools examined in the survey

Table 2: Meaning of the assigned values of the coding scale for the survey

Table 3: Final sample criteria

Table 4: Count of times and percentages that a transparency tool was used by a startup

Table 5: Number of startups using top 3 recognition tools

Table 6: Usage of points of contact (selection)

1 Introduction

The global economy is shifting labor from agriculture and manufacturing to services. Globe-spanning service-based business models enabled by information technology (IT) and increasingly specialized businesses and professions have transformed our economies. Service innovation is key for this more-service-focused-than-ever world economy to achieve growth and thrive (Maglio & Spohrer, 2013).

Scholars recognize a need for new ways of value creation that can propel economic growth and the development of more effective services (Vargo, Maglio, & Akaka, 2008). One response to that need is the re-organization of the production of services in so-called service systems (Maglio, Srinivasan, Kreulen, & Spohrer, 2006).

This approach is particularly useful for knowledge-intensive industries (Vargo et al., 2008) and noticeable, for example, in the artificial intelligence (AI) industry, a rapidly evolving, hyper-innovative ecosystem with new players emerging at frequent intervals (Gibney, 2016). AI startups offer their services through smart service systems, or they try to make their customers’ and their own service systems smarter by adding AI services to the process of value co-creation. The industry relies heavily on software as a service (SaaS) business models, which represent the ideal-typical shift to service-dominant (S-D) logic thinking (Vargo & Lusch, 2008).

When it comes to the acceptance of these new services, trust is a vital concern. While trust has always been an important issue in services, trust in smart service systems becomes crucial. As AI startups’ service propositions are far from familiar to their potential clients, they must go the extra mile to build trust in their smart service systems.

This paper will provide answers to the research question “How do AI startups build trust in their smart service systems?” by applying the theory of trust to smart service systems and AI startups.

As website quality is an important trust-building lever (McKnight, Choudhury, & Kacmar, 2002), the research question will be answered by exploring trust-building measures on the websites of a sample of 26 AI startups.

The major findings include that AI startups do not make their smart service systems as transparent as they could through their websites, that showcasing recognition by third parties occurs mostly through inexpensive tools that are easy to implement, and that all AI startups offer indirect channels for getting in contact with them but fewer offer richer channels.

2 Theoretical Background

This section first conceptualizes the phenomenon of trust, especially initial trust between unknown partners, and explains why AI startups face particular challenges in initial trust building. On this basis, it then examines which strategic spheres of activity AI startups can enter and which tools they can use to build trust in their smart service systems.

2.1 Trust

The concept of trust has been studied in many different disciplines. Trust influences, among other things, communication, organizational citizenship behavior, negotiations, conflict management, individual and collective performance, satisfaction and business transactions (Dirks & Ferrin, 2001). Trust is “difficult to earn and easy to lose” (Urban, Sultan, & Qualls, 2000, p. 42).

Trust has always received special attention in the analysis of service transactions. As opposed to business transactions involving goods that can be rated by their physical features, service providers have to offer surrogates to demonstrate the quality of their services (Scheuer, 2015) and thus have a difficult time building trust in their services.

Several studies in the business field emphasize the importance of trust and present empirical findings as to how trust is perceived in services. Weiber & Adler (1995) find that trust in services is among the top three determining characteristics when it comes to a decision for or against a service provider. Claycomb & Martin (2001) rank "Encourage our customers to trust us" 4th in top-rated priorities in customer relationship building objectives and practices in their study with 205 US commercial service providers.

The field of information systems is increasingly dealing with trust, and Benbasat, Gefen, & Pavlou (2010) claim that research on trust has “taken center stage in the MIS[1] field in the past few decades”.

In order to apply the concept of trust to AI startups, the next paragraph builds a working definition and conceptualization of trust.

2.1.1 Definition and conceptualization

Many scholars agree on a fundamental definition of trust which includes “confident expectations” and a “willingness to be vulnerable”, e.g. Rousseau, Sitkin, Burt, & Camerer (1998) or Morrow, Hansen, & Pearson (2004). In this paper, I follow Rousseau et al.’s (1998, p. 395) definition of trust: “Trust is a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another.” Accepting this kind of vulnerability weighs especially heavily in situations where perceived risk and interdependence are high. This is exactly the situation a potential customer of an AI startup - not fully understanding the underlying smart service system but knowing that the service offered has disruptive potential - finds himself in at the beginning of the customer journey. Trust is not a behavior but a psychological state that can be reached if opportunities to build trust are used (Rousseau et al., 1998).

Despite that general agreement on the fundamentals, there are many ways of conceptualizing trust:

Rousseau et al. (1998) differentiate between deterrence-based trust, calculus-based trust and relational trust. Barney & Hansen (1994) define a weak form trust, semi-strong form trust, and strong form trust. Jones & George (1998) conceptualize trust as having conditional and unconditional states.

Lewis & Weigert (1985) argue that trust has cognitive, affective and behavioral dimensions. Cognitive trust is a customer’s confidence in, or willingness to rely on, a service provider’s competence and reliability; affective trust means reliance on a partner based on emotions - the confidence one draws from the care and concern the partner demonstrates; and behavioral trust characterizes the actions that result from cognitive and affective trust. Johnson & Grayson (2005) apply this conceptualization to the service sector and find that affective trust is based on the affect experienced in interacting with the service provider and is closely related to the perception that a partner’s actions are intrinsically motivated. They also suggest that in the business-to-business (B2B) sector, cognitive trust can be substituted by calculative trust - a firm’s ability to minimize uncertainty in doing business with another firm through contractual safeguards - which makes cognitive trust in the B2B sector a multifaceted topic that is difficult to analyze.

Other authors differentiate between the concepts of trust and distrust. In this logic, distrust is not seen as a lower level of trust but as its opposite. McKnight, Kacmar, & Choudhury (2004) conclude that both trust and distrust influence the perception of a potential customer when looking at a website.

Another common way to conceptualize trust is to distinguish (implicitly or explicitly) between initial trust and continuous trust. The explicit use of this approach can often be found in research on trust related to information systems, which makes it relevant for this paper. While many authors in the information systems (IS) field examine the impact of initial trust (e.g. McKnight et al., 2002), only a few researchers have analyzed the issue of continuous trust (e.g. Hoehle, Huff, & Goode, 2012).

In this paper, I focus my research on the domain of initial trust, as it is a well-researched field and offers tools that AI startups can use when building trust. To operationalize the concept of initial trust, it is defined more precisely in the following.

2.1.2 Initial trust and continuous trust

Initial trust describes the trust between two actors – e.g. an AI startup and its potential customer – who have no record of prior interaction or business. Only after the trustor (the customer) has engaged in trust-related behaviors (e.g. purchasing a product) can he assess the trustworthiness of the vendor.

Its counterpart, continuous trust, is derived from repeated interaction. It develops over time if interactions between two actors take place and live up to expectations (Urban et al., 2000). Research shows that continuous trust is a significant contributor to a customer’s intention to use an information system (Hoehle et al., 2012). Meffert, Bruhn, & Hadwich (2015) look at the matter from a slightly different angle and state that in an ongoing service relationship trust becomes less important, as it can be substituted with past experiences and facts over time. An example could be a user of an online banking website that has always worked quickly and securely for him. This user will develop continuous trust in the website, successively substituting trust with his positive experiences.

However, when looking at AI startups, continuous trust can be disregarded, as startups usually cannot bank on long-established trustworthy service relationships with their customers and do not enjoy a brand reputation (cf. Pengnate & Sarathy, 2017 and Urban et al., 2000). They usually interact with new customers, just as they are new players in the industry themselves. There is no trust from former interactions and experiences on the customer side, which makes the perceived risk for the new potential customer especially high (Stenglin, 2008). In this situation, AI startups have to focus on ways of building initial trust. The following section addresses this problem in more detail and introduces the theory of smart service systems.

2.2 Why AI startups have a hard time building trust in their smart service systems

AI startups’ unique selling proposition (USP) consists of built-in AI entities - machine sub-systems that think or even act like humans, or at least rationally (Russell & Norvig, 1995) - that enhance their products and services. Many AI startups offer their customers the integration of smart service systems in which machines take care of value creation to some extent. Most companies are in principle willing to integrate smart service systems into their production systems; some even have AI strategies (cf. Ivens, 2015). Nevertheless, to become generally accepted as a player in this very cognitive (Chai, Malhotra, & Alpert, 2015) field, which is entirely new from a customer’s perspective, trust building is key in the AI industry.

There are some obvious challenges regarding initial trust in the introduction of AI-enabled services and service systems to the market: AI services rely heavily on the use of big data (Demirkan et al., 2015). Sending sensitive data to a provider typically requires initial trust on the customer side - the certainty that the data will be used only in contractual ways and that data transfer and transmission are secure. Aside from that, even though the term artificial intelligence is by now widely known in the economy and society, there are still perceptions of AI as not much more than a buzzword (cf. Shannon, 1984), which leads to people being skeptical about anything labelled as AI. AI startups thus have to prove to their potential customers that AI actually adds value to their services.

However, the most important aspect is to understand that AI startups provide their services by striking new paths of value creation and building very complex systems around these. The resulting complex services and the integration of smart service systems by AI startups in a B2B context are hard for potential customers to fully understand. That causes a higher level of unfamiliarity and a lower level of initial trust on the part of a potential customer (Gefen, 2000).

AI startups therefore have an especially hard time building trust in their smart service systems.

2.2.1 Service Systems

It is necessary to understand the underlying logic of service systems when evaluating the challenges that AI startups face in trust-building:

Service systems are “configurations of people, technology, value propositions connecting internal and external service systems, and shared information (e.g., language, laws, measures, and methods)” able to create and deliver value to providers, users and other interested entities (Maglio & Spohrer, 2008, p. 18; cf. also Vargo et al., 2008, p. 149).

A service system is always both provider and client of services (Barile & Polese, 2010) and depends on others’ resources to survive. Value in a service system is always co-created by different agents in interactive configurations of mutual exchange (Vargo et al., 2008).

Value co-creation is often identified as the fundamental innovation process, which makes service systems the fundamental theoretical construct for service innovation (Peters et al., 2016). While service innovation is urgently required to make the economy thrive (Maglio & Spohrer, 2013), service systems are not only a focus of service science but also lie at the heart of many business models in the actual economy, frequently those of startups that reconfigure resources in innovative ways.

It is evident that IT influences the way in which value can be created in these service systems where "knowledge is the core source of all exchange" (Vargo et al., 2008, p. 151). This is where smart service systems come into play.

2.2.2 Smart Service Systems

So-called “smart service systems” represent the latest development in service engineering. They combine the concept of service systems and their underlying service engineering or S-D logic with the newest developments in IT, smart technologies, data, knowledge and intelligence industries.

Smart service systems are systems “capable of learning, dynamic adaptation, and decision making based upon data received, transmitted, and/or processed to improve its response to a future situation” (Medina-Borja, 2015, p. 3).

Bringing this definition together with the previously used service system definition by Maglio & Spohrer (2008, p. 18) we can think of a smart service system as “a configuration of people, technologies, organizations and shared information, able to create and deliver value to providers, users and other interested entities, through service that is capable of learning, dynamic adaptation, and decision making based upon data received, transmitted, and/or processed to improve its response to a future situation.”

Certain configurations of smart service systems enable completely new applications and business models (Demirkan et al., 2015). A good example is a startup that utilizes AI in a value creation mechanism, which implies that entire functions in the system are partly executed by IT. The resulting smart services build their capabilities of learning, dynamic adaptation, and decision making on the technology end mostly through machine learning. Alongside the new potential for added value, this leads to insecurities on the customer side, as the associated phenomena of neural networks, unsupervised learning and data models are only beginning to be explored (Russell & Norvig, 1995). How exactly the learning works is typically insufficiently studied, and it is uncertain whether the data model will continue to work as reliably in the future.

Simultaneously, smart services are a common way to automate traditionally human functions in a service system. This automation usually aims at enhancing customer happiness, minimizing human error and cutting costs (Maglio, 2014), but these goals are not always compatible, and improvements in customer experience and more overall value creation are - at least for now - not guaranteed (Peters et al., 2016).

Either way, smart service systems involve very complex interrelations of agents and stakeholders, and their entire creation of value is hard to grasp without knowledge of all underlying processes and logics. The services are so complex that machines have to ‘take over’. That typically makes it difficult for a potential customer of an AI startup to understand who provides which part of the service, where the value is created, who controls the system and how the learning processes work, which can lead to lower initial trust in the underlying smart service system. To make matters worse, some negative examples of failed AI projects, such as Microsoft’s chatbot Tay - modeled on a teenage girl, it turned racist and sexist in less than 24 hours (Güntsche, 2017) - have recently received a lot of media attention.

All that leads to initial trust issues between potential customers and AI startups that the startups must address in order to attract customers. The choice to analyze AI startups (as opposed to, say, e-commerce startups) in this paper follows from S-D logic and the logic of smart service systems. With completely new business models, AI startups are at the forefront of the integration of smart service systems. They have to cope with the biggest trust-building issues now, and other companies will eventually be able to learn from them once they reach the point of integrating their products or services into complex smart service systems as well.

The next section will reveal the strategic spheres of activity that AI startups can use to address these issues and will construct hypotheses based on the expectations that can be found in the literature.

2.3 Strategic spheres of activity and hypotheses

Just as the problems of trust building have been discussed, there has always been a discussion of how to solve them.

To narrow down the content-related focus of trust-building measures, I consider literature on trust building in information systems and websites on a more general level. This will help to then build hypotheses for the challenge of trust building in smart service systems. Central publications in the trust-in-IS research stream all recognize a need for trust building; some even claim that trust will soon become the currency of the internet (Urban et al., 2000). The literature comprises online trust-building strategies directed towards users or buyers (e.g. Lim, Sia, Lee, & Benbasat, 2006 and McKnight et al., 2002) as well as novel perspectives such as Benbasat et al. (2010).

There is general agreement in the literature that the perceived quality of a website is an important trust-building lever. McKnight et al. (2002) find that the quality of a website is a strong predictor of trust in the vendor, with customers making strong inferences about the attributes of the vendor from what they first experience on the site. In a later paper, McKnight et al. (2004) set site quality - a vendor-specific factor that can be influenced by a single actor - in relation to the perception of the general web environment - an institutional factor that cannot be influenced by a single actor - and find that both impact trust. Schmeißer et al. (2009) define persuasiveness as a set of design rules that promote trust and credibility in a website; they also list "trust and credibility" in a website as an important criterion to persuade the customer to close the deal or the transaction. Urban et al. (2000) reveal that intuitive navigation that allows customers to control their web experience builds trust. Nilashi, Ibrahim, Reza Mirabi, Ebrahimi, & Zare (2015) and Thomas & Scholtz (2000) underline the importance of the visual impression and state that "pleasant", "clean" or "clear" website design is perceived as more trustworthy by customers.

The general design of a website thus has to be an object of analysis when seeking to identify trust-building measures that AI startups use to build trust in their smart service systems. This paper focuses on content-based forms of trust building; visual design aspects are considered only insofar as the availability and prominence of measures within an AI startup’s website are analyzed.

Literature on initial trust, services, service science, smart service systems, website / online trust and information systems suggests there are three main strategic spheres of activity in which initial trust in smart service systems can be earned: “transparency”, “recognition by third parties” and “points of contact” (they align well with what McKnight et al. (2002) call the three important “trusting beliefs”: competence, benevolence, and integrity). In the following, hypotheses are constructed for each strategic field to be tested.

2.3.1 “Transparency”

Establishing transparency is arguably the most important duty of a service provider seeking to build trust in its service. Naturally, there will always be insecurities on the client side concerning the quality of a service provider’s performance and results in all phases of the buying process, as the client cannot assess the quality of a service in advance (Scheuer, 2015). The more a provider succeeds in unveiling and explaining all components of the service, the smaller those insecurities become.

The higher the necessary degree of client integration for a service, the harder it is for the provider to present the expected results in advance. AI startups that integrate their clients into complex and often complicated smart service systems have to find careful ways to explain their non-standardized services to clients. For example, explicit pricing, as recommended by Urban et al. (2000), is probably not advantageous for AI startups that integrate smart service systems, as it gives the potential customer the impression of highly standardized services.

There are still various ways for AI startups to make their service transparent and the expected outcome visible to a client. One example is to explicitly endorse privacy and security policies and thereby ensure a user’s privacy (Urban et al., 2000). This can prove extremely useful, especially at a time when companies and private users are increasingly exposed to cybercrime (Greenbaum, 2015).

Another tool is to deliver services in advance free of charge (Scheuer, 2015), e.g. through a demo, which is often seen as the best way to showcase the performance of a product or service (Spolsky, 2007), or through a free trial version of a web service - a version with limited functionality or duration that can be converted into a full version upon payment (Cheng & Tang, 2010).

Additionally, AI startups can create customer communities that present user feedback to reduce the customer's perception of risk (Urban et al., 2000) e.g. in a blog that allows comments and thus shows that a startup is ready to have a conversation (Robinson, 2012).

Besides these specific tools, authors agree that it is crucial to provide up-to-date, complete and credible information on a service and the company behind it so that the customer can assess the company’s capabilities (Meffert et al., 2015 and cf. Urban et al., 2000). AI startups can achieve this goal in different ways: they can explain the features and facts of a service in a list or text, or show them in a richer medium such as an explanatory video (Spielmann, 2005). Videos showing no facts or features of the product/service were not counted as a trust-building tool.

On the technical side, AI startups can offer information on details such as integrations, APIs [2] or SDKs [3] and on data security, privacy or cryptography to make their service transparent. Finally, humanizing a business is important when trying to win a customer’s trust, which is why AI startups should provide information on their team. The abundance of available options for AI startups to make services transparent leads to the first hypothesis.

Hypothesis 1: AI startups try to make their services as transparent as possible to potential clients by using transparency tools on a large scale.

2.3.2 “Recognition by third parties”

Showing recognition by third parties is always a good option to signal to potential customers that a service is trusted by others. This is especially useful for service providers like AI startups that have not yet acquired a good reputation (cf. McKnight et al., 2004 and Schmeißer et al., 2009). If a company states that it is “the best” in its field or that its product is “superior to all competing products”, this is not very credible, as anybody can easily claim that. Proving it by letting others speak is a completely different, much more credible approach. A potential customer can check whether a service is actually perceived by others to be as great as advertised. When a neutral person or institution praises a service, a potential customer can identify with that person and rely on their decision or assessment.

Urban et al. (2000) suggest celebrity endorsements as a useful tool. As this is not affordable for most startups, it will play no role in the research for this paper.

Liu, Du, Yan, & Sha (2014) state that recommendation trust - initial trust generated by showcasing recommendations of people, applications or agents who have used the web service before - is key in choosing web service providers. There is a consensus in the literature that reference clients and their testimonials are a target-oriented and inexpensive way to show third-party recognition and generate initial trust (e.g. Meffert et al., 2015 or Scheuer, 2015 or Singh & Baack, 2004). Testimonials display the service provider experience (Johnson & Grayson, 2005) and customer satisfaction. In addition, Hoesselbarth, Neuß, Eicholt, & Winkelmann (2017) and Urban et al. (2000) emphasize in their work the importance of official third-party seals and of audits by reputable independent agents, respectively, which can be crucial in certain industries. However, certification processes to obtain these seals can be slow and costly. Scheuer (2015) and Urban et al. (2000) add awards (issued by third parties) as a tool to the trust-building mix. Drawing on previous work of Hofstede & Hofstede (2011), Hoesselbarth et al. (2017) also consider academic journals and interviews with experts an important trust-building element. This could be of special interest for AI startups, as the industry is cutting-edge and research in the field is often conducted by startups, or startups are founded by former researchers, which creates great potential for delivering academic content in the field.

The second hypothesis takes these suggestions into consideration.

Hypothesis 2: The main tool to show recognition by third parties is testimonials. AI startups make greater use of recognition tools that are inexpensive and easy to implement than of costly tools.

2.3.3 “Points of contact”

According to Gounaris (2005), the central function of trust is a reduction of complexity in interpersonal relationships. While technology is substituting traditionally human functions, humans are still at the center of smart service systems, as many scholars argue: Kandogan, Maglio, Haber, & Bailey (2011) claim that smart service systems cannot function properly without human interaction. Humans on the provider side are essential for answering customer requests, because websites can never provide every piece of information, particularly in the case of smart service systems that have to be tailored to a customer’s needs in order to work properly.

AI startups that mostly offer web-based services can have their workforce basically anywhere in the world. Still, it is easier for a potential customer to trust a service provider if high-quality points of contact are offered. The richer the channel offered - meaning the closer a customer feels in the moment of contact and the shorter the response times - the higher the quality of the point of contact. At the same time, however, the cost of labor goes up, which can pose problems for startups that cannot easily increase their staff.

The minimal contact option is a contact email address or contact form. Richer channels (more responsive and more direct ones) like live chats or phone numbers show a potential customer that there is somebody to whom questions can be addressed. Points of contact also fulfill other reassurance functions: a postal address, for example, provides legal safety, e.g. in the case of a lawsuit against the company.

Hoesselbarth et al. (2017) list local points of contact as a prime trust-building element, drawing on the results of a cross-cultural study by Singh & Baack (2004). Urban et al. (2000) propose using virtual-advisor technology to gain customer confidence and belief. While this might be suitable for producers of fairly standardized products or services, implementing a virtual adviser becomes more difficult the more complex the offered services are. AI startups that offer the integration of smart service systems should do better by offering individual consulting through a real agent instead, e.g. via phone or live chat.

The third hypothesis is constructed on the basis of this theory.

Hypothesis 3: All AI startups offer at least one way in which to contact them but few provide richer channels.

After constructing the hypotheses, the next chapter explains what method was used and how the data was collected.

3 Method & Data

The aim of this paper is to answer the research question: How do AI startups build trust in their smart service systems?

As clarified above, this paper treats the building of initial trust in smart service systems by AI startups towards their customers. The best opportunity to build initial trust occurs at the customer’s first contact with the smart service system. Examining the typical customer journey, one finds that the initial visit to a website often represents this first contact point (Mangiaracina, Brugnoli, & Perego, 2015). Analyzing which trust-building tools AI startups use on their websites will thus provide answers as to how initial trust is built. Furthermore, websites are easily accessible and data can be collected independently of third parties, which also makes them an attractive object of study from a practical standpoint.

Therefore, in this paper, AI startups’ websites will be analyzed on the basis of the hypotheses on initial trust to answer the research question.

3.1 Method: An abductive approach drawing on QCA

A methodological starting point for the analysis was the qualitative comparative analysis (QCA)[4] that e.g. Linton & Kask (2017) use in their study of small Swedish sports retail stores, which is especially suitable for its analytical procedure and its calibration of measures. At the same time, the framework for this paper draws on the work of Hoesselbarth et al. (2017), who propose a framework to test the effects of landing-page design on conversion rates. The authors suggest that trust elements on websites could be tested based on their framework (Hoesselbarth et al., 2017).

An alternative approach, theory building from cases according to Eisenhardt & Graebner (2007), was considered but discarded for academic and practical reasons. Academically, theory building from cases is most appropriate when no existing theory offers answers to a problem, whereas in the case of this paper the trust literature provides high-quality starting points for answers. Practically, it would have been difficult to conduct interviews with the right decision makers in AI startups, who are located around the world and usually have little spare time while building a company.

Most qualitative comparative analyses build a typology beforehand and afterwards test it by applying it to real cases (cf. Linton & Kask, 2017, p. 170). Instead, an abductive two-step approach was chosen for this paper: The first step comprises deductive work, identifying tools of trust-building in the literature. With these, a first analysis of a small sample of 5 websites was conducted. The second step consists of learning inductively from the data and going deeper into these examples to find additional tools, which were then added to the final framework. This approach can be justified since there has been no previous research on AI startups’ trust-building in smart service systems and the existing theory is not sufficient to explain this phenomenon. AI startups use different tools, and combine them differently, than the trust-building actors found in today’s trust literature in fields that have been studied more extensively (e.g. e-commerce merchants, cf. Gefen, Karahanna, & Straub, 2003).

Tools from all categories emerged in the second step of the method:

In the “transparency” category, it was found that one AI startup in the small sample offered an FAQ[5] section on its website. A recommendation for FAQ as a trust-building tool could not be directly derived from the literature, yet it is in line with Urban et al.’s (2000) call for complete and unbiased information. As AI startups offer smart service systems that require explanation, and FAQ are an effective way to provide it, FAQ were added to the transparency tools in the framework.

When looking at the category “Recognition by third parties”, it became evident that there are three more relevant tools for this study: press coverage, partnerships, and patents/trademarks. Press coverage, referring to links to press articles that feature the company, its service, its smart technology, or its founders’ ideas, was used by four of the five AI startups in the small sample. Three of the five startups explicitly highlighted their partnerships with other companies, universities, or individuals on their websites. Two of the five showcased their own patents/trademarks. These three new tools, which could not previously be deduced from the literature, were considered significant and added to the recognition tools. Two further candidate tools, a rating on a trusted platform (e.g. an AI startup’s app rated positively in Apple’s App Store) and community (e.g. stating the number of developers signed up to participate in an AI startup’s smart service system), were each used by only one startup in the sample. They were considered not significant enough compared to the other tools in that category and hence dismissed.
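The inductive selection step in this category can be sketched as a simple tally. The counts below are the ones reported above for the small sample; the cut-off of at least two startups (`min_count=2`) is an assumption inferred from which tools were kept and which were dismissed, not a rule stated explicitly here.

```python
# Sketch of the inductive tally from the small sample (n = 5 websites).
# Usage counts are those reported in the text; the significance
# threshold is an assumption for illustration.

SAMPLE_SIZE = 5
usage = {
    "press coverage": 4,
    "partnerships": 3,
    "patents/trademarks": 2,
    "rating on a trusted platform": 1,
    "community": 1,
}

def significant(tool_counts, min_count=2):
    """Keep only tools used by at least `min_count` startups."""
    return {t: c for t, c in tool_counts.items() if c >= min_count}

for tool, count in significant(usage).items():
    print(f"{tool}: {count}/{SAMPLE_SIZE} ({count / SAMPLE_SIZE:.0%})")
```

Under this assumed threshold, press coverage, partnerships, and patents/trademarks are retained, while the two tools observed only once are dismissed, mirroring the selection described above.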

In the last category, “Points of contact”, which comprised only four communication channels as tools before the small sample was analyzed, one channel was added to the contact tools: social media accounts were used by all of the AI startups in the small sample. Furthermore, one startup in the small sample offered a multilingual website. Multilingualism is another facet of cultural adaptation whose importance authors have previously underlined (cf. Hoesselbarth et al., 2017 and Singh & Baack, 2004). Thus, it was also added to the contact tools.


[1] Management information systems

[2] application programming interfaces

[3] software development kits

[4] qualitative comparative analysis

[5] frequently asked questions

Lobosch Pannewitz (2017): How do AI startups build trust in their smart service systems? Bachelor thesis, Free University of Berlin (Fachbereich Wirtschaftswissenschaft). Munich: GRIN Verlag.