Honest and Unbiased. The Helpfulness of Vine and Incentivized Reviews on Amazon


Master's Thesis, 2016

99 Pages, Grade: 1,7


Excerpt


Table of Contents

List of Figures

List of Tables

List of Abbreviations

1. Introduction

2. Literature Review
2.1. Electronic Word-of-Mouth
2.2. Online Consumer Reviews

3. Conceptual Background
3.1. Underlying Theories
3.1.1. Dual-Process Theory
3.1.2. Attribution Theory
3.2. Model Development
3.2.1. Review Helpfulness
3.2.2. Product Type
3.2.3. Review Type
3.3. Hypotheses Development

4. Data Analysis
4.1. Empirical Design
4.1.1. Data Collection
4.1.2. Variables
4.2. Analysis and Results
4.2.1. Data Preparation
4.2.2. Results

5. Implications, Limitations and Future Research
5.1. Managerial Implications
5.2. Limitations of the Study
5.3. Future Research Directions

6. Concluding Remarks

Appendix

Bibliography

List of Figures

Figure 1: Example of a Review with "Verified Purchase"-Badge

Figure 2: Example of a Vine Review

Figure 3: Example of an Incentivized Review

Figure 4: Conceptual model

Figure 5: Formula of the Gunning Fog Index

Figure 6: Histograms of Star Rating by Review Type with Mean

Figure 7: Most Recent Customer Reviews Section

Figure 8: 2x3 Interaction Plot

Figure 9: Example of a Vine Review with a Request for Voting

Figure 10: Amazon's Public Explanation for Vine Program

Figure 11: Review with Hidden Incentivization Cue

Figure 12: Residual Plot with Normal Distribution

Figure 13: Standardized Normal Probability Plot

Figure 14: Quantiles of Residuals Against Quantiles of Normal Distribution

Figure 15: Residual-versus-Fitted Plot

Figure 16: ACPR Plot of Comment Count

Figure 17: Combined ACPR Plots of the Continuous Variables in the Final Model

List of Tables

Table 1: Overview of Definitions for Diagnosticity and Review Helpfulness

Table 2: Overview of Used Amazon Product Categories as Product Types

Table 3: Frequently Used Keywords to Flag Incentivized Reviews

Table 4: Calculation of Cut-Off Points for Data Exclusion

Table 5: Shapiro-Wilk Test for Normal Data

Table 6: Breusch-Pagan Test for Heteroscedasticity with Heteroscedasticity Issues

Table 7: Variance Inflation Factors for Model (1) With and (2) Without Consensus

Table 8: Linktest Showing No Model Specification Issues

Table 9: Frequency of Appearance of Review Types and Product Types

Table 10: Descriptive Statistics for Continuous Variables

Table 11: Descriptive Statistics for Categorical Variables

Table 12: Regression Output of the Final Model

Table 13: Overview of Observed Effects on Review Helpfulness

Table 14: Overview of Collected Data Points

Table 15: Replaced HTML Entities

Table 16: Rejected Model Without Variable Transformations

Table 17: Rejected Model With Not Significant Variable Comment Count

List of Abbreviations

Adj.: Adjusted

ANOVA: Analysis of Variance

ASIN: Amazon Standard Identification Number

Coef.: Coefficient

Conf.: Confidence

df: Degrees of Freedom

DFFIT: Difference in Fits

HTML: Hypertext Markup Language

ID: Identifier

ln: Natural Logarithm

Max.: Maximum

Min.: Minimum

MS: Mean Square

MSE: Mean Square Error

SKU: Stock Keeping Unit

SS: Sum of Squares

Std. Dev.: Standard Deviation

Std. Err.: Standard Error

URL: Uniform Resource Locator

VIF: Variance Inflation Factor

1. Introduction

The abundance of information that can be found online is, on the one hand, a blessing for those searching for information about any topic. On the other hand, however, it often results in a state of information overload for users. This occurs when the sheer scale of information available on a topic, such as a hobby or a product, simply overwhelms the user. The same phenomenon increasingly appears in the context of online consumer reviews for products (D. Park and Lee, 2008, p. 388). Here, some online retailers and marketplaces have accumulated thousands of reviews for a single product. Obviously, in such a situation it is nearly impossible for an individual user to gather all relevant information hidden in this collection of opinions and additional information.

Still, online consumer reviews have become one of the most important sources of information online, with around 40% of UK consumers claiming to post online consumer reviews about products and services and over 80% claiming to read online consumer reviews, as was found in a study published by Deloitte (2014, p. 4). In addition, previous literature found that they are not only used as a complementary information source but also substitute other forms of business-to-consumer and traditional product-related word-of-mouth communication between consumers (Chevalier and Mayzlin, 2006, p. 345). Interestingly, the anonymity of the authors of online consumer reviews does not seem to be too big an obstacle, as one study found that consumer-written product reviews are more helpful than those written by experts (Li et al., 2013, p. 101). However, this does not imply that the credibility of the author does not play a critical role (McKnight and Kacmar, 2006, p. 4).

Online consumer reviews, however, are not an entirely new phenomenon, because they can be considered one of the digital counterparts of traditional word-of-mouth. Here, too, the findings emphasize the importance of word-of-mouth, which is considered to be an important driver of consumer behavior and is often used as an indicator of the future success of a product (Godes and Mayzlin, 2004, p. 545). The main reason for this is probably the perception of the information received. The mere fact that a person recommends purchasing a product or discourages doing so makes the information more credible than information provided in advertisements by companies (Wilson and Sherrell, 1993).

In the context of online consumer reviews and e-commerce, online marketplaces such as Amazon and the Chinese Tmall, which is operated by Alibaba, play an important role. On the one hand, Amazon, for example, has become an important aggregator of online consumer reviews for a product portfolio of more than 250,000,000 products, according to analysts (Marketplace Analytics, 2014). On the other hand, market consolidation in e-commerce is progressing rapidly, with only a few big players accounting for a major and reportedly still growing share of e-commerce revenues (Internetretailer.com, 2017; Welt.de, 2016). This and their increasing diversification (Bloomberg.com, 2016; Time.com, 2014; Wired.com, 2017) are strong indicators that these few players are successfully establishing themselves as the go-to places for more and more needs consumers may seek to satisfy online. Evidently, the importance of the presence of brand manufacturers and online retailers on these marketplaces goes hand in hand with this.

In this study, Amazon's marketplace, which is probably Amazon's most well-known service, is of particular interest. It is important to know that not all products offered on Amazon's platforms, such as Amazon.com, Amazon.co.uk, or Amazon.de, are sold by Amazon itself (Amazon.co.uk, 2017b; Amazon.com, 2016a, pp. 3-4). As the name suggests, the marketplaces are open to every seller of a product, and the sellers of a product are aggregated on a product level. Thus, consumers may place an order on an Amazon marketplace without noticing from whom, Amazon or one of possibly many marketplace sellers, the product is ultimately purchased. It also has to be pointed out that Amazon has a two-tiered offer for retailers and brands that want to sell their products on Amazon: it distinguishes between "Sellers" and "Vendors", with the "Vendor" program aiming at brand manufacturers by, e.g., giving them more possibilities to present their products in a more brand-specific way (Amazon.co.uk, 2016b). For the sake of simplicity, whenever this study refers to "sellers", "marketplace sellers" etc. in the context of Amazon, both of these types of marketplace participants are meant, unless explicitly indicated otherwise.

Amazon's online review platform is open to everyone. That means not only that everybody can access online consumer reviews published on one of the Amazon marketplaces, but also that everybody can publish a review on the platform, regardless of whether an order has been placed on Amazon before. An online consumer review on Amazon is comprised of a rating on a "5-star" scale, a title, which is displayed above the review, the review content, and optional pictures or videos the review author can upload. Alongside this, some metadata is also published, such as the publication date and various badges, e.g. a badge indicating that the review author is among Amazon's top review authors, or a "verified purchase" badge if the review can be traced back to an order of the respective product on the Amazon marketplace. In addition to regular reviews, Amazon offers the fee-based "Vine review program", which offers participants of the "Vendor" program the possibility to have Amazon-selected review authors receive a product for free so that they can write a review for it (Amazon.co.uk, 2017c). However, many sellers on the Amazon marketplace instead chose to generate reviews on an incentivization basis without making use of the "Vine review program" and, thus, also without Amazon's supervision of the review author selection. The differences between these two types of incentivization-based reviews are pointed out in greater detail at a later point in this study. Because of excessive usage of the unsupervised, incentivization-based method to generate online consumer reviews, Amazon decided to prohibit the generation of such reviews, first on the US marketplace and shortly after that also on the European marketplaces (Amazon.co.uk, 2016a). While one analysis showed that around 10.3% of all book reviews on Amazon were entirely fake (Hu et al., 2012, p. 674), another analysis, based on a sample of around 7 million reviews, found that in mid-2016 more than 50% of the newly published reviews on Amazon.com were generated due to an incentivization (Reviewmeta.com, 2016). Given that around 50% of newly published online consumer reviews at that date were published due to an incentivization, and considering the large total number of online consumer reviews published on Amazon per year, the entire budget that Amazon marketplace sellers devoted to the generation of such incentivized reviews on Amazon's US marketplace alone can be estimated to be a 7- to 8-digit dollar amount.
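To make the review elements described above more concrete, the following sketch models a single Amazon review as a simple record. The field names (e.g. verified_purchase, is_vine, is_incentivized) are illustrative assumptions for the purposes of this study and are not Amazon's actual data model.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AmazonReview:
    """Illustrative model of one online consumer review as described above (assumed field names)."""
    asin: str                      # Amazon Standard Identification Number of the reviewed product
    star_rating: int               # rating on the "5-star" scale (1-5)
    title: str                     # title displayed above the review
    content: str                   # the review text itself
    publication_date: date         # part of the published metadata
    verified_purchase: bool        # "verified purchase" badge present?
    is_vine: bool                  # review generated via the Vine review program?
    is_incentivized: bool          # unsupervised incentivized review (e.g. flagged via keywords)?
    helpful_votes: int = 0         # number of "helpful" votes received from readers
    total_votes: int = 0           # total number of helpfulness votes cast
    media: List[str] = field(default_factory=list)  # optional picture/video URLs
```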

The goal of this study is to analyze the impact, if any, these two types of incentivization-based reviews have on the perception of the review by the review reader, because these reviews could have a bias or at least could be perceived to have one. For this, the metric of review helpfulness, which is the percentage of helpful votes a review has collected from its readers, is analyzed while accounting for a number of independent and confounding variables. Another aspect analyzed in the course of this study is the role of the product in this regard. After thorough research, no study could be identified in which a comparable analysis of possible biases on reviews' helpfulness was conducted. Only two studies were found to be comparable at least in a broader sense: (1) Mayzlin, Dover, and Chevalier (2014) analyzed the difference of biased hotel reviews on two different platforms, expedia.com and tripadvisor.com, using a difference-in-differences approach and accounted for the different ownership structures of the reviewed hotels. For example, one of their findings is that hotels that have an independent hotel as a neighbor have 4.7% more negative reviews, which they explain by the special incentive for independent hotels to manipulate their competitors' reviews (Mayzlin, Dover, and Chevalier, 2014, pp. 419-420). (2) Luca and Zervas (2013, pp. 3419-3422) used review data about restaurants from yelp.com to identify why restaurant owners publish fake online consumer reviews and found that especially the owners of restaurants with a weak online reputation use fake reviews to manipulate competitors' ratings and that chain restaurants are in general less likely to engage in manipulative behavior. Obviously, these studies offer only very limited generalizability to the regular e-commerce context and are less relevant to this study's topic.
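As a minimal illustration of the dependent variable just described, the following sketch computes review helpfulness as the share of helpful votes among all votes a review received. The function name and the handling of reviews without any votes are assumptions made here for illustration, not part of the study's actual analysis code.

```python
from typing import Optional

def review_helpfulness(helpful_votes: int, total_votes: int) -> Optional[float]:
    """Percentage of helpful votes a review has collected from its readers."""
    if total_votes == 0:
        return None  # undefined for reviews that have not received any votes yet
    return helpful_votes / total_votes

# Example: 37 helpful votes out of 45 total votes give a helpfulness of roughly 0.82.
print(review_helpfulness(37, 45))
```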

This study is structured as follows: in 2. Literature Review the overarching research domain of electronic word-of-mouth is illustrated and previous studies about the helpfulness of online consumer reviews and the effects they found are presented. Also, credibility in the context of electronic word-of-mouth is discussed. In the course of 3. Conceptual Background two theories used to develop the tested hypotheses are described, one of which is frequently used in the context of the helpfulness of online consumer reviews. After this, the theoretical importance of the three most important observed variables in the statistical model is explained and the tested hypotheses are developed accordingly. In 4. Data Analysis, a comprehensive outline of the entire empirical part of this study is given, including the choice of confounding variables, decisions in regard to the data collection, sample finalization, regression diagnostics, and the final results of the hypotheses tests. Chapter 5. Implications, Limitations and Future Research features a more thorough discussion of the tested hypotheses and underlying theory. Also, the implications of this study for various parties, such as online retailers, brand manufacturers, and the online platform operators themselves, are illustrated. Following that, this study's limitations and identified future research directions are presented. The study closes with 6. Concluding Remarks.

2. Literature Review

In the following chapter a brief introduction to the superordinate research domain of electronic word-of-mouth is given. Subsequently, the more specific area of online consumer reviews, a branch of electronic word-of-mouth, is illustrated, an attempt is made to present a comprehensive overview of discovered influences on the helpfulness of online consumer reviews, and the special role of credibility in this context is briefly discussed.

2.1. Electronic Word-of-Mouth

Traditional word-of-mouth can, simply put, be regarded as all kinds of product-related consumer-to-consumer communication (Richins and Root-Shaffer, 1988, p. 32; Schindler and Bickart, 2005, p. 35) with the purpose of uncertainty reduction in buying decisions (Arndt, 1967, p. 295). In turn, electronic word-of-mouth, the domain to which literature about online consumer reviews can be allocated, is the online equivalent of traditional word-of-mouth and can more precisely be defined as "any positive or negative statement made by potential, actual, or former customers about a product or company, which is made available to a multitude of people and institutions via the Internet" (Hennig-Thurau et al., 2004, p. 39).

In the context of electronic word-of-mouth, so-called automated feedback mechanisms or reputation systems play a special role (Resnick and Zeckhauser, 2002). Automated feedback mechanisms, such as eBay's rating platform for sellers and buyers or Amazon's online consumer review platform, can be considered the virtual environment in which word-of-mouth takes place online and are the online counterpart to traditional word-of-mouth networks. Both word-of-mouth networks and automated feedback mechanisms are solutions to the age-old problem of social organization, namely ensuring good behavior among self-interested people who have short-term incentives to cheat each other (Dellarocas, 2003, p. 1409). In both contexts these networks do so by enabling peer supervision through making the behavior publicly known (Dellarocas, 2003, pp. 1407-1408).

The key differences between traditional and electronic word-of-mouth lie in the above-mentioned automated feedback mechanisms. Here, three aspects deserve special attention (Dellarocas, 2003, pp. 1410-1411): (1) low-cost, bidirectional communication enables word-of-mouth to take place on an unprecedented scale, which makes electronic word-of-mouth a much more effective and powerful phenomenon than traditional word-of-mouth; (2) designers have the ability to precisely decide about the underlying mechanisms of electronic word-of-mouth. Traditionally, word-of-mouth occurs naturally and cannot be controlled or modelled, and it still poses a difficulty to businesses and research that traditional word-of-mouth is hard to measure (Godes and Mayzlin, 2004, pp. 545-546). However, the internet almost entirely resolved this issue for electronic word-of-mouth. Designers of automated feedback mechanisms can precisely decide, e.g., who can participate, what type of information can be published, how the information is aggregated, and what is made publicly available to what kind of participants. This aspect is of special interest for this study, since the study is about a phenomenon induced by a certain design feature of an automated feedback mechanism; (3) the lack of context represents a new challenge, in that, in traditional word-of-mouth, richer information about the word-of-mouth communicator is given to the receiver of word-of-mouth communication, e.g. from his or her physical appearance or from the situation itself. Additionally, the barrier for malicious behavior is significantly lower in electronic word-of-mouth, where people can easily use fake identities to exert manipulative strategies. Thus, receivers of word-of-mouth communication constantly have to evaluate whether to rely on the opinion of complete strangers (Mayzlin, 2006).

The antecedents of electronic word-of-mouth communication, or the motives for making one's own opinion about products publicly available online, have received attention in a number of studies, such as Cheung and Lee (2012), Hennig-Thurau et al. (2004), and Mackiewicz (2008). A frequently cited study in this regard identified four different segments of electronic word-of-mouth contributors, namely the "self-interested helpers", "multiple-motive consumers", "consumer advocates", and "true altruists", all of which are derived from eight different generic motives ranging from "venting negative feelings" to "advice seeking" (Hennig-Thurau et al., 2004, pp. 48-49). However, for this study the antecedents of electronic word-of-mouth communication are less relevant, since the research subjects, Vine and incentivized reviews, are electronic word-of-mouth contributions that are obviously not caused by the usual antecedents.

Instead, the other party's perspective, namely the antecedents of reading electronic word-of-mouth, is of greater interest here. In a study conducted by Hennig-Thurau and Walsh (2004, p. 58) five motive factors for why people read online consumer reviews could be identified: (1) obtaining buying-related information for risk and search time reduction during the buying decision process; (2) social orientation through information, which serves as a means for dissonance reduction before and after purchase by comparing one's own opinion with that of others; (3) an act of community membership through receiving information about new products and through the experience of belonging to a community; (4) remuneration through economic incentives given by some platforms; (5) consumer learning to solve product-related problems. They also found that the motive factors "obtaining buying-related information" and "social orientation" are the two motive factors most likely to have an influence on the buying behavior after reading online consumer reviews, while "consumer learning", "community membership", and "social orientation" have the strongest impact on the subsequent communication behavior (Hennig-Thurau and Walsh, 2004, pp. 64-65).

Interestingly, the influence of both motives found to impact buying behavior is most likely compromised if the message source is found to be not credible or not trustworthy.

2.2. Online Consumer Reviews

In general, online consumer reviews can be considered one specific type of electronic word-of-mouth (Y. Chen and Xie, 2008, p. 477; Schindler and Bickart, 2005, p. 38). In comparison to other types of electronic word-of-mouth, such as instant messaging or personal emails, online consumer reviews particularly profit from the large scale at low cost that the internet enables, since they are constantly publicly accessible (Schindler and Bickart, 2005, p. 38). Also, in the context of Amazon, other users can comment on reviews, thereby enabling a delayed interaction between both parties.

On the one hand, online consumer reviews serve as recommenders, indicating whether or not a product can be recommended as a good choice. On the other hand, they serve as informants, adding further information that can be regarded as valuable by other prospective customers (D. Park and Lee, 2008, pp. 386-387). Simultaneously, the number of reviews is increasing rapidly. While, simplified, the recommender function can still be appropriately fulfilled by online consumer reviews through simple aggregate metrics such as the average rating, the abundance of online consumer reviews leads to a situation of information overload, which compromises the informant function (D. Park and Lee, 2008, pp. 386-387). There is no binding structure for writing an online consumer review, the additional information enclosed in a review about a product may still be incomplete, and a review that is found helpful by one person may be found entirely useless by another. Researchers and practitioners are thus confronted with the question of which review to show to whom. Accordingly, earlier research focused largely on the impact of aggregate metrics, or rather the macro-perspective of online consumer reviews and other forms of electronic word-of-mouth, until it was found that these metrics fail to convey important subtleties that seem to play a role in online consumer reviews (Resnick et al., 2000, p. 48) and have less of an impact (D.-H. Park and Kim, 2008, p. 401). Soon, research started to shift its focus towards reviews on an individual level, and predicting which reviews will be found helpful as well as understanding why certain reviews are found helpful started to gain large attention among researchers.

Ideal-typically, the literature about the helpfulness of online consumer reviews can be classified into two research streams. The first research branch, which mainly stems from the areas of management information systems and computer science, can be considered as aiming at maximizing the explanatory power of statistical models with review helpfulness as the dependent variable of such models (e.g. Kim et al., 2006; Yue Lu et al., 2010; Yu et al., 2012). Very frequently, literature in this area tries to contribute to the state of research by employing text mining approaches. Here a further separation can be made based on the methodology chosen: classification- and regression-based models (Danescu-Niculescu-Mizil et al., 2009, pp. 141-142; Ngo-Ye and Sinha, 2012, pp. 2-3). The second major research branch has a rather explorative character. Here the aim lies in unveiling new influences on review helpfulness and explaining them, often using interdisciplinary approaches borrowing theories from social psychology and behavioral economics (e.g. Filieri, 2015; Pan and Zhang, 2011; Racherla and Friske, 2012). However, more often than not a definite attribution of studies to one of these extremes is not possible, as some try to maximize the explanatory power of their model by accounting for completely new aspects. This is also not necessary, because this is just an approach to loosely classify previous literature in the field and is not intended to be a generally valid taxonomy.

Another dimension by which one could classify previous literature is the assumption of the ground truth of review helpfulness (Liu et al., 2007, p. 335; Ngo-Ye and Sinha, 2012, pp. 2-3). The ground truth assumption is about whether to consider review helpfulness, operationalized as the percentage of positive helpful votes a review received, as an accurate indicator of how helpful a review actually is. Studies that do not make this assumption do so because they argue that this metric does not accurately resemble the value of a review. Liu et al. (2007, pp. 335-336) identified three biases in the distribution of helpful votes that underpin this argument. However, the ground truth of review helpfulness remains a frequently made assumption in studies of both described research streams.

Considering the two presented research streams, this study is rather considered to be part of the second, explorative branch. Thus, studies from this branch were mainly taken into account during the literature review process. Here, twenty-five comparable studies could be identified, for which the found effects on review helpfulness were collected in Table 13, which is located in the appendix. Evidently, the majority of those studies introduce one or multiple new aspects to approach the problem of review helpfulness estimation. To structure the following review of effects on review helpfulness, an attempt was made to identify a comprehensive, elaborated framework for organizing previous literature, such as Otterbacher and Arbor (2009), who applied a concept of data quality from the management information systems literature. However, literature merely surveying the current state of research in this area is scarce. A thorough search of the relevant literature yielded no framework that truly covers the entire range of possible variables. Instead, in the following review a simple approach was chosen to classify found effects by attributing each effect to the review itself, the review's author, or the product the review was written for. This allows for a semi-structured but comprehensive overview of previous literature.

In the following, effects are presented that can be attributed to the review itself. A number of studies found certain style characteristics of review content, e.g. the usage of temporal contiguity cues (Z. Chen and Lurie, 2013, p. 468) and persuasive writing devices (Otterbacher, 2011, pp. 437-438), to have a significant effect on review helpfulness. Rating extremity also received attention by research, such as through the integration of quadratic effects of the review rating (Chua and Banerjee, 2015, p. 359; Kim et al., 2006, p. 429; Korfiatis, Garcia-Bariocanal, and Sánchez-Alonso, 2012, p. 213; Mudambi and Schuff, 2010, p. 194; O'Mahony and Smyth, 2009, p. 306; Pan and Zhang, 2011, p. 604) or dummy variables (Forman, Ghose, and Wiesenfeld, 2008; Ghose and Ipeirotis, 2007, 2011; Racherla and Friske, 2012). Another frequently used variable is the readability of reviews, operationalized as the frequency of grammatical and spelling errors (Ghose and Ipeirotis, 2011, p. 10; Otterbacher, 2011, pp. 436-437) or as different readability indexes (Ghose and Ipeirotis, 2007, p. 3, 2011, p. 10; Korfiatis, Garcia-Bariocanal, and Sánchez-Alonso, 2012, p. 213), such as the Gunning fog index. However, findings about this are very equivocal (Hoang et al., 2008, p. 508; Otterbacher, 2011, p. 437). The age of reviews was also found to have an effect on review helpfulness, with older reviews being more helpful. Age was often operationalized as the age in days (Cao, Duan, and Gan, 2011, p. 519; C. C. Chen and Tseng, 2011, p. 763; Z. Chen and Lurie, 2013, p. 468; Ghose and Ipeirotis, 2007, p. 3; Pan and Zhang, 2011, p. 604) or the chronological order of publication (Otterbacher and Arbor, 2009, p. 963). Review balance, the degree to which negative and positive information appear in balanced frequency in the review content, received attention from a number of studies (Ein-Gar, Shiv, and Tormala, 2012, p. 855; Hoang et al., 2008, p. 508). Another aspect is the mere quantity of information featured in reviews, often referred to as review depth. Frequent ways of operationalization are word count (Baek, Ahn, and Choi, 2012, p. 112; Z. Chen and Lurie, 2013, p. 168; Chua and Banerjee, 2015, p. 359; Kim et al., 2006, p. 429; Mudambi and Schuff, 2010, p. 194; Racherla and Friske, 2012, p. 555), unique word count (Hoang et al., 2008, p. 508), or character count (Pan and Zhang, 2011, p. 604), as well as a number of questionnaire items (Filieri, 2015, p. 1266) used by studies that tested review helpfulness via questionnaires. Review subjectivity, operationalized through a so-called subjectivity score (Ghose and Ipeirotis, 2007, pp. 4-5, 2011, p. 9), and review concreteness, the degree of freedom in interpretation (Li et al., 2013, pp. 4-5), were also part of different studies. But there are equivocal findings in this regard (Yingda Lu, Jerath, and Singh, 2013, p. 1792). Some platforms allow users to comment on each other's reviews. The resulting comment count was also found to have a positive impact on review helpfulness (Otterbacher, 2011, p. 435). Comment count can be viewed as an indicator of how strongly a review provokes readers to interact with the review author. The last review-related variable, which views reviews from an aggregate level, is review consensus. Review consensus is usually operationalized as the difference of a review's rating from the respective product's average rating. The findings here are especially equivocal. It was found that the more a review is in consensus with other reviews, the more helpful it is (C. C. Chen and Tseng, 2011, p. 763; Pan and Zhang, 2011, p. 604; Qiu, Pang, and Lim, 2012, p. 636), but also the exact opposite was found (Baek, Ahn, and Choi, 2012, p. 112; Cao, Duan, and Gan, 2011, p. 519).
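To make two of the operationalizations mentioned above concrete, the following sketch computes the Gunning fog readability index (the formula referenced in Figure 5) and a simple consensus measure as the absolute deviation of a review's rating from the product's average rating. The tokenization and syllable-counting heuristics are simplifying assumptions for illustration, not the exact procedure used in this study.

```python
import re

def gunning_fog(text: str) -> float:
    """Gunning fog index: 0.4 * (words per sentence + 100 * share of complex words).

    A word is treated as "complex" if it has three or more syllables; syllables are
    approximated here by counting vowel groups, a rough but common heuristic.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def syllables(word: str) -> int:
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100 * len(complex_words) / len(words))

def review_consensus(review_rating: float, average_product_rating: float) -> float:
    """Absolute deviation of a review's star rating from the product's average rating."""
    return abs(review_rating - average_product_rating)

print(gunning_fog("This camera is excellent. The battery, however, is disappointing."))
print(review_consensus(2, 4.3))  # a 2-star review on a product rated 4.3 on average
```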

The second factor to which found effects can be attributed is the review's author. Not only were a number of metrics found to be influential, but some mere interpretations made by the review reader also seem to have an influence. Many review platforms offer the possibility to disclose some information about the identity of the reviewer, adding context to the review and thereby attempting to overcome one of the afore-mentioned challenges of electronic word-of-mouth. Positive effects were found for disclosures of information such as the profile photo of the reviewer (Z. Chen and Lurie, 2013, p. 468), real name and location (Forman, Ghose, and Wiesenfeld, 2008, p. 304), or the reviewer rank, hobbies, birthday, etc. (Ghose and Ipeirotis, 2011, p. 10). However, there are equivocal findings concerning the disclosure of a profile photo (Baek, Ahn, and Choi, 2012, p. 113). Reviewer activity, e.g. the number of reviews written or the time since an author wrote his or her last review, was also found to have a positive effect on review helpfulness (Ngo-Ye and Sinha, 2014, p. 56; Racherla and Friske, 2012, p. 555). The reviewer rank, which should by definition be a guarantor of review quality and which, depending on the platform, is often disclosed next to the reviewer's name in the form of a "top-reviewer" badge, was also found to have a positive impact (C. C. Chen and Tseng, 2011, p. 763; Z. Chen and Lurie, 2013, p. 468; Yingda Lu, Jerath, and Singh, 2013, p. 1791; Otterbacher, 2011, p. 436). Very similar to the reviewer rank is the reviewer reputation, operationalized as the number of helpful votes cast by fellow platform users. This metric, which is usually one of the determinants of the reviewer rank, was also, unsurprisingly, found to have a positive impact (C. C. Chen and Tseng, 2011, p. 763; Chua and Banerjee, 2015, p. 359; Ghose and Ipeirotis, 2011, p. 9; Hoang et al., 2008, p. 508; Ngo-Ye and Sinha, 2014, p. 56; O'Mahony and Smyth, 2010, p. 306). Many online consumer review platforms have features that allow users to follow other users and vice versa. This functionality, known from social networks, allows users to connect with each other and establish a more constant relationship with other users. The number of connections a user has was found to have a positive impact on review helpfulness (Otterbacher, 2011, p. 436; Racherla and Friske, 2012, p. 555). The reviewer's innovativeness, the degree to which the text indicates a predisposition of the reviewer towards new products (Pan and Zhang, 2011, p. 607), self-described expert status, i.e. whether the reviewer claims to be an expert, and perceived consumer similarity, i.e. whether the reader perceives the review author to be similar to himself or herself (Connors, Mudambi, and Schuff, 2011, p. 5), were also found to have a positive impact on review helpfulness.

The third and last factor that was found to have an impact on the helpfulness of online consumer reviews is the product itself. A topic that gained attention in research is the impact of characteristic-based classifications that can be applied to products, such as Nelson's (1970) product type. Here, studies consistently found that reviews for experience products are significantly less helpful than those for search products (Baek, Ahn, and Choi, 2012, p. 112; Mudambi and Schuff, 2010, p. 194). Studies based on similar classifications, such as utilitarian and experiential products (Pan and Zhang, 2011, p. 604) and an extended version of Nelson's framework that also features so-called credence goods (Racherla and Friske, 2012, p. 549), were also conducted. Some other product-related variables were incorporated in a number of studies and found to have an impact on review helpfulness, such as the product's sales rank, retail price, average product rating (Otterbacher and Arbor, 2009, p. 959), and the total number of reviews (Otterbacher and Arbor, 2009, p. 959; Pan and Zhang, 2011, p. 604). So far, research in this area has yielded a remarkable multitude of possible influences incorporated in models.

A more general aspect that frequently plays an important role in electronic word-of-mouth is credibility. The credibility of a message, or how believable and trustworthy a piece of information is, can be considered a "filter" that we apply to information in the process of evaluating its value (Wathen and Burkell, 2002, p. 134) and thus is a critical aspect in attitude adoption (Petty and Cacioppo, 1986b, pp. 191-192). In a more general sense, message credibility is considered to be the result of an interplay between message characteristics, receiver characteristics, and source characteristics (Wathen and Burkell, 2002, p. 135), the latter of which is of special interest for this study. The computer-mediated context of online consumer reviews was found not to change this paradigm: it was found that review readers may use the review content to infer source characteristics (Schlosser, 2005) and that this ability is sharpened in virtual communities where the usual social context known from traditional word-of-mouth is lacking (Pan and Zhang, 2011, p. 601; Tompkins, 2003). It was also found that credibility is, together with argument quality (Watts Sussman and Schneier Siegal, 2003, p. 59), the most important factor in attitude adoption of electronic word-of-mouth (McKnight and Kacmar, 2006, p. 4) and an important factor for review helpfulness (Filieri, 2015, p. 1266; Li et al., 2013, p. 29; Mudambi and Schuff, 2010, p. 196). This study is about a situation in which review readers can have direct and justified concerns about the source's credibility, because review readers are given explicit cues that the regular antecedents of electronic word-of-mouth communication are not the motives behind making one's opinion publicly available online.

3. Conceptual Background

As already stated above, this study is about the impact of openly disclosed, potentially compromised source credibility on the helpfulness of online consumer reviews and the moderating role of the consumer's ability to correctly evaluate a product's quality before purchase. In the following, the theories necessary to develop the tested hypotheses are illustrated. After this, the conceptual roles of the observed variables, review helpfulness, product type, and review type, are explained and the study's hypotheses are developed.

3.1. Underlying Theories

In the following two sections the two most important theoretical frameworks, namely the dual-process and attribution theories, are illustrated and a number of theories that belong to one of these two research areas are explained. This introduction to the theoretical background of this paper also reflects the process that stands behind the selection of theories used to formulate the tested hypotheses.

3.1.1. Dual-Process Theory

Dual-process theory is an umbrella term for theories in social psychology that are based on the general idea that phenomena, such as processing information or forming and adopting an attitude, can occur in two different ways or can be the results of one of two possible different cognitive processes. Typically, these processes differ in the extent of consciousness under which they are carried out (Evans and Stanovich, 2013, p. 224; James, 1950, pp. 76-79, 325-327). It is important to point out that there is a big variety of different dual-process theories that have emerged independently from each other. Each may be applied in certain settings and only few have the aspiration to be generally valid (Evans and Stanovich, 2013, p. 224). In persuasion and attitude adoption two major theories were established, the elaboration likelihood model by Petty and Cacioppo (1981) and the heuristic-systematic model by Chaiken (1987), both of which will be briefly illustrated in the following chapter. Both are based on the idea that in some instances information is processed very thoroughly before attitude adoption, and in other instances information is processed rather superficially. This dichotomy is the key differentiator from traditional information processing models of persuasive communication, such as the reception-yielding paradigm (McGuire, 1972) and the cognitive response model (Greenwald, 1968), which assume that information during persuasion is always thoroughly scrutinized (Chaiken, 1987, p. 3). The term attitude in the context of this paper refers to the "general evaluations people hold in regard to themselves, other people, objects, and issues" (Petty and Cacioppo, 1986b, p. 127). Correspondingly, attitude adoption is the more or less conscious decision of a message perceiver to pick up a message's position, at least for a certain amount of time.

The elaboration likelihood model is based on seven postulates, the two most central ones being that people always try to draw correct conclusions and that they are limited in their motivation and ability to do so, which is, in turn, influenced by individual and situational factors (Petty and Cacioppo, 1986b). This means that people are not always motivated and able to thoroughly think through every message they receive (Petty and Cacioppo, 1981, p. 263). The actual amount of cognitive effort put into this task, or rather the extent of elaboration of the conclusion, is where the model's name stems from.

Whenever the afore-mentioned factors facilitate a high elaboration likelihood, namely when people are able and motivated to carefully process the perceived information and individual and situational factors are favorable, people are likely to follow the so-called central route. On this central route, persuasion and attitude adoption are based on careful consideration of argument content, the real value a message provides (Petty and Cacioppo, 1986b, p. 125). The central route stands for a family of processes of attitude adoption that may be employed in this case (Eagly and Chaiken, 1993, p. 307; Petty and Cacioppo, 1986a, pp. 4-23). After argument processing has been used to form a conclusion, the attitude may be adopted, however, only if the information contained in the message is sufficiently compelling (Petty and Cacioppo, 1981, p. 266). Attitudes formed via the central route were found to be relatively stable over time and can typically only be changed by exposing the message perceiver to strong counterarguments (Eagly and Chaiken, 1993, p. 306; Petty and Cacioppo, 1986b, p. 175).

Whenever the message perceiver lacks either the motivation or the ability to process the presented information, or the individual and situational factors are unfavorable for central route processing, the elaboration likelihood is considered to be low. In order to comply with postulate one, the message perceiver still seeks to form correct attitudes. However, he or she does so by following the so-called peripheral route and will not engage in argument processing as depicted above. Here it is important to note that, just like the central route, the peripheral route stands for a family of processes of attitude adoption and comprises all kinds of cognitive mechanisms that allow drawing a conclusion without argument processing. This could be done by employing heuristics, affective decisions, and social role mechanisms, such as relying on expert status, the mere number of arguments, or the physical attractiveness of the source to the message perceiver (Eagly and Chaiken, 1993, pp. 305-307). Experimental research has shown that attitudes formed via the peripheral route are less likely to be permanent and can be changed fairly easily (Petty and Cacioppo, 1981, p. 267).

Analogous to the elaboration likelihood model, the heuristic-systematic model also posits that there are two different ways of information processing employed by perceivers of persuasive communication. Here, the central premise is that the message perceiver's goal is to assess the validity of the persuasive message (Eagly and Chaiken, 1993, p. 326). The heuristic-systematic model is also very similar to the elaboration likelihood model in that it assumes that the perceiver's cognitive ability and motivation are the determinants of whether systematic or heuristic processing is applied.

The cognitive abilities of the message perceiver can be viewed as directly determining the effort the message perceiver can put into systematic information processing in a given situation. In a broader understanding of cognitive abilities, situational factors, such as time pressure, may also have an impact on the usage of these cognitive abilities (Eagly and Chaiken, 1993, pp. 328-330). On the other hand, there are the motivational determinants, which can be considered the degree to which the message perceiver is willing to engage in the more effortful systematic information processing. Here it is important to note that people behave economically when it comes to using cognitive resources, that is, they prefer low-effort heuristic processing over high-effort systematic processing wherever possible. This is called the least effort principle (Chaiken and Maheswaran, 1994). However, this shift towards heuristic processing can only happen if the goal of evaluating message validity can still be achieved with a sufficient degree of accuracy. Thus, the perceiver tries to find a balance between engaging in the effortful usage of cognitive resources and coming to a sufficiently correct conclusion. This is called the sufficiency principle (Eagly and Chaiken, 1993, pp. 328-330).

As mentioned above, if both determinants, ability and motivation, favor a thorough analysis of the perceived message, the message perceiver engages in systematic processing. The concept of systematic information processing is that the message perceiver accesses and scrutinizes the received information comprehensively and analytically for its relevance to the perceiver's task, namely the evaluation of message validity. Here, the perceiver judges the message's validity by thinking about the perceived information and setting it into the context of other information the perceiver may already possess about the object of the persuasive message. Based on the outcome of the evaluation of the message's validity, the attitude is adopted or not adopted (Eagly and Chaiken, 1993, pp. 326-327).

In other instances, when the determinants do not favor systematic processing, the message perceiver will engage in heuristic processing. Heuristic processing is a more limited approach to information processing, in that less cognitive effort is necessary to form an attitude towards the message's content. When applying heuristic processing, people mainly use available information that allows them to draw a conclusion about the message's validity using only heuristics or simple decision rules. For better conception, such decision rules can be formulated as short statements, such as "experts' statements can be trusted" or "biased opinions should be ignored", and can vary in their reliability, effectively giving them different weights during message evaluation. It is important to note that the application of these rules has to be elicited by cues that are present in the perceived message and that the cues have to be relevant for the perceiver's task of message validity evaluation. Here, cues about source credibility usually play a notable role (Chaiken, 1980). The message perceiver then assesses the message's fulfillment of the rules, and certain levels of fulfillment of these rules are associated with high or low message validity. The rules a message perceiver can apply are usually learned through past experiences and, thus, have to be present before message evaluation. In this sole reliance on heuristics also lies one of the subtle differences between the elaboration likelihood model and the heuristic-systematic model. While both share very similar views of the central route and systematic processing, they have different views of the peripheral route and heuristic processing, namely, that the peripheral route subsumes a family of alternative cognitive processes to form a conclusion, such as social role mechanisms, while heuristic processing relies only on previously formed decision rules. In this regard the heuristic-systematic model is the narrower approach. Consistent with the elaboration likelihood model, the heuristic-systematic model assumes that attitudes formed on the basis of such heuristics are less stable and more vulnerable to strong counterarguments. While systematic processing is influenced by situational factors that inhibit the usage of the message perceiver's cognitive abilities to evaluate message validity, heuristic processing is not influenced by this (Eagly and Chaiken, 1993, pp. 327-329).

One issue that is frequently stressed as undermining not only the heuristic-systematic model's validity but also that of the elaboration likelihood model is that message perceivers not only aim at validity seeking or holding correct attitudes, but are also under social influence. Thus, they also aim at maintaining relationships and try to defend pre-established attitudes. These motivations are the so-called defense motivations and impression motivations (Eagly and Chaiken, 1993, p. 326). However, due to the computer-mediated, quasi-anonymous, and mainly one-way directed communication, namely from the message author to the message recipient, in the context of online consumer reviews such concerns can be eliminated and extended models, such as the multiple-motive heuristic-systematic model, are considered unnecessary for the course of this study.

In general, both frameworks are suitable for this study, in that they both unanimously predict superficial information processing, using rules of thumb, for situations in which the message perceiver is not able or willing to process the information by scrutinizing the message. However, there are a number of reasons why the heuristic-systematic model is better suited to this study. While the elaboration likelihood model was specifically developed in the context of persuasive messages, the authors of the heuristic-systematic model claim that their work is valid beyond this context (Chaiken, Liberman, and Eagly, 1989, p. 212). Ideally, online consumer reviews are not supposed to be persuasive messages, in the sense that the review author does not intend to persuade the review reader to adopt his or her attitude towards the product or to buy the product, but instead should only state his or her honest opinion. But whether or not this holds, the review readers at least do not perceive them as such. As stated above, the heuristic-systematic model has a narrower conception of heuristic information processing, since it relies solely on previously established decision rules that can be formulated as short cues, while the elaboration likelihood model subsumes a family of cognitive processes under the term peripheral route (Eagly and Chaiken, 1993, pp. 307, 328). Cognitive processes other than simple decision rules or heuristics that could be at work when evaluating the message validity of online consumer reviews go beyond the scope of this study. This is another reason the heuristic-systematic model fits the requirements of this study. Also, while the elaboration likelihood model considers the central and peripheral routes as two mutually exclusive paths of information processing, the heuristic-systematic model explicitly states that systematic and heuristic processing can occur concurrently, viewing this dichotomy rather as a spectrum. Even an interdependence between these two information processing modes, with one influencing the other and vice versa, is conceptually possible (Eagly and Chaiken, 1993, pp. 328-329). Further, to stay in line with previous research about dual-process theories and attitude adoption in the context of online consumer reviews, e.g. Wei and Watts (2008, p. 75), the heuristic-systematic model will be applied in the following work.

3.1.2. Attribution Theory

In general, attribution theory is about how people draw conclusions about causal relationships. It gained significance in social psychology, where it serves as a theory to explain social perception or, simply put, the explanatory efforts people undertake to explain why people behave in a certain way (Heider, 1958, p. 79; Kelley, 1973, p. 107). With regard to persuasive communication, attribution theory tries to explain how people form certain beliefs about why a message communicator expresses a certain opinion in his or her message (Eagly and Chaiken, 1993, p. 351). Contrary to the first impression one could have after reading about attribution theory, it is, just like dual-process theory, not an elaborate set of assumptions and dependencies, but rather a broad understanding of general principles for explaining different phenomena. Many of these phenomena may appear so obvious and trivial to people that they might consider them common sense. However, what usually is meant by common sense is exactly what attribution theory is about (Heider, 1958, p. 79; Kelley, 1973, p. 108). A popular model within the domain of attribution theory is Kelley's Covariation Model (Kelley, 1967, p. 194), which can be considered a simple and subjective analysis of variance (ANOVA) the receiver of a message conducts in order to attribute the communicator's opinion (Eagly and Chaiken, 1993, p. 352; Kelley and Michela, 1980, p. 462). It is based on the concept that "an effect is attributed to the one of its possible causes with which, over time, it covaries" (Kelley, 1973, p. 108).

The perceiver observes a communicator's message and tries to explain why the communicator has expressed a certain point; for example, this could be that a professor praises 'The Psychology of Attitudes' as a great book. The perceiver can then attribute why the communicator has taken his or her view to three different causes: (1) personal characteristics, such as the message communicator's ideology or his or her traits. Here, the message perceiver basically concludes that the reason why the communicator has a certain opinion lies in the communicator himself or herself. The reasoning behind such an attribution could be that the perceiver concludes that the professor just praises all recently published psychology books. Another possible attribution is to (2) situational characteristics, such as the audience or role constraints of the message communicator. Here, the message perceiver concludes that the reason why the communicator has a certain opinion lies in the situation in which the message was expressed by the communicator. The reasoning behind such an attribution could be that the perceiver concludes that the professor praises the book to comply with several other experts who praised the book too. The last possible attribution can be made to (3) the external reality. Here, the message perceiver concludes that the reason why the communicator has expressed a certain opinion lies in the actual external reality, meaning that he or she concludes that the message is valid. Such an attribution is called a stimulus or entity attribution. To come back to the example, the perceiver would conclude that the professor praises the book because it really is a very high-quality book (Eagly and Chaiken, 1993, pp. 352-353).

As mentioned above, the message perceiver comes to these different conclusions by conducting the aforementioned simplified and subjective ANOVA employing three variables, consensus, consistency, and distinctiveness, which will be illustrated more thoroughly in the following paragraph. Each of these variables can be high or low for a given situation, and ideal-typically certain variable combinations yield certain causal attributions. The first variable is (1) consensus. Here, the message perceiver assesses whether the message communicator is in consensus with other information sources the message perceiver may have perceived information from. An example of high consensus would be that not only the professor praises the book, but also other renowned experts, members of the faculty, and fellow students do so. The second variable is (2) consistency. Here, the message perceiver assesses whether or not the message communicator has expressed the same viewpoint on multiple occasions and, thus, is consistent in his or her statements. An example of high consistency would be that the professor praised the book not only in this lecture, but also in his publications and during conferences. The last of the three variables is (3) distinctiveness. Here, the message perceiver tries to assess whether or not the message communicator usually behaves differently in similar situations. A high degree of distinctiveness would be that the professor has been very critical of other recently published books and articles and thus does not just praise all books.

Depending on perceived high or low consensus, consistency, and distinctiveness in the message communicator's behavior, the message perceiver attributes the behavior to one of the three different causes. If the professor holds an opinion no one else shares, an indicator of low consensus, always praises this book, an indicator of high consistency, and usually praises all recently published books, an indicator of low distinctiveness, the perceiver is likely to attribute the professor's opinion to the personal characteristics of the professor himself, namely that he always praises all recently published psychology books. In other instances, the professor may share his opinion with many other experts, an indicator of high consensus, may never have praised the book before, an indicator of low consistency, and may usually praise all recently published psychology books, an indicator of low distinctiveness; then the perceiver is likely to attribute the professor's opinion to situational characteristics, namely that the professor praises the book only because of the other experts who do so too. However, if the message perceiver evaluates all three dimensions as high, ideal-typically he or she attributes the professor's opinion to the external reality, namely that it indeed is a very high-quality book (Eagly and Chaiken, 1993, pp. 352-353; Kelley, 1973, pp. 108-112).
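The ideal-typical combinations just described can be summarized as a simple lookup, sketched below for illustration. The encoding of the three cues as booleans and the handling of combinations the model does not cover (discussed in the next paragraph) are assumptions made here, not part of Kelley's original formulation.

```python
def covariation_attribution(consensus: bool, consistency: bool, distinctiveness: bool) -> str:
    """Ideal-typical attributions of Kelley's covariation model as described above.

    True means the perceiver rates the cue as high, False as low. Combinations not
    covered by the ideal-typical patterns are reported as such, mirroring one of the
    model's weaknesses discussed in the text.
    """
    patterns = {
        (False, True, False): "personal characteristics",    # low consensus, high consistency, low distinctiveness
        (True, False, False): "situational characteristics",  # high consensus, low consistency, low distinctiveness
        (True, True, True): "external reality (entity attribution)",
    }
    return patterns.get((consensus, consistency, distinctiveness), "no predefined attribution")

# The professor praises a book no one else praises, does so on every occasion,
# and usually praises all new books: attribution to personal characteristics.
print(covariation_attribution(consensus=False, consistency=True, distinctiveness=False))
```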

However, one weakness of Kelley's covariation model is that it depicts the attribution process in a very ideal-typical way. For example, a number of possible cases are not covered by the model, such as cases in which the message communicator shows high consensus and consistency but low distinctiveness. In such a case the model provides no predefined attribution. While this is an issue of lower importance for this study, the consistency variable in particular is often problematic, because the perceiver often has only the occasion at hand to form his or her beliefs (Eagly and Chaiken, 1993, pp. 353-354; Kelley, 1973, p. 108). In many situations, such as in the context of online consumer reviews, the perceiver has to form beliefs about the reason for the communicator's message with less information available.

To overcome this shortcoming, Kelley, the originator of the covariation model, proposed a simpler approach (Eagly and Chaiken, 1993, p. 355) which uses the plausibility of multiple alternative explanations that facilitate or inhibit the communicator’s position in order to attribute the behavior to a certain cause. Kelley’s so-called configuration concepts can be thought of as a simpler framework that works within the context of the covariation model to explain the perceiver’s causal attributions inferred from single observations of the communicator’s behavior. Reasons for relying on a single observation could be that, as depicted above, the perceiver lacks the occasions for multiple observations, or that the perceiver lacks the motivation to search for more observations or is unable to do so due to situational characteristics, such as time pressure (Kelley, 1973, p. 113). Here, the perceiver relies on similar observations he or she has made before and uses these for the causal attribution. These other observations may provide several plausible causes, each of which can be facilitative or inhibitory for the given behavior and may be discounted due to the presence of other plausible causes. This is known as the discounting principle (Kelley, 1973, pp. 113-114). An example of two competing causes could be that, on the one hand, the perceiver could conclude that it is not a high-quality book, because the professor usually praises all books, while, on the other hand, the perceiver could conclude that it actually is a high-quality book, because the professor seemed to praise this book even more than other books. However, as pointed out above, two causes can also facilitate each other. This is called the augmentation principle (Kelley, 1973, p. 114).

Based on Kelley’s configuration concepts, Eagly, Chaiken, and Wood (1981) proposed their multiple plausible causes framework, according to which the perceiver attributes behavior based on perceived personal attributes of the communicator, such as his or her ideology or traits, and on external constraints or pressures in the communicator’s situation, such as the attitude of the audience (Eagly and Chaiken, 1993, p. 355). The information based on which the message perceiver tries to attribute the behavior of the communicator is often already present before the communicator expresses the message and, thus, the perceiver may start the causal analysis even before the message has been stated or perceived. It is also possible that this information is expressed simultaneously with the message or embedded into the message itself (Eagly and Chaiken, 1993, p. 355). This initial attitude of the perceiver towards the message communicator and the subsequently observed behavior and message cues, which, consistent with Kelley’s discounting principle, may support or undermine a certain explanation, are used by the perceiver to form a theory that explains the communicator’s behavior and to establish an expectancy towards the upcoming message (Eagly and Chaiken, 1993, pp. 355-356). In general, the confirmation of an established expectancy towards the message communicator leads to the attribution of the communicator’s behavior to the formed theory.

A very typical interpretation of an attribution to personal or situational characteristics is that the communicator is biased (Eagly, Chaiken, and Wood, 1981, p. 37) and, thus, the message’s validity and persuasiveness are decreased (Eagly and Chaiken, 1993, p. 356; Kelley, 1973, pp. 113-114). In the multiple plausible causes framework (Eagly and Chaiken, 1993, p. 357; Eagly, Wood, and Chaiken, 1978) two possible biases of the communicator are differentiated. The first is (1) the knowledge bias. In this case the perceiver concludes that the relevant information expressed by the communicator and his or her knowledge are incorrect; essentially, the perceiver thinks the communicator’s message is wrong. However, the communicator does not willingly try to deceive the audience with wrong arguments, but simply does not know better (Eagly and Chaiken, 1993, p. 357; Eagly, Chaiken, and Wood, 1981, pp. 38-39; Eagly, Wood, and Chaiken, 1978). The second is (2) the reporting bias. For a reporting bias, the perceiver has to come to the conclusion that the communicator knows very well about the weakness of his or her argument and intentionally tries to persuade the audience using this wrong information (Eagly and Chaiken, 1993, p. 357; Eagly, Chaiken, and Wood, 1981, pp. 38-39; Eagly, Wood, and Chaiken, 1978). In contrast, when a perceiver’s expectancies are disconfirmed, the perceiver has to discard the old theory and form a new one. Here, the perceiver often interprets this as especially convincing evidence having made the communicator overcome his or her bias. As a result, the perceiver attributes the communicator’s behavior to the external reality, which makes the message appear more valid. This effect is consistent with Kelley’s augmentation principle (Eagly and Chaiken, 1993, p. 356; Eagly, Chaiken, and Wood, 1981, pp. 37-38; Kelley, 1973, pp. 114-115).
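As a rough illustration of this logic, the following Python sketch summarizes how expectancy confirmation and the two bias types described above relate to the resulting attribution. It is an assumption-laden simplification added for clarity, not the framework's formal specification; the function and parameter names are hypothetical.

def attribute_message(expectancy_confirmed, knowingly_wrong=False):
    # Disconfirmed expectancy: behavior is attributed to the external reality
    # and the message is perceived as more valid (augmentation principle).
    if not expectancy_confirmed:
        return "external reality (message perceived as more valid)"
    # Confirmed expectancy: behavior is attributed to the formed theory,
    # typically a bias of the communicator, lowering perceived validity.
    if knowingly_wrong:
        return "reporting bias (intentional persuasion with wrong information)"
    return "knowledge bias (communicator unknowingly holds wrong information)"

print(attribute_message(expectancy_confirmed=False))
print(attribute_message(expectancy_confirmed=True, knowingly_wrong=True))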

The framework does not hypothesize which of the two possible perceived biases is discounted more strongly by perceivers. However, it was found that, in general, expectancy-disconfirming messages are more persuasive than expectancy-confirming messages. In other words, for expectancy-disconfirming messages the cause of the communicator’s opinion is usually attributed to the external reality and the message is considered to be more valid (Eagly and Chaiken, 1993, p. 360).

This paragraph briefly discusses the covariation model and the multiple plausible causes framework. The covariation model requires the message perceiver to have multiple occasions to observe a communicator’s message in order to form a belief about the attribution. As already mentioned, these are ideal-typical conditions that are frequently not met in reality, so people also have to come to a conclusion based on a single observation. In the context of online consumer reviews this problem becomes more serious, as customers can only write a single review per product due to limitations imposed by Amazon. At the same time, review readers most likely do not consider what reviews a review author has already written for other products, even though they technically could quite easily. Kelley’s configuration concepts (Kelley, 1973, pp. 113-118) address these issues by enabling a one-shot attribution based on several plausible explanations for the communicator’s behavior. The plausible explanations discount or augment each other depending on whether they inhibit or facilitate the formed theory. In turn, the multiple plausible causes framework by Eagly, Chaiken, and Wood (1981) enhances this concept by adding expectancies the perceiver can form towards the communicator and by distinguishing between the reporting bias, willingly deceiving the message recipient, and the knowledge bias, unwillingly spreading wrong information (Eagly and Chaiken, 1993, pp. 355-360). The multiple plausible causes framework suits the problem at hand very well, explaining how readers of online consumer reviews perceive review content, how beliefs about the review authors are formed, how these beliefs impact the perception of the message communicator, and how all of this happens in a situation in which multiple observations as a reference are lacking. Thus, the phenomena expected within this study can be explained using a perspective based on Eagly, Chaiken, and Wood's (1981) multiple plausible causes framework.

3.2. Model Development

In the following, the conceptual importance of the three review characteristics that are the subject of this study is outlined and connected to the theories explained above. First, review helpfulness, the dependent variable of the subsequently conducted empirical analysis, is explained. After that, the two independent variables are briefly illustrated: product type according to Nelson (1970), a frequently studied concept in the literature on online consumer reviews, and review type, a concept that, after thorough research, was found not to have been the subject of a comparable study before.

[...]

