5 Context of advice

5.1 Egocentric discounting

Egocentric discounting (also known as egocentric ‘advice discounting’) is a phenomenon wherein advice is under-weighted during integration with the advice-taker’s existing opinion, relative to a normative expectation. Most experiments that explore egocentric discounting use the Judge-Advisor System. The Judge-Advisor System has roles for a judge, usually the participant, and one or more advisors, often other participants. In a typical design, the judge offers an initial estimate for some decision, e.g. the total value of coins in a jar of change, then receives advice from the advisor, and then makes a final decision. The difference between the initial estimate and the final decision is taken as a measure of how influential the advice was, typically expressed in terms of the contributions of the initial estimate and the advice to the final decision.
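The influence measure described above is commonly operationalised in this literature as a ‘Weight on Advice’ ratio: the fraction of the distance from the initial estimate to the advice that the final decision covers. A minimal sketch (the function name and example numbers are illustrative, not taken from any particular study):

```python
def weight_on_advice(initial: float, advice: float, final: float) -> float:
    """Weight on Advice (WoA): the proportion of the shift from the
    initial estimate toward the advice reflected in the final decision.
    0 = advice ignored; 0.5 = equal averaging; 1 = advice adopted wholesale.
    """
    if advice == initial:
        raise ValueError("WoA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Illustrative case: initial estimate 100, advice 160, final decision 115.
# The judge moved a quarter of the way toward the advice.
print(weight_on_advice(100, 160, 115))  # 0.25
```

On this measure, egocentric discounting in its classic form appears as WoA values persistently below whatever weight the normative model prescribes.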

In these experiments, the task performance of the advisor is usually as good as or better than that of the judge. For most tasks, this performance structure can be well captured by a Gaussian answer-plus-error distribution, where the answer supplies the mean and the error supplies the variance. When combining individual estimates from multiple distributions, optimal results are obtained by weighting the estimates according to the relative precision of their parent distributions (Soll and Larrick 2009; Bahrami et al. 2010). This is analogous to multi-sensory integration (Ernst and Banks 2002; Körding et al. 2007), and to many other cognitive processes argued to be modelled on Bayesian integration, such as those listed in Section 2 of Colombo and Hartmann (2015). Where the performance of the advisor is higher than that of the judge, the advisor’s error, and thus the variance of the advisor’s distribution, will be lower, and therefore the precision of the advisor’s distribution will be higher. When a judge combines their own estimate with that of an advisor who is at least as good, an optimal judge will weight the advisor’s opinion at least as highly as their own (Equation ). The classic presentation of egocentric discounting is when, in these scenarios, the weight applied to the advice is lower than the optimal weight.
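The precision-weighted combination rule described above can be sketched as follows, under the simplifying assumption that the estimates are independent and Gaussian (the function name and example values are illustrative):

```python
def precision_weighted_estimate(estimates, variances):
    """Combine independent Gaussian estimates by weighting each in
    proportion to its precision (1/variance), the combination rule
    that minimises the variance of the pooled estimate."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    return sum((p / total) * e for p, e in zip(precisions, estimates))

# Illustrative case: judge estimates 100 with variance 4; a better-performing
# advisor estimates 120 with variance 2. The advisor is twice as precise,
# so receives weight 2/3 against the judge's 1/3.
print(precision_weighted_estimate([100, 120], [4, 2]))  # ≈ 113.33
```

A judge matching this rule would therefore place strictly more weight on the advice than on their own estimate whenever the advisor is the more accurate party; weights below this benchmark constitute discounting.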

Egocentric discounting is a robust phenomenon in advice-taking. It is not a generic inability to combine estimates: people can accurately combine estimates that do not include their own opinion (Yaniv and Choshen‐Hillel 2012), even adjusting for differences in ability between advisors (Soll and Mannes 2011). Similarly, Trouche et al. (2018) showed that when advice and initial estimates were surreptitiously switched, participants discounted their own initial estimates in favour of advice, suggesting that a person’s own opinion has a privileged status.

In this chapter, I review the literature on egocentric discounting, with particular attention to the manipulations that have been used and the explanations that have been offered. I conclude the chapter by expounding an alternative perspective which, in my view, clarifies the phenomenon and opens the field to a wider range of effective explanations.

5.2 Manipulations affecting egocentric discounting

A sizeable body of research has been conducted into egocentric discounting, using a wide variety of manipulations. These manipulations fit roughly into four categories: properties of the task, properties of the advice, properties of the advisor, and wider social factors.

Many of the experiments detailed below have not looked at egocentric discounting itself, but have instead looked at changes to the weight given to advice in integrated decisions. While egocentric discounting and advice weighting are inversely related, establishing the degree to which advice has been discounted requires a reference value from which its weighting has deviated. Many experiments do not calculate this normative value, nor even give sufficient detail to establish one, whether an objective normative value or an expected value from the perspective of the participant.

One obvious reference value is the case where equal weight is given to one’s own opinion and the advice received. This is often the value implicitly or explicitly stated. However, equal weighting is only normatively prescribed if advice is exactly as reliable as one’s own initial estimate.

It should be borne in mind, therefore, that depending on the circumstances, very high or low levels of advice weighting might not correspond to very low or high levels of discounting. When Schultze, Mojzisch, and Schulz-Hardt (2017) provide participants with advisory estimates from a random number generator, ascribing any weight at all to the advice is suboptimal. Conversely, there are many experiments in which expert advice is provided, and in these experiments weighting advice evenly corresponds to egocentric discounting because the advisor’s estimates are more accurate on average.

As a consequence of the uncertainty about which normative weighting strategy is required by each experiment (and sometimes this strategy is different from the perspectives of the researcher and the participant), egocentric discounting is primarily examined here in terms of changes in advice weighting. Where advice weighting diminishes, egocentric discounting is said to increase, without specific comment being possible on the exact level of discounting on display. In studies where normative strategies can be determined (e.g. Yaniv and Kleinberger 2000), advice weighting is below that predicted by the normative strategy, indicating egocentric discounting.

5.2.1 Task properties

The properties of the task chosen can affect the levels of egocentric discounting. Task difficulty is a major factor, perhaps mediated by the judge’s confidence, but broader features also play a role, including how advice is provided and whether unified estimates are required at any point.

5.2.1.1 Task difficulty

The most prominent feature of the task which affects egocentric discounting is difficulty. Gino and Moore (2007) asked participants to estimate a person’s weight from a clear (easy condition) or blurry (hard condition) picture, and saw less discounting on the hard task. Likewise, Wang and Du (2018) used blurring to increase the difficulty of estimating the number of coins in a photograph of a jar and found that participants discounted less in the blurry compared to the clear condition. Yonah and Kessler (2021) included a condition where there was no objectively correct answer in a random dot motion display, and saw increased willingness to take advice compared to conditions where participants were on average 67% correct. They also found that participants were less likely to seek advice where they rated their performance on the task as more competent, i.e. where they experienced the task as easier.

5.2.1.2 Judge’s confidence

Wang and Du (2018) saw full mediation of their difficulty manipulation by participants’ confidence on the task, while Gino and Moore (2007) saw only partial mediation. Other studies have manipulated the judge’s confidence through other mechanisms. See et al. (2011) used a power manipulation which was effective in part through raising judges’ confidence; and Gino, Brooks, and Schweitzer (2012) used anxiety manipulations to decrease judges’ confidence. In both cases, partial or full mediation through confidence occurred such that participants’ higher confidence in themselves and their decisions was associated with greater egocentric discounting. In many other experiments, including other experiments in Wang and Du (2018), confidence is not manipulated, but higher confidence is still associated with greater egocentric discounting. Using a similar methodology, but a different analytical approach, Moussaïd et al. (2013) observed that highly confident participants rarely updated their views following advice.

5.2.1.3 Judge-Advisor System structure

More complex task designs, in which reflection and discussion are encouraged, can reduce discounting. Minson, Liberman, and Ross (2011) and Liberman et al. (2012) asked dyads to take simultaneous roles as judge and advisor, providing initial estimates, exchanging advice during a discussion, and then providing final decisions on estimation tasks. Discounting was reduced, but still evident in this process, as it was in van Swol (2011), which used a traditional Judge-Advisor System paradigm where advice was delivered face-to-face. Liberman et al. (2012) did manage to eliminate discounting where, between exchanging advice and providing a final decision, participants produced a single mutually satisfactory collaborative judgement, and showed that the value of this collaborative judgement was itself improved by open-minded discussion more than by justifying estimates or exchanging bids. Schultze, Mojzisch, and Schulz-Hardt (2017) demonstrated that asking judges to selectively generate reasons why the advice might be correct or incorrect led to lower and higher levels of egocentric discounting respectively.

5.2.2 Advice properties

Several features of advice itself have been explored: the confidence of advice, its similarity to the initial estimate, whether it is solicited, and the amount of it provided.

5.2.2.1 Confidence of the advice

When judges are more confident, they tend to be less influenced by advice, and the expected corollary of this is that when advice is expressed more confidently the advice will be more influential. Soll and Larrick (2009) measured the confidence of advice and saw that higher advice confidence was associated with higher influence of advice. Moussaïd et al. (2013) also found that differences in confidence between judges’ and advisors’ estimates were useful in producing a decision tree determining the extent to which advice was taken. More generally, the assumption that more confident advice will be more accurate is known as the ‘confidence heuristic,’ and has been investigated as a phenomenon in its own right (Pulford et al. 2018; Price and Stone 2004; Bang et al. 2014).

5.2.2.2 Similarity of advice to the initial estimate

A frequently-manipulated property of advice, and the most interesting in the context of the first part of this thesis, is the similarity of the advice to the initial estimate. This is sometimes expressed as agreement or reasonableness of advice.17 The evidence concerning the effects of advice distance on advice influence is equivocal. Some studies show that advice is less influential the further it is from the initial estimate, while other studies show a greater influence of more distant advice. Other studies have indicated that the relationship is quadratic: low weight is assigned to advice which is too near or too far from the initial estimate, and a greater weight assigned to advice which is moderately distant.

Several studies have provided evidence for a simple agreement effect whereby advice that is nearer to the initial estimate is more influential. Yaniv (2004) manipulated advice to be nearer to or further away from the initial estimate and saw that the influence of advice decreased as the advice was further from the initial estimate (although this pattern did not hold for low-expertise judges in one experiment). Minson, Liberman, and Ross (2011) found that more distant advice was associated with less advice-taking behaviour once average distance between dyad members was controlled for, although their results were not expressed using standard advice-taking metrics. Yaniv and Milyavsky (2007) observed that advice closer to the initial estimate was also more influential when combining multiple pieces of advice simultaneously.

The opposite effect was demonstrated by Hütter and Ache (2016). They found consistently higher influence for advice that was further from the initial estimate, both for single pieces of advice and for integrating multiple pieces of advice.

A non-linear, U-shaped relationship between advice distance and advice influence has been shown in other studies. Moussaïd et al. (2013) asked participants to estimate answers to a variety of questions and give confidence ratings, both before and after receiving another person’s initial estimate as advice. They identified a three-zone structure to the influence of advice according to the distance between the initial estimate and the advice. Similar advice fell into the ‘confirmation zone,’ where opinion was unchanged but confidence increased; moderately distant advice fell into an ‘influence zone’ where opinion changed to accommodate the advice; and distant advice was generally ignored. Likewise, Schultze, Rakotoarisoa, and Schulz-Hardt (2015) showed in an elegant series of Judge-Advisor System experiments that relationships between egocentric discounting and advice distance were U-shaped. Schultze, Rakotoarisoa, and Schulz-Hardt (2015) also showed that confidence in final decisions was dramatically boosted by near advice, and that confidence gains decreased sharply with distance, consistent with the Moussaïd et al. (2013) account.

Related to the distance of advice is the reasonableness of advice, because where the judge has a somewhat reasonable estimate the distance serves as a reliable proxy for reasonableness. Gino, Brooks, and Schweitzer (2012) included an experiment in which (non-anxious) participants heavily discounted unreasonably high and unreasonably low advice, while discounting reasonable advice at a rate typically seen in Judge-Advisor System experiments. Similarly, Schultze, Mojzisch, and Schulz-Hardt (2017) saw judges discount wildly implausible advice more heavily, although it was still assigned some weight, even when labelled as coming from a random number generator.

The level of agreement may act as a cue to the plausibility of both the advice and the initial estimate. Somewhat counter-intuitively, this can lead to agreeing advice being assigned less weight than we might expect, because it bolsters confidence in the initial estimate.

5.2.2.3 Solicitation of advice

The extent to which advice is discounted may also be related to whether advice is wanted. This is hard to disentangle from the effects of task difficulty, because people are more likely to seek advice when they find the task more difficult to do (and hence their confidence in their response is lower).

Gino and Moore (2007) compared advice-taking behaviour across two experiments using a task in which participants had to estimate people’s weight from photographs. In one of these experiments participants received advice automatically and in the other participants had the option of clicking a button to receive advice. Participants opted to receive advice on almost all trials, including in an easy condition, and no differences were found in the extent to which advice was used between the compulsory and optional advice experiments. The very high rates of advice seeking in the optional experiment suggest that participants in the compulsory advice experiment may have been very welcoming of the advice due to the difficulty of the task. Gino (2008) showed that more expensive advice was sought less frequently but used more heavily, that expensive advice was used more heavily than free advice even when both were compulsory, and that paid-for advice was used more heavily than when the same advice was given for free as the result of a coin-flip. This study packaged questions together in blocks, and participants purchased advice for a whole block at once. This procedure means that the solicitation of advice is decoupled from the difficulty of the question on a trial-by-trial basis, although it is still likely that those participants who found all the questions in a block more difficult were more likely to solicit advice. In the real world, price is often an indicator of quality, albeit an imperfect one, and solicitation is likely to act as a proxy for confidence, which is again related to task difficulty: participants in the experiments may have taken more advice because they were less sure on the blocks in which they sought advice. Hütter and Ache (2016) found that participants opted to see a large number of advisory estimates in a calorific content estimation task when allowed to sample ad lib, although the influence of the advice was low.

5.2.2.4 Number of advisory estimates

Yaniv and Milyavsky (2007) presented participants in a Judge-Advisor System with the advice of 2, 4, or 8 advisors and saw that discounting behaviour did not lessen as the number of advisors rose. Hütter and Ache (2016) saw levels of advice usage for multiple pieces of advice which were relatively similar to levels of advice usage for a single piece of advice. In other words, people integrating their own opinion with two advisory estimates tend to integrate the advisory estimates and then treat them as a single piece of advice for integrating with their own opinion.

Minson and Mueller (2012) assigned participants to be members of a dyad or to act alone, and crossed their design such that some dyads received another dyad’s estimate as their advice and some received an individual participant’s estimate as their advice, while some individual participants received advice from dyads and some from other individuals. The advice was labelled as having come from an individual or a dyad. Despite the optimal policy being to weight initial estimates and advice according to the number of judges and advisors, weights were almost identical for individual advisors and dyad advisors, meaning that advice that represented the average of two advisors was treated as a single estimate.

This may be an artefact of presentation because these studies presented multiple estimates simultaneously as a list, or as a single combined estimate. If this is a real phenomenon, however, it is a critical bias: while sensible motivations for favouring one’s own opinion over another’s will be posited below, it is far harder to argue that the weight assigned to one’s own opinion should remain the same whether being integrated with one other estimate or ten other estimates.
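Under the simplifying assumption that every individual opinion is independent and equally precise, the normative policy these group-size studies test can be sketched as follows (the function name and numbers are illustrative, not taken from the studies themselves):

```python
def optimal_advice_weight(n_judges: int, n_advisors: int) -> float:
    """Weight the advice should receive when it averages n_advisors
    equally precise, independent opinions and the initial estimate
    averages n_judges of them: each underlying opinion counts equally,
    so weight is proportional to the number of opinions pooled."""
    return n_advisors / (n_judges + n_advisors)

# A lone judge receiving a dyad's averaged estimate should weight it 2/3,
# rather than treating it as one more opinion worth 1/2 or less.
print(optimal_advice_weight(1, 2))   # ≈ 0.667
print(optimal_advice_weight(2, 1))   # ≈ 0.333
```

The near-identical empirical weights for individual and dyad advisors reported above sit well below this benchmark for the dyad-advisor case.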

5.2.3 Advisor properties

A common strategy in advice-taking experiments is to manipulate the properties of the advisors (either within- or between-participants). Expertise is often manipulated, but some research has investigated factors such as the pre-existing relationship between judge and advisor, whether advisors are human or algorithmic, and whether advisors have a clear conflict of interest.

5.2.3.1 Expertise of advisors

By far the most frequently manipulated property of advisors is their ability to perform the task in question, known as their expertise, and identifiable with the Mayer, Davis, and Schoorman (1995) dimension of ability (§1.1.2.1). Advisor expertise can be communicated to participants in various ways, which can be broadly categorised into ‘showing’ approaches in which participants build up a picture of advisors’ performance over a series of instances, and ‘telling’ approaches where participants are presented with a summary of an advisor’s performance. As described by Equation , the ability of the advisor to perform the task (relative to the judge) alters the optimal level of advice-taking according to the normative model. This means that discounting may still occur even when the advisor’s estimate is weighted more highly than the judge’s.

Yaniv and Kleinberger (2000) showed in a series of historical date estimation experiments that better-performing advisors were more influential than worse-performing advisors, especially where feedback was provided on judges’ final decisions but also where it was not. Gino, Brooks, and Schweitzer (2012) used a coin-jar estimation task where participants were shown the advisor’s past performance to demonstrate that people give the same advice more weight if it comes from an advisor with a history of good performance, although this effect was not visible in an anxiety arm of the experiment. Rakoczy et al. (2015) saw that 3-6 year-old children took advice from advisors who had named animals correctly more seriously than advice from advisors who acknowledged their own ignorance in a version of the Judge-Advisor System adapted for young children.

Sniezek, Schrah, and Dalal (2004) saw greater dependence on advice provided by specially-trained ‘expert’ peers, but only where the proportion of reward money for accurate judgement paid to the advisor had been decided before advice was provided. Soll and Larrick (2009) observed greater influence of more expert advisors across three experiments in which expertise was signalled by familiarity ratings with a university for which the graduate salary was being estimated, country of origin in a geography knowledge task, and confidence in a set of trivia questions. Likewise, Soll and Mannes (2011) saw that participants paired with advisors who were more accurate than they were at estimating basketball teams’ points-per-game from other team statistics were more influenced by advice than those paired with advisors who were less accurate than they were. Tost, Gino, and Larrick (2012) used a weight-estimation task and observed that greater weight was placed on advice from advisors labelled as experts compared to the same advice from advisors labelled as novices. Schultze, Mojzisch, and Schulz-Hardt (2017) labelled advisors using a ranking system and saw that participants placed a higher Weight on Advice from highly-ranked advisors compared to low-ranked advisors, although the actual advice was (unbeknownst to the participants) the same. Wang and Du (2018) showed that participants placed greater Weight on Advice from expert as opposed to novice advisors in a coin-jar estimation task using advice that was genuinely expert or novice. Önkal et al. (2017) reported a series of experiments using a stock price estimation task in which advisor expertise was manipulated using both labels and experience. Where experience alone was provided there was no clear effect of expertise, while labelling had a substantial effect. Follow-up experiments did demonstrate effects of experience.

5.2.3.2 Familiarity of advisors

So far as I can ascertain, no one has reported on a Judge-Advisor System in which comparable advice is received from friends and non-friends. Sniezek and van Swol (2001) and van Swol and Sniezek (2005) found a correlation between the amount classmates had interacted and ratings of trust in those classmates as advisors, but they did not use a measure of advice-taking which allows calculation of advice weighting. Minson, Liberman, and Ross (2011) had long-term dance partners make estimates about their own dance performances in relation to professional assessment, and saw normal levels of egocentric discounting.18

It seems intuitive that advice will be weighted more highly when coming from people we know, but this does not appear to have been tested. With regard to the three-factor model of trust put forward in Mayer, Davis, and Schoorman (1995), knowing and trusting an advisor would be expected to increase the extent to which the advisor’s advice is taken.

5.2.3.3 Humanity of the advisor

Some experimenters have examined advice weighting for non-human advisors. In a stock-market forecasting task, Önkal et al. (2009) found that judges placed less weight on (identical) advice when it was labelled as coming from a statistical model versus a human expert forecaster. In their study on over-weighting, Schultze, Mojzisch, and Schulz-Hardt (2017) provided participants with advice labelled as coming from a random number generator and noted that its estimates were still assigned some weight by judges. The influence of this randomly-generated advice was roughly equivalent to that of human advisors not labelled as having high expertise.

5.2.3.4 Advisor conflict of interest

Where advisors have a conflict of interest, following the advice may benefit the advisor at the expense of the judge. Gino, Brooks, and Schweitzer (2012) observed that non-anxious judges assigned less weight to advice from advisors with a conflict of interest, while anxious judges assigned similar weight regardless of conflict of interest. Bonner and Cadman (2014) similarly saw less influence of advice from advisors with a conflict of interest in a CEO-remuneration task.

5.2.4 Wider social factors

The Judge-Advisor System as used in experiments is usually quite abstracted away from advice-taking in the real world, in which broader social factors are likely to offer a highly influential context dictating or suggesting norms for advice-taking. Although experiments are somewhat shielded from these broader factors, the factors are nevertheless apparent in some Judge-Advisor System studies.

Fairness may be a universal value (Waal 2014), and it is certainly a lauded value in the developed Western societies where the majority of Judge-Advisor System research takes place. Consistent with this, judges in the Sniezek, Schrah, and Dalal (2004) study tended to offer both novice and expert advisors equal shares in their reward money. This sense of fairness is seen to extend to advice-weighting, too. Harvey and Fischer (1997) saw that expert judges consistently placed some weight on novice advice, and attributed this advice-taking behaviour to a social requirement for fairness. Likewise, Mahmoodi et al. (2015) showed that dyads making perceptual decisions would consistently over-weight the estimate of the less accurate dyad member despite evidence that this impaired performance and thus decreased the dyad’s reward money.

Feeling powerful was observed to lead to decreases in advice-taking by See et al. (2011) and Tost, Gino, and Larrick (2012), partially mediated by the judge’s confidence. Somewhat similarly, Gino, Brooks, and Schweitzer (2012) saw that angry judges took less advice through being more self-confident, while anxious judges sought and took more advice through being less self-confident.

Finally, developmental maturity seems to be an important factor. Rakoczy et al. (2015) found that while 3-6 year old children did differentiate between knowledgeable and ignorant advisors, they nonetheless placed exceptionally high Weight on Advice from both advisors.

5.3 Purported explanations for egocentric discounting

Almost as numerous as the studies into egocentric discounting are the explanations offered to account for it. Despite this, no explanation has managed to withstand critical empirical scrutiny. Below, I offer a brief review of the explanations which have been put forward to date.

5.3.1 Egocentric bias

Among the earliest explanations for egocentric discounting was egocentric bias: the belief that one’s judgement is superior to that of others. Harvey and Fischer (1997) attributed egocentric discounting to such self-serving estimates of ability, as when most car drivers report being better than the average driver (Svenson 1981). The explanation also suggested a mediating role of overconfidence, tying it neatly into later similar explanations from See et al. (2011), Gino, Brooks, and Schweitzer (2012), and Tost, Gino, and Larrick (2012). For all these authors, a judge’s confidence in the initial estimate is the principal driving force behind the weighting of advice. This view is appreciable from a Bayesian perspective on advice integration (Bahrami et al. 2010): as with multi-sensory integration (Fetsch et al. 2012), informational cues coalesce optimally onto the correct answer where they are weighted according to their precision. In such a framework, there is an optimal (Bayesian) integration process which is being fed faulty inputs (because the judge’s own estimate is believed to have erroneously high precision).

Overconfidence certainly seems to have a role, and accounts well for many of the nuances in advice-taking experiments, including the findings that advice is taken more readily for difficult tasks (where judge confidence is lower) and that judges made to feel more powerful (and hence more confident) take less advice. It is unlikely to be the whole story, however: Trouche et al. (2018) note that Soll and Mannes (2011) were able to disentangle ratings of ability and advice-taking behaviour, and saw that egocentric discounting occurred even when the judge’s assessment of relative ability was taken as true.

5.3.2 Access to reasons

One of the most influential explanations of egocentric discounting has been Yaniv’s argument that judges have greater access to the reasons justifying their own decisions than to those justifying the decisions of others, due to the opaqueness of other minds (Yaniv and Kleinberger 2000). This differential access to reasons suggests, from the perspective of the judge, that there is greater evidence favouring the judge’s own opinion than favouring the estimate of their advisor, and that, analogous to the confidence case above, the more well-supported opinion should be given a greater weight during integration.

Trouche et al. (2018) presented judges with their own estimates labelled as advice, and with advice labelled as their own estimates, and observed that judges persisted in placing greater weight on what they believed was their own initial estimate rather than what was actually their own initial estimate. This result is a serious problem for the access-to-reasons explanation, because there is no good reason why simply relabelling estimates should change the judge’s internal census of evidence supporting the estimates.

5.3.3 Anchoring

Some researchers have argued that egocentric discounting can be explained by anchoring. Anchoring is a well-established phenomenon whereby numbers clearly unrelated to a numerical estimation task can nevertheless bias estimates, as when participants first asked whether the height of Mount Everest in feet is higher or lower than 2,000 subsequently give far lower estimates than participants first asked whether it is higher or lower than 45,500 (Jacowitz and Kahneman 1995). Bonner and Cadman (2014) suggested that anchoring was responsible for judges’ over-use of outlandishly extravagant suggestions for CEO remuneration. In a more thorough set of studies, Schultze, Mojzisch, and Schulz-Hardt (2017) observed a consistently greater-than-zero weight on transparently useless advice, which they ascribed to anchoring to the advice. While these cases for anchoring may be admitted, they concern the over-weighting of advice rather than the under-weighting which characterises egocentric discounting. This is because the putative anchor is the advisory estimate.

Historically, it has been suggested that the judge’s initial estimate acts as an anchor (Harvey and Fischer 1997), but this was later ruled out when Harvey and Harries (2004) demonstrated egocentric discounting persevered when the labels for the judge’s initial estimate and the advisor’s advice were switched. If anchoring to the initial estimate were responsible for egocentric discounting the relative weighting of initial estimate and advice would have followed the actual values rather than the labelled values, whereas the data showed that discounting occurred towards the labelled rather than actual initial estimate.

5.3.4 Sunk costs

Another general cognitive bias recruited as an explanation for egocentric discounting is the sunk costs fallacy, in which one perseveres with a poor strategy in order to justify the cost or effort that has already gone into pursuing it. Interestingly, sunk costs have been recruited to explain both egocentric discounting and following advice.

Gino (2008) had participants receive advice for free or for a fee depending upon the outcome of a coin flip. They found that the same advice was more influential where payment had been taken, and that the more expensive the advice, the more influential it was, whether paid for or free. Assuming that participants viewed the greater cost as a marker of quality, the remaining effect contingent on whether or not payment had actually been taken can be explained by sunk costs. Similarly, Sniezek, Schrah, and Dalal (2004) found that, at least for expert advisors, judges who allocated a portion of their prospective reward money to the advisor before receiving the advice placed more weight on that advice.

Ronayne and Sgroi (2018) also invoked the sunk costs fallacy, but suggested that it could account for using less advice, i.e. discounting. These authors presented participants with the opportunity to use another participant’s results rather than their own in a reward lottery, which they somewhat oddly labelled ‘advice.’ Despite this ‘advice’ being transparently better, participants frequently chose to keep their own results rather than switch, and this finding is explained on the basis that those participants were loath to forfeit the work they had done to obtain their own results by adopting another’s. Extended to the Judge-Advisor System, this explanation would predict that more effortful tasks would lead to greater egocentric discounting. This is an intriguing prediction, but perhaps because effortfulness tends to covary with difficulty (which in turn decreases egocentric discounting), it does not appear to have been studied.

5.3.5 Naïve realism

A third cognitive bias invoked to explain egocentric discounting is naïve realism. Naïve realism occurs when people treat their own perceptions as reflecting a shared underlying reality, and others’ perceptions as misguided or biased to the extent that they disagree. Minson, Liberman, and Ross (2011) argue strongly for a naïve realism explanation of egocentric discounting, although it is never fully explained why, on a naïve realism view, any adjustment towards advice is warranted (because ex hypothesi the initial estimate reflects the true answer). The most creative element of their experiments involves funnelling integrated decisions (corresponding to judges’ final decisions in the typical Judge-Advisor System) through a collaborative joint decision process before extracting a final individual decision. Naïve realism once again fails to provide a compelling explanation of why the final individual decisions closely reflected the collaborative joint decisions rather than the initial estimates: rather than continuing to endorse their own view of ‘reality,’ participants appeared willing to accept the joint view once it had been established.

5.3.6 Responsibility / feeling of deserving outcomes

Some advice-taking may be explicable on the basis of responsibility sharing. Harvey and Fischer (1997), Mahmoodi et al. (2015), and Ronayne and Sgroi (2018) have all suggested that taking advice can transfer some of the responsibility for the outcome of a decision onto the advisor. Where uncertainty is high, or where rewards are shared, this can be particularly useful.

While distribution of responsibility is more a reason for reduced rather than increased egocentric discounting, when combined with an account that predicts ascribing very low or no weight to advice by default (e.g. naïve realism), it can explain why advice weighting is higher than the zero that would be expected. It seems more plausible, however, that factors which promote advice-taking such as fairness, advisor expertise, and distribution of responsibility serve to place a limit on egocentric discounting, rather than that complete discounting is a default strategy from which these factors move judges.

5.3.7 Wariness

Trouche et al. (2018) designate the above explanations ‘proximal explanations’ because they offer mechanistic accounts of how discounting occurs. Rather than joining this discussion, they instead provide an ‘ultimate explanation,’ which may explain why discounting occurs: because advisors’ interests do not always align with judges’, some level of discounting offers protection against relying too heavily on advice that may be deliberately harmful. On this view, discounting is an evolved response to misaligned incentives between judges and advisors. The account I offer below is in the mode of this explanation.

5.4 A wider view of egocentric discounting

The explanations outlined above are all offered as explanations for a deviation from normative optimality. I have chosen to take a different approach, asking instead under which circumstances the observed behaviour would be an optimal policy, and exploring the plausibility of those circumstances continuing to exert an influence in experimental settings where the normative behaviour might be averaging one’s own opinion with advice.

As an anecdotal starting point, I note that if one tells anyone who is not an advice-taking researcher that people do not take others’ opinions as seriously as their own when making decisions, the response is likely to be a flat “of course,” perhaps accompanied by perplexity as to why such an obvious statement is being presented as a valuable insight. The approach taken here works to codify this intuition as a set of hyper-priors: expectations about the relative utility of advice as compared with one’s own opinion. I argue that the normative model of advice-taking, and the pared-down experimental design with which it is entwined, seek to take the situation out of the evolutionary history of advice, but cannot take the evolutionary history of advice out of the situation.

According to this hypothesis, the hyper-priors on advice-taking are a consequence of the many opportunities for deception and misunderstanding which apply to advice but not to one’s own opinion. The most obvious of these is the opportunity for deception. As Trouche et al. (2018) point out, advisors do not always have the judge’s best interests at heart when supplying advice. Consider, for example, a situation where one co-worker, Sally, asks another, Hanan, whether it is a good idea to apply for a promotion. Hanan may think it would be good for Sally to apply, because Sally is well-qualified and hard-working, but nevertheless discourage Sally from applying because Hanan herself is going to apply and wishes to reduce the competition.

It is not necessary, however, for there to be misaligned incentives of this kind. Advice may be less informative than one’s own opinion where the advisor is less able to perform the task, or does not engage with it as thoroughly. Consider, for instance, Sally asking Hanan for advice on where to go on holiday. Hanan may wish to maximise the probability that Sally has a wonderful holiday by offering the best advice possible, but, because Hanan does not have perfect knowledge of Sally’s preferences, nevertheless advise Sally to select a non-optimal destination. Likewise, Sally may well have spent considerable time researching and thinking about the question, and it is not reasonable to expect that Hanan would do likewise because it is, after all, Sally’s holiday.

There is room for misunderstanding even where an advisor’s interests are aligned with the judge’s and the advisor’s ability equals the judge’s. Advice must be communicated to the judge, and communication in the real world is inherently noisy. Communication of advice requires something in the mind of the advisor to be encoded into a set of signals, transmitted to the judge, and then decoded into something in the mind of the judge, at which point it can be integrated with what the judge already believes. Information can be degraded at any of these steps, resulting in advice that is less informative than the judge’s own opinion. When Sally asks Hanan what to do about a work problem, and Hanan rapidly and confidently rattles off a suggestion, Sally may be forgiven for thinking that the rapidity and confidence reflect Hanan’s confidence in that particular suggestion rather than a stable characteristic of Hanan. If Sally does not adjust for the fact that Hanan is always more confident about things than Sally is, then Hanan’s suggestion will be weighted more heavily than its informational value warrants.
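The effect of channel noise can be made concrete under the Gaussian framing introduced in §5.1: noise added in transmission inflates the variance of the advice as received, which lowers its precision and therefore the weight an optimal judge should place on it. The sketch below is purely illustrative; the variance values are my own assumptions, not drawn from any cited study.

```python
# Illustrative sketch (variance values assumed): optimal advice weight
# under precision-weighted (inverse-variance) Gaussian integration,
# before and after adding communication noise to the advice.

def optimal_advice_weight(judge_var, advice_var):
    """Weight on advice = its precision relative to total precision."""
    judge_precision = 1.0 / judge_var
    advice_precision = 1.0 / advice_var
    return advice_precision / (judge_precision + advice_precision)

judge_var = 4.0    # variance of the judge's own estimate
advisor_var = 4.0  # advisor exactly as accurate as the judge
channel_var = 2.0  # noise added in communicating the advice

# Noiseless communication: equal ability implies equal weighting.
w_clean = optimal_advice_weight(judge_var, advisor_var)

# Channel noise adds variance to the advice as received, so the optimal
# weight falls below 0.5 even though the advisor is equally able.
w_noisy = optimal_advice_weight(judge_var, advisor_var + channel_var)

print(round(w_clean, 3))  # 0.5
print(round(w_noisy, 3))  # 0.4
```

On this framing, a judge who knows that advice reaches them through a noisy channel is behaving optimally, not egocentrically, by weighting it below the advisor’s raw ability.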

5.4.1 Compatibility with existing explanations

As noted by Trouche et al. (2018), ultimate explanations of the kind offered here do not invalidate proximal explanations of the kind offered in §5.3. My view is most consistent with a Bayesian integration view in which advice is weighted by a range of features of the advice, the advisor, and the context, with the further proviso that hyper-priors govern the default level of discounting. I am thus content to observe the contest to provide proximal explanations for changes in the level of egocentric discounting, and only baulk at claims that egocentric discounting is ‘irrational.’ I suggest that, once egocentric discounting as a default is accepted, adjustments in the level of discounting are almost all transparently rational.
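One way to make this view concrete is as precision weighting with the normative weight shrunk towards zero by a default discount. The multiplicative form and the specific discount value below are my own illustrative assumptions about how such a hyper-prior might operate, not a fitted model.

```python
# Minimal sketch (assumed functional form): the advice weight is the
# normative precision-weighted value, scaled by a default discount that
# stands in for the hyper-prior on the relative utility of advice.

def advice_weight(judge_var, advice_var, discount=0.6):
    """Precision-weighted advice weight, shrunk by a default discount.

    `discount` is the fraction of the normatively optimal weight that
    is applied by default (an illustrative hyper-prior value).
    """
    normative = (1.0 / advice_var) / (1.0 / judge_var + 1.0 / advice_var)
    return discount * normative

def final_decision(initial, advice, judge_var, advice_var, discount=0.6):
    """Judge's final decision: a weighted average of estimate and advice."""
    w = advice_weight(judge_var, advice_var, discount)
    return (1 - w) * initial + w * advice

# Equally able advisor: normative weight 0.5, discounted weight 0.3,
# so the final decision sits closer to the initial estimate.
print(final_decision(100.0, 120.0, judge_var=4.0, advice_var=4.0))  # 106.0
```

Proximal factors such as advisor expertise or fairness would, on this sketch, act by raising or lowering the effective discount rather than by overturning the integration rule itself.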

5.4.2 Evidence

In the chapters that follow, I present evidence from computational agent-based evolutionary simulations and on-line human behavioural experiments to illustrate the plausibility of the claims made above. The evolutionary simulations demonstrate that an array of plausible factors affecting the relative utility of advice can create an environment in which egocentric discounting is adaptive, and the behavioural experiments demonstrate that individual humans respond to some of these factors by adjusting their behaviour. I note that only the first of these is necessary for establishing the plausibility of the theory: the behavioural adjustments serve more to support the extension of the argument, that changes in the level of discounting are rational adjustments.
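The core logic of such a simulation can be conveyed with a toy example. Everything below is an illustrative assumption of my own, not the simulations reported in the following chapters: agents inherit a weight on advice, honest advice is exactly as accurate as an agent’s own estimate, but advice is occasionally misleading. With these parameters the analytically optimal advice weight is 1/7 ≈ 0.14, well below the 0.5 that would be optimal were all advice honest, and selection on decision accuracy drives the population towards it.

```python
# Toy agent-based evolutionary simulation (all parameters assumed for
# illustration). Agents inherit a weight on advice; advice is usually as
# accurate as the agent's own estimate but is sometimes misleading.

import random

random.seed(1)

POP = 100        # population size
GENS = 200       # generations
TRIALS = 20      # decisions per agent per generation
NOISE = 1.0      # s.d. of own estimates and of honest advice
P_BAD = 0.2      # probability a piece of advice is misleading
BAD_BIAS = 5.0   # bias added to misleading advice
MUT = 0.05       # mutation s.d. on the inherited advice weight

def lifetime_error(w):
    """Mean squared error of final decisions for advice weight w."""
    total = 0.0
    for _ in range(TRIALS):
        truth = 0.0
        own = truth + random.gauss(0, NOISE)
        advice = truth + random.gauss(0, NOISE)
        if random.random() < P_BAD:
            advice += BAD_BIAS  # misaligned advisor
        final = (1 - w) * own + w * advice
        total += (final - truth) ** 2
    return total / TRIALS

# Initialise heritable advice weights uniformly, then select on accuracy.
weights = [random.random() for _ in range(POP)]
for _ in range(GENS):
    scored = sorted(weights, key=lifetime_error)   # best (lowest error) first
    parents = scored[:POP // 2]                    # truncation selection
    weights = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, MUT)))
               for _ in range(POP)]

mean_w = sum(weights) / POP
print(f"evolved mean advice weight: {mean_w:.2f}")
```

The evolved mean settles well below 0.5, illustrating how occasional misaligned advice alone can make a default level of discounting adaptive even when the advisor’s underlying accuracy matches the judge’s.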