When Nobel Prize-winning psychologist and economist Daniel Kahneman has a new book out, you can expect it to be worth reading. The author of ‘Thinking, Fast and Slow’, possibly the world’s most famous behavioural expert, has teamed up with Olivier Sibony and Cass R. Sunstein to write a book on (and entitled) ‘Noise’. And, in my opinion, it doesn’t disappoint.

Noise, in this context, refers to random variation around a target value. For example, when I aim at the treble 20 on a dartboard, the average location of all the darts I throw may well be somewhere near the treble 20. The individual darts, though, are all over the place. That scatter is noise.

Kahneman et al. are interested in the noise around human judgements, particularly in fields where good judgement is essential. They look closely at the criminal justice system and at insurance underwriting, but the same idea applies to any pricing where human judgement sets the price.

Sentencing criminals – bear with me here, it is relevant in the end!

When sentencing criminals, the sentence should ideally reflect society’s judgement on punishment, deterrence and rehabilitation, and should take into account the crime itself, the circumstances, the offender’s past record and the danger they pose to society. This is what judges are for.

What it should not reflect is which judge happens to be presiding, whether that judge has a stomach-ache on the morning of sentencing, or whether their favourite sports team just lost. Most people would agree that sentences need to be applied fairly and evenly; randomness is unfair.

And yet every study into sentencing suggests huge discrepancies that make the system grossly unfair. Although it’s impossible to tell from any individual case, the studies suggest it is quite possible that, given the same case and the same circumstances, one judge could hand down 15 years in prison while another gives just community service. And, as in many other fields, the disparity between judgements is far greater than almost anyone inside the system would believe possible.

People tend to think their judgement is better than it actually is

In the 1970s in the US, seeing this unfairness, a judge started a push to standardize sentencing, taking much of the discretion away from judges. By any statistical examination this was a great success, removing a lot of bias and unfairness from the system.

But people always think their judgement is better than it is. And it is very easy to find the occasional example where the strict sentencing guidelines were inappropriate for the case. An outcry over a clearly unfair sentence can help to mobilize support against the standardization. What is not so easy to see is the far greater number of examples where huge unfairness was avoided. And so, sadly, in the early 2000s the judges decided that they knew better and struck down the rules.

Reducing noise is important whatever your bias

Statistically, the damage caused by allowing judges discretion hugely outweighs the benefit of that discretion. And this holds true almost regardless of what your own bias is.

Imagine a set of crimes for which society thinks the right punishment is a 7-year prison sentence. Personally, I am in favour of rehabilitation rather than punishment for all but the most heinous crimes, so I would be a lenient judge handing down an average sentence of 4 years. But there is also a strict judge whose average sentence is 10 years. These are our biases.

One day, the strict judge has toothache and it is my birthday. If a criminal comes before me, my good mood might mean a 2-year sentence rather than my usual 4 years. But if the same criminal, for the same crime, had the misfortune of going before the strict judge, his bad mood might push the sentence to 15 years. Both of these are bad outcomes, because society says the correct sentence is 7 years. In fact, both of us judges would prefer that two criminals with the same crime each got 7 years rather than face this lottery, and not just for reasons of fairness.

To put it mathematically, you can measure the correctness of multiple judgements using Root-Mean-Square (RMS) Error. This takes the average of all the squares of the errors and then square-roots that value.
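
In symbols, for judgements x_1 to x_n measured against a target value t:

\[RMSE\ =\ \sqrt{\frac{(x_1-t)^2\ +\ \ldots\ +\ (x_n-t)^2}{n}}\]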

For me, the correct sentence is 4 years. So, the RMS Error over the two sentences that contained noise is:

\[RMSE\ Lenient\_Noise\ =\ \sqrt{\frac{(2-4)^2 + (15-4)^2}{2}}\ =\ 7.9\ years\]

The strict judge is also not happy. According to his values, the RMS error is:

\[RMSE\ Strict\_Noise\ =\ \sqrt{\frac{(2-10)^2 + (15-10)^2}{2}}\ =\ 6.7\ years\]

Now what if both cases had regulated sentences, where we both had to give 7-year sentences? For me the outcome is a big reduction in RMS Error, from 7.9 years to 3 years.

\[RMSE\ Lenient\_NoNoise\ =\ \sqrt{\frac{(7-4)^2 + (7-4)^2}{2}}\ =\ 3\ years\]

And the strict judge is also happier. His RMS error is reduced from 6.7 years to 3 years too.
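
Explicitly, by the same calculation as before:

\[RMSE\ Strict\_NoNoise\ =\ \sqrt{\frac{(7-10)^2 + (7-10)^2}{2}}\ =\ 3\ years\]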

The reduction of noise makes the sentences fairer whatever our biases were.

And just to prove the point, let’s consider the case where the strict judge’s 10-year sentences are adopted as the standard instead of the accepted 7 years. You would think that would be worse for me than the current noisy system with its 7-year average, wouldn’t you?

But actually it wouldn’t be!

Look at my RMS error if both get 10 years in prison:

\[RMSE\ Lenient\_StrictNoNoise\ =\ \sqrt{\frac{(10-4)^2 + (10-4)^2}{2}}\ =\ 6\ years\]

You see that my RMS Error falls from 7.9 years with the noisy 7-year average to just 6 years with the noise-free 10-year average!

This leads to the completely unintuitive result that I would prefer stricter sentences applied consistently over more lenient sentences left to individual judgement!
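
If you want to check the arithmetic, here is a minimal Python sketch reproducing the numbers above (the rmse helper and its name are mine, not from the book):

```python
# A quick check of the sentencing numbers used in the example above.
from math import sqrt

def rmse(sentences, preferred):
    """Root-mean-square error of a set of sentences against a judge's preferred sentence."""
    return sqrt(sum((s - preferred) ** 2 for s in sentences) / len(sentences))

noisy = [2, 15]  # my good-mood 2-year sentence and the strict judge's bad-mood 15-year sentence

print(rmse(noisy, 4))      # ~7.9 years: lenient judge, noisy sentences
print(rmse(noisy, 10))     # ~6.7 years: strict judge, noisy sentences
print(rmse([7, 7], 4))     # 3.0 years: lenient judge, standardized 7-year sentences
print(rmse([7, 7], 10))    # 3.0 years: strict judge, standardized 7-year sentences
print(rmse([10, 10], 4))   # 6.0 years: lenient judge, standardized 10-year sentences
```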

But what does this have to do with revenue management??

I’m almost there. But first, let’s look at a subject much closer to revenue management – insurance. Here, if you charge too little, you lose potential profit you would otherwise have made. And if you charge too much, you don’t get the business at all, because a competitor does. Does this sound familiar?

The authors of ‘Noise’ were asked to audit the judgements made at an insurance underwriting company. They asked the most experienced people in the company what variation they expected in the premiums quoted for the same set of circumstances. They thought it would be around 10-15% – so perhaps the lower quote would be 85 and the higher 100. This was seen as acceptable.

But in fact, the authors found that the premiums given for the same set of circumstances varied by a massive 55%. Just as in every other industry studied, the variation in human judgement was much larger than anyone thought. And it was costing the company hundreds of millions of dollars.

When you are too high in one case and too low in another, the two errors don’t cancel out. When you are too high, you miss out on the business entirely. And when you are too low, you give away a lot of potential profit.
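
As a rough illustration of why these errors don’t cancel, here is a small sketch; the premium, margin and win/lose rule below are simplifying assumptions of mine, not figures from the book:

```python
# Hypothetical illustration: a quote that is too high and one that is too low do not cancel out.
# Assumptions (mine, not from the book): the correct premium is 100, winning at that price
# earns a margin of 20, quoting above it loses the deal, quoting below it wins the deal
# but gives the underpricing away.

CORRECT_PREMIUM = 100
MARGIN_AT_CORRECT = 20

def profit(quote: float) -> float:
    """Profit from a single quote under the simplified assumptions above."""
    if quote > CORRECT_PREMIUM:
        return 0.0  # too expensive: a competitor wins the business
    return MARGIN_AT_CORRECT - (CORRECT_PREMIUM - quote)  # business won, but margin given away

# Two quotes that are 15 too high and 15 too low average out to the correct premium,
# yet the profit does not average out:
print(profit(115) + profit(85))   # 5, versus...
print(profit(100) + profit(100))  # 40 if both had been quoted correctly
```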

Is this starting to sound familiar?

Revenue managers have biases and noise in their judgements too!

I’m hoping you are seeing where this is going. Revenue managers have to make judgements every day. Like me, in my imaginary role as a hippy judge, we all have biases in our revenue management.

I might like to price unusually low at long lead times to try to attract some business early. Maybe I like to keep the minimum price high so as not to damage the brand’s reputation. I might even be consistently pricing Sundays too high, for no good reason. All of these are biases that may mean we don’t earn maximum revenue.

But, as with the judges and the insurers, we are also all much less consistent in our judgements than we think we are. We don’t even apply our own biases consistently. The following are all sources of noise:

  • We have an inertia bias that makes us more likely to leave a price where it is than to change it.
  • We are unable to check all new information and sometimes miss it.
  • Some days we weight one piece of information (for example, recent pickup) as more important, and other days we weight competitor prices as more important – not for any good reason, just because that is where our focus happens to be.
  • And there are many more causes of noise; sometimes it can come down to whether we are feeling positive or negative on that day.

For hotel pricing, we will consider ‘bias’ to be suboptimal pricing theory, and ‘noise’ to be inconsistent application of that theory.

We sometimes focus more on removing bias than noise

The point the book ‘Noise’ makes is this:

To get the best pricing it would be ideal to remove all biases. But we sort of knew that already. We may study our different weekdays to see where we have done badly in the past and try to improve.

But often the larger contributor to lost revenue is not the bias but the noise around the prediction. It is us not doing what we intend to do, not applying our own rules (and biases) consistently. And if we can reduce the noise – in other words, price consistently – then we will hugely reduce pricing error even with large biases.

In RMS Error terms:

\[RMSE^2\ =\ Bias^2\ +\ Noise^2\]
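
Here, bias is the gap between your average judgement and the correct value, and noise is the spread of your individual judgements around your own average:

\[Bias\ =\ \bar{x}\ -\ t,\ \ \ \ Noise\ =\ \sqrt{\frac{(x_1-\bar{x})^2\ +\ \ldots\ +\ (x_n-\bar{x})^2}{n}}\]

With these definitions, the decomposition above is exact.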

Often the noise of an individual doing the pricing is larger than the bias. What does this mean? Let me give an example…

Often noise is more important

Pretend I am a good revenue manager, and I am given 10 identical pricing situations (spaced far enough apart in time that I have forgotten the previous one each time I set the price). In every one of these situations, the correct price according to my biases is $150, so across the 10 situations the average price I pick will be $150.

Actually, the correct, revenue-maximizing price is $140, but I don’t know this. I have $10 of bias.

So on ten occasions I set this price, and I am actually reasonably consistent: twice I offer the room for $125, twice for $175 and the other six times for $150.

As we did with the sentencing, let’s calculate the RMS Error against the revenue-maximizing price of $140. I won’t bore you with the equation, but the RMS Error comes to $18.70.

This compares to an RMS Error of just $10 if I could consistently set my price at $150.

How does this RMS Error change if we halve the bias to just $5 – i.e. if my average price were $145? It goes down to $16.60, which is good.

But how does it change if we instead halve the noise (to a variation of $12.50) while keeping the original bias? The RMS Error is then reduced to $12.75.

In this example, we get a much larger reduction in error by reducing noise than by reducing the bias.
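
To make the arithmetic concrete, here is a minimal Python sketch reproducing these figures; the halved-bias and halved-noise price lists are my own reconstruction of the scenarios described above:

```python
# Reproducing the pricing example: ten prices compared against the revenue-maximizing $140.
from math import sqrt

CORRECT_PRICE = 140  # the true revenue-maximizing price, unknown to me

def rmse(prices, target=CORRECT_PRICE):
    return sqrt(sum((p - target) ** 2 for p in prices) / len(prices))

original   = [125, 125] + [150] * 6 + [175, 175]          # average $150: bias $10, original noise
no_noise   = [150] * 10                                    # same $10 bias, zero noise
half_bias  = [120, 120] + [145] * 6 + [170, 170]           # average $145: bias $5, original noise
half_noise = [137.5, 137.5] + [150] * 6 + [162.5, 162.5]   # bias $10, half the original noise

print(rmse(original))    # ~18.7  (the $18.70 above)
print(rmse(no_noise))    # 10.0
print(rmse(half_bias))   # ~16.6
print(rmse(half_noise))  # ~12.75
```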

Can an RMS reduce your RMS error?

Do revenue management systems (RMSs) give perfect prices? No, they have biases: biases programmed into the algorithms and biases created by the settings their clients choose.

By examining past performance, it is possible to reduce those biases. In an unchanging world it would theoretically be possible to remove all biases, but in the real world pricing will never be perfect. The good news with an RMS is that, because its biases are applied consistently, they should be easier to find and reduce.

An interesting question is which has fewer biases: an RMS or a good revenue manager? The good revenue manager can take context into account in a way that the computer can’t. On the other hand, the human is unable to analyse past performance as systematically. Personally, I don’t know the answer; it surely depends on the revenue manager in question and on how the RMS is set up.

However, what an RMS always does is hugely reduce noise. Given the same set of circumstances, the RMS will give the same price every time.

As we have seen, the reduction of noise is often more important than the reduction of bias. Obviously this depends on the relative levels of bias and noise, but one must always be aware that noise is often much larger than we think it is for the reasons discussed above.

It is important to be aware of our fallibilities, and to get help to correct them. When we built RoomPriceGenie, we wanted to make a system that gives the revenue manager control over prices and applies them consistently, but with an underpinning of theory that helps to reduce the revenue manager’s biases.

We are all fallible, and there is noise in all judgements. What Kahneman et al. bring to light is that the financial cost of noise is probably a lot larger than we intuitively imagined.

Ari Andricopoulos
CEO
RoomPriceGenie