Industry Update
Opinion Article – 16 December 2015

How to Avoid Common Mistakes in Text Analytics of Consumer Reviews

By Jeff Catlin, CEO - Lexalytics

Let's start with a basic primer on text analytics (aka text mining). Depending on whom you ask, the two are different, but in my experience, it really comes down to what industry you are coming from. This is one of the more annoying things about the "natural language processing" industry – the terminology hasn't settled down quite yet.


Since they're close enough, let's just use "text analytics" for purposes of this article.

To quote Wikipedia: "The term text analytics describes a set of linguistic, statistical, and machine learning techniques that model and structure the information content of textual sources for business intelligence, exploratory data analysis, research, or investigation."

Or, as we like to put it, "text analytics" tells you "who, what, where, when, and (sometimes) why" – so that you can figure out what you need to do about it.

Text analytics is a very powerful tool and can be transformational for a business. However, there are a number of very common mistakes that we see again and again, and we'd like to help you keep from making those mistakes when analyzing customer reviews.

What We See People Doing Wrong

  • Not Starting From the Question

This is, by far, the most common problem. You need to start from a question in order to get to a useful answer. Even if it is a single, simple question like "What do my guests think about the bathrooms?" or "What are my guests complaining about the most?"

You will fail if you do not start from a good, specific question. Here's an example of a bad question: "What's in my reviews?" The problem with that question is that it is ambiguous ("what" could be many things – like "which competitors" or "are they talking about rooms or service or food") and not limited in any way. Are you interested in all reviews across all time, or just a year-over-year picture? That leads to the next most common mistake.

  • Data in Isolation Means Nothing

Great, so now you know that 25 guests complained about the bathrooms. Is that better or worse than last year? Is that better or worse than your competition? You need to actually make comparisons either across your competitors or across time (preferably both) in order to really tell if you need to fix something or if your fixes are working.
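
The year-over-year comparison can be sketched in a few lines of Python. The category counts below are made-up sample numbers, not real review data:

```python
# Illustrative sketch: comparing complaint counts year over year.
# The topics and counts are invented sample data.
complaints = {
    "bathrooms": {"2014": 18, "2015": 25},
    "service":   {"2014": 40, "2015": 31},
}

def yoy_change(counts):
    """Percent change in complaints from last year to this year."""
    prev, curr = counts["2014"], counts["2015"]
    return (curr - prev) / prev * 100

for topic, counts in complaints.items():
    change = yoy_change(counts)
    direction = "up" if change > 0 else "down"
    print(f"{topic}: {counts['2015']} complaints, {direction} {abs(change):.0f}% vs last year")
```

The same loop works for competitor-vs-you comparisons: swap the year keys for property names and the logic is identical.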

Put differently, you need to make strategic decisions as to where to put your money, and that needs to reflect your brand values, your competition, and your customer experience. When you measure, you need to be comparing so that you can see how you…measure up, or how your measurements have changed over time.

  • Acting Too Slowly on Individual Complaints

Along with getting a view of all discussions related to your brand, you can also use text analytics to throw you an alert if someone has Tweeted that they're having a poor experience, or if they've put up a bad review. The first case (the Tweet) is a gift, because oftentimes that poor experience is happening RIGHT NOW and so you can actually immediately act on the customer's issue. While there are some really entitled individuals out there, most people really appreciate it when you jump on something like that, and you can turn a negative experience into a really positive experience – if you are monitoring and can act quickly enough.

  • Acting Too Quickly on Broad Analysis

An individual complaint is a point problem that is addressable by, say, making sure the guest gets the extra towels they wanted, or by comping a dinner. Looking at trends and doing broad analysis, on the other hand, can seemingly point to problems that you need to fix immediately.

Take time to do a deeper analysis – read the comments themselves, evaluate vs. your competition and your brand values, and make sure that this is going to be a worthwhile investment.

  • Assuming That You Have Clean Data

Depending on whether you are doing some of the work yourself, or you're working with an off-the-shelf system, you may or may not have to deal with some of these problems. Data quality is always an issue – even if you are paying top dollar for a completely managed review analysis system, you still will need to spot check the incoming data stream to make sure that weird stuff isn't making it in. What do I mean by "weird stuff"? Reviews in languages that you aren't analyzing, reviews pulled in because some keyword you were using was too broad, and reviews that include part of an advertisement because the review-gathering engine messed up. These are all things that you're going to want to monitor, and complain about if they are a regular problem in your system.
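
A spot check like this can start as a crude filter. The stopword list, ad markers, and threshold below are all illustrative assumptions, not part of any particular review-analysis product:

```python
# Rough spot-check sketch: flag reviews that may be "weird stuff" --
# wrong language, or fragments of advertisements.
ENGLISH_STOPWORDS = {"the", "a", "and", "is", "was", "we", "our", "to", "of", "in"}
AD_MARKERS = ("click here", "subscribe", "limited time offer")

def looks_weird(review: str) -> bool:
    words = review.lower().split()
    if not words:
        return True
    # Very few common English words suggests another language or boilerplate.
    stopword_ratio = sum(w in ENGLISH_STOPWORDS for w in words) / len(words)
    has_ad_text = any(marker in review.lower() for marker in AD_MARKERS)
    return stopword_ratio < 0.05 or has_ad_text

reviews = [
    "The room was clean and the staff was friendly.",
    "Click here for a limited time offer on travel packages!",
]
flagged = [r for r in reviews if looks_weird(r)]
```

In practice you would use a real language detector rather than a stopword ratio, but even a heuristic this crude will surface systematic junk in an incoming feed.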

A subset of this problem is the dreaded spam review. Spam is far more common on Twitter than it is in the review space – and you're not generating spam reviews for your own site, right? (Because that's just tacky.) Your competitors, however, may not have the same solid ethics that you do, so if something seems "off" about the reviews, do some investigating. This is where things like trending really can make a big difference – if you know a competitor was getting slammed in the reviews, and suddenly you see lots of positive reviews, yet you know they didn't make any substantial process or physical changes… Be sure that you can null some of that out so that you can get a more realistic view – or at least send someone over to check it out.
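
That kind of "suddenly positive" swing can be caught with even a toy trend check. The monthly average ratings below are invented for illustration, and the 1.5-star threshold is an arbitrary assumption:

```python
# Toy sketch of spotting a suspicious review-sentiment swing.
# Made-up monthly average star ratings for a competitor property.
monthly_avg_rating = [2.1, 2.3, 2.2, 2.0, 4.8, 4.9]

baseline = sum(monthly_avg_rating[:4]) / 4   # average over the earlier months
recent = monthly_avg_rating[-1]

# A jump of more than, say, 1.5 stars over the baseline -- with no known
# changes at the property -- is worth sending someone to investigate.
suspicious = recent - baseline > 1.5
```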

There's other, more innocuous spam: companies trying to sell things like travel packages that mention the brand names you are trying to follow. There's a fair amount of research going on right now into how to filter that out, but social media spam is a much less well-solved problem than email spam.

  • Not Having Realistic Expectations from the Text Analytics System

The University of Pittsburgh took some graduate students and trained them for 40 hours on how to evaluate the sentiment of an individual sentence. Then they set them loose on a corpus of roughly 16,000 sentences. Even after 40 hours of training, those (probably pretty intelligent) graduate students achieved only about an 80% agreement rate on whether a particular sentence was positive, negative, or neutral.

Just ponder that for a second. Humans, and well-trained humans at that, are going to disagree on at least one out of every five sentences as to whether that sentence is positive, negative, or neutral. (As an aside, this fact makes me amazed that we all manage to get through our days without constantly fighting because we're misunderstanding each other.) Back to the point at hand: the accuracy of any computerized text analytics system is going to be bounded by the tuning of the system, and that accuracy in turn is bounded by inter-rater agreement. You may not have to do any tuning at all, and fortunately for you, reviews are one of the easiest types of content for machines to understand. But even given that, you will not exceed 85% "accuracy", and anybody promising more than that should be given the hairy eyeball and asked some uncomfortable questions.
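
Inter-rater agreement itself is easy to measure. Here's a minimal sketch with made-up labels from two hypothetical raters:

```python
# Simple percent-agreement check between two raters.
# The sentence labels are invented sample data.
rater_a = ["pos", "neg", "neu", "pos", "neg"]
rater_b = ["pos", "neg", "pos", "pos", "neu"]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)   # 3 of 5 labels match -> 0.6
```

Raw percent agreement is the crudest possible measure; a real evaluation would also use something like Cohen's kappa, which corrects for agreement by chance.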

  • Not Accounting for Multiple Conversion Steps

The numbers that I gave above are for straight-up text. Sometimes, you're going to have to convert hand-written comments using Optical Character Recognition (OCR), or convert from speech to text. Each of these steps is going to reduce your overall accuracy. Now, the flip side is that if you really have that much handwritten text or spoken audio, then you're probably going to be in good shape anyway. Having lots of data makes a really big difference when worrying about accuracy – you don't need it to be perfect, you just need to be able to see the trends you need in order to make your business decisions.
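
The compounding effect is simple arithmetic. The step accuracies below are illustrative assumptions, not vendor figures:

```python
# Back-of-the-envelope: errors compound across conversion steps.
# Both accuracy figures here are illustrative assumptions.
ocr_accuracy = 0.95        # handwriting -> text
sentiment_accuracy = 0.85  # the practical ceiling discussed above

end_to_end = ocr_accuracy * sentiment_accuracy
print(f"End-to-end accuracy: {end_to_end:.1%}")
```

Two steps at 95% and 85% already land you around 81% end to end, which is why each additional conversion stage deserves scrutiny.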

  • Not Allocating Any Time for Tuning

Even the best, most hospitality-focused system is going to want some love – love that only you can provide. Say you have a branded bed that you've been using in your advertising. Chances are that a good system will pick that term out, but why take the chance – just tell the system that there's a particular brand you're looking for, and now it's guaranteed to find it.

The same thing applies to areas like sentiment and the taxonomies of categorization. For example, do you want more granular information about your restaurant (so you list all the menu items and staff names), or are you most interested in what people are saying about the entertainment facilities (so you list all the aspects of the gym, pool, and in-room entertainment systems)?

This does not have to be a big lift, but if you spend even a few hours to a day, then you can get results that will better answer the question you were asking. If you can allocate this time on a regular basis, say quarterly, then that's even better.
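
"Telling the system" about your own terms can be as simple as maintaining a lookup list. The brand and facility names below are hypothetical, and a real system would have its own configuration format for custom entities and categories:

```python
# Minimal sketch of a custom-term list for tuning.
# All term and category names here are hypothetical examples.
CUSTOM_TERMS = {
    "DreamRest bed": "branded amenity",   # the branded bed from your ads
    "poolside grill": "restaurant",
    "fitness center": "entertainment facilities",
}

def tag_custom_terms(review: str):
    """Return (term, category) pairs found in a review, case-insensitively."""
    text = review.lower()
    return [(term, cat) for term, cat in CUSTOM_TERMS.items()
            if term.lower() in text]

tags = tag_custom_terms("Loved the DreamRest bed, and the poolside grill was great.")
```

Revisiting a list like this quarterly is exactly the kind of low-effort, recurring tuning the section above recommends.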

  • Thinking Too Small

This is really the last big mistake I see people making. I want you to start with a specific question in mind, but then once you answer that one, expand out. The beauty of using machines for analysis is that they're 100% consistent, and scaling to include all data sources is reasonably easy. Once you've gotten over the hurdle of answering your first question, answering the next one is way easier. Know what everyone is saying about you. Know what everyone is saying about your competitors. Then go take advantage of that information.

Start with a question in mind. Look at trends and comparisons. Act quickly if you can catch a customer complaint right when it happens, but take time to dig into trends that affect larger expenditures. Spend time cleaning your data, and keep an eye out for weirdness from spam. Have realistic expectations, particularly if you're going from speech all the way to text analytics. Once you're happy, think really big. The data is there, you just need to think up a big enough question.
