An Agency Workflow for Google My Business Dead Ends

MiriamEllis

There are times when your digital marketing agency will find itself serving a local business with a need for which Google has made no apparent provisions. Unavailable categories for unusual businesses come instantly to mind, but scenarios can be more complex than this.

Client workflows can bog down as you worry over what to do, fearful of making a wrong move that could get a client’s listing suspended or adversely affect its rankings or traffic. If your agency has many employees, an entry-level SEO could be silently stuck on an issue, or even doing the wrong thing because they don’t know how or where to ask the right questions.

The best solution I know of consists of a combination of:

  • Client contracts that are radically honest about the nature of Google
  • Client management that sets correct expectations about the nature of Google
  • A documented process for seeking clarity when unusual client scenarios arise
  • Agency openness to experimentation, failure, and ongoing learning
  • Regular monitoring for new Google developments and changes
  • A bit of grit

Let’s put the fear of often-murky, sometimes-unwieldy Google on the back burner for a few minutes and create a proactive process your team can use when hitting what feels like a procedural dead end on the highways and byways of local search.

The apartment office conundrum

As a real-world example of a GMB dead end, a few months ago, I was asked a question about on-site offices for apartment complexes. The details:

  • Google doesn’t permit the creation of listings for rental properties but does allow such properties to be listed if they have an on-site office, as many apartment complexes do.
  • Google’s clearest category for this model is “apartment complex”, but the brand in question was told by Google (at the time) that if they chose that category, they couldn’t display their hours of operation.
  • This led the brand I was advising to wonder if they should use “apartment rental agency” as their category because it does display hours. They didn’t want to inconvenience the public by having them arrive at a closed office after hours, but at the same time, they didn’t want to misrepresent their category.

Now that’s a conundrum!

When I was asked to provide some guidance to this brand, I went through my own process of trying to get at the heart of the matter. In this post, I’m going to document this process for your agency as fully as I can to ensure that everyone on your team has a clear workflow when puzzling local SEO scenarios arise.

I hope you’ll share this article with everyone remotely involved in marketing your clients, and that it will prevent costly missteps, save time, move work forward, and support success.

Step 1: Radical honesty sets the stage right

Whether you’re writing a client contract, holding a client onboarding meeting, or having an internal brand discussion about local search marketing, setting correct expectations is the best defense against future disappointments and disputes. Company leadership must task itself with letting all parties know:

  1. Google has a near-monopoly on search. As such, they can do almost anything they feel will profit them. This means that they can alter SERPs, change guidelines, roll out penalties and filters, monetize whatever they like, and fail to provide adequate support to the public that makes up and interacts with the medium of their product. There is no guarantee any SEO can offer about rankings, traffic, or conversions. Things can change overnight. That’s just how it is.
  2. While Google’s monopoly enables them to be whimsical, brands and agencies do not have the same leeway if they wish to avoid negative outcomes. There are known practices which Google has confirmed as contrary to their vision of search (buying links, building listings for non-existent locations, etc.). Client and agency agree not to knowingly violate Google’s guidelines, chiefly the Guidelines for representing your business on Google.

Don’t accept work under any other conditions than that all parties understand Google’s power, unpredictability, and documented guidelines. Don’t work with clients, agencies, software providers, or others that violate guidelines. These basic rules set the stage for both client and agency success.

Step 2: Confirm that the problem really exists

When a business believes it is encountering an unusual local search marketing problem, the first task of the agency staffer is to vet the issue. The truth is, clients sometimes perceive problems that don’t really exist. In my case of the apartment complex, I took the following steps.

  1. I confirmed the problem. I observed that hours of operation were not displayed on GMB listings using the “apartment complex” category.
  2. I called half-a-dozen nearby apartment complex offices and asked if they were open either by appointment only, or 24/7. None of them were. At least in my corner of the world, apartment complex offices have set, daily business hours, just like retail, opening in the AM and closing in the PM each day.
  3. I did a number of Google searches for “apartment rental agency” and all of the results Google brought up were for companies that manage rentals city-wide — not rentals of units within a single complex.

So, I was now convinced that the business was right: they were encountering a real dead end. If they categorized themselves as an “apartment complex”, their missing hours could inconvenience customers. If they chose the “apartment rental agency” designation to get hours to display, they could end up fielding needless calls from people looking for city-wide rental listings. The category would also fail to be strictly accurate.

As an agency worker, be sure you’ve taken common-sense steps to confirm that a client’s problem is, indeed, real before you move on to next steps.

Step 3: Search for a similar scenario

As a considerate agency SEO, avoid wasting the time of project leads, managers, or company leadership by first seeing if the Internet holds a ready answer to your puzzle. Even if a problem seems unusual, there’s a good chance that somebody else has already encountered it, and may even have documented it. Before you declare a challenge to be a total dead-end, search the following resources in the following order:

  1. Do a direct search in Google with the most explicit language you can (e.g. “GMB listing showing wrong photo”, “GMB description for wrong business”, “GMB owner responses not showing”). Click on anything that looks like it might contain an answer, look at the date on the entry, and see what you can learn. Document what you see.
  2. Go to the Google My Business Help Community forum and search with a variety of phrases for your issue. Again, note the dates of responses for the currency of advice. Be aware that not all contributors are experts. Look for thread responses from people labeled Gold Product Expert; these members have earned special recognition for the amount and quality of what they contribute to the forum. Some of these experts are widely-recognized, world-class local SEOs. Document what you learn, even if it means noting down “No solution found”.
  3. Often, a peculiar local search issue may be the result of a Google change, update, or bug. Check the MozCast to see if the SERPs are undergoing turbulent weather, and check Sterling Sky’s Timeline of Local SEO Changes. If the dates of a surfaced issue correspond with something appearing on these platforms, you may have found your answer. Document what you learn.
  4. Check trusted blogs to see if industry experts have written about your issue. The nice thing about blogs is that, if they accept comments, you can often get a direct response from the author if something they’ve penned needs further clarification. For a big list of resources, see: Follow the Local SEO Leaders: A Guide to Our Industry’s Best Publications. Document what you learn.

If none of these tactics yields a solution, move on to the next step.

Step 4: Speak up for support

If you’ve not yet arrived at an answer, it’s time to reach out. Take these steps, in this order:

1) Each agency has a different hierarchy. Now is the time to reach out to the appropriate expert at your business, whether that’s your manager or a senior-level local search expert. Clearly explain the issue and share your documentation of what you’ve learned/failed to learn. See if they can provide an answer.

2) If leadership doesn’t know how to solve the issue, request permission to take it directly to Google in private. You have a variety of options for doing so, including Google My Business support channels like Twitter, Facebook, and the contact options in the GMB Help Center.

In the case of the apartment complex, I chose to reach out via Twitter. Responses can take a couple of days, but I wasn’t in a hurry.

Google’s reply confirmed what I had suspected: they were treating apartment complexes like hotels. Not very satisfactory, since the business models are quite different, but at least it was an answer I could document. I’d hit something of a dead end, but it was interesting to consider Google’s advice about using the description field to list hours of operation. Not a great solution, but at least I would have something to offer the client, right from the horse’s mouth.

In your case, be advised that not all Google reps have the same level of product training. Hopefully, you will receive some direct guidance if you describe the issue well; document Google’s response and act on it. If not, keep moving.

3) If Google doesn’t respond, responds inexpertly, or doesn’t solve your problem, go back to your senior-level person. Explain what happened and request advice on how to proceed.

4) If the senior staffer still isn’t certain, request permission to publicly discuss the issue (and the client). Head to supportive fora. If you’re a Moz Pro customer, feel free to post your scenario in the Moz Q&A forum. If you’re not yet a customer, head to the Local Search Forum, which is free. Share a summary of the challenge and your failure to find a solution, and ask the community what they would do, given that you appear to be at a dead end. Document the advice you receive, and evaluate it based on the expertise of respondents.

Step 5: Make a strategic decision

At this point in your workflow, you’ve now:

  • Confirmed the issue
  • Searched for documented solutions
  • Looked to leadership for support
  • Looked to Google for support
  • Looked to the local SEO industry for support

I’m hoping you’ve arrived at a strategy for your client’s scenario by now, but if not, you have 3 things left to do.

  1. Take your entire documentation back to your team/company leader. Ask them to work with you on an approved response to the client.
  2. Take that response to the client, with a full explanation of any limitations you encountered and a description of what actions your agency wants to take. Book time for a thorough discussion. If what you are doing is experimental, be totally transparent about this with the client.
  3. If the client agrees to the strategy, enact it.

In the case of the apartment complex, there were several options I could have brought to the client. One thing I did recommend is that they do an internal assessment of how great the risk really was of the public being inconvenienced by absent hours.

How many people did they estimate would stop by after 5 PM in a given month and find the office closed? Would that be 1 person a month? 20 people? Did the convenience of these people outweigh risks of incorrectly categorizing the complex as an “apartment rental agency”? How many erroneous phone calls or walk-ins might that lead to? How big of a pain would that be?

Determining these things would help the client decide whether to just go with Google’s advice of keeping the accurate category and using the description to publish hours, or, to take some risks by miscategorizing the business. I was in favor of the former, but be sure your client has input in the final decision.

And that brings us to the final step — one your agency must be sure you don’t overlook.

Step 6: Monitor from here on out

In many instances, you’ll find a solution that should be all set to go, with no future worries. But where you run into dead-end scenarios like the apartment complex case and are having to cobble together a workaround to move forward, do these two things:

  1. Monitor the outcomes of your implementation over the coming months. Traffic drops, ranking drops, or other sudden changes require a re-evaluation of the strategy you selected. This is why it is so critical to document everything and to be transparent with the client about Google’s unpredictability and the limitations of local SEOs.
  2. Monitor Google for changes. Today’s dead end could be tomorrow’s open road.

This second point is particularly applicable to the apartment complex I was advising. About a month after I’d first looked at their issue, Google made a major change: all of a sudden, they began showing hours for the “apartment complex” category!

If I’d stopped paying attention to the issue, I’d never have noticed this game-changing alteration. When I did see hours appearing on these listings, I confirmed the development with apartment marketing expert Diogo Ordacowski.

Moral: be sure you keep tabs on any particularly aggravating dead ends in case solutions emerge in future. It’s a happy day when you can tell a client their worries are over. What a great proof of the engagement level of your agency’s staff!

When it comes to Google, grit matters

Image Credit: The Other Dan

“What if I do something wrong?”

It’s totally okay if that question occurs to you sometimes when marketing local businesses. There’s a lot on the line — it’s true! The livelihoods of your clients are a sacred trust. The credibility that your agency is building matters.

But, fear not. Unless you flagrantly break guidelines, a dose of grit can take you far when dealing with a product like Google My Business, which is, itself, an experiment. Sometimes, you just have to make a decision about how to move forward. If you make a mistake, chances are good you can correct it. When a dead end with no clear egress forces you to test out solutions, you’re just doing your job.

So, be transparent and communicative, be methodical and thorough in your research, and be a bit bold. Remember, your clients don’t just count on you to churn out rote work. In Google’s increasingly walled garden, the agency that can see over the wall tops when necessity calls is the one bringing extra value.




Does the 9th Circuit’s new decision in HiQ vs. LinkedIn open the floodgates to scraping?

Greg Sterling

Yesterday the U.S. Ninth Circuit Court of Appeals found (.pdf) in favor of data analytics company HiQ Labs, which had been scraping data and building products from LinkedIn public profiles. It’s a case that has a lot of implications — and may still be appealed.

CFAA and anti-hacking rules. LinkedIn tried to stop HiQ by using, among other things, the Computer Fraud and Abuse Act (CFAA), which is a federal cybersecurity and anti-hacking law. In basic terms, the CFAA says that a computer may not be accessed without authorization or in excess of authorization.

The profile data on LinkedIn was and is public. But LinkedIn didn’t like HiQ scraping its content and issued a cease-and-desist letter in 2017. The letter stated that HiQ was in violation of LinkedIn’s user agreement as well as California and federal law, including the CFAA among others. LinkedIn also said that it would technically block HiQ’s efforts to scrape the site.

HiQ sued for a preliminary injunction against LinkedIn and won at the district court level. The court ordered LinkedIn to allow HiQ access to the content again. LinkedIn appealed to the Ninth Circuit.

Who’s “authorized” to access website content. A central question in the case was whether, once HiQ received LinkedIn’s cease-and-desist letter, its continued access was “without authorization” under the CFAA. The Ninth Circuit said no.

CFAA contemplates information that is not publicly accessible (e.g., password protected). Public LinkedIn profiles were not password protected. In simple terms: only if the LinkedIn data was non-public would the company have been able to invoke CFAA to block HiQ’s access.

LinkedIn argued that HiQ violated the terms of its user agreement. The Ninth Circuit pointed out that HiQ’s status as a “user” was terminated by LinkedIn with the cease-and-desist letter. In addition, LinkedIn didn’t claim any ownership interest in the public profile content. And while LinkedIn also said it was seeking to protect users’ privacy rights in blocking HiQ, the court didn’t buy that argument regarding public profile information — where there was little or no expectation of privacy.

Other potential ways to block scraping. The case was substantially about CFAA, though there were other claims the court discussed. In the end, it didn’t say a website owner doesn’t have any recourse against wholesale appropriation of its public content. The court said that other laws could apply: “state law trespass to chattels claims may still be available. And other causes of action, such as copyright infringement, misappropriation, unjust enrichment, conversion, breach of contract, or breach of privacy, may also lie.”

The Ninth Circuit didn’t analyze the application of any of these theories to the facts of HiQ, however. It simply said they might apply to protect against scraping or content appropriation.

In response to the decision, a LinkedIn spokesperson offered the following statement: “We’re disappointed in the court’s decision, and we are evaluating our options following this appeal. LinkedIn will continue to fight to protect our members and the information they entrust to LinkedIn.”

Why we should care. This case may not be over and could ultimately wind up before the U.S. Supreme Court. Its broadest interpretation, however, appears to be: any “public” online data not owned or password protected by a publisher — and facts cannot be copyrighted — can be freely captured by third parties.

At the end of the opinion, the court expressed concern about “giving companies like LinkedIn free rein to decide, on any basis, who can collect and use data—data that the companies do not own, that they otherwise make publicly available to viewers, and that the companies themselves collect and use—risks the possible creation of information monopolies that would disserve the public interest.”

 

This story first appeared on Search Engine Land. For more on search marketing and SEO, click here.


About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes about the connections between digital and offline commerce. He previously held leadership roles at LSA, The Kelsey Group and TechTV. Follow him on Twitter or find him on LinkedIn.


The Data You’re Using to Calculate CTR is Wrong and Here’s Why

Luca-Bares

Click-through rate (CTR) is an important metric that’s useful for making a lot of calculations about your site’s SEO performance, from estimating revenue opportunity and prioritizing keyword optimization to assessing the impact of SERP changes within the market. Most SEOs know the value of creating custom CTR curves for their sites to make those projections more accurate. The only problem with custom CTR curves built from Google Search Console (GSC) data is that GSC is known to be a flawed tool that can give out inaccurate data. This convolutes the data we get from GSC and can make it difficult to accurately interpret the CTR curves we create from it. Fortunately, there are ways to help control for these inaccuracies so you get a much clearer picture of what your data says.

By carefully cleaning your data and thoughtfully implementing an analysis methodology, you can calculate CTR for your site much more accurately using 4 basic steps:

  1. Extract your site’s keyword data from GSC — the more data you can get, the better.
  2. Remove biased keywords — Branded search terms can throw off your CTR curves so they should be removed.
  3. Find the optimal impression level for your data set — Google samples data at low impression levels so it’s important to remove keywords that Google may be inaccurately reporting at these lower levels.
  4. Choose your rank position methodology — No data set is perfect, so you may want to change your rank classification methodology depending on the size of your keyword set.

Let’s take a quick step back

Before getting into the nitty gritty of calculating CTR curves, it’s useful to briefly cover the simplest way to calculate CTR since we’ll still be using this principle. 

To calculate CTR, download the keywords your site ranks for with click, impression, and position data. Then divide the sum of clicks by the sum of impressions at each rank level in your GSC data, and you’ll come out with a custom CTR curve. For more detail on actually crunching the numbers for CTR curves, you can check out this article by SEER if you’re not familiar with the process.
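As a quick illustration, here is a minimal Python/pandas sketch of that arithmetic. It assumes a GSC export saved as a CSV with query, clicks, impressions, and position columns (the file and column names are just placeholders), and it buckets keywords by rounded position, a choice we'll revisit in Step 4.

```python
import pandas as pd

# Placeholder file/column names: adjust to match your own GSC export.
df = pd.read_csv("gsc_keywords.csv")

# Bucket each keyword by rounded rank and keep positions 1-10 only.
df["rank"] = df["position"].round().astype(int)
df = df[df["rank"].between(1, 10)]

# CTR at each rank = sum of clicks / sum of impressions at that rank.
ctr_curve = (
    df.groupby("rank")[["clicks", "impressions"]].sum()
      .assign(ctr=lambda g: g["clicks"] / g["impressions"])
)
print(ctr_curve["ctr"])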

Where this calculation gets tricky is when you start to try to control for the bias that inherently comes with CTR data. Even though we know GSC gives flawed data, we don’t really have many other options, so the best we can do is eliminate as much bias as possible from our data set and stay aware of the problems that come from using that data.

Without controlling and manipulating the data that comes from GSC, you can get results that seem illogical. For instance, you may find your curves show position 2 and 3 CTRs having wildly larger averages than position 1. If you don’t know that the data you’re using from Search Console is flawed, you might accept it as truth and a) try to come up with hypotheses as to why the CTR curves look that way based on incorrect data, and b) create inaccurate estimates and projections based on those CTR curves.

Step 1: Pull your data

The first part of any analysis is actually pulling the data. This data ultimately comes from GSC, but there are many platforms that you can pull this data from that are better than GSC’s web extraction.

Google Search Console — The easiest place to get the data is GSC itself. You can go into GSC and pull all your keyword data for the last three months, and Google will automatically download a .csv file for you. The downside to this method is that GSC only exports 1,000 keywords at a time, making your data size much too small for analysis. You can try to get around this by using the keyword filter for the head terms that you rank for and downloading multiple 1K files to get more data, but this process is an arduous one. Besides, the other methods listed below are better and easier.

Google Data Studio — For any non-programmer looking for an easy way to get much more data from Search Console for free, this is definitely your best option. Google Data Studio connects directly to your GSC account data, but there are no limitations on the data size you can pull. For the same three-month period where GSC would give me 1K keywords (its max), Data Studio would give me back 200K keywords!

Google Search Console API — This takes some programming know-how, but one of the best ways to get the data you’re looking for is to connect directly to the source using their API. You’ll have much more control over the data you’re pulling and get a fairly large data set. The main setback here is you need to have the programming knowledge or resources to do so.
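If you do go the API route, the sketch below shows the general shape of a paged Search Console query using the google-api-python-client library. The property URL, date range, and token file are placeholders, and you'd still need to set up OAuth access for your own account before this will run.

```python
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# Placeholder: a previously authorized OAuth token with the
# webmasters.readonly scope. How you obtain it is up to you.
creds = Credentials.from_authorized_user_file(
    "token.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

rows, start_row = [], 0
while True:
    response = service.searchanalytics().query(
        siteUrl="https://www.example.com/",  # placeholder property
        body={
            "startDate": "2019-06-01",
            "endDate": "2019-08-31",
            "dimensions": ["query"],
            "rowLimit": 25000,      # API maximum per request
            "startRow": start_row,  # page through the full data set
        },
    ).execute()
    batch = response.get("rows", [])
    rows.extend(batch)
    if len(batch) < 25000:
        break
    start_row += 25000

# Each row includes keys (the query), clicks, impressions, ctr, and position.
print(f"Pulled {len(rows)} keyword rows")
```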

Keylime SEO Toolbox — If you don’t know how to program but still want access to Google’s impression and click data, then this is a great option to consider. Keylime stores historical Search Console data directly from the Search Console API so it’s as good (if not better) of an option than directly connecting to the API. It does cost $49/mo, but that’s pretty affordable considering the value of the data you’re getting.

The platform you pull your data from matters because each one listed gives out a different amount of data. I’ve listed them here in order from the tool that gives the least data to the one that gives the most. Using GSC’s UI directly gives by far the least data, while Keylime can connect to GSC and Google Analytics and combine data to actually give you more information than the Search Console API would. This matters because the more data you can get, the more likely the CTR averages you calculate for your site will be accurate.

Step 2: Remove keyword bias

Once you’ve pulled the data, you have to clean it. Because this data ultimately comes from Search Console we have to make sure we clean the data as best we can.

Remove branded search & knowledge graph keywords

When you create general CTR curves for non-branded search, it’s important to remove all branded keywords from your data. These keywords tend to have high CTRs, which will throw off the averages of your non-branded searches, which is why they should be removed. In addition, if you’re aware of any SERP features, like the knowledge graph, that you rank for consistently, you should try to remove those keywords as well, since we’re only calculating CTR for positions 1–10 and SERP feature keywords could throw off your averages.
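Here is a minimal sketch of that cleaning step, assuming the same placeholder CSV as before and a hand-maintained list of brand variants and SERP-feature queries (both lists are made up for illustration):

```python
import pandas as pd

# Placeholder lists: swap in your own brand spellings and any
# queries where you consistently hold a SERP feature.
BRAND_TERMS = ["acme", "acme shoes", "acmeshoes"]
SERP_FEATURE_QUERIES = ["acme founder", "acme headquarters"]

df = pd.read_csv("gsc_keywords.csv")

is_branded = df["query"].str.contains(
    "|".join(BRAND_TERMS), case=False, na=False
)
is_serp_feature = df["query"].str.lower().isin(SERP_FEATURE_QUERIES)

non_branded = df[~is_branded & ~is_serp_feature]
non_branded.to_csv("gsc_keywords_nonbranded.csv", index=False)
print(f"Removed {len(df) - len(non_branded)} of {len(df)} keywords")
```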

Step 3: Find the optimal impression level in GSC for your data

The largest bias in Search Console data appears to come from data with low search impressions, which is the data we need to try and remove. It’s not surprising that Google doesn’t accurately report low-impression data, since we know that Google doesn’t even include data with very low search volume in GSC. For some reason, Google decides to drastically over-report CTR for these low-impression terms. As an example, here’s an impression distribution graph I made with data from GSC for keywords that have only one impression, showing the CTR at every position.

If that doesn’t make a lot of sense to you, I’m right there with you. This graph says that a majority of the keywords with only one impression have a 100 percent CTR. It’s extremely unlikely, no matter how good your site’s CTR is, that a majority of one-impression keywords are going to get a 100 percent CTR. This is especially true for keywords that rank below #1. This gives us pretty solid evidence that low-impression data is not to be trusted, and we should limit the number of keywords in our data with low impressions.

Step 3 a): Use normal curves to help calculate CTR

For more evidence of Google giving us biased data we can look at the distribution of CTR for all the keywords in our data set. Since we’re calculating CTR averages, the data should adhere to a Normal Bell Curve. In most cases CTR curves from GSC are highly skewed to the left with long tails which again indicates that Google reports very high CTR at low impression volumes.

If we change the minimum number of impressions for the keyword sets that we’re analyzing we end up getting closer and closer to the center of the graph. Here’s an example, below is the distribution of a site CTR in CTR increments of .001.

The graph above shows data at a very low impression level, around 25 impressions. The distribution is mostly on the right side of the graph, with a small, high concentration on the left, which implies that this site has a very high click-through rate. However, by increasing the impression filter to 5,000 impressions per keyword, the distribution of keywords gets much, much closer to the center.

This graph most likely would never be centered around 50% CTR because that’d be a very high average CTR to have, so the graph should be skewed to the left. The main issue is we don’t know how much because Google gives us sampled data. The best we can do is guess. But this raises the question, what’s the right impression level to filter my keywords out to get rid of faulty data?

One way to find the right impression level to create CTR curves is to use the above method to get a feel for when your CTR distribution is getting close to a normal distribution. A Normally Distributed set of CTR data has fewer outliers and is less likely to have a high number of misreported pieces of data from Google.
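One way to make that check concrete is to sweep a few candidate impression thresholds and watch how the CTR distribution's skew and keyword count change. This is only a sketch, using the placeholder cleaned file from earlier and arbitrary example thresholds; a skew closer to zero suggests the distribution is approaching normal.

```python
import pandas as pd

df = pd.read_csv("gsc_keywords_nonbranded.csv")
df["ctr"] = df["clicks"] / df["impressions"]

# Example thresholds only: pick candidates that make sense for your site.
for min_impressions in [1, 25, 250, 1000, 5000]:
    subset = df[df["impressions"] >= min_impressions]
    print(
        f">= {min_impressions:>5} impressions: "
        f"{len(subset):>6} keywords, "
        f"skew = {subset['ctr'].skew():.2f}, "
        f"median CTR = {subset['ctr'].median():.3f}"
    )
```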

Step 3 b): Finding the best impression level to calculate CTR for your site

You can also create impression tiers to see where there’s less variability in the data you’re analyzing instead of Normal Curves. The less variability in your estimates, the closer you’re getting to an accurate CTR curve.

Tiered CTR tables

Creating tiered CTR tables needs to be done for every site, because GSC’s sampling is different for every site depending on the keywords you rank for. I’ve seen CTR curves vary as much as 30 percent without the proper controls added to CTR estimates. This step is important because using all of the data points in your CTR calculation can wildly offset your results, while using too few data points gives you too small of a sample size to get an accurate idea of what your CTR actually is. The key is to find the happy medium between the two.

In the tiered table above, there’s huge variability from All Impressions to >250 impressions. After that point, though, the change per tier is fairly small. Greater than 750 impressions is the right level for this site, because the variability among curves is fairly small as we increase impression levels in the other tiers, and >750 impressions still gives us plenty of keywords in each ranking level of our data set.

When creating tiered CTR curves, it’s important to also count how much data is used to build each data point throughout the tiers. For smaller sites, you may find that you don’t have enough data to reliably calculate CTR curves, but that won’t be apparent from just looking at your tiered curves. So knowing the size of your data at each stage is important when deciding what impression level is the most accurate for your site.
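Here's a sketch of building such a tiered table, with the keyword count behind each tier included so you can see when the sample gets too thin. The tier cut-offs are examples, and the file/column names are the same placeholders used above.

```python
import pandas as pd

df = pd.read_csv("gsc_keywords_nonbranded.csv")
df["rank"] = df["position"].round().astype(int)
df = df[df["rank"].between(1, 10)]

# Example tiers: CTR by rank at each minimum-impression cut-off,
# plus the number of keywords contributing to each tier.
table = {}
for tier in [0, 250, 500, 750, 1000]:
    subset = df[df["impressions"] > tier]
    sums = subset.groupby("rank")[["clicks", "impressions"]].sum()
    table[f">{tier} CTR"] = sums["clicks"] / sums["impressions"]
    table[f">{tier} kws"] = subset.groupby("rank").size()

print(pd.DataFrame(table).round(3))
```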

Step 4: Decide which position methodology to use when analyzing your data

Once you’ve figured out the correct impression level you want to filter your data by, you can start actually calculating CTR curves using impression, click, and position data. The problem with position data is that it’s often inaccurate, so if you have great keyword tracking, it’s far better to use your own tracking numbers than Google’s. Most people can’t track that many keyword positions, though, so it’s necessary to use Google’s position data. That’s certainly possible, but it’s important to be careful with how we use their data.

How to use GSC position

One question that may come up when calculating CTR curves using GSC average positions is whether to use rounded positions or exact positions (i.e., only whole-number positions from GSC: ranks of exactly 1.0 or 2.0 are exact positions, while 1.3 or 2.1, for example, are not).

Exact position vs. rounded position

The reasoning behind using exact position is we want data that’s most likely to have been ranking in position 1 for the time period we’re measuring. Using exact position will give us the best idea of what CTR is at position 1. Exact rank keywords are more likely to have been ranking in that position for the duration of the time period you pulled keywords from. The problem is that Average Rank is an average so there’s no way to know if a keyword has ranked solidly in one place for a full time period or the average just happens to show an exact rank.

Fortunately, if we compare exact position CTR vs rounded position CTR, they’re directionally similar in terms of actual CTR estimations with enough data. The problem is that exact position can be volatile when you don’t have enough data. By using rounded positions we get much more data, so it makes sense to use rounded position when not enough data is available for exact position.

The one caveat is for position 1 CTR estimates. For every other position, the underlying rankings can pull a keyword’s average up or down: if a keyword has an average ranking of 3, it could have ranked at #1 and at #5 at various points, with the average working out to 3. For #1 rankings, however, the average can only be brought down, which means that the CTR for position 1 is always going to be reported lower than reality if you use rounded position.

A rank position hybrid: Adjusted exact position

So, if you have enough data, use exact position for position 1. For smaller sites, you can use adjusted exact position. Since Google gives averages up to two decimal points, one way to get more “exact position” #1s is to include all keywords with an average position below 1.1. I find this gets a couple hundred extra keywords, which makes my data more reliable.
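A sketch of that hybrid, under the same placeholder file/column assumptions: position 1 CTR is built only from keywords averaging below 1.1, while positions 2 through 10 fall back to rounded positions.

```python
import pandas as pd

df = pd.read_csv("gsc_keywords_nonbranded.csv")
df["rounded_rank"] = df["position"].round().astype(int)

# Position 1: "adjusted exact", only keywords averaging below 1.1.
pos1 = df[df["position"] < 1.1]
ctr = {1: pos1["clicks"].sum() / pos1["impressions"].sum()}

# Positions 2-10: rounded positions give more data per bucket.
for rank in range(2, 11):
    bucket = df[df["rounded_rank"] == rank]
    if bucket["impressions"].sum() > 0:
        ctr[rank] = bucket["clicks"].sum() / bucket["impressions"].sum()

print(pd.Series(ctr, name="ctr").round(3))
```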

And this also shouldn’t pull down our average much at all, since GSC is somewhat inaccurate with how it reports Average Ranking. At Wayfair, we use STAT as our keyword rank tracking tool, and after comparing GSC average rankings with average rankings from STAT, I found the rankings near the #1 position are close, but not 100 percent accurate. Once you start going farther down in the rankings, the difference between STAT and GSC becomes larger, so watch how far down in the rankings you go to include more keywords in your data set.

I’ve done this analysis for all the rankings tracked on Wayfair, and I found that the lower the position, the less closely rankings matched between the two tools. So Google isn’t giving great rankings data, but it’s close enough near the #1 position that I’m comfortable using adjusted exact position to increase my data set without worrying about sacrificing data quality, within reason.

Conclusion

GSC is an imperfect tool, but it gives SEOs the best information we have to understand an individual site’s click performance in the SERPs. Since we know that GSC is going to throw us a few curveballs with the data it provides, it’s important to control as many pieces of that data as possible. The main ways to do so are to choose your ideal data extraction source, get rid of low-impression keywords, and use the right rank-rounding methods. If you do all of these things, you’re much more likely to get accurate, consistent CTR curves for your own site.


How Does the Local Algorithm Work? – Whiteboard Friday

JoyHawkins

When it comes to Google’s algorithms, there’s quite a difference between how they treat local and organic. Get the scoop on which factors drive the local algorithm and how it works from local SEO extraordinaire, Joy Hawkins, as she offers a taste of her full talk from MozCon 2019.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Hello, Moz fans. I’m Joy Hawkins. I run a local SEO agency from Toronto, Canada, and a search forum known as the Local Search Forum, which basically is devoted to anything related to local SEO or local search. Today I’m going to be talking to you about Google’s local algorithm and the three main factors that drive it. 

If you’re wondering what I’m talking about when I say the local algorithm, this is the algorithm that fuels what we call the three-pack here. When you do a local search or a search that Google thinks has local intent, like plumbers let’s say, you traditionally will get three results at the top with the map, and then everything below it I refer to as organic. This algorithm I’ll be kind of breaking down is what fuels this three-pack, also known as Google My Business listings or Google Maps listings.

They’re all talking about the exact same thing. If you search Google’s Help Center on what they look at with ranking these entities, they tell you that there are three main things that fuel this algorithm. The three things that they talk about are proximity, prominence, and relevance. I’m going to basically be breaking down each one and explaining how the factors work.

1. Proximity

I’ll kind of start here with proximity. Proximity is basically defined as your location when you are searching on your phone or your computer and you type something in. It’s where Google thinks you are located. If you’re not really sure, often you can scroll down to the bottom of your page, and at the bottom of your page it will often list a zip code that Google thinks you’re in.

Zip code (desktop)

The other way to tell is if you’re on a phone, sometimes you can also see a little blue dot on the map, which is exactly where Google thinks you’re located. On a high level, we often think that Google thinks we’re located in a city, but this is actually pretty false. There’s been a lot of talk at MozCon about how Google pretty much always knows a little more than that about where users are located.

Generally speaking, if you’re on a computer, they know what zip code you’re in, and they’ll list that at the bottom. There are a variety of tools that can help you check ranking based on zip codes, some of which would be Moz Check Your Presence Tool, BrightLocal, Whitespark, or Places Scout. All of these tools have the ability to track at the zip code level. 

Geo coordinates (mobile)

However, when you’re on a phone, Google usually knows your location in even more detail. They actually generally know the geo coordinates of your actual location, and they pinpoint this using that little blue dot.

It knows even more than the zip code. It knows where you’re actually located. It’s a bit creepy. But there are a couple of tools that will actually let you see results based on geo coordinates, which is really cool and very accurate. Those tools include the Local Falcon, and there is a Chrome extension which is 100% free, that you can put in your browser, called GS Location Changer.

I use this all the time in an incognito browser if I want to just see what search results look like from a very, very specific location. Now, depending on what industry you are working in, it’s really important to know which of these two levels you need to be looking at. If you work with lawyers, for example, zip code level is usually good enough.

There aren’t enough lawyers to make a huge difference at certain like little points inside a given zip code. However, if you work with dentists or restaurants, let’s say, you really need to be looking at geo coordinate levels. We have seen lots of cases where we will scan a specific keyword using these two tools, and depending on where in that zip code we are, we see completely different three-packs.

It’s very, very key to know that this factor here for proximity really influences the results that you see. This can be challenging, because when you’re trying to explain this to clients or business owners, they search from their home, and they’re like, “Why am I not there?” It’s because their proximity or their location is different than where their office is located.

I realize this is a challenging problem to solve for a lot of agencies on how to represent this, but that’s kind of the tools that you need to look at and use. 

2. Prominence

Moving to the next factor, so prominence, this is basically how important Google thinks you are. Like, is this business a big deal, or are they just some random, crappy business or a new business that we don’t know much about?

  • This looks at things like links, for example. 
  • Store visits, if you are a brick-and-mortar business and you get no foot traffic, Google likely won’t think you’re very prominent. 
  • Reviews, the number of reviews often factors in here. We often see that businesses with a lot of reviews, including a lot of old reviews, generally have a lot of prominence.
  • Citations can also factor in here, based on the number of citations a business has.

3. Relevance

Moving into the relevance factor, relevance is basically, does Google think you are related to the query that is typed in? You can be as prominent as anyone else, but if you do not have content on your page that is structured well, that covers the topic the user is searching about, your relevance will be very low, and you will run into issues.

It’s very important to know that these three things all kind of work together, and it’s really important to make sure you are looking at all three. On the relevance end, it looks at things like:

  • content
  • onsite SEO, so your title tags, your meta tags, all that nice SEO stuff
  • Citations also factor in here, because it looks at things like your address. Like are you actually in this city? Are you relevant to the city that the user is trying to get locations from? 
  • Categories are huge here, your Google My Business categories. Google currently has just under 4,000 different Google My Business categories, and they add an insane amount every year and they also remove ones. It’s very important to keep on top of that and make sure that you have the correct categories on your listing or you won’t rank well.
  • The business name is unfortunately a huge factor as well in here. Merely having keywords in your business name can often give you relevance to rank. It shouldn’t, but it does. 
  • Then review content. I know Mike Blumenthal did a really cool experiment on this a couple years ago, where he actually had a bunch of people write a bunch of fake reviews on Yelp mentioning certain terms to see if it would influence ranking on Google in the local results, and it did. Google is definitely looking at the content inside the reviews to see what words people are using so they can see how that impacts relevance. 

How to rank without proximity, prominence, or relevance

Obviously you want all three of these things. It is possible to rank if you don’t have all three, and I’ll give a couple examples. If you’re looking to expand your radius, you service a lot of people.

You don’t just service people on your block. You’re like, “I serve the whole city of Chicago,” for example. You are not likely going to rank in all of Chicago for very common terms, things like dentist or personal injury attorney. However, if you have a lot of prominence and you have a really relevant page or content related to really niche terms, we often see that it is possible to really expand your radius for long tail keywords, which is great.

Prominence is probably the number one thing that will expand your radius inside competitive terms. We’ll often see Google bringing in a business that is slightly outside of the same area as other businesses, just because they have an astronomical number of reviews, or maybe their domain authority is ridiculously high and they have all these linking domains.

Those two factors are definitely what influences the amount of area you cover with your local exposure. 

Spam and fake listings

On the flip side, spam is something I talk a lot about. Fake listings are a big problem in the local search space. Fake listings, these lead gen providers create these listings, and they rank with zero prominence.

They have no prominence. They have no citations. They have no authority. They often don’t even have websites, and they still rank because of these two factors. You create 100 listings in a city, you are going to be close to someone searching. Then if you stuff a bunch of keywords in your business name, you will have some relevance, and by somehow eliminating the prominence factor, they are able to get these listings to rank, which is very frustrating.

Obviously, Google is kind of trying to evolve this algorithm over time. We are hoping that maybe the prominence factor will increase over time to kind of eliminate that problem, but ultimately we’ll have to see what Google does. We also did a study recently to test to see which of these two factors kind of carries more weight.

An experiment: Linking to your site within GMB

One thing I’ve kind of highlighted here is when you link to a website inside your Google My Business listing, there’s often a debate. Should I link to my homepage, or should I link to my location page if I’ve got three or four or five offices? We did an experiment to see what happens when we switch a client’s Google My Business listing from their location page to their homepage, and we’ve pretty much almost always seen a positive impact by switching to the homepage, even if that homepage is not relevant at all.

In one example, we had a client that was in Houston, and they opened up a location in Dallas. Their homepage was optimized for Houston, but their location page was optimized for Dallas. I had a conversation with a couple of other SEOs, and they were like, “Oh, well, obviously link to the Dallas page on the Dallas listing. That makes perfect sense.”

But we were wondering what would happen if we linked to the homepage, which is optimized for Houston. We saw a lift in rankings and a lift in the number of search queries that this business showed for when we switched to the homepage, even though the homepage didn’t really mention Dallas at all. Something to think about. Make sure you’re always testing these different factors and chasing the right ones when you’re coming up with your local SEO strategy. Finally, something I’ll mention at the top here.

Local algorithm vs organic algorithm

As far as the local algorithm versus the organic algorithm, some of you might be thinking, okay, these things really look at the same factors. They really kind of, sort of work the same way. Honestly, if that is your thinking, I would really strongly recommend you change it. I’ll quote this. This is from a Moz whitepaper that they did recently, where they found that only 8% of local pack listings had their website also appearing in the organic search results below.

I feel like the overlap between these two is definitely shrinking, which is kind of why I’m a bit obsessed with figuring out how the local algorithm works to make sure that we can have clients successful in both spaces. Hopefully you learned something. If you have any questions, please hit me up in the comments. Thanks for listening.

Video transcription by Speechpad.com


If you liked this episode of Whiteboard Friday, you’ll love all the SEO thought leadership goodness you’ll get from our newly released MozCon 2019 video bundle. Catch Joy’s full talk on the differences between the local and organic algorithm, plus 26 additional future-focused topics from our top-notch speakers:

Grab the sessions now!

We suggest scheduling a good old-fashioned knowledge share with your colleagues to educate the whole team — after all, who didn’t love movie day in school? 😉


Learn how to turn customer reviews into customer insights with sentiment analysis

Digital Marketing Depot

Customer ratings and reviews have become a critical brand marketing tool. In fact, 88% of consumers now trust online reviews as much as personal recommendations, and 95% say that online reviews influence their buying decisions.

But how do you turn customer reviews into customer insights? And how can you apply those insights to your overall marketing strategy to improve reputation and create a compelling customer experience that builds brand loyalty?

Join our data science and review management experts this Thursday as they show you how to identify and use customer sentiment at scale to make improvements to both the in-store and online experience. You’ll learn:

  • Techniques to leverage long-tail reviews into search best practices.
  • Tips to analyze review sentiment at scale in order to identify in-store concerns at the national or local level.
  • How AI can evaluate review phrases, words and sentiment to improve marketing effectiveness.

Register now for “From Customer Review to Customer Insight: Best Practices in AI and Sentiment Analysis,” produced by Digital Marketing Depot and sponsored by Chatmeter.

About The Author

Digital Marketing Depot is a resource center for digital marketing strategies and tactics. We feature hosted white papers and E-Books, original research, and webcasts on digital marketing topics — from advertising to analytics, SEO and PPC campaign management tools to social media management software, e-commerce to e-mail marketing, and much more about internet marketing. Digital Marketing Depot is a division of Third Door Media, publisher of Search Engine Land and Marketing Land, and producer of the conference series Search Marketing Expo and MarTech. Visit us at http://digitalmarketingdepot.com.


What location data can tell us about the state of Starbucks’ pumpkin spice latte

Greg Sterling

Even though Autumn doesn’t officially start until September 23, Labor Day marks the effective start of the season for most people. School is back in session, the weather starts to turn (a bit) and the much-loved but equally derided pumpkin spice latte (PSL) makes its return.

Been around now for 15 years. The flavored coffee drink was initially introduced by Starbucks in 2003 and has now been in the market for 15 years. It has almost single-handedly inspired an entire sub-genre of seasonal foods.

Grocery store display of pumpkin-spiced foods

During its early years, the novelty of the coffee drink brought in new customers regularly. And while it has become a seasonal staple of many people’s caffeinated beverage routines, its impact on Starbucks sales and store visitation appears to be waning.

Pumpkin-spice fatigue? According to a foot traffic analysis by location intelligence company Gravy Analytics, the venerable yet caloric drink didn’t drive any incremental visits when it was re-introduced in 2018. Gravy observed, “Average daily foot traffic decreased by 2%. Starbucks customers also didn’t visit their local Starbucks more frequently once the pumpkin spice latte was released. Average daily visits per device remained flat throughout the period.”

It’s not clear whether the public has become indifferent to the Starbucks drink in particular or whether there’s growing pumpkin-spice fatigue (PSF) more generally. Based on Starbucks’ foot traffic data, Gravy speculates that competing chains, such as Dunkin, may not reap the rewards they anticipate from their own pumpkin spice drinks.

Starbucks Foot Traffic by Day of Week (Jul 15 – Oct 13, 2018)

Source: Gravy Analytics (2019)

Conversely, Gravy previously showed that the introduction of the meatless Impossible Burger drove a nearly 20% increase in visitation to participating Burger King stores in test markets. This (and sales data) prompted the chain to introduce the faux burger nationally last month.

Why we should care. Location data has many uses. Audience segmentation and offline attribution are the most common. Competitive insights are gaining currency as well. But another important use case is product testing. Location data can be used as a tool to determine the impact of a fast-food menu item, in this case, on store visits before the broader introduction of the product.

The pumpkin-spice latte is a lighthearted example to prove a larger point about the utility of location data to provide customer insights. And while test marketing and sales data have historically been used to determine product viability, location data can help assess whether that product still has market appeal — or has run its course.


About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes about the connections between digital and offline commerce. He previously held leadership roles at LSA, The Kelsey Group and TechTV. Follow him on Twitter or find him on LinkedIn.


Pulling back the curtain on location intelligence

Greg Sterling

There are perhaps 20 companies offering location data or location analytics. X-Mode is less well-known than many others but says it’s one of the few, primary sources of “first party” location data in the market. We caught up with Josh Anton, founder and CEO of X-Mode, to get his take on the current state of location intelligence, what marketers need to look for in a data partner and some of the changes coming with stricter data-privacy rules.

ML: What does X-Mode do?

JA: X-Mode was founded by the people behind the widely popular campus safety app Drunk Mode. We work with app developers and data buyers to offer the highest quality location data that meets all current regulatory standards including the GDPR and CCPA.

X-Mode has one of the most accurate location data panels in the industry and receives the majority of its data directly from mobile app publishers through XDK, its proprietary location-based SDK. With over 300 apps on its platform, X-Mode licenses a high accuracy (70% accurate within 20 meters), dense data panel that includes mobility metrics (speed, bearing, altitude, vertical accuracy), near real-time GPS, and other detection capabilities (IoT, Wi-Fi, and Beacon).

X-Mode provides this anonymized user panel to hundreds of clients across multiple industries including Mapping and Location Services, AdTech, MarTech, FinTech, Smart Cities, Real Estate and InsurTech.

ML: What was Drunk Mode and how did it evolve into X-Mode?

JA: Drunk Mode was the living map for when you went out partying. Drunk Mode stopped you from drunk dialing your friends, allowed you to find your drunk friends, and showed you where you went last night. A lot of our focus was how we leverage location to make the college student’s night a bit safer.

To start generating revenue, Drunk Mode began monetizing location data in 2015, as display advertising opportunities were limited and low value. During this time our team saw two major opportunities for disruption in the location data industry: 1) data licensees’ need for high-quality location data and 2) publishers looking for incremental monetization. Due to Drunk Mode’s investment in accurate location technology for college safety (breadcrumbs), we already had established contracts monetizing location and understood the pain points of having users opt in to sharing location. We were in a unique position to solve numerous issues and generate a “win-win-win” for publishers, consumers, and X-Mode.

Thus, the X-Mode Location Data Network was born in Q2 2017. We leveraged our new XDK 1.0 that was built off the core underlying technology of our Drunk Mode application and now powers X-Mode’s location platform. Having grown our network to 65M+ global users in less than 2 years, we realized early on that location-based use cases we built around Drunk Mode had a much larger impact than we ever imagined.

Instead of creating a network around college safety, we can now help optimize emergency routes. Rather than just offering drunk food discounts to users after a long night out, we can now power Fortune 500 companies’ ability to better optimize their ad spend to target around location-based moments at scale. In the past, we gave Uber/Lyft discounts to help college students get home safely, and now at X-Mode we have the power to help optimize transportation routes for the masses. We realized there was a much bigger world outside college, and Drunk Mode sobered up to what people know as X-Mode today.

ML: You made the statement that X-Mode was one of a small number of “first party” location-data providers in the U.S. You also suggested there’s only a finite supply of available location data in the market. Please elaborate.

JA: If a location company wants to build an audience or measurement product focused in the U.S. that their end clients use, that company would typically need to combine bid-stream data, low-quality aggregator data, their own always-on SDK data and/or first-party SDK companies. They would need to get to 30M DAUs/75M MAUs for measurement and 30M DAUs/250M MAUs for audience retargeting.

Even if you combine the top three companies in the location space, which control 70% of the true first-party data in the market (from an “always-on” location SDK), you only see ~30M DAUs in the U.S. (accounting for overlap). If you go downstream, there are only about 5,000 apps with over 2,000 DAUs that have appropriate permissions to run “always-on” location, with about 40% of that number monetizing location.

The reason why it seems like there is an infinite supply of location data in the market is because there’s still a huge number of companies taking low-fidelity data from ad-based SDKs (bid-stream) and creating derivative products to achieve artificial scale. This approach isn’t useful for measurement. Even worse, this approach does not have the privacy permissions needed to navigate a privacy conscious world.

Almost every location intelligence company has some sort of SDK. However, the real questions that people should be asking when it comes to understanding location data licensing are the following:

  1. What percentage of users come from first-party SDK data vs. bid-stream or aggregated ad-based SDK data? Can you name the suppliers that make up your feed under a non-solicitation?  Most importantly, how do you really know it’s coming from an SDK?
  2. Do you know whether this data is coming directly from an app and not just recycled data from another 3rd party? Ask for the app categories and redacted screenshots of some of the larger apps’ privacy policies under a non-solicitation.
  3. Is the data being collected directly from an app? Ask questions about collection methodology; a clean panel will have a pretty standard methodology across the board.

The best panels on the market for measurement or audiences will have 60%+ of their data sourced from their own SDK or from other first-party SDK companies like X-Mode. However, due to economics, and because buyers don't realize that only a handful of companies control first-party data, companies default to building their data products off of low-fidelity, low-cost data. Companies buying location data often think of it as a commodity, without thinking about data quality and privacy implications. That reckoning will come in the next few months, as it becomes much harder to sell data where neither the publisher nor the users know their data is being monetized.

ML: You said 60% or 70% of X-Mode data has 20-meter (or better)
accuracy. How is that accomplished?

JA: We use high-accuracy GPS settings at a specific cadence, plus machine learning to understand when the best time to trigger a location reading may be, around a visit or movement. Then we layer in beacon trilateration to enhance our collection methodology, which helps when mapping locations in a mall or on a dense city block.
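
The interview doesn't spell out the math, but classic beacon trilateration estimates a device's position from its distances to beacons at known coordinates, usually with a least-squares solve. Here is a minimal illustrative sketch in Python; it is not X-Mode's actual pipeline, and the function name and sample numbers are hypothetical.

    import numpy as np

    def trilaterate(beacons, distances):
        # Estimate (x, y) from three or more beacons at known positions and
        # estimated distances (e.g., derived from BLE signal strength).
        beacons = np.asarray(beacons, dtype=float)
        d = np.asarray(distances, dtype=float)
        x1, y1 = beacons[0]
        # Linearize by subtracting the first range equation from the others:
        # 2(x1 - xi)x + 2(y1 - yi)y = di^2 - d1^2 - xi^2 + x1^2 - yi^2 + y1^2
        A = 2 * (beacons[0] - beacons[1:])
        b = (d[1:] ** 2 - d[0] ** 2
             - beacons[1:, 0] ** 2 + x1 ** 2
             - beacons[1:, 1] ** 2 + y1 ** 2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos  # least-squares estimate of (x, y)

    # Hypothetical example: three beacons in a mall corridor, distances in meters.
    print(trilaterate([(0, 0), (30, 0), (0, 20)], [14.1, 18.0, 15.2]))

With more than three beacons, the least-squares step also averages out noisy distance estimates, which is what helps inside a mall or on a dense city block.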

ML: Many companies get location from the bid-stream but claim to clean it up and discard inaccurate data. Are you skeptical? And what role, if any, does bid-stream location data have to play in the ecosystem?

JA: I am skeptical because there are actually two issues with bid-stream data. The first is data accuracy (already discussed). The second is a lack of persistent collection. With bid-stream data, you are only capturing location when someone views an ad. It’s limited to online behavior.

ML: What impact do you believe CCPA will have on location data, in terms of its availability and quality?

JA: Right now, there is a lot of fluff in the market. The three main first-party suppliers of location data in the U.S. (X-Mode, Cuebiq, and Foursquare) control 70% of the first-party supply of background location in the market. These companies not only work with publishers directly, but also have a quality SDK and control the relationship with the publisher, so they can pop up the proper opt-ins needed to navigate CCPA or the 30+ other states that are passing some sort of legislation requiring explicit consent.

Third-party aggregators out there, getting data
either through ad-based SDKs (where publishers may not know their data is being
monetized) or through the bid-stream, will have issues not only sourcing data
at scale, but also providing that data with the proper consent mechanisms
needed by agencies and brands.

Privacy is a good thing because it gives consumers more control over their data. At the same time, it wipes out a lot of the "fluff data" in the market coming from third-party aggregators. Companies building location-based solutions will have to rely on first-party SDK data companies like X-Mode, Cuebiq, and Foursquare to power their solutions so they can stay on top of privacy and control quality. In the next 18 months, I expect to see both consolidation of first-party SDK players like ourselves, and low-quality aggregators pivoting or evolving into analytics or other location-based tools upstream.

ML: What are the most important things brands and
marketers need to understand about working with location data?

JA: The two most important things brands and marketers need to ask themselves when looking at targeting and attribution are:

Quality:

  • How did you calculate a visit and dwell time?
  • How was the data sourced to create this visit/dwell time and how confident are you about each visit/dwell time? What filters do you have in place for outliers (i.e., what was the cadence of collection, filters for speed when someone’s driving, etc.)?
  • What is the average number of days seen across your panel (i.e., DAU-to-MAU ratio)? 2+ weeks is the gold standard, but anything above 5+ days is not bad.
  • In terms of how you map location data to a visit, what's the high-level black box used to do this (i.e., polygons, check-ins, point radius, etc.)? Polygons and check-ins are going to be much more accurate than point radius, which is what most companies use today. Foursquare and Safegraph were ahead of the game here (see the sketch after this list).
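
To make those questions concrete, here is a minimal sketch contrasting the two visit-mapping approaches in the last bullet: a point-radius check versus a point-in-polygon check, with a crude speed filter for drive-by pings. The shapes, coordinates, and thresholds are hypothetical, and the shapely library is used purely for the geometry.

    from shapely.geometry import Point, Polygon

    # Coordinates are in meters on a local planar projection (a real pipeline
    # would project lat/lon first). Shapes and thresholds are hypothetical.
    store_polygon = Polygon([(0, 0), (40, 0), (40, 25), (0, 25)])  # building footprint
    store_centroid = Point(20, 12.5)
    POINT_RADIUS_M = 50        # typical point-radius approach
    MAX_WALK_SPEED = 2.5       # m/s; anything faster is likely drive-by traffic

    def classify_ping(ping, prev_ping):
        """Return (radius_says_visit, polygon_says_visit) for one location ping."""
        x, y, t = ping
        px, py, pt = prev_ping
        speed = Point(x, y).distance(Point(px, py)) / max(t - pt, 1)
        if speed > MAX_WALK_SPEED:
            return False, False                      # speed filter: not a visit
        radius_hit = Point(x, y).distance(store_centroid) <= POINT_RADIUS_M
        polygon_hit = store_polygon.contains(Point(x, y))
        return radius_hit, polygon_hit

    # A ping on the sidewalk next door: point radius counts it, the polygon doesn't.
    print(classify_ping((45, 30, 60), (44, 32, 55)))   # -> (True, False)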

Transparency and privacy:

  • Where was this data sourced and how was it collected: server to server vs. SDK vs. directly from an app?
  • How are you determining or mandating that you have the legal right to use this data for your defined use case, based on the contract terms and privacy policies of some of the larger contributors to your panel?

About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes about the connections between digital and offline commerce. He previously held leadership roles at LSA, The Kelsey Group and TechTV. Follow him on Twitter or find him on LinkedIn.

Amazon vs. Google: Decoding the World’s Largest E-commerce Search Engine

Lorna_Franklin

A lot of people forget that Amazon is a search engine, let alone the largest search engine for e-commerce. With 54 percent of product searches now taking place on Amazon, it’s time to take it seriously as the world’s largest search engine for e-commerce. In fact, if we exclude YouTube as part of Google, Amazon is technically the second largest search engine in the world.

As real estate on Google becomes increasingly difficult to maintain, moving beyond a website-centric e-commerce strategy is a no-brainer. With 54 percent of shoppers choosing to shop on e-commerce marketplaces, it's no surprise that online marketplaces are the number one most important digital marketing channel in the US, according to a 2018 study by the Digital Marketing Institute. While marketplaces like Etsy and Walmart are growing fast, Amazon maintains its dominance of e-commerce market share, owning 47 percent of online sales and 5 percent of all retail sales in the US.

Considering that there are currently over 500 million products listed on Amazon.com, and more than two-thirds of clicks happen on the first page of Amazon’s search results—selling products on Amazon is no longer as easy as “set it and forget it.” 

Enter the power of SEO.

When we think of SEO, many of us are aware of the basics of how Google’s algorithm works, but not many of us are up to speed with SEO on Amazon. Before we delve into Amazon’s algorithm, it’s important to note how Google and Amazon’s starkly different business models are key to what drives their algorithms and ultimately how we approach SEO on the two platforms.

The academic vs. The stockbroker

Google was born in 1998 through a Ph.D. project by Lawrence Page and Sergey Brin. It was the first search engine of its kind designed to crawl and index the web more efficiently than any existing systems at the time.

Google was built on a foundation of scientific research and academia, with a mission to:

“Organize the world’s information and make it universally accessible and useful” — Google

Now, answering 5.6 billion queries every day, Google's mission is becoming increasingly difficult — which is why its algorithm is the most complex of any search engine in the world, continuously refined through hundreds of updates every year.

In contrast to Brin and Page, Jeff Bezos began his career on Wall Street, working a series of jobs before starting Amazon in 1994 after reading that the web was growing at 2,300 percent a year. Determined to take advantage of this, he made a list of the top products most likely to sell online and settled on books because of their low cost and high demand. Amazon was built on a revenue model, with a mission to:

“Be the Earth’s most customer-centric company, where customers can find and discover anything they might want to buy online, and endeavors to offer its customers the lowest possible prices.” — Amazon

Amazon doesn’t have searcher intent issues

When it comes to SEO, the contrasting business models of these two companies lead the search engines to ask very different questions in order to deliver the right results to the user.

On one hand, we have Google who asks the question:

“What results most accurately answer the searcher’s query?”

Amazon, on the other hand, wants to know:

“What product is the searcher most likely to buy?”

On Amazon, people aren’t asking questions, they’re searching for products—and what’s more, they’re ready to buy. So, while Google is busy honing an algorithm that aims to understand the nuances of human language, Amazon’s search engine serves one purpose—to understand searches just enough to rank products based on their propensity to sell.

With this in mind, working to increase organic rankings on Amazon becomes a lot less daunting.

Amazon’s A9 algorithm: The secret ingredient

Amazon may dominate e-commerce search, but many people haven't heard of the A9 algorithm. That might seem unusual, but the reason Amazon isn't keen on pushing its algorithm through the lens of a large-scale search engine is simply that Amazon isn't in the business of search.

Amazon's business model is a well-oiled revenue-driving machine — designed first and foremost to sell as many products as possible through its online platform. While Amazon's advertising platform is growing rapidly and AWS remains its fastest-growing revenue source, Amazon still makes a large portion of its revenue from goods sold through the marketplace.

With this in mind, the secret ingredient behind Amazon's A9 algorithm is, in fact, sales velocity.

What is sales velocity, you ask? It’s essentially the speed and volume at which your products sell on Amazon’s marketplace.

There are lots of factors, which Amazon SEOs refer to as "direct" and "indirect" ranking factors, but ultimately every single one of them ties back to sales velocity in some way.
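
The article doesn't pin sales velocity to a formula, but the simplest way to reason about it is units sold per unit of time, with conversion rate as the lever you control most directly. A quick sketch with hypothetical 30-day listing numbers:

    # Hypothetical 30-day stats for one listing.
    sessions = 4_200        # listing page visits
    units_ordered = 310
    days = 30

    sales_velocity = units_ordered / days        # units sold per day
    conversion_rate = units_ordered / sessions   # orders per session

    print(f"Sales velocity:  {sales_velocity:.1f} units/day")   # 10.3
    print(f"Conversion rate: {conversion_rate:.1%}")            # 7.4%
    # Lifting conversion to 9% at the same traffic would add a bit over 2 units/day.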

At Wolfgang Digital, we approach SEO on Google based on three core pillars — Technology, Relevance, and Authority.

Evidently, Google's ranking pillars are all based on optimizing a website in order to drive click-through on the SERP.

On the other hand, Amazon’s core ranking pillars are tied back to driving revenue through sales velocity — Conversion Rate, Keyword Relevance and of course, Customer Satisfaction.

Without further ado, let’s take a look at the key factors behind each of these pillars, and what you can optimize to increase your chances of ranking on Amazon’s coveted first page.

Conversion rate

Conversion rates on Amazon have a direct impact on where your product will rank because this tells Amazon’s algorithm which products are most likely to sell like hotcakes once they hit the first page.

Of all variables to monitor as an Amazon marketer, working to increase conversion rates is your golden ticket to higher organic rankings.

Optimize pricing

Amazon's algorithm is designed to predict which products are most likely to convert. This is why price has such a huge impact on where your products rank in search results. If you add a new product to Amazon at a cheaper price than the average competitor, your product is likely to soar toward the top of the results, at least until it gathers enough sales history for Amazon to determine its actual sales performance.

Even if you’re confident that you have a supplier advantage, it’s worth checking your top-selling products and optimizing pricing where possible. If you have a lot of products, repricing software is a great way to automate pricing adjustments based on the competition while still maintaining your margins.
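
Repricing tools differ, but the core rule most of them implement is simple: track the lowest competing offer and match or slightly undercut it, without ever dropping below a floor that protects your margin. A minimal sketch of that logic with hypothetical numbers (this is not any particular repricer's API):

    def reprice(lowest_competitor, cost, min_margin=0.20, undercut=0.05):
        """Match or slightly undercut the lowest competitor, but never go below
        a floor of cost * (1 + min_margin). Purely illustrative."""
        floor = cost * (1 + min_margin)
        target = lowest_competitor - undercut
        return round(max(target, floor), 2)

    # A competitor drops to 11.49 on a product that costs us 9.00 to land:
    print(reprice(lowest_competitor=11.49, cost=9.00))   # -> 11.44 (floor is 10.80)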

However, Amazon knows that price isn’t the only factor that drives sales, which is why Amazon’s first page isn’t simply an ordered list of items priced low to high. See the below Amazon UK search results for “lavender essential oil:”

Excluding the sponsored ads, we can see that the cheapest products don't all rank at the top, nor do the more expensive ones all sit lower down the page. So, if you've always maintained the idea that selling on Amazon is a race to the bottom on price, read on, my friends.

Create listings that sell

As we discussed earlier, Amazon is no longer a "set it and forget it" platform, which is why you should treat each of your product listings as you would a product page on your website. Creating listings that convert takes time, which is why not many sellers do it well, so it's an essential tactic for stealing conversions from the competition.

Title

Make your titles user-friendly, include the most important keywords at the front, and provide just enough information to entice clicks. Gone are the days of keyword-stuffing titles on Amazon; in fact, it may even hinder your rankings by reducing clicks and therefore conversions.

Bullet points

These are the first thing your customer sees, so make sure to highlight the best features of your product using a succinct sentence in language designed to convert.

Improve the power of your bullet points by including information that your top competitors don’t provide. A great way to do this is to analyze the “answered questions” for some of your top competitors.

Do you see any trending questions that you could answer in your bullet points to help shorten the buyer journey and drive conversions to your product?

Product descriptions

Given that over 50 percent of Amazon shoppers said they always read the full description when they are considering purchasing a product, a well-written product description can have a huge impact on conversions.

Your description is likely to be the last thing a customer will read before they choose to buy your product over a competitor, so give these your time and care, reiterating points made in your bullet points and highlighting any other key features or benefits likely to push conversions over the line.

Taking advantage of A+ content for some of your best selling products is a great way to craft a visually engaging description, like this example from Safavieh.

Of course, A+ content requires additional design costs which may not be feasible for everyone. If you opt for text-only descriptions, make sure your content is easy to read while still highlighting the best features of your product.

For an in-depth breakdown on creating a beautifully crafted Amazon listing, I highly recommend this post from Startup Bros.

AB test images

Images are incredibly powerful when it comes to increasing conversions, so if you haven’t tried split testing different image versions on Amazon, you could be pleasantly surprised. One of the most popular tools for Amazon AB testing is Splitly — it’s really simple to use, and affordable with plans starting at $47 per month.

Depending on your product type, it may be worth investing the time into taking your own pictures rather than using the generic supplier-provided images. The images that tend to have the biggest impact on conversions are the feature image (the one you see in search results) and close-up images, so try testing a few different versions to see which has the biggest impact.
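
Whichever testing tool you use, the underlying question is whether the difference in conversion rate between two images is real or just noise. A quick two-proportion z-test, shown here with hypothetical split-test numbers, answers that:

    from math import sqrt, erf

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return (lift, two-tailed p-value) for image B vs. image A."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail, two-sided
        return p_b - p_a, p_value

    # Hypothetical split test: 1,000 sessions per image, 82 vs. 104 orders.
    lift, p = two_proportion_z_test(82, 1000, 104, 1000)
    print(f"Lift: {lift:+.1%}, p-value: {p:.3f}")   # about +2.2 points, p around 0.09

A p-value around 0.09 is suggestive but not conclusive at the usual 95 percent confidence level, so in that scenario you would keep the test running before switching images.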

Amazon sponsored ads

The best thing about Amazon SEO is that your performance on other marketing channels can help support your organic performance.

Unlike on Google, where advertising has no impact on organic rankings, if your product performs well on Amazon ads, it may help boost organic rankings. This is because if a product is selling through ads, Amazon’s algorithm may see this as a product that users should also see organically.

A well-executed ad campaign is particularly important for new products, in order to boost their sales velocity in the beginning and build up the sales history needed to rank better organically.

External traffic

External traffic involves driving traffic from social media, email, or other sources to your Amazon products.

While external sources of traffic are a great way to gain more brand exposure and increase customer reach, a well-executed external traffic strategy also impacts your organic rankings because of its role in increasing sales and driving up conversion rates.

Before you start driving traffic straight to your Amazon listing, you may want to consider using a landing page tool like Landing Cube in order to protect your conversion rate as much as possible.

With a landing page tool, you drive traffic to a landing page where customers get a special offer code to use on your product listing page. This way, you only send traffic that is far more likely to convert.

Keyword relevance

A9 still relies heavily on keyword matching to determine the relevance of a product to a searcher's query, which is why this is a core pillar of Amazon SEO.

While your title, bullet points, and descriptions are essential for converting customers, if you don’t include the relevant keywords, your chances of driving traffic to convert are slim to none.

Every single keyword incorporated in your Amazon listing will impact your rankings, so it’s important to deploy a strategic approach.

Steps for targeting the right keywords on Amazon:

  1. Brainstorm as many search terms as you think someone would use to find your product.
  2. Analyze 3–5 competitors with the most reviews to identify their target keywords.
  3. Validate the top keywords for your product using an Amazon keyword tool such as Magnet, Ahrefs, or Keywordtool.io.
  4. Download the keyword lists into Excel, and filter out any duplicate or irrelevant keywords (or see the pandas sketch after this list).
  5. Prioritize search terms with the highest search volume, bearing in mind that broad terms will be harder to rank for. Depending on the competition, it may make more sense to focus on lower volume terms with lower competition—but this can always be tested later on.
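
If you'd rather not do the cleanup in steps 4 and 5 by hand in Excel, the same dedupe-and-prioritize pass takes a few lines of pandas. The file names, column names, and blocklist below are hypothetical and will depend on your keyword tool's export format.

    import pandas as pd

    # Hypothetical exports, one CSV per tool/competitor, each with
    # "keyword" and "search_volume" columns (names depend on your tool).
    files = ["magnet_export.csv", "ahrefs_export.csv", "keywordtool_export.csv"]

    keywords = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
    keywords["keyword"] = keywords["keyword"].str.strip().str.lower()

    keywords = (keywords
                .drop_duplicates(subset="keyword")
                .sort_values("search_volume", ascending=False))

    # Drop obviously irrelevant terms, e.g. brands you don't sell.
    blocklist = ["other brand", "unrelated term"]
    keywords = keywords[~keywords["keyword"].str.contains("|".join(blocklist), na=False)]

    keywords.to_csv("prioritized_keywords.csv", index=False)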

Once you have refined the keywords you want to rank for, here are some things to remember:

  • Include your most important keywords at the start of the title, after your brand name.
  • Use long-tail terms and synonyms throughout your bullet points and descriptions.
  • Use your backend search terms wisely — these are a great place to include common misspellings, different measurement versions (e.g., metric or imperial), color shades, and descriptive terms.
  • Most importantly — don't repeat keywords. If you've included a search term once in your listing (i.e., in the title), you don't need to include it in your backend search terms. Repeating a keyword, or keyword stuffing, will not improve your rankings.

Customer satisfaction

Account health

Part of Amazon's mission statement is "to be the Earth's most customer-centric company." This relentless focus on the customer is what drives Amazon's astounding customer retention, with 85 percent of Prime shoppers visiting the marketplace at least once a week and 56 percent of non-Prime members reporting the same. A focus on the customer is at the core of Amazon's success, which is why stringent customer satisfaction metrics are a key component of selling on Amazon.

Your account health metrics are the bread and butter of your success as an Amazon seller, which is why they’re part of Amazon’s core ranking algorithm. Customer experience is so important to Amazon that, if you fail to meet the minimum performance requirements, you risk getting suspended as a seller—and they take no prisoners.

On the other hand, if you are meeting your minimum requirements but other sellers are performing better than you by exceeding theirs, they could be at a ranking advantage. 

Customer reviews

Customer reviews are one of the most important Amazon ranking factors — not only do they tell Amazon how customers feel about your product, but they are one of the most impactful conversion factors in e-commerce. Almost 95 percent of online shoppers read reviews before buying a product, and over 60 percent of Amazon customers say they wouldn’t purchase a product with less than 4.5 stars.

On Amazon, reviews help to drive both conversion rate and keyword relevance, particularly for long-tail terms. In short, they’re very important.

Increasing reviews for your key products on Amazon was historically a lot easier, through acquiring incentivized reviews. However, in 2018, Amazon banned sellers from incentivizing reviews, which makes it even more difficult to actively build reviews, especially for new products.

Tips for building positive reviews on Amazon:

  • Maintain consistent communication throughout the purchase process using Amazon email marketing software. Following up to thank someone for their order and notifying them when the order is fulfilled creates a seamless buying experience that leaves customers more likely to give a positive review.
  • Adding branded package inserts to thank customers for their purchase makes the buying experience personal, differentiating you as a brand rather than a nameless Amazon seller. A friendly reminder in the delivery note to leave a review will get a better response rate than the generic email customers receive from Amazon.
  • Providing upfront returns information without a customer having to ask for it shows customers you are confident in the quality of your product. If a customer isn’t happy with your product, adding fuel to the fire with a clunky or difficult returns process is more likely to result in negative reviews through sheer frustration.
  • Follow up with helpful content related to your products such as instructions, decor inspiration, or recipe ideas, including a polite reminder to provide a review in exchange.
  • And of course, deliver an amazing customer experience from start to finish.

Key takeaways for improving Amazon SEO

As a marketer well versed in the world of Google, venturing onto Amazon can seem like a culture shock — but mastering the basic principles of Amazon SEO could be the difference between getting lost in a sea of competitors and driving a successful Amazon business.

  • Focus on driving sales velocity through increasing conversion rate, improving keyword relevance, nailing customer satisfaction and actively building reviews.
  • Craft product listings for customers first, search engines second.
  • Don’t neglect product descriptions in the belief that no one reads them—over 50% of Amazon shoppers report reading the full description before buying a product.
  • Keywords carry a lot of weight. If you don’t include a keyword in your listing, your chances of ranking for it are slim.
  • Images are powerful. Take your own photos instead of using generic supplier images and be sure to test, test, and test.
  • Actively build positive reviews by delivering an amazing customer experience.
  • Invest in PPC and driving external traffic to support organic performance, especially for new products.

What other SEO tips or tactics do you apply on Amazon? Tell me in the comments below!
