Martech is Marketing: A Live Q&A with Scott Brinker


Marketing Land

Live Webinar with Scott Brinker!

Customer expectations are rising… and marketing technology — martech — is the key to meeting and exceeding those demands. Indeed, marketing has become a digital profession that is indistinguishably intertwined with and supported by marketing technology. 

In short: Martech is marketing. 

Bring your big-picture questions about this bold and vital concept to a live Q&A webinar with moderator Jen Cannon (@jenvidetta), MarTech Today Senior Editor, and special guest Scott Brinker (@chiefmartec), The MarTech Conference chair. Trends, concerns, best and worst practices — it’s all fair game for the armchair analyst who’s charted the rise of the martech industry for over ten years.

You can send in a question when signing up, or lob one in the spur of the moment during the webinar. Either way — this will be one lively conversation you won’t want to miss. Register today for “Martech is Marketing: A Live Q&A with Scott Brinker.”

Opinions expressed in this article are those of the guest author and not necessarily those of Marketing Land. Staff authors are listed here.

About The Author

Marketing Land is a daily publication covering digital marketing industry news, trends, strategies and tactics for digital marketers. Special content features, site announcements and occasional sponsor messages are posted by Marketing Land.



Be a clown, advertiser – Marketing Land


Peter Minnium

I contemplated the theme for this mid-summer article while I was prepping my beach reading list, looking for easy, engaging and entertaining material to replenish my brain while my body rests waterside. This is important work for me, as research suggests that we need to take mental breaks to avoid the pitfalls of over-thinking. Thinking too much, it seems, makes us dumber and less productive.

Over-thinking advertising can also make brands dumber and their communications less effective. 

We know from behavioral science that people appreciate stimuli that are easily processed by the brain, yet we too often ask consumers to decode complex messages. Top-of-mind awareness (which drives salience) is arguably the most important role of advertising, yet we too often distract audiences by packing in too much information; breaking through the ad clutter to even get noticed is harder than ever, yet we too often fail to offer consumers anything worth paying attention to.

In the spirit of summer reading for advertisers, I offer an antidote to over-thinking advertising in what I hope is an easy, engaging and entertaining manner – sung to the tune of Cole Porter’s 1948 classic, Be a Clown.

I’ll remember forever,
When I was but three,
Mama, who was clever,
Remarking to me;
If, son, when you’re grown up,
You want ev’rything nice,
I’ve got your future sewn up
If you take this advice:

Be a clown, be a clown,
All the world loves a clown.
Brands well-liked, ads that thrill,
Drive consumers to make the deal.
So get their attention,
With dazzling content that’s fun.
If you sell like the others, who’s to say where they’re led.
If your pitch is System 2, all your adverts they’ll shred.
You’ll be sure to convert if you create bonds instead.
Be a clown, be a clown, be a clown.

Be a clown, be a clown,
All the world loves a clown.
Be completely unique,
And to your shop they’ll all flock.
Other brands all like,
At your success they will balk.
A parity product can’t hope to prevail.
A brand like the others can be sure to fail.
They’ll all come to you if yourself you unveil.
Be a clown, be a clown, be a clown.

Be a clown, be a clown,
All the world loves a clown.
Life is stressful enough,
Without being sold lots of stuff.
Complications arise,
When they’re oft told what they should buy.
An ad that asks them to think makes them feel like a rube.
A message that’s full of facts makes them turn off the tube.
But an advert that’s fun leaves them feeling renewed.
Be a clown, be a clown, be a clown.

Be a clown, be a clown,
All the world loves a clown.
Postmodern consumers,
They all want witty ads.
Irony, comedy,
They’re not just passing fads.
Why pretend like they don’t know that you’re selling to them?
Why would you dismiss self-aware ads as a silly trend?
When crowds love an ad that makes them laugh at the end.
Be a clown, be a clown, be a clown.

Be a clown, be a clown,
All the world loves a clown.
The content they consume,
Has so dramatically improved.
While the ads they must watch,
Are all in their mind a big splotch.
If you don’t up your ad game, your messages they’ll block.
If your stories can’t improve, at your brand they will balk.
But jack you won’t lack when to your fun ads they flock.
(Quack, quack, quack, quack)
Be a clown, be a clown, be a clown.

Marketers and agencies, take some advice from me and Cole Porter this summer: Create advertising that is likable, differentiates with personality, and provides consumers a respite from their stressful lives. It should be as good as modern TV they so readily watch and, most importantly, treat your post-modern consumers to the wit, irony and creativity they deserve.

(Special thanks to Aristomenes Spanos for his lyrical genius)

Opinions expressed in this article are those of the guest author and not necessarily those of Marketing Land. Staff authors are listed here.

About The Author

Peter Minnium is president of Ipsos US, where he leads the US team in helping companies measure and amplify how media, brands, and consumers connect through compelling content and great communications. Prior to his switch to market research, Peter was Head of Brand Initiatives at the IAB focused on addressing the under-representation of creative brand advertising online.



Proposed NYC law would ban sharing of location data within the city


Greg Sterling

Third-party data is increasingly under threat. As one case in point, a bill introduced this week would amend the New York City administrative code to prohibit the transfer or sharing of consumer location data with third parties within city limits.

In other words, the party that collects or captures the data, even with the user’s opt-in consent, could not share it with another entity. It appears to be a very bright line.

Would not impact first parties. The proposed law would not eliminate use of location for ad targeting and offline attribution; first party platforms and publishers could still do these things. But it would impact data brokers, MarTech platforms, agencies and the programmatic ecosystem, which relies on the free flow of third party data.

The bill is explicitly directed at telecom companies and mobile apps that capture or have access to user location. It’s designed to protect consumers who may not be aware their location data is being shared. But the law appears to make no exception for sharing done with opt-in consent.

Each violation worth $1,000. Violations would bring $1,000 in penalties per incident, up to a maximum of $10,000 per day. New York City’s Department of Information Technology would enforce the law but individuals would also have a right to sue and collect damages.
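As a quick illustration of the penalty structure described above, here is a minimal sketch of how the per-incident fine and daily cap would interact (the function name and shape are my own; only the $1,000-per-violation and $10,000-per-day figures come from the article):

```python
def daily_penalty(violations_in_day: int) -> int:
    """Hypothetical sketch: $1,000 per violation, capped at $10,000 per day,
    per the figures reported for the proposed NYC bill."""
    PER_VIOLATION = 1_000
    DAILY_CAP = 10_000
    return min(violations_in_day * PER_VIOLATION, DAILY_CAP)

# A data broker sharing location records for 25 users in one day would hit
# the cap rather than owe $25,000.
```

So a handful of violations scale linearly, but the exposure in any single day is bounded; individuals suing for damages would be a separate avenue.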

The bill provides for a number of exceptions, including for selected law enforcement use cases and for other first responders. It would take effect 120 days after being signed into law.

Passage not guaranteed. The bill still faces a number of hurdles and its passage is not a foregone conclusion. Technology and advertising interests will probably seek to block or dilute the bill before passage. And even if passed, it would almost certainly face legal challenges. But the genie is out of the bottle. We may see similar rules proposed in cities — and potentially states — across the country in the coming months.

Google and Facebook won’t be impacted. Google and Facebook would not be affected because they can collect and use location data for targeting and attribution within the closed environments of their platforms. They are first parties.

Just as they have not really been harmed by GDPR, Google and Facebook would fare better than other entities that rely on the third-party data ecosystem. Indeed, programmatic ad networks would probably be prevented from targeting ads any more precisely than New York City as a whole. It’s unclear whether even that level of user location targeting would be allowed.

Why we should care. Assuming the law passes, there are some unanswered questions. Among them, will advertisers or agencies (or tools used by agencies) be blocked from accessing location data regardless of the ad platform? In other words, Google and Facebook could use location but would reporting that out to customers violate the law?

The more local and state rules there are that seek to govern privacy and data security, the more these jurisdictions make the case for a uniform federal law and preemption. Paradoxically, these local laws are appearing precisely because there are no new privacy rules at a national level. And it’s unlikely we’ll see any before the 2020 elections.

About The Author

Greg Sterling is a Contributing Editor at Search Engine Land. He writes a personal blog, Screenwerk, about connecting the dots between digital media and real-world consumer behavior. He is also VP of Strategy and Insights for the Local Search Association. Follow him on Twitter or find him at Google+.



Removing Coordinated Inauthentic Behavior in Thailand, Russia, Ukraine and Honduras


By Nathaniel Gleicher, Head of Cybersecurity Policy

In the past week, we removed multiple Pages, Groups and accounts that were involved in coordinated inauthentic behavior on Facebook and Instagram. We found four separate, unconnected operations that originated in Thailand, Russia, Ukraine and Honduras. We didn’t find any links between the campaigns we’ve removed, but all created networks of accounts to mislead others about who they were and what they were doing.

We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. We’re taking down these Pages, Groups and accounts based on their behavior, not the content they posted. In each of these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action. We have shared information about our analysis with law enforcement, policymakers and industry partners.

We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge. We’re committed to continually improving to stay ahead. That means building better technology, hiring more people and working more closely with law enforcement, security experts and other companies.

What We’ve Found So Far

We removed 12 Facebook accounts and 10 Facebook Pages for engaging in coordinated inauthentic behavior that originated in Thailand and focused primarily on Thailand and the US. The people behind this small network used fake accounts to create fictitious personas and run Pages, increase engagement, disseminate content, and also to drive people to off-platform blogs posing as news outlets. They also frequently shared divisive narratives and comments on topics including Thai politics, geopolitical issues like US-China relations, protests in Hong Kong, and criticism of democracy activists in Thailand. Although the people behind this activity attempted to conceal their identities, our review found that some of this activity was linked to an individual based in Thailand associated with New Eastern Outlook, a Russian government-funded journal based in Moscow.

  • Presence on Facebook: 12 accounts and 10 Pages.
  • Followers: About 38,000 accounts followed one or more of these Pages.
  • Advertising: Less than $18,000 in spending for ads on Facebook paid for in US dollars.

We identified these accounts through an internal investigation into suspected Thailand-linked coordinated inauthentic behavior. Our investigation benefited from information shared by local civil society organizations.

Below is a sample of the content posted by some of these Pages:

Further, last week, ahead of the election in Ukraine, we removed 18 Facebook accounts, nine Pages, and three Groups for engaging in coordinated inauthentic behavior that originated primarily in Russia and focused on Ukraine. The people behind this activity created fictitious personas, impersonated deceased Ukrainian journalists, and engaged in fake engagement tactics. They also operated fake accounts to increase the popularity of their content, deceive people about their location, and to drive people to off-platform websites. The Page administrators and account owners posted content about Ukrainian politics and news, including topics like Russia-Ukraine relations and criticism of the Ukrainian government.

  • Presence on Facebook: 18 Facebook accounts, 9 Pages, and 3 Groups.
  • Followers: About 80,000 accounts followed one or more of these Pages, about 10 accounts joined at least one of these Groups.
  • Advertising: Less than $100 spent on Facebook ads paid for in rubles.

We identified these accounts through an internal investigation into suspected Russia-linked coordinated inauthentic behavior, ahead of the elections in Ukraine. Our investigation benefited from public reporting including by a Ukrainian fact-checking organization.

Below is a sample of the content posted by some of these Pages:

Caption: “When it seems that there is no bottom. Ukrainian TV anchor hosted a show dressed up as Hitler“

Caption: “Poroshenko’s advisor is accused of organizing sex business in Europe.”

Caption: “A journalist from the US: there is a complete collapse of people’s hopes in Ukraine after Maidan“

Caption: “The art of being a savage”

Also last week, ahead of the election in Ukraine, we removed 83 Facebook accounts, two Pages, 29 Groups, and five Instagram accounts engaged in coordinated inauthentic behavior that originated in Russia and the Luhansk region in Ukraine and focused on Ukraine. The people behind this activity used fake accounts to impersonate military members in Ukraine, manage Groups posing as authentic military communities, and also to drive people to off-platform sites. They also operated Groups — some of which shifted focus from one political side to another over time — disseminating content about Ukraine and the Luhansk region. The Page admins and account owners frequently posted about local and political news including topics like the military conflict in Eastern Ukraine, Ukrainian public figures and politics.

  • Presence on Facebook and Instagram: 83 Facebook accounts, 2 Pages, 29 Groups, and 5 Instagram accounts.
  • Followers: Fewer than 1,000 accounts followed one or more of these Pages, under 35,000 accounts joined at least one of these Groups, and around 1,400 people followed one or more of these Instagram accounts.
  • Advertising: Less than $400 spent on Facebook and Instagram ads paid for in US dollars.

We identified this activity through an internal investigation into suspected coordinated inauthentic behavior in the region, ahead of the elections in Ukraine. Our investigation benefited from information shared with us by local law enforcement in Ukraine.

Below is a sample of the content posted by some of these Pages:

Caption: “Ukrainians destroy their past! For many years now, as we have witnessed a deep crisis of the post-Soviet Ukrainian statehood. The republic, which in the early 1990s had the best chance of successful development among all the new independent states, turned out to be the most unsuccessful. And the reasons here are not economic and, I would even say, not objective. The root of Ukrainian problems lies in the ideology itself, which forms the basis of the entire national-state project, and in the identity that it creates. It is purely negativistic, and any social actions based on it are, in one way or another, directed not at creation, but at destruction. Ukrainian ship led the young state to a crisis, the destruction of the spiritual, cultural, historical and linguistic community of the Russian and Ukrainian peoples, affecting almost all aspects of public life. Do not forget that the West played a special role in this, which since 2014 has openly interfered in the politics of another state and in fact sponsored an armed coup d’état.”

Caption: “Suicide is the only way out for warriors of the Armed Forces of Ukraine To date, a low moral and psychological level in the ranks of the armed forces of the Armed Forces of Ukraine has not risen. Since the beginning of the year in the conflict zone in the Donbas a considerable number of suicide cases have been recorded among privates of the Armed Forces of Ukraine. Nobody undertakes to give the exact number because the command does not report on every such state of emergency and tries to hide the fact of its own incompetence. Despite all the assurances of Kiev about the readiness for the offensive, the mood of the warriors is not happy. The Ukrainian army is not morally prepared, as it were, beautifully told Poroshenko about full-scale hostilities. During the four years of the war, there were many promises on his part, but in fact nothing. Initially, warriors come to the place of deployment morally unstable. Inadequate drinking and drug use exacerbates the already deplorable state of the heroes. Many have opened their eyes to the real causes of the war and they do not want to kill their fellow citizens. But no one will listen to them. And in recent time, lack of staff in the Armed Forces of Ukraine is being addressed with recruiting rookies, who undergo 3-day training and seeing a machine gun for the first time only at the frontline. So the warriors of light are losing their temper and they see hanging or shooting themselves as the only way out. Only recently cases of suicide have been made public by the Armed Forces of Ukraine, and no one will know what happened before. It is not good to show the glorious heroes of Ukraine from the dark side.”

Caption: “…algorithm and peculiarities of providing medical assistance to Ukrainian military. In the end of the visit representatives of Lithuanian and Ukrainian sides discussed questions of joint interest and expressed opinions on particular aspects of developing the sphere of collaboration even further.”

Headline: “Breaking: In the Bakhmutki area, the Armed Forces of Ukraine destroyed a car with peaceful citizens in it, using an anti-tank guided missile”

Finally, we removed 181 accounts and 1,488 Facebook Pages that were involved in domestic-focused coordinated inauthentic activity in Honduras. The individuals behind this activity operated fake accounts. They also created Pages designed to look like user profiles — using false names and stock images — to comment and amplify positive content about the president. Although the people behind this campaign attempted to conceal their identities, our review found that some of this activity was linked to individuals managing social media for the government of Honduras.

  • Presence on Facebook: 181 Facebook accounts and 1,488 Pages.
  • Followers: About 120,000 accounts followed one or more of these Pages.
  • Advertising: More than $23,000 spent on Facebook ads paid for in US dollars and Honduran lempiras.

We identified these accounts through an internal investigation into suspected coordinated inauthentic behavior in the region.

Below is a sample of the content posted by some of these Pages:

Caption: “Celebration first year of service of the national force and gangs. We are celebrating the first anniversary of service of the national force and gangs; all Hondurans must know what they face and what is the future. We have been beaten by violence, but the state of Honduras must solve it. It is so much the admiration and confidence of the Honduran people, that nowadays, the Fnamp HN receives the highest recognition that an institution in security can have. We recognize its work by the level of commitment to the point of losing their lives for the cause of others.”

Caption: “Happy birthday, mother. I thank God because my mother – Elvira -, birthday today one more year of life. I will always be grateful to her for strengthening me and supporting me with her advice, and for being an example of faith and solidarity with the neighbor. God bless you, give you health and allow you to be many more years with us!”

Caption: “Happy Sunday! May the first step you take in the day be to move forward and leave a mark, fill yourself with energy and optimism, shield yourself from negativity with hope, and a desire to change Honduras.”




5 ways to grow ABM performance by moving beyond your basic persona framework


John Steinert

In an article from a couple of years ago, Hasse Jansen curated 33 statistics on why your marketing organization should employ buyer personas to drive better demand gen results. Back in 2016 when he was writing, 44% of B2B marketers had already implemented personas and a full 83% expected they’d be using personas in the near future. Therefore, as of now, practically every one of us is applying persona thinking in some form or another, so it’s a good time to look at how to avoid some common mistakes and improve on the approaches you already have in place. 

Why do we have personas in the first place?

B2B marketing can be surprisingly complex. One peek into any company’s CRM system can be pretty overwhelming. In the presence of so much diverse information, personas provide a framework that can help us make better sense of all the data and take more useful action on it. Persona work helps drive efficiency because it focuses us on truly important groups according to their roles, responsibilities and more. Personas help drive effectiveness, especially in one-to-many executions, because they stimulate us to create messaging and content that is more relevant to buyers in our markets. 

Personas and ABM

At its core, ABM is about selecting a group of accounts based on a rigorous examination of your company’s potential revenue from them and then making a commitment to do better at pursuing a larger share of that potential. For marketers and sellers alike, this usually involves working hard at better understanding the companies on your ABM list and better addressing their needs – certainly with respect to how you communicate and interact with them and also in the solutions you propose. In ABM, you are necessarily working hard at becoming as relevant as you can possibly be. Your efforts are more targeted, more personalized, less based on models built for efficiency. And that’s exactly why as you progress in ABM, you’ll want to move beyond the limits of persona-based approaches. Here are five areas where opportunities exist to grow ABM program performance by moving beyond your basic persona framework.

5 ways to grow ABM performance by moving beyond your basic persona framework

Apply market insights to persona building

My company practices exclusively in the enterprise technology space. Since these markets are particularly dynamic, market insight is all the more critical to more effective marketing and selling, especially to an identified set of accounts.

If you are proposing a new use case for your technology, at a minimum, you’ll need to check whether or not your persona framework should be tweaked. If you’re impacting new processes, you could easily be impacting different roles from those you’ve traditionally targeted. You’ll need evolved messaging and you’ll need changes to your sales motions. These changes may be small, but if you neglect them up front, a lack of early momentum could cause significant problems for the success of your idea. To mitigate these risks, your marketers need to pay close attention to how prospects are thinking about solving their challenges within the categories you seek to break into. Do this by studying the conversations and information flows occurring around your target use cases. See what people are reading; what granular keyword strings they’re using; what connections they’re probing between what they already have interest in and that which you’re proposing they consider.

When you’re introducing an entirely new concept to the market, insight grows in importance – and because the topic is new, you’ll have to work that much harder to find it. No matter how brilliant your breakthrough, most people gravitate to the familiar. Markets are no different. In fact, there’s a good chance that your breakthrough is so new that no substantive, identifiable “market” exists in a “targetable” way. Likewise, a persona framework built on historical examples can’t get you very far on your path to high-performing ABM. To make progress here, you’ll have to double down on trying to understand facets of your potential market where momentum could be gained. Few people are searching for what you do, because they don’t understand it enough to see the connection to their needs. But they will be studying areas where you make a difference – so start to target and engage them there. Discover the new personas that should be interested in you by understanding the upstream and downstream areas impacted by the changes you’re bringing.

Evolve your ABM list by expanding ‘account’ personas

While we usually think of personas as applying to the roles and responsibilities of people, the same idea can be useful in describing and distinguishing between different companies in your total addressable market. If you’ve already done the good work of defining your ideal customer profile, you’ve created a form of persona framework – applied here instead to the companies you want to include for specialized ABM treatments. Yet as we’ve seen with personas and people, here as well, it’s easy to fall prey to too much rearview-mirror thinking – developing an ABM list based on successes of the past rather than what’s best going forward. Without change, this will reduce your ABM success.

Here’s an over-simplified example: In the enterprise technology industry, practically every solution provider wants to target companies in the top 1,000, because “that’s where the money is.” But if you’re marketing something new, there’s a good chance that you really don’t know where pockets of momentum are most likely to pop up. Instead of limiting yourself to an ABM list comprising the usual suspects, construct a process that allows valuable targets to be added to the program as they become more visible and understandable to you.

Beware of overweighting persona frameworks toward typical titles and targeting

Experienced business people have developed useful instincts. Over the years, they’ve honed their approaches and optimized their pitches. While it’s important to leverage institutional learning into the creation of a persona framework, always stay cognizant of the fact that the learning embedded in this insight reflects both historical and functional (role-based) preferences and biases. To put it simply, persona targeting inputs commonly reflect who was critical at the end of deal processes rather than when they started or who is actually using the solution on a daily basis. We commonly see persona frameworks that are overly weighted towards deal makers in the form of targeting list specifications. If most deals have to involve a business decision-maker (BDM “director or above”), the CFO, the CTO, et al., why not go after them and only them? I’m not arguing here that you shouldn’t develop relevant content for those personas. I’m saying that if you limit yourself to them, you’re putting the success of your own efforts at risk. While you need to influence senior people, targeting them directly is the hardest way in.

At best, the usual suspects are table stakes. If you’re marketing a new use for your tech or a whole new paradigm, it’s important (especially early on in the process) to make your case to the people more directly impacted by the benefits you bring. Instead of narrowing your targeting based on expert inputs, you should be broadening your target personas and expanding your list specifications. The goal is to enable your marketing to probe for interest that you don’t yet fully understand.

Always remember that while marketing and sales persona frameworks will certainly overlap, they’re rarely 100% identical. Marketing has the responsibility for positively influencing as many people as it can, whereas sales needs to marshal its resources to focus on those with the most pull. In an ABM program, this fundamental difference between our two assignments is often magnified. Marketing can and should be expected to engage a broader cross-section of roles if this would be beneficial to opening doors to new opportunity and strengthening existing footholds alike.

Understand that personas change. In fact, they might not even exist.

Here are two examples where classic persona thinking can limit marketers’ ability to make progress against their company’s business objectives:

  1. New intersections creating important net-new personas – Like in the field of medicine, as enterprise technology advances, expertise grows more and more specialized. From a persona targeting perspective, for a time, that seemed to make things a little bit easier for the marketer: if you sold a security product, you just targeted the security guys. Now, however, just as in medicine, organizations have realized that solutions targeting one type of issue can have important implications in other areas. To adapt, companies are emphasizing the broadening of skill within their technology teams. New titles are being created reflecting cross-pollination between areas. To be maximally effective, therefore, a vendor’s persona frameworks need to accommodate this new reality.
  2. Big, exciting ideas in search of fans – Companies get started because their founders see real possibilities where others notice little opportunity at all. Then they make progress by finding a few early-adopter advocates of the same idea. Things get more difficult, however, when they start to push up against the mainstream. While it would be great if they could simply project the personas of their early adopters on the market at large, this is rarely easily done because it’s still too early for either the market or the relevant roles to be clearly articulated. Take big ideas like the Internet of Things (IoT) or digital transformation, for example. It’s still too early yet to be able to put together a powerful persona framework. Instead, a marketer should be focused on educating markets broadly and evaluating engagement evidence towards establishing a pathway to repeatable, scalable success. Rather than trying to find personas that don’t yet exist at any useful scale, it’s out of investigating those pathways that newly arising useful personas will eventually become apparent.

Lead and opportunity management: Transition to real people

ABM has shined a spotlight on the continuing challenges most companies face that start with targeting and flow from lead management all the way down the pipeline. It’s become more and more obvious that on the one hand, companies are underinvesting in the potential of accounts on their ABM lists, and on the other, underperforming in capturing the demand they do actually pursue there. These observations intersect with persona thinking on at least three fronts:

  • The most obvious of these is similar to my list specification example: If all the potential buying centers in an account are not mapped into your CRM, then you have much less chance of influencing and engaging them. This is particularly obvious if you are looking to extend your use cases into new areas within existing accounts.
  • And once you’ve populated all the roles in CRM, you need to adjust your scoring, your MQL definitions and your lead tracking and follow-up processes so that upstream targeting changes are not undermined elsewhere in your process.  
  • Furthermore, as we’ve discussed, whenever you’re pushing into new areas, there’s an even greater need for new insight and learning. This is exactly where an evolution in your approach to opportunity management – like SiriusDecisions’ Demand Unit Waterfall concept – can deliver tremendous benefits.

When you look to promote new use cases or an entirely new concept, you can’t fairly claim to really understand how opportunities will appear in and move through your pipeline. If, as is true of most companies, you’re not capturing much information at all (or you’re not able to easily extract it) about the people involved in the selling interactions as they take shape, you won’t be able to analyze and learn as quickly as you should about areas of progress and points of failure. If you’re working on this kind of challenge, now is the time to think seriously about first proactively populating your opportunities with prospective personas and, going forward, updating these with the real people who you are discovering and interacting with. The sooner you can introduce some form of this concept, the faster you will be able to capitalize on new insight as you generate it. At a minimum, this will help you understand your progress and challenges. Going forward, it will help grow account penetration, accelerate product ramp times, and optimize investments up and down the customer lifecycle management continuum.


About The Author

John Steinert is the CMO of TechTarget, where he helps bring the power of purchase intent-driven marketing and sales services to technology companies. Having spent most of his career in B2B and tech, John has earned a notable reputation by helping build business for global leaders like Dell, IBM, Pitney Bowes and SAP – as well as for fast-growth, emerging players. He’s passionate about quality content, continuously improving processes and driving meaningful business results.



How to Boost Content Linkability Without Wasting Your Marketing Budget



I’m always fascinated by the marketing budgets of enterprise-level companies that are ready to pay astronomical sums to contractors. A recent stir in the community was caused by Hertz, which paid $32M to the agency Accenture, which (so far) hasn’t resulted in any substantial changes to their site.

Though I personally don’t work with clients who throw around millions of dollars, that doesn’t affect the quality of the services I provide. My average client wants to get the maximum while spending as little as possible. It might sound like a tough job, and indeed it is, but I love the challenges a small budget brings: it keeps me creative and pushes me to new professional heights.

So while the budget isn’t a challenge, changing my clients’ mindset is, and that’s because all of my clients are victims of one of the biggest misconceptions about content marketing: They think that once they start publishing content pieces regularly, inbound traffic will hit their site like a meteorite.

And it’s not just the traffic — links are subject to a similar misconception. Each time I share studies like the one by Brian Dean that clearly show links don’t come on their own, there’s always someone who says: “That’s because their content’s just not good enough.” I hear the same thing on calls with clients who ask for quality content with zero focus on links.

The bottom line is, traffic and links don’t just show up out of thin air. Regardless of how good your content is, how well structured and valuable it may seem, it has nearly zero chances of getting attention in today’s overcrowded digital space.

In this post, I want to share with you five bulletproof tactics that help me boost content linkability without having a big fat budget to waste.

A note on content and modern-day link building

Before we dive into the best ways to boost your content without breaking the bank, it’s important to touch on what link-building is today. Links are a digital marketing currency — which you need to earn and spend wisely. And to earn them, you need to build relationships. 

A while ago, I noticed a shift in a client’s mindset: After a few projects delivered together, they started to ask for in-depth forms of content like how-to’s, case studies, and guides — which (according to Brian’s research) is exactly the type of content that has the highest chances of getting links. But that’s not necessarily the number one reason why people allocate links.

Links are inherently relationships. And if you agree that linking to a strategic partner brings more benefits than referring to a random stranger, then you’ll appreciate Robbie Richards’ methods.

Robbie’s roundups are a textbook definition of highly linkable content. A post about the best keyword research tools published not that long ago on his blog attracted nearly 300 referring domains and a decent organic traffic share:

What’s his secret?

Robbie made sure to target the experts within his business circle. In a nutshell, his roundup posts work as part of a well-delivered outreach strategy that has a strong focus on gaining links by leveraging existing relationships. This is the key to modern-day link-building — a combination of content, links, and partnerships. 

Without further ado, let’s talk about the best ways to promote content without any where-do-I-get-the-money-for-it drama.

5 bulletproof ways to blow up your content without breaking the bank

If you’re creating quality content with zero focus on links, you won’t be getting optimal traffic. The only way to make your content stand out is to focus on its potential linkability before you actually start writing it. Here are some of my favorite ways to get your content seen.

1. Adding expert quotes

Quoting an expert is one of my favorite ways to boost content linkability and shareability. It’s quick, easy, and doesn’t require a significant time investment. When you write outside your area of expertise, adding a quote from a thought leader lends your content credibility and value, not to mention boosting its linking potential.

Depending on how influential your company is, you can either select an existing quote or reach out to the experts and ask for a new one.

Here’s a tip: If you decide to go with a pre-existing quote, contact the expert in advance to confirm it. This way, you can make sure they still stand by that opinion and are okay with you quoting them.

Remember, while quoting experts is a good idea, you also need to find the right expert and the right quote. Here’s how to do that:

  • If your brand has a big audience, I recommend starting by checking your current followers and subscribers across various channels, including social media. You might not know it, but there’s a good chance you’ll find real influencers among people who follow your brand’s pages. To speed up the process of spotting influencers among your Twitter followers, you can use Followerwonk. This tool allows you to export all your followers to a list and sort them by the size of their audience.
  • Another way is to analyze the websites that link back to your site. To do that, you can use Moz Link Explorer that will show the list of URLs that are referring to your site. Chances are, some of those authors are pretty influential in their niche.
  • Finally, you could use BuzzSumo to find relevant influencers to contact. For example, you could export a list of bloggers who are contributing to the industry-leading blogs.

    The last option is less suitable for link building purposes, as the influencers you find have no idea your business exists and are harder to get on board. However, it’s not impossible. Before getting in touch, make sure to scratch their backs: Share their content on your social media, sign up for their newsletter, etc. To find an influencer’s most recent pieces, search BuzzSumo Content Analyzer for “Author: [INSERT NAME].” This helps build a bridge and create the right first impression.

    Don’t forget that expert quotes need to be set within your content with special formatting, which means you may need to involve a designer or developer.

    Here are a few examples that I personally find quite visually appealing:


    2. Strategically linking back to blogs that you’re interested in

    Strategic link building is a bit like playing poker blindfolded. A strategic approach always pays off in the long run in almost any area, but when applied to link building, everything depends on how well you can spot linking opportunities. Based on that, your chances of acquiring links are either very high or very low.

    If you want industry leaders to link back to your content someday, you have to prove that your content deserves their attention. The best way to get your foot in the door is to link back to them.

    You need to find the right experts to link back to. How do you do that?

    The mechanics behind finding the right sites to refer to are similar to those I shared in the section about expert quotes. However, there are a few more strategies that I want to add:

    • Are you part of any industry groups on Facebook or LinkedIn? If so, check the members of those groups and find people who are also involved in link building. Now you have a legitimate reason to contact them (since you’re both part of the same group) and ask whether they’re interested in getting a link in your upcoming post. Please note that you shouldn’t skip this step: it makes them aware that you expect the favor to be returned.
    • Have you ever participated in any roundups? If yes, then reach out to the experts that were also featured in this post.
    • Finally, check your current blog subscribers, clients, and partners. The chances that they’re also interested in partnering up on the link building side are quite high.

    3. Adding good images/GIFs and hiring a designer for professional-looking visuals

    In 2019, using stock images in your content is a big no-no. After all, they are easily recognizable for their generic nature and give away the fact that the author didn’t invest much in custom visuals.

    However, there is a way to adapt stock images to your unique brand style and still make them work. And you don’t even need to hire a designer right away.

    Drag-and-drop tools like Venngage, Canva, or Visme make it easy to create pretty nice graphics. For example, Canva has a lot of great grids and predefined templates, which makes the whole design process really fast.

    What you need to do is take a good-looking cover image, like the ones we use on our blog, and liven it up with custom-made designs in Canva. You can add your own picture, your brand’s logo, or anything else your heart desires. This approach allows us to maintain our own unique style while staying within budget.

    Static images are not the only way to pretty up your content. One of my favorite visual elements is GIFs. They are perfect for visualizing step-by-steps and how-tos and can easily demonstrate how to perform something in a digital tool. You can even use them to tell a story. At one of my recent presentations, I used a GIF to explain why simply posting on Twitter is not enough to get attention to a brand.

    I’ve seen many posts acquire loads of links and social shares thanks to good graphics, for instance, this post that featured SEO experts in Halloween costumes.

    Without a doubt, this requires a bit of budget, but I’d say it’s 100 percent worth it because it creates value. The last time our company did something like this for a client, we hired a designer who charged $30 per image. That’s not bad, since custom-made images make it much easier to pitch your posts to other blogs and get more links!

    Hint: When you’re looking for custom graphics that won’t make your wallet cry, you can always find freelancers on sites like Upwork or on freelancing Facebook groups.

    4. Delivering email outreach by targeting the low-hanging fruit

    We’ve done a lot of email outreach campaigns here at Digital Olympus, and so, I’ve noticed that we have a fast turnaround rate when our outreach targets are in the “right state of mind,” meaning they’re interested in cooperating with us.

     There are many reasons why they might show interest. For example, perhaps they’ve recently published a piece and are now invested in promoting it. To spot content marketers and authors like these, you can use Pitchbox. Pitchbox lets you create a list of posts that were published within the last 24 hours based on the keywords of your choice.

    The biggest bonus of Pitchbox is that it not only pulls together a list of content pages but it also provides contact details. In addition to this, Pitchbox automates the whole outreach process.

    Another tool that can pull together a list of posts published within the last 24 hours is Buzzsumo. Here’s a great piece by Sujan Patel that shows how to deliver outreach the right way.

    There’s plenty of speculation about which email outreach techniques work and which don’t, but the truth remains: It’s a very hard, time-consuming job that requires lots of skill and practice. In one of my recent posts, I write about proven email outreach techniques and how to master them.

    5. Adding stats that don’t involve a huge time investment

    You’ve heard that a picture is worth a thousand words. How about this: A number knocks out ten thousand words. By adding statistics to your piece, you cut out the whole process of having to send readers to another page for the numbers.

    But fresh, relevant stats don’t grow on trees. You need to know where you can find them.

    The easiest and most cost-efficient way of adding numbers to your piece is by running Twitter polls. They can collect up to 1,000 responses for only $100 of properly targeted paid promotion. The biggest plus of running polls on Twitter is that you can create a specific list of people (aka a tailored audience) who will see your ad. For a detailed explanation of how to work with tailored audiences, I recommend checking this post.

    Besides running Twitter polls, you can use survey tools that will help you collect answers for a fee:

    • Survata will show your survey across their online publisher network, with the average cost per answer starting at $1;
    • SurveyMonkey’s market research module starts at $1.25 for 200 complete responses. As you can see from the screenshot below, it allows you to set up a more laser-targeted group by selecting a particular industry.

    Another quick hack that I use from time to time is comparing already existing data sets to reveal new insights. Statista is a great site for getting data on any topic. For instance, on one graph you can show the revenue growth on the major SMM platforms as well as the growth of their audience. Plus, don’t forget that while the numbers are good, the story is key. Statistics tend to be dry without a proper story that they are wrapped in. For inspiration, you can use this great post that shares many stories that were built on numbers.

    It doesn’t always have to be serious. Numbers draw more attention than written copy, so you can create a fun poll, for example, whether your followers are more into dogs or cats.

    Creating captivating content is hard work and often costs a whole lot of money, but there are ways to spare a few bucks here and there. By utilizing the strategies I’ve shared, you can make sure that your content gets the audience it needs without wasted time, huge costs, and stress. The amount of backend work you put into research and promotion is what makes your audience not only scroll through your content but actually read it. This is what will differentiate your piece from millions of similar ones.

    Create a strategy and go for it! Whether it’s polling, graphics, emails, quotes, or backlinks, make a game plan that will promote your content the right way. Then your site will rock.

    Do you have any other tips or suggestions? Tell me below in the comments!



Spying On Google: 5 Ways to Use Log File Analysis To Reveal Invaluable SEO Insights



Log File Analysis should be a part of every SEO pro’s tool belt, but most SEOs have never conducted one. That means most SEOs are missing out on unique and invaluable insights that regular crawling tools just can’t produce.

Let’s demystify Log File Analysis so it’s not so intimidating. If you’re interested in the wonderful world of log files and what they can bring to your site audits, this guide is definitely for you. 

What are Log Files?

Log Files are files containing detailed logs of who and what is making requests to your website server. Every time a bot makes a request to your site, data (such as the time, date, IP address, user agent, etc.) is stored in this log. This valuable data allows any SEO to find out what Googlebot and other crawlers are doing on your site. Unlike regular crawls, such as with the Screaming Frog SEO Spider, this is real-world data — not an estimation of how your site is being crawled. It is an exact overview of how your site is being crawled.

Having this accurate data can help you identify areas of crawl budget waste, easily find access errors, understand how your SEO efforts are affecting crawling and much, much more. The best part is that, in most cases, you can do this with simple spreadsheet software. 

In this guide, we will be focusing on Excel to perform Log File Analysis, but I’ll also discuss other tools such as Screaming Frog’s less well-known Log File Analyser, which can make the job a bit easier and faster by helping you manage larger data sets.

Note: owning any software other than Excel is not a requirement to follow this guide or get your hands dirty with Log Files.

How to Open Log Files

Rename .log to .csv

When you get a log file with a .log extension, it is really as easy as renaming the file extension to .csv and opening the file in spreadsheet software. Remember to set your operating system to show file extensions if you want to edit these.

How to open split log files

Log files can come in either one big log or multiple files, depending on the server configuration of your site. Some servers will use server load balancing to distribute traffic across a pool or farm of servers, causing log files to be split up. The good news is that it’s really easy to combine, and you can use one of these three methods to combine them and then open them as normal:

  1. Use the command line in Windows by Shift + right-clicking in the folder containing your log files and selecting “Run Powershell from here”

Then run the following command (note that in PowerShell the classic Command Prompt trick of copying *.log into one file doesn’t concatenate, so pipe Get-Content instead):

Get-Content *.log | Set-Content mylogfiles.csv

You can now open mylogfiles.csv and it will contain all your log data.

Or if you are a Mac user, first use the cd command to go to the directory of your log files:

cd Documents/MyLogFiles/

Then, use the cat or concatenate command to join up your files:

cat *.log > mylogfiles.csv

2) Using the free tool Log File Merge, combine all the log files, then change the file extension to .csv and open as normal.

3) Open the log files with the Screaming Frog Log File Analyser, which is as simple as dragging and dropping the log files:

Splitting Strings

(Please note: This step isn’t required if you are using Screaming Frog’s Log File Analyser)

Once you have your log file open, you’re going to need to split the cumbersome text in each cell into columns for easier sorting later.

Excel’s Text to Columns function comes in handy here. It’s as easy as selecting all the filled cells (Ctrl / Cmd + A), going to Excel > Data > Text to Columns, selecting the “Delimited” option, and setting the delimiter to a Space character.

Once you’ve separated this out, you may also want to sort by time and date — you can do so in the Time and Date stamp column, commonly separating the data with the “:” colon delimiter.

Your file should look similar to the one below:

As mentioned before, don’t worry if your log file doesn’t look exactly the same — different log files have different formats. As long as you have the basic data there (time and date, URL, user-agent, etc.) you’re good to go!

Understanding Log Files

Now that your log files are ready for analysis, we can dive in and start to understand our data. There are many formats that log files can take with multiple different data points, but they generally include the following:

  1. Server IP
  2. Date and time
  3. Server request method (e.g. GET / POST)
  4. Requested URL
  5. HTTP status code
  6. User-agent

More details on the common formats can be found below if you’re interested in the nitty gritty details:

  • W3C
  • Apache and NGINX
  • Amazon Elastic Load Balancing
  • HA Proxy
  • JSON
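Whatever the format, the parsing step can also be scripted. Here’s a minimal Python sketch that pulls those six kinds of fields out of a line in the common Apache/NGINX “combined” format — the regex and the sample line are illustrative assumptions, so adjust the pattern to whatever format your server actually writes:

```python
import re

# Regex for the Apache/NGINX "combined" log format (an assumption for
# illustration; adapt it to your server's log format).
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<datetime>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<user_agent>[^"]*)"'
)

def parse_line(line):
    """Return a dict of fields for one log line, or None if it doesn't match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

# A made-up sample line for demonstration.
sample = ('66.249.66.1 - - [01/Jul/2019:06:25:14 +0000] '
          '"GET /blog/page/ HTTP/1.1" 200 5123 "-" '
          '"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"')

fields = parse_line(sample)
print(fields["url"], fields["status"])  # → /blog/page/ 200
```

Once each line is a dict like this, every analysis below becomes a filter or a count over a list of dicts.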

How to quickly reveal crawl budget waste

As a quick recap, Crawl Budget is the number of pages a search engine crawls upon every visit of your site. Numerous factors affect crawl budget, including link equity or domain authority, site speed, and more. With Log File Analysis, we will be able to see what sort of crawl budget your website has and where there are problems causing crawl budget to be wasted. 

Ideally, we want to give crawlers the most efficient crawling experience possible. Crawling shouldn’t be wasted on low-value pages and URLs, and priority pages (product pages, for example) shouldn’t suffer slower indexation and crawl rates because a website has so many dead-weight pages. The name of the game is crawl budget conservation, and with good crawl budget conservation comes better organic search performance.

See crawled URLs by user agent

Seeing how frequently URLs of the site are being crawled can quickly reveal where search engines are putting their time into crawling.

If you’re interested in seeing the behavior of a single user agent, this is as easy as filtering the relevant column in Excel. In this case, with a W3C format log file, I’m filtering the cs(User-Agent) column by Googlebot:

And then filtering the URI column to show the number of times Googlebot crawled the home page of this example site:

This is a fast way of seeing if there are any problem areas by URI stem for a singular user-agent. You can take this a step further by looking at the filtering options for the URI stem column, which in this case is cs-uri-stem:

From this basic menu, we can see what URLs, including resource files, are being crawled to quickly identify any problem URLs (parameterized URLs that shouldn’t be being crawled for example).

You can also do broader analyses with Pivot tables. To get the number of times a particular user agent has crawled a specific URL, select the whole table (Ctrl/cmd + A), go to Insert > Pivot Table and then use the following options:

All we’re doing is filtering by User Agent, with the URL stems as rows, and then counting the number of times each User-agent occurs.

With my example log file, I got the following:

Then, to filter by specific User-Agent, I clicked the drop-down icon on the cell containing “(All),” and selected Googlebot:

Understanding what different bots are crawling, how mobile bots are crawling differently to desktop, and where the most crawling is occurring can help you see immediately where there is crawl budget waste and what areas of the site need improvement.
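If you’d rather script this than pivot in Excel, the same count can be sketched in a few lines of Python with collections.Counter. The rows below are hypothetical stand-ins for your own parsed log data:

```python
from collections import Counter

# Hypothetical parsed log rows: (user_agent, url) pairs.
rows = [
    ("Googlebot", "/"),
    ("Googlebot", "/"),
    ("Googlebot", "/products/"),
    ("bingbot", "/"),
]

# Count crawl events per (user agent, URL) -- the same result as the pivot table.
crawl_counts = Counter(rows)

# Filter to a single user agent, sorted most-crawled first.
googlebot = sorted(
    ((url, n) for (ua, url), n in crawl_counts.items() if ua == "Googlebot"),
    key=lambda pair: pair[1],
    reverse=True,
)
print(googlebot)  # → [('/', 2), ('/products/', 1)]
```

Swapping the filter value lets you compare mobile and desktop bots the same way.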

Find low-value add URLs

Crawl budget should not be wasted on Low value-add URLs, which are normally caused by session IDs, infinite crawl spaces, and faceted navigation.

To find them, go back to your log file and filter for URLs that contain a “?” (question mark) in the URL column (the one containing the URL stem). To do this in Excel, remember to use “~?” (a tilde followed by a question mark), as shown below:

A single “?”, as stated in the AutoFilter window, represents any single character, so adding the tilde acts as an escape character and makes sure Excel filters on the literal question mark symbol itself.

Isn’t that easy?
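The same filter is trivial to script. A small Python sketch with made-up URLs — note that, unlike Excel’s AutoFilter, a plain substring check needs no tilde escape:

```python
# Hypothetical crawled URL stems from a log file.
urls = [
    "/products/",
    "/products/?sessionid=abc123",
    "/search?q=shoes&page=2",
    "/blog/post-1/",
]

# Literal "?" match -- no escaping needed here, unlike Excel's "~?".
parameterized = [url for url in urls if "?" in url]
print(parameterized)  # → ['/products/?sessionid=abc123', '/search?q=shoes&page=2']
```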

Find duplicate URLs

Duplicate URLs can be a crawl budget waste and a big SEO issue, but finding them can be a pain. URLs can sometimes have slight variants (such as a trailing slash vs a non-trailing slash version of a URL).

Ultimately, the best way to find duplicate URLs is also the least fun way to do so — you have to sort by site URL stem alphabetically and manually eyeball it.

One way you can find trailing and non-trailing slash versions of the same URL is to use the SUBSTITUTE function in another column and use it to remove all forward slashes:

=SUBSTITUTE(C2, "/", "")

In my case, the target cell is C2 as the stem data is on the third column.

Then, use conditional formatting to identify duplicate values and highlight them.

However, eyeballing is, unfortunately, the best method for now.
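If eyeballing thousands of rows isn’t appealing, the SUBSTITUTE trick translates directly to a script. This Python sketch (with invented URLs) groups trailing and non-trailing slash variants together; note that stripping every slash, as the spreadsheet formula does, can also collapse genuinely different paths, so treat matches as candidates to review:

```python
from collections import defaultdict

# Invented URL stems pulled from a log file.
urls = ["/about", "/about/", "/contact/", "/blog", "/blog/"]

# Group URLs by their slash-stripped form, mirroring the SUBSTITUTE trick.
groups = defaultdict(list)
for url in urls:
    groups[url.replace("/", "")].append(url)

# Any group with more than one member is a candidate duplicate set.
duplicates = [variants for variants in groups.values() if len(variants) > 1]
print(duplicates)  # → [['/about', '/about/'], ['/blog', '/blog/']]
```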

See the crawl frequency of subdirectories

Finding out which subdirectories are getting crawled the most is another quick way to reveal crawl budget waste. Keep in mind, though: just because a client’s blog has never earned a single backlink and only gets three views a year from the business owner’s grandma doesn’t mean you should consider it crawl budget waste. Internal linking structure should be consistently good throughout the site, and there might be a strong reason for that content from the client’s perspective.

To find out crawl frequency by subdirectory level, you will need to mostly eyeball it but the following formula can help:

=IF(RIGHT(C2,1)="/",SUM(LEN(C2)-LEN(SUBSTITUTE(C2,"/","")))/LEN("/")+SUM(LEN(C2)-LEN(SUBSTITUTE(C2,"=","")))/LEN("=")-2, SUM(LEN(C2)-LEN(SUBSTITUTE(C2,"/","")))/LEN("/")+SUM(LEN(C2)-LEN(SUBSTITUTE(C2,"=","")))/LEN("=")-1) 

The above formula looks like a bit of a doozy, but all it does is check whether there is a trailing slash and, depending on the answer, count the number of slashes and subtract either 2 or 1. The formula could be shortened if you removed all trailing slashes from your URL list using the RIGHT formula — but who has the time. What you’re left with is the subdirectory count (starting from 0 as the first subdirectory).

Replace C2 with the first URL stem / URL cell and then copy the formula down your entire list to get it working.

Make sure you replace all of the C2s with the appropriate starting cell and then sort the new subdirectory counting column by smallest to largest to get a good list of folders in a logical order, or easily filter by subdirectory level. For example, as shown in the below screenshots:

The above image is subdirectories sorted by level.

The above image is subdirectories sorted by depth.

If you’re not dealing with a lot of URLs, you could simply sort the URLs by alphabetical order but then you won’t get the subdirectory count filtering which can be a lot faster for larger sites.
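For larger sites, the depth formula above is easier to express in code. Here is a Python sketch of the same calculation, using made-up URL stems; as in the spreadsheet version, the first subdirectory counts as level 0:

```python
def subdirectory_depth(stem):
    """Subdirectory depth of a URL stem, counting the first folder as level 0
    (mirrors the spreadsheet formula, including its '=' handling)."""
    separators = stem.count("/") + stem.count("=")
    return separators - 2 if stem.endswith("/") else separators - 1

# Made-up URL stems for illustration.
for stem in ["/blog/", "/blog/post-1/", "/blog/2019/post-1"]:
    print(stem, subdirectory_depth(stem))  # depths 0, 1, 2 respectively
```

Sorting your rows by this value gives the same folder-level view as the spreadsheet column.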

See crawl frequency by content type

Finding out what content is getting crawled, or if there are any content types that are hogging crawl budget, is a great check to spot crawl budget waste. Frequent crawling on unnecessary or low priority CSS and JS files, or how crawling is occurring on images if you are trying to optimize for image search, can easily be spotted with this tactic.

In Excel, seeing crawl frequency by content type is as easy as filtering by URL or URI stem using the Ends With filtering option.

Quick Tip: You can also use the “Does Not End With” filter and use a .html extension to see how non-HTML page files are being crawled — always worth checking in case of crawl budget waste on unnecessary js or css files, or even images and image variations (looking at you WordPress). Also, remember if you have a site with trailing and non-trailing slash URLs to take that into account with the “or” operator with filtering.
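Scripted, the content type breakdown is a one-liner plus a helper. This Python sketch (with hypothetical URL stems) counts crawl events by file extension:

```python
from collections import Counter
from urllib.parse import urlsplit
import posixpath

# Hypothetical crawled URL stems.
urls = ["/style.css", "/app.js", "/logo.png", "/index.html", "/about/", "/hero.png"]

def extension(url):
    """File extension of a URL path, or '(page)' when there is none."""
    ext = posixpath.splitext(urlsplit(url).path)[1]
    return ext or "(page)"

# Crawl events per content type, most frequent first.
by_type = Counter(extension(u) for u in urls)
print(by_type.most_common())
```

A large share of .js, .css, or image hits here is the same signal the “Does Not End With .html” filter surfaces in Excel.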

Spying on bots: Understand site crawl behavior

Log File Analysis allows us to understand how bots behave by giving us an idea of how they prioritize. How do different bots behave in different situations? With this knowledge, you can not only deepen your understanding of SEO and crawling, but also give you a huge leap in understanding the effectiveness of your site architecture.

See most and least crawled URLs

This strategy has been touched on previously with seeing crawled URLs by user-agent, but this way is even faster.

In Excel, select a cell in your table and then click Insert > Pivot Table, make sure the selection contains the necessary columns (in this case, the URL or URI stem and the user-agent) and click OK.

Once you have your pivot table created, set the rows to the URL or URI stem, and the values to a count of the user-agent.

From there, you can right-click in the user-agent column and sort the URLs from largest to smallest by crawl count:

Now you’ll have a great table to make charts from or quickly review and look for any problematic areas:

A question to ask yourself when reviewing this data is: Are the pages you or the client would want crawled actually being crawled? How often? Frequent crawling doesn’t necessarily mean better results, but it can be an indication of what Google and other user-agents prioritize most.

Crawl frequency per day, week, or month

Checking crawling activity when there has been a loss of visibility around a period of time, after a Google update, or in an emergency can inform you where the problem might be. This is as simple as selecting the “date” column, making sure the column is in the “date” format type, and then using the date filtering options on that column. If you’re looking to analyze a whole week, just select the corresponding days with the filtering options available.

Crawl frequency by directive

Understanding what directives are being followed (for instance, if you are using a disallow or even a no-index directive in robots.txt) by Google is essential to any SEO audit or campaign. If a site is using disallows with faceted navigation URLs, for example, you’ll want to make sure these are being obeyed. If they aren’t, recommend a better solution such as on-page directives like meta robots tags.

To see crawl frequency by directive, you’ll need to combine a crawl report with your log file analysis.

(Warning: We’re going to be using VLOOKUP, but it’s really not as complicated as people make it out to be)

To get the combined data, do the following:

  1. Get the crawl from your site using your favorite crawling software. I might be biased, but I’m a big fan of the Screaming Frog SEO Spider, so I’m going to use that.

    If you’re also using the spider, follow the steps verbatim, but otherwise, make your own call to get the same results.

  2. Export the Internal HTML report from the SEO Spider (Internal Tab > “Filter: HTML”) and open up the “internal_all.xlsx” file.

    From there, you can filter the “Indexability Status” column and remove all blank cells. To do this, use the “does not contain” filter and just leave it blank. You can also add the “and” operator and filter out redirected URLs by making the filter value equal “does not contain → “Redirected” as shown below:

    This will show you URLs that are canonicalized or noindexed by meta robots.

  3. Copy this new table out (with just the Address and Indexability Status columns) and paste it in another sheet of your log file analysis export.
  4. Now for some VLOOKUP magic. First, we need to make sure the URI or URL column data is in the same format as the crawl data.

    Log Files don’t generally have the root domain or protocol in the URL, so we either need to remove the head of the URL using “Find and Replace” in our newly made sheet, or make a new column in your log file analysis sheet that appends the protocol and root domain to the URI stem. I prefer this method because then you can quickly copy and paste a URL that you are seeing problems with and take a look. However, if you have a massive log file, it is probably a lot less CPU intensive with the “Find and Replace” method.

    To get your full URLs, use the following formula, with the domain changed to whatever site you are analyzing (and make sure the protocol is correct as well). You’ll also want to change D2 to the first cell of your URI column:

    ="https://www.example.com"&D2

    Drag the formula down to the end of your log file table to get a nice list of full URLs:

  5. Now, create another column and call it “Indexability Status”. In the first cell, use a VLOOKUP similar to the following: =VLOOKUP(E2,CrawlSheet!A$1:B$1128,2,FALSE). Replace E2 with the first cell of your “Full URL” column, then make the lookup table reference your new crawl sheet. Remember to use the dollar signs so that the lookup table doesn’t shift as you apply the formula to further rows. Then, select the correct column (1 would be the first column of the lookup table, so number 2 is the one we are after). Use the FALSE range-lookup mode for exact matching. Now you have a nice, tidy list of URLs and their indexability status matched with crawl data:
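If Excel chokes on a large log, the same exact-match join can be sketched in a few lines of Python. The URLs and statuses below are hypothetical stand-ins for your crawl export and log file; a dict lookup plays the role of VLOOKUP’s FALSE (exact-match) mode, with a default where VLOOKUP would return #N/A.

```python
# Hypothetical crawl export: URL -> Indexability Status.
crawl = {
    "https://www.example.com/page-a": "Canonicalised",
    "https://www.example.com/page-b": "Noindex",
}

# Hypothetical full URLs built from the log file's URI stems.
log_urls = [
    "https://www.example.com/page-a",
    "https://www.example.com/page-b",
    "https://www.example.com/page-c",
]

# Equivalent of =VLOOKUP(E2, CrawlSheet!A:B, 2, FALSE): exact match,
# defaulting to "Indexable" where the crawl had a blank status.
statuses = {url: crawl.get(url, "Indexable") for url in log_urls}
print(statuses["https://www.example.com/page-b"])  # -> "Noindex"
```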

    Crawl frequency by depth and internal links

    This analysis allows us to see how a site’s architecture is performing in terms of crawl budget and crawlability. The main aim is to see whether you have far more URLs than you do requests — if you do, you have a problem. Bots shouldn’t be “giving up” before crawling your entire site, missing important content, or wasting crawl budget on content that isn’t important.

    Tip: It is also worth using a crawl visualization tool alongside this analysis to see the overall architecture of the site and see where there are “off-shoots” or pages with poor internal linking.

    To get this all-important data, do the following:

    1. Crawl your site with your preferred crawling tool and export whichever report has both the click depth and number of internal links with each URL.

      In my case, I’m using the Screaming Frog SEO Spider and exporting the Internal report:

    2. Use a VLOOKUP to match your URL with the Crawl Depth column and the number of Inlinks, which will give you something like this:
    3. Depending on the type of data you want to see, you might want to filter out only URLs returning a 200 response code at this point or make them filterable options in the pivot table we create later. If you’re checking an e-commerce site, you might want to focus solely on product URLs, or if you’re optimizing crawling of images you can filter out by file type by filtering the URI column of your log file using the “Content-Type” column of your crawl export and making an option to filter with a pivot table. As with all of these checks, you have plenty of options!
    4. Using a pivot table, you can now analyze crawl rate by crawl depth (filtering by the particular bot in this case) with the following options:

    To get something like the following:
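The pivot-table step can also be sketched in Python. The rows below assume you’ve already matched each log hit to its URL’s crawl depth (the VLOOKUP step above); the URLs and depths are made up for illustration.

```python
from collections import Counter

# Hypothetical log hits, each already matched to its crawl depth.
hits = [
    {"url": "/", "depth": 0},
    {"url": "/category/", "depth": 1},
    {"url": "/category/", "depth": 1},
    {"url": "/category/product", "depth": 2},
]

# Equivalent of the pivot table: number of bot requests per crawl depth.
crawls_by_depth = Counter(hit["depth"] for hit in hits)
for depth in sorted(crawls_by_depth):
    print(f"depth {depth}: {crawls_by_depth[depth]} requests")
```

A healthy site usually shows request counts tapering gradually with depth; a sharp cliff suggests bots are giving up before reaching deeper content.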

    Better data than Search Console? Identifying crawl issues

    Search Console might be a go-to for every SEO, but it certainly has flaws. Historical data is harder to get, and there are limits on the number of rows you can view (at the time of writing, 1,000). But with log file analysis, the sky’s the limit. With the following checks, we’re going to discover crawl and response errors to give your site a full health check.

    Discover Crawl Errors

    An obvious and quick check to add to your arsenal, all you have to do is filter the status column of your log file (in my case “sc-status” with a W3C log file type) for 4xx and 5xx errors:
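The same filter is one line of Python over parsed log rows. The rows below are hypothetical, but the column names follow the W3C format mentioned above.

```python
# Hypothetical parsed log rows (W3C-style column names).
rows = [
    {"cs-uri-stem": "/blog/post-1", "sc-status": "200"},
    {"cs-uri-stem": "/old-page", "sc-status": "404"},
    {"cs-uri-stem": "/checkout", "sc-status": "500"},
]

# Keep only client (4xx) and server (5xx) errors.
errors = [row for row in rows if row["sc-status"].startswith(("4", "5"))]
for row in errors:
    print(row["sc-status"], row["cs-uri-stem"])
```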

    Find inconsistent server responses

    A particular URL may have varying server responses over time. This can either be normal behavior, such as when a broken link has been fixed, or a sign of a serious server issue, such as when heavy traffic to your site causes a lot more internal server errors and affects your site’s crawlability.

    Analyzing server responses is as easy as filtering by URL and by Date:

    Alternatively, if you want to quickly see how a URL is varying in response code, you can use a pivot table with the rows set to the URL, the columns set to the response codes, and the values counting the number of times each URL has produced each response code. To achieve this setup, create a pivot table with the following settings:

    This will produce the following:

    In the table above, you can clearly see that “/inconcistent.html” (highlighted in the red box) has varying response codes.
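That URL-by-status-code pivot is a nested tally in Python. The (URL, status) pairs below are invented to mirror the example in the screenshot.

```python
from collections import Counter, defaultdict

# Hypothetical (URL, status) pairs pulled from the log file.
hits = [
    ("/stable.html", "200"),
    ("/flaky.html", "200"),
    ("/flaky.html", "500"),
    ("/flaky.html", "200"),
]

# Equivalent of the pivot table: rows are URLs, columns are response codes,
# values are how many times each URL returned each code.
table = defaultdict(Counter)
for url, status in hits:
    table[url][status] += 1

for url, counts in sorted(table.items()):
    print(url, dict(counts))
```

A URL whose row has counts under more than one status code is the inconsistent responder you’re hunting for.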

    View Errors by Subdirectory

    To find which subdirectories are producing the most problems, we just need to do some simple URL filtering. Filter out the URI column (in my case “cs-uri-stem”) and use the “contains” filtering option to select a particular subdirectory and any pages within that subdirectory (with the wildcard *):

    For me, I checked out the blog subdirectory, and this produced the following:

    View Errors by User Agent

    Finding which bots are struggling can be useful for numerous reasons including seeing the differences in website performance for mobile and desktop bots, or which search engines are best able to crawl more of your site.

    You might want to see which particular URLs are causing issues with a particular bot. The easiest way to do this is with a pivot table that allows for filtering the number of times a particular response code occurs per URI. To achieve this make a pivot table with the following settings:

    From there, you can filter by your chosen bot and response code type, such as image below, where I’m filtering for Googlebot desktop to seek out 404 errors:

    Alternatively, you can also use a pivot table to see how many times a specific bot produces different response codes as a whole by creating a pivot table that filters by bot, counts by URI occurrence, and uses response codes as rows. To achieve this use the settings below:

    For example, in the pivot table (below), I’m looking at how many of each response code Googlebot is receiving:

    Diagnose on-page problems 

    Websites need to be designed not just for humans, but for bots. Pages shouldn’t be slow loading or be a huge download, and with log file analysis, you can see both of these metrics per URL from a bot’s perspective.

    Find slow & large pages

    While you can sort your log file by the “time taken” or “loading time” column from largest to smallest to find the slowest-loading pages, it’s better to look at the average load time per URL, since factors other than the page’s actual speed could have contributed to any single slow request.

    To do this, create a pivot table with the rows set to the URI stem or URL and the summed value set to the time taken to load or load time:

    Then click the drop-down arrow on the value field (in this case, where it says “Sum of time-taken”) and go to “Value Field Settings”:

    In the new window, select “Average” and you’re all set:

    Now you should have something similar to the following when you sort the URI stems by largest to smallest and average time taken:
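The average-per-URL pivot translates to a group-by in Python. The rows and timings below are hypothetical; in a W3C log, “time-taken” is typically in milliseconds.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical log rows; "time-taken" is in milliseconds in a W3C log.
rows = [
    {"cs-uri-stem": "/fast", "time-taken": 40},
    {"cs-uri-stem": "/slow", "time-taken": 900},
    {"cs-uri-stem": "/slow", "time-taken": 1100},
]

# Group the timings by URL.
by_url = defaultdict(list)
for row in rows:
    by_url[row["cs-uri-stem"]].append(row["time-taken"])

# Average (not summed) load time per URL, slowest first.
avg_time = {url: mean(times) for url, times in by_url.items()}
slowest = sorted(avg_time.items(), key=lambda kv: kv[1], reverse=True)
print(slowest[0])  # -> ('/slow', 1000)
```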

    Find large pages

    You can now add the download size column (in my case “sc-bytes”) using the settings shown below. Remember to set the size to the average or the sum depending on what you would like to see. For me, I’ve used the average:

    And you should get something similar to the following:

    Bot behavior: Verifying and analyzing bots

    The best and easiest way to understand bot and crawl behavior is with log file analysis as you are again getting real-world data, and it’s a lot less hassle than other methods.

    Find un-crawled URLs

    Simply take the crawl of your website with your tool of choice, then take your log file and compare the URLs to find unique paths. You can do this with the “Remove Duplicates” feature of Excel or with conditional formatting, although the former is a lot less CPU-intensive, especially for larger log files. Easy!
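In Python this comparison is just set arithmetic. The URL sets below are hypothetical stand-ins for your crawl export and log file; the difference in each direction tells you something different.

```python
# Hypothetical URL sets from the crawl export and the log file.
crawled = {"/", "/about", "/products", "/products/widget"}
logged = {"/", "/products", "/orphan-landing-page"}

# Found by your crawler but never requested by bots: possible crawl-budget
# or discoverability problems.
uncrawled_by_bots = crawled - logged

# Requested by bots but missed by your crawler: possible orphan pages.
orphan_urls = logged - crawled

print(sorted(uncrawled_by_bots))
print(sorted(orphan_urls))
```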

    Identify spam bots

    Unnecessary server strain from spam and spoof bots is easily identified with log files and some basic command-line operators. Most requests will have an IP associated with them, so using your IP column (in my case, titled “c-ip” in a W3C-format log), remove all duplicates to find each individual requesting IP.

    From there, you should follow the process outlined in Google’s document for verifying IPs (note: For Windows users, use the nslookup command):
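Google’s documented process is a reverse DNS lookup on the IP, a check that the hostname belongs to googlebot.com or google.com, and then a forward DNS lookup to confirm it resolves back to the same IP. A sketch of that process in Python (the function names are my own, and the DNS calls need network access):

```python
import socket

GOOGLE_SUFFIXES = (".googlebot.com", ".google.com")

def host_is_google(host):
    """Pure check: does the reverse-DNS hostname end in a Google domain?"""
    return host.endswith(GOOGLE_SUFFIXES)

def is_verified_googlebot(ip):
    """Reverse DNS, suffix check, then forward DNS to confirm the round trip.

    Requires network access; returns False on any lookup failure.
    """
    try:
        host = socket.gethostbyaddr(ip)[0]
        return host_is_google(host) and socket.gethostbyname(host) == ip
    except OSError:
        return False

# Spoofed "Googlebot" user agents will fail this round-trip check even
# though the user-agent string looks legitimate.
print(host_is_google("crawl-66-249-66-1.googlebot.com"))  # -> True
```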

    Or, if you’re verifying a Bingbot, use their handy tool:

    Conclusion: Log Files Analysis — not as scary as it sounds

    With some simple tools at your disposal, you can dive deep into how Googlebot behaves. When you understand how a website handles crawling, you can diagnose all manner of problems — but the real power of log file analysis lies in being able to test your theories about Googlebot and extend the techniques above to gather your own insights and revelations.

    What theories would you test using log file analysis? What insights could you gather from log files other than the ones listed above? Let me know in the comments below.


Source link

MozCon 2019: Everything You Need to Know About Day Three



If the last day of MozCon felt like it went by too fast, or if you forgot everything that happened (we wouldn’t judge — there were so many insights), don’t fret. We captured all of day three’s takeaways so you can relive the magic. 

Don’t forget to check out all the photos with Roger from the photobooth! They’re available here in the MozCon Facebook group. Plus: You asked and we delivered: the 2019 MozCon speaker walk-on playlist is now live and available here for your streaming pleasure. 

Cindy Krum — Fraggles, Mobile-First Indexing, & the SERP of the Future

If you were hit with an instant wave of nostalgia after hearing Cindy’s walk-out music, then you are in good company, and you probably were not disappointed in the slightest by Cindy’s talk on Fraggles.


Source link

The Real Impact of Mobile-First Indexing & The Importance of Fraggles



While SEOs have been doubling-down on content and quality signals for their websites, Google was building the foundation of a new reality for crawling, indexing, and ranking. Though many believe deep in their hearts that “Content is King,” the reality is that Mobile-First Indexing enables a new kind of search result. This search result focuses on surfacing and re-publishing content in ways that feed Google’s cross-device monetization opportunities better than simple websites ever could.

For two years, Google honed and changed their messaging about Mobile-First Indexing, mostly de-emphasizing the risk that good, well-optimized, Responsive-Design sites would face. Instead, the search engine giant focused more on the use of the Smartphone bot for indexing, which led to an emphasis on the importance of matching SEO-relevant site assets between desktop and mobile versions (or renderings) of a page. Things got a bit tricky when Google had to explain that the Mobile-First Indexing process would not necessarily be bad for desktop-oriented content, but all of Google’s shifting and positioning eventually validated my long-stated belief: That Mobile-First Indexing is not really about mobile phones, per se, but mobile content.

I would like to propose an alternative to the predominant view, a speculative theory, about what has been going on with Google in the past two years, and it is the thesis of my 2019 MozCon talk — something we are calling Fraggles and Fraggle-based Indexing.

I’ll go through Fraggles and Fraggle-based indexing, and how this new method of indexing has made web content more ‘liftable’ for Google. I’ll also outline how Fraggles impact the Search Engine Results Pages (SERPs), and why they fit with Google’s promotion of Progressive Web Apps. Next, I will explain how astute SEOs can adapt their understanding of SEO and leverage Fraggles and Fraggle-based Indexing to meet the needs of their clients and companies. Finally, I’ll go over the implications that this new method of indexing will have on Google’s monetization and technology strategy as a whole.

Ready? Let’s dive in.

Fraggles & Fraggle-based indexing

The SERP has changed in many ways. These changes can be thought of and discussed separately, but I believe that they are all part of a larger shift at Google. This shift includes “Entity-First Indexing” of crawled information around the existing structure of Google’s Knowledge Graph, and the concept of “Portable-prioritized Organization of Information,” which favors information that is easy to lift and re-present in Google’s properties — Google describes these two things together as “Mobile-First Indexing.”

As SEOs, we need to remember that the web is getting bigger and bigger, which means that it’s getting harder to crawl. Users now expect Google to index and surface content instantly. But while webmasters and SEOs were building out more and more content in flat, crawlable HTML pages, the best parts of the web were moving towards more dynamic websites and web-apps. These new assets were driven by databases of information on a server, populating their information into websites with JavaScript, XML or C++, rather than flat, easily crawlable HTML. 

For many years, this was a major problem for Google, and thus, it was a problem for SEOs and webmasters. Ultimately though, it was the more complex code that forced Google to shift to this more advanced, entity-based system of indexing — something we at MobileMoxie call Fraggles and Fraggle-Based Indexing, and the credit goes to JavaScript’s “Fragments.”

Fraggles represent individual parts (fragments) of a page for which Google overlays a “handle” or “jump-link” (aka named anchor, bookmark, etc.), so that a click on the result takes the user directly to the part of the page where the relevant fragment of text is located. These Fraggles are then organized around the relevant nodes in the Knowledge Graph, so that the mapping of the relationships between different topics can be vetted, built out, and maintained over time, but also so that the structure can be used and reused internationally, even if different content is ranking. 

More than one Fraggle can rank for a page, and the format can vary from a text-link with a “Jump to” label, an unlabeled text link, a site-link carousel, a site-link carousel with pictures, or occasionally horizontal or vertical expansion boxes for the different items on a page.

The most notable thing about Fraggles is the automatic scrolling behavior from the SERP. While Fraggles are often linked to content that has an HTML or JavaScript jump-link, sometimes the jump-links appear to be added by Google without being present in the code at all. This behavior is also prominently featured in AMP Featured Snippets, for which Google has the same scrolling behavior, but also includes Google’s colored highlighting — which is superimposed on the page — to show the part of the page that was displayed in the Featured Snippet, allowing the searcher to see it in context. I write about this more in the article: What the Heck are Fraggles.

How Fraggles & Fraggle-based indexing works with JavaScript

Google’s desire to index Native Apps and Web Apps, including single-page apps, has necessitated Google’s switch to indexing based on Fragments and Fraggles, rather than pages. In JavaScript, as well as in Native Apps, a “Fragment” is a piece of content or information that is not necessarily a full page. 

The easiest way for an SEO to think about a Fragment is within the example of an AJAX expansion box: The piece of text or information that is fetched from the server to populate the AJAX expander when clicked could be described as a Fragment. Alternatively, if it is indexed for Mobile-First Indexing, it is a Fraggle. 

It is no coincidence that Google announced the launch of Deferred JavaScript Rendering at roughly the same time as the public roll-out of Mobile-First Indexing without drawing-out the connection, but here it is: When Google can index fragments of information from web pages, web apps and native apps, all organized around the Knowledge Graph, the data itself becomes “portable” or “mobile-first.”

We have also recently discovered that Google has begun to index URLs with a # jump-link, after years of not doing so, and is reporting on them separately from the primary URL in Search Console. As you can see below from our data, they aren’t getting a lot of clicks, but they are getting impressions. This is likely because of the low average position. 

Before Fraggles and Fraggle-based Indexing, indexing # URLs would have just resulted in a massive duplicate content problem and extra indexing work for Google. Now that Fraggle-based Indexing is in place, it makes sense to index and report on # URLs in Search Console — especially for breaking up long, drawn-out JavaScript experiences like PWAs and Single-Page Apps that don’t have separate URLs or databases, or, in the long run, possibly even for indexing native apps without Deep Links. 

Why index fragments & Fraggles?

If you’re used to thinking of rankings with the smallest increment being a URL, this idea can be hard to wrap your brain around. To help, consider this thought experiment: How useful would it be for Google to rank a page that gave detailed information about all different kinds of fruits and vegetables? It would be easy for a query like “fruits and vegetables,” that’s for sure. But if the query is changed to “lettuce” or “types of lettuce,” then the page would struggle to rank, even if it had the best, most authoritative information. 

This is because the “lettuce” keywords would be diluted by all the other fruit and vegetable content. It would be more useful for Google to rank the part of the page that is about lettuce for queries related to lettuce, and the part of the page about radishes for queries about radishes. But since users don’t want to scroll through an entire page of fruits and vegetables to find the information about the particular vegetable they searched for, Google prioritizes pages with keyword focus and density as they relate to the query. Google will rarely rank long pages that cover multiple topics, even if they are more authoritative.

With featured snippets, AMP featured snippets, and Fraggles, it’s clear that Google can already find the important parts of a page that answer a specific question — they’ve actually been able to do this for a while. So, if Google can organize and index content like that, what would the benefit be in maintaining an index based only on per-page statistics and rankings? Why would Google want to rank entire pages when they could rank just the best parts of pages that are most related to the query?

To address these concerns, SEOs have historically worked to break individual topics out into separate pages, with one page focused on each topic or keyword cluster. With our vegetable example, this would ensure that the lettuce page could rank for lettuce queries and the radish page could rank for radish queries. But with each website creating a new page for every possible topic it would like to rank for, there’s a lot of redundant and repetitive work for webmasters, and it likely adds a lot of low-quality, unnecessary pages to the index. Realistically, how many individual pages on lettuce does the internet really need, and how would Google determine which one is the best? The fact is, Google wanted to shift to an algorithm that focused less on links and more on topical authority to surface only the best content, and the scrolling behavior of Fraggles lets it sidestep the page-per-topic problem entirely.

Even though the effort to switch to Fraggle-based indexing and organize the information around the Knowledge Graph was massive, the long-term benefits far outpace the costs to Google, because they make Google’s systems more flexible, monetizable, and sustainable, especially as the amount of information and the number of connected devices expand exponentially. It also helps Google identify, serve, and monetize new cross-device search opportunities as they continue to expand, including search results on TVs, connected screens, and spoken results from connected speakers. A few relevant costs and benefits are outlined below for you to contemplate, keeping Google’s long-term perspective in mind:

Why Fraggles and Fraggle-based indexing are important for PWAs

What also makes the shift to Fraggle-based Indexing relevant to SEOs is how it fits in with Google’s championing of Progressive Web Apps or AMP Progressive Web Apps, (aka PWAs and PWA-AMP websites/web apps). These types of sites have become the core focus of Google’s Chrome Developer summits and other smaller Google conferences.

From the perspective of traditional crawling and indexing, Google’s focus on PWAs is confusing. PWAs often feature heavy JavaScript and are still frequently built as Single-Page Apps (SPAs), with only one or a few URLs. Both of these characteristics would make PWAs especially difficult and resource-intensive for Google to index in a traditional way — so, why would Google be so enthusiastic about PWAs? 

The answer is that PWAs require ServiceWorkers, which let Google use Fraggles and Fraggle-based indexing to take the burden off crawling and indexing complex web content.

In case you need a quick refresher: a ServiceWorker is a JavaScript file that instructs a device (mobile or computer) to create a local cache of content to be used just for the operation of the PWA. It is meant to make the loading of content much faster, because the content is stored locally rather than fetched each time from a server or CDN somewhere on the internet, and it does so by saving copies of the text and images associated with certain screens in the PWA. Once a user accesses content in a PWA, that content doesn’t need to be fetched again from the server. It’s a bit like browser caching, but faster: the ServiceWorker stores the information about when content expires, rather than storing it on the web. This is what makes PWAs seem to work offline, but it is also why content that has not been visited yet is not stored in the ServiceWorker.

ServiceWorkers and SEO

Most SEOs who understand PWAs understand that a ServiceWorker is for caching and load time, but they may not understand that it is likely also for indexing. If you think about it, ServiceWorkers mostly store the text and images of a site, which is exactly what the crawler wants. A crawler that uses Deferred JavaScript Rendering could go through a PWA and simulate clicking on all the links and store static content using the framework set forth in the ServiceWorker. And it could do this without always having to crawl all the JavaScript on the site, as long as it understood how the site was organized, and that organization stayed consistent. 

Google would also know exactly how often to re-crawl, and therefore could only crawl certain items when they were set to expire in the ServiceWorker cache. This saves Google a lot of time and effort, allowing them to get through or possibly skip complex code and JavaScript.

For a PWA to be indexed, Google requires webmasters to ‘register their app in Firebase,’ but they used to require webmasters to “register their ServiceWorker.” Firebase is the Google platform that allows webmasters to set up and manage indexing and deep linking for their native apps, chat-bots and, now, PWAs.

Direct communication with a PWA specialist at Google a few years ago revealed that Google didn’t crawl the ServiceWorker itself, but crawled the API to the ServiceWorker. It’s likely that when webmasters register their ServiceWorker with Google, Google is actually creating an API to the ServiceWorker, so that the content can be quickly and easily indexed and cached on Google’s servers. Since Google has already launched an Indexing API and appears to now favor APIs over traditional crawling, we believe Google will begin pushing the use of ServiceWorkers to improve page speed, since they can be used on non-PWA sites, but this will actually be to help ease the burden on Google to crawl and index the content manually.

Flat HTML may still be the fastest way to get web information crawled and indexed with Google. For now, JavaScript still has to be deferred for rendering, but it is important to recognize that this could change and crawling and indexing is not the only way to get your information to Google. Google’s Indexing API, which was launched for indexing time-sensitive information like job postings and live-streaming video, will likely be expanded to include different types of content. 

It’s important to remember that this is how AMP, Schema, and many other types of powerful SEO functionalities have started with a limited launch; beyond that, some great SEO’s have already tested submitting other types of content in the API and seen success. Submitting to APIs skips Google’s process of blindly crawling the web for new content and allows webmasters to feed the information to them directly.

It is possible that the new Indexing API follows a similar structure or process to PWA indexing. Submitted URLs can already get some kinds of content indexed or removed from Google’s index, usually in about an hour, and while it is only currently officially available for the two kinds of content, we expect it to be expanded broadly.

How will this impact SEO strategy?

Of course, every SEO wants to know how to leverage this speculative theory — how can we make the changes in Google to our benefit? 

The first thing to do is take a good, long, honest look at a mobile search result. Position #1 in the organic rankings is just not what it used to be. There’s a ton of engaging content that is often pushing it down, but not counting as an organic ranking position in Search Console. This means that you may be maintaining all your organic rankings while also losing a massive amount of traffic to SERP features like Knowledge Graph results, Featured Snippets, Google My Business, maps, apps, Found on the Web, and other similar items that rank outside of the normal organic results. 

These results, as well as Pay-per-Click results (PPC), are more impactful on mobile because they are stacked above organic rankings. Rather than being off to the side, as they might be in a desktop view of the search, they push organic rankings further down the results page. There has been some great reporting recently about the statistical and large-scale impact of changes to the SERP and how these changes have resulted in changes to user-behavior in search, especially from Dr. Pete Meyers, Rand Fishkin, and JumpTap.

Dr. Pete has focused on the increasing number of changes to the Google Algorithm recorded in his MozCast, which heated up at the end of 2016 when Google started working on Mobile-First Indexing, and again after it launched the Medic update in 2018. 

Rand, on the other hand, focused on how the new types of rankings are pushing traditional organic results down, resulting in less traffic to websites, especially on mobile. All this great data from these two really set the stage for a fundamental shift in SEO strategy as it relates to Mobile-First Indexing.

The research shows that Google re-organized its index to suit a different presentation of information — especially if they are able to index that information around an entity-concept in the Knowledge Graph. Fraggle-based Indexing makes all of the information that Google crawls even more portable because it is intelligently nested among related Knowledge Graph nodes, which can be surfaced in a variety of different ways. Since Fraggle-based Indexing focuses more on the meaningful organization of data than it does on pages and URLs, the results are a more “windowed” presentation of the information in the SERP. SEOs need to understand that search results are now based on entities and use-cases (think micro-moments), instead of pages and domains.

Google’s Knowledge Graph

To really grasp how this new method of indexing will impact your SEO strategy, you first have to understand how Google’s Knowledge Graph works. 

Since it is an actual “graph,” all Knowledge Graph entries (nodes) include both vertical and lateral relationships. For instance, an entry for “bread” can include lateral relationships to related topics like cheese, butter, and cake, but may also include vertical relationships like “standard ingredients in bread” or “types of bread.” 

Lateral relationships can be thought of as related nodes on the Knowledge Graph, and hint at “Related Topics,” whereas vertical relationships point to a broadening or narrowing of the topic, which hints at the most likely filters within a topic. In the case of bread, a vertical relationship upward would be topics like “baking,” and downward would include topics like “flour” and other ingredients used to make bread, or “sourdough” and other specific types of bread.

SEOs should note that Knowledge Graph entries can now include an increasingly wide variety of filters and tabs that narrow the topic information to benefit different types of searcher intent. This includes things like helping searchers find videos, books, images, quotes, and locations, but in the case of filters, it can be topic-specific and unpredictable (informed by active machine learning). This is the crux of Google’s goal with Fraggle-based Indexing: to be able to organize the information of the web based on Knowledge Graph entries or nodes, otherwise discussed in SEO circles as “entities.” 

Since the relationships of one entity to another remain the same, regardless of the language a person is speaking or searching in, the Knowledge Graph information is language-agnostic, and thus easily used for aggregation and machine learning in all languages at the same time. Using the Knowledge Graph as a cornerstone for indexing is, therefore, a much more useful and efficient means for Google to access and serve information in multiple languages for consumption and ranking around the world. In the long-term, it’s far superior to the previous method of indexing.

Examples of Fraggle-based indexing in the SERPs 

Knowledge Graph

Google has dramatically increased the number of Knowledge Graph entries and the categories and relationships within them. The build-out is especially prominent for topics for which Google has a high amount of structured data and information already. This includes topics like:

  • TV and Movies — from Google Play
  • Food and Recipe — from Recipe Schema, recipe AMP pages, and external food and nutrition databases 
  • Science and medicine — from trusted sources (like WebMD) 
  • Businesses — from Google My Business. 

Google is adding more and more nodes and relationships to their graph, and existing entries are also being built out with more tabs and carousels to break a single topic into smaller, more granular topics or types of information.

As you can see below, the build-out of the Knowledge Graph has also added to the number of filters and drill-down options within many queries, even outside of the Knowledge Graph. This increase can be seen throughout all of the Google properties, including Google My Business and Shopping, both of which we believe are now sections of the Knowledge Graph:

Google Search for ‘Blazers’ with Visual Filters at the Top for Shopping Oriented Queries
Google My Business (Business Knowledge Graph) with Filters for Information about Googleplex

Other similar examples include the additional filters and “Related Topics” results in Google Images, which we also believe to represent nodes on the Knowledge Graph:



Google Images Increase in Filters & Inclusion of Related Topics Means that These Are Also Nodes on the Knowledge Graph

The Knowledge Graph is also being presented in a variety of different ways. Sometimes there’s a sticky navigation that persists at the top of the SERP, as seen in many media-oriented queries, and sometimes it’s broken up to show different information throughout the SERP, as you may have noticed in many of the local business-oriented search results, both shown below.

Media Knowledge Graph with Sticky Top Nav (Query for ‘Ferris Bueller’s Day Off’)
Local Business Knowledge Graph (GMB) With Information Split-up Throughout the SERP

Since the launch of Fraggle-based indexing is essentially a major Knowledge Graph build-out, Knowledge Graph results have also begun including more engaging content, which makes it even less likely that users will click through to a website. Assets like playable video and audio, live sports scores, and location-specific details such as transit information and TV timetables can all be accessed directly in the search results. There’s more to the story, though. 

Increasingly, Google is also building out their own proprietary content by re-mixing existing information that they have indexed to create unique, engaging content like animated ‘AMP Stories’ which webmasters are also encouraged to build-out on their own. They have also started building a zoo of AR animals that can show as part of a Knowledge Graph result, all while encouraging developers to use their AR kit to build their own AR assets that will, no doubt, eventually be selectively incorporated into the Knowledge Graph too.

Google AR Animals in Knowledge Graph
Google AMP Stories Now Called ‘Life in Images’

SEO Strategy for Knowledge Graphs

Companies that want to leverage the Knowledge Graph should take every opportunity to create their own assets, like AR models and AMP Stories, so that Google has no reason to create those assets itself. Beyond that, companies should submit accurate information directly to Google whenever they can. The easiest way to do this is through Google My Business (GMB). Whatever types of information are requested in GMB should be added or uploaded. If Google Posts are available in your business category, you should be posting regularly and making sure that the Posts link back to your site with a call to action. If you have videos or photos that are relevant for your company, upload them to GMB. Start to think of GMB as a social network or newsletter — any assets that are shared on Facebook or Twitter can also be shared on Google Posts, or at least uploaded to the GMB account.

You should also investigate the current Knowledge Graph entries that are related to your industry, and work to become associated with recognized companies or entities in that industry. This could be from links or citations on the entity websites, but it can also include being linked by third-party lists that give industry-specific advice and recommendations, such as being listed among the top competitors in your industry (“Best Plumbers in Denver,” “Best Shoe Deals on the Web,” or “Top 15 Best Reality TV Shows”). Links from these posts also help but are not required — especially if you can get your company name on enough lists with the other top players. Verify that any links or citations from authoritative third-party sites like Wikipedia, Better Business Bureau, industry directories, and lists are all pointing to live, active, relevant pages on the site, and not going through a 301 redirect.
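Auditing those citations by hand gets tedious, so a short script can flag any URL that responds with a redirect or an error rather than a live 200 page. Below is a minimal sketch using only Python's standard library; the commented-out URLs are placeholders, not real citations, and the 200-or-bust rule is a simplification of the advice above:

```python
import urllib.request
from urllib.error import HTTPError, URLError

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following redirects so the raw status code is visible."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def fetch_status(url, timeout=10):
    """Return the raw HTTP status for a citation URL, without following redirects."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        return opener.open(url, timeout=timeout).status
    except HTTPError as e:   # 3xx redirects and 4xx/5xx errors both land here
        return e.code
    except URLError:         # DNS failure, timeout, etc.
        return None

def needs_update(status):
    """A citation needs fixing unless it resolves directly to a live 200 page."""
    return status != 200

# Placeholder usage -- swap in your real list of citation URLs:
#   for url in ["https://example.com/old-page"]:
#       if needs_update(fetch_status(url)):
#           print("UPDATE:", url)
```

Running this over a spreadsheet of known citations once a quarter is usually enough to catch links that have quietly started 301-ing.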

While this is just speculation and not a proven SEO strategy, you might also want to make sure that your domain is correctly classified in Google’s records by checking the industries that it is associated with. You can do so in Google’s MarketFinder tool. Make updates or recommend new categories as necessary. Then, look into the filters and relationships that are given as part of Knowledge Graph entries and make sure you are using the topic and filter words as keywords on your site.

Featured snippets 

Featured Snippets or “Answers” first surfaced in 2014 and have also expanded quite a bit, as shown in the graph below. It is useful to think of Featured Snippets as rogue facts, ideas or concepts that don’t have a full Knowledge Graph result, though they might actually be associated with certain existing nodes on the Knowledge Graph (or they could be in the vetting process for eventual Knowledge Graph build-out). 

Featured Snippets seem to surface when the information comes from a source that Google does not have an incredibly high level of trust for, like it does for Wikipedia, and often they come from third-party sites that may or may not have a monetary interest in the topic — something that makes Google want to vet the information more thoroughly and may prevent Google from using it if a less biased option is available.

Like the Knowledge Graph, Featured Snippet results have grown very rapidly in the past year or so, and have also begun to include carousels — something that Rob Bucci writes about extensively here. We believe that these carousels represent potentially related topics that Google knows about from the Knowledge Graph. Featured Snippets now look even more like mini Knowledge Graph entries: carousels appear to include both laterally and vertically related topics, and their appearance and maintenance seem to be driven by click volume and subsequent searches. However, this may also be influenced by aggregated engagement data for People Also Ask and Related Search results.

The build-out of Featured Snippets has been so aggressive that sometimes the answers Google lifts are obviously wrong, as you can see in the example image below. It is also important to understand that Featured Snippet results can change from location to location and are not language-agnostic; they are not translated to match the Search Language or the Phone Language settings. Google also does not hold itself to any standard of consistency, so a Featured Snippet for one query might present an answer one way, while a similar query for the same fact presents a Featured Snippet with slightly different information. For instance, a query for “how long to boil an egg” could result in an answer that says “5 minutes,” while a query for “how to make a hard-boiled egg” could result in an answer that says “boil for 1 minute, and leave the egg in the water until it is back to room temperature.”

Featured Snippet with Carousel
Featured Snippet that is Wrong

The data below was collected by Moz and represents an average of roughly 10,000 keywords, skewing slightly towards ‘head’ terms.


SEO strategy for featured snippets

All of the standard recommendations for driving Featured Snippets apply here. This includes making sure that you keep the information that you are trying to get ranked in a Featured Snippet clear, direct, and within the recommended character count. It also includes using simple tables, ordered lists, and bullets to make the data easier to consume, as well as modeling your content after existing Featured Snippet results in your industry.
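As a drafting aid, those character-count guidelines can be turned into a quick heuristic check. The thresholds below (~320 characters, ~60 words) are assumptions based on commonly observed snippet lengths, not limits published by Google, and the sample answer is a made-up example:

```python
def snippet_friendly(paragraph, max_chars=320, max_words=60):
    """Heuristic check: is this paragraph concise enough to be lifted into
    a Featured Snippet? Thresholds are observed norms, not published limits."""
    return len(paragraph) <= max_chars and len(paragraph.split()) <= max_words

# A clear, direct answer paragraph (placeholder content):
answer = ("To hard-boil an egg, place it in cold water, bring the water to "
          "a boil, then remove it from the heat and let it sit, covered, "
          "for 10 to 12 minutes.")
print(snippet_friendly(answer))  # prints True
```

A check like this won't win you a snippet on its own, but it keeps candidate answer paragraphs from ballooning past what Google can display.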

This is still speculative, but it seems likely that the inclusion of Speakable schema markup, along with structured data types like “HowTo,” “FAQ,” and “Q&A,” may also drive Featured Snippets. These kinds of results are specially designated as content that works well in a voice search. Since Google has been adamant that there is not more than one index, and Google is heavily focused on improving voice results from Google Assistant devices, anything that could be a good result in the Google Assistant, and ranks well, might also have a stronger chance of ranking in a Featured Snippet.
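For instance, FAQ content can be marked up with schema.org’s FAQPage structured data, emitted as JSON-LD in the page. The question and answer below are placeholders; this is a sketch of generating the markup programmatically rather than a claim about how any particular site should do it:

```python
import json

def faq_jsonld(qa_pairs):
    """Build schema.org FAQPage structured data (JSON-LD) from
    (question, answer) string pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }

# Placeholder Q&A -- use your site's real frequently asked questions.
markup = faq_jsonld([
    ("How long should I boil an egg?",
     "About 10 to 12 minutes for a hard-boiled egg."),
])

# Embed the output in the page inside <script type="application/ld+json"> ... </script>
print(json.dumps(markup, indent=2))
```

Generating the JSON-LD from your actual FAQ copy (rather than maintaining it by hand) keeps the markup and the visible text in sync, which matters because Google expects the two to match.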

People Also Ask & Related Searches

Finally, the increased occurrence of “Related Searches,” as well as the inclusion of People Also Ask (PAA) questions just below most Knowledge Graph and Featured Snippet results, is undeniable. The Earl Tea screenshot shows that PAAs, along with Interesting Finds, are both part of the Knowledge Graph too.

The graph below shows the steady increase in PAAs. PAA results appear to be an expansion of Featured Snippets, because once expanded, the answer to the question is displayed with the citation below it. Similarly, some Related Search results now include a result that looks like a Featured Snippet instead of simply linking over to a different search result. You can now find ‘Related Searches’ throughout the SERP, often as part of a Knowledge Graph result, sometimes in a carousel in the middle of the SERP, and always at the bottom of the SERP — sometimes with images and expansion buttons that surface Featured Snippets within the Related Search results directly in the existing SERP.

Boxes with Related Searches are now also included with Image Search results. It’s interesting to note that Related Search results in Google Images started surfacing at the same time that Google began translating image Title Tags and Alt Tags. This timing fits well with the concept of Entity-First Indexing, with the idea that entities and the Knowledge Graph are language-agnostic, and with the notion that Related Searches are somehow tied to the Knowledge Graph.

This data was collected by Moz and represents an average of roughly 10,000 that skews slightly towards ‘head’ terms.

People Also Ask
Related Searches

SEO strategy for PAA and related searches

Since PAAs and some Related Searches now appear to simply include Featured Snippets, driving Featured Snippet results for your site is also a strong strategy here. PAA results often include at least two versions of the same question, restated in different language, before moving on to questions that are more related to lateral and vertical nodes on the Knowledge Graph. If you include information on your site that Google considers related to the topic, based on Related Searches and PAA questions, it could help make your site appear relevant and authoritative.

Finally, it is crucial to remember that you don’t have to have a website to rank in Google now, and SEOs should consider non-website rankings part of their job too. 

If a business doesn’t have a website, or if you just want to cover all the bases, you can let Google host your content directly — in as many places as possible. We have seen that Google-hosted content generally seems to get preferential treatment in Google search results and Google Discover, especially when compared to the decreasing traffic from traditional organic results. Google is now heavily focused on surfacing multimedia content, so anything that you might have previously created a new page on your website for should now be considered for a video.

Google My Business (GMB) is great for companies that don’t have websites, or that want to host their websites directly with Google. YouTube is great for videos, TV, video podcasts, clips, animations, and tutorials. If you have an app, a book, an audiobook, a podcast, a movie, a TV show, a class, music, or a PWA, you can submit it directly to Google Play (much of the video content in Google Play is now cross-populated in YouTube and YouTube TV, but this is not necessarily true of the other assets). This strategy could also include books in Google Books, flights in Google Flights, hotels in Google Hotel listings, and attractions in Google Explore. It also includes having valid AMP code, since Google hosts AMP content, and includes Google News if your site is an approved provider of news.

Changes to SEO tracking for Fraggle-based indexing

The biggest problem for SEOs is not only the missing organic traffic; it is also the fact that current methods of tracking organic results generally don’t show whether things like Knowledge Graph, Featured Snippets, PAA, Found on the Web, or other types of results are appearing at the top of the query or somewhere above your organic result. Position one in organic results is not what it used to be, nor is anything below it, so you can’t expect those rankings to drive the same traffic. If Google is going to be lifting and re-presenting everyone’s content, the traffic will never arrive at the site, and SEOs won’t know if their efforts are still returning the same monetary value. This problem is especially acute for publishers, who have historically sold advertising on their websites based on the traffic the website could be expected to drive.

The other thing to remember is that results differ — especially on mobile, where they vary from device to device (generally based on screen size) and can also vary based on the phone’s OS. They can also change significantly based on the location or language settings of the phone, and they definitely do not always match desktop results for the same query. Most SEOs don’t know much about the reality of their mobile search results because most SEO reporting tools still focus heavily on desktop results, even though Google has switched to Mobile-First. 

As well, SEO tools generally only report on rankings from one location — the location of their servers — rather than being able to test from different locations. 

The only real way for SEOs to address this problem is to use tools like the MobileMoxie SERP Test to check what rankings look like on top keywords from all the locations where their users may be searching. While the free tool only provides results for one location at a time, subscribers can test search results in multiple locations, based on a service-area radius or an uploaded CSV of addresses. The tool has integrations with Google Sheets and a connector with Data Studio to help with SEO reporting, and APIs are also available for deeper integrations in content-editing tools, dashboards, and other SEO tools.


At MozCon 2017, I expressed my belief that the impact of Mobile-First Indexing requires a re-interpretation of the words “Mobile,” “First,” and “Indexing.” Re-defined in the context of Mobile-First Indexing, the words should be understood to mean “portable,” “preferred,” and “organization of information.” The potential shift to Fraggle-based indexing and the recent changes to the SERPs, especially in the past year, certainly seem to support this theory. And though they have been in the works for more than two years, the changes to the SERP now seem to be rolling out faster, making the SERP unrecognizable from what it was only three or four years ago.

In this post, we described Fraggles and Fraggle-based indexing for SEO: a theory that speculates about the true nature of the change to Mobile-First Indexing, and about how the index itself — and the units of indexing — may have changed to accommodate faster and more nuanced organization of information based on the Knowledge Graph, rather than simply links and URLs. We covered how Fraggles and Fraggle-based indexing work, how they relate to JavaScript and PWAs, what strategies SEOs can take to leverage them for additional exposure in the search results, and how SEOs can update their success tracking to account for all the variables that impact mobile search results.

SEOs need to consider the opportunities and change the way we view our overall indexing strategy, and our jobs as a whole. If Google is organizing the index around the Knowledge Graph, that makes it much easier for Google to constantly mention near-by nodes of the Knowledge Graph in “Related Searches” carousels, links from the Knowledge Graph, and topics in PAAs. It might also make it easier to believe that featured snippets are simply pieces of information being vetted (via Google’s click-crowdsourcing) for inclusion or reference in the Knowledge Graph.

Fraggles and Fraggle-based indexing re-frame the switch to Mobile-First Indexing, which means that SEOs and SEO tool companies need to start thinking mobile-first — i.e., about the portability of their information. While it is likely that pages and domains still carry strong ranking signals, the changes in the SERP all seem to focus less on entire pages and more on pieces of pages, similar to the ones surfaced in Featured Snippets, PAAs, and some Related Searches. If Google focuses more on windowing content and being an “answer engine” instead of a “search engine,” this fits well with its stated identity and its desire to build a more efficient, sustainable, international engine.

SEOs also need to find ways to serve their users better, by focusing more on the reality of the mobile SERP, and how much it can vary for real users. While Google may not call the smallest rankable units Fraggles, it is what we call them, and we think they are critical to the future of SEO.



How to make video ads work for your local business


Jacob Baadsgaard

Video advertising has become an increasingly important part of online marketing in recent years. There’s no real getting around that fact. If you want to increase brand awareness, drive conversions or sell products, a good video ad strategy is hard to beat.

Unfortunately, video advertising is a lot more complicated than your typical text, display or news feed ad. You have to write scripts, know how to run a camera, record decent audio, set up lighting, and edit your footage and sound effects. Creating a video ad can be a daunting task.

This is especially true if you’re running a small, local business.

For many local companies, video advertising feels hopelessly out of reach. However, that simply isn’t the case. With the right strategy, even simple, easy-to-create video content can deliver great results.

In this article, we’re going to take a look at why video advertising is such a good idea for local businesses and three easy video strategies you can use to get great results – even if you don’t have a ton of video experience.

The advantage of being local

When it comes to video advertising, it might seem like the big chains have a lot of advantages over your local business. However, you have one thing they don’t: a direct connection with your target market.

I mean, when was the last time you heard anyone say, “McDonald’s is just such an important part of our community?”

All of the bulk discounts, supply chain efficiency and capital that makes big chains so powerful tends to disconnect them from their customers. People don’t love Walmart as a business. They might like Walmart’s prices or the wide range of products that they carry, but they don’t feel personally connected to Walmart. If another store with better prices or more products comes around, they’ll go to that store instead.

But people develop an affection for local businesses. They’re run by people from their community who feel real and relatable. When people feel that connection to a business, they want that business to succeed…and show their support with their money.

For example, there’s a hole-in-the-wall burger and shake shop in my home town called Glade’s that has been around for decades. Their food isn’t bad, but it certainly isn’t any better or cheaper than a lot of the newer chains that have set up shop lately. 

But, people keep going back. Why? Because it’s a part of the community. It’s where everyone goes after a game. It’s a generations-old tradition. Going there makes you feel like you’re part of something bigger – like you’re participating in the community as a whole.

And that’s exactly what you need as a local business.

Now, that being said, if your business hasn’t been around for decades, how do you make people feel like you’re an important part of the community? Video advertising.

More than any other advertising medium, video has the power to make people feel connected to your business. You can show people or places that your audience is familiar with, reference local events or figures – and make your business feel more like home than a chain ever could.

Why local businesses should use video

As we just discussed, video is intimidating for a lot of local businesses. It costs more and takes a much broader skill set than most other types of online marketing.

But, there’s a bright side to this.

Anytime that something is hard to do, there’s a business opportunity to be found. In this case, the fact that video is hard means that most local businesses aren’t doing it. That leaves the field wide open for any companies that learn how to make video work.

So, if you take the time to figure out a workable video ad strategy, you’ll often be way ahead of the competition. Even if the competition is already doing video ads, if you can get in on the game (or figure out how to do them more effectively), you’ll be able to stay relevant and maintain your competitive edge.

The good news is, while video advertising might seem intimidating, there are actually a ton of resources available to help you make video content. There are thousands of YouTube tutorials, blog posts and courses that will teach you the skills you need to make a basic video using the cell phone in your pocket. Or, if that seems like more effort than you have time for, there are plenty of video ad agencies that can make you a quick-and-dirty video for a reasonable price.

The resources are there. What you need to do is come up with the right video strategy…which is what we’ll get into in the next section.

3 simple video ad strategies for local businesses

Often, local businesses overthink video advertising. They believe that they need to create something clever or complex, when they really just need to create something clear and compelling.

After all, most of the time, you aren’t competing with giant brands like Pepsi or Apple. You’re competing with other local businesses who have the same challenges as you.

So, with the right strategy and approach, video advertising can actually be both manageable and effective. Here are a few different types of video ads that any local business can pull off.

1. Brand story ads

One great way to create a compelling local video ad is to tell your brand story. Like we discussed above, one of your greatest advertising assets is your connection to the community, so if you can make an ad that showcases that, you’ll be well on your way to getting more customers in the door.

Here are some different types of brand stories that can work for local businesses:

  • The origin story. Why and how did you (or the owner) start your business? Help people connect with who you are and where you come from.
  • Your purpose. What are your business goals? What is your company passionate about? Give people a reason to get behind you.
  • Success stories. Who has your business helped? How has your business made a difference in the community? Show how supporting your business is supporting the community.
  • What makes your business unique. Is there something special about your products or services? Help people to see why they should be excited to buy from your business.

The point of a brand story ad is to create awareness for your company. By the end of your ad, people should know that your business exists and is local, and they should feel interested in checking you out. 

That being said, brand story ads are pretty high funnel, so you might not get a ton of immediate foot traffic, but that’s not the point. The point is to build awareness and get on the radar of your potential customers.

As a general rule of thumb, shoot to keep your brand story ads between 30 and 60 seconds long. Basically, this type of ad should be short enough to get people interested, but just long enough to make a compelling case for your business.

2. Frequently asked questions (FAQ) ads

Once people are aware of your business, it’s natural for them to start asking questions. This is actually a good sign, because it shows buying intent.

If you can address those questions in a reasonable and straightforward way, it will remove people’s concerns about using your business and help get people in your door.

The easiest way to do this? FAQ ads.

If you notice that you get a lot of the same sort of questions (either in person or in response to your ads), an FAQ ad can be the perfect way to get ahead of the curve. Questions like “What is it made from?” or “Where do you get your produce?” are easy to address in a video ad and give you a different way to reach out to potential customers.

In general, this sort of ad works best with an audience that is already familiar with your business. Your goal is to get people who are thinking about visiting your business to actually act, so FAQ ads are great for retargeting campaigns (often as a follow-up to brand story ads).

3. Testimonial ads

Want an easy, incredibly powerful video ad? Try creating a testimonial ad!

As we mentioned earlier, one of the best things about local video ads is the fact that you can create video content that includes recognizable elements: people, places, events, etc.

Testimonial ads from locals are a great way to say, “We’re local and we do great things for the people in our community.” To make things even better, you’re not the one saying good things about your business, which makes the ad even more compelling.

For example, take a look at this ad.

Great location? No. Great lighting? No. Great scripting? No. Compelling as all get-out? Absolutely.

Testimonial ads don’t have to be Hollywood studio quality. Their power is in their authenticity. As long as the message is on point and the footage is free of distracting elements, a video like the one above can make for a great ad.

There are a lot of easy ways to make a testimonial ad. You can ask people to record something themselves, have them come into your office or even film people at an event or other venues.

As you’re creating your testimonial ads, aim to keep them to 45 to 60 seconds at most. Even though testimonials are interesting, watching someone drone on about your business for five minutes gets old fast. So, if you’ve got a ton of great testimonials (or a really long one), try breaking things up into multiple ads to keep things engaging.


Video advertising might not be the easiest way to market your local business, but if you do it right, it can be one of the most effective ways to differentiate yourself and get in front of your target audience. 

The good news is, with the right strategy, creating a compelling ad doesn’t have to be a huge headache. The key is to make sure that your ads capitalize on your strengths as a local business. Couple that with a compelling message and you’re on your way to success!

Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.

About The Author

Jacob is a passionate entrepreneur on a mission to grow businesses using PPC & CRO. As the Founder & CEO of Disruptive Advertising, Jacob has developed an award-winning, world-class organization that has now helped over 2,000 businesses grow their online revenue. Connect with him on Twitter.

