All About Fraggles (Fragment + Handle) – Whiteboard Friday



Suzzicks

What are “fraggles” in SEO and how do they relate to mobile-first indexing, entities, the Knowledge Graph, and your day-to-day work? In this glimpse into her 2019 MozCon talk, Cindy Krum explains everything you need to understand about fraggles in this edition of Whiteboard Friday.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Hi, Moz fans. My name is Cindy Krum, and I’m the CEO of MobileMoxie, based in Denver, Colorado. We do mobile SEO and ASO consulting. I’m here in Seattle, speaking at MozCon, but also recording this Whiteboard Friday for you today, and we are talking about fraggles.

So fraggles are obviously a name that I’m borrowing from Jim Henson, who created “Fraggle Rock.” But it’s a combination of words. It’s a combination of fragment and handle. I talk about fraggles as a new way or a new element or thing that Google is indexing.

Fraggles and mobile-first indexing

Let’s start with the idea of mobile-first indexing, because you have to kind of understand that before you can go on to understand fraggles. So I believe mobile-first indexing is about a little bit more than what Google says. Google says that mobile-first indexing was just a change of the crawler.

They had a desktop crawler that was primarily crawling and indexing, and now they have a mobile crawler that’s doing the heavy lifting for crawling and indexing. While I think that’s true, I think there’s more going on behind the scenes that they’re not talking about, and we’ve seen a lot of evidence of this. So what I believe is that mobile-first indexing was also about indexing, hence the name.

Knowledge Graph and entities

So I think that Google has reorganized their index around entities or around specifically entities in the Knowledge Graph. So this is kind of my rough diagram of a very simplified Knowledge Graph. But Knowledge Graph is all about person, place, thing, or idea.

Nouns are entities. The Knowledge Graph has nodes for all of the major person, place, thing, or idea entities out there. But it also indexes, or organizes, the relationships of this idea to that idea or this thing to that thing. What's useful about that for Google is that these things, these concepts, these relationships stay true in all languages, and that's how entities work, because entities happen before keywords.

This can be a hard concept for SEOs to wrap their brains around because we're so used to dealing with keywords. But if you think about an entity as something that's described by a keyword and can be language agnostic, that's how Google thinks about entities, because entities in the Knowledge Graph aren't stored as words per se. Their unique identifiers aren't words, they're numbers, and numbers are language agnostic.

But if we think about an entity like mother, mother is a concept that exists in all languages, but we have different words to describe it. But regardless of what language you're speaking, mother is related to father, is related to daughter, is related to grandfather, all in the same ways, even if we're speaking different languages. So if Google can use what they call the "topic layer" and entities as a way to filter in information and understand the world, then they can do it in languages where they're strong and say, "We know that this is true absolutely 100% of the time."
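To make that concrete, here is a toy sketch (the IDs and data are hypothetical, not Google's actual Knowledge Graph format): entities are keyed by language-agnostic identifiers, labels vary per language, and the relationships hold across all of them.

```python
# Hypothetical sketch: entities keyed by numeric IDs, not words.
# Labels vary by language; relationships between IDs do not.
entities = {
    1001: {"labels": {"en": "mother", "es": "madre", "de": "Mutter"}},
    1002: {"labels": {"en": "father", "es": "padre", "de": "Vater"}},
    1003: {"labels": {"en": "daughter", "es": "hija", "de": "Tochter"}},
}

# Relationships reference IDs, so they stay true in every language.
relations = [
    (1001, "spouse_of", 1002),
    (1003, "child_of", 1001),
]

def label(entity_id, lang):
    """Look up the surface word for an entity in a given language."""
    return entities[entity_id]["labels"][lang]

# The same relationship, verbalized in two languages:
for subj, rel, obj in relations:
    print(label(subj, "en"), rel, label(obj, "en"))
    print(label(subj, "de"), rel, label(obj, "de"))
```

The point of the sketch is only that the graph edges never mention a word in any language; the words are just per-language labels hung off the nodes.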

Then they can apply that understanding to languages that they have a harder time indexing or understanding, where they're just not as strong or the algorithm isn't built to handle the complexities of the language, like German, where they make really long words, or other languages that use lots of short words to mean different things or to modify different words.

Languages all work differently. But if they can use their translation API and their natural language APIs to build out the Knowledge Graph in places where they’re strong, then they can use it with machine learning to also build it and do a better job of answering questions in places or languages where they’re weak. So when you understand that, then it’s easy to think about mobile-first indexing as a massive Knowledge Graph build-out.

We’ve seen this happening statistically. There are more Knowledge Graph results and more other things that seem to be related to Knowledge Graph results, like people also ask, people also search for, related searches. Those are all describing different elements or different nodes on the Knowledge Graph. So when you see those things in the search, I want you to think, hey, this is the Knowledge Graph showing me how this topic is related to other topics.

So when Google launched mobile-first indexing, I think the reason it took two and a half years is that they were reindexing the entire web and organizing it around the Knowledge Graph. If you think back to the AMA that John Mueller did right about the time that mobile-first indexing was launching, he answered a lot of questions that were about JavaScript and hreflang.

When you put this in that context, it makes more sense. He wants the entity understanding, or he knows that the entity understanding is really important, so hreflang is also really important. So that's enough of that. Now let's talk about fraggles.

Fraggles = fragment + handle

So fraggles, as I said, are a fragment plus a handle. It's important to know that fraggles — let me go over here — fraggles and fragments: there are lots of things out there that have fragments. So you can think of native apps, databases, websites, podcasts, and videos. Those can all be fragmented.

Even though they don't have a URL, they might be useful content, because Google says its goal is to organize the world's information, not to organize the world's websites. I think that, historically, Google has kind of been locked into this crawling and indexing of websites, and that's bothered it. It wants to be able to show other stuff, but it couldn't do that because everything needed a URL.

But with fragments, potentially they don’t have to have a URL. So keep these things in mind — apps, databases and stuff like that — and then look at this. 

So this is a traditional page. If you think about a page, Google has kind of been forced, historically by their infrastructure, to surface pages and to rank pages. But pages sometimes struggle to rank if they have too many topics on them.

So for instance, what I’ve shown you here is a page about vegetables. This page may be the best page about vegetables, and it may have the best information about lettuce, celery, and radishes. But because it’s got those topics and maybe more topics on it, they all kind of dilute each other, and this great page may struggle to rank because it’s not focused on the one topic, on one thing at a time.

Google wants to rank the best things. But historically they’ve kind of pushed us to put the best things on one page at a time and to break them out. So what that’s created is this “content is king, I need more content, build more pages” mentality in SEO. The problem is everyone can be building more and more pages for every keyword that they want to rank for or every keyword group that they want to rank for, but only one is going to rank number one.

Google still has to crawl all of those pages that it told us to build, and that creates this character over here, I think: Marjory the Trash Heap. If you remember Fraggle Rock, Marjory the Trash Heap was the all-knowing oracle. But when we're all creating kind of low- to mid-quality content just to have a separate page for every topic, that makes Google's life harder, and that of course makes our life harder.

So why are we doing all of this work? The answer is because Google can only index pages, and if the page is too long or has too many topics, Google gets confused. So we've been enabling Google to do this. But let's pretend, go with me on this, because this is a theory, I can't prove it. But if Google didn't have to index a full page, or wasn't locked into that, and could just index a piece of a page, then that makes it easier for Google to understand the relationships of different topics to one page, but also to organize the bits of the page under different pieces of the Knowledge Graph.

So this page about vegetables could be indexed and organized under the vegetable node of the Knowledge Graph. But that doesn’t mean that the lettuce part of the page couldn’t be indexed separately under the lettuce portion of the Knowledge Graph and so on, celery to celery and radish to radish. Now I know this is novel, and it’s hard to think about if you’ve been doing SEO for a long time.
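As a toy sketch of that idea (hypothetical structure, not anything Google actually exposes): one URL, several fragments, and each fragment indexable under its own topic node.

```python
# Hypothetical: one page, multiple indexable fragments, each organized
# under a different Knowledge Graph topic node.
page = {
    "url": "https://example.com/vegetables",
    "fragments": [
        {"handle": "#lettuce", "topic": "lettuce"},
        {"handle": "#celery", "topic": "celery"},
        {"handle": "#radishes", "topic": "radish"},
    ],
}

# Build an index from topic -> deep links, so each fragment can rank
# and be scrolled to independently of the page as a whole.
index = {}
for frag in page["fragments"]:
    index.setdefault(frag["topic"], []).append(page["url"] + frag["handle"])

print(index["lettuce"])
```

The page still ranks under "vegetables" as a whole, but each fragment has its own entry that can surface for its own topic.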

But let’s think about why Google would want to do this. Google has been moving towards all of these new kinds of search experiences where we have voice search, we have the Google Home Hub kind of situation with a screen, or we have mobile searches. If you think about what Google has been doing, we’ve seen the increase in people also ask, and we’ve seen the increase in featured snippets.

They’ve actually been kind of, sort of making fragments for a long time or indexing fragments and showing them in featured snippets. The difference between that and fraggles is that when you click through on a fraggle, when it ranks in a search result, Google scrolls to that portion of the page automatically. That’s the handle portion.

So handles you may have heard of before. They’re kind of old-school web building. We call them bookmarks, anchor links, anchor jump links, stuff like that. It’s when it automatically scrolls to the right portion of the page. But what we’ve seen with fraggles is Google is lifting bits of text, and when you click on it, they’re scrolling directly to that piece of text on a page.
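For illustration, modern Chromium-based browsers support text fragment URLs (the `#:~:text=` syntax), which produce this same scroll-and-highlight behavior without the page author adding any anchor. A small sketch of building such a link:

```python
from urllib.parse import quote

def text_fragment_url(page_url, snippet):
    """Build a URL that asks supporting browsers to scroll to and
    highlight the given snippet on the page, the same behavior as
    the featured snippet click-throughs described here."""
    return f"{page_url}#:~:text={quote(snippet)}"

url = text_fragment_url("https://example.com/vegetables",
                        "lettuce is a leafy green")
print(url)
# The fragment is interpreted by the browser and never sent to the server.
```

The key property is that the handle lives in the URL, so a search engine can deep-link into a page it didn't author.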

So we see this already happening in some results. What's interesting is Google is overlaying the link. You don't have to program the jump link in there. Google actually finds it and puts it there for you. So Google is already doing this, especially with AMP featured snippets. If you have an AMP featured snippet, meaning a featured snippet that's lifted from an AMP page, when you click through, Google actually scrolls to and highlights the featured snippet so that you can read it in context on the page.

But it’s also happening in other kind of more nuanced situations, especially with forums and conversations where they can pick a best answer. The difference between a fraggle and something like a jump link is that Google is overlaying the scrolling portion. The difference between a fraggle and a site link is site links link to other pages, and fraggles, they’re linking to multiple pieces of the same long page.

So we want to avoid continuing to build up low-quality or mid-quality pages that might go to Marjory the Trash Heap. We want to start thinking in terms of can Google find and identify the right portion of the page about a specific topic, and are these topics related enough that they’ll be understood when indexing them towards the Knowledge Graph.

Knowledge Graph build-out into different areas

So I personally think that we’re seeing the build-out of the Knowledge Graph in a lot of different things. I think featured snippets are kind of facts or ideas that are looking for a home or validation in the Knowledge Graph. People also ask seem to be the related nodes. People also search for, same thing. Related searches, same thing. Featured snippets, oh, they’re on there twice, two featured snippets. Found on the web, which is another way where Google is putting expanders by topic and then giving you a carousel of featured snippets to click through on.



 So we’re seeing all of those things, and some SEOs are getting kind of upset that Google is lifting so much content and putting it in the search results and that you’re not getting the click. We know that 61% of mobile searches don’t get a click anymore, and it’s because people are finding the information that they want directly in a SERP.

That's tough for SEOs, but great for Google because it means Google is providing exactly what the user wants. So they're probably going to continue to do this. I think SEOs are going to change their minds and want to be in that windowed, lifted content. When Google starts doing this kind of thing for native apps, databases, websites, podcasts, and other content, those become new competitors that you didn't have to deal with when only websites ranked. But those are going to be more engaging kinds of content that Google can lift and show in a SERP even if they don't have URLs, because Google can just window them and show them.

So you’d rather be lifted than not shown at all. So that’s it for me and featured snippets. I’d love to answer your questions in the comments, and thanks very much. I hope you like the theory about fraggles.

Video transcription by Speechpad.com




Acast Open launches to give brands an on-ramp to podcasting



Amy Gesenhues

The podcasting platform Acast has launched Acast Open, making available its podcast production offerings to any brand or publisher wanting to start a podcast. Acast Open includes three subscription models — Starter, Influencer and Ace — that come with different levels of support and analytics.

Why we should care

Adobe Analytics reported that podcast app usage grew 60% over the past year. Not only does this translate to new advertising opportunities for brands — the increase in podcast popularity opens the door for any company or executive ready to take their content marketing to a new level via a branded podcast.

"Digital audio brings a new and untapped audience who are not reachable via traditional media," said Sally Yu, director of research and insights for BBC Global News' APAC division, during a recent event organized by BBC News and Campaign Asia. "It brings additive value to the traditional media reach." Outside of music streaming, the top three forms of audio content consumed right now are music (67%), news (50%) and podcasts (37%), according to a study commissioned by BBC News that focused on the commercial opportunities of branded podcasts.

If your brand — or CEO — has value to add to a larger industry conversation, a branded podcast may be the piece of content marketing that sets you apart from your competition and helps you reach a whole new audience. Platforms like Acast and Spotify’s “create a podcast” app aim to make it easier for brands to join the ever-growing list of podcasters.

More on the news

  • Acast reports that any podcasts produced on its platform that appear to be attracting “a significant enough listenership” may be invited to join its premium network of podcast shows.
  • Acast’s current network includes a number of popular podcasts, such as “My Dad Wrote a Porno,” “Forever35,” and “Wahlgren & Wistam.”
  • Acast Open is the result of Acast’s acquisition of Pippa, a technology platform that provides hosting, analytics, and monetization capabilities for podcasters. Acast purchased Pippa in April.
  • The free Starter model includes a podcast RSS feed for distribution, basic analytics and a basic website for the podcast. The Influencer level is $14.99 a month and comes with advanced analytics and YouTube and Spotify support. Ace, the most expensive offering at $29.99 a month, is designed for companies in need of more advanced podcasting tools.

About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.






App Store SEO: How to Diagnose a Drop in Traffic & Win It Back



Joel.Mesherghi

For some organizations, mobile apps can be an important means to capturing new leads and customers, so it can be alarming when you notice your app visits are declining.

However, while there is content on how to optimize your app, otherwise known as ASO (App Store Optimization), there is little information out there on the steps required to diagnose a drop in app visits.

Although there are overlaps with traditional search, there are unique factors that play a role in app store visibility.

The aim of this blog is to give you a solid foundation when trying to investigate a drop in app store visits, and then we'll go through some quick-fire opportunities to win that traffic back.

We’ll go through the process of investigating why your app traffic declined, including:

  1. Identifying potential external factors
  2. Identifying the type of keywords that dropped in visits
  3. Analyzing app user engagement metrics

And we’ll go through some ways to help you win traffic back including:

  1. Spying on your competitors
  2. Optimizing your store listing
  3. Investing in localisation

Investigating why your app traffic declined

Step 1. Identify potential external factors

Some industries/businesses will have certain periods of the year where traffic may drop due to external factors, such as seasonality.

Before you begin investigating a traffic drop further:

  • Talk to your point of contact and ask whether seasonality impacts their business, or whether there are general industry trends at play. For example, aggregator sites like SkyScanner may see a drop in app visits after the busy period at the start of the year.
  • Identify whether app installs actually dropped. If they didn’t, then you probably don’t need to worry about a drop in traffic too much and it could be Google’s and Apple’s algorithms better aligning the intent of search terms.

Step 2. Identify the type of keywords that dropped in visits

Like traditional search, identifying the type of keywords (branded and non-branded), as well as the individual keywords that saw the biggest drop in app store visits, will provide much needed context and help shape the direction of your investigation. For instance:

If branded terms saw the biggest drop-off in visits this could suggest:

  1. There has been a decrease in the amount of advertising spend that builds brand/product awareness
  2. Competitors are bidding on your branded terms
  3. The app name/brand has changed and hasn’t been able to mop up all previous branded traffic

If non-branded terms saw the biggest drop off in visits this could suggest:

  1. You’ve made recent optimisation changes that have had a negative impact
  2. User engagement signals, such as app crashes, or app reviews have changed for the worse
  3. Your competition have better optimised their app and/or provide a better user experience (particularly relevant if an app receives a majority of its traffic from a small set of keywords)
  4. Your app has been hit by an algorithm update

If both branded and non-branded terms saw the biggest drop off in visits this could suggest:

  1. You’ve violated Google’s policies on promoting your app.
  2. There are external factors at play

To get data for your Android app

To get data for your Android app, sign into your Google Play Console account.

Google Play Console provides a wealth of data on the performance of your Android app, with particularly useful insights on user engagement metrics that influence app store ranking (more on these later).

However, keyword specific data will be limited. Google Play Console will show you the individual keywords that delivered the most downloads for your app, but the majority of keyword visits will likely be unclassified: mid to long-tail keywords that generate downloads, but don’t generate enough downloads to appear as isolated keywords. These keywords will be classified as “other”.

Your chart might look like the below. Repeat the same process for branded terms.

Above: Graph of a client’s non-branded Google Play Store app visits. The number of visits are factual, but the keywords driving visits have been changed to keep anonymity.

To get data for your iOS app

To get data on the performance of your iOS app, Apple has App Store Connect. Like Google Play Console, you'll be able to get your hands on user engagement metrics that can influence the ranking of your app.

However, keyword data is even scarcer than in Google Play Console. You'll only be able to see the total number of impressions your app's icon has received on the App Store. If you've seen a drop in visits for both your Android and iOS apps, then you could use Google Play Console data as a proxy for keyword performance.

If you use an app rank tracking tool, such as TheTool, you can somewhat plug gaps in knowledge for the keywords that are potentially driving visits to your app.

Step 3. Analyze app user engagement metrics

User engagement metrics that underpin a good user experience have a strong influence on how your app ranks and both Apple and Google are open about this.

Google states that user engagement metrics like app crashes, ANR rates (application not responding) and poor reviews can limit exposure opportunities on Google Play.

While Apple isn’t quite as forthcoming as Google when it comes to providing information on engagement metrics, they do state that app ratings and reviews can influence app store visibility.

Ultimately, Apple wants to ensure iOS apps provide a good user experience, so it's likely they use a range of additional user engagement metrics to rank an app in the App Store.

As part of your investigation, you should look into how the below user engagement metrics may have changed around the time period you saw a drop in visits to your app.

  • App rating
  • Number of ratings (newer/fresh ratings will be weighted more for Google)
  • Number of downloads
  • Installs vs uninstalls
  • App crashes and application not responding

You’ll be able to get data for the above metrics in Google Play Console and App Store Connect, or you may have access to this data internally.

Even if your analysis doesn't reveal insights, metrics like app rating influence conversion and where your app ranks in the app pack SERP feature, so it's well worth investing time in developing a strategy to improve these metrics.

One simple tactic could be to ensure you respond to negative reviews and reviews with questions. In fact, users increase their rating by +0.7 stars on average after receiving a reply.

Apple offers a few tips on asking for ratings and reviews for iOS apps.

Help win your app traffic back

Step 1. Spy on your competitors

Find out who’s ranking

When trying to identify opportunities to improve app store visibility, I always like to compare the top 5 ranking competitor apps for some priority non-branded keywords.

All you need to do is search for these keywords in Google Play and the App Store and grab the publicly available ranking factors from each app listing. You should have something like the below.

| Brand | Title | Title character length | Rating | Number of reviews | Number of installs | Description character length |
| --- | --- | --- | --- | --- | --- | --- |
| COMPETITOR 1 | [Competitor title] | 50 | 4.8 | 2,848 | 50,000+ | 3,953 |
| COMPETITOR 2 | [Competitor title] | 28 | 4.0 | 3,080 | 500,000+ | 2,441 |
| COMPETITOR 3 | [Competitor title] | 16 | 4.0 | 2,566 | 100,000+ | 2,059 |
| YOUR BRAND | [Your brand's title] | 37 | 4.3 | 2,367 | 100,000+ | 3,951 |
| COMPETITOR 4 | [Competitor title] | 7 | 4.1 | 1,140 | 100,000+ | 1,142 |
| COMPETITOR 5 | [Competitor title] | 24 | 4.5 | 567 | 50,000+ | 2,647 |

Above: anonymized table of a client's Google Play competitors

From this, you may get some indications as to why an app ranks above you. For instance, we see “Competitor 1” not only has the best app rating, but has the longest title and description. Perhaps they better optimized their title and description?

We can also see that competitors that rank above us generally have a larger number of total reviews and installs, which aligns with both Google’s and Apple’s statements about the importance of user engagement metrics.

With the above comparison information, you can dig a little deeper, which leads us on nicely to the next section.

Optimize your app text fields

Keywords you add to text fields can have a significant impact on app store discoverability.

As part of your analysis, you should look into how your keyword optimization differs from competitors and identify any opportunities.

For Google Play, adding keywords to the below text fields can influence rankings:

  • Keywords in the app title (50 characters)
  • Keywords in the app description (4,000 characters)
  • Keywords in short description (80 characters)
  • Keywords in URL
  • Keywords in your app name

When it comes to the App Store, adding keywords to the below text fields can influence rankings:

  • Keywords in the app title (30 characters)
  • Using the 100 character keywords field (a dedicated 100-character field to place keywords you want to rank for)
  • Keywords in your app name
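The character limits listed above are easy to sanity-check programmatically. A minimal sketch; the limits are the ones quoted in this post, so verify them against the current store documentation before relying on them:

```python
# Character limits as quoted above (verify against current store docs).
LIMITS = {
    "play_title": 50,
    "play_description": 4000,
    "play_short_description": 80,
    "appstore_title": 30,
    "appstore_keywords": 100,
}

def check_field(name, text):
    """Return (within_limit, characters_remaining) for a listing field."""
    limit = LIMITS[name]
    return len(text) <= limit, limit - len(text)

ok, remaining = check_field("appstore_title", "Job Tracker: Search & Apply")
print(ok, remaining)  # → True 3
```

Running this over every text field before you publish catches truncation surprises early.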

To better understand how your optimisation tactics hold up, I recommend comparing your app text fields to your competitors'.

For example, if I want to know how frequently competitors mention keywords in their app descriptions on Google Play (keywords in the description field are a ranking factor), then I'd create a table like the one below.

| Keyword | COMPETITOR 1 | COMPETITOR 2 | COMPETITOR 3 | YOUR BRAND | COMPETITOR 4 | COMPETITOR 5 |
| --- | --- | --- | --- | --- | --- | --- |
| job | 32 | 9 | 5 | 40 | 3 | 2 |
| job search | 12 | 4 | 10 | 9 | 10 | 8 |
| employment | 2 | 0 | 0 | 5 | 0 | 3 |
| job tracking | 2 | 0 | 0 | 4 | 0 | 0 |
| employment app | 7 | 2 | 0 | 4 | 2 | 1 |
| employment search | 4 | 1 | 1 | 5 | 0 | 0 |
| job tracker | 3 | 0 | 0 | 1 | 0 | 0 |
| recruiter | 2 | 0 | 0 | 1 | 0 | 0 |

Above: anonymized table of a client's Google Play competitors

From the above table, I can see that the number 1 ranking competitor (competitor 1) has more mentions of “job search” and “employment app” than I do.

Whilst there are many factors that decide the position at which an app ranks, I could deduce that I need to increase the frequency of said keywords in my Google Play app description to help improve ranking.

Be careful though: writing unnatural, keyword stuffed descriptions and titles will likely have an adverse effect.

Remember, as well as being optimized for machines, text fields like your app title and description are meant to be a compelling "advertisement" of your app for users.

I’d repeat this process for other text fields to uncover other keyword insights.
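The frequency counts behind a comparison like this can be generated with a short script. A minimal sketch, where the description and keyword list are placeholders rather than real client data:

```python
import re

def keyword_frequency(description, keywords):
    """Count whole-phrase, case-insensitive occurrences of each keyword
    in an app description."""
    text = description.lower()
    return {
        kw: len(re.findall(r"\b" + re.escape(kw.lower()) + r"\b", text))
        for kw in keywords
    }

# Placeholder description, standing in for a scraped app listing.
description = ("Find a job fast. Our job search tools track every "
               "job application and alert you to new employment matches.")
print(keyword_frequency(description, ["job", "job search", "employment"]))
```

Run it over each competitor's description and you have the rows of the table above without hand-counting.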

Step 2. Optimize your store listing

Your store listing is the home of your app on Google Play. It's where users can learn about your app, read reviews and more. And surprisingly, not all apps take full advantage of developing an immersive store listing experience.

Whilst Google doesn't seem to directly state that fully utilizing the majority of store listing features impacts your app's discoverability, it's fair to speculate that there may be some ranking consideration behind this.

At the very least, investing in your store listing could improve conversion and you can even run A/B tests to measure the impact of your changes.

You can improve the overall user experience and content found in the store listing by adding video trailers of your app, quality creative assets, a distinctive app icon (you'll want your icon to stand out amongst a sea of other app icons) and more.

You can read Google’s best practice guide on creating a compelling Google Play store listing to learn more.

Step 3. Invest in localization

The saying goes “think global, act local” and this is certainly true of apps.

Previous studies have revealed that 72.4% of global consumers preferred to use their native language when shopping online and that 56.2% of consumers said that the ability to obtain information in their own language is more important than price.

It makes logical sense. The better you can personalize your product for your audience, the better your results will be, so go the extra mile and localize your Google Play and App Store listings.

Google has a handy checklist for localization on Google Play and Apple has a comprehensive resource on internationalizing your app on the App Store.

Wrap up

A drop in visits of any kind causes alarm and panic. Hopefully this blog gives you a good starting point if you ever need to investigate why an app's traffic has dropped, as well as some quick-fire opportunities to win it back.

If you’re interested in further reading on ASO, I recommend reading App Radar’s and TheTool’s guides to ASO, as well as app search discoverability tips from Google and Apple themselves.




Better Content Through NLP (Natural Language Processing) – Whiteboard Friday



RuthBurrReedy

Gone are the days of optimizing content solely for search engines. For modern SEO, your content needs to please both robots and humans. But how do you know that what you’re writing can check the boxes for both man and machine?

In today’s Whiteboard Friday, Ruth Burr Reedy focuses on part of her recent MozCon 2019 talk and teaches us all about how Google uses NLP (natural language processing) to truly understand content, plus how you can harness that knowledge to better optimize what you write for people and bots alike.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

Howdy, Moz fans. I’m Ruth Burr Reedy, and I am the Vice President of Strategy at UpBuild, a boutique technical marketing agency specializing in technical SEO and advanced web analytics. I recently spoke at MozCon on a basic framework for SEO and approaching changes to our industry that thinks about SEO in the light of we are humans who are marketing to humans, but we are using a machine as the intermediary.

Those videos will be available online at some point. [Editor’s note: that point is now!] But today I wanted to talk about one point from my talk that I found really interesting and that has kind of changed the way that I approach content creation, and that is the idea that writing content that is easier for Google, a robot, to understand can actually make you a better writer and help you write better content for humans. It is a win-win. 

The relationships between entities, words, and how people search

To understand how Google is currently approaching parsing content and understanding what content is about, Google is spending a lot of time and a lot of energy and a lot of money on things like neural matching and natural language processing, which seek to understand basically when people talk, what are they talking about?

This goes along with the evolution of search to be more conversational. But there are a lot of times when someone is searching, but they don’t totally know what they want, and Google still wants them to get what they want because that’s how Google makes money. They are spending a lot of time trying to understand the relationships between entities and between words and how people use words to search.

The example that Danny Sullivan gave online, which I think is a really great example, is someone experiencing the soap opera effect on their TV. If you've ever seen a soap opera, you've noticed that they look kind of weird. Someone might be experiencing that, but not knowing what it's called, they can't Google "soap opera effect," because they don't know the term.

They might search something like, “Why does my TV look funny?” Neural matching helps Google understand that when somebody is searching “Why does my TV look funny?” one possible answer might be the soap opera effect. So they can serve up that result, and people are happy. 

Understanding salience

As we’re thinking about natural language processing, a core component of natural language processing is understanding salience.

Salience, content, and entities

Salience is a one-word way to sum up the question: to what extent is this piece of content about this specific entity? At this point, Google is really good at extracting entities from a piece of content. Entities are basically nouns: people, places, things, proper nouns, regular nouns.

Entities are things, people, numbers, and so on. Google is really good at pulling those out and saying, "Okay, here are all of the entities contained within this piece of content." Salience attempts to understand how they're related to each other, because what Google is really trying to understand when it crawls a page is: What is this page about, and is this a good example of a page about this topic?

Salience really gets at that second piece: to what extent is any given entity the topic of a piece of content? It's often amazing the degree to which a piece of content that a person has created is not actually about anything. I think we've all experienced that.

You’re searching and you come to a page and you’re like, “This was too vague. This was too broad. This said that it was about one thing, but it was actually about something else. I didn’t find what I needed. This wasn’t good information for me.” As marketers, we’re often on the other side of that, trying to get our clients to say what their product actually does on their website or say, “I know you think that you created a guide to Instagram for the holidays. But you actually wrote one paragraph about the holidays and then seven paragraphs about your new Instagram tool. This is not actually a blog post about Instagram for the holidays. It’s a piece of content about your tool.” These are the kinds of battles that we fight as marketers. 

Natural Language Processing (NLP) APIs

Fortunately, there are now a number of different APIs that you can use to experiment with natural language processing, including Google's own Natural Language API.

Is it as sophisticated as what they’re using on their own stuff? Probably not. But you can test it out. Put in a piece of content and see (a) what entities Google is able to extract from it, and (b) how salient Google feels each of these entities is to the piece of content as a whole. Again, to what degree is this piece of content about this thing?

So this natural language processing API, which you can try for free (and which is actually not that expensive if you want to build a tool with it), will assign each entity it can extract a salience score between 0 and 1, saying, "Okay, how sure are we that this piece of content is about this thing versus just containing it?"

So the closer you get to 1, the more confident the tool is that this piece of content is about this thing. 0.9 would be really, really good. 0.01 means the entity is there, but the tool isn't sure how closely it's related.
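To make that concrete, here's a quick Python sketch that ranks extracted entities by their salience scores. The response here is a hand-built stand-in shaped roughly like what an entity-analysis API returns; the entity names and numbers are made up for illustration, not real API output.

```python
# Toy sketch: rank extracted entities by salience. The response dict is
# a hand-made stand-in shaped roughly like an analyze-entities JSON
# response; the entity names and scores are illustrative.
response = {
    "entities": [
        {"name": "chocolate chip cookie recipe", "type": "OTHER", "salience": 0.62},
        {"name": "butter", "type": "CONSUMER_GOOD", "salience": 0.11},
        {"name": "cookie", "type": "OTHER", "salience": 0.08},
    ]
}

def rank_entities(response):
    """Return (name, salience) pairs, most salient first."""
    return sorted(
        ((e["name"], e["salience"]) for e in response["entities"]),
        key=lambda pair: pair[1],
        reverse=True,
    )

for name, salience in rank_entities(response):
    print(f"{salience:.2f}  {name}")
```

Sorting like this makes it obvious at a glance whether your intended topic actually tops the list.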

A delicious example of how salience and entities work

The example I have here (this is not taken from a real piece of content; the numbers are made up, it's just an example) is a chocolate chip cookie recipe. You would want "chocolate chip cookie recipe," "chocolate chip cookies," something like that, to be the number one, most salient entity, and you would want it to have a pretty high salience score.

You would want the tool to feel pretty confident, yes, this piece of content is about this topic. But what you can also see is the other entities it’s extracting and to what degree they are also salient to the topic. So you can see things like if you have a chocolate chip cookie recipe, you would expect to see things like cookie, butter, sugar, 350, which is the temperature you heat your oven, all of the different things that come together to make a chocolate chip cookie recipe.

But I think that it’s really, really important for us as SEOs to understand that salience is the future of related keywords. We’re beyond the time when to optimize for chocolate chip cookie recipe, we would also be looking for things like chocolate recipe, chocolate chips, chocolate cookie recipe, things like that. Stems, variants, TF-IDF, these are all older methodologies for understanding what a piece of content is about.

Instead, what we need to understand is which entities Google, using its vast body of knowledge (things like Freebase and large portions of the internet), sees co-occurring at such a rate that it feels reasonably confident a piece of content about one entity would, in order to be salient to that entity, also include these others.

Using an expert is the best way to create content that’s salient to a topic

So for "chocolate chip cookie recipe," we're now also making sure we're adding things like butter, flour, and sugar. This is really easy to do if you actually have a chocolate chip cookie recipe to put up there. I think the content trend we're going to start seeing in SEO is that the best way to create content that is salient to a topic is to have an actual expert in that topic create it.

Somebody with deep knowledge of a topic is naturally going to include co-occurring terms, because they know how to create something that’s about what it’s supposed to be about. I think what we’re going to start seeing is that people are going to have to start paying more for content marketing, frankly. Unfortunately, a lot of companies seem to think that content marketing is and should be cheap.

Content marketers, I feel you on that. It sucks, and it’s no longer the case. We need to start investing in content and investing in experts to create that content so that they can create that deep, rich, salient content that everybody really needs. 

How can you use this API to improve your own SEO? 

One of the things that I like to do with this kind of information (and this is something that I've done for years, just not in this context) is look at a prime optimization target: pages that rank for a topic, but rank on page two.

What this often means is that Google understands that that keyword is a topic of the page, but it doesn’t necessarily understand that it is a good piece of content on that topic, that the page is actually solely about that content, that it’s a good resource. In other words, the signal is there, but it’s weak.

What you can do is take content that ranks, but not well, run it through this natural language API or another natural language processing tool, and look at how the entities are extracted and how Google determines they're related to each other. Sometimes you might need to do some disambiguation. In this example, you'll notice that while "chocolate cookies" is classified as a work of art (and I agree), "cookie" here is classified as "other."

This is because cookie means more than one thing. There’s cookies, the baked good, but then there’s also cookies, the packet of data. Both of those are legitimate uses of the word “cookie.” Words have multiple meanings. If you notice that Google, that this natural language processing API is having trouble correctly classifying your entities, that’s a good time to go in and do some disambiguation.

Make sure that the terms surrounding that term are clearly saying, "No, I mean the baked good, not the packet of data." That's a really great way to bump up your salience. Also look at whether you have a strong salience score for your primary entity. You'd be amazed at how many pieces of content you can plug into this tool where the top, most salient entity still scores only a 0.01 or a 0.14.
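If you want to automate that sanity check, a tiny (and entirely hypothetical) audit helper might look like this. The entity tuples, the type labels, and the 0.3 threshold are my own illustrative assumptions, not output from any real tool:

```python
# Hypothetical audit helper: flag a target entity whose salience is weak
# or whose type came back as the vague OTHER category. The tuples, type
# labels, and the 0.3 threshold are illustrative assumptions.
def audit_entities(entities, target, min_salience=0.3):
    """entities: list of (name, type, salience) tuples from an NLP tool."""
    issues = []
    matches = [e for e in entities if e[0].lower() == target.lower()]
    if not matches:
        issues.append(f"target entity '{target}' was not extracted at all")
    else:
        name, etype, salience = matches[0]
        if salience < min_salience:
            issues.append(
                f"'{name}' salience is only {salience:.2f}; make the topic more central"
            )
        if etype == "OTHER":
            issues.append(
                f"'{name}' was classified as OTHER; consider disambiguating nearby copy"
            )
    return issues

entities = [
    ("chocolate chip cookies", "WORK_OF_ART", 0.55),
    ("cookie", "OTHER", 0.04),
]
print(audit_entities(entities, "cookie"))
```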

A lot of times the API is like “I think this is what it’s about,” but it’s not sure. This is a great time to go in and bump up that content, make it more robust, and look at ways that you can make those entities easier to both extract and to relate to each other. This brings me to my second point, which is my new favorite thing in the world.

Writing for humans and writing for machines: you can now do both at the same time. You no longer have to choose, and you really haven't had to in a long time. The idea that you might keyword stuff or otherwise create content for Google that your users might not see or care about is way, way, way over.

Now you can create content for Google that also is better for users, because the tenets of machine readability and human readability are moving closer and closer together. 

Tips for writing for human and machine readability:

Reduce semantic distances!

What I’ve done here is I did some research not on natural language processing, but on writing for human readability, that is advice from writers, from writing experts on how to write better, clearer, easier to read, easier to understand content.Then I pulled out the pieces of advice that also work as pieces of advice for writing for natural language processing. So natural language processing, again, is the process by which Google or really anything that might be processing language tries to understand how entities are related to each other within a given body of content.

Short, simple sentences

Short, simple sentences. Write simply. Don’t use a lot of flowery language. Short sentences and try to keep it to one idea per sentence. 

One idea per sentence

If you’re running on, if you’ve got a lot of different clauses, if you’re using a lot of pronouns and it’s becoming confusing what you’re talking about, that’s not great for readers.

It also makes it harder for machines to parse your content. 

Connect questions to answers

Then closely connecting questions to answers. So don’t say, “What is the best temperature to bake cookies? Well, let me tell you a story about my grandmother and my childhood,” and 500 words later here’s the answer. Connect questions to answers. 

What all three of those readability tips have in common is they boil down to reducing the semantic distance between entities.

If you want natural language processing to understand that two entities in your content are closely related, move them closer together in the sentence. Move the words closer together. Reduce the clutter, reduce the fluff, reduce the number of semantic hops that a robot might have to take between one entity and another to understand the relationship, and you’ve now created content that is more readable because it’s shorter and easier to skim, but also easier for a robot to parse and understand.
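As a very rough illustration of the mechanics, you can literally count the tokens separating two entity mentions. Real natural language processing does far more than this, but the intuition holds: fewer hops, clearer relationship. The sample sentences below are invented:

```python
import string

# Count the tokens separating two entity mentions, ignoring punctuation.
# Returns None if either term is missing from the text.
def token_distance(text, a, b):
    words = [w.strip(string.punctuation) for w in text.lower().split()]
    if a not in words or b not in words:
        return None
    return abs(words.index(a) - words.index(b)) - 1

wordy = "bake the cookies, which my grandmother, a wonderful woman, always made with butter"
tight = "bake the cookies with butter"
print(token_distance(wordy, "cookies", "butter"))  # a long hop
print(token_distance(tight, "cookies", "butter"))  # a short hop
```

The tighter sentence keeps "cookies" and "butter" one word apart, which is easier for a reader to skim and for a parser to relate.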

Be specific first, then explain nuance

Going back to the example of “What is the best temperature to bake chocolate chip cookies at?” Now the real answer to what is the best temperature to bake chocolate cookies is it depends. Hello. Hi, I’m an SEO, and I just answered a question with it depends. It does depend.

That is true, and that is real, but it is not a good answer. It is also not the kind of thing that a robot could extract and reproduce in, for example, voice search or a featured snippet. If somebody says, “Okay, Google, what is a good temperature to bake cookies at?” and Google says, “It depends,” that helps nobody even though it’s true. So in order to write for both machine and human readability, be specific first and then you can explain nuance.

Then you can go into the details. So a better, just as correct answer to “What is the temperature to bake chocolate chip cookies?” is the best temperature to bake chocolate chip cookies is usually between 325 and 425 degrees, depending on your altitude and how crisp you like your cookie. That is just as true as it depends and, in fact, means the same thing as it depends, but it’s a lot more specific.

It’s a lot more precise. It uses real numbers. It provides a real answer. I’ve shortened the distance between the question and the answer. I didn’t say it depends first. I said it depends at the end. That’s the kind of thing that you can do to improve readability and understanding for both humans and machines.

Get to the point (don’t bury the lede)

Get to the point. Don’t bury the lead. All of you journalists who try to become content marketers, and then everybody in content marketing said, “Oh, you need to wait till the end to get to your point or they won’t read the whole thing,”and you were like, “Don’t bury the lead,” you are correct. For those of you who aren’t familiar with journalism speak, not burying the lead basically means get to the point upfront, at the top.

Include all the information that somebody would really need to get from that piece of content. If they don’t read anything else, they read that one paragraph and they’ve gotten the gist. Then people who want to go deep can go deep. That’s how people actually like to consume content, and surprisingly it doesn’t mean they won’t read the content. It just means they don’t have to read it if they don’t have time, if they need a quick answer.

The same is true with machines. Get to the point upfront. Make it clear right away what the primary entity, the primary topic, the primary focus of your content is and then get into the details. You’ll have a much better structured piece of content that’s easier to parse on all sides. 

Avoid jargon and “marketing speak”

Avoid jargon. Avoid marketing speak. Not only is it terrible, it's very hard to understand. You see this a lot; I'm going back again to the example of getting your clients to say what their products do. If you work with a lot of B2B companies, you will often run into this. "Yes, but what does it do?" "It provides solutions to streamline the workflow and blah, blah." "Okay, but what does it do?" This is the kind of thing that can be really, really hard for companies to get out of their own heads about, but it's so important for users and for machines.

Avoid jargon. Avoid marketing speak. Not to get too tautological, but the more esoteric a word is, the less commonly it’s used. That’s actually what esoteric means. What that means is the less commonly a word is used, the less likely it is that Google is going to understand its semantic relationships to other entities.

Keep it simple. Be specific. Say what you mean. Wipe out all of the jargon. By wiping out jargon and kind of marketing speak and kind of the fluff that can happen in your content, you’re also, once again, reducing the semantic distances between entities, making them easier to parse. 

Organize your information to match the user journey

Organize it and map it out to the user journey. Think about the information somebody might need and the order in which they might need it. 

Break out subtopics with headings

Then break it out with subheadings. This is like very, very basic writing advice, and yet you all aren’t doing it. So if you’re not going to do it for your users, do it for machines. 

Format lists with bullets or numbers

You can also really impact skimmability for users by breaking out lists with bullets or numbers.

The great thing about that is that breaking out a list with bullets or numbers also makes information easier for a robot to parse and extract. If a lot of these tips seem like they’re the same tips that you would use to get featured snippets, they are, because featured snippets are actually a pretty good indicator that you’re creating content that a robot can find, parse, understand, and extract, and that’s what you want.

So if you’re targeting featured snippets, you’re probably already doing a lot of these things, good job. 

Grammar and spelling count!

The last thing, which I shouldn’t have to say, but I’m going to say is that grammar and spelling and punctuation and things like that absolutely do count. They count to users. They don’t count to all users, but they count to users. They also count to search engines.

Things like grammar, spelling, and punctuation are very, very easy signals for a machine to find and parse. Google has been specific, in things like the "Quality Rater Guidelines," that a well-written, well-structured, well-spelled, grammatically correct document is a sign of authoritativeness. I'm not saying that having a greatly spelled document means you immediately rocket to the top of the results.

I am saying that if you’re not on that stuff, it’s probably going to hurt you. So take the time to make sure everything is nice and tidy. You can use vernacular English. You don’t have to be perfect “AP Style Guide” all the time. But make sure that you are formatting things properly from a grammatical standpoint as well as a technical standpoint. What I love about all of this, this is just good writing.

This is good writing. It’s easy to understand. It’s easy to parse. It’s still so hard, especially in the marketing world, to get out of that world of jargon, to get to the point, to stop writing 2,000 words because we think we need 2,000 words, to really think about are we creating content that’s about what we think it’s about.

Use these tools to understand how readable, parsable, and understandable your content is

So my hope for the SEO world and for you is that you can use these tools not just to think about how to dial in the perfect keyword density or whatever to get an almost perfect score on the salience in the natural language processing API. What I’m hoping is that you will use these tools to help yourself understand how readable, how parsable, and how understandable your content is, how much your content is about what you say it’s about and what you think it’s about so you can create better stuff for users.

It makes the internet a better place, and it will probably make you some money as well. So these are my thoughts. I’d love to hear in the comments if you’re using the natural language processing API now, if you’ve built a tool with it, if you want to build a tool with it, what do you think about this, how do you use this, how has it gone. Tell me all about it. Holla atcha girl.

Have a great Friday.

Video transcription by Speechpad.com




Shutterstock’s new music subscription offers affordable music licensing for content creators



Amy Gesenhues

Shutterstock has launched an unlimited music subscription plan for content creators and digital marketers, offering more than 11,000 tracks that can be included in web-based content, including YouTube videos, podcasts and conference presentations.

The subscription fee is $149 per month, and includes access to the Shutterstock Music library with music tracks searchable by genre, mood or popularity. The company says its music selection has been curated by professional musicians, with hundreds of tracks added every month.

Why we should care

This latest offering from Shutterstock gives digital marketers and content creators the ability to spice up their content — putting a professional shine on things like podcast intros, YouTube ads or conference and trade show presentations. At $149 per month, it’s a cost-effective feature for marketers lacking the budget and resources to invest in high-end music productions for various projects.

“Our new unlimited licensing option empowers creators to license music as their needs arise and frees them to focus on the creative vision rather than worrying about budget,” said Shutterstock VP of Product Christopher Cosentino.

Shutterstock is also adding “shorts” and “loops” music offerings to all of its licensing plans, making available shortened versions of a song (shorts) and segments of a longer song that repeat indefinitely (loops).

More on the news

  • The newly added shorts and loops come at no extra cost with all of Shutterstock’s plans. The shorts offer 15-second, 30-second and 60-second versions of songs.
  • Shutterstock now boasts a community of more than one million contributors, with hundreds of thousands of images added every week.
  • The image, video and now music licensing site has more than 280 million images and more than 16 million video clips available.

About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.






It’s Content and It’s Links – Are We Making SEO Too Complicated?



AndrewDennis33

Content and links — to successfully leverage search as a marketing channel you need useful content and relevant links.

Many experienced SEOs have run numerous tests and experiments to correlate backlinks with higher rankings, and Google has espoused the importance of “great content” for as long as I can remember.

In fact, a Google employee straight up told us that content and links are two of the three (the other being RankBrain) most important ranking factors in its search algorithm.

So why do we seem to overcomplicate SEO by chasing new trends and tactics, overreacting to fluctuations in rankings, and obsessing over the length of our title tags? SEO is simple — it’s content and it’s links.

Now, this is a simple concept, but it is much more nuanced and complex to execute well. However, I believe that by getting back to basics and focusing on these two pillars of SEO we can all spend more time doing the work that will be most impactful, creating a better, more connected web, and elevating SEO as a practice within the marketing realm.

To support this movement, I want to provide you with strategic, actionable takeaways that you can leverage in your own content marketing and link building campaigns. So, without further ado, let’s look at how you can be successful in search with content and links.

Building the right content

As the Wu-Tang Clan famously said, “Content rules everything around me, C.R.E.A.M,” …well, it was something like that. The point is, everything in SEO begins and ends with content. Whether it’s a blog post, infographic, video, in-depth guide, interactive tool, or something else, content truly rules everything around us online.

Content attracts and engages visitors, building positive associations with your brand and inspiring them to take desired actions. Content also helps search engines better understand what your website is about and how they should rank your pages within their search results.


So where do you start with something as wide-reaching and important as a content strategy? Well, if everything in SEO begins and ends with content, then everything in content strategy begins and ends with keyword research.

Proper keyword research is the difference between a targeted content strategy that drives organic visibility and simply creating content for the sake of creating content. But don’t just take my word for it — check out this client project where keyword research was executed after a year of publishing content that wasn’t backed by keyword analysis:

[Chart: organic sessions from content, grouped by publication year]

(Note: Each line represents content published within a given year, not total organic sessions of the site.)

In 2018, we started creating content based on keyword opportunities. The performance of that content has quickly surpassed (in terms of organic sessions) the older pages that were created without strategic research.

Start with keyword research

The concept of keyword research is straightforward — find the key terms and phrases that your audience uses to find information related to your business online. However, the execution of keyword research can be a bit more nuanced, and simply starting is often the most difficult part.

The best place to start is with the keywords that are already bringing people to your site, which you can find within Google Search Console.

Beyond the keywords that already bring people to your website, a baseline list of seed keywords can help you expand your keyword reach.

Seed keywords are the foundational terms that are related to your business and brand.

As a running example, let’s use Quip, a brand that sells oral care products. Quip’s seed keywords would be:

  • [toothbrush]
  • [toothpaste]
  • [toothbrush set]
  • [electric toothbrush]
  • [electric toothbrush set]
  • [toothbrush subscription]

These are some of the most basic head terms related to Quip’s products and services. From here, the list could be expanded, using keyword tools such as Moz’s Keyword Explorer, to find granular long-tail keywords and other related terms.
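As a purely mechanical sketch of that expansion step, you could draft long-tail candidates by combining seeds with modifiers and then look them up in a keyword tool. The seed and modifier lists here are examples; real suggestions come from search data, not string concatenation:

```python
# Purely mechanical sketch: combine seed keywords with modifiers to
# draft long-tail candidates worth checking in a keyword tool. The
# seeds and modifiers are illustrative examples.
seeds = ["toothbrush", "electric toothbrush"]
modifiers = ["best", "for kids", "subscription", "travel"]

def expand(seeds, modifiers):
    candidates = []
    for seed in seeds:
        for mod in modifiers:
            # Crude heuristic: adjectives go in front, qualifiers after.
            if mod == "best":
                candidates.append(f"{mod} {seed}")
            else:
                candidates.append(f"{seed} {mod}")
    return candidates

print(expand(seeds, modifiers))
```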

Expanded keyword research and analysis

The first step in keyword research and expanding your organic reach is to identify current rankings that can and should be improved.

Here are some examples of terms Moz’s Keyword Explorer reports Quip has top 50 rankings for:

  • [teeth whitening]
  • [sensitive teeth]
  • [whiten teeth]
  • [automatic toothbrush]
  • [tooth sensitivity]
  • [how often should you change your toothbrush]

These keywords represent “near-miss” opportunities for Quip, where it ranks on page two or three. Optimization and updates to existing pages could help Quip earn page one rankings and substantially more traffic.
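Pulling near-miss targets out of a rankings export is easy to script. Here's a minimal sketch; the keyword and position data are illustrative, not Quip's actual rankings:

```python
# Sketch: pick "near-miss" optimization targets from ranking data.
# The (keyword, position) pairs below are illustrative placeholders.
rankings = [
    ("teeth whitening", 14),
    ("sensitive teeth", 17),
    ("toothbrush", 3),
    ("how often should you change your toothbrush", 12),
    ("mouthwash", 48),
]

def near_misses(rankings, low=11, high=30):
    """Keywords ranking on pages two or three (positions 11-30)."""
    return [kw for kw, pos in rankings if low <= pos <= high]

print(near_misses(rankings))
```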

For example, here are the first page results for [how often should you change your toothbrush]:

[Screenshot: page one results for [how often should you change your toothbrush]]

As expected, the results here are hyper-focused on answering the question of how often a toothbrush needs to be changed, and there is a rich snippet that answers it directly.

Now, look at Quip’s page where we can see there is room for improvement in answering searcher intent:

[Screenshot: Quip’s ranking page]

The title of the page isn’t optimized for the main query, and a simple title change could help this page earn more visibility. Moz reports 1.7k–2.9k monthly search volume for [how often should you change your toothbrush]:

[Screenshot: Moz search volume for [how often should you change your toothbrush]]

This is a stark contrast to the volume reported by Moz for [why is a fresh brush head so important] which is “no data” (which usually means very small):

[Screenshot: Moz search volume for [why is a fresh brush head so important]]

Quip’s page is already ranking on page two for [how often should you change your toothbrush], so optimizing the title could help the page crack the top ten.

Furthermore, the content on the page is not optimized either:

[Screenshot: Quip’s page copy]

Rather than answering the question of how often to change a toothbrush concisely (like the page that has earned the rich snippet), the content is closer to ad copy. Putting a direct, clear answer to this question at the beginning of the content could help this page rank better.

And that’s just one query and one page!

Keyword research should uncover these types of opportunities, and with Moz’s Keyword Explorer you can also find ideas for new content through “Keyword Suggestions.”

Using Quip as an example again, we can plug in their seed keyword [toothbrush] and get multiple suggestions (MSV = monthly search volume):

  • [toothbrush holder] – MSV: 6.5k–9.3k
  • [how to properly brush your teeth] – MSV: 851–1.7k
  • [toothbrush cover] – MSV: 851–1.7k
  • [toothbrush for braces] – MSV: 501–850
  • [electric toothbrush holder] – MSV: 501–850
  • [toothbrush timer] – MSV: 501–850
  • [soft vs medium toothbrush] – MSV: 201–500
  • [electric toothbrush for braces] – MSV: 201–500
  • [electric toothbrush head holder] – MSV: 101–200
  • [toothbrush delivery] – MSV: 101–200

Using this method, we can extrapolate one seed keyword into ten more granular and related long-tail keywords — each of which may require a new page.
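A simple way to prioritize a suggestion list like this is to sort by the midpoint of each reported volume range. A minimal sketch, using a few of the ranges above:

```python
# Sketch: order keyword suggestions by the midpoint of their reported
# monthly-search-volume range (ranges taken from the list above).
suggestions = {
    "toothbrush holder": (6500, 9300),
    "how to properly brush your teeth": (851, 1700),
    "toothbrush for braces": (501, 850),
    "toothbrush timer": (501, 850),
    "toothbrush delivery": (101, 200),
}

def by_volume(suggestions):
    """Keywords sorted by estimated volume, highest first."""
    return sorted(suggestions, key=lambda kw: sum(suggestions[kw]) / 2, reverse=True)

print(by_volume(suggestions))
```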

This handful of terms generates a wealth of content ideas and different ways Quip could address pain points and reach its audience.

Another source of keyword opportunities and inspiration is your competitors. For Quip, one of its strongest competitors is Colgate, a household name brand. Moz demonstrates the difference in market position with its “Competitor Overlap” tool:

[Screenshot: Moz’s Competitor Overlap tool comparing Quip and Colgate]

Although many of Colgate’s keywords aren’t relevant to Quip, there are still opportunities to be gleaned here for Quip. One such example is [sensitive teeth], where Colgate is ranking top five, but Quip is on page two:

[Screenshot: rankings for [sensitive teeth]]

While many of the other keywords show Quip is ranking outside of the top 50, this is an opportunity that Quip could potentially capitalize on.

To analyze this opportunity, let’s look at the actual search results first.

[Screenshot: search results for [sensitive teeth]]

It’s immediately clear that the intent here is informational — something to note when we examine Quip’s page. Also, scrolling down we can see that Colgate has two pages ranking on page one:

[Screenshot: Colgate’s two page-one results]

One of these pages is from a separate domain for hygienists and other dental professionals, but it still carries the Colgate brand and further demonstrates Colgate’s investment into this query, signaling this is a quality opportunity.

The next step for investigating this opportunity is to examine Colgate’s ranking page and check if it’s realistic for Quip to beat what they have. Here is Colgate’s page:

[Screenshot: Colgate’s ranking page]

This page is essentially a blog post:

[Screenshot: Colgate’s blog-style page]

If this page is ranking, it’s reasonable to believe that Quip could craft something that would be at least as good of a result for the query, and there is room for improvement in terms of design and formatting.

One thing to note that is likely helping this page rank is the clear definition of “tooth sensitivity” and the signs and symptoms listed in the sidebar:

[Screenshot: Colgate’s sidebar definition of tooth sensitivity]

Now, let’s look at Quip’s page:

[Screenshot: Quip’s page]

This appears to be a blog-esque page as well.

[Screenshot: Quip’s blog-style page]

This page offers solid information on sensitive teeth, which matches the query’s intent and is likely why the page ranks on page two. However, the page appears to be targeted at [tooth sensitivity]:

[Screenshot: Quip’s page targeting [tooth sensitivity]]

This is another great keyword opportunity for Quip:

[Screenshot: Moz data for [tooth sensitivity]]

However, this should be a secondary opportunity to [sensitive teeth], mixed into the copy on the page but not the focal point. Also, the page one results for [tooth sensitivity] are largely the same as those for [sensitive teeth], including Colgate’s page:

[Screenshot: page one results for [tooth sensitivity]]

So, one optimization Quip could make to the page would be to change some of these headers to include “sensitive teeth” (also, these are all H3s, and the page has no H2s, which isn’t optimal). Quip could draw inspiration from the questions that Google lists in the “People also ask” section of the SERP:

[Screenshot: “People also ask” questions for [sensitive teeth]]

Also, a quick takeaway I had was that Quip’s page does not lead off with a definition of sensitive teeth or tooth sensitivity. We learned from Colgate’s page that quickly defining the term (sensitive teeth) and the associated symptoms could help the page rank better.

These are just a few of the options available to Quip to optimize its page, and as mentioned before, an investment into a sleek, easy to digest design could separate its page from the pack.

If Quip were able to move its page onto the first page of search results for [sensitive teeth], the increase in organic traffic could be significant. And [sensitive teeth] is just the tip of the proverbial iceberg; there is a wealth of opportunity in associated keywords that Quip could also rank well for:

Executing well on these content opportunities and repeating the process over and over for relevant keywords is how you scale keyword-focused content that will perform well in search and bring more organic visitors.

Google won’t rank your page highly for simply existing. If you want to rank in Google search, start by creating a page that provides the best result for searchers and deserves to rank.

At Page One Power, we’ve leveraged this strategy and seen great results for clients. Here is an example of a client that is primarily focused on content creation and their corresponding growth in organic sessions:

These 15 pages were all published in January, and you can see that roughly one month after publishing, they started taking off in terms of organic traffic. This is because these pages are backed by keyword research and optimized so well that even with few external backlinks, they can rank on or near page one for multiple queries.

However, this doesn’t mean you should ignore backlinks and link acquisition. While the above pages rank well without many links, the domain they’re on has a substantial backlink profile cultivated through strategic link building. Securing relevant, worthwhile links is still a major part of a successful SEO campaign.

Earning real links and credibility

The other half of this complicated “it’s content and it’s links” equation is… links, and while it seems straightforward, successful execution is rather difficult — particularly when it comes to link acquisition.

While there are tools and processes that can increase organization and efficiency, at the end of the day link building takes a lot of time and a lot of work — you must manually email real website owners to earn real links. As Matt Cutts famously said (we miss you, Matt!), “Link building is sweat, plus creativity.”

However, you can greatly improve your chances for success with link acquisition if you identify which pages (existing or need to be created) on your site are link-worthy and promote them for links.

Spoiler alert: these are not your “money pages.”

Converting pages certainly have a function on your website, but they typically have limited opportunities when it comes to link acquisition. Instead, you can support these pages — and other content on your site — through internal linking from more linkable pages.

So how do you identify linkable assets? Well, there are some general characteristics that directly correlate with link-worthiness:

  • Usefulness — concept explanation, step-by-step guide, collection of resources and advice, etc.
  • Uniqueness — a new or fresh perspective on an established topic, original research or data, prevailing coverage of a newsworthy event, etc.
  • Entertaining — novel game or quiz, humorous take on a typically serious subject, interactive tool, etc.

Along with these characteristics, you also need to consider the size of your potential linking audience. The further you move down your marketing funnel, the smaller the linking audience size; converting pages are traditionally difficult to earn links to because they serve a small audience of people looking to buy.

Instead, focus on assets that exist at the top of your marketing funnel and serve large audiences looking for information. The keywords associated with these pages are typically head terms that may prove difficult to rank for, but if your content is strong you can still earn links through targeted, manual outreach to relevant sites.

Ironically, your most linkable pages aren’t always the pages that will rank well for you in search, since larger audiences also mean more competition. However, using linkable assets to secure worthwhile links will help grow the authority and credibility of your brand and domain, supporting rankings for your keyword-focused and converting pages.

Going back to our Quip example, we see a page on their site that has the potential to be a linkable asset:

Currently, this page is geared more toward conversions, which hurts linkability. However, Quip could easily move conversion-focused elements to another page and internally link from this page to maintain a pathway to conversion while improving link-worthiness.

To truly make this page a linkable asset, Quip would need to add depth on the topic of how to brush your teeth and home in on a more specific audience. As the page currently stands, it is targeted at everybody who brushes. To make the page more linkable, Quip could focus on a specific age group (toddlers, young children, the elderly, etc.), or perhaps a profession or group that works odd hours or travels frequently and doesn’t have the convenience of brushing at home. An increased focus on audience will help with linkability, making this page one that shares useful information in a way that is unique and entertaining.

It also happens that [how to properly brush your teeth] was one of the opportunities we identified earlier in our (light) keyword research, so this could be a great opportunity to earn keyword rankings and links!

Putting it all together and simplifying our message

Now before we put it all together and solve SEO once and for all, you might be thinking, “What about technical and on-page SEO?!?”

And to that, I say, well those are just makeu…just kidding!

Technical and on-page elements play a major role in successful SEO and getting these elements wrong can derail the success of any content you create and undermine the equity of the links you secure.

Let’s be clear: if Google can’t crawl your site, you’re not showing up in its search results.

However, I categorize these optimizations under the umbrella of “content” within our content and links formula. If you’re not considering how search engines consume your content, along with human readers, then your content likely won’t perform well in the results of said search engines.

Rather than dive into the deep and complex world of technical and on-page SEO in this post, I recommend reading some of the great resources here on Moz to ensure your content is set up for success from a technical standpoint.

But to review the strategy I’ve laid out here, to be successful in search you need to:

  1. Research your keywords and niche – Having the right content for your audience is critical to earning search visibility and business. Before you start creating content or updating existing pages, make sure you take the time to research your keywords and niche to better understand your current rankings and position in the search marketplace.
  2. Analyze and expand keyword opportunities – Beyond understanding your current rankings, you also need to identify and prioritize available keyword opportunities. Using tools like Moz you can uncover hidden opportunities with long-tail and related key terms, ensuring your content strategy is targeting your best opportunities.
  3. Craft strategic content that serves your search goals – Using keyword analysis to inform content creation, you can build content that addresses underserved queries and helpful guides that attract links. An essential aspect of a successful content plan is balancing keyword-focused content with broader, more linkable content and ensuring you’re addressing both SEO goals.
  4. Promote your pages for relevant links – Countless new pages go live each day, and without proper promotion, even the best pages will be buried in the sea of content online. Strategic promotion of your pages will net you powerful backlinks and extra visibility from your audience.
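The keyword-prioritization idea in step 2 can be roughly sketched in code. This is an illustrative scoring approach only; the keywords, volumes, and difficulty figures below are hypothetical, and real research tools like Moz expose their own metrics:

```python
def opportunity_score(volume, difficulty, relevance):
    """Naive priority score: reward demand and relevance, penalize difficulty.

    volume: monthly searches; difficulty: 0-100; relevance: 0.0-1.0.
    """
    return volume * relevance / (1 + difficulty)

# Hypothetical keyword data, shaped like a tool export.
keywords = [
    {"kw": "sensitive teeth", "volume": 40000, "difficulty": 60, "relevance": 1.0},
    {"kw": "tooth sensitivity", "volume": 18000, "difficulty": 55, "relevance": 0.9},
    {"kw": "how to properly brush your teeth", "volume": 6000, "difficulty": 35, "relevance": 0.8},
]

# Rank opportunities from best to worst.
for k in sorted(keywords, reverse=True,
                key=lambda k: opportunity_score(k["volume"], k["difficulty"], k["relevance"])):
    print(k["kw"], round(opportunity_score(k["volume"], k["difficulty"], k["relevance"])))
```

However you weight the inputs, the point is to make prioritization repeatable rather than ad hoc, so the same process scales across your whole keyword list.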

Again, these concepts seem simple but are quite difficult to execute well. However, by drilling down to the two main factors for search visibility — content and links — you can avoid being overwhelmed or focusing on the wrong priorities and instead put all your efforts into the strategies that will provide the most SEO impact.

However, along with refocusing our own efforts, as SEOs we also need to simplify our message to the uninitiated (or as they’re also known, the other 99% of the population). I know from personal experience how quickly the eyes start to glaze over when I get into the nitty-gritty of SEO, so I typically pivot to focus on the most basic concepts: content and links.

People can wrap their minds around the simple process of creating good pages that answer a specific set of questions and then promoting those pages to acquire endorsements (backlinks). I suggest we embrace this same approach, on a broader scale, as an industry.

When we talk to potential and existing clients, colleagues, executives, etc., let’s keep things simple. If we focus on the two concepts that are the easiest to explain we will get better understanding and more buy-in for the work we do (it also happens that these two factors are the biggest drivers of success).

So go out, shout it from the rooftops — CONTENT AND LINKS — and let’s continue to do the work that will drive positive results for our websites and help secure SEO’s rightful seat at the marketing table.




The Content Distribution Playbook – Whiteboard Friday



rosssimmonds

If you’re one of the many marketers that shares your content on Facebook, Twitter, and LinkedIn before calling it good and moving on, this Whiteboard Friday is for you. In a super actionable follow-up to his MozCon 2019 presentation, Ross Simmonds reveals how to go beyond the mediocre when it comes to your content distribution plan, reaching new audiences in just the right place at the right time.

Click on the whiteboard image above to open a high resolution version in a new tab!

Video Transcription

What’s going on, Whiteboard Friday fans? My name is Ross Simmonds from Foundation Marketing, and today we’re going to be talking about how to develop a content distribution playbook that will drive meaningful and measurable results for your business. 

What is content distribution and why does it matter?

First and foremost, content distribution is the thing that you need to be thinking about if you want to combat the fact that it is becoming harder than ever before to stand out as a content marketer, as a storyteller, and as a content creator in today’s landscape. It’s getting more and more difficult to rank for content. It’s getting more and more difficult to get organic reach through our social media channels, and that is why content distribution is so important.

You are facing a time when organic reach on social continues to drop more and more, where the ability to rank is becoming even more difficult because you’re competing against more ad space. You’re competing against more featured snippets. You’re competing against more companies. Because content marketers have screamed at the top of their lungs that content is king and the world has listened, it is becoming more and more difficult to stand out amongst the noise.

Most marketers have embraced this idea because for years we screamed, “Content is king, create more content,” and that is what the world has done. Most marketers start by just creating content, hoping that traffic will come, hoping that reach will come, and hoping that, as a result of creating content, profits will follow. In reality, the profits never come because they miss a significant piece of the puzzle, which is content distribution.

In today’s video, we’re going to be talking about how you can distribute your content more effectively across a few different channels, a few different strategies, and how you can take your content to the next level. 

There are two things that you can spend when it comes to content distribution: 

  1. You can spend time, 
  2. or you can spend money. 

In today’s video, we’re going to talk about exactly how you can distribute your content so when you write that blog post, you write that landing page, when you create that e-book, you create that infographic, whatever resource you’ve developed, you can ensure that that content is reaching the right people on the right channel at the right time.

◷: Owned channels

So how can you do it? We all have heard of owned channels. Owned channels are things that you own as a business, as a brand, as an organization. These are things that you can do without question probably today. 

Email marketing

For example, email marketing, it’s very likely that you have an email list of some sort. You can distribute your content to those people. 

In-app notifications

Let’s say you have a website that offers people a solution or a service directly inside of the site. Say it’s software as a service or something of that nature. If people are logging in on a regular basis to access your product, you can use in-app notifications to let those people know that you’ve launched a blog post. Or better yet, if you have a mobile app of any sort, you can do the same thing. You can use your app to let people know that you just launched a new piece of content.

Social channels

You have social media channels. Let’s say you have Twitter, LinkedIn, Facebook. Share that content to your heart’s desire on those channels as well. 

On-site banner

If you have a website, you can update an on-site banner, at the top or in the bottom right, that is letting people know who visit your site that you have a new piece of content. Let them know. They want to know that you’re creating new content. So why not advise them that you have done such?

Sales outreach

If you have a sales team of any sort, let’s say you’re in B2B and you have a sales team, one of the most effective ways is to empower your sales team, to communicate to your sales team that you have developed a new piece of content so they can follow up with leads, they can nurture those existing relationships and even existing customers to let them know that a new piece of content has gone live. That one-to-one connection can be huge. 

◷: Social media / other channels

So when you’ve done all of that, what else can you do? You can go into social media. You can go into other channels. Again, you can spend time distributing your content into these places where your audience is spending time as well. 

Social channels and groups

So if you have a Twitter account, you can send out tweets. If you have a Facebook page, of course you can put up status updates.

If you have a LinkedIn page, you can put up a status update as well. These three things are typically what most organizations do in that Phase 2, but that’s not where it ends. You can go deeper. You can do more. You can go into Facebook groups, whether as a page or as a human, and share your content into these communities as well. You can let them know that you’ve published a new piece of research and you would love for them to check it out.

Or you’re in these groups, waiting and looking for somebody to ask a question that your blog post or your research has answered, and then you respond to that question with the content that you’ve developed. Or you do the same exact thing in a LinkedIn group. LinkedIn groups are an awesome opportunity for you to go in and start seeding your content as well.

Medium

Or you go to Medium.com. You repurpose the content that you’ve developed. You launch it on Medium.com as well. There’s an import function on Medium where you can import your content, get a canonical link directly to your site, and you can share that on Medium as well. Medium.com is a great distribution channel, because you can seed that content to publications as well.

When your content is going to these publications, they already have existing subscribers, and those subscribers get notified that there’s a new piece being submitted by you. When they see it, that’s a new audience that you wouldn’t have reached before using any of those owned channels, because these are people who you wouldn’t have had access to before. So you want to take advantage of that as well.

Keep in mind you don’t always have to upload even the full article. You can upload a snippet and then have a CTA at the bottom, a call to action driving people to the article on your website. 

LinkedIn video

You can use LinkedIn video to do the same thing. Very similar concept. Imagine you have a LinkedIn video. You look into the camera and you say to your connections, “Hey, everyone, we just launched a new research piece that is breaking down X, Y, and Z, ABC. I would love for you to check it out. Check the link below.”

If you created that video and you shared it on your LinkedIn, your connections are going to see this video, and it’s going to break their pattern of what they typically see on LinkedIn. So when they see it, they’re going to engage, they’re going to watch that video, they’re going to click the link, and you’re going to get more reach for the content that you developed in the past. 

Slack communities

Slack communities are another great place to distribute your content. Slack isn’t just a great channel to build internal culture and communicate as an internal team.

There are actual communities, people who are passionate about photography, people who are passionate about e-commerce, people who are passionate about SEO. There are Slack communities today where these people are gathering to talk about their passions and their interests, and you can do the same thing that you would do in Facebook groups or LinkedIn groups in these various Slack communities. 

Instagram / Facebook stories

Instagram stories and Facebook stories, awesome, great channel for you to also distribute your content. You can add a link to these stories that you’re uploading, and you can simply say, “Swipe up if you want to get access to our latest research.” Or you can design a graphic that will say, “Swipe up to get our latest post.” People who are following you on these channels will swipe up. They’ll land on your article, they’ll land on your research, and they’ll consume that content as well. 

LinkedIn Pulse

LinkedIn Pulse, you have the opportunity now to upload an article directly to LinkedIn, press Publish, and again let it soar. You can use the same strategies that I talked about around Medium.com on LinkedIn, and you can drive results. 

Quora

Quora is a question-and-answer site, like Yahoo Answers back in the day, except with a way better design. You can go into Quora, and you can share just a native link and tag it with relevant topics and things of that nature. Or you can find a few questions that are related to the topic that you’ve covered in your post, in your research, whatever asset you developed, and you can add value to that person who asked that question, and within that value you make a reference to the link and the article that you developed in the past as well.

SlideShare

SlideShare, one of the OGs of B2B marketing. You can go to SlideShare and upload a presentation version of the content that you’ve already developed. Let’s say you’ve written a long blog post. Why not take the assets within that blog post, turn them into a PDF or a SlideShare presentation, upload them there, and then distribute them through that network as well? Once you have those SlideShare presentations put together, what’s great is that you can take those graphics and share them on Twitter, Facebook, and LinkedIn, put them into Medium.com, and distribute them further there as well.

Forums

You can go into forums. Let’s think about it. If your audience is spending time in a forum communicating about something, why not go into these communities and into these forums and connect with them on a one-to-one basis as well? There’s a huge opportunity in forums and communities that exist online, where you can build trust and you can seed your content into these communities where your audience is spending time.

A lot of people think forums are dead. They could never be more alive. If you search for your industry plus “forums,” I promise you’ll probably come across something that will surprise you as an opportunity to seed your content.

Reddit communities

Reddit communities, a lot of marketers get the heebie-jeebies when I talk about Reddit. They’re all like, “Marketers on Reddit? That doesn’t work. Reddit hates marketing.” I get it.

I understand what you’re thinking. But what they actually hate is the fact that marketers don’t get Reddit. Marketers don’t get the fact that Redditors just want value. If you can deliver value to people using Reddit, whether it’s through a post or in the comments, they will meet you with happiness and joy. They will be grateful that you’ve added value to their communities and their subreddits, and they will reward you with upvotes, with traffic and clicks, and maybe even a few leads or a customer or two in the process.

Do not write off Reddit as a site you can’t embrace. Whether you’re B2B or B2C, Redditors can like your content. Redditors will like your content if you go in with value first.

Imgur

Sites like Imgur, another great distribution channel. Take some of those slides that you developed in the past, upload them to Imgur, and let them sing there as well.

There are way more distribution channels and distribution techniques that you can use that go beyond even what I’ve described here. But these are just a few examples that show you that the power of distribution doesn’t exist in just a couple of posts. It exists in actually spending the time to distribute your stories and your content across a wide variety of different channels.

$: Paid marketing

That’s spending time. You can also spend money through paid marketing. Paid marketing is also an opportunity for any brand to distribute their stories. 

Remarketing

First and foremost, you can use remarketing. Let’s talk about that email list that you’ve already developed. If you take that email list and you run remarketing ads to those people on Facebook, on Twitter, on LinkedIn, you can reach those people and get them engaged with new content that you’ve developed.

Let’s say somebody is already visiting your page. People are visiting your website. They’re visiting your content. Why not run remarketing ads to those people who have already demonstrated some type of interest to get them back on your site, back engaged with your content, and tell your story to them as well? Another great opportunity: if you’ve leveraged video in any way, you can run remarketing ads on Facebook to people who have watched 10, 20, or 30 seconds of your video content, whatever threshold it may be, as well.

Quora ads

Then one of the opportunities that is definitely underrated is the fact that Quora now offers advertising as well. You can run ads on Quora to people who are asking or looking at questions related to your industry, related to the content that you’ve developed, and get your content in front of them as well. 

Influencer marketing

Then there are influencers: you can do sponsored content. You can reach out to these influencers and have them talk about your stories and your content, and have them share it as well, because you’ve developed something new and something that is interesting.

Think differently & rise above mediocrity

When I talk about influencer marketing, I talk about Reddit, I talk about SlideShare, I talk about LinkedIn video, I talk about Slack communities, a lot of marketers will quickly say, “I don’t think this is for me. I think this is too much. I think that this is too much manual work. I think this is too many niche communities. I think this is a little bit too much for my brand.”

I get that. I understand your mindset, but this is what you need to recognize. Most marketers are going through this process. If you think that distributing your content into the communities where your audience is spending time is a little bit off-brand or doesn’t really suit you, that’s what most marketers already think. Most marketers already think that Twitter, Facebook, and LinkedIn are all they need to share their stories, get their content out there, and call it a day.

If you want to be like most marketers, you’re going to get what most marketers receive as a result, which is mediocre results. So I push you to think differently. I push you to push yourself to not be like most marketers, not to go down the path of mediocrity, and instead start looking for ways that you can either invest time or money into channels, into opportunities, and into communities where you can spread your content with value first and ultimately generate results for your business at the end of all of it.

So I hope that you can use this to uncover for yourself a content distribution playbook that works for your brand. Whether you’re in B2C or you’re in B2B, it doesn’t matter. You have to understand where your audience is spending time, understand how you can seed your content into these different spaces and unlock the power of content distribution. My name is Ross Simmonds.

I really hope you enjoyed this video. If you have any questions, don’t hesitate to reach out on Twitter (@TheCoolestCool) or hit me up any other way. I’m on every other channel. Of course I am. I love social. I love digital. I’m everywhere that you could find me, so feel free to reach out.

I hope you enjoyed this video and you can use it to give your content more reach and ultimately drive meaningful and measurable results for your business. Thank you so much.

Video transcription by Speechpad.com


If Ross’s Whiteboard Friday left you feeling energized and inspired to try new things with your content marketing, you’ll love his full MozCon 2019 talk — Keywords Aren’t Enough: How to Uncover Content Ideas Worth Chasing — available in our recently released video bundle. Learn how to use many of these same distribution channels as idea factories for your content, plus access 26 additional future-focused SEO topics from our top-notch speakers:

Grab the sessions now!

And don’t be shy — share the learnings with your whole team, preferably with snacks. It’s what video was made for!






Community Standards Enforcement Report, November 2019 Edition



isolomons

Today we’re publishing the fourth edition of our Community Standards Enforcement Report, detailing our work for Q2 and Q3 2019. We are now including metrics across ten policies on Facebook and metrics across four policies on Instagram.

These metrics include:

  • Prevalence: how often content that violates our policies was viewed
  • Content Actioned: how much content we took action on because it was found to violate our policies
  • Proactive Rate: of the content we took action on, how much was detected before someone reported it to us
  • Appealed Content: how much content people appealed after we took action
  • Restored Content: how much content was restored after we initially took action
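The way these metrics relate can be sketched with a toy calculation. The figures below are hypothetical, not from the report:

```python
def proactive_rate(actioned_total, actioned_before_report):
    """Share of actioned content detected before anyone reported it."""
    return actioned_before_report / actioned_total

def restore_rate(actioned_total, restored):
    """Share of actioned content later restored (e.g. after an appeal)."""
    return restored / actioned_total

# Hypothetical quarter: 2,000,000 pieces actioned, 1,940,000 of them
# found proactively, and 15,000 restored after appeal.
actioned, proactive, restored = 2_000_000, 1_940_000, 15_000
print(f"Proactive rate: {proactive_rate(actioned, proactive):.1%}")  # 97.0%
print(f"Restore rate:   {restore_rate(actioned, restored):.2%}")     # 0.75%
```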

We also launched a new page today so people can view examples of how our Community Standards apply to different types of content and see where we draw the line.

Adding Instagram to the Report
For the first time, we are sharing data on how we are doing at enforcing our policies on Instagram. In this first report for Instagram, we are providing data on four policy areas: child nudity and child sexual exploitation; regulated goods — specifically, illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda. The report does not include appeals and restores metrics for Instagram, as appeals on Instagram were only launched in Q2 of this year, but these will be included in future reports.

While we use the same proactive detection systems to find and remove harmful content across both Instagram and Facebook, the metrics may be different across the two services. There are many reasons for this, including: the differences in the apps’ functionalities and how they’re used – for example, Instagram doesn’t have links, re-shares in feed, Pages or Groups; the differing sizes of our communities; where people in the world use one app more than another; and where we’ve had greater ability to use our proactive detection technology to date. When comparing metrics in order to see where progress has been made and where more improvements are needed, we encourage people to see how metrics change, quarter-over-quarter, for individual policy areas within an app.

What Else Is New in the Fourth Edition of the Report

  • Data on suicide and self-injury: We are now detailing how we’re taking action on suicide and self-injury content. This area is both sensitive and complex, and we work with experts to ensure everyone’s safety is considered. We remove content that depicts or encourages suicide or self-injury, including certain graphic imagery and real-time depictions that experts tell us might lead others to engage in similar behavior. We place a sensitivity screen over content that doesn’t violate our policies but that may be upsetting to some, including things like healed cuts or other non-graphic self-injury imagery in a context of recovery. We also recently strengthened our policies around self-harm and made improvements to our technology to find and remove more violating content.
    • On Facebook, we took action on about 2 million pieces of content in Q2 2019, of which 96.1% we detected proactively, and we saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.3% we detected proactively.
    • On Instagram, we saw similar progress and removed about 835,000 pieces of content in Q2 2019, of which 77.8% we detected proactively, and we removed about 845,000 pieces of content in Q3 2019, of which 79.1% we detected proactively.
  • Expanded data on terrorist propaganda: Our Dangerous Individuals and Organizations policy bans all terrorist organizations from having a presence on our services. To date, we have identified a wide range of groups, based on their behavior, as terrorist organizations. Previous reports only included our efforts specifically against al Qaeda, ISIS and their affiliates as we focused our measurement efforts on the groups understood to pose the broadest global threat. Now, we’ve expanded the report to include the actions we’re taking against all terrorist organizations. While the rate at which we detect and remove content associated with Al Qaeda, ISIS and their affiliates on Facebook has remained above 99%, the rate at which we proactively detect content affiliated with any terrorist organization on Facebook is 98.5% and on Instagram is 92.2%. We will continue to invest in automated techniques to combat terrorist content and iterate on our tactics because we know bad actors will continue to change theirs.
  • Estimating prevalence for suicide and self-injury and regulated goods: In this report, we are adding prevalence metrics for content that violates our suicide and self-injury and regulated goods (illicit sales of firearms and drugs) policies for the first time. Because we care most about how often people may see content that violates our policies, we measure prevalence, or the frequency at which people may see this content on our services. For the policy areas addressing the most severe safety concerns — child nudity and sexual exploitation of children, regulated goods, suicide and self-injury, and terrorist propaganda — the likelihood that people view content that violates these policies is very low, and we remove much of it before people see it. As a result, when we sample views of content in order to measure prevalence for these policy areas, many times we do not find enough, or sometimes any, violating samples to reliably estimate a metric. Instead, we can estimate an upper limit of how often someone would see content that violates these policies. In Q3 2019, this upper limit was 0.04%, meaning that for each of these policies, out of every 10,000 views on Facebook or Instagram in Q3 2019, we estimate that no more than 4 of those views contained content that violated that policy.
    • It’s also important to note that when the prevalence is so low that we can only provide upper limits, this limit may change by a few hundredths of a percentage point between reporting periods, but changes that small do not mean there is a real difference in the prevalence of this content on the platform.
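The arithmetic behind the upper-limit figure is straightforward to verify. A minimal sketch (our own illustration, not Facebook's measurement methodology) converts the percentage into the views-per-10,000 framing used above:

```python
# Illustrative only -- not Facebook's actual prevalence methodology.
# Converts an upper-limit percentage into "violating views per 10,000 views".

def views_per_10k(upper_limit_pct: float) -> float:
    """Translate an upper-limit percentage into views per 10,000 views."""
    return upper_limit_pct / 100 * 10_000

# The Q3 2019 upper limit of 0.04% corresponds to at most about
# 4 violating views out of every 10,000 views.
upper_bound = views_per_10k(0.04)
```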

Progress to Help Keep People Safe
Across the most harmful types of content we work to combat, we’ve continued to strengthen our efforts to enforce our policies and bring greater transparency to our work. In addition to suicide and self-injury content and terrorist propaganda, the metrics for child nudity and sexual exploitation of children, as well as regulated goods, demonstrate this progress. The investments we’ve made in AI over the last five years continue to be a key factor in tackling these issues. In fact, recent advancements in this technology have helped improve the rate of detection and removal of violating content.

For child nudity and sexual exploitation of children, we made improvements to our processes for adding violations to our internal database in order to detect and remove additional instances of the same content shared on both Facebook and Instagram, enabling us to identify and remove more violating content.

On Facebook:

  • In Q3 2019, we removed about 11.6 million pieces of content, up from Q1 2019 when we removed about 5.8 million. Over the last four quarters, we proactively detected over 99% of the content we remove for violating this policy.

While we are including data for Instagram for the first time, we have made progress increasing content actioned and the proactive rate in this area within the last two quarters:

  • In Q2 2019, we removed about 512,000 pieces of content, of which 92.5% we detected proactively.
  • In Q3, we saw greater progress and removed 754,000 pieces of content, of which 94.6% we detected proactively.

For our regulated goods policy prohibiting illicit firearm and drug sales, continued investments in our proactive detection systems and advancements in our enforcement techniques have allowed us to build on the progress from the last report.

On Facebook:

  • In Q3 2019, we removed about 4.4 million pieces of drug sale content, of which 97.6% we detected proactively — an increase from Q1 2019 when we removed about 841,000 pieces of drug sale content, of which 84.4% we detected proactively.
  • Also in Q3 2019, we removed about 2.3 million pieces of firearm sales content, of which 93.8% we detected proactively — an increase from Q1 2019 when we removed about 609,000 pieces of firearm sale content, of which 69.9% we detected proactively.

On Instagram:

  • In Q3 2019, we removed about 1.5 million pieces of drug sale content, of which 95.3% we detected proactively.
  • In Q3 2019, we removed about 58,600 pieces of firearm sales content, of which 91.3% we detected proactively.
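The proactive rates quoted throughout this report share one definition: the share of actioned content that was detected by automated systems before any user reported it. A minimal sketch of that ratio (an assumed simplification, with hypothetical helper names):

```python
# Assumed simplification of the report's proactive-rate metric:
# the share of all actioned content that was found by automated
# detection before a user report, expressed as a percentage.

def proactive_rate(proactively_detected: int, total_actioned: int) -> float:
    """Return the proactive detection rate as a percentage."""
    if total_actioned == 0:
        return 0.0
    return 100 * proactively_detected / total_actioned

# Roughly reproducing the Instagram drug-sale figure above:
# ~1.43M of ~1.5M pieces detected proactively gives ~95.3%.
instagram_drug_rate = round(proactive_rate(1_430_000, 1_500_000), 1)
```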

New Tactics in Combating Hate Speech
Over the last two years, we’ve invested in proactive detection of hate speech so that we can detect this harmful content before people report it to us and sometimes before anyone sees it. Our detection techniques include text and image matching, which means we’re identifying images and identical strings of text that have already been removed as hate speech, and machine-learning classifiers that look at things like language, as well as the reactions and comments to a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate.

Initially, we used these systems to proactively detect potential hate speech violations and send them to our content review teams, since people can better assess context where AI cannot. Starting in Q2 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy. We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination. With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy.
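As a toy illustration of the text-matching idea described above (not Facebook's actual system), exact matches can be caught by hashing normalized text, and near-identical matches by a word-shingle similarity measure; the phrases and threshold below are invented for the example:

```python
# Toy sketch of identical / near-identical text matching -- NOT Facebook's
# real pipeline. Exact matches via a hash of normalized text; near-identical
# matches via Jaccard similarity over 3-word shingles.
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and case so trivial edits don't evade a hash match.
    return " ".join(text.lower().split())

def text_hash(text: str) -> str:
    return hashlib.sha256(normalize(text).encode()).hexdigest()

def shingles(text: str, n: int = 3) -> set:
    words = normalize(text).split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a: str, b: str) -> float:
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical corpus of previously removed text.
REMOVED = ("some previously removed attack phrase",)
REMOVED_HASHES = {text_hash(t) for t in REMOVED}

def matches_known(text: str, threshold: float = 0.8) -> bool:
    """True if text is identical (after normalization) or near-identical
    to previously removed content."""
    if text_hash(text) in REMOVED_HASHES:
        return True
    return any(jaccard(text, prev) >= threshold for prev in REMOVED)
```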

While we are pleased with this progress, these technologies are not perfect and we know that mistakes can still happen. That’s why we continue to invest in systems that enable us to improve our accuracy in removing content that violates our policies while safeguarding content that discusses or condemns hate speech. Similar to how we review decisions made by our content review team in order to monitor the accuracy of our decisions, our teams routinely review removals by our automated systems to make sure we are enforcing our policies correctly. We also continue to review content again when people appeal and tell us we made a mistake in removing their post.

Updating our Metrics
Since our last report, we have improved the ways we measure how much content we take action on after identifying an issue in our accounting this summer. In this report, we are updating metrics we previously shared for content actioned, proactive rate, content appealed and content restored for the periods Q3 2018 through Q1 2019.

During those quarters, the issue with our accounting processes did not impact how we enforced our policies or how we informed people about those actions; it only impacted how we counted the actions we took. For example, if we find that a post containing one photo violates our policies, we want our metric to reflect that we took action on one piece of content — not two separate actions for removing the photo and the post. However, in July 2019, we found that the systems logging and counting these actions did not correctly log the actions taken. This was largely due to the need to count multiple actions that take place within a few milliseconds without missing, or overstating, any of the individual actions taken.

We’ll continue to refine the processes we use to measure our actions and build a robust system to ensure the metrics we provide are accurate. We share more details about these processes here.






How Facebook Is Prepared for the 2019 UK General Election

Today, leaders from our offices in London and Menlo Park, California spoke with members of the press about Facebook’s efforts to prepare for the upcoming General Election in the UK on December 12, 2019. The following is a transcript of their remarks.

Rebecca Stimson, Head of UK Public Policy, Facebook

We wanted to bring you all together, now that the UK General Election is underway, to set out the range of actions we are taking to help ensure this election is transparent and secure – to answer your questions and to point you to the various resources we have available.  

There has already been a lot of focus on the role of social media within the campaign and there is a lot of information for us to set out. 

We have therefore gathered colleagues from both the UK and our headquarters in Menlo Park, California, covering our politics, product, policy and safety teams to take you through the details of those efforts. 

I will just say a few opening remarks before we dive into the details.

Helping protect elections is one of our top priorities and over the last two years we’ve made some significant changes – these broadly fall into three camps:

  • We’ve introduced greater transparency so that people know what they are seeing online and can scrutinize it more effectively; 
  • We have built stronger defenses to prevent things like foreign interference; 
  • And we have invested in both people and technology to ensure these new policies are effective.

So taking these in turn. 

Transparency

On the issue of transparency. We’ve tightened our rules to make political ads much more transparent, so people can see who is trying to influence their vote and what they are saying. 

We’ll discuss this in more detail shortly, but to summarize:  

  • Anybody who wants to run political ads must go through a verification process to prove who they are and that they are based here in the UK; 
  • Every political ad is labelled so you can see who has paid for it;
  • Anybody can click on any ad they see on Facebook and get more information on why they are seeing it, as well as block ads from particular advertisers;
  • And finally, we put all political ads in an Ad Library so that everyone can see what ads are running, the types of people who saw them and how much was spent – not just while the ads are live, but for seven years afterwards.

Taken together these changes mean that political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails. 

This is the first UK general election since we introduced these changes and we’re already seeing many journalists using these transparency tools to scrutinize the adverts which are running during this election – this is something we welcome and it’s exactly why we introduced these changes. 

Defense 

Turning to the stronger defenses we have put in place.

Nathaniel will shortly set out in more detail our work to prevent foreign interference and coordinated inauthentic behavior. But before he does I want to be clear right up front how seriously we take these issues and our commitment to doing everything we can to prevent election interference on our platforms. 

So just to highlight one of the things he will be talking about – we have, as part of this work, cracked down significantly on fake accounts. 

We now identify and shut down millions of fake accounts every day, many just seconds after they are created.

Investment

And lastly turning to investment in these issues.

We now have more than 35,000 people working on safety and security. We have been building and rolling out many of the new tools you will be hearing about today. And as Ella will set out later, we have introduced a number of safety measures including a dedicated reporting channel so that all candidates in the election can flag any abusive and threatening content directly to our teams.  

I’m also pleased to say that – now the election is underway – we have brought together an Elections Taskforce of people from our teams across the UK, EMEA and the US who are already working together every day to ensure election integrity on our platforms. 

The Elections Taskforce will be working on issues including threat intelligence, data science, engineering, operations, legal and others. It also includes representatives from WhatsApp and Instagram.

As we get closer to the election, these people will be brought together in physical spaces in their offices – what we call our Operations Centre. 

It’s important to remember that the Elections Taskforce is an additional layer of security on top of our ongoing monitoring for threats on the platform which operates 24/7. 

And while there will always be further improvements we can and will continue to make, and we can never say there won’t be challenges to respond to, we are confident that we’re better prepared than ever before.  

Political Ads

Before I wrap up this intro section of today’s call I also want to address two of the issues that have been hotly debated in the last few weeks – firstly whether political ads should be allowed on social media at all and secondly whether social media companies should decide what politicians can and can’t say as part of their campaigns. 

As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether. They account for just 0.5% of our revenue and they’re always destined to be controversial. 

But we believe it’s important that candidates and politicians can communicate with their constituents and would-be constituents. 

Online political ads are also important for both new challengers and campaigning groups to get their message out. 

Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether. 

And there’s also a really difficult question – if you were to consider banning political ads, where do you draw the line – for example, would anyone advocate for blocking ads for important issues like climate change or women’s empowerment? 

Turning to the second issue – there is also a question about whether we should decide what politicians and political parties can and can’t say.  

We don’t believe a private company like Facebook should censor politicians. This is why we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

This doesn’t mean that politicians can say whatever they want on Facebook. They can’t spread misinformation about where, when or how to vote. They can’t incite violence. We won’t allow them to share content that has previously been debunked as part of our third-party fact-checking program. And we of course take down content that violates local laws. 

But in general we believe political speech should be heard and we don’t feel it is right for private companies like us to fact-check or judge the veracity of what politicians and political parties say. 

Facebook’s approach to this issue is in line with the way political speech and campaigns have been treated in the UK for decades. 

Here in the UK – an open democracy with a vibrant free press – political speech has always been heavily scrutinized but it is not regulated. 

The UK has decided that there shouldn’t be rules about what political parties and candidates can and can’t say in their leaflets, direct mails, emails, billboards, newspaper ads or on the side of campaign buses.  

And as we’ve seen when politicians and campaigns have made hotly contested claims in previous elections and referenda, it’s not been the role of the Advertising Standards Authority, the Electoral Commission or any other regulator to police political speech. 

In our country it’s always been up to the media and the voters to scrutinize what politicians say and make their own minds up. 

Nevertheless, we have long called for new rules for the era of digital campaigning. 

Questions around what constitutes a political ad, who can run them and when, what steps those who purchase political ads must take, how much they can spend on them and whether there should be any rules on what they can and can’t say – these are all matters that can only be properly decided by Parliament and regulators.  

Legislation should be updated to set standards for the whole industry – for example, should all online political advertising be recorded in a public archive similar to our Ad Library and should that extend to traditional platforms like billboards, leaflets and direct mail?

We believe UK electoral law needs to be brought into the 21st century to give clarity to everyone – political parties, candidates and the platforms they use to promote their campaigns.

In the meantime our focus has been to increase transparency so anyone, anywhere, can scrutinize every ad that’s run and by whom. 

I will now pass you to the team to talk you through our efforts in more detail.

  • Nathaniel Gleicher will discuss tackling fake accounts and disrupting coordinated inauthentic behavior;
  • Rob Leathern will take you through our UK political advertising measures and Ad Library;
  • Antonia Woodford will outline our work tackling misinformation and our fact-checker partnerships; 
  • And finally, Ella Fallows will fill you in on what we’re doing around safety of candidates and how we’re encouraging people to participate in the election.

Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook 

My team leads all our efforts across our apps to find and stop what we call influence operations, coordinated efforts to manipulate or corrupt public debate for a strategic goal. 

We also conduct regular red team exercises, both internally and with external partners to put ourselves into the shoes of threat actors and use that approach to identify and prepare for new and emerging threats. We’ll talk about some of the products of these efforts today. 

Before I dive into some of the details: as you listen to Rob, Antonia, and me, we’re going to be talking about a number of different initiatives that Facebook is focused on, both to protect the UK general election and, more broadly, to respond to integrity threats. I wanted to give you a brief framework for how to think about these. 

The key distinction that you’ll hear again and again is a distinction between content and behavior. At Facebook, we have policies that enable us to take action when we see content that violates our Community Standards. 

In addition, we have the tools that we use to respond when we see an actor engaged in deceptive or violating behavior, and we keep these two efforts distinct. And so, as you listen to us, we’ll be talking about different initiatives we have in both dimensions. 

Under content for example, you’ll hear Antonia talk about misinformation, about voter suppression, about hate speech, and about other types of content that we can take action against if someone tries to share that content on our platform. 

Under the behavioral side, you’ll hear me and you’ll hear Rob also mention some of our work around influence operations, around spam, and around hacking. 

I’m going to focus in particular on the first of these, influence operations; but the key distinction that I want to make is when we take action to remove someone because of their deceptive behavior, we’re not looking at, we’re not reviewing, and we’re not considering the content that they’re sharing. 

What we’re focused on is the fact that they are deceiving or misleading users through their actions. For example, using networks of fake accounts to conceal who they are and conceal who’s behind the operation. So we’ll refer back to these, but I think it’s helpful to distinguish between the content side of our enforcement and the behavior side of our enforcement. 

And that’s particularly important because we’ve seen some threat actors who work to understand where the boundaries are for content and make sure for example that the type of content they share doesn’t quite cross the line. 

And when we see someone doing that, because we have behavioral enforcement tools as well, we’re still able to make sure we’re protecting authenticity and public debate on the platform. 

In each of these dimensions, there are four pillars to our work. You’ll hear us refer to each of these during the call as well, but let me just say that these four fit together; no one of them by itself would be enough, but all four of them together give us a layered approach to defending public debate and ensuring authenticity on the platform. 

We have expert investigative teams that conduct proactive investigations to find, expose, and disrupt sophisticated threat actors. As we do that, we learn from those investigations and we build automated systems that can disrupt any kind of violating behavior across the platform at scale. 

We also, as Rebecca mentioned, build transparency tools so that users, external researchers and the press can see who is using the platform and ensure that they’re engaging authentically. It also forces threat actors who are trying to conceal their identity to work harder to conceal and mislead. 

And then lastly, one of the things that’s extremely clear to us, particularly in the election space, is that this is a whole of society effort. And so, we work closely with partners in government, in civil society, and across industry to tackle these threats. 

And we’ve found that where we could be most effective is where we bring the tools we bring to the table, and then can work with government and work with other partners to respond and get ahead of these challenges as they emerge. 

One of the ways that we do this is through proactive investigations into the deceptive efforts engaged in by bad actors. Over the last year, our investigative teams, working together with our partners in civil society, law enforcement, and industry, have found and stopped more than 50 campaigns engaged in coordinated inauthentic behavior across the world. 

This includes an operation we removed in May that originated from Iran and targeted a number of countries, including the UK. As we announced at the time, we removed 51 Facebook accounts, 36 pages, seven groups, and three Instagram accounts involved in coordinated inauthentic behavior. 

The page admins and account owners typically posted content in English or Arabic, and most of the operation had no focus on a particular country, although there were some pages focused on the UK and the United States. 

Similarly, in March we announced that we removed a domestic UK network of about 137 Facebook and Instagram accounts, pages, and groups that were engaged in coordinated inauthentic behavior. 

The individuals behind these accounts presented themselves as far right and anti-far right activists, frequently changed page and group names, and operated fake accounts to engage in hate speech and spread divisive comments on both sides of the political debate in the UK. 

These are the types of investigations that we focus our core investigative team on. Whenever we see a sophisticated actor that’s trying to evade our automated systems, those teams, which are made up of experts from law enforcement, the intelligence community, and investigative journalism, can find and reveal that behavior. 

When we expose it, we announce it publicly and we remove it from the platform. Those expert investigators proactively hunt for evidence of these types of coordinated inauthentic behavior (CIB) operations around the world. 

This team has not seen evidence of widespread foreign operations aimed at the UK. But we are continuing to search for this and we will remove and publicly share details of networks of CIB that we identify on our platforms. 

As always with these takedowns, we remove these operations for the deceptive behavior they engaged in, not for the content they shared. This is that content/behavior distinction that I mentioned earlier. As we’ve improved our ability to disrupt these operations, we’ve also deepened our understanding of the types of threats out there and how best to counter them. 

Based on these learnings, we’ve recently updated our inauthentic behavior policy, which is posted publicly as part of our Community Standards, to clarify how we enforce against the spectrum of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state. For each investigation, we isolate any new behaviors we see and then we work to automate detection of them at scale. This connects to that second pillar of our integrity work. 

And this slows down the bad guy and lets our investigators focus on improving our defenses against emerging threats. A good example of this work is our efforts to find and block fake accounts, which Rebecca mentioned. 

We know bad actors use fake accounts as a way to mask their identity and inflict harm on our platforms. That’s why we’ve built an automated system to find and remove these fake accounts. And each time we conduct one of these takedowns, or any other of our enforcement actions, we learn more about what fake accounts look like and how we can have automated systems that detect and block them. 

This is why we have these systems in place today that block millions of fake accounts every day, often within minutes of their creation. Because information operations often target multiple platforms as well as traditional media, I mentioned our collaborations with industry, civil society and government. 

In addition to that, we are building increased transparency on our platform, so that the public along with open source researchers and journalists can find and expose more bad behavior themselves. 

This effort on transparency is incredibly important. Rob will talk about this in detail, but I do want to add one point here, specifically around pages. Increasingly, we’re seeing people operate pages that conceal the organization behind them as a way to make others think they are independent. 

We want to make sure Facebook is used to engage authentically, and that users understand who is speaking to them and what perspective they are representing. We noted last month that we would be announcing new approaches to address this, and today we’re introducing a policy to require more accountability for pages that are concealing their ownership in order to mislead people.

If we find a page is misleading people about its purpose by concealing its ownership, we will require it to go through our business verification process, which we recently announced, and show more information on the page itself about who is behind that page, including the organization’s legal name and verified city, phone number, or website in order for it to stay up. 

This type of increased transparency helps ensure that the platform continues to be authentic and the people who use the platform know who they’re talking to and understand what they’re seeing. 

Rob Leathern, Director of Product, Business Integrity, Facebook 

In addition to making pages more transparent as Nathaniel has indicated, we’ve also put a lot of effort into making political advertising on Facebook more transparent than it is anywhere else. 

Every political and issue ad that runs on Facebook now goes into our Ad Library public archive that everyone can access, regardless of whether or not they have a Facebook account. 

We launched this in the UK in October 2018 and, since then, there have been over 116,000 ads related to politics, elections, and social issues placed in the UK Ad Library. You can find all the ads that a candidate or organization is running, including how much they spent and who saw the ad. And we’re storing these ads in the Ad Library for seven years. 

Other media such as billboards, newspaper ads, direct mail, leaflets or targeted emails don’t today provide this level of transparency into the ads and who is seeing them. And as a result, we’ve seen a significant number of press stories regarding the election driven by the information in Facebook’s Ad Library. 

We’re proud of this resource and the insight it provides into ads running on Facebook and Instagram, and that it is proving useful for media and researchers. And just last month, we made even more changes to both the Ad Library and Ad Library Reports. These include adding details on who the top advertising spenders are in each country of the UK, as well as providing additional views by different date ranges, which people have been asking for. 

We’re now also making it clear which of our platforms an ad ran on, for example whether an ad ran on Facebook, Instagram, or both. 

For those of you unfamiliar with the Ad Library, which you can see at Facebook.com/adlibrary, I thought I’d run through it quickly. 

So this is the Ad Library. Here you see all the ads that have been classified as relating to politics or issues. We keep them in the library for seven years. As I mentioned, you can find the Ad Library at Facebook.com/adlibrary. 

You can also access the Ad Library through a specific page. For example, for this Page, you can see not only the advertising information, but also the transparency about the Page itself, along with the spend data. 

Here is an example of the ads that this Page is running, both active as well as inactive. In addition, if an ad has been disapproved for violating any of our ad policies, you’re also able to see all of those ads as well. 

Here’s what it looks like if you click to see more detail about a specific ad. You’ll be able to see individual ad spend, impressions, and demographic information. 

And you’ll also be able to compare the individual ad spend to the overall macro spend by the Page, which is tracked in the section below. If you scroll back up, you’ll also be able to see the other information about the disclaimer that has been provided by the advertiser. 

We know we can’t protect elections alone and that everyone plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Advertising Policies and Community Standards. 

We also have the Ad Library API so journalists and academics can analyze ads about social issues, elections, or politics. The Ad Library application programming interface, or API, allows people to perform customized keyword searches of ads stored in the Ad Library. You can search data for all active and inactive issue, electoral or political ads. 
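As a hedged sketch of the keyword search described above, the snippet below builds a query against the Graph API's `ads_archive` endpoint. The parameter names (`search_terms`, `ad_reached_countries`, `ad_type`) follow Facebook's public API documentation, but the version number and field list here are assumptions; check the current API reference before relying on them:

```python
# Sketch of an Ad Library API keyword search, under stated assumptions:
# the Graph API version and the fields requested are illustrative, not
# authoritative -- consult Facebook's current ads_archive reference.
from urllib.parse import urlencode

def build_ad_library_query(access_token: str, keyword: str,
                           country: str = "GB") -> str:
    """Construct (but do not send) an ads_archive search URL."""
    base = "https://graph.facebook.com/v5.0/ads_archive"  # version assumed
    params = {
        "access_token": access_token,
        "search_terms": keyword,
        "ad_reached_countries": country,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "fields": "page_name,ad_creative_body,spend,impressions",  # assumed
    }
    return f"{base}?{urlencode(params)}"

url = build_ad_library_query("YOUR_TOKEN", "election")
```

From here, a journalist would issue an ordinary HTTP GET with that URL and page through the JSON results.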

You can also access the Ad Library and the data therein through a specific Page or through the Ad Library Report. Here is the Ad Library Report; it allows you to see the spend by specific advertisers, and you can download a full report of the data. 

Here we also allow you to see the spending by location and if you click in you can see the top spenders by region. So you can see, for example, in the various regions, who the top spenders in those areas are. 

Our goal is to provide an open API to news organizations, researchers, groups and people who can hold advertisers and us more accountable. 

We’ve definitely seen a lot of press, journalists, and researchers examining the data in the Ad Library and using it to generate these insights and we think that’s exactly a part of what will help hold both us and advertisers more accountable.

We hope these measures will build on existing transparency we have in place and help reporters, researchers and most importantly people on Facebook learn more about the Pages and information they’re engaging with. 

Antonia Woodford, Product Manager, Misinformation, Facebook

We are committed to fighting the spread of misinformation and viral hoaxes on Facebook. It is a responsibility we take seriously.

To accomplish this, we follow a three-pronged approach which we call remove, reduce, and inform. First and foremost, when something violates the laws or our policies, we’ll remove it from the platform all together.

As Nathaniel touched on, removing fake accounts is a priority; the vast majority are detected and removed within minutes of registration, before a person can report them. This is a key element in eliminating the potential spread of misinformation. 

The reduce and inform parts of the equation are how we reduce the spread of problematic content that doesn’t violate the law or our Community Standards, while still ensuring freedom of expression on the platform, and this is where the majority of our misinformation work is focused. 

To reduce the spread of misinformation, we work with third party fact-checkers. 

Through a combination of reporting from people on our platform and machine learning, potentially false posts are sent to third party fact-checkers to review. These fact-checkers review this content, check the facts, and then rate its accuracy. They’re able to review links in news articles as well as photos, videos, or text posts on Facebook.

After content has been rated false, our algorithm heavily downranks this content in News Feed so it’s seen by fewer people and far less likely to go viral. Fact-checkers can fact-check any posts they choose based on the queue we send them. 
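As a purely hypothetical illustration of downranking (not Facebook's actual News Feed ranking), rated-false posts can remain in the feed's candidate set while receiving a heavy score penalty, so they surface far lower:

```python
# Hypothetical illustration of "downranking" -- NOT Facebook's actual
# News Feed algorithm. Posts rated false by fact-checkers stay in the
# candidate set but their ranking score is heavily penalized.
FALSE_RATING_PENALTY = 0.1  # assumed multiplier for the illustration

def rank_feed(posts):
    """posts: list of dicts with 'score' and 'rated_false' keys.
    Returns posts sorted by penalized score, highest first."""
    def effective_score(post):
        score = post["score"]
        if post.get("rated_false"):
            score *= FALSE_RATING_PENALTY
        return score
    return sorted(posts, key=effective_score, reverse=True)

feed = rank_feed([
    {"id": "a", "score": 9.0, "rated_false": True},   # viral but debunked
    {"id": "b", "score": 5.0, "rated_false": False},
    {"id": "c", "score": 2.0, "rated_false": False},
])
# The debunked post "a" drops to an effective score of 0.9 and ranks last.
```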

And lastly, as part of our work to inform people about the content they see on Facebook, we just launched a new design to better warn people when they see content that has been rated false or partly false by our fact-checking partners.

People will now see a more prominent label on photos and videos that have been fact-checked as false or partly false. This is a grey screen that sits over a post and says ‘false information’ and points people to fact-checkers’ articles debunking the claims. 

These clearer labels are what people have told us they want, what they have told us they expect Facebook to do, and what experts tell us is the right tactic for combating misinformation.

We’re rolling this change out in the UK this week for any photos and videos that have been rated through our fact-checking partnership. Though just one part of our overall approach, fact-checking is a fundamental part of our strategy to combat misinformation, and I want to share a little bit more about the program.

Our fact-checking partners are all accredited by the International Fact-Checking Network, which requires them to abide by a code of principles such as nonpartisanship and transparency of sources.

We currently have over 50 partners in over 40 languages around the world. As Rebecca outlined earlier, we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

Here in the UK we work with Full Fact and FactCheckNI as part of our program. To recap: we identify content that may be false using signals such as feedback from our users. This content is all submitted into a queue for our fact-checking partners to access. These fact-checkers then choose which content to review, check the facts, and rate the accuracy of the content.

These fact-checkers are independent organizations, so it is at their discretion what they choose to investigate. They can also fact-check whatever content they want outside of the posts we send their way.

If a fact-checker rates a story as false, it will appear lower in News Feed with the false information screen I mentioned earlier. This significantly reduces the number of people who see it.

Other posts that Full Fact and FactCheckNI choose to fact-check outside of our system will not be impacted on Facebook. 

And finally, on Tuesday we announced a partnership with the International Fact-Checking Network to create the Fact-Checking Innovation Initiative. This will fund innovation projects, new formats, and technologies to help benefit the broader fact-checking ecosystem. 

We are investing $500,000 into this new initiative, where organizations can submit applications for projects to improve fact-checkers’ scale and efficiency, increase the reach of fact-checks to empower more people with reliable information, build new tools to help combat misinformation, and encourage newsrooms to collaborate in fact-checking efforts.

Anyone from the UK can be a part of this new initiative. 

Ella Fallows, Politics and Government Outreach Manager UK, Facebook 

Our team’s role involves two main tasks: working with MPs and candidates to ensure they have a good experience and get the most from our platforms; and looking at how we can best use our platforms to promote participation in elections.

I’d like to start with the safety of MPs and candidates using our platforms. 

There is, rightly, a focus in the UK on the current tone of political debate. Let me be clear: hate speech and threats of violence have no place on our platforms, and we’re investing heavily to tackle them. 

Additionally, for this campaign we have this week written to political parties and candidates setting out the range of safety measures we have in place and also to remind them of the terms and conditions and the Community Standards which govern their use of our platforms. 

As you may be aware, every piece of content on Facebook and Instagram has a report button, and when content is reported to us that violates our Community Standards (which set out what is and isn’t allowed on Facebook), it is removed. 

Since March this year, MPs have also had access to a dedicated reporting channel to flag any abusive and threatening content directly to our teams. Now that the General Election is underway we’re extending that support to all prospective candidates, making our team available to anyone standing to allow them to quickly report any concerns across our platforms and have them investigated. 

This is particularly pertinent to Tuesday’s news from the Government calling for a one-stop shop for candidates, and we have already set up our own one-stop shop so that there is a single point of contact for candidates for issues across Facebook and Instagram.

Behind that reporting channel sits my team, which is focused on escalating reports from candidates and making sure we’re taking action as quickly as possible on anything that violates our Community Standards or Advertising Policies. 

But that team is not working alone – it’s backed up by our 35,000-strong global safety and security team that oversees content and behavior across the platform every day. 

And our technology is also helping us to automatically detect more of this harmful content. For example, while there is further to go, the proportion of hate speech we remove before it’s reported to us has almost tripled over the last two years.

We also have a Government, Politics & Advocacy Portal which is a home for everything a candidate will need during the campaign, including ‘how to’ guides on subjects such as registering as a political advertiser and running campaigns on Facebook, best practice tips and troubleshooting guides for technical issues.

We’re working with all of the political parties and the Electoral Commission to ensure candidates are aware of both the reporting channel to reach my team and the Government, Politics & Advocacy Portal.

We’re also working with political parties and the Electoral Commission to help candidates prepare for the election through a few different initiatives:

  • Firstly, while we don’t provide ongoing guidance or embed anyone into campaigns, we have held sessions with each party on how to use and get the most from our platforms for their campaigns, and we’ll continue to hold webinars throughout the General Election period for any candidate and their staff to join.
  • We’re also working with women’s networks within the parties to hold dedicated sessions for female candidates providing extra guidance on safety and outlining the help available to prevent harassment on our platforms. We want to ensure we’re doing everything possible to help them connect with their constituents, free from harassment.
  • Finally, we’re working with the Electoral Commission and political parties to distribute to every candidate in the General Election the safety guides we have put together, to ensure we reach everyone, not just those attending our outreach sessions. 

For example, we have developed a range of tools that allow public figures to moderate and filter the content that people put on their Facebook Pages to prevent negative content appearing in the first place. People who help manage Pages can hide or delete individual comments. 

They can also proactively moderate comments and posts by visitors by turning on the profanity filter, or blocking specific words or lists of words that they do not want to appear on their Page. Page admins can also remove or ban people from their Pages. 
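The keyword moderation described above can be sketched in a few lines: visitor comments containing any blocked word are hidden. This is only an illustration of the idea; the matching rule (lowercase whole-token comparison) is an assumption, and the real Page tools are configured in Page settings, not via code.

```python
import re

def should_hide(comment, blocked_words):
    """Hide a comment when any word matches the Page's blocked-word list."""
    tokens = re.findall(r"[\w']+", comment.lower())
    return any(token in blocked_words for token in tokens)

blocked = {"scam", "fraudster"}  # a hypothetical Page admin's custom list
```

The profanity filter works the same way, with a platform-maintained list in place of the admin’s custom one.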

We hope these steps help every candidate to reach their constituents, and get the most from our platforms. But our work doesn’t stop there.

The second area our team focuses on is promoting civic engagement. In addition to supporting and advising candidates, we also, of course, want to help promote voter participation in the election. 

For the past five years, we’ve used badges and reminders at the top of people’s News Feeds to encourage people to vote in elections around the world. The same will be true for this campaign. 

We’ll run reminders to register to vote, with a link to the Electoral Commission’s voter registration page, in the week running up to the voter registration deadline. 

On election day itself, we’ll also run a reminder to vote with a link to the Electoral Commission website so voters can find their polling station and any information they need. This will include a button to share that you voted. 

We know from speaking to the Electoral Commission that these reminders for past national votes in the UK have had a positive effect on voter registration.

We hope that this combination of steps will help to ensure both candidates and voters engaging with the General Election on our platforms have the best possible experience.






How Facebook Has Prepared for the 2019 UK General Election

Today, leaders from our offices in London and Menlo Park, California spoke with members of the press about Facebook’s efforts to prepare for the upcoming General Election in the UK on December 12, 2019. The following is a transcript of their remarks.

Rebecca Stimson, Head of UK Public Policy, Facebook

We wanted to bring you all together, now that the UK General Election is underway, to set out the range of actions we are taking to help ensure this election is transparent and secure – to answer your questions and to point you to the various resources we have available.  

There has already been a lot of focus on the role of social media within the campaign and there is a lot of information for us to set out. 

We have therefore gathered colleagues from both the UK and our headquarters in Menlo Park, California, covering our politics, product, policy and safety teams to take you through the details of those efforts. 

I will just say a few opening remarks before we dive into the details.

Helping protect elections is one of our top priorities and over the last two years we’ve made some significant changes – these broadly fall into three camps:

  • We’ve introduced greater transparency so that people know what they are seeing online and can scrutinize it more effectively; 
  • We have built stronger defenses to prevent things like foreign interference; 
  • And we have invested in both people and technology to ensure these new policies are effective.

So taking these in turn. 

Transparency

On the issue of transparency. We’ve tightened our rules to make political ads much more transparent, so people can see who is trying to influence their vote and what they are saying. 

We’ll discuss this in more detail shortly, but to summarize:  

  • Anybody who wants to run political ads must go through a verification process to prove who they are and that they are based here in the UK; 
  • Every political ad is labelled so you can see who has paid for it;
  • Anybody can click on any ad they see on Facebook and get more information on why they are seeing it, as well as block ads from particular advertisers;
  • And finally, we put all political ads in an Ad Library so that everyone can see what ads are running, the types of people who saw them and how much was spent – not just while the ads are live, but for seven years afterwards.

Taken together these changes mean that political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails. 

This is the first UK general election since we introduced these changes and we’re already seeing many journalists using these transparency tools to scrutinize the adverts which are running during this election – this is something we welcome and it’s exactly why we introduced these changes. 

Defense 

Turning to the stronger defenses we have put in place.

Nathaniel will shortly set out in more detail our work to prevent foreign interference and coordinated inauthentic behavior. But before he does I want to be clear right up front how seriously we take these issues and our commitment to doing everything we can to prevent election interference on our platforms. 

So just to highlight one of the things he will be talking about – we have, as part of this work, cracked down significantly on fake accounts. 

We now identify and shut down millions of fake accounts every day, many just seconds after they were created.

Investment

And lastly turning to investment in these issues.

We now have more than 35,000 people working on safety and security. We have been building and rolling out many of the new tools you will be hearing about today. And as Ella will set out later, we have introduced a number of safety measures including a dedicated reporting channel so that all candidates in the election can flag any abusive and threatening content directly to our teams.  

I’m also pleased to say that – now the election is underway – we have brought together an Elections Taskforce of people from our teams across the UK, EMEA and the US who are already working together every day to ensure election integrity on our platforms. 

The Elections Taskforce will be working on issues including threat intelligence, data science, engineering, operations, legal and others. It also includes representatives from WhatsApp and Instagram.

As we get closer to the election, these people will be brought together in physical spaces in their offices – what we call our Operations Centre. 

It’s important to remember that the Elections Taskforce is an additional layer of security on top of our ongoing monitoring for threats on the platform which operates 24/7. 

And while there will always be further improvements we can and will continue to make, and we can never say there won’t be challenges to respond to, we are confident that we’re better prepared than ever before.  

Political Ads

Before I wrap up this intro section of today’s call I also want to address two of the issues that have been hotly debated in the last few weeks – firstly whether political ads should be allowed on social media at all and secondly whether social media companies should decide what politicians can and can’t say as part of their campaigns. 

As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether. They account for just 0.5% of our revenue and they’re always destined to be controversial. 

But we believe it’s important that candidates and politicians can communicate with their constituents and would-be constituents. 

Online political ads are also important for both new challengers and campaigning groups to get their message out. 

Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether. 

And there’s also a really difficult question – if you were to consider banning political ads, where do you draw the line – for example, would anyone advocate for blocking ads for important issues like climate change or women’s empowerment? 

Turning to the second issue – there is also a question about whether we should decide what politicians and political parties can and can’t say.  

We don’t believe a private company like Facebook should censor politicians. This is why we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

This doesn’t mean that politicians can say whatever they want on Facebook. They can’t spread misinformation about where, when or how to vote. They can’t incite violence. We won’t allow them to share content that has previously been debunked as part of our third-party fact-checking program. And we of course take down content that violates local laws. 

But in general we believe political speech should be heard and we don’t feel it is right for private companies like us to fact-check or judge the veracity of what politicians and political parties say. 

Facebook’s approach to this issue is in line with the way political speech and campaigns have been treated in the UK for decades. 

Here in the UK – an open democracy with a vibrant free press – political speech has always been heavily scrutinized but it is not regulated. 

The UK has decided that there shouldn’t be rules about what political parties and candidates can and can’t say in their leaflets, direct mails, emails, billboards, newspaper ads or on the side of campaign buses.  

And as we’ve seen when politicians and campaigns have made hotly contested claims in previous elections and referenda, it’s not been the role of the Advertising Standards Authority, the Electoral Commission or any other regulator to police political speech. 

In our country it’s always been up to the media and the voters to scrutinize what politicians say and make their own minds up. 

Nevertheless, we have long called for new rules for the era of digital campaigning. 

Questions around what constitutes a political ad, who can run them and when, what steps those who purchase political ads must take, how much they can spend on them and whether there should be any rules on what they can and can’t say – these are all matters that can only be properly decided by Parliament and regulators.  

Legislation should be updated to set standards for the whole industry – for example, should all online political advertising be recorded in a public archive similar to our Ad Library and should that extend to traditional platforms like billboards, leaflets and direct mail?

We believe UK electoral law needs to be brought into the 21st century to give clarity to everyone – political parties, candidates and the platforms they use to promote their campaigns.

In the meantime our focus has been to increase transparency so anyone, anywhere, can scrutinize every ad that’s run and by whom. 

I will now pass you to the team to talk you through our efforts in more detail.

  • Nathaniel Gleicher will discuss tackling fake accounts and disrupting coordinated inauthentic behavior;
  • Rob Leathern will take you through our UK political advertising measures and Ad Library;
  • Antonia Woodford will outline our work tackling misinformation and our fact-checker partnerships; 
  • And finally, Ella Fallows will fill you in on what we’re doing around the safety of candidates and how we’re encouraging people to participate in the election.

Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook 

My team leads all our efforts across our apps to find and stop what we call influence operations: coordinated efforts to manipulate or corrupt public debate for a strategic goal. 

We also conduct regular red team exercises, both internally and with external partners to put ourselves into the shoes of threat actors and use that approach to identify and prepare for new and emerging threats. We’ll talk about some of the products of these efforts today. 

Before I dive into some of the details: as you’re listening to Rob, Antonia, and me, we’re going to be talking about a number of different initiatives that Facebook is focused on, both to protect the UK general election and, more broadly, to respond to integrity threats. I wanted to give you a brief framework for how to think about these. 

The key distinction that you’ll hear again and again is a distinction between content and behavior. At Facebook, we have policies that enable us to take action when we see content that violates our Community Standards. 

In addition, we have the tools that we use to respond when we see an actor engaged in deceptive or violating behavior, and we keep these two efforts distinct. And so, as you listen to us, we’ll be talking about different initiatives we have in both dimensions. 

Under content for example, you’ll hear Antonia talk about misinformation, about voter suppression, about hate speech, and about other types of content that we can take action against if someone tries to share that content on our platform. 

Under the behavioral side, you’ll hear me and you’ll hear Rob also mention some of our work around influence operations, around spam, and around hacking. 

I’m going to focus in particular on the first of these, influence operations; but the key distinction that I want to make is when we take action to remove someone because of their deceptive behavior, we’re not looking at, we’re not reviewing, and we’re not considering the content that they’re sharing. 

What we’re focused on is the fact that they are deceiving or misleading users through their actions. For example, using networks of fake accounts to conceal who they are and conceal who’s behind the operation. So we’ll refer back to these, but I think it’s helpful to distinguish between the content side of our enforcement and the behavior side of our enforcement. 

And that’s particularly important because we’ve seen some threat actors who work to understand where the boundaries are for content and make sure for example that the type of content they share doesn’t quite cross the line. 

And when we see someone doing that, because we have behavioral enforcement tools as well, we’re still able to make sure we’re protecting authenticity and public debate on the platform. 
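The content/behavior distinction can be pictured as two independent checks. The category names below are drawn from the remarks above, but the code is only a schematic of the framework, not an actual enforcement system.

```python
# The two enforcement dimensions described in the talk (illustrative).
CONTENT_VIOLATIONS = {"misinformation", "voter_suppression", "hate_speech"}
BEHAVIOR_VIOLATIONS = {"influence_operations", "spam", "hacking", "fake_accounts"}

def enforcement_basis(observed):
    """Split a set of observed violations into the two dimensions.
    A network can be removed on behavioral grounds alone, regardless
    of whether its content crosses any line."""
    return {
        "content": sorted(observed & CONTENT_VIOLATIONS),
        "behavior": sorted(observed & BEHAVIOR_VIOLATIONS),
    }
```

Keeping the two sets disjoint mirrors the point being made: a threat actor who keeps content just inside the line can still be removed for deceptive behavior.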

In each of these dimensions, there are four pillars to our work. You’ll hear us refer to each of these during the call as well, but let me just say that these four fit together: no one of them by itself would be enough, but all four together give us a layered approach to defending public debate and ensuring authenticity on the platform. 

We have expert investigative teams that conduct proactive investigations to find, expose, and disrupt sophisticated threat actors. As we do that, we learn from those investigations and we build automated systems that can disrupt any kind of violating behavior across the platform at scale. 

We also, as Rebecca mentioned, build transparency tools so that users, external researchers and the press can see who is using the platform and ensure that they’re engaging authentically. It also forces threat actors who are trying to conceal their identity to work harder to conceal and mislead. 

And then lastly, one of the things that’s extremely clear to us, particularly in the election space, is that this is a whole of society effort. And so, we work closely with partners in government, in civil society, and across industry to tackle these threats. 

And we’ve found that we’re most effective where we bring our tools to the table and then work with government and other partners to respond and get ahead of these challenges as they emerge. 

One of the ways that we do this is through proactive investigations into the deceptive efforts engaged in by bad actors. Over the last year, our investigative teams, working together with our partners in civil society, law enforcement, and industry, have found and stopped more than 50 campaigns engaged in coordinated inauthentic behavior across the world. 

This includes an operation we removed in May that originated from Iran and targeted a number of countries, including the UK. As we announced at the time, we removed 51 Facebook accounts, 36 pages, seven groups, and three Instagram accounts involved in coordinated inauthentic behavior. 

The page admins and account owners typically posted content in English or Arabic, and most of the operation had no focus on a particular country, although there were some pages focused on the UK and the United States. 

Similarly, in March we announced that we removed a domestic UK network of about 137 Facebook and Instagram accounts, pages, and groups that were engaged in coordinated inauthentic behavior. 

The individuals behind these accounts presented themselves as far right and anti-far right activists, frequently changed page and group names, and operated fake accounts to engage in hate speech and spread divisive comments on both sides of the political debate in the UK. 

These are the types of investigations that we focus our core investigative team on. Whenever we see a sophisticated actor that’s trying to evade our automated systems, those teams, which are made up of experts from law enforcement, the intelligence community, and investigative journalism, can find and reveal that behavior. 

When we expose it, we announce it publicly and we remove it from the platform. Those expert investigators proactively hunt for evidence of these types of coordinated inauthentic behavior (CIB) operations around the world. 

This team has not seen evidence of widespread foreign operations aimed at the UK. But we are continuing to search for this and we will remove and publicly share details of networks of CIB that we identify on our platforms. 

As always with these takedowns, we remove these operations for the deceptive behavior they engaged in, not for the content they shared. This is that content/behavior distinction that I mentioned earlier. As we’ve improved our ability to disrupt these operations, we’ve also deepened our understanding of the types of threats out there and how best to counter them. 

Based on these learnings, we’ve recently updated our inauthentic behavior policy, which is posted publicly as part of our Community Standards, to clarify how we enforce against the spectrum of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state. For each investigation, we isolate any new behaviors we see and then we work to automate detection of them at scale. This connects to that second pillar of our integrity work. 

And this slows down the bad guy and lets our investigators focus on improving our defenses against emerging threats. A good example of this work is our efforts to find and block fake accounts, which Rebecca mentioned. 

We know bad actors use fake accounts as a way to mask their identity and inflict harm on our platforms. That’s why we’ve built an automated system to find and remove these fake accounts. And each time we conduct one of these takedowns, or any other of our enforcement actions, we learn more about what fake accounts look like and how we can have automated systems that detect and block them. 

This is why we have these systems in place today that block millions of fake accounts every day, often within minutes of their creation. Because information operations often target multiple platforms as well as traditional media, I mentioned our collaborations with industry, civil society and government. 
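As a rough illustration of registration-time blocking, one can imagine a weighted score over suspicious sign-up signals. The signal names, weights, and threshold here are entirely invented for the sketch and say nothing about Facebook’s actual classifiers.

```python
# Invented signal weights for illustration only.
WEIGHTS = {
    "disposable_email": 0.4,
    "burst_signups_from_ip": 0.3,
    "automation_fingerprint": 0.3,
}

def fake_account_score(signals):
    """Sum the weights of the suspicious signals present at sign-up."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

def block_at_registration(signals, threshold=0.6):
    """Block before the account can act, mirroring 'within minutes of creation'."""
    return fake_account_score(signals) >= threshold
```

Each takedown can then feed back into the weights and signal list, which is the learning loop described above.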

In addition to that, we are building increased transparency on our platform, so that the public along with open source researchers and journalists can find and expose more bad behavior themselves. 

This effort on transparency is incredibly important. Rob will talk about this in detail, but I do want to add one point here, specifically around pages. Increasingly, we’re seeing people operate pages that clearly disclose the organization behind them as a way to make others think they are independent. 

We want to make sure Facebook is used to engage authentically, and that users understand who is speaking to them and what perspective they are representing. We noted last month that we would be announcing new approaches to address this, and today we’re introducing a policy to require more accountability for pages that are concealing their ownership in order to mislead people.

If we find a page is misleading people about its purpose by concealing its ownership, we will require it to go through our business verification process, which we recently announced, and show more information on the page itself about who is behind that page, including the organization’s legal name and verified city, phone number, or website in order for it to stay up. 

This type of increased transparency helps ensure that the platform continues to be authentic and the people who use the platform know who they’re talking to and understand what they’re seeing. 

Rob Leathern, Director of Product, Business Integrity, Facebook 

In addition to making pages more transparent as Nathaniel has indicated, we’ve also put a lot of effort into making political advertising on Facebook more transparent than it is anywhere else. 

Every political and issue ad that runs on Facebook now goes into our Ad Library, a public archive that everyone can access, regardless of whether or not they have a Facebook account. 

We launched this in the UK in October 2018 and, since then, there have been over 116,000 ads related to politics, elections, and social issues placed in the UK Ad Library. You can find all the ads that a candidate or organization is running, including how much they spent and who saw the ad. And we’re storing these ads in the Ad Library for seven years. 

Other media such as billboards, newspaper ads, direct mail, leaflets or targeted emails don’t today provide this level of transparency into the ads and who is seeing them. As a result, we’ve seen a significant number of press stories regarding the election driven by the information in Facebook’s Ad Library. 

We’re proud of this resource and insight into ads running on Facebook and Instagram and that it is proving useful for media and researchers. And just last month, we made even more changes to both the Ad Library and Ad Library Reports. These include adding details on who the top advertising spenders are in each country of the UK, as well as providing an additional view by different date ranges, which people have been asking for. 

We’re now also making it clear which Facebook platform an ad ran on: for example, whether it ran on Facebook, on Instagram, or on both. 

For those of you unfamiliar with the Ad Library, which you can see at Facebook.com/adlibrary, I thought I’d run through it quickly. 

So this is the Ad Library. Here you see all the ads that have been classified as relating to politics or issues. We keep them in the library for seven years. As I mentioned, you can find the Ad Library at Facebook.com/adlibrary. 

You can also access the Ad Library through a specific page. For example, for this Page, you can see not only the advertising information, but also the transparency about the Page itself, along with the spend data. 

Here is an example of the ads that this Page is running, both active and inactive. In addition, if any ads have been disapproved for violating our ad policies, you’re able to see those as well. 

Here’s what it looks like if you click to see more detail about a specific ad. You’ll be able to see individual ad spend, impressions, and demographic information. 

And you’ll also be able to compare the individual ad spend to the overall macro spend by the Page, which is tracked in the section below. If you scroll back up, you’ll also be able to see the other information about the disclaimer that has been provided by the advertiser. 

We know we can’t protect elections alone and that everyone plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Advertising Policies and Community Standards. 

We also have the Ad Library API so journalists and academics can analyze ads about social issues, elections, or politics. The Ad Library application programming interface, or API, allows people to perform customized keyword searches of ads stored in the Ad Library. You can search data for all active and inactive issue, electoral or political ads. 
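The keyword searches described here go through the Graph API’s `ads_archive` endpoint. Below is a sketch of building such a query; the parameter and field names follow the public Ad Library API documentation of the time, so treat them as assumptions to verify against the current docs, and substitute a real access token.

```python
from urllib.parse import urlencode

# Graph API version current at the time of this briefing (assumption).
ADS_ARCHIVE = "https://graph.facebook.com/v5.0/ads_archive"

def build_ads_archive_query(access_token, terms, country="GB"):
    """Build a keyword-search URL against the Ad Library API."""
    params = {
        "access_token": access_token,
        "search_terms": terms,
        "ad_reached_countries": f"['{country}']",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_active_status": "ALL",  # both active and inactive ads
        "fields": "page_name,ad_creative_body,spend,impressions",
    }
    return ADS_ARCHIVE + "?" + urlencode(params)
```

The resulting URL can be fetched with any HTTP client; responses are paginated JSON with the matching ads in a `data` array.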

You can also access the Ad Library and the data therein through a specific page or through the Ad Library Report. Here is the Ad Library Report, which allows you to see the spend by specific advertisers, and you can download a full report of the data. 

Here we also allow you to see the spending by location and if you click in you can see the top spenders by region. So you can see, for example, in the various regions, who the top spenders in those areas are. 
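The downloadable Ad Library Report is a CSV, so a spend-by-advertiser summary like the “top spenders” view can be reproduced in a few lines. The column headers below (“Page Name”, “Amount Spent (GBP)”) are assumptions about the report format; check the header row of the file you actually download.

```python
import csv
import io
from collections import defaultdict

def top_spenders(report_csv, n=3):
    """Total spend per Page from an Ad Library Report CSV, highest first."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(report_csv)):
        amount = row["Amount Spent (GBP)"].replace(",", "")
        if amount.replace(".", "", 1).isdigit():  # skip ranges like "<=100"
            totals[row["Page Name"]] += float(amount)
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Many of the press analyses mentioned earlier amount to aggregations of exactly this kind over the downloaded report.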

Our goal is to provide an open API to news organizations, researchers, groups and people who can hold advertisers and us more accountable. 

We’ve definitely seen a lot of press, journalists, and researchers examining the data in the Ad Library and using it to generate these insights and we think that’s exactly a part of what will help hold both us and advertisers more accountable.

We hope these measures will build on existing transparency we have in place and help reporters, researchers and most importantly people on Facebook learn more about the Pages and information they’re engaging with. 

Antonia Woodford, Product Manager, Misinformation, Facebook

We are committed to fighting the spread of misinformation and viral hoaxes on Facebook. It is a responsibility we take seriously.

To accomplish this, we follow a three-pronged approach which we call remove, reduce, and inform. First and foremost, when something violates the law or our policies, we'll remove it from the platform altogether.

As Nathaniel touched on, removing fake accounts is a priority: the vast majority are detected and removed within minutes of registration, before a person can even report them. This is a key element in limiting the potential spread of misinformation.

The reduce and inform parts of the equation are how we address problematic content that doesn't violate the law or our Community Standards, while still ensuring freedom of expression on the platform. This is where the majority of our misinformation work is focused.

To reduce the spread of misinformation, we work with third party fact-checkers. 

Through a combination of reporting from people on our platform and machine learning, potentially false posts are sent to third party fact-checkers to review. These fact-checkers review this content, check the facts, and then rate its accuracy. They’re able to review links in news articles as well as photos, videos, or text posts on Facebook.

After content has been rated false, our algorithm heavily downranks this content in News Feed so it’s seen by fewer people and far less likely to go viral. Fact-checkers can fact-check any posts they choose based on the queue we send them. 
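As a toy illustration only (this is not Facebook's actual ranking code, and the penalty factor is an invented placeholder), the downranking idea described above can be sketched as a score penalty applied to posts that fact-checkers have rated false:

```python
# Toy sketch of downranking: posts rated false or partly false by
# fact-checkers have their ranking score heavily penalized, so they
# sort lower in the feed. Penalty factor is purely illustrative.
def rank_feed(posts, penalty=0.1):
    """Sort posts by score, demoting those with a false rating."""
    def effective_score(post):
        if post.get("fact_check_rating") in ("false", "partly_false"):
            return post["score"] * penalty
        return post["score"]
    return sorted(posts, key=effective_score, reverse=True)

feed = rank_feed([
    {"id": 1, "score": 90, "fact_check_rating": "false"},
    {"id": 2, "score": 50, "fact_check_rating": None},
])
# The highly engaging but false post sorts below the unrated one.
```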

And lastly, as part of our work to inform people about the content they see on Facebook, we just launched a new design to better warn people when they see content that's been rated false or partly false by our fact-checking partners.

People will now see a more prominent label on photos and videos that have been fact-checked as false or partly false. This is a grey screen that sits over a post and says ‘false information’ and points people to fact-checkers’ articles debunking the claims. 

These clearer labels are what people have told us they want, what they have told us they expect Facebook to do, and what experts tell us is the right tactic for combating misinformation.

We're rolling this change out in the UK this week for any photos and videos that have been rated through our fact-checking partnership. Though just one part of our overall approach, fact-checking is fundamental to how we combat misinformation, and I want to share a little more about the program.

Our fact-checking partners are all accredited by the International Fact-Checking Network, which requires them to abide by a code of principles such as nonpartisanship and transparency of sources.

We currently have over 50 partners in over 40 languages around the world. As Rebecca outlined earlier, we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

Here in the UK we work with Full Fact and FactCheckNI as part of our program. To recap: we identify content that may be false using signals such as feedback from our users. This content is submitted into a queue for our fact-checking partners to access. These fact-checkers then choose which content to review, check the facts, and rate the accuracy of the content.

These fact-checkers are independent organizations, so it is at their discretion what they choose to investigate. They can also fact-check whatever content they want outside of the posts we send their way.

If a fact-checker rates a story as false, it will appear lower in News Feed with the false information screen I mentioned earlier. This significantly reduces the number of people who see it.

Other posts that Full Fact and FactCheckNI choose to fact-check outside of our system will not be impacted on Facebook. 

And finally, on Tuesday we announced a partnership with the International Fact-Checking Network to create the Fact-Checking Innovation Initiative. This will fund innovation projects, new formats, and technologies to help benefit the broader fact-checking ecosystem. 

We are investing $500,000 into this new initiative. Organizations can submit applications for projects that:

  • improve fact-checkers’ scale and efficiency;
  • increase the reach of fact-checks to empower more people with reliable information;
  • build new tools to help combat misinformation; and
  • encourage newsrooms to collaborate in fact-checking efforts.

Anyone from the UK can be a part of this new initiative. 

Ella Fallows, Politics and Government Outreach Manager UK, Facebook 

Our team’s role involves two main tasks: working with parties, MPs and candidates to ensure they have a good experience and get the most from our platforms; and looking at how we can best use our platforms to promote participation in elections.

I’d like to start with how MPs and candidates use our platforms. 

There is, rightly, a focus in the UK on the current tone of political debate. Let me be clear: hate speech and threats of violence have no place on our platforms, and we’re investing heavily to tackle them.

Additionally, for this campaign we have this week written to political parties and candidates setting out the range of safety measures we have in place and also to remind them of the terms and conditions and the Community Standards which govern our platforms. 

As you may be aware, every piece of content on Facebook and Instagram has a report button, and when content is reported to us which violates our Community Standards – what is and isn’t allowed on Facebook – it is removed. 

Since March of this year, MPs have also had access to a dedicated reporting channel to flag any abusive and threatening content directly to our teams. Now that the General Election is underway we’re extending that support to all prospective candidates, making our team available to anyone standing to allow them to quickly report any concerns across our platforms and have them investigated. 

This is particularly pertinent to Tuesday’s news from the Government calling for a one-stop shop for candidates. We have already set up our own one-stop shop, so there is a single point of contact for candidates on issues across Facebook and Instagram.

But our team is not working alone; it’s backed up by our 35,000-strong global safety and security team that oversees content and behavior across the platform every day. 

And our technology is also helping us to automatically detect more of this harmful content. For example, the proportion of hate speech we have removed before it’s reported to us has increased significantly over the last two years, and we will be releasing new figures on this later this month.

We also have a Government, Politics & Advocacy Portal which is a home for everything a candidate will need during the campaign, including ‘how to’ guides on subjects such as political advertising, campaigning on Facebook and troubleshooting guides for technical issues.

We’re working with the political parties to ensure candidates are aware of both the reporting channel to reach my team and the Government, Politics & Advocacy Portal.

We’re holding a series of sessions for candidates on safety and outlining the help available to address harassment on our platforms. We’ve already held dedicated sessions for female candidates in partnership with women’s networks within the parties to provide extra guidance. We want to ensure we’re doing everything possible to help them connect with their constituents, free from harassment.

Finally, we’re working with the Government to distribute the safety guides we have put together to every candidate via returning officers in the General Election, to ensure we reach everyone, not just those attending our outreach sessions. Our safety guides include information on a range of tools we have developed:

  • For example, public figures are able to moderate and filter the content that people put on their Facebook Pages to prevent negative content appearing in the first place. People who help manage Pages can hide or delete individual comments. 
  • They can also proactively moderate comments and posts by visitors by turning on the profanity filter, or blocking specific words or lists of words that they do not want to appear on their Page. Page admins can also remove or ban people from their Pages. 
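A minimal sketch of the keyword-blocking idea behind these moderation tools follows. This is purely illustrative: Facebook’s actual filtering is not public, and real moderation would also handle phrases, punctuation, and casing variants.

```python
# Illustrative keyword filter: hide any comment containing a word
# from the Page admin's blocked-word list (simplified word matching).
def visible_comments(comments, blocked_words):
    """Return only the comments that contain no blocked word."""
    blocked = {w.lower() for w in blocked_words}
    def is_hidden(comment):
        return bool(set(comment.lower().split()) & blocked)
    return [c for c in comments if not is_hidden(c)]

kept = visible_comments(["Great point!", "What a scam"], ["scam"])
# Only the first comment survives the filter.
```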

We hope these steps help every candidate to reach their constituents, and get the most from our platforms. But our work doesn’t stop there.

The second area our team focuses on is promoting civic engagement. In addition to supporting and advising candidates, we also, of course, want to help promote voter participation in the election. 

For the past five years, we’ve used badges and reminders at the top of people’s News Feeds to encourage people to vote in elections around the world. The same will be true for this campaign. 

We’ll run reminders to register to vote, with a link to the gov.uk voter registration page, in the week running up to the voter registration deadline. 

On election day itself, we’ll also run a reminder to vote with a link to the Electoral Commission website so voters can find their polling station and any information they need. This will include a button to share that you voted. 

We hope that this combination of steps will help to ensure both candidates and voters engaging with the General Election on our platforms have the best possible experience.




