3 Tips to Enhance Your Content


Trying to stand out online is like standing in a concert crowd and screaming your friend’s name while she’s 100 yards away, watching the main act on the stage. She can’t hear you. She’s not even looking in your direction. And if she were, she’d probably see the taller people around you first.

My point is that standing out is hard. The internet is a saturated marketplace with a whole lot of noise, even more chaos, and a heavy-handed dash of capitalist ambition. Most businesses are only adding to the noise rather than creating a bit of quiet.

No, I don’t mean going dark and ghosting your digital audience. I mean creating a moment of quiet for your audience. The goal is to get them to stop scrolling, clicking, flipping, and tossing long enough to engage with your video, post, blog, website, email, or advertisement. That’s the pivotal moment. Will they or won’t they become a paying customer? What will make them click?

It’s in the way you connect, which is to say, your story. Not the “who we are” kind of story, but the “we’ve got your back, and here’s why” kind of story.

In this post, I’ll outline three ways you can spruce up your content to create that moment of quiet for your customer so they can make a split-second decision to trust you.

Tip 1: Determine the Problems You Solve

Make a list of one to five main problems you solve for your customers; then talk about them to everyone, think about them regularly, and work constantly toward solving them. Go to bat for your customers against an obvious villain! They’ll thank you for it with their business.

Here are the problems we help our customers solve:

  1. Being left behind by the growth of digital marketing and the overwhelming noise online.
  2. Struggling to keep up with the competition online.
  3. Being unable to commit time to content.
  4. Not having enough staff to help with content.
  5. Having no marketing budget.

That last one is really a misconception; every business has a marketing budget, or it wouldn’t be a business at all.

Tip 2: Write Out How “It” Happens

“It” being your customer’s purchasing journey. What steps do they need to take to buy from you? Creating these steps is a practical exercise, not so much a creative one. Your purchasing process should be no fewer than three steps and no more than six. Here’s an example:

The Your Imprint Customer Journey:

  1. Visit our site, blog, and social pages.
  2. Schedule a 15-minute consult.
  3. Get a custom quote with a scope of work.
  4. Sign the Services Agreement and other needed paperwork like a BA or NDA.
  5. Have a discovery meeting with us (jokingly called “The Data Dump”).
  6. Pay invoice after work is completed.

Yours may be much simpler, like:

  1. Visit store
  2. Purchase items
  3. Refer a friend for a free gift

Once you have the steps, make sure they’re easy to follow on your website. For example, if your main call to action is “Buy Now,” that needs to be a huge, bright button in the top right corner of your site. Repeat that button throughout the site so customers can buy no matter where they are.

Tip 3: Create a Funnel

It’s much easier than most people make it sound.  I really like this picture from Bias Digital because it lays out the kind of content used in each part of the funnel.

[Image: the content marketing funnel, by Bias Digital]

Here’s how ours is set up to help you on your way to creating an airtight funnel: 

Visitors come into the funnel through organic search, referrals, social media, or by going directly to our website. They arrive through one piece of content and stay to look at more. That’s a good sign. At the top of the funnel, we build trust. We prove ourselves by:

  • Having a fast, user-friendly website that looks good
  • Writing content that is relatable to the audience
  • Making it clear what we want them to do (schedule a consult!)
  • Making it easy and convenient for them to buy (pay after we perform!)

In the middle part of the funnel, people aren’t usually ready to buy just yet. They’re thinking about us, though. They come back to the website a few more times, follow us on social, or sign up for our newsletter. In this part of the funnel, we create content for:

  • Email deals at the end of the newsletter
  • Social media giveaways
  • Polls and surveys to engage them with our services
  • Special printing deals

The bottom of the funnel is for sales. These are paying customers who deserve and expect to be honored by their favorite brands. Content here is focused on:

  • Delighting them with extra special offers and sneak peeks
  • Loyalty program/subscription
  • Regular gifting
  • Upsells
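To see the whole structure at a glance, here’s a minimal sketch of the funnel above as a simple data structure. The stage names and content types come straight from the lists in this section; swap in your own:

```python
# Our funnel, expressed as a map from stage to the content types that
# serve it. The entries mirror the lists above; adapt them to your business.
funnel = {
    "top: build trust": [
        "fast, user-friendly website that looks good",
        "content that is relatable to the audience",
        "clear call to action (schedule a consult!)",
        "easy, convenient buying (pay after we perform!)",
    ],
    "middle: stay top of mind": [
        "email deals at the end of the newsletter",
        "social media giveaways",
        "polls and surveys",
        "special printing deals",
    ],
    "bottom: honor paying customers": [
        "extra special offers and sneak peeks",
        "loyalty program/subscription",
        "regular gifting",
        "upsells",
    ],
}

# Print a quick content checklist, stage by stage.
for stage, content_types in funnel.items():
    print(stage.upper())
    for item in content_types:
        print(f"  - {item}")
```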

Now you know what the customer needs from you, because you’ve identified the specific problems you solve. You’ve created a clearly defined process that shows your audience how to buy from you, and you’ve built a funnel that will help you create content and keep up with the fast pace of digital marketing.

Today’s article is a broad-brush look at how to boost your content so it stands out. Stay tuned for more tips and helpful tools on distributing your content and broadcasting your message. Keep learning by signing up for my company’s content marketing newsletter, Your Imprint!

Under Pressure? Don’t Worry, Stop Overthinking [The Weekly Wrap]

Listen to the Weekly Wrap here or subscribe on Apple Podcasts or Stitcher. If you enjoy the show, please take a moment to rate it or post a review.

And that’s a wrap of the week ending February 7, 2020.

This week I’m ruminating on rumination (and suggesting an alternative to overthinking things). I offer my fresh take on the role of content marketing in brand activism. Julia McCoy joins me to talk about how following her gut instinct – even when it scared her – helped her build a profitable content marketing business that ended up being a true lifeline. Finally, I share an article that will help you quit worrying and find and fix the inefficiencies in your content pipeline.

Listen to the Weekly Wrap

Our theme this week is “What, me worry?” Let’s wrap it up.

One deep thought: Making decisions under pressure (2:30)

Are you overthinking things? Many people believe rehashing problems in our heads helps us figure out the answer. We become so focused on making the right decision that we lose the ability to make one at all.

Overthinking often rears its head as we plan a complex change. I recently worked with a director of content strategy at a large B2C company to map 12 weeks of tasks related to a major content initiative. He worried about whether the e-commerce team would meet its tech-project deadlines in time to line up with the content efforts – especially since he didn’t control that part of the project.


I share the advice I gave this client about planning for the future but living (and working) in the moment.


A fresh take on content activism in a ‘post-truth’ world (10:15)

Commentary from PublishersDaily/MediaPost caught my eye this week. The article, Content Activism Can Help Brands Shape A Positive Future, covers a topic on my mind right now.

The article starts by covering ground most of us are familiar with – brands are using content “to align themselves with the key issues of our time.” It goes on to talk about content’s role:

(C)ontent marketing is simply a bigger canvas for brands to develop a narrative, but more recently, the narrative has changed from just product or services to include ethics and principles. We’re asking ‘how’ more than ever. As conversations develop around profit vs. purpose or inclusive capitalism, what they do creates brand value, preference, and differentiation.

The author points to the example of the Food Sustainability Index developed by the Barilla Center for Food & Nutrition and the Economist Intelligence Unit to promote knowledge around food sustainability. As the article notes, “In a post-truth world, content surfaced within quality editorial environments allows audiences to feel confident that a similar level of diligence and attention has been applied to the facts.”

I share my take on this interesting trend, including another example of how corporate social responsibility (CSR) content is no longer just about what the company is doing (getting rid of paper cups in the office or having the team run a 5K for charity), but where media is the CSR program. Interesting stuff, and I’ll be watching and writing about this more in the future.



This week’s person making a difference in content: Julia McCoy (14:44)

I often talk about how my guests are making meaning in content. This week’s guest is a living example of that.

Julia McCoy is a serial content marketer, entrepreneur, and author. She founded a multimillion-dollar content agency, Express Writers, at 19 years old with nothing more than $75. Julia has since been named an industry thought leader in content marketing by Forbes and is the author of two best-selling books, the founder of The Content Hacker, and an educator. She has a passion for sharing what she knows in her books and her online courses. Her latest book, Woman Rising, is a memoir that chronicles how she escaped from a religious cult and rebuilt her life.

Julia is somebody who definitely doesn’t let worry stop her from making (and meeting) ambitious goals. We talk at length about her unique path to building a content marketing business. Here’s a preview:

Three months into it, I had more work than I could handle. It was a breaking point for me. Do I continue solo and turn down all these gigs or do I build a business? Naturally, because I am that type of persona, I wanted to build a business. That’s where Express Writers came from. It was a five-minute idea I honestly thought wouldn’t last a year. Eight years later, 90 people on the staff, it’s like “pinch me.” But at the same time, it’s that equation of working hard really does equal success. It does come down to how hard are you willing to work?


Listen in to our discussion, then learn more about Julia.

One content marketing idea you can use (31:45)

The post I’d love for you to look at this week is right in line with what I’ve been talking about – figuring out how to make worry-free plans.

In Fix These Big Inefficiencies in Your Content Pipeline, Kimberly Zhang offers really good tips on avoiding content marketing waste. She points out smart areas to address, including writing for multiple personas, lack of accountability, stretching expertise across too many topics, and creative teams that lack support. Fixing these areas will help you worry a little less.


Love for this week’s sponsor: ContentTECH Summit

Here’s something you don’t have to worry about – where to find education on content and technology – plus a healthy dose of sunshine. I’m talking about ContentTECH Summit, April 20 to 22 in San Diego.

We’ve got amazing speakers like Meg Walsh, who runs content services at Hilton Hotels; Cleve Gibbon, chief technology officer at Wunderman Thompson; and Wendy Richardson, senior vice president of global technical services for Mastercard.

These brand-level folks are ready to teach you the effective use of technology and better processes that can help your strategic efforts to create, manage, deliver, and scale your enterprise content and provide your customers with better digital experiences.

And I’ve got a discount for you. Just use the code ROSE100 and you’ll save $100 on registration.

Check out the agenda and register today.

The wrap-up

Join me next week as we look for the sharpest tools in our shed. (Don’t worry, we’ll nail it.) This is not a drill – and we aren’t nuts. But we will wrench one thought to level your head, pick one news item to hammer our point home, and share one content marketing rule that will help you measure up. And it’s all delivered in a little less time than it takes to learn that Kansas City is actually in Missouri.

If you have ideas for what you’d like to hear more of on our weekly play on words, let us know in the comments. And if you love the show, we’d sure love for you to review it or share it. Hashtag us up on Twitter: #WeeklyWrap.

To listen to past Weekly Wrap shows, go to the main Weekly Wrap page.


Cover image by Joseph Kalinowski/Content Marketing Institute





Content Marketing and Your Budget: Here’s What to Expect


We all want to know: how much does it cost? 

That’s usually the first question I have whenever I’m shopping around for a service or product. Most of us are thrifty buyers looking for someone or something that can help us solve a problem.

Because of that, I don’t give two hoots about you and your story. I’m a buyer, and I need something. Lead, follow, or get out of my way. Stop shoving how great you are down my throat!

Businesses are adding too much extra noise to our digital world, and I’ve made it my goal to help more businesses make better content marketing choices and stop contributing to that noise.

That goal usually starts with the right price. Most agencies and freelancers will NOT put their prices on their websites or anywhere else in public. We don’t have what would be considered “standard pricing” either. There are a few reasons for this:

1. There’s no telling what you need or want in a content marketing plan. Once we have an idea of what you’re trying to do and the timeline for your goals, we’ll be able to tell you how much time and effort it’ll take us to help you.

2. There’s a bit of a price tag because of the time and materials required to perform the work. Further, the industry is highly competitive, and certain competitors will do a half-assed job for you at half the price. We’re not going to help them prey on you. It’ll end up costing you more in the long run.

3. Would you hire an employee without meeting them first? Probably not. And you shouldn’t hire a marketing consultant without meeting them first, either. I act as an extension of your team, and I want to meet you before we do anything. I don’t do hard sales – that defeats the purpose of my goal to minimize irrelevant noise and build B2B partnerships.

What should you expect in terms of content marketing costs?

If someone is offering content marketing for under $1,000/mo, you should be wary.  Here’s why:

  • Unless they’re working for minimum wage, it’ll be difficult to create and implement a content strategy on that budget. 
  • The best people in the industry who can use content marketing to grow your business start at around $80/hour and can go up to $250/hour.
  • Expect a good content marketer to need a minimum of 20 hours per month to do this work. 
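Put those two numbers together and the warning above is just arithmetic; here’s the quick version:

```python
# Why sub-$1,000/month content marketing offers deserve scrutiny:
# even the low end of the market rate times the minimum monthly hours
# is well above that threshold.
low_rate, high_rate = 80, 250  # dollars per hour, range quoted above
min_hours = 20                 # minimum hours per month for this work

low_budget = low_rate * min_hours    # 80 * 20 = $1,600/month
high_budget = high_rate * min_hours  # 250 * 20 = $5,000/month
print(f"Expect roughly ${low_budget:,} to ${high_budget:,} per month.")
```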

I’ve been in the content marketing niche for more than a decade, and I’m still surprised by the drastic differences in prices across freelancers and agencies. It’s important to know what to expect, and never hesitate to ask questions of someone trying to sell you marketing services. Here are a few questions to consider:

  1. How will you track my return on investment? 
  2. Who will be responsible for what? 
  3. Who owns all the assets?
  4. Are there long-term or short-term contracts?

Helpful Tips

When looking for content marketing support, keep the following in mind: 

  • Small and medium businesses can benefit from a small yet powerful content marketing team with a diverse skill set. Don’t overspend at bigger agencies where you’re paying for skills you may not need, and don’t underspend by hiring a single freelancer without the appropriate skill set.
  • Outsourcing is usually better than bringing someone in-house because you don’t have to worry about management, space, or overhead.
  • Don’t try to pile the tasks onto a current employee. Content marketing is a skilled trade, and one of the biggest, most glaring holes in small and medium business marketing plans is a lack of qualified talent.
    • For example, your secretary or cashier may know how to use social media, but that doesn’t make them a social media expert who can sell online.

Reality Check

Respect your brand, respect your customers, and respect the industry.  Don’t insult your customers with cheap marketing ploys and ugly branding efforts.  They notice.  Marketing should be all about giving customers what they want and need in a way that benefits your business.

If you don’t know what that means, I’ll put it another way: if the person doing your marketing has never been formally trained in the industry in some way, you are cutting corners. Your audience can see that, and they’ll respond by giving their business to someone else and forgetting all about you.

Let’s talk more. 

Discovery Conversion Case Study – Inbound Marketing Agency



Rachael Herman

Business Message/Focus

Our client, a restaurant, is focused on local partnerships, sustainability, and healthy eating.

The Promotion

The promotion was for a limited-time item featuring a special vegetable from a local farmer. Customers had to pre-order for pickup on scheduled days.

The Marketing Budget

We had a small marketing budget of $100 for this single promotion, and there would be no paid advertising.

The Results

Of the 5,000 people reached digitally, 125 went to the landing page to see what this was all about.

Of those, 14 people ordered 25 quarts of soup. That’s a click-through rate (CTR) of 2.5% (125 of 5,000 reached), a visitor-to-customer conversion of 11.2% (14 of 125), and a unit-per-visitor rate of 20% (25 of 125). Those are good conversions for an organic promotion.

All other sales were made in-store or over the phone, for total net sales of $702. The client spent $450 making the item and, with marketing costs included (we came in under budget), ROI was calculated to be 1.7, or 170%.

Not a great return, but it is positive, so there’s room for PPC planning. 
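For anyone who wants to check the math, here’s a short sketch of the calculations behind those numbers. We only disclosed that marketing came in under the $100 budget, so the exact spend below is an assumption chosen to land near the reported ROI; the conversion rates follow directly from the reported counts:

```python
# Reported figures from the promotion.
reach = 5_000             # people reached digitally
landing_visits = 125      # went to the landing page
customers = 14            # people who ordered
units_sold = 25           # quarts of soup ordered
net_sales = 702.00        # total net sales, in dollars
cost_of_goods = 450.00    # spent making the item
marketing_spend = 93.00   # ASSUMPTION: we only know it was under $100

# The conversion rates quoted above.
ctr = landing_visits / reach                      # 125 / 5,000 = 2.5%
visitor_to_customer = customers / landing_visits  # 14 / 125 = 11.2%
units_per_visitor = units_sold / landing_visits   # 25 / 125 = 20%

# ROI as return on the marketing spend: profit left after item and
# marketing costs, divided by the marketing cost. At ~$93 of spend this
# lands near the reported 1.7 (170%); the exact figure depends on actual spend.
profit = net_sales - cost_of_goods - marketing_spend
roi = profit / marketing_spend

print(f"CTR {ctr:.1%}, visitor-to-customer {visitor_to_customer:.1%}, "
      f"units per visitor {units_per_visitor:.0%}, ROI {roi:.1f} ({roi:.0%})")
```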

What We Recommended

We recommended increasing exposure by adding pay-per-click on Google and Facebook.  If we increase exposure to 50,000 digital views by adding just $200 to that part of the budget, we could improve ROI to about 3-5, based on the above conversion rates. 

We’d focus on driving traffic to a specific landing page, using UTM codes to track performance and properly target the ads. We also recommended long-term digital marketing through inbound methods to improve long-term growth, sales, and customer retention.
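Under the same conversion rates, the projection sketches out like this. Treat it as a rough model: it assumes paid reach converts like organic reach did, which a real PPC campaign may not match, and the final ROI still depends on per-order costs and margins:

```python
# Rough projection for the PPC recommendation, reusing the organic rates
# from the promotion above. ASSUMPTION: paid traffic converts at the same
# rates as organic traffic; validate with a small test campaign.
projected_reach = 50_000
ctr = 0.025                  # 2.5%: reach -> landing page
visitor_to_customer = 0.112  # 11.2%: landing page -> customer

visits = projected_reach * ctr                # 1,250 landing-page visits
new_customers = visits * visitor_to_customer  # ~140 customers
print(f"~{visits:,.0f} visits and ~{new_customers:,.0f} customers")
```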




How Facebook Is Prepared for the 2019 UK General Election

Today, leaders from our offices in London and Menlo Park, California spoke with members of the press about Facebook’s efforts to prepare for the upcoming General Election in the UK on December 12, 2019. The following is a transcript of their remarks.

Rebecca Stimson, Head of UK Public Policy, Facebook

We wanted to bring you all together, now that the UK General Election is underway, to set out the range of actions we are taking to help ensure this election is transparent and secure – to answer your questions and to point you to the various resources we have available.  

There has already been a lot of focus on the role of social media within the campaign and there is a lot of information for us to set out. 

We have therefore gathered colleagues from both the UK and our headquarters in Menlo Park, California, covering our politics, product, policy and safety teams to take you through the details of those efforts. 

I will just say a few opening remarks before we dive into the details.

Helping protect elections is one of our top priorities and over the last two years we’ve made some significant changes – these broadly fall into three camps:

  • We’ve introduced greater transparency so that people know what they are seeing online and can scrutinize it more effectively; 
  • We have built stronger defenses to prevent things like foreign interference; 
  • And we have invested in both people and technology to ensure these new policies are effective.

So taking these in turn. 

Transparency

On the issue of transparency: we’ve tightened our rules to make political ads much more transparent, so people can see who is trying to influence their vote and what they are saying. 

We’ll discuss this in more detail shortly, but to summarize:  

  • Anybody who wants to run political ads must go through a verification process to prove who they are and that they are based here in the UK; 
  • Every political ad is labelled so you can see who has paid for it;
  • Anybody can click on any ad they see on Facebook and get more information on why they are seeing it, as well as block ads from particular advertisers;
  • And finally, we put all political ads in an Ad Library so that everyone can see what ads are running, the types of people who saw them and how much was spent – not just while the ads are live, but for seven years afterwards.

Taken together these changes mean that political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails. 

This is the first UK general election since we introduced these changes and we’re already seeing many journalists using these transparency tools to scrutinize the adverts which are running during this election – this is something we welcome and it’s exactly why we introduced these changes. 

Defense 

Turning to the stronger defenses we have put in place.

Nathaniel will shortly set out in more detail our work to prevent foreign interference and coordinated inauthentic behavior. But before he does I want to be clear right up front how seriously we take these issues and our commitment to doing everything we can to prevent election interference on our platforms. 

So just to highlight one of the things he will be talking about – we have, as part of this work, cracked down significantly on fake accounts. 

We now identify and shut down millions of fake accounts every day, many just seconds after they were created.

Investment

And lastly turning to investment in these issues.

We now have more than 35,000 people working on safety and security. We have been building and rolling out many of the new tools you will be hearing about today. And as Ella will set out later, we have introduced a number of safety measures including a dedicated reporting channel so that all candidates in the election can flag any abusive and threatening content directly to our teams.  

I’m also pleased to say that – now the election is underway – we have brought together an Elections Taskforce of people from our teams across the UK, EMEA and the US who are already working together every day to ensure election integrity on our platforms. 

The Elections Taskforce will be working on issues including threat intelligence, data science, engineering, operations, legal and others. It also includes representatives from WhatsApp and Instagram.

As we get closer to the election, these people will be brought together in physical spaces in their offices – what we call our Operations Centre. 

It’s important to remember that the Elections Taskforce is an additional layer of security on top of our ongoing monitoring for threats on the platform which operates 24/7. 

And while there will always be further improvements we can and will continue to make, and we can never say there won’t be challenges to respond to, we are confident that we’re better prepared than ever before.  

Political Ads

Before I wrap up this intro section of today’s call I also want to address two of the issues that have been hotly debated in the last few weeks – firstly whether political ads should be allowed on social media at all and secondly whether social media companies should decide what politicians can and can’t say as part of their campaigns. 

As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether. They account for just 0.5% of our revenue and they’re always destined to be controversial. 

But we believe it’s important that candidates and politicians can communicate with their constituents and would-be constituents. 

Online political ads are also important for both new challengers and campaigning groups to get their message out. 

Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether. 

And there’s also a really difficult question – if you were to consider banning political ads, where do you draw the line – for example, would anyone advocate for blocking ads for important issues like climate change or women’s empowerment? 

Turning to the second issue – there is also a question about whether we should decide what politicians and political parties can and can’t say.  

We don’t believe a private company like Facebook should censor politicians. This is why we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

This doesn’t mean that politicians can say whatever they want on Facebook. They can’t spread misinformation about where, when or how to vote. They can’t incite violence. We won’t allow them to share content that has previously been debunked as part of our third-party fact-checking program. And we of course take down content that violates local laws. 

But in general we believe political speech should be heard and we don’t feel it is right for private companies like us to fact-check or judge the veracity of what politicians and political parties say. 

Facebook’s approach to this issue is in line with the way political speech and campaigns have been treated in the UK for decades. 

Here in the UK – an open democracy with a vibrant free press – political speech has always been heavily scrutinized but it is not regulated. 

The UK has decided that there shouldn’t be rules about what political parties and candidates can and can’t say in their leaflets, direct mails, emails, billboards, newspaper ads or on the side of campaign buses.  

And as we’ve seen when politicians and campaigns have made hotly contested claims in previous elections and referenda, it’s not been the role of the Advertising Standards Authority, the Electoral Commission or any other regulator to police political speech. 

In our country it’s always been up to the media and the voters to scrutinize what politicians say and make their own minds up. 

Nevertheless, we have long called for new rules for the era of digital campaigning. 

Questions around what constitutes a political ad, who can run them and when, what steps those who purchase political ads must take, how much they can spend on them and whether there should be any rules on what they can and can’t say – these are all matters that can only be properly decided by Parliament and regulators.  

Legislation should be updated to set standards for the whole industry – for example, should all online political advertising be recorded in a public archive similar to our Ad Library and should that extend to traditional platforms like billboards, leaflets and direct mail?

We believe UK electoral law needs to be brought into the 21st century to give clarity to everyone – political parties, candidates and the platforms they use to promote their campaigns.

In the meantime our focus has been to increase transparency so anyone, anywhere, can scrutinize every ad that’s run and by whom. 

I will now pass you to the team to talk you through our efforts in more detail.

  • Nathaniel Gleicher will discuss tackling fake accounts and disrupting coordinated inauthentic behavior;
  • Rob Leathern will take you through our UK political advertising measures and Ad Library;
  • Antonia Woodford will outline our work tackling misinformation and our fact-checker partnerships; 
  • And finally, Ella Fallows will fill you in on what we’re doing around the safety of candidates and how we’re encouraging people to participate in the election.

Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook 

My team leads all our efforts across our apps to find and stop what we call influence operations, coordinated efforts to manipulate or corrupt public debate for a strategic goal. 

We also conduct regular red team exercises, both internally and with external partners to put ourselves into the shoes of threat actors and use that approach to identify and prepare for new and emerging threats. We’ll talk about some of the products of these efforts today. 

Before I dive into some of the details, as you’re listening to Rob, Antonia, and me, we’re going to be talking about a number of different initiatives that Facebook is focused on, both to protect the UK General Election and, more broadly, to respond to integrity threats. I want to give you a brief framework for how to think about these. 

The key distinction that you’ll hear again and again is the distinction between content and behavior. At Facebook, we have policies that enable us to take action when we see content that violates our Community Standards. 

In addition, we have the tools that we use to respond when we see an actor engaged in deceptive or violating behavior, and we keep these two efforts distinct. And so, as you listen to us, we’ll be talking about different initiatives we have in both dimensions. 

Under content for example, you’ll hear Antonia talk about misinformation, about voter suppression, about hate speech, and about other types of content that we can take action against if someone tries to share that content on our platform. 

Under the behavioral side, you’ll hear me and you’ll hear Rob also mention some of our work around influence operations, around spam, and around hacking. 

I’m going to focus in particular on the first of these, influence operations; but the key distinction that I want to make is when we take action to remove someone because of their deceptive behavior, we’re not looking at, we’re not reviewing, and we’re not considering the content that they’re sharing. 

What we’re focused on is the fact that they are deceiving or misleading users through their actions. For example, using networks of fake accounts to conceal who they are and conceal who’s behind the operation. So we’ll refer back to these, but I think it’s helpful to distinguish between the content side of our enforcement and the behavior side of our enforcement. 

And that’s particularly important because we’ve seen some threat actors who work to understand where the boundaries are for content and make sure for example that the type of content they share doesn’t quite cross the line. 

And when we see someone doing that, because we have behavioral enforcement tools as well, we’re still able to make sure we’re protecting authenticity and public debate on the platform. 

In each of these dimensions, there are four pillars to our work. You’ll hear us refer to each of these during the call as well, but let me just say that these four fit together: no one of them by itself would be enough, but all four of them together give us a layered approach to defending public debate and ensuring authenticity on the platform. 

We have expert investigative teams that conduct proactive investigations to find, expose, and disrupt sophisticated threat actors. As we do that, we learn from those investigations and we build automated systems that can disrupt any kind of violating behavior across the platform at scale. 

We also, as Rebecca mentioned, build transparency tools so that users, external researchers and the press can see who is using the platform and ensure that they’re engaging authentically. It also forces threat actors who are trying to conceal their identity to work harder to conceal and mislead. 

And then lastly, one of the things that’s extremely clear to us, particularly in the election space, is that this is a whole of society effort. And so, we work closely with partners in government, in civil society, and across industry to tackle these threats. 

And we’ve found that where we could be most effective is where we bring the tools we bring to the table, and then can work with government and work with other partners to respond and get ahead of these challenges as they emerge. 

One of the ways that we do this is through proactive investigations into the deceptive efforts engaged in by bad actors. Over the last year, our investigative teams, working together with our partners in civil society, law enforcement, and industry, have found and stopped more than 50 campaigns engaged in coordinated inauthentic behavior across the world. 

This includes an operation we removed in May that originated from Iran and targeted a number of countries, including the UK. As we announced at the time, we removed 51 Facebook accounts, 36 pages, seven groups, and three Instagram accounts involved in coordinated inauthentic behavior. 

The page admins and account owners typically posted content in English or Arabic, and most of the operation had no focus on a particular country, although there were some pages focused on the UK and the United States. 

Similarly, in March we announced that we removed a domestic UK network of about 137 Facebook and Instagram accounts, pages, and groups that were engaged in coordinated inauthentic behavior. 

The individuals behind these accounts presented themselves as far right and anti-far right activists, frequently changed page and group names, and operated fake accounts to engage in hate speech and spread divisive comments on both sides of the political debate in the UK. 

These are the types of investigations that we focus our core investigative team on. Whenever we see a sophisticated actor that’s trying to evade our automated systems, those teams, which are made up of experts from law enforcement, the intelligence community, and investigative journalism, can find and reveal that behavior. 

When we expose it, we announce it publicly and we remove it from the platform. Those expert investigators proactively hunt for evidence of these types of coordinated inauthentic behavior (CIB) operations around the world. 

This team has not seen evidence of widespread foreign operations aimed at the UK. But we are continuing to search for this and we will remove and publicly share details of networks of CIB that we identify on our platforms. 

As always with these takedowns, we remove these operations for the deceptive behavior they engaged in, not for the content they shared. This is that content/behavior distinction that I mentioned earlier. As we’ve improved our ability to disrupt these operations, we’ve also deepened our understanding of the types of threats out there and how best to counter them. 

Based on these learnings, we’ve recently updated our inauthentic behavior policy, which is posted publicly as part of our Community Standards, to clarify how we enforce against the spectrum of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state. For each investigation, we isolate any new behaviors we see and then work to automate detection of them at scale. This connects to the second pillar of our integrity work. 

This slows down the bad actors and lets our investigators focus on improving our defenses against emerging threats. A good example of this work is our effort to find and block fake accounts, which Rebecca mentioned. 

We know bad actors use fake accounts as a way to mask their identity and inflict harm on our platforms. That’s why we’ve built an automated system to find and remove these fake accounts. And each time we conduct one of these takedowns, or any other of our enforcement actions, we learn more about what fake accounts look like and how we can have automated systems that detect and block them. 

This is why we have these systems in place today that block millions of fake accounts every day, often within minutes of their creation. Because information operations often target multiple platforms as well as traditional media, I mentioned our collaborations with industry, civil society and government. 

In addition to that, we are building increased transparency on our platform, so that the public along with open source researchers and journalists can find and expose more bad behavior themselves. 

This effort on transparency is incredibly important. Rob will talk about this in detail, but I do want to add one point here, specifically around Pages. Increasingly, we’re seeing people operate pages that conceal the organization behind them as a way to make others think they are independent. 

We want to make sure Facebook is used to engage authentically, and that users understand who is speaking to them and what perspective they are representing. We noted last month that we would be announcing new approaches to address this, and today we’re introducing a policy to require more accountability for pages that are concealing their ownership in order to mislead people.

If we find a page is misleading people about its purpose by concealing its ownership, we will require it to go through our business verification process, which we recently announced, and show more information on the page itself about who is behind that page, including the organization’s legal name and verified city, phone number, or website in order for it to stay up. 

This type of increased transparency helps ensure that the platform continues to be authentic and the people who use the platform know who they’re talking to and understand what they’re seeing. 

Rob Leathern, Director of Product, Business Integrity, Facebook 

In addition to making pages more transparent as Nathaniel has indicated, we’ve also put a lot of effort into making political advertising on Facebook more transparent than it is anywhere else. 

Every political and issue ad that runs on Facebook in the UK now goes into our Ad Library, a public archive that everyone can access, regardless of whether or not they have a Facebook account. 

We launched this in the UK in October 2018 and, since then, more than 116,000 ads related to politics, elections, and social issues have been placed in the UK Ad Library. You can find all the ads that a candidate or organization is running, including how much they spent and who saw the ad. And we store these ads in the Ad Library for seven years. 

Other media such as billboards, newspaper ads, direct mail, leaflets, or targeted emails don’t today provide this level of transparency into the ads and who is seeing them. As a result, we’ve seen a significant number of press stories about the election driven by information in Facebook’s Ad Library. 

We’re proud of this resource and the insight it offers into ads running on Facebook and Instagram, and that it is proving useful for media and researchers. Just last month, we made even more changes to both the Ad Library and Ad Library Reports. These include adding details on who the top advertising spenders are in each country of the UK, as well as providing an additional view by different date ranges, which people have been asking for. 

We’re now also making it clear which platform an ad ran on – for example, whether an ad ran on Facebook, Instagram, or both. 

For those of you unfamiliar with the Ad Library, which you can see at Facebook.com/adlibrary, I thought I’d run through it quickly. 

So this is the Ad Library. Here you see all the ads that have been classified as relating to politics or issues. We keep them in the library for seven years. As I mentioned, you can find the Ad Library at Facebook.com/adlibrary. 

You can also access the Ad Library through a specific page. For example, for this Page, you can see not only the advertising information, but also the transparency about the Page itself, along with the spend data. 

Here is an example of the ads that this Page is running, both active as well as inactive. In addition, if an ad has been disapproved for violating any of our ad policies, you’re also able to see all of those ads as well. 

Here’s what it looks like if you click to see more detail about a specific ad. You’ll be able to see individual ad spend, impressions, and demographic information. 

And you’ll also be able to compare the individual ad spend to the overall macro spend by the Page, which is tracked in the section below. If you scroll back up, you’ll also be able to see the other information about the disclaimer that has been provided by the advertiser. 

We know we can’t protect elections alone and that everyone plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Advertising Policies and Community Standards. 

We also have the Ad Library API so journalists and academics can analyze ads about social issues, elections, or politics. The Ad Library application programming interface, or API, allows people to perform customized keyword searches of ads stored in the Ad Library. You can search data for all active and inactive issue, electoral or political ads. 
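As an illustration, here’s a minimal sketch of a keyword search against the Ad Library API in Python. The endpoint, parameters, and field names below follow the public documentation at the time of writing, but treat the API version and exact field names as assumptions to verify against the current docs; you also need your own access token from Facebook’s developer identity-verification process:

```python
import requests

# Minimal Ad Library API keyword search (a sketch; verify the API version,
# parameter names, and fields against Facebook's current documentation).
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # issued after developer identity verification

response = requests.get(
    "https://graph.facebook.com/v5.0/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "search_terms": "election",            # keyword to search for
        "ad_type": "POLITICAL_AND_ISSUE_ADS",  # political/issue ads only
        "ad_reached_countries": "['GB']",      # ads delivered in the UK
        "fields": "page_name,ad_creative_body,spend,impressions",
        "limit": 25,
    },
)
response.raise_for_status()

# Spend and impressions come back as ranges (lower/upper bounds).
for ad in response.json().get("data", []):
    spend = ad.get("spend", {})
    print(ad.get("page_name"),
          f"spend: {spend.get('lower_bound')}-{spend.get('upper_bound')}")
```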

You can also access the Ad Library data through a specific Page or through the Ad Library Report. The Ad Library Report lets you see spend by specific advertisers, and you can download a full report of the data. 

We also show spending by location; if you click in, you can see the top spenders in each region. 

Our goal is to provide an open API to news organizations, researchers, groups and people who can hold advertisers and us more accountable. 

We’ve definitely seen a lot of press, journalists, and researchers examining the data in the Ad Library and using it to generate these insights and we think that’s exactly a part of what will help hold both us and advertisers more accountable.

We hope these measures will build on existing transparency we have in place and help reporters, researchers and most importantly people on Facebook learn more about the Pages and information they’re engaging with. 

Antonia Woodford, Product Manager, Misinformation, Facebook

We are committed to fighting the spread of misinformation and viral hoaxes on Facebook. It is a responsibility we take seriously.

To accomplish this, we follow a three-pronged approach, which we call remove, reduce, and inform. First and foremost, when something violates the law or our policies, we’ll remove it from the platform altogether.

As Nathaniel touched on, removing fake accounts is a priority; the vast majority are detected and removed within minutes of registration, before a person can even report them. This is a key element in eliminating the potential spread of misinformation. 

The reduce and inform part of the equation is how we reduce the spread of problematic content that doesn’t violate the law or our community standards, while still ensuring freedom of expression on the platform and this is where the majority of our misinformation work is focused. 

To reduce the spread of misinformation, we work with third party fact-checkers. 

Through a combination of reporting from people on our platform and machine learning, potentially false posts are sent to third party fact-checkers to review. These fact-checkers review this content, check the facts, and then rate its accuracy. They’re able to review links in news articles as well as photos, videos, or text posts on Facebook.

After content has been rated false, our algorithm heavily downranks it in News Feed so it’s seen by fewer people and is far less likely to go viral. Fact-checkers can review any posts they choose from the queue we send them. 
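To make that flow concrete, here’s an illustrative sketch of the “reduce” pipeline in Python. To be clear, this is not Facebook’s code or its machine-learning system, just the sequence described above: flagged posts go into a queue, independent fact-checkers rate the ones they choose, and a false rating triggers heavy downranking rather than removal:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    rating: Optional[str] = None  # set by a fact-checker, e.g. "false"
    rank_multiplier: float = 1.0  # scales distribution in the feed

# 1. User reports plus machine-learning signals queue potentially
#    false posts for independent review.
review_queue: list = []

def flag_for_review(post: Post) -> None:
    review_queue.append(post)

# 2. Fact-checkers pick posts from the queue and rate their accuracy.
# 3. Content rated false is heavily downranked, not removed.
def record_rating(post: Post, rating: str) -> None:
    post.rating = rating
    if rating in ("false", "partly false"):
        post.rank_multiplier = 0.1  # illustrative value, not a real figure
```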

And lastly, as part of our work to inform people about the content they see on Facebook, we just launched a new design to better warn people when they see content that has been rated false or partly false by our fact-checking partners.

People will now see a more prominent label on photos and videos that have been fact-checked as false or partly false. This is a grey screen that sits over a post and says ‘false information’ and points people to fact-checkers’ articles debunking the claims. 

These clearer labels are what people have told us they want, what they have told us they expect Facebook to do, and what experts tell us is the right tactic for combating misinformation.

We’re rolling this change out in the UK this week for any photos and videos that have been rated through our fact-checking partnership. Though just one part of our overall strategy, fact-checking is fundamental to combating misinformation, and I want to share a little bit more about the program.

Our fact-checking partners are all accredited by the International Fact-Checking Network, which requires them to abide by a code of principles such as nonpartisanship and transparency of sources.

We currently have over 50 partners in over 40 languages around the world. As Rebecca outlined earlier, we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

Here in the UK we work with Full Fact and FactCheckNI as part of our program. To recap: we identify content that may be false using signals such as feedback from our users. This content is submitted into a queue for our fact-checking partners to access. These fact-checkers then choose which content to review, check the facts, and rate the accuracy of the content.

These fact-checkers are independent organizations, so it is at their discretion what they choose to investigate. They can also fact-check whatever content they want outside of the posts we send their way.

If a fact-checker rates a story as false, it will appear lower in News Feed with the false information screen I mentioned earlier. This significantly reduces the number of people who see it.

Other posts that Full Fact and FactCheckNI choose to fact-check outside of our system will not be impacted on Facebook. 

And finally, on Tuesday we announced a partnership with the International Fact-Checking Network to create the Fact-Checking Innovation Initiative. This will fund innovation projects, new formats, and technologies to help benefit the broader fact-checking ecosystem. 

We are investing $500,000 into this new initiative, where organizations can submit applications for projects to improve fact-checkers’ scale and efficiency, increase the reach of fact-checks to empower more people with reliable information, build new tools to help combat misinformation, and encourage newsrooms to collaborate in fact-checking efforts.

Anyone from the UK can be a part of this new initiative. 

Ella Fallows, Politics and Government Outreach Manager UK, Facebook 

Our team’s role involves two main tasks: working with MPs and candidates to ensure they have a good experience and get the most from our platforms; and looking at how we can best use our platforms to promote participation in elections.

I’d like to start with the safety of MPs and candidates using our platforms. 

There is, rightly, a focus in the UK about the current tone of political debate. Let me be clear, hate speech and threats of violence have no place on our platforms and we’re investing heavily to tackle them. 

Additionally, for this campaign we have this week written to political parties and candidates setting out the range of safety measures we have in place and also to remind them of the terms and conditions and the Community Standards which govern their use of our platforms. 

As you may be aware, every piece of content on Facebook and Instagram has a report button, and when content that violates our Community Standards (what is and isn’t allowed on Facebook) is reported to us, it is removed. 

Since March this year, MPs have also had access to a dedicated reporting channel to flag any abusive and threatening content directly to our teams. Now that the General Election is underway we’re extending that support to all prospective candidates, making our team available to anyone standing to allow them to quickly report any concerns across our platforms and have them investigated. 

This is particularly pertinent to Tuesday’s news from the Government calling for a one-stop shop for candidates; we have already set up our own one-stop shop, a single point of contact for candidates for issues across Facebook and Instagram.

Behind that reporting channel sits my team, which is focused on escalating reports from candidates and making sure we’re taking action as quickly as possible on anything that violates our Community Standards or Advertising Policies. 

But that team is not working alone – it’s backed up by our 35,000-strong global safety and security team that oversees content and behavior across the platform every day. 

And our technology is also helping us to automatically detect more of this harmful content. For example, while there is further to go, the proportion of hate speech we remove before it’s reported to us has almost tripled over the last two years.

We also have a Government, Politics & Advocacy Portal which is a home for everything a candidate will need during the campaign, including ‘how to’ guides on subjects such as registering as a political advertiser and running campaigns on Facebook, best practice tips and troubleshooting guides for technical issues.

We’re working with all of the political parties and the Electoral Commission to ensure candidates are aware of both the reporting channel to reach my team and the Government, Politics & Advocacy Portal.

We’re also working with political parties and the Electoral Commission to help candidates prepare for the election through a few different initiatives:

  • Firstly, while we don’t provide ongoing guidance or embed anyone into campaigns, we have held sessions with each party on how to use and get the most from our platforms for their campaigns, and we’ll continue to hold webinars throughout the General Election period for any candidate and their staff to join.
  • We’re also working with women’s networks within the parties to hold dedicated sessions for female candidates providing extra guidance on safety and outlining the help available to prevent harassment on our platforms. We want to ensure we’re doing everything possible to help them connect with their constituents, free from harassment.
  • Finally, we’re working with the Electoral Commission and political parties to distribute to every candidate in the General Election the safety guides we have put together, to ensure we reach everyone not just those attending our outreach sessions. 

For example, we have developed a range of tools that allow public figures to moderate and filter the content people put on their Facebook Pages, preventing negative content from appearing in the first place. People who help manage Pages can hide or delete individual comments. 

They can also proactively moderate comments and posts by visitors by turning on the profanity filter, or blocking specific words or lists of words that they do not want to appear on their Page. Page admins can also remove or ban people from their Pages. 
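As a rough illustration of what a keyword blocklist does, here’s a generic sketch. It is not Facebook’s implementation, and Page admins get this behavior through Page settings rather than code; the blocked terms are hypothetical:

```python
# Generic keyword-based comment moderation -- the kind of behavior a Page's
# profanity filter and blocked-word list provide. Illustrative only.
BLOCKED_WORDS = {"spamword", "buyfollowers"}  # hypothetical admin-chosen terms

def is_hidden(comment: str) -> bool:
    """Hide a comment if it contains any blocked word."""
    tokens = {word.strip(".,!?").lower() for word in comment.split()}
    return not BLOCKED_WORDS.isdisjoint(tokens)

comments = ["Great to meet you at the hustings!", "spamword click here"]
visible = [c for c in comments if not is_hidden(c)]
print(visible)  # ['Great to meet you at the hustings!']
```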

We hope these steps help every candidate to reach their constituents, and get the most from our platforms. But our work doesn’t stop there.

The second area our team focuses on is promoting civic engagement. In addition to supporting and advising candidates, we also, of course, want to help promote voter participation in the election. 

For the past five years, we’ve used badges and reminders at the top of people’s News Feeds to encourage people to vote in elections around the world. The same will be true for this campaign. 

We’ll run reminders to register to vote, with a link to the Electoral Commission’s voter registration page, in the week running up to the voter registration deadline. 

On election day itself, we’ll also run a reminder to vote with a link to the Electoral Commission website so voters can find their polling station and any information they need. This will include a button to share that you voted. 

We know from speaking to the Electoral Commission that these reminders for past national votes in the UK have had a positive effect on voter registration.

We hope that this combination of steps will help to ensure both candidates and voters engaging with the General Election on our platforms have the best possible experience.






How Facebook Has Prepared for the 2019 UK General Election

Today, leaders from our offices in London and Menlo Park, California spoke with members of the press about Facebook’s efforts to prepare for the upcoming General Election in the UK on December 12, 2019. The following is a transcript of their remarks.

Rebecca Stimson, Head of UK Public Policy, Facebook

We wanted to bring you all together, now that the UK General Election is underway, to set out the range of actions we are taking to help ensure this election is transparent and secure – to answer your questions and to point you to the various resources we have available.  

There has already been a lot of focus on the role of social media within the campaign and there is a lot of information for us to set out. 

We have therefore gathered colleagues from both the UK and our headquarters in Menlo Park, California, covering our politics, product, policy and safety teams to take you through the details of those efforts. 

I will just say a few opening remarks before we dive into the details

Helping protect elections is one of our top priorities and over the last two years we’ve made some significant changes – these broadly fall into three camps:

  • We’ve introduced greater transparency so that people know what they are seeing online and can scrutinize it more effectively; 
  • We have built stronger defenses to prevent things like foreign interference; 
  • And we have invested in both people and technology to ensure these new policies are effective.

So taking these in turn. 

Transparency

On the issue of transparency. We’ve tightened our rules to make political ads much more transparent, so people can see who is trying to influence their vote and what they are saying. 

We’ll discuss this in more detail shortly, but to summarize:  

  • Anybody who wants to run political ads must go through a verification process to prove who they are and that they are based here in the UK; 
  • Every political ad is labelled so you can see who has paid for them;
  • Anybody can click on any ad they see on Facebook and get more information on why they are seeing it, as well as block ads from particular advertisers;
  • And finally, we put all political ads in an Ad Library so that everyone can see what ads are running, the types of people who saw them and how much was spent – not just while the ads are live, but for seven years afterwards.

Taken together these changes mean that political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails. 

This is the first UK general election since we introduced these changes and we’re already seeing many journalists using these transparency tools to scrutinize the adverts which are running during this election – this is something we welcome and it’s exactly why we introduced these changes. 

Defense 

Turning to the stronger defenses we have put in place.

Nathaniel will shortly set out in more detail our work to prevent foreign interference and coordinated inauthentic behavior. But before he does I want to be clear right up front how seriously we take these issues and our commitment to doing everything we can to prevent election interference on our platforms. 

So just to highlight one of the things he will be talking about – we have, as part of this work, cracked down significantly on fake accounts. 

We now identify and shut down millions of fake accounts every day, many just seconds after they were created.

Investment

And lastly turning to investment in these issues.

We now have more than 35,000 people working on safety and security. We have been building and rolling out many of the new tools you will be hearing about today. And as Ella will set out later, we have introduced a number of safety measures including a dedicated reporting channel so that all candidates in the election can flag any abusive and threatening content directly to our teams.  

I’m also pleased to say that – now the election is underway – we have brought together an Elections Taskforce of people from our teams across the UK, EMEA and the US who are already working together every day to ensure election integrity on our platforms. 

The Elections Taskforce will be working on issues including threat intelligence, data science, engineering, operations, legal and others. It also includes representatives from WhatsApp and Instagram.

As we get closer to the election, these people will be brought together in physical spaces in their offices – what we call our Operations Centre. 

It’s important to remember that the Elections Taskforce is an additional layer of security on top of our ongoing, 24/7 monitoring for threats on the platform. 

And while there will always be further improvements we can and will continue to make, and we can never say there won’t be challenges to respond to, we are confident that we’re better prepared than ever before.  

Political Ads

Before I wrap up this intro section of today’s call I also want to address two of the issues that have been hotly debated in the last few weeks – firstly whether political ads should be allowed on social media at all and secondly whether social media companies should decide what politicians can and can’t say as part of their campaigns. 

As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether. They account for just 0.5% of our revenue and they’re always destined to be controversial. 

But we believe it’s important that candidates and politicians can communicate with their constituents and would-be constituents. 

Online political ads are also important for both new challengers and campaigning groups to get their message out. 

Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether. 

And there’s also a really difficult question – if you were to consider banning political ads, where do you draw the line – for example, would anyone advocate for blocking ads for important issues like climate change or women’s empowerment? 

Turning to the second issue – there is also a question about whether we should decide what politicians and political parties can and can’t say.  

We don’t believe a private company like Facebook should censor politicians. This is why we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

This doesn’t mean that politicians can say whatever they want on Facebook. They can’t spread misinformation about where, when or how to vote. They can’t incite violence. We won’t allow them to share content that has previously been debunked as part of our third-party fact-checking program. And we of course take down content that violates local laws. 

But in general we believe political speech should be heard and we don’t feel it is right for private companies like us to fact-check or judge the veracity of what politicians and political parties say. 

Facebook’s approach to this issue is in line with the way political speech and campaigns have been treated in the UK for decades. 

Here in the UK – an open democracy with a vibrant free press – political speech has always been heavily scrutinized but it is not regulated. 

The UK has decided that there shouldn’t be rules about what political parties and candidates can and can’t say in their leaflets, direct mails, emails, billboards, newspaper ads or on the side of campaign buses.  

And as we’ve seen when politicians and campaigns have made hotly contested claims in previous elections and referenda, it’s not been the role of the Advertising Standards Authority, the Electoral Commission or any other regulator to police political speech. 

In our country it’s always been up to the media and the voters to scrutinize what politicians say and make their own minds up. 

Nevertheless, we have long called for new rules for the era of digital campaigning. 

Questions around what constitutes a political ad, who can run them and when, what steps those who purchase political ads must take, how much they can spend on them and whether there should be any rules on what they can and can’t say – these are all matters that can only be properly decided by Parliament and regulators.  

Legislation should be updated to set standards for the whole industry – for example, should all online political advertising be recorded in a public archive similar to our Ad Library and should that extend to traditional platforms like billboards, leaflets and direct mail?

We believe UK electoral law needs to be brought into the 21st century to give clarity to everyone – political parties, candidates and the platforms they use to promote their campaigns.

In the meantime our focus has been to increase transparency so anyone, anywhere, can scrutinize every ad that’s run and by whom. 

I will now pass you to the team to talk you through our efforts in more detail.

  • Nathaniel Gleicher will discuss tackling fake accounts and disrupting coordinated inauthentic behavior;
  • Rob Leathern will take you through our UK political advertising measures and Ad Library;
  • Antonia Woodford will outline our work tackling misinformation and our fact-checker partnerships; 
  • And finally, Ella Fallows will fill you in on what we’re doing around candidate safety and how we’re encouraging people to participate in the election.

Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook 

My team leads all our efforts across our apps to find and stop what we call influence operations, coordinated efforts to manipulate or corrupt public debate for a strategic goal. 

We also conduct regular red team exercises, both internally and with external partners, to put ourselves in the shoes of threat actors and use that approach to identify and prepare for new and emerging threats. We’ll talk about some of the products of these efforts today. 

Before I dive into some of the details: as you’re listening to Rob, Antonia, and me, we’ll be talking about a number of different initiatives that Facebook is focused on, both to protect the UK general election and, more broadly, to respond to integrity threats. I wanted to give you a brief framework for how to think about these. 

The key distinction that you’ll hear again and again is a distinction between content and behavior. At Facebook, we have policies that enable us to take action when we see content that violates our Community Standards. 

In addition, we have the tools that we use to respond when we see an actor engaged in deceptive or violating behavior, and we keep these two efforts distinct. And so, as you listen to us, we’ll be talking about different initiatives we have in both dimensions. 

Under content for example, you’ll hear Antonia talk about misinformation, about voter suppression, about hate speech, and about other types of content that we can take action against if someone tries to share that content on our platform. 

Under the behavioral side, you’ll hear me and you’ll hear Rob also mention some of our work around influence operations, around spam, and around hacking. 

I’m going to focus in particular on the first of these, influence operations; but the key distinction I want to make is that when we take action to remove someone because of their deceptive behavior, we’re not looking at, reviewing, or considering the content that they’re sharing. 

What we’re focused on is the fact that they are deceiving or misleading users through their actions. For example, using networks of fake accounts to conceal who they are and conceal who’s behind the operation. So we’ll refer back to these, but I think it’s helpful to distinguish between the content side of our enforcement and the behavior side of our enforcement. 

And that’s particularly important because we’ve seen some threat actors who work to understand where the boundaries are for content and make sure for example that the type of content they share doesn’t quite cross the line. 

And when we see someone doing that, because we have behavioral enforcement tools as well, we’re still able to make sure we’re protecting authenticity and public debate on the platform. 

In each of these dimensions, there are four pillars to our work. You’ll hear us refer to each of these during the call as well, but let me just say that these four fit together: no one of them by itself would be enough, but all four of them together give us a layered approach to defending public debate and ensuring authenticity on the platform. 

We have expert investigative teams that conduct proactive investigations to find, expose, and disrupt sophisticated threat actors. As we do that, we learn from those investigations and we build automated systems that can disrupt any kind of violating behavior across the platform at scale. 

We also, as Rebecca mentioned, build transparency tools so that users, external researchers and the press can see who is using the platform and ensure that they’re engaging authentically. This also forces threat actors who are trying to conceal their identity to work harder to hide and mislead. 

And then lastly, one of the things that’s extremely clear to us, particularly in the election space, is that this is a whole of society effort. And so, we work closely with partners in government, in civil society, and across industry to tackle these threats. 

And we’ve found that we’re most effective when we bring our tools to the table and then work with government and other partners to respond to and get ahead of these challenges as they emerge. 

One of the ways that we do this is through proactive investigations into the deceptive efforts engaged in by bad actors. Over the last year, our investigative teams, working together with our partners in civil society, law enforcement, and industry, have found and stopped more than 50 campaigns engaged in coordinated inauthentic behavior across the world. 

This includes an operation we removed in May that originated from Iran and targeted a number of countries, including the UK. As we announced at the time, we removed 51 Facebook accounts, 36 pages, seven groups, and three Instagram accounts involved in coordinated inauthentic behavior. 

The page admins and account owners typically posted content in English or Arabic, and most of the operation had no focus on a particular country, although there were some pages focused on the UK and the United States. 

Similarly, in March we announced that we removed a domestic UK network of about 137 Facebook and Instagram accounts, pages, and groups that were engaged in coordinated inauthentic behavior. 

The individuals behind these accounts presented themselves as far right and anti-far right activists, frequently changed page and group names, and operated fake accounts to engage in hate speech and spread divisive comments on both sides of the political debate in the UK. 

These are the types of investigations that we focus our core investigative team on. Whenever we see a sophisticated actor that’s trying to evade our automated systems, those teams, which are made up of experts from law enforcement, the intelligence community, and investigative journalism, can find and reveal that behavior. 

When we expose it, we announce it publicly and we remove it from the platform. Those expert investigators proactively hunt for evidence of these types of coordinated inauthentic behavior (CIB) operations around the world. 

This team has not seen evidence of widespread foreign operations aimed at the UK. But we are continuing to search for this and we will remove and publicly share details of networks of CIB that we identify on our platforms. 

As always with these takedowns, we remove these operations for the deceptive behavior they engaged in, not for the content they shared. This is that content/behavior distinction that I mentioned earlier. As we’ve improved our ability to disrupt these operations, we’ve also deepened our understanding of the types of threats out there and how best to counter them. 

Based on these learnings, we’ve recently updated our inauthentic behavior policy, which is posted publicly as part of our Community Standards, to clarify how we enforce against the spectrum of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state. For each investigation, we isolate any new behaviors we see and then we work to automate detection of them at scale. This connects to that second pillar of our integrity work. 

And this slows down bad actors and lets our investigators focus on improving our defenses against emerging threats. A good example of this work is our efforts to find and block fake accounts, which Rebecca mentioned. 

We know bad actors use fake accounts as a way to mask their identity and inflict harm on our platforms. That’s why we’ve built an automated system to find and remove these fake accounts. And each time we conduct one of these takedowns, or any other of our enforcement actions, we learn more about what fake accounts look like and how we can have automated systems that detect and block them. 
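
To make that concrete, here is a minimal sketch of what signals-based fake-account scoring can look like. Every signal name, weight and threshold below is a hypothetical stand-in; Facebook has not published how its detection systems actually work.

```python
# Illustrative sketch only: all signal names, weights and thresholds here
# are hypothetical stand-ins, not Facebook's actual detection features.
from dataclasses import dataclass

@dataclass
class RegistrationSignals:
    seconds_since_creation: float  # how new the account is
    friend_requests_sent: int      # a burst of requests after signup is a common spam signal
    ip_reputation: float           # 0.0 (clean) to 1.0 (known abusive range)
    has_profile_photo: bool

def fake_account_score(s: RegistrationSignals) -> float:
    """Combine a few weighted signals into a 0..1 risk score."""
    score = 0.0
    if s.seconds_since_creation < 60 and s.friend_requests_sent > 20:
        score += 0.5               # high-velocity activity within a minute of signup
    score += 0.4 * s.ip_reputation
    if not s.has_profile_photo:
        score += 0.1
    return min(score, 1.0)

def should_block(s: RegistrationSignals, threshold: float = 0.7) -> bool:
    return fake_account_score(s) >= threshold

# A brand-new account blasting out friend requests from a bad IP range:
print(should_block(RegistrationSignals(30, 50, 0.8, False)))  # True
```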

This is why we have these systems in place today that block millions of fake accounts every day, often within minutes of their creation. And because information operations often target multiple platforms as well as traditional media, we collaborate with industry, civil society and government, as I mentioned. 

In addition to that, we are building increased transparency on our platform, so that the public along with open source researchers and journalists can find and expose more bad behavior themselves. 

This effort on transparency is incredibly important. Rob will talk about this in detail, but I do want to add one point here, specifically around pages. Increasingly, we’re seeing people operate pages that don’t clearly disclose the organization behind them as a way to make others think they are independent. 

We want to make sure Facebook is used to engage authentically, and that users understand who is speaking to them and what perspective they are representing. We noted last month that we would be announcing new approaches to address this, and today we’re introducing a policy to require more accountability for pages that are concealing their ownership in order to mislead people.

If we find a page is misleading people about its purpose by concealing its ownership, we will require it to go through our business verification process, which we recently announced, and show more information on the page itself about who is behind that page, including the organization’s legal name and verified city, phone number, or website in order for it to stay up. 

This type of increased transparency helps ensure that the platform continues to be authentic and the people who use the platform know who they’re talking to and understand what they’re seeing. 

Rob Leathern, Director of Product, Business Integrity, Facebook 

In addition to making pages more transparent as Nathaniel has indicated, we’ve also put a lot of effort into making political advertising on Facebook more transparent than it is anywhere else. 

Every political and issue ad that runs on Facebook now goes into our Ad Library, a public archive that everyone can access, regardless of whether or not they have a Facebook account. 

We launched this in the UK in October 2018 and, since then, more than 116,000 ads related to politics, elections, and social issues have been placed in the UK Ad Library. You can find all the ads that a candidate or organization is running, including how much they spent and who saw the ad. And we’re storing these ads in the Ad Library for seven years. 

Other media such as billboards, newspaper ads, direct mail, leaflets or targeted emails don’t today provide this level of transparency into the ads and who is seeing them. As a result, we’ve seen a significant number of press stories regarding the election driven by the information in Facebook’s Ad Library. 

We’re proud of this resource and the insight it provides into ads running on Facebook and Instagram, and that it is proving useful for media and researchers. Just last month, we made even more changes to both the Ad Library and Ad Library Reports. These include adding details on who the top advertising spenders are in each country of the UK, as well as providing additional views by different date ranges, which people have been asking for. 

We’re now also making it clear which of our platforms an ad ran on – for example, whether it ran on Facebook, Instagram or both. 

For those of you unfamiliar with the Ad Library, which you can see at Facebook.com/adlibrary, I thought I’d run through it quickly. 

So this is the Ad Library. Here you see all the ads that have been classified as relating to politics or issues. We keep them in the library for seven years. As I mentioned, you can find the Ad Library at Facebook.com/adlibrary. 

You can also access the Ad Library through a specific Page. For example, for this Page, you can see not only the advertising information, but also transparency information about the Page itself, along with the spend data. 

Here is an example of the ads that this Page is running, both active and inactive. In addition, if an ad has been disapproved for violating any of our ad policies, you’re able to see all of those ads as well. 

Here’s what it looks like if you click to see more detail about a specific ad. You’ll be able to see individual ad spend, impressions, and demographic information. 

And you’ll also be able to compare the individual ad spend to the overall macro spend by the Page, which is tracked in the section below. If you scroll back up, you’ll also be able to see other information, such as the disclaimer that has been provided by the advertiser. 

We know we can’t protect elections alone and that everyone plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Advertising Policies and Community Standards. 

We also have the Ad Library API so journalists and academics can analyze ads about social issues, elections, or politics. The Ad Library application programming interface, or API, allows people to perform customized keyword searches of ads stored in the Ad Library. You can search data for all active and inactive issue, electoral or political ads. 
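
For anyone who wants to try this, below is a minimal sketch of a keyword query against the ads_archive endpoint using Python’s requests library. The parameter and field names follow Facebook’s public Graph API documentation for the Ad Library from around the time of this post, but versions and names change, so treat it as illustrative and check the current docs; the access token placeholder is hypothetical and requires identity-confirmed developer access.

```python
# Minimal sketch of an Ad Library API keyword search. Endpoint, parameters
# and fields follow the public docs circa 2019; verify against current docs.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # hypothetical placeholder

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": '["GB"]',                        # UK ads only
    "search_terms": "climate",                               # customized keyword search
    "ad_active_status": "ALL",                               # both active and inactive ads
    "fields": "page_name,ad_creative_body,spend,impressions",
    "limit": 25,
}

resp = requests.get("https://graph.facebook.com/v5.0/ads_archive", params=params)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), "-", ad.get("spend"))
```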

You can also access the Ad Library and the data therein through a specific Page or through the Ad Library Report. Here is the Ad Library Report, which allows you to see the spend by specific advertisers; you can also download a full report of the data.

Here we also allow you to see spending by location, and if you click in you can see the top spenders by region – for example, who the top spenders are in each area. 

Our goal is to provide an open API to news organizations, researchers, groups and people who can hold advertisers and us more accountable. 

We’ve definitely seen a lot of press, journalists, and researchers examining the data in the Ad Library and using it to generate these insights and we think that’s exactly a part of what will help hold both us and advertisers more accountable.

We hope these measures will build on existing transparency we have in place and help reporters, researchers and most importantly people on Facebook learn more about the Pages and information they’re engaging with. 

Antonia Woodford, Product Manager, Misinformation, Facebook

We are committed to fighting the spread of misinformation and viral hoaxes on Facebook. It is a responsibility we take seriously.

To accomplish this, we follow a three-pronged approach which we call remove, reduce, and inform. First and foremost, when something violates the law or our policies, we’ll remove it from the platform altogether.

As Nathaniel touched on, removing fake accounts is a priority; the vast majority are detected and removed within minutes of registration, before a person can report them. This is a key element in eliminating the potential spread of misinformation. 

The reduce and inform parts of the equation cover how we reduce the spread of problematic content that doesn’t violate the law or our Community Standards while still ensuring freedom of expression on the platform. This is where the majority of our misinformation work is focused. 

To reduce the spread of misinformation, we work with third party fact-checkers. 

Through a combination of reporting from people on our platform and machine learning, potentially false posts are sent to third party fact-checkers to review. These fact-checkers review this content, check the facts, and then rate its accuracy. They’re able to review links in news articles as well as photos, videos, or text posts on Facebook.

After content has been rated false, our algorithm heavily downranks this content in News Feed so it’s seen by fewer people and far less likely to go viral. Fact-checkers can fact-check any posts they choose based on the queue we send them. 
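
Schematically, the reduce and inform steps described above look something like the sketch below. The queueing rule, demotion multipliers and label wording are invented for illustration; only the overall shape of the flow comes from this post.

```python
# Schematic sketch of the reduce/inform flow. Thresholds, multipliers and
# label text are illustrative inventions, not Facebook's actual values.
from enum import Enum
from typing import Optional

class Rating(Enum):
    UNRATED = "unrated"
    FALSE = "false"
    PARTLY_FALSE = "partly false"
    TRUE = "true"

def enqueue_for_fact_check(user_reports: int, model_score: float) -> bool:
    """Combine people's reports with a machine-learning score to build the queue."""
    return user_reports >= 10 or model_score > 0.8  # hypothetical thresholds

def ranking_multiplier(rating: Rating) -> float:
    """'Reduce': heavily downrank rated-false content in News Feed."""
    if rating is Rating.FALSE:
        return 0.05
    if rating is Rating.PARTLY_FALSE:
        return 0.3
    return 1.0

def warning_label(rating: Rating) -> Optional[str]:
    """'Inform': overlay a screen that points to the fact-checkers' articles."""
    if rating in (Rating.FALSE, Rating.PARTLY_FALSE):
        return "False information - see why independent fact-checkers say so"
    return None
```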

And lastly, as part of our work to inform people about the content they see on Facebook, we just launched a new design to better warn people when they see content that has been rated false or partly false by our fact-checking partners.

People will now see a more prominent label on photos and videos that have been fact-checked as false or partly false. This is a grey screen that sits over a post and says ‘false information’ and points people to fact-checkers’ articles debunking the claims. 

These clearer labels are what people have told us they want, what they have told us they expect Facebook to do, and what experts tell us is the right tactic for combating misinformation.

We’re rolling this change out in the UK this week for any photos and videos that have been rated through our fact-checking partnership. Though fact-checking is just one part of our overall strategy to combat misinformation, it is a fundamental one, and I want to share a little bit more about the program.

Our fact-checking partners are all accredited by the International Fact-Checking Network, which requires them to abide by a code of principles such as nonpartisanship and transparency of sources.

We currently have over 50 partners in over 40 languages around the world. As Rebecca outlined earlier, we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

Here in the UK we work with Full Fact and FactCheckNI as part of our program. To recap, we identify content that may be false using signals such as feedback from our users. This content is all submitted into a queue for our fact-checking partners to access. These fact-checkers then choose which content to review, check the facts, and rate the accuracy of the content.

These fact-checkers are independent organizations, so it is at their discretion what they choose to investigate. They can also fact-check whatever content they want outside of the posts we send their way.

If a fact-checker rates a story as false, it will appear lower in News Feed with the false information screen I mentioned earlier. This significantly reduces the number of people who see it.

Other posts that Full Fact and FactCheckNI choose to fact-check outside of our system will not be impacted on Facebook. 

And finally, on Tuesday we announced a partnership with the International Fact-Checking Network to create the Fact-Checking Innovation Initiative. This will fund innovation projects, new formats, and technologies to help benefit the broader fact-checking ecosystem. 

We are investing $500,000 into this new initiative, where organizations can submit applications for projects to improve fact-checkers’ scale and efficiency, increase the reach of fact-checks to empower more people with reliable information, build new tools to help combat misinformation, and encourage newsrooms to collaborate in fact-checking efforts.

Anyone from the UK can be a part of this new initiative. 

Ella Fallows, Politics and Government Outreach Manager UK, Facebook 

Our team’s role involves two main tasks: working with parties, MPs and candidates to ensure they have a good experience and get the most from our platforms; and looking at how we can best use our platforms to promote participation in elections.

I’d like to start with how MPs and candidates use our platforms. 

There is, rightly, a focus in the UK on the current tone of political debate. Let me be clear: hate speech and threats of violence have no place on our platforms and we’re investing heavily to tackle them. 

Additionally, for this campaign we have this week written to political parties and candidates setting out the range of safety measures we have in place and also to remind them of the terms and conditions and the Community Standards which govern our platforms. 

As you may be aware, every piece of content on Facebook and Instagram has a report button, and when content is reported to us which violates our Community Standards – what is and isn’t allowed on Facebook – it is removed. 

Since March of this year, MPs have also had access to a dedicated reporting channel to flag any abusive and threatening content directly to our teams. Now that the General Election is underway we’re extending that support to all prospective candidates, making our team available to anyone standing to allow them to quickly report any concerns across our platforms and have them investigated. 

This is particularly pertinent to Tuesday’s news from the Government calling for a one-stop shop for candidates. We have already set up our own one-stop shop so that there is a single point of contact for candidates for issues across Facebook and Instagram.

But our team is not working alone; it’s backed up by our 35,000-strong global safety and security team that oversees content and behavior across the platform every day. 

And our technology is also helping us to automatically detect more of this harmful content. For example, the proportion of hate speech we have removed before it’s reported to us has increased significantly over the last two years, and we will be releasing new figures on this later this month.

We also have a Government, Politics & Advocacy Portal which is a home for everything a candidate will need during the campaign, including ‘how to’ guides on subjects such as political advertising, campaigning on Facebook and troubleshooting guides for technical issues.

We’re working with the political parties to ensure candidates are aware of both the reporting channel to reach my team and the Government, Politics & Advocacy Portal.

We’re holding a series of sessions for candidates on safety and outlining the help available to address harassment on our platforms. We’ve already held dedicated sessions for female candidates in partnership with women’s networks within the parties to provide extra guidance. We want to ensure we’re doing everything possible to help them connect with their constituents, free from harassment.

Finally, we’re working with the Government to distribute the safety guides we have put together to every candidate in the General Election via returning officers, to ensure we reach everyone, not just those attending our outreach sessions. Our safety guides include information on a range of tools we have developed:

  • For example, public figures are able to moderate and filter the content that people put on their Facebook Pages to prevent negative content appearing in the first place. People who help manage Pages can hide or delete individual comments. 
  • They can also proactively moderate comments and posts by visitors by turning on the profanity filter, or blocking specific words or lists of words that they do not want to appear on their Page (a rough sketch of how such a filter might work follows this list). Page admins can also remove or ban people from their Pages. 
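
As a rough illustration of such a filter, here is a minimal sketch that hides comments containing admin-blocked words. The word list and the whole-word matching rule are assumptions for illustration; Facebook’s actual moderation tools are not public.

```python
# Minimal sketch of a blocked-words comment filter. The list and the
# whole-word, case-insensitive matching rule are illustrative assumptions.
import re
from typing import Callable, Iterable

def build_filter(blocked_words: Iterable[str]) -> Callable[[str], bool]:
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in blocked_words) + r")\b",
        re.IGNORECASE,
    )
    def should_hide(comment: str) -> bool:
        return bool(pattern.search(comment))
    return should_hide

hide = build_filter(["spamword", "exampleslur"])  # hypothetical admin list
print(hide("This contains SPAMWORD here"))        # True
print(hide("A perfectly civil comment"))          # False
```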

We hope these steps help every candidate to reach their constituents, and get the most from our platforms. But our work doesn’t stop there.

The second area our team focuses on is promoting civic engagement. In addition to supporting and advising candidates, we also, of course, want to help promote voter participation in the election. 

For the past five years, we’ve used badges and reminders at the top of people’s News Feeds to encourage people to vote in elections around the world. The same will be true for this campaign. 

We’ll run reminders to register to vote, with a link to the gov.uk voter registration page, in the week running up to the voter registration deadline. 

On election day itself, we’ll also run a reminder to vote with a link to the Electoral Commission website so voters can find their polling station and any information they need. This will include a button to share that you voted. 
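
As a sketch of how such date-based reminders might be selected, consider the toy logic below. The deadline and polling dates are the real 2019 dates and the links are the destinations mentioned above, but the selection logic itself is an illustration, not Facebook’s implementation.

```python
# Toy sketch of selecting which voter reminder to show on a given day.
# Dates are the actual 2019 UK deadlines; the logic is illustrative only.
from datetime import date, timedelta
from typing import Optional

REGISTRATION_DEADLINE = date(2019, 11, 26)
ELECTION_DAY = date(2019, 12, 12)

def reminder_for(today: date) -> Optional[str]:
    if today == ELECTION_DAY:
        return "It's polling day. Find your polling station: https://www.electoralcommission.org.uk"
    if timedelta(days=0) <= REGISTRATION_DEADLINE - today <= timedelta(days=7):
        return "Register to vote: https://www.gov.uk/register-to-vote"
    return None

print(reminder_for(date(2019, 11, 20)))  # registration-week reminder
print(reminder_for(date(2019, 12, 12)))  # election-day reminder
```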

We hope that this combination of steps will help to ensure both candidates and voters engaging with the General Election on our platforms have the best possible experience.


Removing More Coordinated Inauthentic Behavior From Russia

By Nathaniel Gleicher, Head of Cybersecurity Policy

Today, we removed three networks of accounts, Pages and Groups for engaging in foreign interference — which is coordinated inauthentic behavior on behalf of a foreign actor — on Facebook and Instagram. They originated in Russia and targeted Madagascar, Central African Republic, Mozambique, Democratic Republic of the Congo, Côte d’Ivoire, Cameroon, Sudan and Libya. Each of these operations created networks of accounts to mislead others about who they were and what they were doing. Although the people behind these networks attempted to conceal their identities and coordination, our investigation connected these campaigns to entities associated with Russian financier Yevgeniy Prigozhin, who was previously indicted by the US Justice Department. We have shared information about our findings with law enforcement, policymakers and industry partners.

We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. We’re taking down these Pages, Groups and accounts based on their behavior, not the content they posted. In each of these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.

We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge. We’re committed to continually improving to stay ahead. That means building better technology, hiring more people and working more closely with law enforcement, security experts and other companies.

What We’ve Found So Far

Today, we removed 35 Facebook accounts, 53 Pages, seven Groups and five Instagram accounts that originated in Russia and focused on Madagascar, the Central African Republic, Mozambique, Democratic Republic of the Congo, Côte d’Ivoire and Cameroon. The individuals behind this activity used a combination of fake accounts and authentic accounts of local nationals in Madagascar and Mozambique to manage Pages and Groups, and post their content. They typically posted about global and local political news including topics like Russian policies in Africa, elections in Madagascar and Mozambique, election monitoring by a local non-governmental organization and criticism of French and US policies.

  • Presence on Facebook: 35 Facebook accounts, 53 Pages, 7 Groups and 5 Instagram accounts.
  • Followers: About 475,000 accounts followed one or more of these Pages, around 450 accounts joined one or more of these Groups and around 650 accounts followed one or more of these Instagram accounts.
  • Advertising: Around $77,000 in spending for ads on Facebook paid for in US dollars. The first ad ran in April 2018 and the most recent ad ran in October 2019.

We found this activity as part of our internal investigations into Russia-linked, suspected coordinated inauthentic behavior in Africa. Our analysis benefited from open source reporting.

Below is a sample of the content posted by some of these Pages:

Page Name: “Sudan in the Eyes of Others” Caption Translation: Yam Brands, the company that owns the KFC franchise, stated that it intends to open 3 branches of its franchise in Sudan. The spokesman of the company, which is based in the American state of Kentucky, Takalaty Similiny, issued a statement saying that the branches are currently under construction and will open in mid November.

Translation: The Police of the Republic of Mozambique announced today that nine members of RENAMO were detained for their participation in the attempt to remove urns from one of the voting posts in the district of Machanga, Sofala, and for having vandalised the infrastructure. According to the spokesperson for the PRM, who spoke at a press conference in Maputo, the nine people are accused of having led around 300 RENAMO supporters who tried to remove the urns during counting at the Inharingue Primary School.

Translation: The President of the Central African Republic asked Vladimir Putin to organize the delivery of heavy weapons. On Wednesday, in Sochi, President Faustin-Archange Touadera asked his counterpart Vladimir Putin to increase military assistance to the Republic, asking specifically for the supply of heavier weapons. “Russia is giving considerable help to our country. They have already carried out two weapons deliveries, trained our national troops and trained police officers, but for more effectiveness, we need heavy weapons. We hope that Russia will be able to allocate us combat vehicles, artillery cannons and other lethal weapons in order for us to bring our people to safety,” said Touadera.
However, there is still an issue blocking implementation: the embargo on the Central African Republic has not been lifted far enough for Russia to carry out Touadera’s plans. Until now, it has only been possible to supply weapons with a caliber of less than 14.5 mm.
The embargo does not stop armed groups from illegally acquiring heavy weapons for themselves, which is not helping the government’s efforts to establish peace.
We ask the Security Council of the United Nations to draw attention to what their (sometimes reckless) sanctions are bringing about.

We also removed 17 Facebook accounts, 18 Pages, 3 Groups and six Instagram accounts that originated in Russia and focused primarily on Sudan. The people behind this activity used a combination of authentic accounts of Sudanese nationals, fake and compromised accounts — some of which had already been disabled by our automated systems — to comment, post and manage Pages posing as news organizations, as well as direct traffic to off-platform sites. They frequently shared stories from SUNA (Sudan’s state news agency) as well as Russian state-controlled media Sputnik and RT, and posted primarily in Arabic and some in English. The Page administrators and account owners posted about local news and events in Sudan and other countries in Sub-Saharan Africa, including Sudanese-Russian relations, US-Russian relations, Russian foreign policy and Muslims in Russia.

  • Presence on Facebook and Instagram: 17 Facebook accounts, 18 Pages, 3 Groups and 6 accounts on Instagram.
  • Followers: About 457,000 accounts followed one or more of these Pages, about 1,300 accounts joined at least one of these Groups and around 2,900 people followed one or more of these Instagram accounts.
  • Advertising: Around $160 in spending for ads on Facebook paid for in Russian rubles. The first ad ran in April 2018 and the most recent ad ran in September 2019.

We found this activity as part of our internal investigations into Russia-linked, suspected coordinated inauthentic behavior in the region.

Below is a sample of the content posted by some of these Pages:

Translation (first two paragraphs): “American and British intelligence put together false information about Putin’s inner circle… a diplomatic military source said that American and British intelligence agencies are preparing to leak false information about people close to the president, Vladimir Putin, and the leadership of the Russian defense ministry.”

Page title: “Nile Echo”
Post translation (first paragraph only): French movements to abort Russian and Sudanese mediation in the Central African Republic…

Translation: #Article (I am completely sure that the person in the cell is not [ousted Sudanese leader Omar] al-Bashir, but I don’t have physical evidence proving this) Aml al-Kordofani wrote: The person resembling al-Bashir who is sitting behind the bars..who is he? The double game continues between the military and the sons of Gosh (referring to former Sudanese intelligence chief Salah Abdullah Mohamed Saleh) according to the American plan. The American plan employs psychological operations, as we mentioned earlier, and are undertaken by a huge office within the US Department of Defense.

Finally, we removed a network of 14 Facebook accounts, 12 Pages, one Group and one Instagram account that originated in Russia and focused on Libya. The individuals behind this activity used a combination of authentic accounts of Egyptian nationals, fake and compromised accounts — some of which had already been disabled by our automated systems — to manage Pages and drive people to an off-platform domain. They frequently shared stories from Russian state-controlled media Sputnik and RT. The Page admins and account owners typically posted in Arabic about local news and geopolitical issues including Libyan politics, crimes, natural disasters, public health, Turkey’s alleged sponsoring of terrorism in Libya, illegal migration, militia violence, the detention of Russian citizens in Libya for alleged interference in elections and a meeting between Khalifa Haftar, head of the Libyan National Army, and Putin. Some of these Pages posted content on multiple sides of political debate in Libya, including criticism of the Government of National Accord, US foreign policy, and Haftar, as well as support of Muammar Gaddafi and his son Saif al-Islam Gaddafi, Russian foreign policy, and Khalifa Haftar.

  • Presence on Facebook and Instagram: 14 Facebook accounts, 12 Pages, one Group and one account on Instagram.
  • Followers: About 212,000 accounts followed one or more of these Pages, 1 account joined this Group and around 29,300 people followed this Instagram account.
  • Advertising: Around $10,000 in spending for ads on Facebook, paid for primarily in US dollars, euros and Egyptian pounds. The first ad ran in May 2014 and the most recent ad ran in October 2019.

Based on a tip shared by the Stanford Internet Observatory, we conducted an investigation into suspected Russia-linked coordinated inauthentic behavior and identified the full scope of this activity. Our analysis benefited from open source reporting.

Below is a sample of the content posted by some of these Pages:

Page name: “Voice of Libya” Post translation: The Government of National Accord [GNA] practices hypocrisy … the detention of two Russian citizens under the pretense that they are manipulating elections in Libya. But in reality, no elections are taking place in Libya now. So the pretense under which the Russians were arrested is fictitious and contrived.

Page title: “Libya Gaddafi” Post translation: “Why was late Libyan leader Muammar al-Gaddafi killed? Everyone was happy in Libya. There are people in America who sleep under bridges. There was never any discrimination in Libya, and there were not problems. The work was good and the money, too.”

Page name: “Voice of Libya” Post translation: First meeting between Haftar and Putin in Moscow. Several sources reported on the visit of the army’s commander-in-chief, Field Marshal Khalifa Haftar, to Moscow, where he met Russian President Vladimir Putin, to discuss developments in the military and political situation in Libya. This is Haftar’s first meeting with the Russian president. He has previously visited Russia and met with senior officials in the foreign and defense ministries, and they are expected to meet again.

Page title: “Falcons of the Conqueror” Post translation: Field Marshal Haftar: Libyans decide who to elect as the next president, and it is Saif al-Islam al-Gaddafi’s right to be a candidate


Helping to Protect the 2020 US Elections

By Guy Rosen, VP of Integrity; Katie Harbath, Public Policy Director, Global Elections; Nathaniel Gleicher, Head of Cybersecurity Policy and Rob Leathern, Director of Product Management

We have a responsibility to stop abuse and election interference on our platform. That’s why we’ve made significant investments since 2016 to better identify new threats, close vulnerabilities and reduce the spread of viral misinformation and fake accounts. 

Today, almost a year out from the 2020 elections in the US, we’re announcing several new measures to help protect the democratic process and providing an update on initiatives already underway:

Fighting foreign interference

  • Combating inauthentic behavior, including an updated policy
  • Protecting the accounts of candidates, elected officials, their teams and others through Facebook Protect 

Increasing transparency

  • Making Pages more transparent, including showing the confirmed owner of a Page
  • Labeling state-controlled media on their Page and in our Ad Library
  • Making it easier to understand political ads, including a new US presidential candidate spend tracker

Reducing misinformation

  • Preventing the spread of misinformation, including clearer fact-checking labels 
  • Fighting voter suppression and interference, including banning paid ads that suggest voting is useless or advise people not to vote
  • Helping people better understand the information they see online, including an initial investment of $2 million to support media literacy projects

Fighting Foreign Interference

Combating Inauthentic Behavior

Over the last three years, we’ve worked to identify new and emerging threats and remove coordinated inauthentic behavior across our apps. In the past year alone, we’ve taken down over 50 networks worldwide, many ahead of major democratic elections. As part of our effort to counter foreign influence campaigns, this morning we removed four separate networks of accounts, Pages and Groups on Facebook and Instagram for engaging in coordinated inauthentic behavior. Three of them originated in Iran and one in Russia. They targeted the US, North Africa and Latin America. We have identified these manipulation campaigns as part of our internal investigations into suspected Iran-linked inauthentic behavior, as well as ongoing proactive work ahead of the US elections.

We took down these networks based on their behavior, not the content they posted. In each case, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action. We have shared our findings with law enforcement and industry partners. More details can be found here.

As we’ve improved our ability to disrupt these operations, we’ve also built a deeper understanding of different threats and how best to counter them. We investigate and enforce against any type of inauthentic behavior. However, the most appropriate way to respond to someone boosting the popularity of their posts in their own country may not be the best way to counter foreign interference. That’s why we’re updating our inauthentic behavior policy to clarify how we deal with the range of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state.

Protecting the Accounts of Candidates, Elected Officials and Their Teams

Today, we’re launching Facebook Protect to further secure the accounts of elected officials, candidates, their staff and others who may be particularly vulnerable to targeting by hackers and foreign adversaries. As we’ve seen in past elections, they can be targets of malicious activity. However, because campaigns are generally run for a short period of time, we don’t always know who these campaign-affiliated people are, making it harder to help protect them.

Beginning today, Page admins can enroll their organization’s Facebook and Instagram accounts in Facebook Protect and invite members of their organization to participate in the program as well. Participants will be required to turn on two-factor authentication, and their accounts will be monitored for hacking, such as login attempts from unusual locations or unverified devices. And, if we discover an attack against one account, we can review and protect other accounts affiliated with that same organization that are enrolled in our program. Read more about Facebook Protect and enroll here.
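
As a rough sketch of the kind of monitoring described here, the toy check below flags logins that lack two-factor authentication or come from an unusual location on an unverified device. The fields and the rule are assumptions for illustration, not Facebook Protect’s actual logic.

```python
# Illustrative sketch of login monitoring; all fields and rules are
# hypothetical, not Facebook Protect's actual implementation.
from dataclasses import dataclass

@dataclass
class AccountHistory:
    usual_countries: set   # countries this person normally logs in from
    verified_devices: set  # device IDs previously confirmed by the owner

@dataclass
class LoginAttempt:
    country: str
    device_id: str
    passed_two_factor: bool

def is_suspicious(history: AccountHistory, attempt: LoginAttempt) -> bool:
    unusual_location = attempt.country not in history.usual_countries
    unknown_device = attempt.device_id not in history.verified_devices
    # Two-factor is mandatory for enrolled accounts; its absence alone is a red flag.
    return not attempt.passed_two_factor or (unusual_location and unknown_device)

history = AccountHistory({"US"}, {"laptop-123"})
print(is_suspicious(history, LoginAttempt("RU", "phone-999", True)))   # True
print(is_suspicious(history, LoginAttempt("US", "laptop-123", True)))  # False
```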

Increasing Transparency

Making Pages More Transparent

We want to make sure people are using Facebook authentically, and that they understand who is speaking to them. Over the past year, we’ve taken steps to ensure Pages are authentic and more transparent by showing people the Page’s primary country location and whether the Page has merged with other Pages. This gives people more context on the Page and makes it easier to understand who’s behind it. 

Increasingly, we’ve seen people failing to disclose the organization behind their Page as a way to make people think that a Page is run independently. To address this, we’re adding more information about who is behind a Page, including a new “Organizations That Manage This Page” tab that will feature the Page’s “Confirmed Page Owner,” including the organization’s legal name and verified city, phone number or website.

Initially, this information will only appear on Pages with large US audiences that have gone through Facebook’s business verification. In addition, Pages that have gone through the new authorization process to run ads about social issues, elections or politics in the US will also have this tab. And starting in January, these advertisers will be required to show their Confirmed Page Owner. 

If we find a Page is concealing its ownership in order to mislead people, we will require it to successfully complete the verification process and show more information in order for the Page to stay up. 

Labeling State-Controlled Media

We want to help people better understand the sources of news content they see on Facebook so they can make informed decisions about what they’re reading. Next month, we’ll begin labeling media outlets that are wholly or partially under the editorial control of their government as state-controlled media. This label will be on both their Page and in our Ad Library. 

We will hold these Pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state. 

We developed our own definition and standards for state-controlled media organizations with input from more than 40 experts around the world specializing in media, governance, human rights and development. Those consulted represent leading academic institutions, nonprofits and international organizations in this field, including Reporters Without Borders, Center for International Media Assistance, European Journalism Center, Oxford Internet Institute‘s Project on Computational Propaganda, Center for Media, Data and Society (CMDS) at the Central European University, the Council of Europe, UNESCO and others. 

It’s important to note that our policy draws an intentional distinction between state-controlled media and public media, which we define as any entity that is publicly financed, retains a public service mission and can demonstrate its independent editorial control. At this time, we’re focusing our labeling efforts only on state-controlled media. 

We will update the list of state-controlled media on a rolling basis beginning in November. And, in early 2020, we plan to expand our labeling to specific posts and apply these labels on Instagram as well. For any organization that believes we have applied the label in error, there will be an appeals process. 

Making it Easier to Understand Political Ads

In addition to making Pages more transparent, we’re updating the Ad Library, Ad Library Report and Ad Library API to help journalists, lawmakers, researchers and others learn more about the ads they see. This includes:

  • A new US presidential candidate spend tracker, so that people can see how much candidates have spent on ads
  • Adding additional spend details at the state or regional level to help people analyze advertiser and candidate efforts to reach voters geographically
  • Making it clear if an ad ran on Facebook, Instagram, Messenger or Audience Network
  • Adding useful API filters, providing programmatic access to download ad creatives and a repository of frequently used API scripts.

In addition to updates to the Ad Library API, in November, we will begin testing a new database with researchers that will enable them to quickly download the entire Ad Library, pull daily snapshots and track day-to-day changes.

Visit our Help Center to learn more about the changes to Pages and the Ad Library.

Reducing Misinformation

Preventing the Spread of Viral Misinformation

On Facebook and Instagram, we work to keep confirmed misinformation from spreading. For example, on Instagram we reduce its distribution so fewer people see it and remove it from Explore and hashtag pages, and on Facebook we reduce its distribution in News Feed. On Instagram, we also make content from accounts that repeatedly post misinformation harder to find, for example by filtering that account’s content out of Explore and hashtag pages. And on Facebook, if Pages, domains or Groups repeatedly share misinformation, we’ll continue to reduce their overall distribution and we’ll place restrictions on the Page’s ability to advertise and monetize.

Over the next month, content across Facebook and Instagram that has been rated false or partly false by a third-party fact-checker will start to be more prominently labeled so that people can better decide for themselves what to read, trust and share. These labels will be shown on top of false and partly false photos and videos, including on top of Stories content on Instagram, and will link out to the assessment from the fact-checker.

Much like we do on Facebook when people try to share known misinformation, we’re also introducing a new pop-up that will appear when people attempt to share posts on Instagram that include content that has been debunked by third-party fact-checkers.

In addition to clearer labels, we’re also working to take faster action to prevent misinformation from going viral, especially given that quality reporting and fact-checking takes time. In many countries, including in the US, if we have signals that a piece of content is false, we temporarily reduce its distribution pending review by a third-party fact-checker.

Fighting Voter Suppression and Intimidation

Attempts to interfere with or suppress voting undermine our core values as a company, and we work proactively to remove this type of harmful content. Ahead of the 2018 midterm elections, we extended our voter suppression and intimidation policies to prohibit:

  • Misrepresentation of the dates, locations, times and methods for voting or voter registration (e.g. “Vote by text!”);
  • Misrepresentation of who can vote, qualifications for voting, whether a vote will be counted and what information and/or materials must be provided in order to vote (e.g. “If you voted in the primary, your vote in the general election won’t count.”); and 
  • Threats of violence relating to voting, voter registration or the outcome of an election.

We remove this type of content regardless of who it’s coming from, and ahead of the midterm elections, our Elections Operations Center removed more than 45,000 pieces of content that violated these policies – more than 90% of which our systems detected before anyone reported the content to us. 

We also recognize that there are certain types of content, such as hate speech, that are equally likely to suppress voting. That’s why our hate speech policies ban efforts to exclude people from political participation on the basis of things like race, ethnicity or religion (e.g., telling people not to vote for a candidate because of the candidate’s race, or indicating that people of a certain religion should not be allowed to hold office).

In advance of the US 2020 elections, we’re implementing additional policies and expanding our technical capabilities on Facebook and Instagram to protect the integrity of the election. Following up on a commitment we made in the civil rights audit report released in June, we have now implemented our policy banning paid advertising that suggests voting is useless or meaningless, or advises people not to vote. 

In addition, our systems are now more effective at proactively detecting and removing this harmful content. We use machine learning to help us quickly identify potentially incorrect voting information and remove it. 
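
To make the idea concrete, here is a toy sketch of a text classifier for potentially incorrect voting information, using scikit-learn. The training examples, model choice and threshold are illustrative; this is not Facebook’s classifier or its training data.

```python
# Toy sketch: flag text that may misrepresent voting methods or dates.
# Training data, model and threshold are illustrative, not Facebook's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Vote by text message today!",                            # fake voting method
    "Polls close a day early this year",                      # fake voting date
    "Find your polling place on the official election site",  # benign
    "Check your state's ID requirements before voting",       # benign
]
train_labels = [1, 1, 0, 0]  # 1 = potentially violating, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

prob = model.predict_proba(["You can cast your ballot by SMS"])[0, 1]
print(f"violation probability: {prob:.2f}")  # route to review above a chosen threshold
```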

We are also continuing to expand and develop our partnerships to provide expertise on trends in voter suppression and intimidation, as well as early detection of violating content. This includes working directly with secretaries of state and election directors to address localized voter suppression that may only be occurring in a single state or district. This work will be supported by our Elections Operations Center during both the primary and general elections. 

Helping People Better Understand What They See Online

Part of our work to stop the spread of misinformation is helping people spot it for themselves. That’s why we partner with organizations and experts in media literacy. 

Today, we’re announcing an initial investment of $2 million to support projects that empower people to determine what to read and share — both on Facebook and elsewhere. 

These projects range from training programs to help ensure the largest Instagram accounts have the resources they need to reduce the spread of misinformation, to expanding a pilot program that brings together senior citizens and high school students to learn about online safety and media literacy, to public events in local venues like bookstores, community centers and libraries in cities across the country. We’re also supporting a series of training events focused on critical thinking among first-time voters. 

In addition, we’re including a new series of media literacy lessons in our Digital Literacy Library. These lessons are drawn from the Youth and Media team at the Berkman Klein Center for Internet & Society at Harvard University, which has made them available for free worldwide under a Creative Commons license. The lessons, created for middle and high school educators, are designed to be interactive and cover topics ranging from assessing the quality of the information online to more technical skills like reverse image search.

We’ll continue to develop our media literacy efforts in the US and we’ll have more to share soon. 


Mark Zuckerberg Stands for Voice and Free Expression

Today, Mark Zuckerberg spoke at Georgetown University about the importance of protecting free expression. He underscored his belief that giving everyone a voice empowers the powerless and pushes society to be better over time — a belief that’s at the core of Facebook.

In front of hundreds of students at the school’s Gaston Hall, Mark warned that we’re increasingly seeing laws and regulations around the world that undermine free expression and human rights. He argued that in order to make sure people can continue to have a voice, we should: 1) write policy that helps the values of voice and expression triumph around the world, 2) fend off the urge to define speech we don’t like as dangerous, and 3) build new institutions so companies like Facebook aren’t making so many important decisions about speech on our own. 

Read Mark’s full speech below.

Standing For Voice and Free Expression

Hey everyone. It’s great to be here at Georgetown with all of you today.

Before we get started, I want to acknowledge that today we lost an icon, Elijah Cummings. He was a powerful voice for equality, social progress and bringing people together.

When I was in college, our country had just gone to war in Iraq. The mood on campus was disbelief. It felt like we were acting without hearing a lot of important perspectives. The toll on soldiers, families and our national psyche was severe, and most of us felt powerless to stop it. I remember feeling that if more people had a voice to share their experiences, maybe things would have gone differently. Those early years shaped my belief that giving everyone a voice empowers the powerless and pushes society to be better over time.

Back then, I was building an early version of Facebook for my community, and I got to see my beliefs play out at smaller scale. When students got to express who they were and what mattered to them, they organized more social events, started more businesses, and even challenged some established ways of doing things on campus. It taught me that while the world’s attention focuses on major events and institutions, the bigger story is that most progress in our lives comes from regular people having more of a voice.

Since then, I’ve focused on building services to do two things: give people voice, and bring people together. These two simple ideas — voice and inclusion — go hand in hand. We’ve seen this throughout history, even if it doesn’t feel that way today. More people being able to share their perspectives has always been necessary to build a more inclusive society. And our mutual commitment to each other — that we hold each other’s right to express our views and be heard above our own desire to always get the outcomes we want — is how we make progress together.

But this view is increasingly being challenged. Some people believe giving more people a voice is driving division rather than bringing us together. More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous. Today I want to talk about why, and some important choices we face around free expression.

Throughout history, we’ve seen how being able to use your voice helps people come together. We’ve seen this in the civil rights movement. Frederick Douglass once called free expression “the great moral renovator of society”. He said “slavery cannot tolerate free speech”. Civil rights leaders argued time and again that their protests were protected free expression, and one noted: “nearly all the cases involving the civil rights movement were decided on First Amendment grounds”.

We’ve seen this globally too, where the ability to speak freely has been central in the fight for democracy worldwide. The most repressive societies have always restricted speech the most — and when people are finally able to speak, they often call for change. This year alone, people have used their voices to end multiple long-running dictatorships in Northern Africa. And we’re already hearing from voices in those countries that had been excluded just because they were women, or they believed in democracy.

Our idea of free expression has become much broader over even the last 100 years. Many Americans know about the Enlightenment history and how we enshrined the First Amendment in our constitution, but fewer know how dramatically our cultural norms and legal protections have expanded, even in recent history.

The first Supreme Court case to seriously consider free speech and the First Amendment was in 1919, Schenck v. United States. Back then, the First Amendment only applied to the federal government, and states could and often did restrict your right to speak. Our ability to call out things we felt were wrong also used to be much more restricted. Libel laws used to impose damages if you wrote something negative about someone, even if it was true. The standard later shifted so it became okay as long as you could prove your critique was true. We didn’t get the broad free speech protections we have now until the 1960s, when the Supreme Court ruled in opinions like New York Times v. Sullivan that you can criticize public figures as long as you’re not doing so with actual malice, even if what you’re saying is false.

We now have significantly broader power to call out things we feel are unjust and share our own personal experiences. Movements like #BlackLivesMatter and #MeToo went viral on Facebook — the hashtag #BlackLivesMatter was actually first used on Facebook — and this just wouldn’t have been possible in the same way before. 100 years back, many of the stories people have shared would have been against the law to even write down. And without the internet giving people the power to share them directly, they certainly wouldn’t have reached as many people. With Facebook, more than 2 billion people now have a greater opportunity to express themselves and help others.

While it’s easy to focus on major social movements, it’s important to remember that most progress happens in our everyday lives. It’s the Air Force moms who started a Facebook group so their children and other service members who can’t get home for the holidays have a place to go. It’s the church group that came together during a hurricane to provide food and volunteer to help with recovery. It’s the small business on the corner that now has access to the same sophisticated tools only the big guys used to, and now they can get their voice out and reach more customers, create jobs and become a hub in their local community. Progress and social cohesion come from billions of stories like this around the world.

People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences. I understand the concerns about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands. It’s part of this amazing expansion of voice through law, culture and technology.

So giving people a voice and broader inclusion go hand in hand, and the trend has been towards greater voice over time. But there’s also a counter-trend. In times of social turmoil, our impulse is often to pull back on free expression. We want the progress that comes from free expression, but not the tension.

We saw this when Martin Luther King Jr. wrote his famous letter from Birmingham Jail, where he was unconstitutionally jailed for protesting peacefully. We saw this in the efforts to shut down campus protests against the Vietnam War. We saw this way back when America was deeply polarized about its role in World War I, and the Supreme Court ruled that socialist leader Eugene Debs could be imprisoned for making an anti-war speech.

In the end, all of these decisions were wrong. Pulling back on free expression wasn’t the answer and, in fact, it often ended up hurting the minority views we seek to protect. From where we are now, it seems obvious that, of course, protests for civil rights or against wars should be allowed. Yet the desire to suppress this expression was felt deeply by much of society at the time.

Today, we are in another time of social tension. We face real issues that will take a long time to work through — massive economic transitions from globalization and technology, fallout from the 2008 financial crisis, and polarized reactions to greater migration. Many of our issues flow from these changes.

In the face of these tensions, once again a popular impulse is to pull back from free expression. We’re at another crossroads. We can continue to stand for free expression, understanding its messiness, but believing that the long journey towards greater progress requires confronting ideas that challenge us. Or we can decide the cost is simply too great. I’m here today because I believe we must continue to stand for free expression.

At the same time, I know that free expression has never been absolute. Some people argue internet platforms should allow all expression protected by the First Amendment, even though the First Amendment explicitly doesn’t apply to companies. I’m proud that our values at Facebook are inspired by the American tradition, which is more supportive of free expression than anywhere else. But even American tradition recognizes that some speech infringes on others’ rights. And still, a strict First Amendment standard might require us to allow terrorist propaganda, bullying young people and more that almost everyone agrees we should stop — and I certainly do — as well as content like pornography that would make people uncomfortable using our platforms.

So once we’re taking this content down, the question is: where do you draw the line? Most people agree with the principles that you should be able to say things other people don’t like, but you shouldn’t be able to say things that put people in danger. The shift over the past several years is that many people would now argue that more speech is dangerous than they would have before. This raises the question of exactly what counts as dangerous speech online. It’s worth examining this in detail.

Many arguments about online speech are related to new properties of the internet itself. If you believe the internet is completely different from everything before it, then it doesn’t make sense to focus on historical precedent. But we should be careful of overly broad arguments since they’ve been made about almost every new technology, from the printing press to radio to TV. Instead, let’s consider the specific ways the internet is different and how internet services like ours might address those risks while protecting free expression.

One clear difference is that a lot more people now have a voice — almost half the world. That’s dramatically empowering for all the reasons I’ve mentioned. But inevitably some people will use their voice to organize violence, undermine elections or hurt others, and we have a responsibility to address these risks. When you’re serving billions of people, even if a very small percent cause harm, that can still be a lot of harm.

We build specific systems to address each type of harmful content — from incitement of violence to child exploitation to other harms like intellectual property violations — about 20 categories in total. We judge ourselves by the prevalence of harmful content and what percent we find proactively before anyone reports it to us. For example, our AI systems identify 99% of the terrorist content we take down before anyone even sees it. This is a massive investment. We now have over 35,000 people working on security, and our security budget today is greater than the entire revenue of our company at the time of our IPO earlier this decade.
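The two yardsticks named here, prevalence and the proactive detection rate, are simple ratios. A quick sketch, with invented figures, of how such metrics might be computed (only the ratio definitions follow from the text):

    # Illustrative definitions of the two metrics described above.
    # The sample numbers are invented.

    def prevalence(harmful_views: int, total_views: int) -> float:
        """Share of all content views that were views of violating content."""
        return harmful_views / total_views

    def proactive_rate(found_by_systems: int, total_removed: int) -> float:
        """Share of removed content found before any user reported it."""
        return found_by_systems / total_removed

    # For example, if automated systems found 990 of 1,000 removed
    # terrorist posts before anyone reported them:
    print(f"{proactive_rate(990, 1000):.0%}")  # prints 99%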

All of this work is about enforcing our existing policies, not broadening our definition of what is dangerous. If we do this well, we should be able to stop a lot of harm while fighting back against putting additional restrictions on speech.

Another important difference is how quickly ideas can spread online. Most people can now get much more reach than they ever could before. This is at the heart of a lot of the positive uses of the internet. It’s empowering that anyone can start a fundraiser, share an idea, build a business, or create a movement that can grow quickly. But we’ve seen this go the other way too — most notably when Russia’s Internet Research Agency (IRA) tried to interfere in the 2016 elections, but also when misinformation has gone viral. Some people argue that virality itself is dangerous, and we need tighter filters on what content can spread quickly.

For misinformation, we focus on making sure complete hoaxes don’t go viral. We especially focus on misinformation that could lead to imminent physical harm, like misleading health advice claiming that if you’re having a stroke, there’s no need to go to the hospital.

More broadly though, we’ve found a different strategy works best: focusing on the authenticity of the speaker rather than the content itself. Much of the content the Russian accounts shared was distasteful but would have been considered permissible political discourse if it were shared by Americans — the real issue was that it was posted by fake accounts coordinating together and pretending to be someone else. We’ve seen a similar issue with these groups that pump out misinformation like spam just to make money.

The solution is to verify the identities of accounts getting wide distribution and get better at removing fake accounts. We now require you to provide a government ID and prove your location if you want to run political ads or a large page. You can still say controversial things, but you have to stand behind them with your real identity and face accountability. Our AI systems have also gotten more advanced at detecting clusters of fake accounts that aren’t behaving like humans. We now remove billions of fake accounts a year — most within minutes of registering and before they do much. Focusing on authenticity and verifying accounts is a much better solution than an ever-expanding definition of what speech is harmful.
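Detecting “clusters of fake accounts that aren’t behaving like humans” is, at heart, an anomaly-detection problem over behavioral signals. Here is a minimal sketch of that general approach using density-based clustering; the features, sample values, and DBSCAN parameters are assumptions for illustration, not Facebook’s production system.

    # Sketch: accounts that post at machine-like rates with near-identical
    # timing tend to form tight clusters, while organic human behavior is
    # spread out. All features and parameters here are illustrative.
    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.preprocessing import StandardScaler

    # Per-account features: [posts_per_hour, account_age_days, secs_between_actions]
    accounts = np.array([
        [120.0, 1, 0.5], [118.0, 1, 0.6], [121.0, 2, 0.5], [119.0, 1, 0.4],
        [2.0, 800, 45.0], [1.0, 1500, 60.0], [3.0, 400, 30.0],
    ])

    features = StandardScaler().fit_transform(accounts)
    labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(features)

    # Accounts sharing a dense, machine-like cluster get the same
    # non-negative label and would be queued for further verification;
    # label -1 marks accounts that don't cluster with anything.
    for account_id, label in enumerate(labels):
        print(f"account {account_id}: cluster {label}")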

Another qualitative difference is the internet lets people form communities that wouldn’t have been possible before. This is good because it helps people find groups where they belong and share interests. But the flip side is this has the potential to lead to polarization. I care a lot about this — after all, our goal is to bring people together.

Much of the research I’ve seen is mixed and suggests the internet could actually decrease aspects of polarization. The most polarized voters in the last presidential election were the people least likely to use the internet. Research from the Reuters Institute also shows people who get their news online actually have a much more diverse media diet than people who don’t, and they’re exposed to a broader range of viewpoints. This is because most people watch only a couple of cable news stations or read only a couple of newspapers, but even if most of your friends online have similar views, you usually have some that are different, and you get exposed to different perspectives through them. Still, we have an important role in designing our systems to show a diversity of ideas and not encourage polarizing content.

One last difference with the internet is it lets people share things that would have been impossible before. Take live-streaming, for example. This allows families to be together for moments like birthdays and even weddings, schoolteachers to read bedtime stories to kids who might not be read to, and people to witness some very important events. But we’ve also seen people broadcast self-harm, suicide, and terrible violence. These are new challenges and our responsibility is to build systems that can respond quickly.

We’re particularly focused on well-being, especially for young people. We built a team of thousands of people and AI systems that can detect risks of self-harm within minutes so we can reach out when people need help most. In the last year, we’ve helped first responders reach people who needed help thousands of times.

For each of these issues, I believe we have two responsibilities: to remove content when it could cause real danger as effectively as we can, and to fight to uphold as wide a definition of freedom of expression as possible — and not allow the definition of what is considered dangerous to expand beyond what is absolutely necessary. That’s what I’m committed to.

But beyond these new properties of the internet, there are also shifting cultural sensitivities and diverging views on what people consider dangerous content.

Take misinformation. No one tells us they want to see misinformation. That’s why we work with independent fact-checkers to stop viral hoaxes from spreading. But misinformation is a pretty broad category. A lot of people like satire, which isn’t necessarily true. A lot of people talk about their experiences through stories that may be exaggerated or have inaccuracies, but speak to a deeper truth in their lived experience. We need to be careful about restricting that. Even when there is a common set of facts, different media outlets tell very different stories emphasizing different angles. There’s a lot of nuance here. And while I worry about an erosion of truth, I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true.

We recently clarified our policies to ensure people can see primary source speech from political figures that shapes civic discourse. Political advertising is more transparent on Facebook than anywhere else — we keep all political and issue ads in an archive so everyone can scrutinize them, and no TV or print does that. We don’t fact-check political ads. We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won’t take it down even if it would otherwise conflict with many of our standards.

I know many people disagree, but, in general, I don’t think it’s right for a private company to censor politicians or the news in a democracy. And we’re not an outlier here. The other major internet platforms and the vast majority of media also run these same ads.

American tradition also has some precedent here. The Supreme Court case I mentioned earlier that gave us our current broad speech rights, New York Times v. Sullivan, was actually about an ad with misinformation, supporting Martin Luther King Jr. and criticizing an Alabama police department. The police commissioner sued the Times for running the ad, the jury in Alabama found against the Times, and the Supreme Court unanimously reversed the decision, creating today’s speech standard.

As a principle, in a democracy, I believe people should decide what is credible, not tech companies. Of course there are exceptions, and even for politicians we don’t allow content that incites violence or risks imminent harm — and of course we don’t allow voter suppression. Voting is voice. Fighting voter suppression may be as important for the civil rights movement as free expression has been. Just as we’re inspired by the First Amendment, we’re inspired by the 15th Amendment too.

Given the sensitivity around political ads, I’ve considered whether we should stop allowing them altogether. From a business perspective, the controversy certainly isn’t worth the small part of our business they make up. But political ads are an important part of voice — especially for local candidates, up-and-coming challengers, and advocacy groups that may not get much media attention otherwise. Banning political ads favors incumbents and whoever the media covers.

Even if we wanted to ban political ads, it’s not clear where we’d draw the line. There are many more ads about issues than there are directly about elections. Would we ban all ads about healthcare or immigration or women’s empowerment? If we banned candidates’ ads but not these, would it really make sense to give everyone except the candidates themselves a voice in political debates? There are issues any way you cut this, and when it’s not absolutely clear what to do, I believe we should err on the side of greater expression.

Or take hate speech, which we define as someone directly attacking a person or group based on a characteristic like race, gender or religion. We take down content that could lead to real world violence. In countries at risk of conflict, that includes anything that could lead to imminent violence or genocide. And we know from history that dehumanizing people is the first step towards inciting violence. If you say immigrants are vermin, or all Muslims are terrorists — that makes others feel they can escalate and attack that group without consequences. So we don’t allow that. I take this incredibly seriously, and we work hard to get this off our platform.

American free speech tradition recognizes that some speech can have the effect of restricting others’ right to speak. While American law doesn’t recognize “hate speech” as a category, it does prohibit racial harassment and sexual harassment. We still have a strong culture of free expression even while our laws prohibit discrimination.

But still, people have broad disagreements over what qualifies as hate and shouldn’t be allowed. Some people think our policies don’t prohibit content they believe qualifies as hate, while others think what we take down should be a protected form of expression. This area is one of the hardest to get right.

I believe people should be able to use our services to discuss issues they feel strongly about — from religion and immigration to foreign policy and crime. You should even be able to be critical of groups without dehumanizing them. But even this isn’t always straightforward to judge at scale, and it often leads to enforcement mistakes. Is someone re-posting a video of a racist attack because they’re condemning it, or glorifying and encouraging people to copy it? Are they using normal slang, or using an innocent word in a new way to incite violence? Now multiply those linguistic challenges by more than 100 languages around the world.

Rules about what you can and can’t say often have unintended consequences. When speech restrictions were implemented in the UK in the last century, Parliament noted they were applied more heavily to citizens from poorer backgrounds because the way they expressed things didn’t match the elite Oxbridge style. In everything we do, we need to make sure we’re empowering people, not simply reinforcing existing institutions and power structures.

That brings us back to the crossroads we all find ourselves at today. Will we continue fighting to give more people a voice to be heard, or will we pull back from free expression?

I see three major threats ahead:

The first is legal. We’re increasingly seeing laws and regulations around the world that undermine free expression and people’s human rights. These local laws are each individually troubling, especially when they shut down speech in places where there isn’t democracy or freedom of the press. But it’s even worse when countries try to impose their speech restrictions on the rest of the world.

This raises a larger question about the future of the global internet. China is building its own internet focused on very different values, and is now exporting their vision of the internet to other countries. Until recently, the internet in almost every country outside China has been defined by American platforms with strong free expression values. There’s no guarantee these values will win out. A decade ago, almost all of the major internet platforms were American. Today, six of the top ten are Chinese.

We’re beginning to see this in social media. While our services, like WhatsApp, are used by protesters and activists everywhere due to strong encryption and privacy protections, on TikTok, the Chinese app growing quickly around the world, mentions of these protests are censored, even in the US.

Is that the internet we want?

It’s one of the reasons we don’t operate Facebook, Instagram or our other services in China. I wanted our services in China because I believe in connecting the whole world and I thought we might help create a more open society. I worked hard to make this happen. But we could never come to agreement on what it would take for us to operate there, and they never let us in. And now we have more freedom to speak out and stand up for the values we believe in and fight for free expression around the world.

This question of which nation’s values will determine what speech is allowed for decades to come really puts into perspective our debates about the content issues of the day. While we may disagree on exactly where to draw the line on specific issues, we at least can disagree. That’s what free expression is. And the fact that we can even have this conversation means that we’re at least debating from some common values. If another nation’s platforms set the rules, our discourse will be defined by a completely different set of values.

To push back against this, as we all work to define internet policy and regulation to address public safety, we should also be proactive and write policy that helps the values of voice and expression triumph around the world.

The second challenge to expression is the platforms themselves — including us. Because the reality is we make a lot of decisions that affect people’s ability to speak.

I’m committed to the values we’re discussing today, but we won’t always get it right. I understand people are concerned that we have so much control over how they communicate on our services. And I understand people are concerned about bias and making sure their ideas are treated fairly. Frankly, I don’t think we should be making so many important decisions about speech on our own either. We’d benefit from a more democratic process, clearer rules for the internet, and new institutions.

That’s why we’re establishing an independent Oversight Board for people to appeal our content decisions. The board will have the power to make final binding decisions about whether content stays up or comes down on our services — decisions that our team and I can’t overturn. We’re going to appoint members to this board who have a diversity of views and backgrounds, but who each hold free expression as their paramount value.

Building this institution is important to me personally because I’m not always going to be here, and I want to ensure the values of voice and free expression are enshrined deeply into how this company is governed.

The third challenge to expression is the hardest because it comes from our culture. We’re at a moment of particular tension here and around the world — and we’re seeing the impulse to restrict speech and enforce new norms around what people can say.

Increasingly, we’re seeing people try to define more speech as dangerous because it may lead to political outcomes they see as unacceptable. Some hold the view that since the stakes are so high, they can no longer trust their fellow citizens with the power to communicate and decide what to believe for themselves.

I personally believe this is more dangerous for democracy over the long term than almost any speech. Democracy depends on the idea that we hold each other’s right to express ourselves and be heard above our own desire to always get the outcomes we want. You can’t impose tolerance top-down. It has to come from people opening up, sharing experiences, and developing a shared story for society that we all feel we’re a part of. That’s how we make progress together.

So how do we turn the tide? Someone once told me our founding fathers thought free expression was like air. You don’t miss it until it’s gone. When people don’t feel they can express themselves, they lose faith in democracy and they’re more likely to support populist parties that prioritize specific policy goals over the health of our democratic norms.

I’m a little more optimistic. I don’t think we need to lose our freedom of expression to realize how important it is. I think people understand and appreciate the voice they have now. At some fundamental level, I think most people believe in their fellow people too.

As long as our governments respect people’s right to express themselves, as long as our platforms live up to their responsibilities to support expression and prevent harm, and as long as we all commit to being open and making space for more perspectives, I think we’ll make progress. It’ll take time, but we’ll work through this moment. We overcame deep polarization after World War I, and intense political violence in the 1960s. Progress isn’t linear. Sometimes we take two steps forward and one step back. But if we can’t agree to let each other talk about the issues, we can’t take the first step. Even when it’s hard, this is how we build a shared understanding.

So yes, we have big disagreements. Maybe more now than at any time in recent history. But part of that is because we’re getting our issues out on the table — issues that for a long time weren’t talked about. More people from more parts of our society have a voice than ever before, and it will take time to hear these voices and knit them together into a coherent narrative. Sometimes we hope for a singular event to resolve these conflicts, but that’s never been how it works. We focus on the major institutions — from governments to large companies — but the bigger story has always been regular people using their voice to take billions of individual steps forward to make our lives and our communities better.

The future depends on all of us. Whether you like Facebook or not, we need to recognize what is at stake and come together to stand for free expression at this critical moment.

I believe in giving people a voice because, at the end of the day, I believe in people. And as long as enough of us keep fighting for this, I believe that more people’s voices will eventually help us work through these issues together and write a new chapter in our history — where from all of our individual voices and perspectives, we can bring the world closer together.






Facebook, Elections and Political Speech

Speaking at the Atlantic Festival in Washington DC today, I set out the measures that Facebook is taking to prevent outside interference in elections and Facebook’s attitude towards political speech on the platform. This is grounded in Facebook’s fundamental belief in free expression and respect for the democratic process, as well as the fact that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is.

You can read the full text of my speech below, but as I know there are often lots of questions about our policies and the way we enforce them I thought I’d share the key details.  

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny. That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.
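Taken together, those rules form a small decision procedure. The sketch below encodes the policy as described in this post; the function and parameter names are invented, and this is one reading of the policy for illustration, not Facebook’s code.

    # Sketch of the fact-checking rules for politicians described above.
    # Names are invented; the branching mirrors the policy as stated.

    def handle_politician_content(shares_previously_debunked: bool,
                                  is_paid_ad: bool) -> str:
        if not shares_previously_debunked:
            # Politicians' own organic content and ads are not sent to
            # third-party fact-checking partners for review.
            return "allow; exempt from third-party fact-checking"
        if is_paid_ad:
            # Previously debunked content is rejected from advertising.
            return "reject from ads"
        # Re-shared, already-debunked content is demoted and contextualized.
        return "demote; display related information from fact-checkers"

    print(handle_politician_content(shares_previously_debunked=True,
                                    is_paid_ad=False))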

Facebook has had a newsworthiness exemption since 2016. This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads – if someone chooses to run an ad on Facebook, the ad must still fall within our Community Standards and our advertising policies.

When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards. 

Read the full speech below.

Facebook

For those of you who don’t know me, which I suspect is most of you, I used to be a politician – I spent two decades in European politics, including as Deputy Prime Minister in the UK for five years.

And perhaps because I acquired a taste for controversy in my time in politics, a year ago I came to work for Facebook.

I don’t have long with you, so I just want to touch on three things: I want to say a little about Facebook; about how we are getting ourselves ready for the 2020 election; and about our basic attitude towards political speech.

So…Facebook. 

As a European, I’m struck by the tone of the debate in the US around Facebook. Here you have this global success story, invented in America, based on American values, that is used by a third of the world’s population.

A company that has created 40,000 US jobs in the last two years, is set to create 40,000 more in the coming years, and contributes tens of billions of dollars to the economy – with plans to spend more than $250 billion in the US in the next four years.

And while Facebook is subject to a lot of criticism in Europe, in India where I was earlier this month, and in many other places, the only place where it is being proposed that Facebook and other big Silicon Valley companies should be dismembered is here.

And whilst it might surprise you to hear me say this, I understand the underlying motive which leads people to call for that remedy – even if I don’t agree with the remedy itself.

Because what people want is that there should be proper competition, diversity, and accountability in how big tech companies operate – with success comes responsibility, and with power comes accountability.

But chopping up successful American businesses is not the best way to instill responsibility and accountability. For a start, Facebook and other US tech companies not only face fierce competition from each other for every service they provide – for photo and video sharing and messaging there are rival apps with millions or billions of users – but they also face increasingly fierce competition from their Chinese rivals: giants like Alibaba, TikTok and WeChat.

More importantly, pulling apart globally successful American businesses won’t actually do anything to solve the big issues we are all grappling with – privacy, the use of data, harmful content and the integrity of our elections. 

Those things can and will only be addressed by creating new rules for the internet, new regulations to make sure companies like Facebook are accountable for the role they play and the decisions they take.

That is why we argue in favor of better regulation of big tech, not the break-up of successful American companies. 

Elections

Now, elections. It is no secret that Facebook made mistakes in 2016, and that Russia tried to use Facebook to interfere with the election by spreading division and misinformation. But we’ve learned the lessons of 2016. Facebook has spent the three years since building its defenses to stop that happening again.

  • Cracking down on fake accounts – the main source of fake news and malicious content – preventing millions from being created every day;
  • Bringing in independent fact-checkers to verify content;
  • Recruiting an army of people – now 30,000 – and investing hugely in artificial intelligence systems to take down harmful content.

And we are seeing results. Last year, a Stanford report found that interactions with fake news on Facebook were down by two-thirds since 2016.

I know there’s also a lot of concern about so-called deepfake videos. We’ve recently launched an initiative called the Deepfake Detection Challenge, working with the Partnership on AI, companies like Microsoft and universities like MIT, Berkeley and Oxford, to find ways to detect this new form of manipulated content so that we can identify it and take action.

But even when the videos aren’t as sophisticated – such as the now infamous Speaker Pelosi video – we know that we need to do more.

As Mark Zuckerberg has acknowledged publicly, we didn’t get to that video quickly enough and too many people saw it before we took action. We must and we will get better at identifying lightly manipulated content before it goes viral and provide users with much more forceful information when they do see it.

We will be making further announcements in this area in the near future.

Crucially, we have also tightened our rules on political ads. Political advertising on Facebook is now far more transparent than anywhere else – including TV, radio and print advertising.

People who want to run these ads now need to submit ID and information about their organization. We label the ads and let you know who’s paid for them. And we put these ads in a library for seven years so that anyone can see them.
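For illustration, those disclosure requirements map naturally onto a simple archive record. A minimal sketch, with invented field names and a retention check based on the seven-year figure above:

    # Sketch of an ad-archive entry reflecting the disclosures described
    # above. Field names and structure are invented for illustration.
    from dataclasses import dataclass
    from datetime import date, timedelta

    RETENTION = timedelta(days=7 * 365)  # ads stay publicly visible for seven years

    @dataclass
    class PoliticalAdRecord:
        ad_text: str
        paid_for_by: str              # the "paid for by" label shown with the ad
        advertiser_id_verified: bool  # ID and organization info were submitted
        first_shown: date

        def in_public_library(self, today: date) -> bool:
            return today - self.first_shown <= RETENTION

    ad = PoliticalAdRecord("Vote for better schools", "Example Campaign Group",
                           advertiser_id_verified=True,
                           first_shown=date(2019, 9, 25))
    print(ad.in_public_library(date.today()))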

Political speech

Of course, stopping election interference is only part of the story when it comes to Facebook’s role in elections. Which brings me to political speech.

Freedom of expression is an absolute founding principle for Facebook. Since day one, giving people a voice to express themselves has been at the heart of everything we do. We are champions of free speech and defend it in the face of attempts to restrict it. Censoring or stifling political discourse would be at odds with what we are about.

In a mature democracy with a free press, political speech is a crucial part of how democracy functions. And it is arguably the most scrutinized form of speech that exists.

 In newspapers, on network and cable TV, and on social media, journalists, pundits, satirists, talk show hosts and cartoonists – not to mention rival campaigns – analyze, ridicule, rebut and amplify the statements made by politicians.

At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves.

To use tennis as an analogy, our job is to make sure the court is ready – the surface is flat, the lines painted, the net at the correct height. But we don’t pick up a racket and start playing. How the players play the game is up to them, not us.

We have a responsibility to protect the platform from outside interference, and to make sure that when people pay us for political ads we make it as transparent as possible. But it is not our role to intervene when politicians speak.

That’s why I want to be really clear today – we do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.

Of course, there are exceptions. Broadly speaking they are two-fold: where speech endangers people; and where we take money, which is why we have more stringent rules on advertising than we do for ordinary speech and rhetoric.

I was an elected politician for many years. I’ve had both words and objects thrown at me, I’ve been on the receiving end of all manner of accusations and insults.

It’s not new that politicians say nasty things about each other – that wasn’t invented by Facebook. What is new is that now they can reach people with far greater speed and at a far greater scale. That’s why we draw the line at any speech which can lead to real world violence and harm.

I know some people will say we should go further. That we are wrong to allow politicians to use our platform to say nasty things or make false claims. But imagine the reverse.

Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge what politicians say themselves.  

Conclusion

So, in conclusion, I understand the debate about big tech companies and how to tackle the real concerns that exist about data, privacy, content and election integrity. But I firmly believe that simply breaking them up will not make the problems go away. The real solutions will only come through new, smart regulation instead.

And I hope I have given you some reassurance about our approach to preventing election interference, and some clarity over how we will treat political speech in the run-up to 2020 and beyond.

Thank you.




