Community Standards Enforcement Report, November 2019 Edition




Today we’re publishing the fourth edition of our Community Standards Enforcement Report, detailing our work for Q2 and Q3 2019. We are now including metrics across ten policies on Facebook and four policies on Instagram.

These metrics include:

  • Prevalence: how often content that violates our policies was viewed
  • Content Actioned: how much content we took action on because it was found to violate our policies
  • Proactive Rate: of the content we took action on, how much was detected before someone reported it to us
  • Appealed Content: how much content people appealed after we took action
  • Restored Content: how much content was restored after we initially took action
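
Two of these metrics reduce to simple ratios. Below is a minimal sketch of the proactive rate, with the input figures implied by percentages reported later in this post rather than taken directly from it:

```python
def proactive_rate(actioned_total: int, actioned_proactively: int) -> float:
    """Of the content actioned, the share detected before any user report."""
    return actioned_proactively / actioned_total

# Example: 2.5 million pieces actioned with a 97.3% proactive rate implies
# roughly 2,432,500 proactive detections (figures based on the Q3 2019
# suicide and self-injury numbers quoted below).
assert round(proactive_rate(2_500_000, 2_432_500), 3) == 0.973
```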

We also launched a new page today so people can view examples of how our Community Standards apply to different types of content and see where we draw the line.

Adding Instagram to the Report
For the first time, we are sharing data on how we are doing at enforcing our policies on Instagram. In this first report for Instagram, we are providing data on four policy areas: child nudity and child sexual exploitation; regulated goods — specifically, illicit firearm and drug sales; suicide and self-injury; and terrorist propaganda. The report does not include appeals and restores metrics for Instagram, as appeals on Instagram were only launched in Q2 of this year, but these will be included in future reports.

While we use the same proactive detection systems to find and remove harmful content across both Instagram and Facebook, the metrics may be different across the two services. There are many reasons for this, including: the differences in the apps’ functionalities and how they’re used – for example, Instagram doesn’t have links, re-shares in feed, Pages or Groups; the differing sizes of our communities; where people in the world use one app more than another; and where we’ve had greater ability to use our proactive detection technology to date. When comparing metrics in order to see where progress has been made and where more improvements are needed, we encourage people to see how metrics change, quarter-over-quarter, for individual policy areas within an app.

What Else Is New in the Fourth Edition of the Report

  • Data on suicide and self-injury: We are now detailing how we’re taking action on suicide and self-injury content. This area is both sensitive and complex, and we work with experts to ensure everyone’s safety is considered. We remove content that depicts or encourages suicide or self-injury, including certain graphic imagery and real-time depictions that experts tell us might lead others to engage in similar behavior. We place a sensitivity screen over content that doesn’t violate our policies but that may be upsetting to some, including things like healed cuts or other non-graphic self-injury imagery in a context of recovery. We also recently strengthened our policies around self-harm and made improvements to our technology to find and remove more violating content.
    • On Facebook, we took action on about 2 million pieces of content in Q2 2019, of which 96.1% we detected proactively, and we saw further progress in Q3 when we removed 2.5 million pieces of content, of which 97.3% we detected proactively.
    • On Instagram, we saw similar progress and removed about 835,000 pieces of content in Q2 2019, of which 77.8% we detected proactively, and we removed about 845,000 pieces of content in Q3 2019, of which 79.1% we detected proactively.
  • Expanded data on terrorist propaganda: Our Dangerous Individuals and Organizations policy bans all terrorist organizations from having a presence on our services. To date, we have identified a wide range of groups, based on their behavior, as terrorist organizations. Previous reports only included our efforts specifically against al Qaeda, ISIS and their affiliates as we focused our measurement efforts on the groups understood to pose the broadest global threat. Now, we’ve expanded the report to include the actions we’re taking against all terrorist organizations. While the rate at which we detect and remove content associated with al Qaeda, ISIS and their affiliates on Facebook has remained above 99%, the rate at which we proactively detect content affiliated with any terrorist organization on Facebook is 98.5% and on Instagram is 92.2%. We will continue to invest in automated techniques to combat terrorist content and iterate on our tactics because we know bad actors will continue to change theirs.
  • Estimating prevalence for suicide and self-injury and regulated goods: In this report, we are adding prevalence metrics for content that violates our suicide and self-injury and regulated goods (illicit sales of firearms and drugs) policies for the first time. Because we care most about how often people may see content that violates our policies, we measure prevalence, or the frequency at which people may see this content on our services. For the policy areas addressing the most severe safety concerns — child nudity and sexual exploitation of children, regulated goods, suicide and self-injury, and terrorist propaganda — the likelihood that people view content that violates these policies is very low, and we remove much of it before people see it. As a result, when we sample views of content in order to measure prevalence for these policy areas, many times we do not find enough, or sometimes any, violating samples to reliably estimate a metric. Instead, we can estimate an upper limit of how often someone would see content that violates these policies. In Q3 2019, this upper limit was 0.04%, meaning that for each of these policies, out of every 10,000 views on Facebook or Instagram in Q3 2019, we estimate that no more than 4 of those views contained content that violated that policy. (A worked sketch of this kind of upper-bound estimate follows this list.)
    • It’s also important to note that when the prevalence is so low that we can only provide upper limits, this limit may change by a few hundredths of a percentage point between reporting periods, but changes that small do not mean there is a real difference in the prevalence of this content on the platform.
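
The report does not disclose the estimator behind these upper limits, but a standard way to bound a rate when a random sample turns up zero positives is the "rule of three": if n sampled views contain no violations, an approximate one-sided 95% upper confidence bound on prevalence is 3/n. A minimal sketch, with the sample size below chosen purely so the bound lands on the 0.04% figure quoted above:

```python
def prevalence_upper_bound(violating: int, sampled_views: int) -> float:
    """Approximate one-sided 95% upper confidence bound on prevalence.

    Uses the 'rule of three' when a sample contains no violating views,
    and a normal-approximation bound otherwise. An illustrative
    estimator only -- not Facebook's disclosed methodology.
    """
    if sampled_views <= 0:
        raise ValueError("need a positive sample size")
    if violating == 0:
        return 3.0 / sampled_views
    p = violating / sampled_views
    se = (p * (1 - p) / sampled_views) ** 0.5
    return p + 1.645 * se  # z for a one-sided 95% bound

# Example: zero violating views in a hypothetical sample of 7,500 views
# yields an upper bound of 3/7500 = 0.04% -- at most 4 violating views
# per 10,000, matching the shape of the figure quoted above.
print(f"{prevalence_upper_bound(0, 7_500):.2%}")  # 0.04%
```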

Progress to Help Keep People Safe
Across the most harmful types of content we work to combat, we’ve continued to strengthen our efforts to enforce our policies and bring greater transparency to our work. In addition to suicide and self-injury content and terrorist propaganda, the metrics for child nudity and sexual exploitation of children, as well as regulated goods, demonstrate this progress. The investments we’ve made in AI over the last five years continue to be a key factor in tackling these issues. In fact, recent advancements in this technology have helped us detect and remove violating content at a higher rate.

For child nudity and sexual exploitation of children, we improved our processes for adding violations to our internal database, enabling us to detect and remove additional instances of the same content shared on both Facebook and Instagram.

On Facebook:

  • In Q3 2019, we removed about 11.6 million pieces of content, up from Q1 2019 when we removed about 5.8 million. Over the last four quarters, we proactively detected over 99% of the content we removed for violating this policy.

While we are including data for Instagram for the first time, we have made progress over the last two quarters in increasing both content actioned and the proactive rate in this area:

  • In Q2 2019, we removed about 512,000 pieces of content, of which 92.5% we detected proactively.
  • In Q3, we saw greater progress and removed 754,000 pieces of content, of which 94.6% we detected proactively.

For our regulated goods policy prohibiting illicit firearm and drug sales, continued investments in our proactive detection systems and advancements in our enforcement techniques have allowed us to build on the progress from the last report.

On Facebook:

  • In Q3 2019, we removed about 4.4 million pieces of drug sale content, of which 97.6% we detected proactively — an increase from Q1 2019 when we removed about 841,000 pieces of drug sale content, of which 84.4% we detected proactively.
  • Also in Q3 2019, we removed about 2.3 million pieces of firearm sales content, of which 93.8% we detected proactively — an increase from Q1 2019 when we removed about 609,000 pieces of firearm sale content, of which 69.9% we detected proactively.

On Instagram:

  • In Q3 2019, we removed about 1.5 million pieces of drug sale content, of which 95.3% we detected proactively.
  • In Q3 2019, we removed about 58,600 pieces of firearm sales content, of which 91.3% we detected proactively.

New Tactics in Combating Hate Speech
Over the last two years, we’ve invested in proactive detection of hate speech so that we can detect this harmful content before people report it to us and sometimes before anyone sees it. Our detection techniques include text and image matching, which means we identify images and identical strings of text that have already been removed as hate speech. They also include machine-learning classifiers that look at things like language, as well as the reactions and comments on a post, to assess how closely it matches common phrases, patterns and attacks that we’ve seen previously in content that violates our policies against hate.

Initially, we used these systems to proactively detect potential hate speech violations and send them to our content review teams, because people can better assess context where AI cannot. Starting in Q2 2019, thanks to continued progress in our systems’ abilities to correctly detect violations, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy. We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases when our systems proactively detect potential hate speech, the content is still sent to our review teams to make a final determination. With these evolutions in our detection systems, our proactive rate has climbed to 80%, from 68% in our last report, and we’ve increased the volume of content we find and remove for violating our hate speech policy.
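
To make the "identical or near-identical" matching idea concrete, here is a toy sketch of near-duplicate text detection using character n-grams and Jaccard similarity. This is our illustration of the general technique, not Facebook's production system:

```python
import re

def ngrams(text: str, n: int = 3) -> set[str]:
    """Character n-grams of lowercased, punctuation-stripped text."""
    norm = re.sub(r"\W+", " ", text.lower()).strip()
    return {norm[i:i + n] for i in range(max(len(norm) - n + 1, 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    """Set-overlap similarity in [0, 1]."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def is_near_duplicate(candidate: str, removed_posts: list[str],
                      threshold: float = 0.9) -> bool:
    """Flag text that closely matches posts already removed as violating.

    A pairwise toy; a production system would use indexed hashes
    (e.g., MinHash/LSH) instead of scanning every removed post.
    """
    cand = ngrams(candidate)
    return any(jaccard(cand, ngrams(p)) >= threshold for p in removed_posts)
```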

While we are pleased with this progress, these technologies are not perfect and we know that mistakes can still happen. That’s why we continue to invest in systems that enable us to improve our accuracy in removing content that violates our policies while safeguarding content that discusses or condemns hate speech. Similar to how we review decisions made by our content review team in order to monitor the accuracy of our decisions, our teams routinely review removals by our automated systems to make sure we are enforcing our policies correctly. We also continue to review content again when people appeal and tell us we made a mistake in removing their post.

Updating our Metrics
Since our last report, we have improved the ways we measure how much content we take action on after identifying an issue in our accounting this summer. In this report, we are updating metrics we previously shared for content actioned, proactive rate, content appealed and content restored for the periods Q3 2018 through Q1 2019.

During those quarters, the issue with our accounting processes did not impact how we enforced our policies or how we informed people about those actions; it only impacted how we counted the actions we took. For example, if we find that a post containing one photo violates our policies, we want our metric to reflect that we took action on one piece of content — not two separate actions for removing the photo and the post. However, in July 2019, we found that the systems logging and counting these actions did not correctly log the actions taken. This was largely due to the difficulty of counting multiple actions that take place within a few milliseconds without missing, or overstating, any of the individual actions taken.
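
To make the counting fix concrete, here is a toy reconstruction (our own, not Facebook's actual logging pipeline) of the dedup rule the example above describes: rows logged for sub-items are folded into the single action on their parent post.

```python
# Hypothetical enforcement-log rows: (timestamp_ms, content_id, parent_id).
Log = list[tuple[int, str, str | None]]

def count_actions(log: Log) -> int:
    """Count one action per top-level piece of content.

    Rows logged for sub-items (parent_id set) are folded into the
    single action on their parent post rather than counted again.
    """
    return sum(1 for _, _, parent_id in log if parent_id is None)

log = [
    (1_000, "post_1", None),       # removing the post...
    (1_002, "photo_9", "post_1"),  # ...also logs its photo, 2 ms later
    (2_000, "post_2", None),
]
assert count_actions(log) == 2     # two pieces of content, not three actions
```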

We’ll continue to refine the processes we use to measure our actions and build a robust system to ensure the metrics we provide are accurate. We share more details about these processes here.






How Facebook Is Prepared for the 2019 UK General Election

Today, leaders from our offices in London and Menlo Park, California spoke with members of the press about Facebook’s efforts to prepare for the upcoming General Election in the UK on December 12, 2019. The following is a transcript of their remarks.

Rebecca Stimson, Head of UK Public Policy, Facebook

We wanted to bring you all together, now that the UK General Election is underway, to set out the range of actions we are taking to help ensure this election is transparent and secure – to answer your questions and to point you to the various resources we have available.  

There has already been a lot of focus on the role of social media within the campaign and there is a lot of information for us to set out. 

We have therefore gathered colleagues from both the UK and our headquarters in Menlo Park, California, covering our politics, product, policy and safety teams to take you through the details of those efforts. 

I will just say a few opening remarks before we dive into the details.

Helping protect elections is one of our top priorities and over the last two years we’ve made some significant changes – these broadly fall into three camps:

  • We’ve introduced greater transparency so that people know what they are seeing online and can scrutinize it more effectively; 
  • We have built stronger defenses to prevent things like foreign interference; 
  • And we have invested in both people and technology to ensure these new policies are effective.

So taking these in turn. 

Transparency

On the issue of transparency. We’ve tightened our rules to make political ads much more transparent, so people can see who is trying to influence their vote and what they are saying. 

We’ll discuss this in more detail shortly, but to summarize:  

  • Anybody who wants to run political ads must go through a verification process to prove who they are and that they are based here in the UK; 
  • Every political ad is labelled so you can see who has paid for it;
  • Anybody can click on any ad they see on Facebook and get more information on why they are seeing it, as well as block ads from particular advertisers;
  • And finally, we put all political ads in an Ad Library so that everyone can see what ads are running, the types of people who saw them and how much was spent – not just while the ads are live, but for seven years afterwards.

Taken together these changes mean that political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails. 

This is the first UK general election since we introduced these changes and we’re already seeing many journalists using these transparency tools to scrutinize the adverts which are running during this election – this is something we welcome and it’s exactly why we introduced these changes. 

Defense 

Turning to the stronger defenses we have put in place.

Nathaniel will shortly set out in more detail our work to prevent foreign interference and coordinated inauthentic behavior. But before he does I want to be clear right up front how seriously we take these issues and our commitment to doing everything we can to prevent election interference on our platforms. 

So just to highlight one of the things he will be talking about – we have, as part of this work, cracked down significantly on fake accounts. 

We now identify and shut down millions of fake accounts every day, many just seconds after they are created.

Investment

And lastly turning to investment in these issues.

We now have more than 35,000 people working on safety and security. We have been building and rolling out many of the new tools you will be hearing about today. And as Ella will set out later, we have introduced a number of safety measures including a dedicated reporting channel so that all candidates in the election can flag any abusive and threatening content directly to our teams.  

I’m also pleased to say that – now the election is underway – we have brought together an Elections Taskforce of people from our teams across the UK, EMEA and the US who are already working together every day to ensure election integrity on our platforms. 

The Elections Taskforce will be working on issues including threat intelligence, data science, engineering, operations, legal and others. It also includes representatives from WhatsApp and Instagram.

As we get closer to the election, these people will be brought together in physical spaces in their offices – what we call our Operations Centre. 

It’s important to remember that the Elections Taskforce is an additional layer of security on top of our ongoing monitoring for threats on the platform which operates 24/7. 

And while there will always be further improvements we can and will continue to make, and we can never say there won’t be challenges to respond to, we are confident that we’re better prepared than ever before.  

Political Ads

Before I wrap up this intro section of today’s call I also want to address two of the issues that have been hotly debated in the last few weeks – firstly whether political ads should be allowed on social media at all and secondly whether social media companies should decide what politicians can and can’t say as part of their campaigns. 

As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether. They account for just 0.5% of our revenue and they’re always destined to be controversial. 

But we believe it’s important that candidates and politicians can communicate with their constituents and would-be constituents. 

Online political ads are also important for both new challengers and campaigning groups to get their message out. 

Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether. 

And there’s also a really difficult question – if you were to consider banning political ads, where do you draw the line – for example, would anyone advocate for blocking ads for important issues like climate change or women’s empowerment? 

Turning to the second issue – there is also a question about whether we should decide what politicians and political parties can and can’t say.  

We don’t believe a private company like Facebook should censor politicians. This is why we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

This doesn’t mean that politicians can say whatever they want on Facebook. They can’t spread misinformation about where, when or how to vote. They can’t incite violence. We won’t allow them to share content that has previously been debunked as part of our third-party fact-checking program. And we of course take down content that violates local laws. 

But in general we believe political speech should be heard and we don’t feel it is right for private companies like us to fact-check or judge the veracity of what politicians and political parties say. 

Facebook’s approach to this issue is in line with the way political speech and campaigns have been treated in the UK for decades. 

Here in the UK – an open democracy with a vibrant free press – political speech has always been heavily scrutinized but it is not regulated. 

The UK has decided that there shouldn’t be rules about what political parties and candidates can and can’t say in their leaflets, direct mails, emails, billboards, newspaper ads or on the side of campaign buses.  

And as we’ve seen when politicians and campaigns have made hotly contested claims in previous elections and referenda, it’s not been the role of the Advertising Standards Authority, the Electoral Commission or any other regulator to police political speech. 

In our country it’s always been up to the media and the voters to scrutinize what politicians say and make their own minds up. 

Nevertheless, we have long called for new rules for the era of digital campaigning. 

Questions around what constitutes a political ad, who can run them and when, what steps those who purchase political ads must take, how much they can spend on them and whether there should be any rules on what they can and can’t say – these are all matters that can only be properly decided by Parliament and regulators.  

Legislation should be updated to set standards for the whole industry – for example, should all online political advertising be recorded in a public archive similar to our Ad Library and should that extend to traditional platforms like billboards, leaflets and direct mail?

We believe UK electoral law needs to be brought into the 21st century to give clarity to everyone – political parties, candidates and the platforms they use to promote their campaigns.

In the meantime our focus has been to increase transparency so anyone, anywhere, can scrutinize every ad that’s run and by whom. 

I will now pass you to the team to talk you through our efforts in more detail.

  • Nathaniel Gleicher will discuss tackling fake accounts and disrupting coordinated inauthentic behavior;
  • Rob Leathern will take you through our UK political advertising measures and Ad Library;
  • Antonia Woodford will outline our work tackling misinformation and our fact-checker partnerships; 
  • And finally, Ella Fallows will fill you in on what we’re doing around the safety of candidates and how we’re encouraging people to participate in the election.

Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook 

My team leads all our efforts across our apps to find and stop what we call influence operations, coordinated efforts to manipulate or corrupt public debate for a strategic goal. 

We also conduct regular red team exercises, both internally and with external partners to put ourselves into the shoes of threat actors and use that approach to identify and prepare for new and emerging threats. We’ll talk about some of the products of these efforts today. 

Before I dive into some of the details: as you’re listening to Rob, Antonia, and me, we’re going to be talking about a number of different initiatives that Facebook is focused on, both to protect the UK general election and, more broadly, to respond to integrity threats. I wanted to give you a brief framework for how to think about these. 

The key distinction that you’ll hear again and again is a distinction between content and behavior. At Facebook, we have policies that enable us to take action when we see content that violates our Community Standards. 

In addition, we have the tools that we use to respond when we see an actor engaged in deceptive or violating behavior, and we keep these two efforts distinct. And so, as you listen to us, we’ll be talking about different initiatives we have in both dimensions. 

Under content for example, you’ll hear Antonia talk about misinformation, about voter suppression, about hate speech, and about other types of content that we can take action against if someone tries to share that content on our platform. 

Under the behavioral side, you’ll hear me and you’ll hear Rob also mention some of our work around influence operations, around spam, and around hacking. 

I’m going to focus in particular on the first of these, influence operations; but the key distinction that I want to make is when we take action to remove someone because of their deceptive behavior, we’re not looking at, we’re not reviewing, and we’re not considering the content that they’re sharing. 

What we’re focused on is the fact that they are deceiving or misleading users through their actions. For example, using networks of fake accounts to conceal who they are and conceal who’s behind the operation. So we’ll refer back to these, but I think it’s helpful to distinguish between the content side of our enforcement and the behavior side of our enforcement. 

And that’s particularly important because we’ve seen some threat actors who work to understand where the boundaries are for content and make sure for example that the type of content they share doesn’t quite cross the line. 

And when we see someone doing that, because we have behavioral enforcement tools as well, we’re still able to make sure we’re protecting authenticity and public debate on the platform. 

In each of these dimensions, there are four pillars to our work. You’ll hear us refer to each of these during the call as well, but let me just say that these four fit together; no one of these by itself would be enough, but all four of them together give us a layered approach to defending public debate and ensuring authenticity on the platform. 

We have expert investigative teams that conduct proactive investigations to find, expose, and disrupt sophisticated threat actors. As we do that, we learn from those investigations and we build automated systems that can disrupt any kind of violating behavior across the platform at scale. 

We also, as Rebecca mentioned, build transparency tools so that users, external researchers and the press can see who is using the platform and ensure that they’re engaging authentically. This transparency also forces threat actors who are trying to conceal their identity to work harder to conceal and mislead. 

And then lastly, one of the things that’s extremely clear to us, particularly in the election space, is that this is a whole-of-society effort. And so, we work closely with partners in government, in civil society, and across industry to tackle these threats. 

And we’ve found that we’re most effective where we bring our tools to the table and can then work with government and other partners to respond and get ahead of these challenges as they emerge. 

One of the ways that we do this is through proactive investigations into the deceptive efforts engaged in by bad actors. Over the last year, our investigative teams, working together with our partners in civil society, law enforcement, and industry, have found and stopped more than 50 campaigns engaged in coordinated inauthentic behavior across the world. 

This includes an operation we removed in May that originated from Iran and targeted a number of countries, including the UK. As we announced at the time, we removed 51 Facebook accounts, 36 pages, seven groups, and three Instagram accounts involved in coordinated inauthentic behavior. 

The page admins and account owners typically posted content in English or Arabic, and most of the operation had no focus on a particular country, although there were some pages focused on the UK and the United States. 

Similarly, in March we announced that we removed a domestic UK network of about 137 Facebook and Instagram accounts, pages, and groups that were engaged in coordinated inauthentic behavior. 

The individuals behind these accounts presented themselves as far-right and anti-far-right activists, frequently changed page and group names, and operated fake accounts to engage in hate speech and spread divisive comments on both sides of the political debate in the UK. 

These are the types of investigations that we focus our core investigative team on. Whenever we see a sophisticated actor that’s trying to evade our automated systems, those teams, which are made up of experts from law enforcement, the intelligence community, and investigative journalism, can find and reveal that behavior. 

When we expose it, we announce it publicly and we remove it from the platform. Those expert investigators proactively hunt for evidence of these types of coordinated inauthentic behavior (CIB) operations around the world. 

This team has not seen evidence of widespread foreign operations aimed at the UK. But we are continuing to search for this and we will remove and publicly share details of networks of CIB that we identify on our platforms. 

As always with these takedowns, we remove these operations for the deceptive behavior they engaged in, not for the content they shared. This is that content/behavior distinction that I mentioned earlier. As we’ve improved our ability to disrupt these operations, we’ve also deepened our understanding of the types of threats out there and how best to counter them. 

Based on these learnings, we’ve recently updated our inauthentic behavior policy, which is posted publicly as part of our Community Standards, to clarify how we enforce against the spectrum of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state. For each investigation, we isolate any new behaviors we see and then we work to automate detection of them at scale. This connects to that second pillar of our integrity work. 

And this slows down bad actors and lets our investigators focus on improving our defenses against emerging threats. A good example of this work is our efforts to find and block fake accounts, which Rebecca mentioned. 

We know bad actors use fake accounts as a way to mask their identity and inflict harm on our platforms. That’s why we’ve built automated systems to find and remove these fake accounts. And each time we conduct one of these takedowns, or any other enforcement action, we learn more about what fake accounts look like and how our automated systems can detect and block them. 

This is why we have these systems in place today that block millions of fake accounts every day, often within minutes of their creation. Because information operations often target multiple platforms as well as traditional media, I mentioned our collaborations with industry, civil society and government. 

In addition to that, we are building increased transparency on our platform, so that the public along with open source researchers and journalists can find and expose more bad behavior themselves. 

This effort on transparency is incredibly important. Rob will talk about this in detail, but I do want to add one point here, specifically around pages. Increasingly, we’re seeing people operate pages that conceal the organization behind them as a way to make others think they are independent. 

We want to make sure Facebook is used to engage authentically, and that users understand who is speaking to them and what perspective they are representing. We noted last month that we would be announcing new approaches to address this, and today we’re introducing a policy to require more accountability for pages that are concealing their ownership in order to mislead people.

If we find a page is misleading people about its purpose by concealing its ownership, we will require it to go through our business verification process, which we recently announced, and show more information on the page itself about who is behind that page, including the organization’s legal name and verified city, phone number, or website in order for it to stay up. 

This type of increased transparency helps ensure that the platform continues to be authentic and the people who use the platform know who they’re talking to and understand what they’re seeing. 

Rob Leathern, Director of Product, Business Integrity, Facebook 

In addition to making pages more transparent as Nathaniel has indicated, we’ve also put a lot of effort into making political advertising on Facebook more transparent than it is anywhere else. 

Every political and issue ad in the UK that runs on Facebook now goes into our Ad Library public archive that everyone can access, regardless of whether or not they have a Facebook account. 

We launched this in the UK in October 2018 and, since then, there have been over 116,000 ads related to politics, elections, and social issues placed in the UK Ad Library. You can find all the ads that a candidate or organization is running, including how much they spent and who saw the ad. And we’re storing these ads in the Ad Library for seven years. 

Other media such as billboards, newspaper ads, direct mail, leaflets or targeted emails don’t today provide this level of transparency into the ads and who is seeing them. As a result, we’ve seen a significant number of press stories regarding the election driven by the information in Facebook’s Ad Library. 

We’re proud of this resource and the insight it provides into ads running on Facebook and Instagram, and that it is proving useful for media and researchers. And just last month, we made even more changes to both the Ad Library and Ad Library Reports. These include adding details on who the top advertising spenders are in each country of the UK, as well as providing an additional view by different date ranges, which people have been asking for. 

We’re now also making it clear which platform an ad ran on – for example, whether it ran on Facebook, Instagram, or both. 

For those of you unfamiliar with the Ad Library, which you can see at Facebook.com/adlibrary, I thought I’d run through it quickly. 

So this is the Ad Library. Here you see all the ads that have been classified as relating to politics or issues. We keep them in the library for seven years. As I mentioned, you can find the Ad Library at Facebook.com/adlibrary. 

You can also access the Ad Library through a specific page. For example, for this Page, you can see not only the advertising information, but also the transparency about the Page itself, along with the spend data. 

Here is an example of the ads that this Page is running, both active and inactive. In addition, if an ad has been disapproved for violating any of our ad policies, you’re able to see those ads as well. 

Here’s what it looks like if you click to see more detail about a specific ad. You’ll be able to see individual ad spend, impressions, and demographic information. 

And you’ll also be able to compare the individual ad spend to the overall macro spend by the Page, which is tracked in the section below. If you scroll back up, you’ll also be able to see the other information about the disclaimer that has been provided by the advertiser. 

We know we can’t protect elections alone and that everyone plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Advertising Policies and Community Standards. 

We also have the Ad Library API so journalists and academics can analyze ads about social issues, elections, or politics. The Ad Library application programming interface, or API, allows people to perform customized keyword searches of ads stored in the Ad Library. You can search data for all active and inactive issue, electoral or political ads. 
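
For those who want to work with the data programmatically, here is a hedged sketch of a keyword search against the ads_archive Graph API endpoint that backs the Ad Library API. The parameter and field names reflect the 2019-era public documentation and may have changed since; an access token from a verified developer account is required:

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; obtain via developer verification

resp = requests.get(
    "https://graph.facebook.com/v5.0/ads_archive",
    params={
        "search_terms": "election",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['GB']",
        "ad_active_status": "ALL",
        "fields": "page_name,ad_creative_body,spend,impressions",
        "access_token": ACCESS_TOKEN,
    },
)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), "-", ad.get("spend"))
```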

You can also access the Ad Library and the data therein through a specific page or through the Ad Library Report. Here is the Ad Library Report, which allows you to see the spend by specific advertisers, and you can download a full report of the data. 

Here we also allow you to see the spending by location and if you click in you can see the top spenders by region. So you can see, for example, in the various regions, who the top spenders in those areas are. 

Our goal is to provide an open API to news organizations, researchers, groups and people who can hold advertisers and us more accountable. 

We’ve definitely seen a lot of press, journalists, and researchers examining the data in the Ad Library and using it to generate these insights and we think that’s exactly a part of what will help hold both us and advertisers more accountable.

We hope these measures will build on existing transparency we have in place and help reporters, researchers and most importantly people on Facebook learn more about the Pages and information they’re engaging with. 

Antonia Woodford, Product Manager, Misinformation, Facebook

We are committed to fighting the spread of misinformation and viral hoaxes on Facebook. It is a responsibility we take seriously.

To accomplish this, we follow a three-pronged approach which we call remove, reduce, and inform. First and foremost, when something violates the law or our policies, we’ll remove it from the platform altogether.

As Nathaniel touched on, removing fake accounts is a priority; the vast majority are detected and removed within minutes of registration and before a person can report them. This is a key element in eliminating the potential spread of misinformation. 

The reduce and inform parts of the equation are how we reduce the spread of problematic content that doesn’t violate the law or our Community Standards while still ensuring freedom of expression on the platform; this is where the majority of our misinformation work is focused. 

To reduce the spread of misinformation, we work with third party fact-checkers. 

Through a combination of reporting from people on our platform and machine learning, potentially false posts are sent to third-party fact-checkers to review. These fact-checkers review this content, check the facts, and then rate its accuracy. They’re able to review links to news articles as well as photos, videos, or text posts on Facebook.

After content has been rated false, our algorithm heavily downranks this content in News Feed so it’s seen by fewer people and is far less likely to go viral. Fact-checkers can fact-check any posts they choose based on the queue we send them. 
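
A highly simplified sketch of that flow, with every field name and threshold here invented for illustration: signals feed a review queue, independent checkers rate content, and content rated false is downranked.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    user_reports: int = 0          # reporting signal from people
    classifier_score: float = 0.0  # hypothetical "possibly false" ML score
    rating: str | None = None      # set by an independent fact-checker
    rank_weight: float = 1.0       # multiplier used in feed ranking

def review_queue(posts: list[Post], score_cutoff: float = 0.7,
                 report_cutoff: int = 10) -> list[Post]:
    """Combine user reports and a classifier score into a fact-check queue."""
    return [p for p in posts
            if p.classifier_score >= score_cutoff
            or p.user_reports >= report_cutoff]

def apply_rating(post: Post, rating: str) -> None:
    """Record a fact-checker's rating; downrank content rated (partly) false."""
    post.rating = rating
    if rating in ("false", "partly false"):
        post.rank_weight *= 0.2    # illustrative downranking factor
```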

And lastly, as part of our work to inform people about the content they see on Facebook, we just launched a new design to better warn people when they see content that has been rated false or partly false by our fact-checking partners.

People will now see a more prominent label on photos and videos that have been fact-checked as false or partly false. This is a grey screen that sits over a post and says ‘false information’ and points people to fact-checkers’ articles debunking the claims. 

These clearer labels are what people have told us they want, what they have told us they expect Facebook to do, and what experts tell us is the right tactic for combating misinformation.

We’re rolling this change out in the UK this week for any photos and videos that have been rated through our fact-checking partnership. Though just one part of our overall strategy, fact-checking is fundamental to combating misinformation, and I want to share a little bit more about the program.

Our fact-checking partners are all accredited by the International Fact-Checking Network, which requires them to abide by a code of principles such as nonpartisanship and transparency of sources.

We currently have over 50 partners in over 40 languages around the world. As Rebecca outlined earlier, we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

Here in the UK we work with Full Fact and FactCheckNI as part of our program. To recap: we identify content that may be false using signals such as feedback from our users. This content is all submitted into a queue for our fact-checking partners to access. These fact-checkers then choose which content to review, check the facts, and rate the accuracy of the content.

These fact-checkers are independent organizations, so it is at their discretion what they choose to investigate. They can also fact-check whatever content they want outside of the posts we send their way.

If a fact-checker rates a story as false, it will appear lower in News Feed with the false information screen I mentioned earlier. This significantly reduces the number of people who see it.

Other posts that Full Fact and FactCheckNI choose to fact-check outside of our system will not be impacted on Facebook. 

And finally, on Tuesday we announced a partnership with the International Fact-Checking Network to create the Fact-Checking Innovation Initiative. This will fund innovation projects, new formats, and technologies to help benefit the broader fact-checking ecosystem. 

We are investing $500,000 into this new initiative, where organizations can submit applications for projects to improve fact-checkers’ scale and efficiency, increase the reach of fact-checks to empower more people with reliable information, build new tools to help combat misinformation, and encourage newsrooms to collaborate in fact-checking efforts.

Anyone from the UK can be a part of this new initiative. 

Ella Fallows, Politics and Government Outreach Manager UK, Facebook 

Our team’s role involves two main tasks: working with MPs and candidates to ensure they have a good experience and get the most from our platforms; and looking at how we can best use our platforms to promote participation in elections.

I’d like to start with the safety of MPs and candidates using our platforms. 

There is, rightly, a focus in the UK on the current tone of political debate. Let me be clear: hate speech and threats of violence have no place on our platforms and we’re investing heavily to tackle them. 

Additionally, for this campaign we have this week written to political parties and candidates, setting out the range of safety measures we have in place and reminding them of the terms and conditions and the Community Standards which govern their use of our platforms. 

As you may be aware, every piece of content on Facebook and Instagram has a report button, and when content that violates our Community Standards (what is and isn’t allowed on Facebook) is reported to us, it is removed. 

Since March this year, MPs have also had access to a dedicated reporting channel to flag any abusive and threatening content directly to our teams. Now that the General Election is underway we’re extending that support to all prospective candidates, making our team available to anyone standing to allow them to quickly report any concerns across our platforms and have them investigated. 

This is particularly pertinent to Tuesday’s news from the Government calling for a one-stop shop for candidates; we have already set up our own so that there is a single point of contact for candidates for issues across Facebook and Instagram.

Behind that reporting channel sits my team, which is focused on escalating reports from candidates and making sure we’re taking action as quickly as possible on anything that violates our Community Standards or Advertising Policies. 

But that team is not working alone – it’s backed up by our 35,000-strong global safety and security team that oversees content and behavior across the platform every day. 

And our technology is also helping us to automatically detect more of this harmful content. For example, while there is further to go, the proportion of hate speech we remove before it’s reported to us has almost tripled over the last two years.

We also have a Government, Politics & Advocacy Portal which is a home for everything a candidate will need during the campaign, including ‘how to’ guides on subjects such as registering as a political advertiser and running campaigns on Facebook, best practice tips and troubleshooting guides for technical issues.

We’re working with all of the political parties and the Electoral Commission to ensure candidates are aware of both the reporting channel to reach my team and the Government, Politics & Advocacy Portal.

We’re also working with political parties and the Electoral Commission to help candidates prepare for the election through a few different initiatives:

  • Firstly, while we don’t provide ongoing guidance or embed anyone into campaigns, we have held sessions with each party on how to use and get the most from our platforms for their campaigns, and we’ll continue to hold webinars throughout the General Election period for any candidate and their staff to join.
  • We’re also working with women’s networks within the parties to hold dedicated sessions for female candidates providing extra guidance on safety and outlining the help available to prevent harassment on our platforms. We want to ensure we’re doing everything possible to help them connect with their constituents, free from harassment.
  • Finally, we’re working with the Electoral Commission and political parties to distribute to every candidate in the General Election the safety guides we have put together, to ensure we reach everyone, not just those attending our outreach sessions. 

For example, we have developed a range of tools that allow public figures to moderate and filter the content that people put on their Facebook Pages, to prevent negative content from appearing in the first place. People who help manage Pages can hide or delete individual comments. 

They can also proactively moderate comments and posts by visitors by turning on the profanity filter, or blocking specific words or lists of words that they do not want to appear on their Page. Page admins can also remove or ban people from their Pages. 
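
As a toy illustration of the blocked-words moderation described above (in the product, these lists are configured in Page settings rather than via code):

```python
import re

BLOCKED_WORDS = {"exampleslur", "examplethreat"}  # hypothetical admin-chosen words

def should_hide(comment: str, blocked: set[str] = BLOCKED_WORDS) -> bool:
    """Hide a comment when it contains any word on the Page's block list."""
    words = re.findall(r"[a-z']+", comment.lower())
    return any(word in blocked for word in words)

assert should_hide("that was an exampleslur, frankly")
assert not should_hide("a perfectly civil comment")
```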

We hope these steps help every candidate to reach their constituents, and get the most from our platforms. But our work doesn’t stop there.

The second area our team focuses on is promoting civic engagement. In addition to supporting and advising candidates, we also, of course, want to help promote voter participation in the election. 

For the past five years, we’ve used badges and reminders at the top of people’s News Feeds to encourage people to vote in elections around the world. The same will be true for this campaign. 

We’ll run reminders to register to vote, with a link to the Electoral Commission’s voter registration page, in the week running up to the voter registration deadline. 

On election day itself, we’ll also run a reminder to vote with a link to the Electoral Commission website so voters can find their polling station and any information they need. This will include a button to share that you voted. 

We know from speaking to the Electoral Commission that these reminders for past national votes in the UK have had a positive effect on voter registration.

We hope that this combination of steps will help to ensure both candidates and voters engaging with the General Election on our platforms have the best possible experience.





Source link

How Facebook Has Prepared for the 2019 UK General Election

Today, leaders from our offices in London and Menlo Park, California spoke with members of the press about Facebook’s efforts to prepare for the upcoming General Election in the UK on December 12, 2019. The following is a transcript of their remarks.

Rebecca Stimson, Head of UK Public Policy, Facebook

We wanted to bring you all together, now that the UK General Election is underway, to set out the range of actions we are taking to help ensure this election is transparent and secure – to answer your questions and to point you to the various resources we have available.  

There has already been a lot of focus on the role of social media within the campaign and there is a lot of information for us to set out. 

We have therefore gathered colleagues from both the UK and our headquarters in Menlo Park, California, covering our politics, product, policy and safety teams to take you through the details of those efforts. 

I will just say a few opening remarks before we dive into the details

Helping protect elections is one of our top priorities and over the last two years we’ve made some significant changes – these broadly fall into three camps:

  • We’ve introduced greater transparency so that people know what they are seeing online and can scrutinize it more effectively; 
  • We have built stronger defenses to prevent things like foreign interference; 
  • And we have invested in both people and technology to ensure these new policies are effective.

So taking these in turn. 

Transparency

On the issue of transparency. We’ve tightened our rules to make political ads much more transparent, so people can see who is trying to influence their vote and what they are saying. 

We’ll discuss this in more detail shortly, but to summarize:  

  • Anybody who wants to run political ads must go through a verification process to prove who they are and that they are based here in the UK; 
  • Every political ad is labelled so you can see who has paid for them;
  • Anybody can click on any ad they see on Facebook and get more information on why they are seeing it, as well as block ads from particular advertisers;
  • And finally, we put all political ads in an Ad Library so that everyone can see what ads are running, the types of people who saw them and how much was spent – not just while the ads are live, but for seven years afterwards.

Taken together these changes mean that political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails. 

This is the first UK general election since we introduced these changes and we’re already seeing many journalists using these transparency tools to scrutinize the adverts which are running during this election – this is something we welcome and it’s exactly why we introduced these changes. 

Defense 

Turning to the stronger defenses we have put in place.

Nathaniel will shortly set out in more detail our work to prevent foreign interference and coordinated inauthentic behavior. But before he does I want to be clear right up front how seriously we take these issues and our commitment to doing everything we can to prevent election interference on our platforms. 

So just to highlight one of the things he will be talking about – we have, as part of this work, cracked down significantly on fake accounts. 

We now identify and shut down millions of fake accounts every day, many just seconds after they were created.

Investment

And lastly turning to investment in these issues.

We now have more than 35,000 people working on safety and security. We have been building and rolling out many of the new tools you will be hearing about today. And as Ella will set out later, we have introduced a number of safety measures including a dedicated reporting channel so that all candidates in the election can flag any abusive and threatening content directly to our teams.  

I’m also pleased to say that – now the election is underway – we have brought together an Elections Taskforce of people from our teams across the UK, EMEA and the US who are already working together every day to ensure election integrity on our platforms. 

The Elections Taskforce will be working on issues including threat intelligence, data science, engineering, operations, legal and others. It also includes representatives from WhatsApp and Instagram.

As we get closer to the election, these people will be brought together in physical spaces in their offices – what we call our Operations Centre. 

It’s important to remember that the Elections Taskforce is an additional layer of security on top of our ongoing monitoring for threats on the platform which operates 24/7. 

And while there will always be further improvements we can and will continue to make, and we can never say there won’t be challenges to respond to, we are confident that we’re better prepared than ever before.  

Political Ads

Before I wrap up this intro section of today’s call I also want to address two of the issues that have been hotly debated in the last few weeks – firstly whether political ads should be allowed on social media at all and secondly whether social media companies should decide what politicians can and can’t say as part of their campaigns. 

As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether. They account for just 0.5% of our revenue and they’re always destined to be controversial. 

But we believe it’s important that candidates and politicians can communicate with their constituents and would be constituents. 

Online political ads are also important for both new challengers and campaigning groups to get their message out. 

Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether. 

And there’s also a really difficult question – if you were to consider banning political ads, where do you draw the line – for example, would anyone advocate for blocking ads for important issues like climate change or women’s empowerment? 

Turning to the second issue – there is also a question about whether we should decide what politicians and political parties can and can’t say.  

We don’t believe a private company like Facebook should censor politicians. This is why we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

This doesn’t mean that politicians can say whatever they want on Facebook. They can’t spread misinformation about where, when or how to vote. They can’t incite violence. We won’t allow them to share content that has previously been debunked as part of our third-party fact-checking program. And we of course take down content that violates local laws. 

But in general we believe political speech should be heard and we don’t feel it is right for private companies like us to fact-check or judge the veracity of what politicians and political parties say. 

Facebook’s approach to this issue is in line with the way political speech and campaigns have been treated in the UK for decades. 

Here in the UK – an open democracy with a vibrant free press – political speech has always been heavily scrutinized but it is not regulated. 

The UK has decided that there shouldn’t be rules about what political parties and candidates can and can’t say in their leaflets, direct mails, emails, billboards, newspaper ads or on the side of campaign buses.  

And as we’ve seen when politicians and campaigns have made hotly contested claims in previous elections and referenda, it’s not been the role of the Advertising Standards Authority, the Electoral Commission or any other regulator to police political speech. 

In our country it’s always been up to the media and the voters to scrutinize what politicians say and make their own minds up. 

Nevertheless, we have long called for new rules for the era of digital campaigning. 

Questions around what constitutes a political ad, who can run them and when, what steps those who purchase political ads must take, how much they can spend on them and whether there should be any rules on what they can and can’t say – these are all matters that can only be properly decided by Parliament and regulators.  

Legislation should be updated to set standards for the whole industry – for example, should all online political advertising be recorded in a public archive similar to our Ad Library and should that extend to traditional platforms like billboards, leaflets and direct mail?

We believe UK electoral law needs to be brought into the 21st century to give clarity to everyone – political parties, candidates and the platforms they use to promote their campaigns.

In the meantime our focus has been to increase transparency so anyone, anywhere, can scrutinize every ad that’s run and by whom. 

I will now pass you to the team to talk you through our efforts in more detail.

  • Nathaniel Gleicher will discuss tackling fake accounts and disrupting coordinated inauthentic behavior;
  • Rob Leathern will take you through our UK political advertising measures and Ad Library;
  • Antonia Woodford will outline our work tackling misinformation and our fact-checker partnerships; 
  • And finally, Ella Fallows will fill you on what we’re doing around safety of candidates and how we’re encouraging people to participate in the election; 

Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook 

My team leads all our efforts across our apps to find and stop what we call influence operations, coordinated efforts to manipulate or corrupt public debate for a strategic goal. 

We also conduct regular red team exercises, both internally and with external partners to put ourselves into the shoes of threat actors and use that approach to identify and prepare for new and emerging threats. We’ll talk about some of the products of these efforts today. 

Before I dive into some of the details, as you’re listening to Rob, Antonia, and I, we’re going to be talking about a number of different initiatives that Facebook is focused on, both to protect the UK general election and more broadly, to respond to integrity threats. I wanted to give you a brief framework for how to think about these. 

The key distinction that you’ll hear again and again is between content and behavior. At Facebook, we have policies that enable us to take action when we see content that violates our Community Standards. 

In addition, we have the tools that we use to respond when we see an actor engaged in deceptive or violating behavior, and we keep these two efforts distinct. And so, as you listen to us, we’ll be talking about different initiatives we have in both dimensions. 

Under content for example, you’ll hear Antonia talk about misinformation, about voter suppression, about hate speech, and about other types of content that we can take action against if someone tries to share that content on our platform. 

On the behavioral side, you’ll hear me and Rob mention some of our work around influence operations, spam, and hacking. 

I’m going to focus in particular on the first of these, influence operations; but the key distinction I want to make is that when we take action to remove someone because of their deceptive behavior, we’re not looking at, reviewing, or considering the content they’re sharing. 

What we’re focused on is the fact that they are deceiving or misleading users through their actions. For example, using networks of fake accounts to conceal who they are and conceal who’s behind the operation. So we’ll refer back to these, but I think it’s helpful to distinguish between the content side of our enforcement and the behavior side of our enforcement. 

And that’s particularly important because we’ve seen some threat actors who work to understand where the boundaries are for content and make sure for example that the type of content they share doesn’t quite cross the line. 

And when we see someone doing that, because we have behavioral enforcement tools as well, we’re still able to make sure we’re protecting authenticity and public debate on the platform. 

In each of these dimensions, there are four pillars to our work. You’ll hear us refer to each of these during the call as well, but let me just say that these four fit together: no one of them by itself would be enough, but all four of them together give us a layered approach to defending public debate and ensuring authenticity on the platform. 

We have expert investigative teams that conduct proactive investigations to find, expose, and disrupt sophisticated threat actors. As we do that, we learn from those investigations and we build automated systems that can disrupt any kind of violating behavior across the platform at scale. 

We also, as Rebecca mentioned, build transparency tools so that users, external researchers and the press can see who is using the platform and ensure that they’re engaging authentically. This transparency also forces threat actors who are trying to conceal their identity to work harder to mislead. 

And then lastly, one of the things that’s extremely clear to us, particularly in the election space, is that this is a whole-of-society effort. And so, we work closely with partners in government, in civil society, and across industry to tackle these threats. 

And we’ve found that we’re most effective when we bring our tools to the table and then work with government and other partners to respond to and get ahead of these challenges as they emerge. 

One of the ways that we do this is through proactive investigations into the deceptive efforts engaged in by bad actors. Over the last year, our investigative teams, working together with our partners in civil society, law enforcement, and industry, have found and stopped more than 50 campaigns engaged in coordinated inauthentic behavior across the world. 

This includes an operation we removed in May that originated from Iran and targeted a number of countries, including the UK. As we announced at the time, we removed 51 Facebook accounts, 36 pages, seven groups, and three Instagram accounts involved in coordinated inauthentic behavior. 

The page admins and account owners typically posted content in English or Arabic, and most of the operation had no focus on a particular country, although there were some pages focused on the UK and the United States. 

Similarly, in March we announced that we removed a domestic UK network of about 137 Facebook and Instagram accounts, pages, and groups that were engaged in coordinated inauthentic behavior. 

The individuals behind these accounts presented themselves as far right and anti-far right activists, frequently changed page and group names, and operated fake accounts to engage in hate speech and spread divisive comments on both sides of the political debate in the UK. 

These are the types of investigations that we focus our core investigative team on. Whenever we see a sophisticated actor that’s trying to evade our automated systems, those teams, which are made up of experts from law enforcement, the intelligence community, and investigative journalism, can find and reveal that behavior. 

When we expose it, we announce it publicly and we remove it from the platform. Those expert investigators proactively hunt for evidence of these types of coordinated inauthentic behavior (CIB) operations around the world. 

This team has not seen evidence of widespread foreign operations aimed at the UK. But we are continuing to search for this and we will remove and publicly share details of networks of CIB that we identify on our platforms. 

As always with these takedowns, we remove these operations for the deceptive behavior they engaged in, not for the content they shared. This is that content/behavior distinction that I mentioned earlier. As we’ve improved our ability to disrupt these operations, we’ve also deepened our understanding of the types of threats out there and how best to counter them. 

Based on these learnings, we’ve recently updated our inauthentic behavior policy, which is posted publicly as part of our Community Standards, to clarify how we enforce against the spectrum of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state. For each investigation, we isolate any new behaviors we see and then work to automate detection of them at scale. This connects to the second pillar of our integrity work. 

And this slows down the bad guys and lets our investigators focus on improving our defenses against emerging threats. A good example of this work is our efforts to find and block fake accounts, which Rebecca mentioned. 

We know bad actors use fake accounts as a way to mask their identity and inflict harm on our platforms. That’s why we’ve built an automated system to find and remove these fake accounts. And each time we conduct one of these takedowns, or any other enforcement action, we learn more about what fake accounts look like and how automated systems can detect and block them. 

This is why we have systems in place today that block millions of fake accounts every day, often within minutes of their creation. 
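As a purely hypothetical illustration of this kind of registration-time blocking, here is a minimal sketch in Python of a scorer that combines simple signup signals. Every signal name, weight, and threshold below is invented for illustration; Facebook has not published how its real detection systems work.

```python
from dataclasses import dataclass

@dataclass
class SignupAttempt:
    accounts_from_ip_last_hour: int   # burst creation from one source
    profile_photo_is_stock: bool      # reused or stock profile photos
    name_matches_known_pattern: bool  # auto-generated name templates

# Invented weights and threshold; in a real system these would be
# learned from past takedowns, as described above.
WEIGHTS = {"burst": 0.5, "stock_photo": 0.3, "name_pattern": 0.4}
BLOCK_THRESHOLD = 0.6

def fake_account_score(attempt: SignupAttempt) -> float:
    """Combine simple registration-time signals into a risk score."""
    score = 0.0
    if attempt.accounts_from_ip_last_hour > 5:
        score += WEIGHTS["burst"]
    if attempt.profile_photo_is_stock:
        score += WEIGHTS["stock_photo"]
    if attempt.name_matches_known_pattern:
        score += WEIGHTS["name_pattern"]
    return score

def should_block(attempt: SignupAttempt) -> bool:
    # Blocking at creation time stops harm within minutes, rather
    # than waiting for user reports after the account is active.
    return fake_account_score(attempt) >= BLOCK_THRESHOLD

print(should_block(SignupAttempt(20, True, False)))  # True (0.8 >= 0.6)
```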

Because information operations often target multiple platforms as well as traditional media, we also rely on the collaborations with industry, civil society and government that I mentioned earlier. In addition to that, we are building increased transparency on our platform, so that the public, along with open source researchers and journalists, can find and expose more bad behavior themselves. 

This effort on transparency is incredibly important. Rob will talk about this in detail, but I do want to add one point here, specifically around pages. Increasingly, we’re seeing people operate pages that conceal the organization behind them as a way to make others think they are independent. 

We want to make sure Facebook is used to engage authentically, and that users understand who is speaking to them and what perspective they are representing. We noted last month that we would be announcing new approaches to address this, and today we’re introducing a policy to require more accountability for pages that are concealing their ownership in order to mislead people.

If we find a page is misleading people about its purpose by concealing its ownership, we will require it to go through our business verification process, which we recently announced, and to show more information on the page itself about who is behind it, including the organization’s legal name and verified city, phone number, or website, in order for it to stay up. 

This type of increased transparency helps ensure that the platform continues to be authentic and the people who use the platform know who they’re talking to and understand what they’re seeing. 

Rob Leathern, Director of Product, Business Integrity, Facebook 

In addition to making pages more transparent, as Nathaniel has indicated, we’ve also put a lot of effort into making political advertising on Facebook more transparent than it is anywhere else. 

Every political and issue ad that runs on Facebook now goes into our Ad Library, a public archive that everyone can access, regardless of whether or not they have a Facebook account. 

We launched this in the UK in October 2018 and, since then, there have been over 116,000 ads related to politics, elections, and social issues placed in the UK Ad Library. You can find all the ads that a candidate or organization is running, including how much they spent and who saw the ad. And we’re storing these ads in the Ad Library for seven years. 

Other media such as billboards, newspaper ads, direct mail, leaflets or targeted emails don’t today provide this level of transparency into the ads and who is seeing them. And as a result, we’ve seen a significant number of press stories regarding the election driven by the information in Facebook’s Ad Library. 

We’re proud of this resource and the insight it provides into ads running on Facebook and Instagram, and we’re glad it is proving useful for media and researchers. And just last month, we made even more changes to both the Ad Library and Ad Library Reports. These include adding details on who the top advertising spenders are in each country of the UK, as well as providing an additional view by different date ranges, which people have been asking for. 

We’re now also making it clear which of our platforms an ad ran on: for example, whether it ran on Facebook, Instagram, or both. 

For those of you unfamiliar with the Ad Library, which you can see at Facebook.com/adlibrary, I thought I’d run through it quickly. 

So this is the Ad Library. Here you see all the ads that have been classified as relating to politics or issues. We keep them in the library for seven years. As I mentioned, you can find the Ad Library at Facebook.com/adlibrary. 

You can also access the Ad Library through a specific page. For example, for this Page, you can see not only the advertising information, but also the transparency about the Page itself, along with the spend data. 

Here is an example of the ads that this Page is running, both active and inactive. In addition, if an ad has been disapproved for violating any of our ad policies, you’re able to see those ads as well. 

Here’s what it looks like if you click to see more detail about a specific ad. You’ll be able to see individual ad spend, impressions, and demographic information. 

And you’ll also be able to compare the individual ad spend to the overall macro spend by the Page, which is tracked in the section below. If you scroll back up, you’ll also be able to see the other information about the disclaimer that has been provided by the advertiser. 

We know we can’t protect elections alone and that everyone plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Advertising Policies and Community Standards. 

We also have the Ad Library API so journalists and academics can analyze ads about social issues, elections, or politics. The Ad Library application programming interface, or API, allows people to perform customized keyword searches of ads stored in the Ad Library. You can search data for all active and inactive issue, electoral or political ads. 
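To make that concrete, here is a minimal sketch of querying the Ad Library API from Python. It assumes a valid access token obtained through Facebook’s identity confirmation process; the endpoint, parameters, and fields reflect the Graph API’s ads_archive endpoint as documented around this time, but the placeholder values are assumptions and should be checked against the current documentation before use.

```python
import requests

# Placeholder token; a real one requires completing Facebook's
# identity confirmation process for the Ad Library API.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

# The Ad Library is exposed through the Graph API's ads_archive endpoint.
URL = "https://graph.facebook.com/v5.0/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",  # issue, electoral or political ads
    "ad_active_status": "ALL",             # both active and inactive ads
    "ad_reached_countries": "['GB']",      # ads delivered in the UK
    "search_terms": "'election'",          # customized keyword search
    "fields": "page_name,ad_delivery_start_time,spend,impressions",
    "limit": 25,
}

response = requests.get(URL, params=params)
response.raise_for_status()

for ad in response.json().get("data", []):
    # Spend and impressions come back as ranges (lower/upper bounds),
    # not exact figures.
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```

Results are paginated; the paging.next URL in the response can be followed to retrieve further pages.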

You can also access the Ad Library and the data therein through a specific Page or through the Ad Library Report. Here is the Ad Library Report; it allows you to see the spend by specific advertisers, and you can download a full report of the data.

Here we also allow you to see spending by location, and if you click in you can see the top spenders in each region. 

Our goal is to provide an open API to news organizations, researchers, groups and people who can hold advertisers and us more accountable. 

We’ve seen a lot of press, journalists, and researchers examining the data in the Ad Library and using it to generate insights, and we think that’s exactly the kind of scrutiny that will help hold both us and advertisers more accountable.

We hope these measures will build on the transparency we already have in place and help reporters, researchers and, most importantly, people on Facebook learn more about the Pages and information they’re engaging with. 

Antonia Woodford, Product Manager, Misinformation, Facebook

We are committed to fighting the spread of misinformation and viral hoaxes on Facebook. It is a responsibility we take seriously.

To accomplish this, we follow a three-pronged approach which we call remove, reduce, and inform. First and foremost, when something violates the law or our policies, we’ll remove it from the platform altogether.

As Nathaniel touched on, removing fake accounts is a priority; the vast majority are detected and removed within minutes of registration, before a person can report them. This is a key element in eliminating the potential spread of misinformation. 

The reduce and inform parts of the equation are how we reduce the spread of problematic content that doesn’t violate the law or our Community Standards while still ensuring freedom of expression on the platform, and this is where the majority of our misinformation work is focused. 

To reduce the spread of misinformation, we work with third party fact-checkers. 

Through a combination of reporting from people on our platform and machine learning, potentially false posts are sent to third party fact-checkers to review. These fact-checkers review this content, check the facts, and then rate its accuracy. They’re able to review links in news articles as well as photos, videos, or text posts on Facebook.

After content has been rated false, our algorithm heavily downranks it in News Feed so it’s seen by fewer people and is far less likely to go viral. Fact-checkers can fact-check any posts they choose based on the queue we send them. 
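As an illustration of this pipeline, here is a hypothetical sketch in Python of how flagged posts might flow from detection signals into a fact-checking queue and then into downranking. All names, thresholds, and the downrank factor are invented for illustration; Facebook has not published its actual implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Post:
    post_id: str
    user_reports: int = 0         # "this may be false" reports from people
    model_score: float = 0.0      # ML estimate that the post is false
    rating: Optional[str] = None  # set only by an independent fact-checker
    rank_multiplier: float = 1.0  # applied to the post's News Feed score

REVIEW_THRESHOLD = 0.7  # invented; the real signal mix is not public
DOWNRANK_FACTOR = 0.05  # "heavily downranked"; the exact value is not public

def enqueue_for_fact_check(post: Post, queue: List[Post]) -> None:
    # Combine user reports and machine learning, as described above;
    # fact-checkers then choose what to review from this queue.
    if post.user_reports >= 3 or post.model_score >= REVIEW_THRESHOLD:
        queue.append(post)

def apply_rating(post: Post, rating: str) -> None:
    # A "false" or "partly false" rating triggers downranking and a
    # warning label; the platform itself does not rate the content.
    post.rating = rating
    if rating in ("false", "partly false"):
        post.rank_multiplier = DOWNRANK_FACTOR

queue: List[Post] = []
post = Post("post123", user_reports=5, model_score=0.9)
enqueue_for_fact_check(post, queue)
for flagged in queue:
    apply_rating(flagged, "false")  # an independent fact-checker's verdict
print(post.rank_multiplier)  # 0.05 -> seen by far fewer people
```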

And lastly, as part of our work to inform people about the content they see on Facebook, we just launched a new design to better warn people when they see content that has been rated false or partly false by our fact-checking partners.

People will now see a more prominent label on photos and videos that have been fact-checked as false or partly false. This is a grey screen that sits over a post and says ‘false information’ and points people to fact-checkers’ articles debunking the claims. 

These clearer labels are what people have told us they want, what they have told us they expect Facebook to do, and what experts tell us is the right tactic for combating misinformation.

We’re rolling this change out in the UK this week for any photos and videos that have been rated through our fact-checking partnership. Though just one part of our overall strategy, fact-checking is fundamental to how we combat misinformation, and I want to share a little bit more about the program.

Our fact-checking partners are all accredited by the International Fact-Checking Network, which requires them to abide by a code of principles such as nonpartisanship and transparency of sources.

We currently have over 50 partners in over 40 languages around the world. As Rebecca outlined earlier, we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

Here in the UK we work with Full Fact and FactCheckNI as part of our program. To recap: we identify content that may be false using signals such as feedback from our users. This content is all submitted into a queue for our fact-checking partners to access. These fact-checkers then choose which content to review, check the facts, and rate the accuracy of the content.

These fact-checkers are independent organizations, so it is at their discretion what they choose to investigate. They can also fact-check whatever content they want outside of the posts we send their way.

If a fact-checker rates a story as false, it will appear lower in News Feed with the false information screen I mentioned earlier. This significantly reduces the number of people who see it.

Other posts that Full Fact and FactCheckNI choose to fact-check outside of our system will not be impacted on Facebook. 

And finally, on Tuesday we announced a partnership with the International Fact-Checking Network to create the Fact-Checking Innovation Initiative. This will fund innovation projects, new formats, and technologies to help benefit the broader fact-checking ecosystem. 

We are investing $500,000 into this new initiative, where organizations can submit applications for projects to improve fact-checkers’ scale and efficiency, increase the reach of fact-checks to empower more people with reliable information, build new tools to help combat misinformation, and encourage newsrooms to collaborate in fact-checking efforts.

Anyone from the UK can be a part of this new initiative. 

Ella Fallows, Politics and Government Outreach Manager UK, Facebook 

Our team’s role involves two main tasks: working with parties, MPs and candidates to ensure they have a good experience and get the most from our platforms; and looking at how we can best use our platforms to promote participation in elections.

I’d like to start with how MPs and candidates use our platforms. 

There is, rightly, a focus in the UK on the current tone of political debate. Let me be clear: hate speech and threats of violence have no place on our platforms, and we’re investing heavily to tackle them. 

Additionally, for this campaign, we have this week written to political parties and candidates setting out the range of safety measures we have in place and reminding them of the terms and conditions and the Community Standards which govern our platforms. 

As you may be aware, every piece of content on Facebook and Instagram has a report button, and when content is reported to us which violates our Community Standards – what is and isn’t allowed on Facebook – it is removed. 

Since March of this year, MPs have also had access to a dedicated reporting channel to flag any abusive and threatening content directly to our teams. Now that the General Election is underway we’re extending that support to all prospective candidates, making our team available to anyone standing to allow them to quickly report any concerns across our platforms and have them investigated. 

This is particularly pertinent to Tuesday’s news from the Government calling for a one-stop shop for candidates. We have already set up our own one-stop shop so that there is a single point of contact for candidates for issues across Facebook and Instagram.

But our team is not working alone; it’s backed up by our 35,000-strong global safety and security team that oversees content and behavior across the platform every day. 

And our technology is also helping us to automatically detect more of this harmful content. For example, the proportion of hate speech we have removed before it’s reported to us has increased significantly over the last two years, and we will be releasing new figures on this later this month.

We also have a Government, Politics & Advocacy Portal which is a home for everything a candidate will need during the campaign, including ‘how to’ guides on subjects such as political advertising, campaigning on Facebook and troubleshooting guides for technical issues.

We’re working with the political parties to ensure candidates are aware of both the reporting channel to reach my team and the Government, Politics & Advocacy Portal.

We’re holding a series of sessions for candidates on safety and outlining the help available to address harassment on our platforms. We’ve already held dedicated sessions for female candidates in partnership with women’s networks within the parties to provide extra guidance. We want to ensure we’re doing everything possible to help them connect with their constituents, free from harassment.

Finally, we’re working with the Government to distribute the safety guides we have put together to every candidate via returning officers in the General Election, to ensure we reach everyone, not just those attending our outreach sessions. Our safety guides include information on a range of tools we have developed:

  • For example, public figures are able to moderate and filter the content that people put on their Facebook Pages to prevent negative content appearing in the first place. People who help manage Pages can hide or delete individual comments. 
  • They can also proactively moderate comments and posts by visitors by turning on the profanity filter, or by blocking specific words or lists of words that they do not want to appear on their Page (a simple sketch of this kind of keyword filtering follows this list). Page admins can also remove or ban people from their Pages. 
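To illustrate the kind of keyword-based filtering described above, here is a hypothetical sketch in Python. It is not Facebook’s implementation; the block list, matching rules, and actions are invented for illustration.

```python
import re

# Example block list; Page admins supply their own words or lists.
BLOCKED_WORDS = {"spamword", "scamlink"}

def should_hide(comment_text: str, blocked_words: set) -> bool:
    """Hide a comment if it contains any blocked word.

    Matching is case-insensitive and on whole words, so a blocked
    word appearing inside a longer, innocent word is not flagged.
    """
    tokens = re.findall(r"[a-z']+", comment_text.lower())
    return any(token in blocked_words for token in tokens)

comments = [
    "Great speech at the hustings last night!",
    "Click this scamlink to double your money.",
]
for text in comments:
    action = "hide" if should_hide(text, BLOCKED_WORDS) else "show"
    print(f"{action}: {text}")
```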

We hope these steps help every candidate to reach their constituents, and get the most from our platforms. But our work doesn’t stop there.

The second area our team focuses on is promoting civic engagement. In addition to supporting and advising candidates, we also, of course, want to help promote voter participation in the election. 

For the past five years, we’ve used badges and reminders at the top of people’s News Feeds to encourage people to vote in elections around the world. The same will be true for this campaign. 

We’ll run reminders to register to vote, with a link to the gov.uk voter registration page, in the week running up to the voter registration deadline. 

On election day itself, we’ll also run a reminder to vote with a link to the Electoral Commission website so voters can find their polling station and any information they need. This will include a button to share that you voted. 

We hope that this combination of steps will help to ensure both candidates and voters engaging with the General Election on our platforms have the best possible experience.





Source link

How Facebook Has Prepared for the 2019 UK General Election



bdarwell

Today, leaders from our offices in London and Menlo Park, California spoke with members of the press about Facebook’s efforts to prepare for the upcoming General Election in the UK on December 12, 2019. The following is a transcript of their remarks.

Rebecca Stimson, Head of UK Public Policy, Facebook

We wanted to bring you all together, now that the UK General Election is underway, to set out the range of actions we are taking to help ensure this election is transparent and secure – to answer your questions and to point you to the various resources we have available.  

There has already been a lot of focus on the role of social media within the campaign and there is a lot of information for us to set out. 

We have therefore gathered colleagues from both the UK and our headquarters in Menlo Park, California, covering our politics, product, policy and safety teams to take you through the details of those efforts. 

I will just say a few opening remarks before we dive into the details

Helping protect elections is one of our top priorities and over the last two years we’ve made some significant changes – these broadly fall into three camps:

  • We’ve introduced greater transparency so that people know what they are seeing online and can scrutinize it more effectively; 
  • We have built stronger defenses to prevent things like foreign interference; 
  • And we have invested in both people and technology to ensure these new policies are effective.

So taking these in turn. 

Transparency

On the issue of transparency. We’ve tightened our rules to make political ads much more transparent, so people can see who is trying to influence their vote and what they are saying. 

We’ll discuss this in more detail shortly, but to summarize:  

  • Anybody who wants to run political ads must go through a verification process to prove who they are and that they are based here in the UK; 
  • Every political ad is labelled so you can see who has paid for them;
  • Anybody can click on any ad they see on Facebook and get more information on why they are seeing it, as well as block ads from particular advertisers;
  • And finally, we put all political ads in an Ad Library so that everyone can see what ads are running, the types of people who saw them and how much was spent – not just while the ads are live, but for seven years afterwards.

Taken together these changes mean that political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails. 

This is the first UK general election since we introduced these changes and we’re already seeing many journalists using these transparency tools to scrutinize the adverts which are running during this election – this is something we welcome and it’s exactly why we introduced these changes. 

Defense 

Turning to the stronger defenses we have put in place.

Nathaniel will shortly set out in more detail our work to prevent foreign interference and coordinated inauthentic behavior. But before he does I want to be clear right up front how seriously we take these issues and our commitment to doing everything we can to prevent election interference on our platforms. 

So just to highlight one of the things he will be talking about – we have, as part of this work, cracked down significantly on fake accounts. 

We now identify and shut down millions of fake accounts every day, many just seconds after they were created.

Investment

And lastly turning to investment in these issues.

We now have more than 35,000 people working on safety and security. We have been building and rolling out many of the new tools you will be hearing about today. And as Ella will set out later, we have introduced a number of safety measures including a dedicated reporting channel so that all candidates in the election can flag any abusive and threatening content directly to our teams.  

I’m also pleased to say that – now the election is underway – we have brought together an Elections Taskforce of people from our teams across the UK, EMEA and the US who are already working together every day to ensure election integrity on our platforms. 

The Elections Taskforce will be working on issues including threat intelligence, data science, engineering, operations, legal and others. It also includes representatives from WhatsApp and Instagram.

As we get closer to the election, these people will be brought together in physical spaces in their offices – what we call our Operations Centre. 

It’s important to remember that the Elections Taskforce is an additional layer of security on top of our ongoing monitoring for threats on the platform which operates 24/7. 

And while there will always be further improvements we can and will continue to make, and we can never say there won’t be challenges to respond to, we are confident that we’re better prepared than ever before.  

Political Ads

Before I wrap up this intro section of today’s call I also want to address two of the issues that have been hotly debated in the last few weeks – firstly whether political ads should be allowed on social media at all and secondly whether social media companies should decide what politicians can and can’t say as part of their campaigns. 

As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether. They account for just 0.5% of our revenue and they’re always destined to be controversial. 

But we believe it’s important that candidates and politicians can communicate with their constituents and would be constituents. 

Online political ads are also important for both new challengers and campaigning groups to get their message out. 

Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether. 

And there’s also a really difficult question – if you were to consider banning political ads, where do you draw the line – for example, would anyone advocate for blocking ads for important issues like climate change or women’s empowerment? 

Turning to the second issue – there is also a question about whether we should decide what politicians and political parties can and can’t say.  

We don’t believe a private company like Facebook should censor politicians. This is why we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

This doesn’t mean that politicians can say whatever they want on Facebook. They can’t spread misinformation about where, when or how to vote. They can’t incite violence. We won’t allow them to share content that has previously been debunked as part of our third-party fact-checking program. And we of course take down content that violates local laws. 

But in general we believe political speech should be heard and we don’t feel it is right for private companies like us to fact-check or judge the veracity of what politicians and political parties say. 

Facebook’s approach to this issue is in line with the way political speech and campaigns have been treated in the UK for decades. 

Here in the UK – an open democracy with a vibrant free press – political speech has always been heavily scrutinized but it is not regulated. 

The UK has decided that there shouldn’t be rules about what political parties and candidates can and can’t say in their leaflets, direct mails, emails, billboards, newspaper ads or on the side of campaign buses.  

And as we’ve seen when politicians and campaigns have made hotly contested claims in previous elections and referenda, it’s not been the role of the Advertising Standards Authority, the Electoral Commission or any other regulator to police political speech. 

In our country it’s always been up to the media and the voters to scrutinize what politicians say and make their own minds up. 

Nevertheless, we have long called for new rules for the era of digital campaigning. 

Questions around what constitutes a political ad, who can run them and when, what steps those who purchase political ads must take, how much they can spend on them and whether there should be any rules on what they can and can’t say – these are all matters that can only be properly decided by Parliament and regulators.  

Legislation should be updated to set standards for the whole industry – for example, should all online political advertising be recorded in a public archive similar to our Ad Library and should that extend to traditional platforms like billboards, leaflets and direct mail?

We believe UK electoral law needs to be brought into the 21st century to give clarity to everyone – political parties, candidates and the platforms they use to promote their campaigns.

In the meantime our focus has been to increase transparency so anyone, anywhere, can scrutinize every ad that’s run and by whom. 

I will now pass you to the team to talk you through our efforts in more detail.

  • Nathaniel Gleicher will discuss tackling fake accounts and disrupting coordinated inauthentic behavior;
  • Rob Leathern will take you through our UK political advertising measures and Ad Library;
  • Antonia Woodford will outline our work tackling misinformation and our fact-checker partnerships; 
  • And finally, Ella Fallows will fill you on what we’re doing around safety of candidates and how we’re encouraging people to participate in the election; 
We have not seen evidence of widespread foreign operations aimed at the UK

Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook 

My team leads all our efforts across our apps to find and stop what we call influence operations, coordinated efforts to manipulate or corrupt public debate for a strategic goal. 

We also conduct regular red team exercises, both internally and with external partners to put ourselves into the shoes of threat actors and use that approach to identify and prepare for new and emerging threats. We’ll talk about some of the products of these efforts today. 

Before I dive into some of the details, as you’re listening to Rob, Antonia, and I, we’re going to be talking about a number of different initiatives that Facebook is focused on, both to protect the UK general election and more broadly, to respond to integrity threats. I wanted to give you a brief framework for how to think about these. 

The key distinction that you’ll hear again and again is a distinction between content and behavior. At Facebook, we have policies that enable to take action when we see content that violates our Community Standards. 

In addition, we have the tools that we use to respond when we see an actor engaged in deceptive or violating behavior, and we keep these two efforts distinct. And so, as you listen to us, we’ll be talking about different initiatives we have in both dimensions. 

Under content for example, you’ll hear Antonia talk about misinformation, about voter suppression, about hate speech, and about other types of content that we can take action against if someone tries to share that content on our platform. 

Under the behavioral side, you’ll hear me and you’ll hear Rob also mention some of our work around influence operations, around spam, and around hacking. 

I’m going to focus in particular on the first of these, influence operations; but the key distinction that I want to make is when we take action to remove someone because of their deceptive behavior, we’re not looking at, we’re not reviewing, and we’re not considering the content that they’re sharing. 

What we’re focused on is the fact that they are deceiving or misleading users through their actions. For example, using networks of fake accounts to conceal who they are and conceal who’s behind the operation. So we’ll refer back to these, but I think it’s helpful to distinguish between the content side of our enforcement and the behavior side of our enforcement. 

And that’s particularly important because we’ve seen some threat actors who work to understand where the boundaries are for content and make sure for example that the type of content they share doesn’t quite cross the line. 

And when we see someone doing that, because we have behavioral enforcement tools as well, we’re still able to make sure we’re protecting authenticity and public debate on the platform. 

In each of these dimensions, there are four pillars to our work. You’ll hear us refer to each of these during the call as well, but let me just say that these four fit together, and no one of these by themselves would be enough, but all four of the together give us a layered approach to defending public debate and ensuring authenticity on the platform. 

We have expert investigative teams that conduct proactive investigations to find, expose, and disrupt sophisticated threat actors. As we do that, we learn from those investigations and we build automated systems that can disrupt any kind of violating behavior across the platform at scale. 

We also, as Rebecca mentioned, build transparency tools so that users, external researchers and the press can see who is using the platform and ensure that they’re engaging authentically. It also forces threat actors who are trying to conceal their identity to work harder to conceal and mislead. 

And then lastly, one of the things that’s extremely clear to us, particularly in the election space, is that this is a whole of society effort. And so, we work closely with partners in government, in civil society, and across industry to tackle these threats. 

And we’ve found that where we could be most effective is where we bring the tools we bring to the table, and then can work with government and work with other partners to respond and get ahead of these challenges as they emerge. 

One of the ways that we do this is through proactive investigations into the deceptive efforts engaged in by bad actors. Over the last year, our investigative teams, working together with our partners in civil society, law enforcement, and industry, have found and stopped more than 50 campaigns engaged in coordinated inauthentic behavior across the world. 

This includes an operation we removed in May that originated from Iran and targeted a number of countries, including the UK. As we announced at the time, we removed 51 Facebook accounts, 36 pages, seven groups, and three Instagram accounts involved in coordinated inauthentic behavior. 

The page admins and account owners typically posted content in English or Arabic, and most of the operation had no focus on a particular country, although there were some pages focused on the UK and the United States. 

Similarly, in March we announced that we removed a domestic UK network of about 137 Facebook and Instagram accounts, pages, and groups that were engaged in coordinated inauthentic behavior. 

The individuals behind these accounts presented themselves as far right and anti-far right activists, frequently changed page and group names, and operated fake accounts to engage in hate speech and spread divisive comments on both sides of the political debate in the UK. 

These are the types of investigations that we focus our core investigative team on. Whenever we see a sophisticated actor that’s trying to evade our automated systems, those teams, which are made up of experts from law enforcement, the intelligence community, and investigative journalism, can find and reveal that behavior. 

When we expose it, we announce it publicly and we remove it from the platform. Those expert investigators proactively hunt for evidence of these types of coordinated inauthentic behavior (CIB) operations around the world. 

This team has not seen evidence of widespread foreign operations aimed at the UK. But we are continuing to search for this and we will remove and publicly share details of networks of CIB that we identify on our platforms. 

As always with these takedowns, we remove these operations for the deceptive behavior they engaged in, not for the content they shared. This is that content/behavior distinction that I mentioned earlier. As we’ve improved our ability to disrupt these operations, we’ve also deepened our understanding of the types of threats out there and how best to counter them. 

Based on these learnings, we’ve recently updated our inauthentic behavior policy which is posted publicly as part of our Community Standards, to clarify how we enforce against the spectrum of deceptive practices we see in our platforms, whether foreign or domestic, state or non state. For each investigation, we isolate any new behaviors we see and then we work to automate detection of them at scale. This connects to that second pillar of our integrity work. 

And this slows down the bad guy and lets our investigators focus on improving our defenses against emerging threats. A good example of this work is our efforts to find and block fake accounts, which Rebecca mentioned. 

We know bad actors use fake accounts as a way to mask their identity and inflict harm on our platforms. That’s why we’ve built an automated system to find and remove these fake accounts. And each time we conduct one of these takedowns, or any other of our enforcement actions, we learn more about what fake accounts look like and how we can have automated systems that detect and block them. 

This is why we have these systems in place today that block millions of fake accounts every day, often within minutes of their creation. Because information operations often target multiple platforms as well as traditional media, I mentioned our collaborations with industry, civil society and government. 

In addition to that, we are building increased transparency on our platform, so that the public along with open source researchers and journalists can find and expose more bad behavior themselves. 

This effort on transparency is incredibly important. Rob will talk about this in detail, but I do want to add one point here, specifically around pages. Increasingly, we’re seeing people operate pages that clearly disclose the organization behind them as a way to make others think they are independent. 

We want to make sure Facebook is used to engage authentically, and that users understand who is speaking to them and what perspective they are representing. We noted last month that we would be announcing new approaches to address this, and today we’re introducing a policy to require more accountability for pages that are concealing their ownership in order to mislead people.

If we find a page is misleading people about its purpose by concealing its ownership, we will require it to go through our business verification process, which we recently announced, and show more information on the page itself about who is behind that page, including the organization’s legal name and verified city, phone number, or website in order for it to stay up. 

This type of increased transparency helps ensure that the platform continues to be authentic and the people who use the platform know who they’re talking to and understand what they’re seeing. 

We've put a lot of effort into making political advertising on Facebook more transparent

Rob Leathern, Director of Product, Business Integrity, Facebook 

In addition to making pages more transparent as Nathaniel has indicated, we’ve also put a lot of effort into making political advertising on Facebook more transparent than it is anywhere else. 

Every political and issue ad in the that runs on Facebook now goes into our Ad Library public archive that everyone can access, regardless of whether or not they have a Facebook account. 

We launched this in the UK in October 2018 and, since then, there’s been over 116,000 ads related to politics, elections, and social issues placed in the UK Ad Library. You can find all the ads that a candidate or organization is running, including how much they spent and who saw the ad. And we’re storing these ads in the Ad Library for seven years. 

Other media such as billboards, newspaper ads, direct mail, leaflets or targeted emails don’t today provide this level of transparency into the ad and who is seeing them. And as a result, we’ve seen a significant number of press stories regarding the election driven by the information in Facebook’s Ad Library. 

We’re proud of this resource and insight into ads running on Facebook and Instagram and that it is proving useful for media and researchers. And just last month, we made even more changes to both the Ad Library and Ad Library Reports. These include adding details in who the top advertising spenders are in each country in the UK, as well as providing an additional view by different date ranges which people have been asking for. 

We’re now also making it clear which Facebook platform an ad ran on. For example, if an ad ran on both Facebook and/or Instagram. 

For those of you unfamiliar with the Ad Library, which you can see at Facebook.com/adlibrary, I thought I’d run through it quickly. 

So this is the Ad Library. Here you see all the ads have been classified as relating to politics or issues. We keep them in the library for seven years. As I mentioned, you can find the Ad Library at Facebook.com/adlibrary. 

You can also access the Ad Library through a specific page. For example, for this Page, you can see not only the advertising information, but also the transparency about the Page itself, along with the spend data. 

Here is an example of the ads that this Page is running, both active as well as inactive. In addition, if an ad has been disapproved for violating any of our ad policies, you’re also able to see all of those ads as well. 

Here’s what it looks like if you click to see more detail about a specific ad. You’ll be able to see individual ad spend, impressions, and demographic information. 

And you’ll also be able to compare the individual ad spend to the overall macro spend by the Page, which is tracked in the section below. If you scroll back up, you’ll also be able to see the other information about the disclaimer that has been provided by the advertiser. 

We know we can’t protect elections alone and that everyone plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Advertising Policies and Community Standards. 

We also have the Ad Library API so journalists and academics can analyze ads about social issues, elections, or politics. The Ad Library application programming interface, or API, allows people to perform customized keyword searches of ads stored in the Ad Library. You can search data for all active and inactive issue, electoral or political ads. 

You can also access the Ad Library and the data therein through the specific page or through the Ad Library Report. Here is the Ad Library report, this allows you to see the spend by specific advertisers and you can download a full report of the data. 

Here we also allow you to see the spending by location and if you click in you can see the top spenders by region. So you can see, for example, in the various regions, who the top spenders in those areas are. 

Our goal is to provide an open API to news organizations, researchers, groups and people who can hold advertisers and us more accountable. 

We’ve definitely seen a lot of press, journalists, and researchers examining the data in the Ad Library and using it to generate these insights and we think that’s exactly a part of what will help hold both us and advertisers more accountable.

We hope these measures will build on existing transparency we have in place and help reporters, researchers and most importantly people on Facebook learn more about the Pages and information they’re engaging with. 

We are committed to fighting the spread of misinformation

Antonia Woodford, Product Manager, Misinformation, Facebook

We are committed to fighting the spread of misinformation and viral hoaxes on Facebook. It is a responsibility we take seriously.

To accomplish this, we follow a three-pronged approach which we call remove, reduce, and inform. First and foremost, when something violates the laws or our policies, we’ll remove it from the platform all together.

As Nathaniel touched on, removing fake accounts is a priority, of which the vast majority are detected and removed within minutes of registration and before a person can report them. This is a key element in eliminating the potential spread of misinformation. 

The reduce and inform part of the equation is how we reduce the spread of problematic content that doesn’t violate the law or our community standards, while still ensuring freedom of expression on the platform and this is where the majority of our misinformation work is focused. 

To reduce the spread of misinformation, we work with third party fact-checkers. 

Through a combination of reporting from people on our platform and machine learning, potentially false posts are sent to third party fact-checkers to review. These fact-checkers review this content, check the facts, and then rate its accuracy. They’re able to review links in news articles as well as photos, videos, or text posts on Facebook.

After content has been rated false, our algorithm heavily downranks this content in News Feed so it’s seen by fewer people and far less likely to go viral. Fact-checkers can fact-check any posts they choose based on the queue we send them. 

And lastly, as part of our work to inform people about the content they see on Facebook, we just launched a new design to better warn people when they see content that’s illegal, false, or partly false by our fact-checking partners.

People will now see a more prominent label on photos and videos that have been fact-checked as false or partly false. This is a grey screen that sits over a post and says ‘false information’ and points people to fact-checkers’ articles debunking the claims. 

These clearer labels are what people have told us they want, what they have told us they expect Facebook to do, and what experts tell us is the right tactic for combating misinformation.

We’re rolling this change out in the UK this week for any photos and videos that have been rated through our fact-checking partnership. Though just one part of our overall strategy, fact-checking is a fundamental part of our strategy to combat this information and I want to share a little bit more about the program.

Our fact-checking partners are all accredited by the International Fact-Checking Network which requires them to abide by a code of principles such as nonpartisanship and transparency sources.

We currently have over 50 partners in over 40 languages around the world. As Rebecca outlined earlier, we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

Here in the UK we work with Full Fact and FactCheckNI and I as part of our program. To recap we identify content that may be false using signals such as feedback from our users. This content is all submitted into a queue for our fact-checking partners to access. These fact-checkers then choose which content to review, check the facts, and rate the accuracy of the content.

These fact-checkers are independent organizations, so it is at their discretion what they choose to investigate. They can also fact-check whatever content they want outside of the posts we send their way.

If a fact-checker rates a story as false, it will appear lower in News Feed with the false information screen I mentioned earlier. This significantly reduces the number of people who see it.

Other posts that Full Fact and FactCheckNI choose to fact-check outside of our system will not be impacted on Facebook. 

And finally, on Tuesday we announced a partnership with the International Fact-Checking Network to create the Fact-Checking Innovation Initiative. This will fund innovation projects, new formats, and technologies to help benefit the broader fact-checking ecosystem. 

We are investing $500,000 into this new initiative, where organizations can submit applications for projects to improve fact-checkers’ scale and efficiency, increase the reach of fact-checks to empower more people with reliable information, build new tools to help combat misinformation, and encourage newsrooms to collaborate in fact-checking efforts.

Anyone from the UK can be a part of this new initiative. 

Our 35,000-strong global safety and security team oversees content and behavior across the platform

Ella Fallows, Politics and Government Outreach Manager UK, Facebook 

Our team’s role involves two main tasks: working with parties, MPs and candidates to ensure they have a good experience and get the most from our platforms; and looking at how we can best use our platforms to promote participation in elections.

I’d like to start with how MPs and candidates use our platforms. 

There is, rightly, a focus in the UK about the current tone of political debate. Let me be clear, hate speech and threats of violence have no place on our platforms and we’re investing heavily to tackle them. 

Additionally, for this campaign we have this week written to political parties and candidates setting out the range of safety measures we have in place and also to remind them of the terms and conditions and the Community Standards which govern our platforms. 

As you may be aware, every piece of content on Facebook and Instagram has a report button, and when content is reported to us which violates our Community Standards – what is and isn’t allowed on Facebook – it is removed. 

Since March of this year, MPs have also had access to a dedicated reporting channel to flag any abusive and threatening content directly to our teams. Now that the General Election is underway we’re extending that support to all prospective candidates, making our team available to anyone standing to allow them to quickly report any concerns across our platforms and have them investigated. 

This is particularly pertinent to Tuesday’s news from the Government calling for a one stop shop for candidates. We have already set up our own one stop shop so that there is a single point of contact for candidates for issues across Facebook and Instagram.

But our team is not working alone; it’s backed up by our 35,000-strong global safety and security team that oversees content and behavior across the platform every day. 

And our technology is also helping us to automatically detect more of this harmful content. For example, the proportion of hate speech we have removed before it’s reported to us has increased significantly over the last two years, and we will be releasing new figures on this later this month.

We also have a Government, Politics & Advocacy Portal which is a home for everything a candidate will need during the campaign, including ‘how to’ guides on subjects such as political advertising, campaigning on Facebook and troubleshooting guides for technical issues.

We’re working with the political parties to ensure candidates are aware of both the reporting channel to reach my team and the Government, Politics & Advocacy Portal.

We’re holding a series of sessions for candidates on safety and outlining the help available to address harassment on our platforms. We’ve already held dedicated sessions for female candidates in partnership with women’s networks within the parties to provide extra guidance. We want to ensure we’re doing everything possible to help them connect with their constituents, free from harassment.

Finally, we’re working with the Government to distribute to every candidate via returning officers in the General Election the safety guides we have put together, to ensure we reach everyone not just those attending our outreach sessions. Our safety guides include information on a range of tools we have developed:

  • For example, public figures are able to moderate and filter the content that people put on their Facebook Pages to prevent negative content appearing in the first place. People who help manage Pages can hide or delete individual comments. 
  • They can also proactively moderate comments and posts by visitors by turning on the profanity filter, or blocking specific words or lists of words that they do not want to appear on their Page. Page admins can also remove or ban people from their Pages. 

We hope these steps help every candidate to reach their constituents, and get the most from our platforms. But our work doesn’t stop there.

The second area our team focuses on is promoting civic engagement. In addition to supporting and advising candidates, we also, of course, want to help promote voter participation in the election. 

For the past five years, we’ve used badges and reminders at the top of people’s News Feeds to encourage people to vote in elections around the world. The same will be true for this campaign. 

We’ll run reminders to register to vote, with a link to the gov.uk voter registration page, in the week running up to the voter registration deadline. 

On election day itself, we’ll also run a reminder to vote with a link to the Electoral Commission website so voters can find their polling station and any information they need. This will include a button to share that you voted. 

We hope that this combination of steps will help to ensure both candidates and voters engaging with the General Election on our platforms have the best possible experience.





Source link

How Facebook Has Prepared for the 2019 UK General Election – FACEBOOK



bdarwell

Today, leaders from our offices in London and Menlo Park, California spoke with members of the press about Facebook’s efforts to prepare for the upcoming General Election in the UK on December 12, 2019. The following is a transcript of their remarks.

Rebecca Stimson, Head of UK Public Policy, Facebook

We wanted to bring you all together, now that the UK General Election is underway, to set out the range of actions we are taking to help ensure this election is transparent and secure – to answer your questions and to point you to the various resources we have available.  

There has already been a lot of focus on the role of social media within the campaign and there is a lot of information for us to set out. 

We have therefore gathered colleagues from both the UK and our headquarters in Menlo Park, California, covering our politics, product, policy and safety teams to take you through the details of those efforts. 

I will just make a few opening remarks before we dive into the details.

Helping protect elections is one of our top priorities and over the last two years we’ve made some significant changes – these broadly fall into three camps:

  • We’ve introduced greater transparency so that people know what they are seeing online and can scrutinize it more effectively; 
  • We have built stronger defenses to prevent things like foreign interference; 
  • And we have invested in both people and technology to ensure these new policies are effective.

So taking these in turn. 

Transparency

On the issue of transparency. We’ve tightened our rules to make political ads much more transparent, so people can see who is trying to influence their vote and what they are saying. 

We’ll discuss this in more detail shortly, but to summarize:  

  • Anybody who wants to run political ads must go through a verification process to prove who they are and that they are based here in the UK; 
  • Every political ad is labelled so you can see who has paid for it;
  • Anybody can click on any ad they see on Facebook and get more information on why they are seeing it, as well as block ads from particular advertisers;
  • And finally, we put all political ads in an Ad Library so that everyone can see what ads are running, the types of people who saw them and how much was spent – not just while the ads are live, but for seven years afterwards.

Taken together these changes mean that political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails. 

This is the first UK general election since we introduced these changes and we’re already seeing many journalists using these transparency tools to scrutinize the adverts which are running during this election – this is something we welcome and it’s exactly why we introduced these changes. 

Defense 

Turning to the stronger defenses we have put in place.

Nathaniel will shortly set out in more detail our work to prevent foreign interference and coordinated inauthentic behavior. But before he does I want to be clear right up front how seriously we take these issues and our commitment to doing everything we can to prevent election interference on our platforms. 

So just to highlight one of the things he will be talking about – we have, as part of this work, cracked down significantly on fake accounts. 

We now identify and shut down millions of fake accounts every day, many just seconds after they are created.

Investment

And lastly turning to investment in these issues.

We now have more than 35,000 people working on safety and security. We have been building and rolling out many of the new tools you will be hearing about today. And as Ella will set out later, we have introduced a number of safety measures including a dedicated reporting channel so that all candidates in the election can flag any abusive and threatening content directly to our teams.  

I’m also pleased to say that – now that the election is underway – we have brought together an Elections Taskforce of people from our teams across the UK, EMEA and the US who are already working together every day to ensure election integrity on our platforms. 

The Elections Taskforce will be working across functions including threat intelligence, data science, engineering, operations and legal. It also includes representatives from WhatsApp and Instagram.

As we get closer to the election, these people will be brought together in physical spaces in their offices – what we call our Operations Centre. 

It’s important to remember that the Elections Taskforce is an additional layer of security on top of our ongoing monitoring for threats on the platform which operates 24/7. 

And while there will always be further improvements we can and will continue to make, and we can never say there won’t be challenges to respond to, we are confident that we’re better prepared than ever before.  

Political Ads

Before I wrap up this intro section of today’s call I also want to address two of the issues that have been hotly debated in the last few weeks – firstly whether political ads should be allowed on social media at all and secondly whether social media companies should decide what politicians can and can’t say as part of their campaigns. 

As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether. They account for just 0.5% of our revenue and they’re always destined to be controversial. 

But we believe it’s important that candidates and politicians can communicate with their constituents and would-be constituents. 

Online political ads are also important for both new challengers and campaigning groups to get their message out. 

Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether. 

And there’s also a really difficult question – if you were to consider banning political ads, where do you draw the line – for example, would anyone advocate for blocking ads for important issues like climate change or women’s empowerment? 

Turning to the second issue – there is also a question about whether we should decide what politicians and political parties can and can’t say.  

We don’t believe a private company like Facebook should censor politicians. This is why we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

This doesn’t mean that politicians can say whatever they want on Facebook. They can’t spread misinformation about where, when or how to vote. They can’t incite violence. We won’t allow them to share content that has previously been debunked as part of our third-party fact-checking program. And we of course take down content that violates local laws. 

But in general we believe political speech should be heard and we don’t feel it is right for private companies like us to fact-check or judge the veracity of what politicians and political parties say. 

Facebook’s approach to this issue is in line with the way political speech and campaigns have been treated in the UK for decades. 

Here in the UK – an open democracy with a vibrant free press – political speech has always been heavily scrutinized but it is not regulated. 

The UK has decided that there shouldn’t be rules about what political parties and candidates can and can’t say in their leaflets, direct mails, emails, billboards, newspaper ads or on the side of campaign buses.  

And as we’ve seen when politicians and campaigns have made hotly contested claims in previous elections and referenda, it’s not been the role of the Advertising Standards Authority, the Electoral Commission or any other regulator to police political speech. 

In our country it’s always been up to the media and the voters to scrutinize what politicians say and make their own minds up. 

Nevertheless, we have long called for new rules for the era of digital campaigning. 

Questions around what constitutes a political ad, who can run them and when, what steps those who purchase political ads must take, how much they can spend on them and whether there should be any rules on what they can and can’t say – these are all matters that can only be properly decided by Parliament and regulators.  

Legislation should be updated to set standards for the whole industry – for example, should all online political advertising be recorded in a public archive similar to our Ad Library and should that extend to traditional platforms like billboards, leaflets and direct mail?

We believe UK electoral law needs to be brought into the 21st century to give clarity to everyone – political parties, candidates and the platforms they use to promote their campaigns.

In the meantime our focus has been to increase transparency so anyone, anywhere, can scrutinize every ad that’s run and by whom. 

I will now pass you to the team to talk you through our efforts in more detail.

  • Nathaniel Gleicher will discuss tackling fake accounts and disrupting coordinated inauthentic behavior;
  • Rob Leathern will take you through our UK political advertising measures and Ad Library;
  • Antonia Woodford will outline our work tackling misinformation and our fact-checker partnerships; 
  • And finally, Ella Fallows will fill you in on what we’re doing around the safety of candidates and how we’re encouraging people to participate in the election.

We have not seen evidence of widespread foreign operations aimed at the UK

Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook 

My team leads all our efforts across our apps to find and stop what we call influence operations: coordinated efforts to manipulate or corrupt public debate for a strategic goal. 

We also conduct regular red team exercises, both internally and with external partners to put ourselves into the shoes of threat actors and use that approach to identify and prepare for new and emerging threats. We’ll talk about some of the products of these efforts today. 

Before I dive into some of the details, as you’re listening to Rob, Antonia, and me, we’re going to be talking about a number of different initiatives that Facebook is focused on, both to protect the UK general election and, more broadly, to respond to integrity threats. I wanted to give you a brief framework for how to think about these. 

The key distinction that you’ll hear again and again is a distinction between content and behavior. At Facebook, we have policies that enable us to take action when we see content that violates our Community Standards. 

In addition, we have the tools that we use to respond when we see an actor engaged in deceptive or violating behavior, and we keep these two efforts distinct. And so, as you listen to us, we’ll be talking about different initiatives we have in both dimensions. 

Under content for example, you’ll hear Antonia talk about misinformation, about voter suppression, about hate speech, and about other types of content that we can take action against if someone tries to share that content on our platform. 

Under the behavioral side, you’ll hear me and you’ll hear Rob also mention some of our work around influence operations, around spam, and around hacking. 

I’m going to focus in particular on the first of these, influence operations; but the key distinction that I want to make is when we take action to remove someone because of their deceptive behavior, we’re not looking at, we’re not reviewing, and we’re not considering the content that they’re sharing. 

What we’re focused on is the fact that they are deceiving or misleading users through their actions. For example, using networks of fake accounts to conceal who they are and conceal who’s behind the operation. So we’ll refer back to these, but I think it’s helpful to distinguish between the content side of our enforcement and the behavior side of our enforcement. 

And that’s particularly important because we’ve seen some threat actors who work to understand where the boundaries are for content and make sure for example that the type of content they share doesn’t quite cross the line. 

And when we see someone doing that, because we have behavioral enforcement tools as well, we’re still able to make sure we’re protecting authenticity and public debate on the platform. 

In each of these dimensions, there are four pillars to our work. You’ll hear us refer to each of these during the call as well, but let me just say that these four fit together; no one of them by itself would be enough, but all four of them together give us a layered approach to defending public debate and ensuring authenticity on the platform. 

We have expert investigative teams that conduct proactive investigations to find, expose, and disrupt sophisticated threat actors. As we do that, we learn from those investigations and we build automated systems that can disrupt any kind of violating behavior across the platform at scale. 

We also, as Rebecca mentioned, build transparency tools so that users, external researchers and the press can see who is using the platform and ensure that they’re engaging authentically. It also forces threat actors who are trying to conceal their identity to work harder to conceal and mislead. 

And then lastly, one of the things that’s extremely clear to us, particularly in the election space, is that this is a whole of society effort. And so, we work closely with partners in government, in civil society, and across industry to tackle these threats. 

And we’ve found that we’re most effective where we bring our tools to the table and then work with government and other partners to respond to these challenges and get ahead of them as they emerge. 

One of the ways that we do this is through proactive investigations into the deceptive efforts engaged in by bad actors. Over the last year, our investigative teams, working together with our partners in civil society, law enforcement, and industry, have found and stopped more than 50 campaigns engaged in coordinated inauthentic behavior across the world. 

This includes an operation we removed in May that originated from Iran and targeted a number of countries, including the UK. As we announced at the time, we removed 51 Facebook accounts, 36 pages, seven groups, and three Instagram accounts involved in coordinated inauthentic behavior. 

The page admins and account owners typically posted content in English or Arabic, and most of the operation had no focus on a particular country, although there were some pages focused on the UK and the United States. 

Similarly, in March we announced that we removed a domestic UK network of about 137 Facebook and Instagram accounts, pages, and groups that were engaged in coordinated inauthentic behavior. 

The individuals behind these accounts presented themselves as far right and anti-far right activists, frequently changed page and group names, and operated fake accounts to engage in hate speech and spread divisive comments on both sides of the political debate in the UK. 

These are the types of investigations that we focus our core investigative team on. Whenever we see a sophisticated actor that’s trying to evade our automated systems, those teams, which are made up of experts from law enforcement, the intelligence community, and investigative journalism, can find and reveal that behavior. 

When we expose it, we announce it publicly and we remove it from the platform. Those expert investigators proactively hunt for evidence of these types of coordinated inauthentic behavior (CIB) operations around the world. 

This team has not seen evidence of widespread foreign operations aimed at the UK. But we are continuing to search for this and we will remove and publicly share details of networks of CIB that we identify on our platforms. 

As always with these takedowns, we remove these operations for the deceptive behavior they engaged in, not for the content they shared. This is that content/behavior distinction that I mentioned earlier. As we’ve improved our ability to disrupt these operations, we’ve also deepened our understanding of the types of threats out there and how best to counter them. 

Based on these learnings, we’ve recently updated our inauthentic behavior policy, which is posted publicly as part of our Community Standards, to clarify how we enforce against the spectrum of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state. For each investigation, we isolate any new behaviors we see and then we work to automate detection of them at scale. This connects to that second pillar of our integrity work. 

And this slows down the bad guys and lets our investigators focus on improving our defenses against emerging threats. A good example of this work is our efforts to find and block fake accounts, which Rebecca mentioned. 

We know bad actors use fake accounts as a way to mask their identity and inflict harm on our platforms. That’s why we’ve built an automated system to find and remove these fake accounts. And each time we conduct one of these takedowns, or any other of our enforcement actions, we learn more about what fake accounts look like and how we can have automated systems that detect and block them. 
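
To make this concrete, here is a purely illustrative Python sketch of signal-based blocking at account creation; the signals, weights, and threshold are invented for illustration and are not Facebook’s actual detection logic.

```python
# Purely illustrative sketch of automated fake-account blocking at signup.
# The signals, weights, and threshold are invented for illustration and
# do not reflect Facebook's actual systems.

SIGNAL_WEIGHTS = {
    "disposable_email": 0.4,            # signup email from a throwaway provider
    "known_bad_ip_range": 0.3,          # IP previously tied to abusive signups
    "burst_signups_from_device": 0.5,   # many accounts created from one device
    "reused_profile_photo": 0.3,        # photo already seen on other accounts
}

BLOCK_THRESHOLD = 0.7

def score_signup(signals: dict) -> float:
    """Sum the weights of the risk signals that fired for this signup."""
    return sum(weight for name, weight in SIGNAL_WEIGHTS.items() if signals.get(name))

def should_block(signals: dict) -> bool:
    """Block the account automatically if the combined score crosses the threshold."""
    return score_signup(signals) >= BLOCK_THRESHOLD

# Example: a throwaway email from a flagged IP range scores 0.7 and is blocked.
print(should_block({"disposable_email": True, "known_bad_ip_range": True}))  # True
```

Real systems would combine far more signals with machine-learned models, but the shape – score a signup as it happens, block above a threshold – matches the behavior described here.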

This is why we have these systems in place today that block millions of fake accounts every day, often within minutes of their creation. Because information operations often target multiple platforms as well as traditional media, I mentioned our collaborations with industry, civil society and government. 

In addition to that, we are building increased transparency on our platform, so that the public along with open source researchers and journalists can find and expose more bad behavior themselves. 

This effort on transparency is incredibly important. Rob will talk about this in detail, but I do want to add one point here, specifically around pages. Increasingly, we’re seeing people operate pages that conceal the organization behind them as a way to make others think they are independent. 

We want to make sure Facebook is used to engage authentically, and that users understand who is speaking to them and what perspective they are representing. We noted last month that we would be announcing new approaches to address this, and today we’re introducing a policy to require more accountability for pages that are concealing their ownership in order to mislead people.

If we find a page is misleading people about its purpose by concealing its ownership, we will require it to go through our business verification process, which we recently announced, and show more information on the page itself about who is behind that page, including the organization’s legal name and verified city, phone number, or website in order for it to stay up. 

This type of increased transparency helps ensure that the platform continues to be authentic and the people who use the platform know who they’re talking to and understand what they’re seeing. 

We’ve put a lot of effort into making political advertising on Facebook more transparent

Rob Leathern, Director of Product, Business Integrity, Facebook 

In addition to making pages more transparent as Nathaniel has indicated, we’ve also put a lot of effort into making political advertising on Facebook more transparent than it is anywhere else. 

Every political and issue ad that runs on Facebook now goes into our Ad Library, a public archive that everyone can access, regardless of whether or not they have a Facebook account. 

We launched this in the UK in October 2018 and, since then, over 116,000 ads related to politics, elections, and social issues have been placed in the UK Ad Library. You can find all the ads that a candidate or organization is running, including how much they spent and who saw the ad. And we’re storing these ads in the Ad Library for seven years. 

Other media such as billboards, newspaper ads, direct mail, leaflets or targeted emails don’t today provide this level of transparency into the ads and who is seeing them. As a result, we’ve seen a significant number of press stories regarding the election driven by the information in Facebook’s Ad Library. 

We’re proud of this resource and the insight it offers into ads running on Facebook and Instagram, and that it is proving useful for media and researchers. And just last month, we made even more changes to both the Ad Library and Ad Library Reports. These include adding details on who the top advertising spenders are in each country in the UK, as well as providing an additional view by different date ranges, which people have been asking for. 

We’re now also making it clear which platform an ad ran on – for example, whether it ran on Facebook, Instagram, or both. 

For those of you unfamiliar with the Ad Library, which you can see at Facebook.com/adlibrary, I thought I’d run through it quickly. 

So this is the Ad Library. Here you see all the ads that have been classified as relating to politics or issues. We keep them in the library for seven years. As I mentioned, you can find the Ad Library at Facebook.com/adlibrary. 

You can also access the Ad Library through a specific Page. For example, for this Page, you can see not only the advertising information, but also transparency information about the Page itself, along with the spend data. 

Here is an example of the ads that this Page is running, both active and inactive. In addition, if an ad has been disapproved for violating any of our ad policies, you’re able to see those ads as well. 

Here’s what it looks like if you click to see more detail about a specific ad. You’ll be able to see individual ad spend, impressions, and demographic information. 

And you’ll also be able to compare the individual ad spend to the overall spend by the Page, which is tracked in the section below. If you scroll back up, you’ll also be able to see other information, such as the disclaimer that has been provided by the advertiser. 

We know we can’t protect elections alone and that everyone plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Advertising Policies and Community Standards. 

We also have the Ad Library API so journalists and academics can analyze ads about social issues, elections, or politics. The Ad Library application programming interface, or API, allows people to perform customized keyword searches of ads stored in the Ad Library. You can search data for all active and inactive issue, electoral or political ads. 
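
As an illustration of the kind of keyword search the API supports, here is a minimal Python sketch against the Graph API’s ads_archive endpoint; the access token is a placeholder, and the exact field names and API version should be verified against the current Ad Library API documentation.

```python
import requests

# Minimal sketch: keyword search of UK political/issue ads via the Ad
# Library API (Graph API ads_archive endpoint). The token is a placeholder;
# verify field names and API version against the current documentation.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

params = {
    "search_terms": "general election",
    "ad_reached_countries": "['GB']",
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_active_status": "ALL",  # both active and inactive ads
    "fields": "page_name,funding_entity,spend,impressions,ad_delivery_start_time",
    "limit": 50,
    "access_token": ACCESS_TOKEN,
}

resp = requests.get("https://graph.facebook.com/v5.0/ads_archive", params=params)
resp.raise_for_status()

for ad in resp.json().get("data", []):
    spend = ad.get("spend", {})  # reported as a lower/upper bound range
    print(ad.get("page_name"), ad.get("funding_entity"),
          "spend:", spend.get("lower_bound"), "-", spend.get("upper_bound"))
```

Paging through larger result sets works like any other Graph API query, via the cursors returned in the response’s paging field.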

You can also access the Ad Library and the data therein through a specific Page or through the Ad Library Report. Here is the Ad Library Report, which allows you to see the spend by specific advertisers; you can also download a full report of the data. 

Here we also allow you to see the spending by location, and if you click in you can see the top spenders by region. 

Our goal is to provide an open API to news organizations, researchers, groups and people who can hold advertisers and us more accountable. 

We’ve definitely seen a lot of press, journalists, and researchers examining the data in the Ad Library and using it to generate these insights, and we think that’s exactly the kind of scrutiny that will help hold both us and advertisers more accountable.

We hope these measures will build on existing transparency we have in place and help reporters, researchers and most importantly people on Facebook learn more about the Pages and information they’re engaging with. 

We are committed to fighting the spread of misinformation

Antonia Woodford, Product Manager, Misinformation, Facebook

We are committed to fighting the spread of misinformation and viral hoaxes on Facebook. It is a responsibility we take seriously.

To accomplish this, we follow a three-pronged approach which we call remove, reduce, and inform. First and foremost, when something violates the law or our policies, we’ll remove it from the platform altogether.

As Nathaniel touched on, removing fake accounts is a priority; the vast majority are detected and removed within minutes of registration, before a person can report them. This is a key element in eliminating the potential spread of misinformation. 

The reduce and inform parts of the equation are how we reduce the spread of problematic content that doesn’t violate the law or our Community Standards while still ensuring freedom of expression on the platform, and this is where the majority of our misinformation work is focused. 

To reduce the spread of misinformation, we work with third party fact-checkers. 

Through a combination of reporting from people on our platform and machine learning, potentially false posts are sent to third party fact-checkers to review. These fact-checkers review this content, check the facts, and then rate its accuracy. They’re able to review links in news articles as well as photos, videos, or text posts on Facebook.

After content has been rated false, our algorithm heavily downranks this content in News Feed so it’s seen by fewer people and far less likely to go viral. Fact-checkers can fact-check any posts they choose based on the queue we send them. 
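
As a purely illustrative sketch of that flow – the report threshold, model score, rating labels, and downranking multiplier below are all invented, not Facebook’s actual values:

```python
from dataclasses import dataclass

# Purely illustrative sketch of the reduce/inform flow described above.
# Thresholds, labels, and the downranking multiplier are invented.

@dataclass
class Post:
    text: str
    rank_score: float        # score from normal News Feed ranking
    rating: str = "unrated"  # set later by a third-party fact-checker
    label: str = ""          # warning label shown with the post

fact_check_queue = []

def enqueue_if_suspect(post, user_reports, model_score):
    """Queue a post for fact-checkers based on reports and a (hypothetical) ML score."""
    if user_reports >= 3 or model_score > 0.8:
        fact_check_queue.append(post)

def apply_rating(post, rating):
    """Downrank rated content and attach an 'inform' label pointing to the debunk."""
    post.rating = rating
    if rating in ("false", "partly false"):
        post.rank_score *= 0.05  # heavy downranking: seen by far fewer people
        post.label = "False information – see fact-checkers' articles"

post = Post("Miracle cure announced!", rank_score=1.0)
enqueue_if_suspect(post, user_reports=5, model_score=0.4)  # queued via user reports
apply_rating(fact_check_queue.pop(0), "false")
print(post.rank_score, "|", post.label)  # 0.05 | False information – ...
```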

And lastly, as part of our work to inform people about the content they see on Facebook, we just launched a new design to better warn people when they see content that has been rated false or partly false by our fact-checking partners.

People will now see a more prominent label on photos and videos that have been fact-checked as false or partly false. This is a grey screen that sits over a post and says ‘false information’ and points people to fact-checkers’ articles debunking the claims. 

These clearer labels are what people have told us they want, what they have told us they expect Facebook to do, and what experts tell us is the right tactic for combating misinformation.

We’re rolling this change out in the UK this week for any photos and videos that have been rated through our fact-checking partnership. Though just one part of our overall strategy, fact-checking is fundamental to combating misinformation, and I want to share a little bit more about the program.

Our fact-checking partners are all accredited by the International Fact-Checking Network, which requires them to abide by a code of principles such as nonpartisanship and transparency of sources.

We currently have over 50 partners in over 40 languages around the world. As Rebecca outlined earlier, we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

Here in the UK we work with Full Fact and FactCheckNI as part of our program. To recap: we identify content that may be false using signals such as feedback from our users. This content is all submitted into a queue for our fact-checking partners to access. These fact-checkers then choose which content to review, check the facts, and rate the accuracy of the content.

These fact-checkers are independent organizations, so it is at their discretion what they choose to investigate. They can also fact-check whatever content they want outside of the posts we send their way.

If a fact-checker rates a story as false, it will appear lower in News Feed with the false information screen I mentioned earlier. This significantly reduces the number of people who see it.

Other posts that Full Fact and FactCheckNI choose to fact-check outside of our system will not be impacted on Facebook. 

And finally, on Tuesday we announced a partnership with the International Fact-Checking Network to create the Fact-Checking Innovation Initiative. This will fund innovation projects, new formats, and technologies to help benefit the broader fact-checking ecosystem. 

We are investing $500,000 into this new initiative, where organizations can submit applications for projects to improve fact-checkers’ scale and efficiency, increase the reach of fact-checks to empower more people with reliable information, build new tools to help combat misinformation, and encourage newsrooms to collaborate in fact-checking efforts.

Anyone from the UK can be a part of this new initiative. 

Our 35,000-strong global safety and security team oversees content and behavior across the platform

Ella Fallows, Politics and Government Outreach Manager UK, Facebook 

Our team’s role involves two main tasks: working with parties, MPs and candidates to ensure they have a good experience and get the most from our platforms; and looking at how we can best use our platforms to promote participation in elections.

I’d like to start with how MPs and candidates use our platforms. 

There is, rightly, a focus in the UK on the current tone of political debate. Let me be clear: hate speech and threats of violence have no place on our platforms and we’re investing heavily to tackle them. 

Additionally, for this campaign we have this week written to political parties and candidates setting out the range of safety measures we have in place and reminding them of the terms and conditions and the Community Standards that govern our platforms. 

As you may be aware, every piece of content on Facebook and Instagram has a report button, and when content is reported to us which violates our Community Standards – what is and isn’t allowed on Facebook – it is removed. 

Since March of this year, MPs have also had access to a dedicated reporting channel to flag any abusive and threatening content directly to our teams. Now that the General Election is underway we’re extending that support to all prospective candidates, making our team available to anyone standing to allow them to quickly report any concerns across our platforms and have them investigated. 

This is particularly pertinent to Tuesday’s news from the Government calling for a one-stop shop for candidates. We have already set up our own one-stop shop so that there is a single point of contact for candidates for issues across Facebook and Instagram.

But our team is not working alone; it’s backed up by our 35,000-strong global safety and security team that oversees content and behavior across the platform every day. 

And our technology is also helping us to automatically detect more of this harmful content. For example, the proportion of hate speech we have removed before it’s reported to us has increased significantly over the last two years, and we will be releasing new figures on this later this month.

We also have a Government, Politics & Advocacy Portal, which is a home for everything a candidate will need during the campaign, including ‘how to’ guides on subjects such as political advertising and campaigning on Facebook, as well as troubleshooting guides for technical issues.

We’re working with the political parties to ensure candidates are aware of both the reporting channel to reach my team and the Government, Politics & Advocacy Portal.

We’re holding a series of sessions for candidates on safety and outlining the help available to address harassment on our platforms. We’ve already held dedicated sessions for female candidates in partnership with women’s networks within the parties to provide extra guidance. We want to ensure we’re doing everything possible to help them connect with their constituents, free from harassment.

Finally, we’re working with the Government to distribute the safety guides we have put together to every candidate via returning officers in the General Election, to ensure we reach everyone, not just those attending our outreach sessions. Our safety guides include information on a range of tools we have developed:

  • For example, public figures are able to moderate and filter the content that people put on their Facebook Pages to prevent negative content appearing in the first place. People who help manage Pages can hide or delete individual comments. 
  • They can also proactively moderate comments and posts by visitors by turning on the profanity filter, or by blocking specific words or lists of words that they do not want to appear on their Page – see the sketch after this list. Page admins can also remove or ban people from their Pages. 
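
A minimal sketch of the kind of keyword filtering these tools provide – the blocklist and the hide logic below are illustrative only, not Facebook’s implementation:

```python
import re

# Minimal sketch of keyword-based comment moderation: hide any visitor
# comment containing a word on the Page admin's blocklist. Illustrative
# only; the blocklist and hide logic are not Facebook's implementation.

BLOCKED_WORDS = {"exampleslur", "spamword"}  # words the admin chose to block

def should_hide(comment):
    """True if the comment contains any blocked word (case-insensitive)."""
    words = re.findall(r"[a-z']+", comment.lower())
    return any(word in BLOCKED_WORDS for word in words)

comments = ["Great hustings event tonight!", "Typical spamword nonsense"]
visible = [c for c in comments if not should_hide(c)]
print(visible)  # ['Great hustings event tonight!']
```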

We hope these steps help every candidate to reach their constituents, and get the most from our platforms. But our work doesn’t stop there.

The second area our team focuses on is promoting civic engagement. In addition to supporting and advising candidates, we also, of course, want to help promote voter participation in the election. 

For the past five years, we’ve used badges and reminders at the top of people’s News Feeds to encourage people to vote in elections around the world. The same will be true for this campaign. 

We’ll run reminders to register to vote, with a link to the gov.uk voter registration page, in the week running up to the voter registration deadline. 

On election day itself, we’ll also run a reminder to vote with a link to the Electoral Commission website so voters can find their polling station and any information they need. This will include a button to share that you voted. 

We hope that this combination of steps will help to ensure both candidates and voters engaging with the General Election on our platforms have the best possible experience.






Social Shorts: Twitter bans political ads, Facebook’s Preventive Health tool, new CMO at Tommy Hilfiger



Amy Gesenhues

This collection of social media marketing and new hire announcements is a compilation of the past week’s briefs from our daily Marketing Land newsletter. Click here to subscribe and get more news like this delivered to your inbox every morning.

Twitter reports an uptick in data requests. Between January and June of 2019, Twitter received 7,300 demands for user data, with the largest share of requests coming from U.S. government agencies (2,120 demands for 4,150 accounts), reports TechCrunch. Requests for data were up 6% compared to the same time period last year, according to the company’s latest transparency report. “Twitter said it removed 124,339 accounts for impersonation, and 115,861 accounts for promoting terrorism, a decline of 30% on the previous reporting period,” writes TechCrunch. Twitter has also added impersonation data and insight to the report, offering up numbers on actions taken against accounts posing as another person, brand or organization during the first half of the year.

No more political ads on Twitter. If you missed Wednesday’s news, Twitter CEO Jack Dorsey announced on Twitter, minutes before Facebook’s earnings call, that the company will stop allowing political ads on the platform. “While internet advertising is incredibly powerful and very effective for commercial advertisers, that power brings significant risks to politics, where it can be used to influence votes to affect the lives of millions,” tweeted Dorsey. The CEO said the company plans to share more on the final policy by November 15 and begin enforcing its ban on political ads November 22. The timing of Dorsey’s tweet — right before Facebook’s earnings call — was arguably an indirect dig at Facebook in light of its stance not to fact-check ads from politicians. Facebook argues its policy is about free speech, and that political ads will account for only 0.5% of its total revenue next year. 

Facebook files lawsuit against OnlineNIC. Facebook is suing OnlineNIC and its privacy/proxy service ID Shield for registering fraudulent domain names, such as www-facebook-login.com and facebook-mails.com, that were created to look as if they were connected to Facebook. “By mentioning our apps and services in the domain names, OnlineNIC and ID Shield intended to make them appear legitimate and confuse people,” writes Facebook Director of Platform Enforcement and Litigation Jessica Romero. “We don’t want people to be deceived, so we track and take action against suspicious and misleading domains, including those registered using privacy/proxy services that allow owners to hide their identity.” Facebook said there are millions of such fraudulent domains, and that it actively reports such abuse to domain name registrars. But, in some instances, domain name registrars and privacy/proxy services fail to take down the domains. “This was the case with OnlineNIC and ID Shield, and that’s why we’ve taken this action to stop this type of domain name abuse,” writes Romero.

Sprout Social files to go public. The social media management platform Sprout Social has filed for its IPO, reports Crunchbase. Founded in 2010, the company currently has 23,000 customers in 100 countries per its S-1 filing, with the lion’s share of its revenue coming from software subscriptions. “The company estimates that the market opportunity for its product is $13 billion in the United States. And since about 30 percent of its revenue came from customers in other countries in 2018, Sprout believes the international opportunity is, at least, as large,” writes Crunchbase. Sprout Social revenue in 2018 was $78.8 million, up from $44.8 million the previous year.

Hootsuite and Proofpoint deliver compliance tool. Hootsuite and Proofpoint, a security and compliance company, have partnered to offer a real-time compliance verification feature for composing and publishing social media content via the Hootsuite composer platform. “This feature will put up the guardrails, and give our customers in regulated industries the confidence to empower their people to capitalize on the power of social media to achieve their business objectives, particularly around social selling,” said Hootsuite SVP of Product and Technology Ryan Donovan. The tool will be integrated into Hootsuite’s composer platform so that social media posts will be automatically screened — alerting Hootsuite users to common compliance policy violations as they type. Users will not be able to publish a social post until the violation is corrected.

Buffer adds scheduling feature for Instagram Stories. Earlier this month, the social media management tool Buffer rolled out a new feature that allows users to plan their Instagram Stories in advance, with the option to storyboard their posts, create draft captions, and schedule a reminder to post the Story. “We spent months talking to consumer brands that are using Stories as a key channel for their marketing, and found that there was a common theme; they were spending too much time navigating between tools to prepare their Stories, and scrambling to keep up with their audience’s appetite for regular content,” said Buffer Product Marketer Mike Eckstein. The feature is available on Buffer’s web application, as well as its iOS and Android apps. 

Facebook aims to keep users healthy. Facebook has rolled out a new Preventive Health Tool designed to connect users with health resources and check-up reminders. “To help you keep track of your checkups, we collect information you provide, such as when you set reminders or mark a screening as done. We also log more general activity, like frequency of clicks for a specific button, which allows us to understand how the tool is being used, in order to improve it over time,” wrote Facebook on its Newsroom Blog. Users will be able to set appointment reminders, schedule health tests and find affordable care using the Preventive Health tool. The company says recommendations for checkups offered by the tool are based on a user’s sex and age per their Facebook profile. At first glance, it looks like a helpful resource, but it’s worth noting just how much information you’re giving a platform that has never been all that good about keeping user data safe.

Employees want Facebook to change its ad policies. In response to Facebook’s policy of not fact-checking ads from politicians, a group of 250 Facebook employees signed a letter asking the company’s leaders to rethink how it handles political advertising, reports the New York Times. The employees said the policy is a “threat to what FB stands for” and that they “Strongly object to this policy as it stands.” The letter, which was posted on Facebook’s internal Workplace platform, was shared with the New York Times by three employees who asked not to be named. Facebook spokesperson Bertie Thomson told the New York Times, “Facebook’s culture is built on openness, so we appreciate our employees voicing their thoughts on this important topic. We remain committed to not censoring political speech, and will continue exploring additional steps we can take to bring increased transparency to political ads.”

Twitter confirms certain users will see more ads. Twitter has confirmed that users with high follower counts may be seeing more ads in their timeline. The company sent the following statement to Marketing Land when asked about the uptick in ads for certain users: “Historically, people with high follower counts have seen fewer ads. Recently, we’ve taken a more consistent approach of showing ads to everyone who uses Twitter and as a result, people with higher follower counts will notice an increase in the number of ads they’re seeing.” When asked what it deems a “high follower count,” Twitter would not divulge any numbers.

On the Move

Michael Scheiner has been named chief marketing officer for Tommy Hilfiger. The retail brand says Scheiner will be tasked with bringing the company into a “new era of innovative marketing strategies” across digital and experiential platforms, aiming to reach the next generation of shoppers. Tommy Hilfiger CEO Daniel Grieder said he believes Scheiner will help fuel the company’s ongoing digital transformation. “I am excited to work closely with Tommy, Daniel and the company’s talented marketing teams around the world to write the next chapter,” said Scheiner. Before joining the company, Scheiner was the SVP of global marketing for Hollister Co.

Nationwide has promoted Ramon Jones to chief marketing officer. He has been with the insurance company for nearly 20 years and will be replacing Terrance Williams, who announced he will be leaving the company in November. Jones will report to Nationwide CEO Kirt Walker and will oversee brand and marketing strategy, creative services, social media and corporate communications. “During his nearly two decades at Nationwide, he has held numerous leadership roles in the business and in marketing that make him uniquely qualified to promote and protect the Nationwide brand and position him to drive further business success,” said Walker about Jones’ appointment to CMO. Jones most recently served as Nationwide’s financial services marketing leader.

Beyond Meat has recruited Stuart Kronauge to serve as its chief marketing officer. The producer of plant-based burgers is bringing on Kronauge to help advance product sales in retail stores and build partnerships with more restaurants open to vegan offerings. Kronauge comes to Beyond Meat from Coca-Cola, where she was employed for more than 20 years. Kronauge is credited with the resurgence of Coca-Cola’s Coke Zero, Diet Coke, Sprite and Fanta product lines during her time at the company. She will officially join Beyond Meat in January 2020. 

Jeff Herzog has been appointed chairman and CEO of the digital marketing agency ZD3. “I am excited to be back in the industry as the CEO of ZD3 and grateful to my team and clients that continue to support our innovation and drive forward into rapidly expanding and changing digital landscape,” said Herzog. Prior to being named CEO, Herzog founded ZOG Digital in 2011. That agency was acquired by Inventis Digital in 2017. He is also the founder and former CEO of iCrossing, which was purchased in 2010 by Hearst.

Toronto-based agency SDI Marketing has named Tom Sorotschynski vice president. He will be tasked with driving the agency forward and will report to Senior Vice President Kim Harland. “His passion for emerging technologies and understanding of the ever-evolving digital landscape allows him to create programs that are all-encompassing and truly connect consumers to brands,” said Harland. Sorotschynski most recently served as the vice president and general manager for Traffik & 5Crowd agencies owned by Sgsco, a global collective of brand agencies.

BrandStar, a full-service production and marketing agency, has hired Bradley Saveth to fill the newly created VP of strategic partnerships role. “Saveth joins BrandStar with a wealth of relationships, experience, and expertise to match clients with strategic opportunities and expand the agency’s global footprint,” said BrandStar CEO Mark Alfieri. Before joining BrandStar, Saveth launched two startups: goCharge and Vital Motion. He most recently was the president of Big Salad Consulting. During his career, Saveth has worked with a number of consumer brands, including Verizon, Wells Fargo, Wendy’s and more.


About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.






Removing More Coordinated Inauthentic Behavior From Russia

By Nathaniel Gleicher, Head of Cybersecurity Policy

Today, we removed three networks of accounts, Pages and Groups for engaging in foreign interference — which is coordinated inauthentic behavior on behalf of a foreign actor — on Facebook and Instagram. They originated in Russia and targeted Madagascar, Central African Republic, Mozambique, Democratic Republic of the Congo, Côte d’Ivoire, Cameroon, Sudan and Libya. Each of these operations created networks of accounts to mislead others about who they were and what they were doing. Although the people behind these networks attempted to conceal their identities and coordination, our investigation connected these campaigns to entities associated with Russian financier Yevgeniy Prigozhin, who was previously indicted by the US Justice Department. We have shared information about our findings with law enforcement, policymakers and industry partners.

We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. We’re taking down these Pages, Groups and accounts based on their behavior, not the content they posted. In each of these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.

We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge. We’re committed to continually improving to stay ahead. That means building better technology, hiring more people and working closer with law enforcement, security experts and other companies.

What We’ve Found So Far

Today, we removed 35 Facebook accounts, 53 Pages, seven Groups and five Instagram accounts that originated in Russia and focused on Madagascar, the Central African Republic, Mozambique, Democratic Republic of the Congo, Côte d’Ivoire and Cameroon. The individuals behind this activity used a combination of fake accounts and authentic accounts of local nationals in Madagascar and Mozambique to manage Pages and Groups, and post their content. They typically posted about global and local political news including topics like Russian policies in Africa, elections in Madagascar and Mozambique, election monitoring by a local non-governmental organization and criticism of French and US policies.

  • Presence on Facebook: 35 Facebook accounts, 53 Pages, 7 Groups and 5 Instagram accounts.
  • Followers: About 475,000 accounts followed one or more of these Pages and around 450 people followed one or more of these Groups and around 650 people followed one or more of these Instagram accounts.
  • Advertising: Around $77,000 in spending for ads on Facebook paid for in US dollars. The first ad ran in April 2018 and the most recent ad ran in October 2019.

We found this activity as part of our internal investigations into Russia-linked, suspected coordinated inauthentic behavior in Africa. Our analysis benefited from open source reporting.

Below is a sample of the content posted by some of these Pages:

Page Name: “Sudan in the Eyes of Others” Caption Translation: Yam Brands, the company that owns the KFC franchise stated that it intends on opening 3 branches of its franchise in Sudan. The Spokesman of the company based in the american state of Kentucky, Takalaty Similiny, issued a statement saying that the branches are currently under construction and will open in mid November.

Translation: The Police of the Republic of Mozambique announced today that nine members of RENAMO were detained for their participation in the attempt to remove urns from one of the voting posts in the district of Machanga, Sofala and for having vandalised the infrastructure. According to the spokesperson for PRM, that spoke in a press-conference in Maputo, the nine people are accused of having lead around 300 RENAMO supporters that tried to remove the urns during counting at the Inharingue Primary School.

Translation: President of Central African Republic asked Vladimir Putin to organize the delivery of heavy weapons. Wednesday, in Sochi, the president Faustin-Archange Touadera asked his counterpart Vladimir Putin to increase the military assistance to the Republic, asking specifically for the supply of heavier weapons. “Russia is giving a considerable help to our country. They already carried out two weapons deliveries, trained our national troops, trained police officers, but for more effectiveness, we need heavy weapons. We hope that Russia will be able to allocate us combat vehicles, artillery canons and other killing weapons in order for us to bring our people to safety” said Touadera.
However, there is still an issue which is blocking us to implement this. The embargo on Central African Republic was not fully lifted in order for Russia to implement the plans of Touadera. Until now, it is only possible to supply weapons with a caliber less than 14,5 mm.
The embargo do not stop armed groups to get illegally heavy weapons for themselves, which is not helping the efforts of the government to establish peace.
We ask the Security Council of United Nations to draw attention on what their (sometimes reckless) sanctions are bringing.

We also removed 17 Facebook accounts, 18 Pages, 3 Groups and 6 Instagram accounts that originated in Russia and focused primarily on Sudan. The people behind this activity used a combination of authentic accounts of Sudanese nationals, fake and compromised accounts — some of which had already been disabled by our automated systems — to comment, post and manage Pages posing as news organizations, as well as direct traffic to off-platform sites. They frequently shared stories from SUNA (Sudan’s state news agency) as well as Russian state-controlled media Sputnik and RT, and posted primarily in Arabic and some in English. The Page administrators and account owners posted about local news and events in Sudan and other countries in Sub-Saharan Africa, including Sudanese-Russian relations, US-Russian relations, Russian foreign policy and Muslims in Russia.

  • Presence on Facebook and Instagram: 17 Facebook accounts, 18 Pages, 3 Groups and 6 accounts on Instagram.
  • Followers: About 457,000 accounts followed one or more of these Pages, about 1,300 accounts joined at least one of these Groups and around 2,900 people followed one or more of these Instagram accounts.
  • Advertising: Around $160 in spending for ads on Facebook paid for in Russian rubles. The first ad ran in April 2018 and the most recent ad ran in September 2019.

We found this activity as part of our internal investigations into Russia-linked, suspected coordinated inauthentic behavior in the region.

Below is a sample of the content posted by some of these Pages:

Translation (first two paragraphs): “American and British intelligence put together false information about Putin’s inner circle… a diplomatic military source said that American and British intelligence agencies are preparing to leak false information about people close to the president, Vladimir Putin, and the leadership of the Russian defense ministry.“

Page title: “Nile Echo”
Post translation (first paragraph only): French movements to abort Russian and Sudanese mediation in the Central African Republic…

Translation: #Article (I am completely sure that the person in the cell is not [ousted Sudanese leader Omar] al-Bashir, but I don’t have physical evidence proving this) Aml al-Kordofani wrote: The person resembling al-Bashir who is sitting behind the bars..who is he? The double game continues between the military and the sons of Gosh (referring to former Sudanese intelligence chief Salah Abdullah Mohamed Saleh) according to the American plan. The American plan employs psychological operations, as we mentioned earlier, and are undertaken by a huge office within the US Department of Defense.

Finally, we removed a network of 14 Facebook accounts, 12 Pages, one Group and one Instagram account that originated in Russia and focused on Libya. The individuals behind this activity used a combination of authentic accounts of Egyptian nationals, fake and compromised accounts — some of which had already been disabled by our automated systems — to manage Pages and drive people to an off-platform domain. They frequently shared stories from Russian state-controlled media Sputnik and RT. The Page admins and account owners typically posted in Arabic about local news and geopolitical issues including Libyan politics, crimes, natural disasters, public health, Turkey’s alleged sponsoring of terrorism in Libya, illegal migration, militia violence, the detention of Russian citizens in Libya for alleged interference in elections and a meeting between Khalifa Haftar, head of the Libyan National Army, and Putin. Some of these Pages posted content on multiple sides of political debate in Libya, including criticism of the Government of National Accord, US foreign policy, and Haftar, as well as support of Muammar Gaddafi and his son Saif al-Islam Gaddafi, Russian foreign policy, and Khalifa Haftar.

  • Presence on Facebook and Instagram: 14 Facebook accounts, 12 Pages, one Group and one account on Instagram.
  • Followers: About 212,000 accounts followed one or more of these Pages, 1 account joined this Group and around 29,300 people followed this Instagram account.
  • Advertising: About $10,000 in spending for ads on Facebook, paid for primarily in US dollars, euros and Egyptian pounds. The first ad ran in May 2014 and the most recent ad ran in October 2019.

Based on a tip shared by the Stanford Internet Observatory, we conducted an investigation into suspected Russia-linked coordinated inauthentic behavior and identified the full scope of this activity. Our analysis benefited from open source reporting.

Below is a sample of the content posted by some of these Pages:

Page name: “Voice of Libya” Post translation: The Government of National Accord [GNA] practices hypocrisy … the detention of two Russian citizens under the pretense that they are manipulating elections in Libya. But in reality, no elections are taking place in Libya now. So the pretense under which the Russians were arrested is fictitious and contrived.

Page title: “Libya Gaddafi” Post translation: “Why was late Libyan leader Muammar al-Gaddafi killed? Everyone was happy in Libya. There are people in America who sleep under bridges. There was never any discrimination in Libya, and there were not problems. The work was good and the money, too.”

Page name: “Voice of Libya” Post translation: First meeting between Haftar and Putin in Moscow. Several sources reported on the visit of the army’s commander-in-chief, Field Marshal Khalifa Haftar, to Moscow, where he met Russian President Vladimir Putin, to discuss developments in the military and political situation in Libya. This is Haftar’s first meeting with the Russian president. He has previously visited Russia and met with senior officials in the foreign and defense ministries, and they are expected to meet again.

Page title: “Falcons of the Conqueror” Post translation: Field Marshal Haftar: Libyans decide who to elect as the next president, and it is Saif al-Islam al-Gaddafi’s right to be a candidate






Social Shorts: LinkedIn refreshes Daily Rundown and Facebook’s VP of Ads exits company



Amy Gesenhues

This collection of social media marketing and new hire announcements is a compilation of the past week’s briefs from our daily Marketing Land newsletter. Click here to subscribe and get more news like this delivered to your inbox every morning.

Facebook makes space for news. Facebook has confirmed to Marketing Land it will not be placing ads in its newly introduced News tab, but publishers will still be able to monetize their content as usual with Instant Articles and other options.

The Washington Post first reported Facebook would be launching a news tab on the platform. Sources told the paper that the news stories featured in the tab will include articles from hundreds of news organizations, some of which will receive payment from Facebook for their content.

“Facebook’s service will include some human curation by a small editorial team of journalists, who will select top stories. But mostly the News tab will rely on computerized algorithms that seek to match user interests with offerings from a wide range of reports on politics, sports, health, technology, entertainment and other subjects,” reports the Washington Post. 

The Washington Post listed itself among the approximately 200 news organizations that will be included in the launch.

Other publications named by sources include the Wall Street Journal (which is owned by News Corp.), Business Insider, BuzzFeed News and local news publications. Sources told the Washington Post that the New York Times was also likely to participate, but terms had not been finalized. According to the report, payments by Facebook to news organizations will range from “hundreds of thousands to millions of dollars.”

New look for LinkedIn’s Daily Rundown. LinkedIn has redesigned its Daily Rundown, the curated news digest assembled by an internal team of editors. The redesign includes new navigation that makes it easier for users to move between stories and go deeper into specific topics. By clicking on a headline within the Daily Rundown, users will be able to see conversations happening on the platform about the story. The site has also started a pilot program allowing users to subscribe to regularly published pieces by industry thought leaders. LinkedIn mentioned two available newsletters, “Get Hired” and “The Hustle,” both offering insights into the job hunt and professional goals. Users who have access to the pilot program can find more newsletters by clicking the “Newsletters Related to Your Industry” option under the “My Network” tab.

TikTok touts its safety measures. The short-form video app TikTok, which boasts more than 500 million users, has released a second set of videos in its “You’re in Control” series, highlighting the platform’s safety and privacy features. “Educating our users on the options we provide to help them craft their optimal TikTok experience is one of our top priorities,” writes TikTok on its newsroom blog. The company has enlisted 12 of its most popular creators to produce videos on six topics, including how to block a user, how to filter comments, how to report inappropriate behavior and how to disable or enable the duet feature, which allows two videos to be posted side by side on the same screen.

Majority of political tweets come from only 10% of users. After analyzing a random sample of tweets from U.S. adults with public accounts, Pew Research found that the majority of political tweets are made by a very small segment of users. According to its findings, 97% of tweets mentioning national politics came from only 10% of users. Those who tweet about politics are also more likely to follow others who feel the same way they do. “Political tweeters – defined as those who tweeted at least five times in total, and at least twice about national politics, over the year of the study period – are almost twice as likely as other Twitter users to say the people they follow on Twitter have political beliefs similar to their own,” writes Pew Research.

Facebook’s latest efforts to safeguard elections

Facebook is doing damage control this week after drawing heavy criticism for how it handles political content and advertising on the platform, namely its decision to let candidates run ads containing false information. The company announced a number of new updates to its Ad Library, launched the Facebook Protect program and made Pages more transparent. None of these updates reverses the decision that, as the Washington Post put it: “Opens a frightening new world for political communication — and for national politics.”

Here’s a rundown of the company’s recent moves to keep its platform safe during elections:

Updates to Ad Library offer more insight into political ads. Facebook’s Ad Library, which archives ads about social issues, elections and politics for seven years, will now include a feature that tracks spending by U.S. presidential candidates. Facebook is also adding spend details for candidate campaigns at the state and regional level, and clarifying where each ad ran: Facebook, Instagram, Messenger or Facebook’s Audience Network. Starting next month, Facebook said it will begin testing a new database that allows researchers to download the entire ad library and pull daily snapshots to track day-to-day changes.

Facebook Protect: A program to keep political accounts safe. Launched this week, Facebook Protect is designed to secure Facebook and Instagram accounts belonging to candidates, elected officials, federal and state departments and agencies, and party committees in the U.S. Participants must enroll to be part of the program, and once accepted, will receive advanced security protections and monitoring for potential hacking threats.

“If we discover an attack against one enrolled individual, we can review and protect other accounts that are enrolled in our program and affiliated with that same campaign,” writes Facebook on the Facebook Protect website. “Additionally, all Page admins of enrolled Pages will be required to go through Page Publishing Authorization to ensure the security of the Page, regardless of whether or not individual Page admins choose to enroll in this program.”

To enroll, Page owners must already have received the blue verification badge. Once a Page has been verified, it can enroll via the form listed at the bottom of the Facebook Protect site.

More transparency for political Pages. Facebook is adding an “Organizations that Manage This Page” tab to Pages to clarify what organizations are behind political Pages on the platform. The tab will list the organization’s legal name and verified city, phone number or website. For now, this information will only be included for Pages with large U.S. audiences that have already completed Facebook’s business verification process and any Pages that have been authorized to run ads about social issues, elections or politics in the U.S.

“If we find a Page is concealing its ownership in order to mislead people, we will require it to successfully complete the verification process and show more information in order for the Page to stay up,” write Facebook executives Guy Rosen, Nathaniel Gleicher and Rob Leathern on the company’s Newsroom blog.

On the Move

Rob Goldman, Facebook’s VP of ads, announced on Twitter that Tuesday was his last day at the company: “Some personal news: After more than 7 years, today is my last day at Facebook. What I will miss most are the people, who are among the smartest and most talented I’ve ever met. I wish them all the very best in their important work,” wrote Goldman. He first joined Facebook in 2012 and was named director of product ads and Pages in 2014.

Twitch, the Amazon-owned game streaming site, has named Doug Scott as its new chief marketing officer. He is replacing Kate Jhaveri, Twitch’s former CMO who left the company earlier this year. “Doug has deep experience extending brands into new markets across games and entertainment industries, making him the ideal fit to lead Twitch’s marketing strategy,” said Twitch COO Sara Clemens. Prior to joining Twitch, Scott led marketing for the social gaming platform Zynga and was CMO for the music startup BandPage. 

Hyundai Motors has hired Angela Zepeda as its next chief marketing officer. Zepeda will oversee all U.S. marketing and advertising efforts for the automaker, including strategic direction, brand development, national and regional advertising, experiential and social marketing, lead gen and more. “Angela was already a member of our extended family and we’ve seen firsthand her creativity, business acumen and talent in building our brand and leading teams,” said Hyundai COO Brian Smith. Prior to joining the company, Zepeda was the senior vice president and managing director of INNOCEAN USA, Hyundai’s agency of record.

Heidi Bullock has been named chief marketing officer for Tealium, a customer data orchestration platform. CEO Jeff Lunsford called Bullock a fantastic addition to the team. “Heidi will undoubtedly help us expand our market position in a high growth market and continue to solidify us as a global leader in the industry,” said Lunsford. Prior to joining Tealium, Bullock most recently served as the CMO for Engagio. She also held the role of group vice president of global marketing at Marketo.

The marketing performance management platform Allocadia has hired Julia Stead as its new chief marketing officer and added John Stetic to its board of directors. Stead will focus on working with Allocadia clients, guiding marketing leaders in developing strategy, investing intelligently and optimizing their marketing investment results. “I’m thrilled to be joining a team that is focused on helping other marketers achieve the same growth I’m passionate about driving for my own company,” said Stead. Before joining Allocadia, Stead was the VP of marketing at Invoca. Stetic, a veteran of the martech product industry, currently serves as the senior VP of innovation and partnerships at ServiceMax.

BitPay, a global blockchain payments provider, has appointed Bill Zielke as the company’s first chief marketing officer. In his new role, Zielke will be tasked with executing marketing strategy that supports BitPay’s growth objectives, building a strong business and consumer brand, and cultivating an increased awareness around cryptocurrency and its use. “We realized we needed a seasoned marketer to advance the company to the next level. We are excited to have Bill on board as we attract more users to BitPay and drive greater merchant acceptance of cryptocurrencies,” said CEO Stephen Pair. Zielke’s former marketing leadership roles include time at Ingo Money and Forter, serving as CMO at both venture-backed startups.


About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine. Read more of Amy’s articles.






Helping to Protect the 2020 US Elections

By Guy Rosen, VP of Integrity; Katie Harbath, Public Policy Director, Global Elections; Nathaniel Gleicher, Head of Cybersecurity Policy and Rob Leathern, Director of Product Management

We have a responsibility to stop abuse and election interference on our platform. That’s why we’ve made significant investments since 2016 to better identify new threats, close vulnerabilities and reduce the spread of viral misinformation and fake accounts. 

Today, almost a year out from the 2020 elections in the US, we’re announcing several new measures to help protect the democratic process and providing an update on initiatives already underway:

Fighting foreign interference

  • Combating inauthentic behavior, including an updated policy
  • Protecting the accounts of candidates, elected officials, their teams and others through Facebook Protect 

Increasing transparency

  • Making Pages more transparent, including showing the confirmed owner of a Page
  • Labeling state-controlled media on their Pages and in our Ad Library
  • Making it easier to understand political ads, including a new US presidential candidate spend tracker

Reducing misinformation

  • Preventing the spread of misinformation, including clearer fact-checking labels 
  • Fighting voter suppression and interference, including banning paid ads that suggest voting is useless or advise people not to vote
  • Helping people better understand the information they see online, including an initial investment of $2 million to support media literacy projects

Fighting Foreign Interference

Combating Inauthentic Behavior

Over the last three years, we’ve worked to identify new and emerging threats and remove coordinated inauthentic behavior across our apps. In the past year alone, we’ve taken down over 50 networks worldwide, many ahead of major democratic elections. As part of our effort to counter foreign influence campaigns, this morning we removed four separate networks of accounts, Pages and Groups on Facebook and Instagram for engaging in coordinated inauthentic behavior. Three of them originated in Iran and one in Russia. They targeted the US, North Africa and Latin America. We have identified these manipulation campaigns as part of our internal investigations into suspected Iran-linked inauthentic behavior, as well as ongoing proactive work ahead of the US elections.

We took down these networks based on their behavior, not the content they posted. In each case, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action. We have shared our findings with law enforcement and industry partners. More details can be found here.

As we’ve improved our ability to disrupt these operations, we’ve also built a deeper understanding of different threats and how best to counter them. We investigate and enforce against any type of inauthentic behavior. However, the most appropriate way to respond to someone boosting the popularity of their posts in their own country may not be the best way to counter foreign interference. That’s why we’re updating our inauthentic behavior policy to clarify how we deal with the range of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state.

Protecting the Accounts of Candidates, Elected Officials and Their Teams

Today, we’re launching Facebook Protect to further secure the accounts of elected officials, candidates, their staff and others who may be particularly vulnerable to targeting by hackers and foreign adversaries. As we’ve seen in past elections, they can be targets of malicious activity. However, because campaigns are generally run for a short period of time, we don’t always know who these campaign-affiliated people are, making it harder to help protect them.

Beginning today, Page admins can enroll their organization’s Facebook and Instagram accounts in Facebook Protect and invite members of their organization to participate in the program as well. Participants will be required to turn on two-factor authentication, and their accounts will be monitored for hacking, such as login attempts from unusual locations or unverified devices. And, if we discover an attack against one account, we can review and protect other accounts affiliated with that same organization that are enrolled in our program. Read more about Facebook Protect and enroll here.
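Facebook hasn’t published how this monitoring works, but the basic idea of flagging logins from unusual locations or devices can be shown with a toy heuristic. The sketch below is purely illustrative; all names are hypothetical, and a real system would weigh far more signals, such as device fingerprints, IP reputation and login velocity:

```python
from dataclasses import dataclass, field

@dataclass
class AccountHistory:
    """Hypothetical per-account login history; not Facebook's actual model."""
    seen_countries: set = field(default_factory=set)
    seen_devices: set = field(default_factory=set)

def is_suspicious_login(history: AccountHistory, country: str, device_id: str) -> bool:
    # Flag any login from a country or device this account has never used before.
    return country not in history.seen_countries or device_id not in history.seen_devices

# Usage: a campaign staffer who normally logs in from the US on one laptop.
history = AccountHistory(seen_countries={"US"}, seen_devices={"laptop-1"})
print(is_suspicious_login(history, "US", "laptop-1"))  # False: familiar location and device
print(is_suspicious_login(history, "RU", "laptop-1"))  # True: unusual location, flag for review
```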

Increasing Transparency

Making Pages More Transparent

We want to make sure people are using Facebook authentically, and that they understand who is speaking to them. Over the past year, we’ve taken steps to ensure Pages are authentic and more transparent by showing people the Page’s primary country location and whether the Page has merged with other Pages. This gives people more context on the Page and makes it easier to understand who’s behind it. 

Increasingly, we’ve seen people failing to disclose the organization behind their Page as a way to make people think that a Page is run independently. To address this, we’re adding more information about who is behind a Page, including a new “Organizations That Manage This Page” tab that will feature the Page’s “Confirmed Page Owner,” including the organization’s legal name and verified city, phone number or website.

Initially, this information will only appear on Pages with large US audiences that have gone through Facebook’s business verification. In addition, Pages that have gone through the new authorization process to run ads about social issues, elections or politics in the US will also have this tab. And starting in January, these advertisers will be required to show their Confirmed Page Owner. 

If we find a Page is concealing its ownership in order to mislead people, we will require it to successfully complete the verification process and show more information in order for the Page to stay up. 

Labeling State-Controlled Media

We want to help people better understand the sources of news content they see on Facebook so they can make informed decisions about what they’re reading. Next month, we’ll begin labeling media outlets that are wholly or partially under the editorial control of their government as state-controlled media. This label will appear both on their Pages and in our Ad Library.

We will hold these Pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state. 

We developed our own definition and standards for state-controlled media organizations with input from more than 40 experts around the world specializing in media, governance, human rights and development. Those consulted represent leading academic institutions, nonprofits and international organizations in this field, including Reporters Without Borders, Center for International Media Assistance, European Journalism Center, Oxford Internet Institute’s Project on Computational Propaganda, Center for Media, Data and Society (CMDS) at the Central European University, the Council of Europe, UNESCO and others.

It’s important to note that our policy draws an intentional distinction between state-controlled media and public media, which we define as any entity that is publicly financed, retains a public service mission and can demonstrate its independent editorial control. At this time, we’re focusing our labeling efforts only on state-controlled media. 

We will update the list of state-controlled media on a rolling basis beginning in November. And, in early 2020, we plan to expand our labeling to specific posts and apply these labels on Instagram as well. For any organization that believes we have applied the label in error, there will be an appeals process. 

Making it Easier to Understand Political Ads

In addition to making Pages more transparent, we’re updating the Ad Library, Ad Library Report and Ad Library API to help journalists, lawmakers, researchers and others learn more about the ads they see. This includes:

  • A new US presidential candidate spend tracker, so that people can see how much candidates have spent on ads
  • Adding additional spend details at the state or regional level to help people analyze advertiser and candidate efforts to reach voters geographically
  • Making it clear if an ad ran on Facebook, Instagram, Messenger or Audience Network
  • Adding useful API filters, programmatic access to download ad creatives and a repository of frequently used API scripts

In addition to updates to the Ad Library API, in November, we will begin testing a new database with researchers that will enable them to quickly download the entire Ad Library, pull daily snapshots and track day-to-day changes.
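As a concrete illustration of that programmatic access, here is a minimal sketch of querying the Ad Library API from Python. It assumes the Graph API’s ads_archive endpoint and a valid access token; the version string, parameters and field names shown are examples that Facebook may change over time, so treat this as a sketch rather than official reference code:

```python
import requests

# Hypothetical token; obtain a real one through Facebook's identity confirmation process.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"
AD_ARCHIVE_URL = "https://graph.facebook.com/v5.0/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",   # the library's political/issue ad category
    "ad_reached_countries": "['US']",       # restrict to ads delivered in the US
    "search_terms": "election",             # free-text search over ad content
    "fields": "page_name,ad_creative_body,spend,ad_delivery_start_time",
    "limit": 25,
}

resp = requests.get(AD_ARCHIVE_URL, params=params)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), "|", ad.get("spend"))
```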

Visit our Help Center to learn more about the changes to Pages and the Ad Library.

Reducing Misinformation

Preventing the Spread of Viral Misinformation

On Facebook and Instagram, we work to keep confirmed misinformation from spreading. For example, on Instagram we reduce its distribution so fewer people see it and remove it from Explore and hashtags, and on Facebook we reduce its distribution in News Feed. On Instagram, we also make content from accounts that repeatedly post misinformation harder to find, for example by filtering that account’s content out of Explore and hashtag pages. And on Facebook, if Pages, domains or Groups repeatedly share misinformation, we’ll continue to reduce their overall distribution and place restrictions on their ability to advertise and monetize.

Over the next month, content across Facebook and Instagram that has been rated false or partly false by a third-party fact-checker will start to be more prominently labeled so that people can better decide for themselves what to read, trust and share. These labels will be shown on top of false and partly false photos and videos, including on top of Stories content on Instagram, and will link out to the assessment from the fact-checker.

Much like we do on Facebook when people try to share known misinformation, we’re also introducing a new pop-up that will appear when people attempt to share posts on Instagram that include content that has been debunked by third-party fact-checkers.

In addition to clearer labels, we’re also working to take faster action to prevent misinformation from going viral, especially given that quality reporting and fact-checking takes time. In many countries, including in the US, if we have signals that a piece of content is false, we temporarily reduce its distribution pending review by a third-party fact-checker.
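Facebook doesn’t publish how these demotions are weighted, but the mechanism described here, reducing a post’s distribution while a fact-check is pending and reducing it further once content is rated false, can be sketched in ranking terms. The multipliers below are invented purely for illustration:

```python
def adjusted_rank_score(base_score: float,
                        pending_fact_check: bool,
                        rated_false: bool) -> float:
    """Toy demotion logic; the multipliers are hypothetical, not Facebook's."""
    if rated_false:
        return base_score * 0.2   # confirmed misinformation: strong demotion
    if pending_fact_check:
        return base_score * 0.5   # suspected only: temporary demotion pending review
    return base_score

# Usage: a flagged post is shown to fewer people while fact-checkers review it.
print(adjusted_rank_score(1.0, pending_fact_check=True, rated_false=False))  # 0.5
print(adjusted_rank_score(1.0, pending_fact_check=False, rated_false=True))  # 0.2
```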

Fighting Voter Suppression and Intimidation

Attempts to interfere with or suppress voting undermine our core values as a company, and we work proactively to remove this type of harmful content. Ahead of the 2018 midterm elections, we extended our voter suppression and intimidation policies to prohibit:

  • Misrepresentation of the dates, locations, times and methods for voting or voter registration (e.g. “Vote by text!”);
  • Misrepresentation of who can vote, qualifications for voting, whether a vote will be counted and what information and/or materials must be provided in order to vote (e.g. “If you voted in the primary, your vote in the general election won’t count.”); and 
  • Threats of violence relating to voting, voter registration or the outcome of an election.

We remove this type of content regardless of who it’s coming from. Ahead of the midterm elections, our Elections Operations Center removed more than 45,000 pieces of content that violated these policies, more than 90% of which our systems detected before anyone reported the content to us.

We also recognize that there are certain types of content, such as hate speech, that are equally likely to suppress voting. That’s why our hate speech policies ban efforts to exclude people from political participation on the basis of things like race, ethnicity or religion (e.g., telling people not to vote for a candidate because of the candidate’s race, or indicating that people of a certain religion should not be allowed to hold office).

In advance of the US 2020 elections, we’re implementing additional policies and expanding our technical capabilities on Facebook and Instagram to protect the integrity of the election. Following up on a commitment we made in the civil rights audit report released in June, we have now implemented our policy banning paid advertising that suggests voting is useless or meaningless, or advises people not to vote. 

In addition, our systems are now more effective at proactively detecting and removing this harmful content. We use machine learning to help us quickly identify potentially incorrect voting information and remove it. 
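Facebook hasn’t described its models, but the general shape of this kind of detection is standard text classification. A minimal sketch using scikit-learn, with toy examples echoing the prohibited claims listed above, might look like the following; the training data and threshold are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples; a production system would train on large corpora
# and use many more signals than raw text. 1 = potentially incorrect voting info.
texts = [
    "Vote by text! Just send your candidate's name to this number.",
    "Polls close at 8 p.m.; check your state's site for your polling place.",
    "If you voted in the primary, your vote in the general election won't count.",
    "Remember to register before your state's deadline.",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Flag new posts for human review rather than removing them outright.
post = "You can cast your ballot by replying VOTE to this message."
score = model.predict_proba([post])[0][1]
if score > 0.5:
    print(f"Flag for human review (score={score:.2f})")
```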

We are also continuing to expand and develop our partnerships to provide expertise on trends in voter suppression and intimidation, as well as early detection of violating content. This includes working directly with secretaries of state and election directors to address localized voter suppression that may only be occurring in a single state or district. This work will be supported by our Elections Operations Center during both the primary and general elections. 

Helping People Better Understand What They See Online

Part of our work to stop the spread of misinformation is helping people spot it for themselves. That’s why we partner with organizations and experts in media literacy. 

Today, we’re announcing an initial investment of $2 million to support projects that empower people to determine what to read and share — both on Facebook and elsewhere. 

These projects range from training programs to help ensure the largest Instagram accounts have the resources they need to reduce the spread of misinformation, to expanding a pilot program that brings together senior citizens and high school students to learn about online safety and media literacy, to public events in local venues like bookstores, community centers and libraries in cities across the country. We’re also supporting a series of training events focused on critical thinking among first-time voters. 

In addition, we’re including a new series of media literacy lessons in our Digital Literacy Library. These lessons are drawn from the Youth and Media team at the Berkman Klein Center for Internet & Society at Harvard University, which has made them available for free worldwide under a Creative Commons license. The lessons, created for middle and high school educators, are designed to be interactive and cover topics ranging from assessing the quality of the information online to more technical skills like reverse image search.

We’ll continue to develop our media literacy efforts in the US and we’ll have more to share soon. 






Social Shorts: LinkedIn Events, Facebook Story Ad templates and Pinterest Lite



Amy Gesenhues

This collection of social media marketing and new hire announcements is a compilation of the past week’s briefs from our daily Marketing Land newsletter. Click here to subscribe and get more news like this delivered to your inbox every morning.

Event planning on LinkedIn. LinkedIn has rolled out a new Events feature that makes it easy to create and share event announcements. The feature can be found in the “Community” panel on the left side of the newsfeed, and allows users to enter an event’s description, date, time and venue, and to invite connections using filters such as location, company, industry and school. “You’ll be able to seamlessly create and join professional events, invite your connections, manage your event, have conversations with other attendees, and stay in touch online after the event ends,” writes Ajay Datta, LinkedIn India’s head of product. Once an event is posted, users can track attendees, post updates and interact with the users they have invited. Users who have accepted an event invite on the platform will be able to see their events under the “My Network” tab.

Twitter defends its policies for world leaders. Twitter is defending its policies for allowing content from world leaders that may otherwise be prohibited. After listing enforcement scenarios in which it may remove a tweet from a political figure, such as “promotion of terrorism” and “clear and direct threats of violence against an individual,” the company goes on to say, “We will err on the side of leaving the content up if there is a clear public interest in doing so.” The company said it understands why users want clear yes/no decisions regarding what’s allowed, but that it’s not that simple: “Our mission is to provide a forum that enables people to be informed and to engage their leaders directly.”

Reddit pushes back against lawmakers. On Wednesday, executives from Reddit, Google and the Electronic Frontier Foundation took questions from Congress about possible modifications to the 1996 Communications Decency Act currently being considered by lawmakers, specifically Section 230 of the law which gives social media platforms immunity from being held responsible for content posted by users. “Lawmakers from both major political parties have said Congress could make additional changes to the law to restrict companies’ immunity,” reports Reuters. Reddit CEO Steve Huffman published his comments to Congress on Reddit’s Upvoted blog, explaining why Section 230 is critical to the company: “Reddit uses a different model of content moderation from our peers — one that empowers communities — and this model relies on Section 230. I’m here because even small changes to the law will have outsized consequences for our business, our communities, and what little competition remains in our industry.” 

An easier way to create Story Ads. Facebook is launching customizable templates for Story ads that can be used across Facebook, Instagram and Messenger. The templates allow advertisers to choose from a variety of layouts once they have uploaded their creative assets to Ads Manager, and come with editing tools to select background color, text and cropping options. “We’re making it easier for businesses of any size to create for fullscreen vertical stories placements,” writes Facebook on its business blog. Streetbees reported a 40% increase in incremental app installs and a 29% reduction in cost per incremental app click when using the new templates.

Instagram launches new data-security features. Instagram is giving users more control over their data with a new feature that allows them to remove third-party apps connected to their account. Under the account “Settings” page in the app, users will be able to click on “Security” to find the “Apps and Websites” option. From there, users can remove any third-party apps to keep them from accessing their data. The social media platform is also launching an authorization screen that will list all the information a third-party app is requesting to access. Users will have the option to cancel access to their data directly from the authorization screen. Unfortunately, it may take some time before all users have access to the new features — Instagram reports they will be rolling out over the next six months.

Reddit opens up its content to Snapchat. Reddit announced a new Snapchat integration, making it easy for users who have Snapchat downloaded on their mobile device to share Reddit posts on the app. “Simply tap the ‘share’ icon on an image, text or link-based post on Reddit’s iOS app and select the Snapchat option. Then choose a few friends to send the post to, or add it to your Story,” writes Reddit on its Upvoted Blog. Once posted to Snapchat, the content will include a sticker with the Reddit logo and source information. This is the first content-sharing integration for Reddit. Sharable content is limited to posts from “Safe for Work” communities and communities in good standing on the platform. At launch, the feature is only available on iOS, but will be rolling out on Android devices soon.

More bad news for Facebook Libra. Things are not looking good for Facebook’s Libra cryptocurrency project. Last week, PayPal announced it was pulling its participation in the network, and now eBay, Stripe, Visa and Mastercard are following suit. According to Gizmodo, the only two remaining payment platform partners are Mercado Pago and PayU. Visa said it would continue to evaluate the project and that its ultimate decision would be based on the Libra Association’s ability to meet regulatory expectations.

“Visa’s continued interest in Libra stems from our belief that well-regulated blockchain-based networks could extend the value of secure digital payments to a greater number of people and places, particularly in emerging and developing markets,” a Visa spokesperson told CNBC. When Facebook initially launched the Libra Association, it had 28 companies that had agreed to be part of the group. That number is now at 22 — with each member agreeing to pay a $10 million fee to be part of the association and have voting rights. The Libra Association is scheduled to meet next week in Geneva to review the charter and appoint board members, reports CNBC.

Pinterest Lite. Pinterest is making an effort to ensure its platform is available to users in emerging markets. The company launched its Pinterest Lite app last week, offering an app that downloads faster and takes up less space on a mobile device. This is the second go at offering a Lite version of the app; according to TechCrunch, a Pinterest Lite app was pulled from Google Play last year. The latest Lite app is the result of a project Pinterest started in July 2017, when it formed a team tasked with rewriting its mobile web app from scratch as a progressive web app, reports TechCrunch. The new app is designed to offer a better user experience for people in low-bandwidth environments on limited data plans.

Facebook holiday preparations. Facebook is rolling out new Story Ad templates across Facebook, Instagram and Messenger in time for holiday promotions. “We know businesses have limited resources and time, and it may not always be possible to create new assets for ad campaigns. So we’re making it easier for businesses of all sizes to create vertical, full-screen assets,” writes Facebook on its News Blog. For Messenger, the company is rolling out instant replies in the coming weeks so that businesses can automatically respond to customer communications and create saved replies for commonly asked questions. It is also making it possible to set up an “away message” in Messenger for when a business is closed. 

New Twitter app for Mac users. After sunsetting its Mac desktop client last year, Twitter has launched a new Twitter app for Macs, but it is only available on the latest version of macOS, Catalina. Twitter reported in June its plans to take advantage of Apple’s Mac Catalyst, a toolset that allows developers to bring iPad apps to the Mac desktop, reports TechCrunch. “Twitter had been one of the more highly anticipated Catalyst apps,” writes TechCrunch, noting that many users were left to rely on third-party apps like TweetDeck, or paid applications like Twitterrific 5 or Tweetbot 3, when Twitter stopped offering its Mac desktop app. According to TechCrunch, the new app is free and the interface is consistent with the rest of Twitter’s platform apps, but the timeline doesn’t refresh in real time.

On the Move

The Minneapolis-based TV ad agency Marketing Architects has hired Marin Suska as its new VP of client growth. Suska, formerly the agency’s director of media services, is returning after serving in various marketing roles at multiple agencies, including Haworth Marketing + Media, The Nerdery and Digital River. “Marketing Architects is a different agency than the one I knew working on the media team years ago,” said Suska. “The strategic nature of how this agency does business with clients, the colossal shift from radio to TV, and the greater goals lying ahead all played a role in my interest in returning to be a part of something big.”

Richard Nicoll has been named chief commerce officer and managing director at Liquid Omnicommerce, a Dubai-based retail consultancy. In his new role, Nicoll will be responsible for driving regional growth and enhancing the agency’s offerings. “He is the real pioneer of shopper marketing in the UAE,” said Liquid Omnicommerce founder Sachinnn Laala. “With a wealth of experience in highly competitive global markets, the insights he will bring will benefit our clients tremendously.” Before joining Liquid Omnicommerce, Nicoll served as the chief shopper marketing officer for Publicis Communications in Asia.

Marketsmith Inc., a woman-owned marketing agency in New Jersey, has added three new hires to its executive team this month. Jo Maggiore has joined the agency as creative director, Samantha Foy has been named senior director of digital media and Rachel Schulties is the new VP of client performance. “We have brought in strong, accomplished women who each bring something unique to us, but all embrace data, analytics and modeling,” said Marketsmith President Rob Bochicchio. “More importantly, these leaders have made their mark with great clients and bring that expertise to Marketsmith to take outcomes for our client partners to a whole new level.” Before joining the agency, Maggiore was the director of digital creative at GNC. Foy previously worked at Active International, and Schulties was in managed services at Digital Media Solution.






