How Facebook Is Prepared for the 2019 UK General Election

Today, leaders from our offices in London and Menlo Park, California spoke with members of the press about Facebook’s efforts to prepare for the upcoming General Election in the UK on December 12, 2019. The following is a transcript of their remarks.

Rebecca Stimson, Head of UK Public Policy, Facebook

We wanted to bring you all together, now that the UK General Election is underway, to set out the range of actions we are taking to help ensure this election is transparent and secure – to answer your questions and to point you to the various resources we have available.  

There has already been a lot of focus on the role of social media within the campaign and there is a lot of information for us to set out. 

We have therefore gathered colleagues from both the UK and our headquarters in Menlo Park, California, covering our politics, product, policy and safety teams to take you through the details of those efforts. 

I will just make a few opening remarks before we dive into the details.

Helping protect elections is one of our top priorities and over the last two years we’ve made some significant changes – these broadly fall into three camps:

  • We’ve introduced greater transparency so that people know what they are seeing online and can scrutinize it more effectively; 
  • We have built stronger defenses to prevent things like foreign interference; 
  • And we have invested in both people and technology to ensure these new policies are effective.

So taking these in turn. 

Transparency

On the issue of transparency, we've tightened our rules to make political ads much more transparent, so people can see who is trying to influence their vote and what they are saying. 

We’ll discuss this in more detail shortly, but to summarize:  

  • Anybody who wants to run political ads must go through a verification process to prove who they are and that they are based here in the UK; 
  • Every political ad is labelled so you can see who has paid for it;
  • Anybody can click on any ad they see on Facebook and get more information on why they are seeing it, as well as block ads from particular advertisers;
  • And finally, we put all political ads in an Ad Library so that everyone can see what ads are running, the types of people who saw them and how much was spent – not just while the ads are live, but for seven years afterwards.

Taken together these changes mean that political advertising on Facebook and Instagram is now more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails. 

This is the first UK general election since we introduced these changes and we’re already seeing many journalists using these transparency tools to scrutinize the adverts which are running during this election – this is something we welcome and it’s exactly why we introduced these changes. 

Defense 

Turning to the stronger defenses we have put in place.

Nathaniel will shortly set out in more detail our work to prevent foreign interference and coordinated inauthentic behavior. But before he does I want to be clear right up front how seriously we take these issues and our commitment to doing everything we can to prevent election interference on our platforms. 

So just to highlight one of the things he will be talking about – we have, as part of this work, cracked down significantly on fake accounts. 

We now identify and shut down millions of fake accounts every day, many just seconds after they are created.

Investment

And lastly turning to investment in these issues.

We now have more than 35,000 people working on safety and security. We have been building and rolling out many of the new tools you will be hearing about today. And as Ella will set out later, we have introduced a number of safety measures including a dedicated reporting channel so that all candidates in the election can flag any abusive and threatening content directly to our teams.  

I’m also pleased to say that – now the election is underway – we have brought together an Elections Taskforce of people from our teams across the UK, EMEA and the US who are already working together every day to ensure election integrity on our platforms. 

The Elections Taskforce will be working on issues including threat intelligence, data science, engineering, operations, legal and others. It also includes representatives from WhatsApp and Instagram.

As we get closer to the election, these people will be brought together in physical spaces in their offices – what we call our Operations Centre. 

It’s important to remember that the Elections Taskforce is an additional layer of security on top of our ongoing monitoring for threats on the platform which operates 24/7. 

And while there will always be further improvements we can and will continue to make, and we can never say there won’t be challenges to respond to, we are confident that we’re better prepared than ever before.  

Political Ads

Before I wrap up this intro section of today’s call I also want to address two of the issues that have been hotly debated in the last few weeks – firstly whether political ads should be allowed on social media at all and secondly whether social media companies should decide what politicians can and can’t say as part of their campaigns. 

As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether. They account for just 0.5% of our revenue and they’re always destined to be controversial. 

But we believe it’s important that candidates and politicians can communicate with their constituents and would-be constituents. 

Online political ads are also important for both new challengers and campaigning groups to get their message out. 

Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether. 

And there’s also a really difficult question – if you were to consider banning political ads, where do you draw the line – for example, would anyone advocate for blocking ads for important issues like climate change or women’s empowerment? 

Turning to the second issue – there is also a question about whether we should decide what politicians and political parties can and can’t say.  

We don’t believe a private company like Facebook should censor politicians. This is why we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

This doesn’t mean that politicians can say whatever they want on Facebook. They can’t spread misinformation about where, when or how to vote. They can’t incite violence. We won’t allow them to share content that has previously been debunked as part of our third-party fact-checking program. And we of course take down content that violates local laws. 

But in general we believe political speech should be heard and we don’t feel it is right for private companies like us to fact-check or judge the veracity of what politicians and political parties say. 

Facebook’s approach to this issue is in line with the way political speech and campaigns have been treated in the UK for decades. 

Here in the UK – an open democracy with a vibrant free press – political speech has always been heavily scrutinized but it is not regulated. 

The UK has decided that there shouldn’t be rules about what political parties and candidates can and can’t say in their leaflets, direct mails, emails, billboards, newspaper ads or on the side of campaign buses.  

And as we’ve seen when politicians and campaigns have made hotly contested claims in previous elections and referenda, it’s not been the role of the Advertising Standards Authority, the Electoral Commission or any other regulator to police political speech. 

In our country it’s always been up to the media and the voters to scrutinize what politicians say and make their own minds up. 

Nevertheless, we have long called for new rules for the era of digital campaigning. 

Questions around what constitutes a political ad, who can run them and when, what steps those who purchase political ads must take, how much they can spend on them and whether there should be any rules on what they can and can’t say – these are all matters that can only be properly decided by Parliament and regulators.  

Legislation should be updated to set standards for the whole industry – for example, should all online political advertising be recorded in a public archive similar to our Ad Library and should that extend to traditional platforms like billboards, leaflets and direct mail?

We believe UK electoral law needs to be brought into the 21st century to give clarity to everyone – political parties, candidates and the platforms they use to promote their campaigns.

In the meantime our focus has been to increase transparency so anyone, anywhere, can scrutinize every ad that’s run and by whom. 

I will now pass you to the team to talk you through our efforts in more detail.

  • Nathaniel Gleicher will discuss tackling fake accounts and disrupting coordinated inauthentic behavior;
  • Rob Leathern will take you through our UK political advertising measures and Ad Library;
  • Antonia Woodford will outline our work tackling misinformation and our fact-checker partnerships; 
  • And finally, Ella Fallows will fill you in on what we’re doing around the safety of candidates and how we’re encouraging people to participate in the election. 

Nathaniel Gleicher, Head of Cybersecurity Policy, Facebook 

My team leads all our efforts across our apps to find and stop what we call influence operations, coordinated efforts to manipulate or corrupt public debate for a strategic goal. 

We also conduct regular red team exercises, both internally and with external partners to put ourselves into the shoes of threat actors and use that approach to identify and prepare for new and emerging threats. We’ll talk about some of the products of these efforts today. 

Before I dive into some of the details: as you listen to Rob, Antonia, and me, we’re going to be talking about a number of different initiatives that Facebook is focused on, both to protect the UK General Election and, more broadly, to respond to integrity threats. I wanted to give you a brief framework for how to think about these. 

The key distinction that you’ll hear again and again is a distinction between content and behavior. At Facebook, we have policies that enable us to take action when we see content that violates our Community Standards. 

In addition, we have the tools that we use to respond when we see an actor engaged in deceptive or violating behavior, and we keep these two efforts distinct. And so, as you listen to us, we’ll be talking about different initiatives we have in both dimensions. 

Under content for example, you’ll hear Antonia talk about misinformation, about voter suppression, about hate speech, and about other types of content that we can take action against if someone tries to share that content on our platform. 

Under the behavioral side, you’ll hear me and you’ll hear Rob also mention some of our work around influence operations, around spam, and around hacking. 

I’m going to focus in particular on the first of these, influence operations; but the key distinction that I want to make is when we take action to remove someone because of their deceptive behavior, we’re not looking at, we’re not reviewing, and we’re not considering the content that they’re sharing. 

What we’re focused on is the fact that they are deceiving or misleading users through their actions. For example, using networks of fake accounts to conceal who they are and conceal who’s behind the operation. So we’ll refer back to these, but I think it’s helpful to distinguish between the content side of our enforcement and the behavior side of our enforcement. 

And that’s particularly important because we’ve seen some threat actors who work to understand where the boundaries are for content and make sure for example that the type of content they share doesn’t quite cross the line. 

And when we see someone doing that, because we have behavioral enforcement tools as well, we’re still able to make sure we’re protecting authenticity and public debate on the platform. 

In each of these dimensions, there are four pillars to our work. You’ll hear us refer to each of these during the call as well. These four fit together: no one of them by itself would be enough, but all four of them together give us a layered approach to defending public debate and ensuring authenticity on the platform. 

We have expert investigative teams that conduct proactive investigations to find, expose, and disrupt sophisticated threat actors. As we do that, we learn from those investigations and we build automated systems that can disrupt any kind of violating behavior across the platform at scale. 

We also, as Rebecca mentioned, build transparency tools so that users, external researchers and the press can see who is using the platform and ensure that they’re engaging authentically. It also forces threat actors who are trying to conceal their identity to work harder to conceal and mislead. 

And then lastly, one of the things that’s extremely clear to us, particularly in the election space, is that this is a whole of society effort. And so, we work closely with partners in government, in civil society, and across industry to tackle these threats. 

And we’ve found that we’re most effective when we bring our tools to the table and work with government and other partners to respond to and get ahead of these challenges as they emerge. 

One of the ways that we do this is through proactive investigations into the deceptive efforts engaged in by bad actors. Over the last year, our investigative teams, working together with our partners in civil society, law enforcement, and industry, have found and stopped more than 50 campaigns engaged in coordinated inauthentic behavior across the world. 

This includes an operation we removed in May that originated from Iran and targeted a number of countries, including the UK. As we announced at the time, we removed 51 Facebook accounts, 36 pages, seven groups, and three Instagram accounts involved in coordinated inauthentic behavior. 

The page admins and account owners typically posted content in English or Arabic, and most of the operation had no focus on a particular country, although there were some pages focused on the UK and the United States. 

Similarly, in March we announced that we removed a domestic UK network of about 137 Facebook and Instagram accounts, pages, and groups that were engaged in coordinated inauthentic behavior. 

The individuals behind these accounts presented themselves as far right and anti-far right activists, frequently changed page and group names, and operated fake accounts to engage in hate speech and spread divisive comments on both sides of the political debate in the UK. 

These are the types of investigations that we focus our core investigative team on. Whenever we see a sophisticated actor that’s trying to evade our automated systems, those teams, which are made up of experts from law enforcement, the intelligence community, and investigative journalism, can find and reveal that behavior. 

When we expose it, we announce it publicly and we remove it from the platform. Those expert investigators proactively hunt for evidence of these types of coordinated inauthentic behavior (CIB) operations around the world. 

This team has not seen evidence of widespread foreign operations aimed at the UK. But we are continuing to search for this and we will remove and publicly share details of networks of CIB that we identify on our platforms. 

As always with these takedowns, we remove these operations for the deceptive behavior they engaged in, not for the content they shared. This is that content/behavior distinction that I mentioned earlier. As we’ve improved our ability to disrupt these operations, we’ve also deepened our understanding of the types of threats out there and how best to counter them. 

Based on these learnings, we’ve recently updated our inauthentic behavior policy, which is posted publicly as part of our Community Standards, to clarify how we enforce against the spectrum of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state. For each investigation, we isolate any new behaviors we see and then we work to automate detection of them at scale. This connects to that second pillar of our integrity work. 

And this slows down bad actors and lets our investigators focus on improving our defenses against emerging threats. A good example of this work is our efforts to find and block fake accounts, which Rebecca mentioned. 

We know bad actors use fake accounts as a way to mask their identity and inflict harm on our platforms. That’s why we’ve built an automated system to find and remove these fake accounts. And each time we conduct one of these takedowns, or any other of our enforcement actions, we learn more about what fake accounts look like and how we can have automated systems that detect and block them. 

This is why we have systems in place today that block millions of fake accounts every day, often within minutes of their creation. And because information operations often target multiple platforms as well as traditional media, the collaborations with industry, civil society and government that I mentioned are essential. 

In addition to that, we are building increased transparency on our platform, so that the public along with open source researchers and journalists can find and expose more bad behavior themselves. 

This effort on transparency is incredibly important. Rob will talk about this in detail, but I do want to add one point here, specifically around pages. Increasingly, we’re seeing people operate pages that don’t clearly disclose the organization behind them, as a way to make others think they are independent. 

We want to make sure Facebook is used to engage authentically, and that users understand who is speaking to them and what perspective they are representing. We noted last month that we would be announcing new approaches to address this, and today we’re introducing a policy to require more accountability for pages that are concealing their ownership in order to mislead people.

If we find a page is misleading people about its purpose by concealing its ownership, we will require it to go through our business verification process, which we recently announced, and show more information on the page itself about who is behind that page, including the organization’s legal name and verified city, phone number, or website in order for it to stay up. 

This type of increased transparency helps ensure that the platform continues to be authentic and the people who use the platform know who they’re talking to and understand what they’re seeing. 

Rob Leathern, Director of Product, Business Integrity, Facebook 

In addition to making pages more transparent as Nathaniel has indicated, we’ve also put a lot of effort into making political advertising on Facebook more transparent than it is anywhere else. 

Every political and issue ad that runs on Facebook in the UK now goes into our Ad Library, a public archive that everyone can access, regardless of whether or not they have a Facebook account. 

We launched this in the UK in October 2018 and, since then, over 116,000 ads related to politics, elections, and social issues have been placed in the UK Ad Library. You can find all the ads that a candidate or organization is running, including how much they spent and who saw the ad. And we’re storing these ads in the Ad Library for seven years. 

Other media such as billboards, newspaper ads, direct mail, leaflets or targeted emails don’t today provide this level of transparency into the ads and who is seeing them. And as a result, we’ve seen a significant number of press stories regarding the election driven by the information in Facebook’s Ad Library. 

We’re proud of this resource and insight into ads running on Facebook and Instagram and that it is proving useful for media and researchers. And just last month, we made even more changes to both the Ad Library and Ad Library Reports. These include adding details on who the top advertising spenders are in each country of the UK, as well as providing an additional view by different date ranges, which people have been asking for. 

We’re now also making it clear which platform an ad ran on – for example, whether an ad ran on Facebook, Instagram, or both. 

For those of you unfamiliar with the Ad Library, which you can see at Facebook.com/adlibrary, I thought I’d run through it quickly. 

So this is the Ad Library. Here you see all the ads that have been classified as relating to politics or issues. We keep them in the library for seven years. As I mentioned, you can find the Ad Library at Facebook.com/adlibrary. 

You can also access the Ad Library through a specific page. For example, for this Page, you can see not only the advertising information, but also the transparency about the Page itself, along with the spend data. 

Here is an example of the ads that this Page is running, both active as well as inactive. In addition, if an ad has been disapproved for violating any of our ad policies, you’re able to see those ads as well. 

Here’s what it looks like if you click to see more detail about a specific ad. You’ll be able to see individual ad spend, impressions, and demographic information. 

And you’ll also be able to compare the individual ad spend to the overall macro spend by the Page, which is tracked in the section below. If you scroll back up, you’ll also be able to see the other information about the disclaimer that has been provided by the advertiser. 

We know we can’t protect elections alone and that everyone plays a part in keeping the platform safe and respectful. We ask people to share responsibly and to let us know when they see something that may violate our Advertising Policies and Community Standards. 

We also have the Ad Library API so journalists and academics can analyze ads about social issues, elections, or politics. The Ad Library application programming interface, or API, allows people to perform customized keyword searches of ads stored in the Ad Library. You can search data for all active and inactive issue, electoral or political ads. 
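As an illustration, a keyword search against the Ad Library API looks roughly like the following minimal sketch in Python. The endpoint, parameter, and field names follow Facebook's public Graph API documentation for the `ads_archive` endpoint as of this call, but the version number, exact field list, and token handling here are assumptions and may have changed.

```python
# Minimal sketch of a keyword search against the Ad Library API
# (the Graph API's ads_archive endpoint). Endpoint, parameter, and
# field names are assumptions based on Facebook's public docs and
# may have changed since.
from urllib.parse import urlencode

GRAPH_URL = "https://graph.facebook.com/v5.0/ads_archive"  # version is assumed

def build_ads_archive_query(search_terms, countries, access_token, limit=25):
    """Build the URL for a keyword search of political and issue ads."""
    params = {
        "search_terms": search_terms,
        # countries are passed as a bracketed list of ISO codes, e.g. ['GB']
        "ad_reached_countries": "['" + "','".join(countries) + "']",
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_active_status": "ALL",  # both active and inactive ads
        "fields": "page_name,ad_creative_body,spend,impressions",
        "limit": limit,
        "access_token": access_token,
    }
    return GRAPH_URL + "?" + urlencode(params)

# A hypothetical query for UK ads mentioning "election":
url = build_ads_archive_query("election", ["GB"], "YOUR_ACCESS_TOKEN")
```

Fetching that URL (for example with an HTTP client) would return a JSON page of matching ads plus paging cursors; real use requires a valid access token from a verified account.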

You can also access the Ad Library and the data therein through a specific page or through the Ad Library Report. The Ad Library Report allows you to see the spend by specific advertisers, and you can download a full report of the data.

Here we also allow you to see the spending by location; if you click in, you can see who the top spenders in each region are. 

Our goal is to provide an open API to news organizations, researchers, groups and people who can hold advertisers and us more accountable. 

We’ve definitely seen a lot of press, journalists, and researchers examining the data in the Ad Library and using it to generate these insights and we think that’s exactly a part of what will help hold both us and advertisers more accountable.

We hope these measures will build on existing transparency we have in place and help reporters, researchers and most importantly people on Facebook learn more about the Pages and information they’re engaging with. 

Antonia Woodford, Product Manager, Misinformation, Facebook

We are committed to fighting the spread of misinformation and viral hoaxes on Facebook. It is a responsibility we take seriously.

To accomplish this, we follow a three-pronged approach which we call remove, reduce, and inform. First and foremost, when something violates the law or our policies, we’ll remove it from the platform altogether.

As Nathaniel touched on, removing fake accounts is a priority; the vast majority are detected and removed within minutes of registration, before a person can report them. This is a key element in eliminating the potential spread of misinformation. 

The reduce and inform part of the equation is how we reduce the spread of problematic content that doesn’t violate the law or our community standards, while still ensuring freedom of expression on the platform and this is where the majority of our misinformation work is focused. 

To reduce the spread of misinformation, we work with third party fact-checkers. 

Through a combination of reporting from people on our platform and machine learning, potentially false posts are sent to third party fact-checkers to review. These fact-checkers review this content, check the facts, and then rate its accuracy. They’re able to review links in news articles as well as photos, videos, or text posts on Facebook.

After content has been rated false, our algorithm heavily downranks this content in News Feed so it’s seen by fewer people and far less likely to go viral. Fact-checkers can fact-check any posts they choose based on the queue we send them. 
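The flag, review, and downrank flow described above can be sketched as follows. Everything here is illustrative: the class names, the report/model-score triage rule, and the downrank factor are hypothetical assumptions, not Facebook's actual implementation.

```python
# Illustrative sketch of the flag -> review -> downrank flow described
# above. All names, thresholds, and the downrank factor are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    rank_score: float = 1.0  # relative News Feed ranking weight
    rating: str = "unrated"  # set by a fact-checker after review

def build_review_queue(posts, user_reports, model_scores, threshold=0.8):
    """Combine user reports and a classifier score to pick posts for review."""
    return [post for post, reports, score
            in zip(posts, user_reports, model_scores)
            if reports > 0 or score >= threshold]

def apply_rating(post, rating, downrank_factor=0.1):
    """Record a fact-checker's rating; heavily downrank false content."""
    post.rating = rating
    if rating in ("false", "partly false"):
        post.rank_score *= downrank_factor  # seen by far fewer people
    return post

posts = [Post("viral claim"), Post("benign update")]
queue = build_review_queue(posts, user_reports=[3, 0], model_scores=[0.9, 0.1])
apply_rating(queue[0], "false")
```

Note that in this sketch, as in the program described, a rating only lowers a post's ranking weight; it does not remove the post.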

And lastly, as part of our work to inform people about the content they see on Facebook, we just launched a new design to better warn people when they see content that has been rated false or partly false by our fact-checking partners.

People will now see a more prominent label on photos and videos that have been fact-checked as false or partly false. This is a grey screen that sits over a post and says ‘false information’ and points people to fact-checkers’ articles debunking the claims. 

These clearer labels are what people have told us they want, what they have told us they expect Facebook to do, and what experts tell us is the right tactic for combating misinformation.

We’re rolling this change out in the UK this week for any photos and videos that have been rated through our fact-checking partnership. Fact-checking is just one part of our overall strategy to combat misinformation, but it is a fundamental one, and I want to share a little bit more about the program.

Our fact-checking partners are all accredited by the International Fact-Checking Network, which requires them to abide by a code of principles such as nonpartisanship and transparency of sources.

We currently have over 50 partners in over 40 languages around the world. As Rebecca outlined earlier, we don’t send content or ads from politicians and political parties to our third party fact-checking partners.

Here in the UK we work with Full Fact and FactCheckNI as part of our program. To recap: we identify content that may be false using signals such as feedback from our users. This content is all submitted into a queue for our fact-checking partners to access. These fact-checkers then choose which content to review, check the facts, and rate the accuracy of the content.

These fact-checkers are independent organizations, so it is at their discretion what they choose to investigate. They can also fact-check whatever content they want outside of the posts we send their way.

If a fact-checker rates a story as false, it will appear lower in News Feed with the false information screen I mentioned earlier. This significantly reduces the number of people who see it.

Other posts that Full Fact and FactCheckNI choose to fact-check outside of our system will not be impacted on Facebook. 

And finally, on Tuesday we announced a partnership with the International Fact-Checking Network to create the Fact-Checking Innovation Initiative. This will fund innovation projects, new formats, and technologies to help benefit the broader fact-checking ecosystem. 

We are investing $500,000 into this new initiative, where organizations can submit applications for projects to improve fact-checkers’ scale and efficiency, increase the reach of fact-checks to empower more people with reliable information, build new tools to help combat misinformation, and encourage newsrooms to collaborate in fact-checking efforts.

Anyone from the UK can be a part of this new initiative. 

Ella Fallows, Politics and Government Outreach Manager UK, Facebook 

Our team’s role involves two main tasks: working with MPs and candidates to ensure they have a good experience and get the most from our platforms; and looking at how we can best use our platforms to promote participation in elections.

I’d like to start with the safety of MPs and candidates using our platforms. 

There is, rightly, a focus in the UK on the current tone of political debate. Let me be clear: hate speech and threats of violence have no place on our platforms and we’re investing heavily to tackle them. 

Additionally, for this campaign we have this week written to political parties and candidates setting out the range of safety measures we have in place and also to remind them of the terms and conditions and the Community Standards which govern their use of our platforms. 

As you may be aware, every piece of content on Facebook and Instagram has a report button, and when content is reported to us that violates our Community Standards – which set out what is and isn’t allowed on Facebook – it is removed. 

Since March this year, MPs have also had access to a dedicated reporting channel to flag any abusive and threatening content directly to our teams. Now that the General Election is underway we’re extending that support to all prospective candidates, making our team available to anyone standing to allow them to quickly report any concerns across our platforms and have them investigated. 

This is particularly pertinent to Tuesday’s news from the Government calling for a one-stop shop for candidates; we have already set up our own, so that there is a single point of contact for candidates on issues across Facebook and Instagram.

Behind that reporting channel sits my team, which is focused on escalating reports from candidates and making sure we’re taking action as quickly as possible on anything that violates our Community Standards or Advertising Policies. 

But that team is not working alone – it’s backed up by our 35,000-strong global safety and security team that oversees content and behavior across the platform every day. 

And our technology is also helping us to automatically detect more of this harmful content. For example, while there is further to go, the proportion of hate speech we remove before it’s reported to us has almost tripled over the last two years.

We also have a Government, Politics & Advocacy Portal which is a home for everything a candidate will need during the campaign, including ‘how to’ guides on subjects such as registering as a political advertiser and running campaigns on Facebook, best practice tips and troubleshooting guides for technical issues.

We’re working with all of the political parties and the Electoral Commission to ensure candidates are aware of both the reporting channel to reach my team and the Government, Politics & Advocacy Portal.

We’re also working with political parties and the Electoral Commission to help candidates prepare for the election through a few different initiatives:

  • Firstly, while we don’t provide ongoing guidance or embed anyone into campaigns, we have held sessions with each party on how to use and get the most from our platforms for their campaigns, and we’ll continue to hold webinars throughout the General Election period for any candidate and their staff to join.
  • We’re also working with women’s networks within the parties to hold dedicated sessions for female candidates providing extra guidance on safety and outlining the help available to prevent harassment on our platforms. We want to ensure we’re doing everything possible to help them connect with their constituents, free from harassment.
  • Finally, we’re working with the Electoral Commission and political parties to distribute to every candidate in the General Election the safety guides we have put together, to ensure we reach everyone not just those attending our outreach sessions. 

For example, we have developed a range of tools that allow public figures to moderate and filter the content that people put on their Facebook Pages to prevent negative content appearing in the first place. People who help manage Pages can hide or delete individual comments. 

They can also proactively moderate comments and posts by visitors by turning on the profanity filter, or blocking specific words or lists of words that they do not want to appear on their Page. Page admins can also remove or ban people from their Pages. 
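The word-list moderation described above can be illustrated with a simple keyword filter. This is a hypothetical sketch of the general technique only; Facebook’s actual Page moderation tools are configured through the Page settings, not through code, and the function names here are my own:

```python
import re

def build_comment_filter(blocked_words):
    """Return a function that flags comments containing any blocked word.

    Illustrative sketch of word-list moderation, not Facebook's
    implementation; Page admins configure blocked words in the UI.
    """
    # Match whole words only, case-insensitively.
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in blocked_words) + r")\b",
        re.IGNORECASE,
    )

    def should_hide(comment_text):
        return bool(pattern.search(comment_text))

    return should_hide

# A Page admin's blocked-word list (hypothetical examples).
should_hide = build_comment_filter(["spamword", "slur"])
```

The word-boundary anchors mean only exact words are caught, which mirrors the trade-off any word-list filter faces between over- and under-blocking.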

We hope these steps help every candidate to reach their constituents, and get the most from our platforms. But our work doesn’t stop there.

The second area our team focuses on is promoting civic engagement. In addition to supporting and advising candidates, we also, of course, want to help promote voter participation in the election. 

For the past five years, we’ve used badges and reminders at the top of people’s News Feeds to encourage people to vote in elections around the world. The same will be true for this campaign. 

We’ll run reminders to register to vote, with a link to the Electoral Commission’s voter registration page, in the week running up to the voter registration deadline. 

On election day itself, we’ll also run a reminder to vote with a link to the Electoral Commission website so voters can find their polling station and any information they need. This will include a button to share that you voted. 

We know from speaking to the Electoral Commission that these reminders for past national votes in the UK have had a positive effect on voter registration.

We hope that this combination of steps will help to ensure both candidates and voters engaging with the General Election on our platforms have the best possible experience.




Removing More Coordinated Inauthentic Behavior From Russia


By Nathaniel Gleicher, Head of Cybersecurity Policy

Today, we removed three networks of accounts, Pages and Groups for engaging in foreign interference — which is coordinated inauthentic behavior on behalf of a foreign actor — on Facebook and Instagram. They originated in Russia and targeted Madagascar, Central African Republic, Mozambique, Democratic Republic of the Congo, Côte d’Ivoire, Cameroon, Sudan and Libya. Each of these operations created networks of accounts to mislead others about who they were and what they were doing. Although the people behind these networks attempted to conceal their identities and coordination, our investigation connected these campaigns to entities associated with Russian financier Yevgeniy Prigozhin, who was previously indicted by the US Justice Department. We have shared information about our findings with law enforcement, policymakers and industry partners.

We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. We’re taking down these Pages, Groups and accounts based on their behavior, not the content they posted. In each of these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.

We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge. We’re committed to continually improving to stay ahead. That means building better technology, hiring more people and working closer with law enforcement, security experts and other companies.

What We’ve Found So Far

Today, we removed 35 Facebook accounts, 53 Pages, seven Groups and five Instagram accounts that originated in Russia and focused on Madagascar, the Central African Republic, Mozambique, Democratic Republic of the Congo, Côte d’Ivoire and Cameroon. The individuals behind this activity used a combination of fake accounts and authentic accounts of local nationals in Madagascar and Mozambique to manage Pages and Groups, and post their content. They typically posted about global and local political news including topics like Russian policies in Africa, elections in Madagascar and Mozambique, election monitoring by a local non-governmental organization and criticism of French and US policies.

  • Presence on Facebook: 35 Facebook accounts, 53 Pages, 7 Groups and 5 Instagram accounts.
  • Followers: About 475,000 accounts followed one or more of these Pages and around 450 people followed one or more of these Groups and around 650 people followed one or more of these Instagram accounts.
  • Advertising: Around $77,000 in spending for ads on Facebook paid for in US dollars. The first ad ran in April 2018 and the most recent ad ran in October 2019.

We found this activity as part of our internal investigations into Russia-linked, suspected coordinated inauthentic behavior in Africa. Our analysis benefited from open source reporting.

Below is a sample of the content posted by some of these Pages:

Page Name: “Sudan in the Eyes of Others” Caption Translation: Yam Brands, the company that owns the KFC franchise stated that it intends on opening 3 branches of its franchise in Sudan. The Spokesman of the company based in the american state of Kentucky, Takalaty Similiny, issued a statement saying that the branches are currently under construction and will open in mid November.

Translation: The Police of the Republic of Mozambique announced today that nine members of RENAMO were detained for their participation in the attempt to remove urns from one of the voting posts in the district of Machanga, Sofala and for having vandalised the infrastructure. According to the spokesperson for PRM, that spoke in a press-conference in Maputo, the nine people are accused of having lead around 300 RENAMO supporters that tried to remove the urns during counting at the Inharingue Primary School.

Translation: President of Central African Republic asked Vladimir Putin to organize the delivery of heavy weapons. Wednesday, in Sochi, the president Faustin-Archange Touadera asked his counterpart Vladimir Putin to increase the military assistance to the Republic, asking specifically for the supply of heavier weapons. “Russia is giving a considerable help to our country. They already carried out two weapons deliveries, trained our national troops, trained police officers, but for more effectiveness, we need heavy weapons. We hope that Russia will be able to allocate us combat vehicles, artillery canons and other killing weapons in order for us to bring our people to safety” said Touadera.
However, there is still an issue which is blocking us to implement this. The embargo on Central African Republic was not fully lifted in order for Russia to implement the plans of Touadera. Until now, it is only possible to supply weapons with a caliber less than 14,5 mm.
The embargo do not stop armed groups to get illegally heavy weapons for themselves, which is not helping the efforts of the government to establish peace.
We ask the Security Council of United Nations to draw attention on what their (sometimes reckless) sanctions are bringing.

We also removed 17 Facebook accounts, 18 Pages, three Groups and six Instagram accounts that originated in Russia and focused primarily on Sudan. The people behind this activity used a combination of authentic accounts of Sudanese nationals, fake and compromised accounts — some of which had already been disabled by our automated systems — to comment, post and manage Pages posing as news organizations, as well as direct traffic to off-platform sites. They frequently shared stories from SUNA (Sudan’s state news agency) as well as Russian state-controlled media Sputnik and RT, and posted primarily in Arabic and some in English. The Page administrators and account owners posted about local news and events in Sudan and other countries in Sub-Saharan Africa, including Sudanese-Russian relations, US-Russian relations, Russian foreign policy and Muslims in Russia.

  • Presence on Facebook and Instagram: 17 Facebook accounts, 18 Pages, 3 Groups and 6 accounts on Instagram.
  • Followers: About 457,000 accounts followed one or more of these Pages, about 1,300 accounts joined at least one of these Groups and around 2,900 people followed one or more of these Instagram accounts.
  • Advertising: Around $160 in spending for ads on Facebook paid for in Russian rubles. The first ad ran in April 2018 and the most recent ad ran in September 2019.

We found this activity as part of our internal investigations into Russia-linked, suspected coordinated inauthentic behavior in the region.

Below is a sample of the content posted by some of these Pages:

Translation (first two paragraphs): “American and British intelligence put together false information about Putin’s inner circle… a diplomatic military source said that American and British intelligence agencies are preparing to leak false information about people close to the president, Vladimir Putin, and the leadership of the Russian defense ministry.“

Page title: “Nile Echo”
Post translation (first paragraph only): French movements to abort Russian and Sudanese mediation in the Central African Republic…

Translation: #Article (I am completely sure that the person in the cell is not [ousted Sudanese leader Omar] al-Bashir, but I don’t have physical evidence proving this) Aml al-Kordofani wrote: The person resembling al-Bashir who is sitting behind the bars..who is he? The double game continues between the military and the sons of Gosh (referring to former Sudanese intelligence chief Salah Abdullah Mohamed Saleh) according to the American plan. The American plan employs psychological operations, as we mentioned earlier, and are undertaken by a huge office within the US Department of Defense.

Finally, we removed a network of 14 Facebook accounts, 12 Pages, one Group and one Instagram account that originated in Russia and focused on Libya. The individuals behind this activity used a combination of authentic accounts of Egyptian nationals, fake and compromised accounts — some of which had already been disabled by our automated systems — to manage Pages and drive people to an off-platform domain. They frequently shared stories from Russian state-controlled media Sputnik and RT. The Page admins and account owners typically posted in Arabic about local news and geopolitical issues including Libyan politics, crimes, natural disasters, public health, Turkey’s alleged sponsoring of terrorism in Libya, illegal migration, militia violence, the detention of Russian citizens in Libya for alleged interference in elections and a meeting between Khalifa Haftar, head of the Libyan National Army, and Putin. Some of these Pages posted content on multiple sides of political debate in Libya, including criticism of the Government of National Accord, US foreign policy, and Haftar, as well as support of Muammar Gaddafi and his son Saif al-Islam Gaddafi, Russian foreign policy, and Khalifa Haftar.

  • Presence on Facebook and Instagram: 14 Facebook accounts, 12 Pages, one Group and one account on Instagram.
  • Followers: About 212,000 accounts followed one or more of these Pages, 1 account joined this Group and around 29,300 people followed this Instagram account.
  • Advertising: About $10,000 USD paid for primarily in US dollars, euros and Egyptian pounds. The first ad ran in May 2014 and the most recent ad ran in October 2019.

Based on a tip shared by the Stanford Internet Observatory, we conducted an investigation into suspected Russia-linked coordinated inauthentic behavior and identified the full scope of this activity. Our analysis benefited from open source reporting.

Below is a sample of the content posted by some of these Pages:

Page name: “Voice of Libya” Post translation: The Government of National Accord [GNA] practices hypocrisy … the detention of two Russian citizens under the pretense that they are manipulating elections in Libya. But in reality, no elections are taking place in Libya now. So the pretense under which the Russians were arrested is fictitious and contrived.

Page title: “Libya Gaddafi” Post translation: “Why was late Libyan leader Muammar al-Gaddafi killed? Everyone was happy in Libya. There are people in America who sleep under bridges. There was never any discrimination in Libya, and there were not problems. The work was good and the money, too.”

Page name: “Voice of Libya” Post translation: First meeting between Haftar and Putin in Moscow. Several sources reported on the visit of the army’s commander-in-chief, Field Marshal Khalifa Haftar, to Moscow, where he met Russian President Vladimir Putin, to discuss developments in the military and political situation in Libya. This is Haftar’s first meeting with the Russian president. He has previously visited Russia and met with senior officials in the foreign and defense ministries, and they are expected to meet again.

Page title: “Falcons of the Conqueror” Post translation: Field Marshal Haftar: Libyans decide who to elect as the next president, and it is Saif al-Islam al-Gaddafi’s right to be a candidate




Helping to Protect the 2020 US Elections


By Guy Rosen, VP of Integrity; Katie Harbath, Public Policy Director, Global Elections; Nathaniel Gleicher, Head of Cybersecurity Policy and Rob Leathern, Director of Product Management

We have a responsibility to stop abuse and election interference on our platform. That’s why we’ve made significant investments since 2016 to better identify new threats, close vulnerabilities and reduce the spread of viral misinformation and fake accounts. 

Today, almost a year out from the 2020 elections in the US, we’re announcing several new measures to help protect the democratic process and providing an update on initiatives already underway:

Fighting foreign interference

  • Combating inauthentic behavior, including an updated policy
  • Protecting the accounts of candidates, elected officials, their teams and others through Facebook Protect 

Increasing transparency

  • Making Pages more transparent, including showing the confirmed owner of a Page
  • Labeling state-controlled media on their Page and in our Ad Library
  • Making it easier to understand political ads, including a new US presidential candidate spend tracker

Reducing misinformation

  • Preventing the spread of misinformation, including clearer fact-checking labels 
  • Fighting voter suppression and interference, including banning paid ads that suggest voting is useless or advise people not to vote
  • Helping people better understand the information they see online, including an initial investment of $2 million to support media literacy projects

Fighting Foreign Interference

Combating Inauthentic Behavior

Over the last three years, we’ve worked to identify new and emerging threats and remove coordinated inauthentic behavior across our apps. In the past year alone, we’ve taken down over 50 networks worldwide, many ahead of major democratic elections. As part of our effort to counter foreign influence campaigns, this morning we removed four separate networks of accounts, Pages and Groups on Facebook and Instagram for engaging in coordinated inauthentic behavior. Three of them originated in Iran and one in Russia. They targeted the US, North Africa and Latin America. We have identified these manipulation campaigns as part of our internal investigations into suspected Iran-linked inauthentic behavior, as well as ongoing proactive work ahead of the US elections.

We took down these networks based on their behavior, not the content they posted. In each case, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action. We have shared our findings with law enforcement and industry partners. More details can be found here.

As we’ve improved our ability to disrupt these operations, we’ve also built a deeper understanding of different threats and how best to counter them. We investigate and enforce against any type of inauthentic behavior. However, the most appropriate way to respond to someone boosting the popularity of their posts in their own country may not be the best way to counter foreign interference. That’s why we’re updating our inauthentic behavior policy to clarify how we deal with the range of deceptive practices we see on our platforms, whether foreign or domestic, state or non-state.

Protecting the Accounts of Candidates, Elected Officials and Their Teams

Today, we’re launching Facebook Protect to further secure the accounts of elected officials, candidates, their staff and others who may be particularly vulnerable to targeting by hackers and foreign adversaries. As we’ve seen in past elections, they can be targets of malicious activity. However, because campaigns are generally run for a short period of time, we don’t always know who these campaign-affiliated people are, making it harder to help protect them.

Beginning today, Page admins can enroll their organization’s Facebook and Instagram accounts in Facebook Protect and invite members of their organization to participate in the program as well. Participants will be required to turn on two-factor authentication, and their accounts will be monitored for hacking, such as login attempts from unusual locations or unverified devices. And, if we discover an attack against one account, we can review and protect other accounts affiliated with that same organization that are enrolled in our program. Read more about Facebook Protect and enroll here.
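The monitoring described above, such as flagging login attempts from unusual locations, can be sketched with a simple per-account known-locations check. This is a toy illustration under my own assumptions; Facebook Protect’s real monitoring uses far richer signals (devices, behavior patterns, organizational links), and every name here is hypothetical:

```python
from collections import defaultdict

class LoginMonitor:
    """Flag logins from locations an account has never used before.

    Hypothetical sketch only -- not Facebook Protect's actual system,
    which also weighs device, behavioral and organizational signals.
    """

    def __init__(self):
        # account id -> set of countries seen on confirmed past logins
        self.known_locations = defaultdict(set)

    def record_login(self, account_id, country):
        """Remember a location the account owner confirmed as legitimate."""
        self.known_locations[account_id].add(country)

    def is_suspicious(self, account_id, country):
        """Suspicious = the account has a history, and this location
        has never appeared in it. Accounts with no history yet are
        not flagged, since there is nothing to compare against."""
        seen = self.known_locations[account_id]
        return bool(seen) and country not in seen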
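The monitoring described above, such as flagging login attempts from unusual locations, can be sketched with a simple per-account known-locations check. This is a toy illustration under my own assumptions; Facebook Protect’s real monitoring uses far richer signals (devices, behavior patterns, organizational links), and every name here is hypothetical:

```python
from collections import defaultdict

class LoginMonitor:
    """Flag logins from locations an account has never used before.

    Hypothetical sketch only -- not Facebook Protect's actual system,
    which also weighs device, behavioral and organizational signals.
    """

    def __init__(self):
        # account id -> set of countries seen on confirmed past logins
        self.known_locations = defaultdict(set)

    def record_login(self, account_id, country):
        """Remember a location the account owner confirmed as legitimate."""
        self.known_locations[account_id].add(country)

    def is_suspicious(self, account_id, country):
        """Suspicious = the account has a history, and this location
        has never appeared in it. Accounts with no history yet are
        not flagged, since there is nothing to compare against."""
        seen = self.known_locations[account_id]
        return bool(seen) and country not in seen
```

In practice, a flagged login would trigger review rather than an automatic block, which is why the program pairs this monitoring with mandatory two-factor authentication.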

Increasing Transparency

Making Pages More Transparent

We want to make sure people are using Facebook authentically, and that they understand who is speaking to them. Over the past year, we’ve taken steps to ensure Pages are authentic and more transparent by showing people the Page’s primary country location and whether the Page has merged with other Pages. This gives people more context on the Page and makes it easier to understand who’s behind it. 

Increasingly, we’ve seen people failing to disclose the organization behind their Page as a way to make people think that a Page is run independently. To address this, we’re adding more information about who is behind a Page, including a new “Organizations That Manage This Page” tab that will feature the Page’s “Confirmed Page Owner,” including the organization’s legal name and verified city, phone number or website.

Initially, this information will only appear on Pages with large US audiences that have gone through Facebook’s business verification. Pages that have gone through the new authorization process to run ads about social issues, elections or politics in the US will also have this tab. And starting in January, these advertisers will be required to show their Confirmed Page Owner. 

If we find a Page is concealing its ownership in order to mislead people, we will require it to successfully complete the verification process and show more information in order for the Page to stay up. 

Labeling State-Controlled Media

We want to help people better understand the sources of news content they see on Facebook so they can make informed decisions about what they’re reading. Next month, we’ll begin labeling media outlets that are wholly or partially under the editorial control of their government as state-controlled media. This label will be on both their Page and in our Ad Library. 

We will hold these Pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state. 

We developed our own definition and standards for state-controlled media organizations with input from more than 40 experts around the world specializing in media, governance, human rights and development. Those consulted represent leading academic institutions, nonprofits and international organizations in this field, including Reporters Without Borders, Center for International Media Assistance, European Journalism Center, Oxford Internet Institute‘s Project on Computational Propaganda, Center for Media, Data and Society (CMDS) at the Central European University, the Council of Europe, UNESCO and others. 

It’s important to note that our policy draws an intentional distinction between state-controlled media and public media, which we define as any entity that is publicly financed, retains a public service mission and can demonstrate its independent editorial control. At this time, we’re focusing our labeling efforts only on state-controlled media. 

We will update the list of state-controlled media on a rolling basis beginning in November. And, in early 2020, we plan to expand our labeling to specific posts and apply these labels on Instagram as well. For any organization that believes we have applied the label in error, there will be an appeals process. 

Making it Easier to Understand Political Ads

In addition to making Pages more transparent, we’re updating the Ad Library, Ad Library Report and Ad Library API to help journalists, lawmakers, researchers and others learn more about the ads they see. This includes:

  • A new US presidential candidate spend tracker, so that people can see how much candidates have spent on ads
  • Adding additional spend details at the state or regional level to help people analyze advertiser and candidate efforts to reach voters geographically
  • Making it clear if an ad ran on Facebook, Instagram, Messenger or Audience Network
  • Adding useful API filters, providing programmatic access to download ad creatives and a repository of frequently used API scripts.

In addition to updates to the Ad Library API, in November, we will begin testing a new database with researchers that will enable them to quickly download the entire Ad Library, pull daily snapshots and track day-to-day changes.
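The Ad Library API mentioned above is served through the Graph API’s `ads_archive` endpoint. The sketch below builds a search query against it; the endpoint path, API version and parameter names (`search_terms`, `ad_reached_countries`, `ad_type`, `fields`) reflect the public documentation as I recall it circa 2019 and should be checked against the current docs before use, and the token is a placeholder:

```python
from urllib.parse import urlencode

# Hypothetical sketch of an Ad Library API search. Verify the endpoint
# version and parameter names against Facebook's current Graph API docs.
GRAPH_URL = "https://graph.facebook.com/v5.0/ads_archive"

def build_ad_library_query(search_terms, countries, access_token):
    """Return the full request URL for an Ad Library search."""
    params = {
        "search_terms": search_terms,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": ",".join(countries),
        # Fields to return for each matching ad (assumed field names).
        "fields": "page_name,ad_creative_body,spend,ad_delivery_start_time",
        "access_token": access_token,
    }
    return GRAPH_URL + "?" + urlencode(params)

url = build_ad_library_query("election", ["US"], "YOUR_ACCESS_TOKEN")
# Fetch with urllib.request.urlopen(url) (or the `requests` library)
# once a valid access token from a verified account is in place.
```

Access requires identity verification, which is consistent with the transparency rules for political advertisers described earlier.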

Visit our Help Center to learn more about the changes to Pages and the Ad Library.

Reducing Misinformation

Preventing the Spread of Viral Misinformation

On Facebook and Instagram, we work to keep confirmed misinformation from spreading. For example, we reduce its distribution so fewer people see it: on Instagram we remove it from Explore and hashtags, and on Facebook we reduce its distribution in News Feed. On Instagram, we also make content from accounts that repeatedly post misinformation harder to find, for example by filtering that account’s content out of Explore and hashtag pages. And on Facebook, if Pages, domains or Groups repeatedly share misinformation, we’ll continue to reduce their overall distribution and we’ll place restrictions on the Page’s ability to advertise and monetize.

Over the next month, content across Facebook and Instagram that has been rated false or partly false by a third-party fact-checker will start to be more prominently labeled so that people can better decide for themselves what to read, trust and share. The labels below will be shown on top of false and partly false photos and videos, including on top of Stories content on Instagram, and will link out to the assessment from the fact-checker.

Much like we do on Facebook when people try to share known misinformation, we’re also introducing a new pop-up that will appear when people attempt to share posts on Instagram that include content that has been debunked by third-party fact-checkers.

In addition to clearer labels, we’re also working to take faster action to prevent misinformation from going viral, especially given that quality reporting and fact-checking takes time. In many countries, including in the US, if we have signals that a piece of content is false, we temporarily reduce its distribution pending review by a third-party fact-checker.

Fighting Voter Suppression and Intimidation

Attempts to interfere with or suppress voting undermine our core values as a company, and we work proactively to remove this type of harmful content. Ahead of the 2018 midterm elections, we extended our voter suppression and intimidation policies to prohibit:

  • Misrepresentation of the dates, locations, times and methods for voting or voter registration (e.g. “Vote by text!”);
  • Misrepresentation of who can vote, qualifications for voting, whether a vote will be counted and what information and/or materials must be provided in order to vote (e.g. “If you voted in the primary, your vote in the general election won’t count.”); and 
  • Threats of violence relating to voting, voter registration or the outcome of an election.

We remove this type of content regardless of who it’s coming from, and ahead of the midterm elections, our Elections Operations Center removed more than 45,000 pieces of content that violated these policies, more than 90% of which our systems detected before anyone reported the content to us. 

We also recognize that there are certain types of content, such as hate speech, that are equally likely to suppress voting. That’s why our hate speech policies ban efforts to exclude people from political participation on the basis of things like race, ethnicity or religion (e.g., telling people not to vote for a candidate because of the candidate’s race, or indicating that people of a certain religion should not be allowed to hold office).

In advance of the US 2020 elections, we’re implementing additional policies and expanding our technical capabilities on Facebook and Instagram to protect the integrity of the election. Following up on a commitment we made in the civil rights audit report released in June, we have now implemented our policy banning paid advertising that suggests voting is useless or meaningless, or advises people not to vote. 

In addition, our systems are now more effective at proactively detecting and removing this harmful content. We use machine learning to help us quickly identify potentially incorrect voting information and remove it. 

We are also continuing to expand and develop our partnerships to provide expertise on trends in voter suppression and intimidation, as well as early detection of violating content. This includes working directly with secretaries of state and election directors to address localized voter suppression that may only be occurring in a single state or district. This work will be supported by our Elections Operations Center during both the primary and general elections. 

Helping People Better Understand What They See Online

Part of our work to stop the spread of misinformation is helping people spot it for themselves. That’s why we partner with organizations and experts in media literacy. 

Today, we’re announcing an initial investment of $2 million to support projects that empower people to determine what to read and share — both on Facebook and elsewhere. 

These projects range from training programs to help ensure the largest Instagram accounts have the resources they need to reduce the spread of misinformation, to expanding a pilot program that brings together senior citizens and high school students to learn about online safety and media literacy, to public events in local venues like bookstores, community centers and libraries in cities across the country. We’re also supporting a series of training events focused on critical thinking among first-time voters. 

In addition, we’re including a new series of media literacy lessons in our Digital Literacy Library. These lessons are drawn from the Youth and Media team at the Berkman Klein Center for Internet & Society at Harvard University, which has made them available for free worldwide under a Creative Commons license. The lessons, created for middle and high school educators, are designed to be interactive and cover topics ranging from assessing the quality of the information online to more technical skills like reverse image search.

We’ll continue to develop our media literacy efforts in the US and we’ll have more to share soon. 


Mark Zuckerberg Stands for Voice and Free Expression


Today, Mark Zuckerberg spoke at Georgetown University about the importance of protecting free expression. He underscored his belief that giving everyone a voice empowers the powerless and pushes society to be better over time — a belief that’s at the core of Facebook.

In front of hundreds of students at the school’s Gaston Hall, Mark warned that we’re increasingly seeing laws and regulations around the world that undermine free expression and human rights. He argued that in order to make sure people can continue to have a voice, we should: 1) write policy that helps the values of voice and expression triumph around the world, 2) fend off the urge to define speech we don’t like as dangerous, and 3) build new institutions so companies like Facebook aren’t making so many important decisions about speech on our own. 

Read Mark’s full speech below.

Standing For Voice and Free Expression

Hey everyone. It’s great to be here at Georgetown with all of you today.

Before we get started, I want to acknowledge that today we lost an icon, Elijah Cummings. He was a powerful voice for equality, social progress and bringing people together.

When I was in college, our country had just gone to war in Iraq. The mood on campus was disbelief. It felt like we were acting without hearing a lot of important perspectives. The toll on soldiers, families and our national psyche was severe, and most of us felt powerless to stop it. I remember feeling that if more people had a voice to share their experiences, maybe things would have gone differently. Those early years shaped my belief that giving everyone a voice empowers the powerless and pushes society to be better over time.

Back then, I was building an early version of Facebook for my community, and I got to see my beliefs play out at smaller scale. When students got to express who they were and what mattered to them, they organized more social events, started more businesses, and even challenged some established ways of doing things on campus. It taught me that while the world’s attention focuses on major events and institutions, the bigger story is that most progress in our lives comes from regular people having more of a voice.

Since then, I’ve focused on building services to do two things: give people voice, and bring people together. These two simple ideas — voice and inclusion — go hand in hand. We’ve seen this throughout history, even if it doesn’t feel that way today. More people being able to share their perspectives has always been necessary to build a more inclusive society. And our mutual commitment to each other — that we hold each other’s right to express our views and be heard above our own desire to always get the outcomes we want — is how we make progress together.

But this view is increasingly being challenged. Some people believe giving more people a voice is driving division rather than bringing us together. More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous. Today I want to talk about why, and some important choices we face around free expression.

Throughout history, we’ve seen how being able to use your voice helps people come together. We’ve seen this in the civil rights movement. Frederick Douglass once called free expression “the great moral renovator of society”. He said “slavery cannot tolerate free speech”. Civil rights leaders argued time and again that their protests were protected free expression, and one noted: “nearly all the cases involving the civil rights movement were decided on First Amendment grounds”.

We’ve seen this globally too, where the ability to speak freely has been central in the fight for democracy worldwide. The most repressive societies have always restricted speech the most — and when people are finally able to speak, they often call for change. This year alone, people have used their voices to end multiple long-running dictatorships in Northern Africa. And we’re already hearing from voices in those countries that had been excluded just because they were women, or they believed in democracy.

Our idea of free expression has become much broader over even the last 100 years. Many Americans know the history of the Enlightenment and how we enshrined the First Amendment in our constitution, but fewer know how dramatically our cultural norms and legal protections have expanded, even in recent history.

The first Supreme Court case to seriously consider free speech and the First Amendment was in 1919, Schenck vs the United States. Back then, the First Amendment only applied to the federal government, and states could and often did restrict your right to speak. Our ability to call out things we felt were wrong also used to be much more restricted. Libel laws used to impose damages if you wrote something negative about someone, even if it was true. The standard later shifted so it became okay as long as you could prove your critique was true. We didn’t get the broad free speech protections we have now until the 1960s, when the Supreme Court ruled in opinions like New York Times vs Sullivan that you can criticize public figures as long as you’re not doing so with actual malice, even if what you’re saying is false.

We now have significantly broader power to call out things we feel are unjust and share our own personal experiences. Movements like #BlackLivesMatter and #MeToo went viral on Facebook — the hashtag #BlackLivesMatter was actually first used on Facebook — and this just wouldn’t have been possible in the same way before. 100 years back, many of the stories people have shared would have been against the law to even write down. And without the internet giving people the power to share them directly, they certainly wouldn’t have reached as many people. With Facebook, more than 2 billion people now have a greater opportunity to express themselves and help others.

While it’s easy to focus on major social movements, it’s important to remember that most progress happens in our everyday lives. It’s the Air Force moms who started a Facebook group so their children and other service members who can’t get home for the holidays have a place to go. It’s the church group that came together during a hurricane to provide food and volunteer to help with recovery. It’s the small business on the corner that now has access to the same sophisticated tools only the big guys used to, and now they can get their voice out and reach more customers, create jobs and become a hub in their local community. Progress and social cohesion come from billions of stories like this around the world.

People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences. I understand the concerns about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands. It’s part of this amazing expansion of voice through law, culture and technology.

So giving people a voice and broader inclusion go hand in hand, and the trend has been towards greater voice over time. But there’s also a counter-trend. In times of social turmoil, our impulse is often to pull back on free expression. We want the progress that comes from free expression, but not the tension.

We saw this when Martin Luther King Jr. wrote his famous letter from Birmingham Jail, where he was unconstitutionally jailed for protesting peacefully. We saw this in the efforts to shut down campus protests against the Vietnam War. We saw this way back when America was deeply polarized about its role in World War I, and the Supreme Court ruled that socialist leader Eugene Debs could be imprisoned for making an anti-war speech.

In the end, all of these decisions were wrong. Pulling back on free expression wasn’t the answer and, in fact, it often ended up hurting the minority views we seek to protect. From where we are now, it seems obvious that, of course, protests for civil rights or against wars should be allowed. Yet the desire to suppress this expression was felt deeply by much of society at the time.

Today, we are in another time of social tension. We face real issues that will take a long time to work through — massive economic transitions from globalization and technology, fallout from the 2008 financial crisis, and polarized reactions to greater migration. Many of our issues flow from these changes.

In the face of these tensions, once again a popular impulse is to pull back from free expression. We’re at another crossroads. We can continue to stand for free expression, understanding its messiness, but believing that the long journey towards greater progress requires confronting ideas that challenge us. Or we can decide the cost is simply too great. I’m here today because I believe we must continue to stand for free expression.

At the same time, I know that free expression has never been absolute. Some people argue internet platforms should allow all expression protected by the First Amendment, even though the First Amendment explicitly doesn’t apply to companies. I’m proud that our values at Facebook are inspired by the American tradition, which is more supportive of free expression than anywhere else. But even American tradition recognizes that some speech infringes on others’ rights. And still, a strict First Amendment standard might require us to allow terrorist propaganda, bullying young people and more that almost everyone agrees we should stop — and I certainly do — as well as content like pornography that would make people uncomfortable using our platforms.

So once we’re taking this content down, the question is: where do you draw the line? Most people agree with the principles that you should be able to say things other people don’t like, but you shouldn’t be able to say things that put people in danger. The shift over the past several years is that many people would now argue that more speech is dangerous than they would have before. This raises the question of exactly what counts as dangerous speech online. It’s worth examining this in detail.

Many arguments about online speech are related to new properties of the internet itself. If you believe the internet is completely different from everything before it, then it doesn’t make sense to focus on historical precedent. But we should be careful of overly broad arguments since they’ve been made about almost every new technology, from the printing press to radio to TV. Instead, let’s consider the specific ways the internet is different and how internet services like ours might address those risks while protecting free expression.

One clear difference is that a lot more people now have a voice — almost half the world. That’s dramatically empowering for all the reasons I’ve mentioned. But inevitably some people will use their voice to organize violence, undermine elections or hurt others, and we have a responsibility to address these risks. When you’re serving billions of people, even if a very small percent cause harm, that can still be a lot of harm.

We build specific systems to address each type of harmful content — from incitement of violence to child exploitation to other harms like intellectual property violations — about 20 categories in total. We judge ourselves by the prevalence of harmful content and what percent we find proactively before anyone reports it to us. For example, our AI systems identify 99% of the terrorist content we take down before anyone even sees it. This is a massive investment. We now have over 35,000 people working on security, and our security budget today is greater than the entire revenue of our company at the time of our IPO earlier this decade.

All of this work is about enforcing our existing policies, not broadening our definition of what is dangerous. If we do this well, we should be able to stop a lot of harm while fighting back against putting additional restrictions on speech.

Another important difference is how quickly ideas can spread online. Most people can now get much more reach than they ever could before. This is at the heart of a lot of the positive uses of the internet. It’s empowering that anyone can start a fundraiser, share an idea, build a business, or create a movement that can grow quickly. But we’ve seen this go the other way too — most notably when Russia’s IRA tried to interfere in the 2016 elections, but also when misinformation has gone viral. Some people argue that virality itself is dangerous, and we need tighter filters on what content can spread quickly.

For misinformation, we focus on making sure complete hoaxes don’t go viral. We especially focus on misinformation that could lead to imminent physical harm, like misleading health advice claiming that if you’re having a stroke, there’s no need to go to the hospital.

More broadly though, we’ve found a different strategy works best: focusing on the authenticity of the speaker rather than the content itself. Much of the content the Russian accounts shared was distasteful but would have been considered permissible political discourse if it were shared by Americans — the real issue was that it was posted by fake accounts coordinating together and pretending to be someone else. We’ve seen a similar issue with these groups that pump out misinformation like spam just to make money.

The solution is to verify the identities of accounts getting wide distribution and get better at removing fake accounts. We now require you to provide a government ID and prove your location if you want to run political ads or a large page. You can still say controversial things, but you have to stand behind them with your real identity and face accountability. Our AI systems have also gotten more advanced at detecting clusters of fake accounts that aren’t behaving like humans. We now remove billions of fake accounts a year — most within minutes of registering and before they do much. Focusing on authenticity and verifying accounts is a much better solution than an ever-expanding definition of what speech is harmful.

Another qualitative difference is the internet lets people form communities that wouldn’t have been possible before. This is good because it helps people find groups where they belong and share interests. But the flip side is this has the potential to lead to polarization. I care a lot about this — after all, our goal is to bring people together.

Much of the research I’ve seen is mixed and suggests the internet could actually decrease aspects of polarization. The most polarized voters in the last presidential election were the people least likely to use the internet. Research from the Reuters Institute also shows people who get their news online actually have a much more diverse media diet than people who don’t, and they’re exposed to a broader range of viewpoints. This is because most people watch only a couple of cable news stations or read only a couple of newspapers, but even if most of your friends online have similar views, you usually have some that are different, and you get exposed to different perspectives through them. Still, we have an important role in designing our systems to show a diversity of ideas and not encourage polarizing content.

One last difference with the internet is it lets people share things that would have been impossible before. Take live-streaming, for example. This allows families to be together for moments like birthdays and even weddings, schoolteachers to read bedtime stories to kids who might not be read to, and people to witness some very important events. But we’ve also seen people broadcast self-harm, suicide, and terrible violence. These are new challenges and our responsibility is to build systems that can respond quickly.

We’re particularly focused on well-being, especially for young people. We built a team of thousands of people and AI systems that can detect risks of self-harm within minutes so we can reach out when people need help most. In the last year, we’ve helped first responders reach people who needed help thousands of times.

For each of these issues, I believe we have two responsibilities: to remove content when it could cause real danger as effectively as we can, and to fight to uphold as wide a definition of freedom of expression as possible — and not allow the definition of what is considered dangerous to expand beyond what is absolutely necessary. That’s what I’m committed to.

But beyond these new properties of the internet, there are also shifting cultural sensitivities and diverging views on what people consider dangerous content.

Take misinformation. No one tells us they want to see misinformation. That’s why we work with independent fact checkers to stop hoaxes that are going viral from spreading. But misinformation is a pretty broad category. A lot of people like satire, which isn’t necessarily true. A lot of people talk about their experiences through stories that may be exaggerated or have inaccuracies, but speak to a deeper truth in their lived experience. We need to be careful about restricting that. Even when there is a common set of facts, different media outlets tell very different stories emphasizing different angles. There’s a lot of nuance here. And while I worry about an erosion of truth, I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true.

We recently clarified our policies to ensure people can see primary source speech from political figures that shapes civic discourse. Political advertising is more transparent on Facebook than anywhere else — we keep all political and issue ads in an archive so everyone can scrutinize them, and no TV or print does that. We don’t fact-check political ads. We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won’t take it down even if it would otherwise conflict with many of our standards.

I know many people disagree, but, in general, I don’t think it’s right for a private company to censor politicians or the news in a democracy. And we’re not an outlier here. The other major internet platforms and the vast majority of media also run these same ads.

American tradition also has some precedent here. The Supreme Court case I mentioned earlier that gave us our current broad speech rights, New York Times vs Sullivan, was actually about an ad with misinformation, supporting Martin Luther King Jr. and criticizing an Alabama police department. The police commissioner sued the Times for running the ad, the jury in Alabama found against the Times, and the Supreme Court unanimously reversed the decision, creating today’s speech standard.

As a principle, in a democracy, I believe people should decide what is credible, not tech companies. Of course there are exceptions, and even for politicians we don’t allow content that incites violence or risks imminent harm — and of course we don’t allow voter suppression. Voting is voice. Fighting voter suppression may be as important for the civil rights movement as free expression has been. Just as we’re inspired by the First Amendment, we’re inspired by the 15th Amendment too.

Given the sensitivity around political ads, I’ve considered whether we should stop allowing them altogether. From a business perspective, the controversy certainly isn’t worth the small part of our business they make up. But political ads are an important part of voice — especially for local candidates, up-and-coming challengers, and advocacy groups that may not get much media attention otherwise. Banning political ads favors incumbents and whoever the media covers.

Even if we wanted to ban political ads, it’s not clear where we’d draw the line. There are many more ads about issues than there are directly about elections. Would we ban all ads about healthcare or immigration or women’s empowerment? If we banned candidates’ ads but not these, would it really make sense to give everyone else a voice in political debates except the candidates themselves? There are issues any way you cut this, and when it’s not absolutely clear what to do, I believe we should err on the side of greater expression.

Or take hate speech, which we define as someone directly attacking a person or group based on a characteristic like race, gender or religion. We take down content that could lead to real world violence. In countries at risk of conflict, that includes anything that could lead to imminent violence or genocide. And we know from history that dehumanizing people is the first step towards inciting violence. If you say immigrants are vermin, or all Muslims are terrorists — that makes others feel they can escalate and attack that group without consequences. So we don’t allow that. I take this incredibly seriously, and we work hard to get this off our platform.

American free speech tradition recognizes that some speech can have the effect of restricting others’ right to speak. While American law doesn’t recognize “hate speech” as a category, it does prohibit racial harassment and sexual harassment. We still have a strong culture of free expression even while our laws prohibit discrimination.

But still, people have broad disagreements over what qualifies as hate and shouldn’t be allowed. Some people think our policies don’t prohibit content they think qualifies as hate, while others think what we take down should be a protected form of expression. This area is one of the hardest to get right.

I believe people should be able to use our services to discuss issues they feel strongly about — from religion and immigration to foreign policy and crime. You should even be able to be critical of groups without dehumanizing them. But even this isn’t always straightforward to judge at scale, and it often leads to enforcement mistakes. Is someone re-posting a video of a racist attack because they’re condemning it, or glorifying and encouraging people to copy it? Are they using normal slang, or using an innocent word in a new way to incite violence? Now multiply those linguistic challenges by more than 100 languages around the world.

Rules about what you can and can’t say often have unintended consequences. When speech restrictions were implemented in the UK in the last century, parliament noted they were applied more heavily to citizens from poorer backgrounds because the way they expressed things didn’t match the elite Oxbridge style. In everything we do, we need to make sure we’re empowering people, not simply reinforcing existing institutions and power structures.

That brings us back to the crossroads we all find ourselves at today. Will we continue fighting to give more people a voice to be heard, or will we pull back from free expression?

I see three major threats ahead:

The first is legal. We’re increasingly seeing laws and regulations around the world that undermine free expression and people’s human rights. These local laws are each individually troubling, especially when they shut down speech in places where there isn’t democracy or freedom of the press. But it’s even worse when countries try to impose their speech restrictions on the rest of the world.

This raises a larger question about the future of the global internet. China is building its own internet focused on very different values, and is now exporting their vision of the internet to other countries. Until recently, the internet in almost every country outside China has been defined by American platforms with strong free expression values. There’s no guarantee these values will win out. A decade ago, almost all of the major internet platforms were American. Today, six of the top ten are Chinese.

We’re beginning to see this in social media. While our services, like WhatsApp, are used by protesters and activists everywhere due to strong encryption and privacy protections, on TikTok, the Chinese app growing quickly around the world, mentions of these protests are censored, even in the US.

Is that the internet we want?

It’s one of the reasons we don’t operate Facebook, Instagram or our other services in China. I wanted our services in China because I believe in connecting the whole world and I thought we might help create a more open society. I worked hard to make this happen. But we could never come to agreement on what it would take for us to operate there, and they never let us in. And now we have more freedom to speak out and stand up for the values we believe in and fight for free expression around the world.

This question of which nation’s values will determine what speech is allowed for decades to come really puts into perspective our debates about the content issues of the day. While we may disagree on exactly where to draw the line on specific issues, we at least can disagree. That’s what free expression is. And the fact that we can even have this conversation means that we’re at least debating from some common values. If another nation’s platforms set the rules, our discourse will be defined by a completely different set of values.

To push back against this, as we all work to define internet policy and regulation to address public safety, we should also be proactive and write policy that helps the values of voice and expression triumph around the world.

The second challenge to expression is the platforms themselves — including us. Because the reality is we make a lot of decisions that affect people’s ability to speak.

I’m committed to the values we’re discussing today, but we won’t always get it right. I understand people are concerned that we have so much control over how they communicate on our services. And I understand people are concerned about bias and making sure their ideas are treated fairly. Frankly, I don’t think we should be making so many important decisions about speech on our own either. We’d benefit from a more democratic process, clearer rules for the internet, and new institutions.

That’s why we’re establishing an independent Oversight Board for people to appeal our content decisions. The board will have the power to make final binding decisions about whether content stays up or comes down on our services — decisions that our team and I can’t overturn. We’re going to appoint members to this board who have a diversity of views and backgrounds, but who each hold free expression as their paramount value.

Building this institution is important to me personally because I’m not always going to be here, and I want to ensure the values of voice and free expression are enshrined deeply into how this company is governed.

The third challenge to expression is the hardest because it comes from our culture. We’re at a moment of particular tension here and around the world — and we’re seeing the impulse to restrict speech and enforce new norms around what people can say.

Increasingly, we’re seeing people try to define more speech as dangerous because it may lead to political outcomes they see as unacceptable. Some hold the view that since the stakes are so high, they can no longer trust their fellow citizens with the power to communicate and decide what to believe for themselves.

I personally believe this is more dangerous for democracy over the long term than almost any speech. Democracy depends on the idea that we hold each other’s right to express ourselves and be heard above our own desire to always get the outcomes we want. You can’t impose tolerance top-down. It has to come from people opening up, sharing experiences, and developing a shared story for society that we all feel we’re a part of. That’s how we make progress together.

So how do we turn the tide? Someone once told me our founding fathers thought free expression was like air. You don’t miss it until it’s gone. When people don’t feel they can express themselves, they lose faith in democracy and they’re more likely to support populist parties that prioritize specific policy goals over the health of our democratic norms.

I’m a little more optimistic. I don’t think we need to lose our freedom of expression to realize how important it is. I think people understand and appreciate the voice they have now. At some fundamental level, I think most people believe in their fellow people too.

As long as our governments respect people’s right to express themselves, as long as our platforms live up to their responsibilities to support expression and prevent harm, and as long as we all commit to being open and making space for more perspectives, I think we’ll make progress. It’ll take time, but we’ll work through this moment. We overcame deep polarization after World War I, and intense political violence in the 1960s. Progress isn’t linear. Sometimes we take two steps forward and one step back. But if we can’t agree to let each other talk about the issues, we can’t take the first step. Even when it’s hard, this is how we build a shared understanding.

So yes, we have big disagreements. Maybe more now than at any time in recent history. But part of that is because we’re getting our issues out on the table — issues that for a long time weren’t talked about. More people from more parts of our society have a voice than ever before, and it will take time to hear these voices and knit them together into a coherent narrative. Sometimes we hope for a singular event to resolve these conflicts, but that’s never been how it works. We focus on the major institutions — from governments to large companies — but the bigger story has always been regular people using their voice to take billions of individual steps forward to make our lives and our communities better.

The future depends on all of us. Whether you like Facebook or not, we need to recognize what is at stake and come together to stand for free expression at this critical moment.

I believe in giving people a voice because, at the end of the day, I believe in people. And as long as enough of us keep fighting for this, I believe that more people’s voices will eventually help us work through these issues together and write a new chapter in our history — where from all of our individual voices and perspectives, we can bring the world closer together.




Facebook, Elections and Political Speech


Speaking at the Atlantic Festival in Washington DC today I set out the measures that Facebook is taking to prevent outside interference in elections and Facebook’s attitude towards political speech on the platform. This is grounded in Facebook’s fundamental belief in free expression and respect for the democratic process, as well as the fact that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is.  

You can read the full text of my speech below, but as I know there are often lots of questions about our policies and the way we enforce them I thought I’d share the key details.  

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny. That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.

Facebook has had a newsworthiness exemption since 2016. This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads – if someone chooses to post an ad on Facebook, they must still fall within our Community Standards and our advertising policies.

When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards. 

Read the full speech below.

Facebook

For those of you who don’t know me, which I suspect is most of you, I used to be a politician – I spent two decades in European politics, including as Deputy Prime Minister in the UK for five years.

And perhaps because I acquired a taste for controversy in my time in politics, a year ago I came to work for Facebook.

I don’t have long with you, so I just want to touch on three things: I want to say a little about Facebook; about how we are getting ourselves ready for the 2020 election; and about our basic attitude towards political speech.

So…Facebook. 

As a European, I’m struck by the tone of the debate in the US around Facebook. Here you have this global success story, invented in America, based on American values, that is used by a third of the world’s population.

A company that has created 40,000 US jobs in the last two years, is set to create 40,000 more in the coming years, and contributes tens of billions of dollars to the economy. And with plans to spend more than $250 billion in the US in the next four years.

And while Facebook is subject to a lot of criticism in Europe, in India where I was earlier this month, and in many other places, the only place where it is being proposed that Facebook and other big Silicon Valley companies should be dismembered is here.

And whilst it might surprise you to hear me say this, I understand the underlying motive which leads people to call for that remedy – even if I don’t agree with the remedy itself.

Because what people want is that there should be proper competition, diversity, and accountability in how big tech companies operate – with success comes responsibility, and with power comes accountability.

But chopping up successful American businesses is not the best way to instill responsibility and accountability. For a start, Facebook and other US tech companies not only face fierce competition from each other for every service they provide – for photo and video sharing and messaging there are rival apps with millions or billions of users – but they also face increasingly fierce competition from their Chinese rivals. Giants like Alibaba, TikTok and WeChat.

More importantly, pulling apart globally successful American businesses won’t actually do anything to solve the big issues we are all grappling with – privacy, the use of data, harmful content and the integrity of our elections. 

Those things can and will only be addressed by creating new rules for the internet, new regulations to make sure companies like Facebook are accountable for the role they play and the decisions they take.

That is why we argue in favor of better regulation of big tech, not the break-up of successful American companies. 

Elections

Now, elections. It is no secret that Facebook made mistakes in 2016, and that Russia tried to use Facebook to interfere with the election by spreading division and misinformation. But we’ve learned the lessons of 2016. Facebook has spent the three years since building its defenses to stop that happening again.

  • Cracking down on fake accounts – the main source of fake news and malicious content – preventing millions from being created every day;
  • Bringing in independent fact-checkers to verify content;
  • Recruiting an army of people – now 30,000 – and investing hugely in artificial intelligence systems to take down harmful content.

And we are seeing results. Last year, a Stanford report found that interactions with fake news on Facebook were down by two-thirds since 2016.

I know there’s also a lot of concern about so-called deepfake videos. We’ve recently launched an initiative called the Deepfake Detection Challenge, working with the Partnership on AI, companies like Microsoft and universities like MIT, Berkeley and Oxford, to find ways to detect this new form of manipulated content so that we can identify it and take action.

But even when the videos aren’t as sophisticated – such as the now infamous Speaker Pelosi video – we know that we need to do more.

As Mark Zuckerberg has acknowledged publicly, we didn’t get to that video quickly enough and too many people saw it before we took action. We must and we will get better at identifying lightly manipulated content before it goes viral and provide users with much more forceful information when they do see it.

We will be making further announcements in this area in the near future.

Crucially, we have also tightened our rules on political ads. Political advertising on Facebook is now far more transparent than anywhere else – including TV, radio and print advertising.

People who want to run these ads now need to submit ID and information about their organization. We label the ads and let you know who’s paid for them. And we put these ads in a library for seven years so that anyone can see them.

Political speech

Of course, stopping election interference is only part of the story when it comes to Facebook’s role in elections. Which brings me to political speech.

Freedom of expression is an absolute founding principle for Facebook. Since day one, giving people a voice to express themselves has been at the heart of everything we do. We are champions of free speech and defend it in the face of attempts to restrict it. Censoring or stifling political discourse would be at odds with what we are about.

In a mature democracy with a free press, political speech is a crucial part of how democracy functions. And it is arguably the most scrutinized form of speech that exists.

 In newspapers, on network and cable TV, and on social media, journalists, pundits, satirists, talk show hosts and cartoonists – not to mention rival campaigns – analyze, ridicule, rebut and amplify the statements made by politicians.

At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves.

To use tennis as an analogy, our job is to make sure the court is ready – the surface is flat, the lines painted, the net at the correct height. But we don’t pick up a racket and start playing. How the players play the game is up to them, not us.

We have a responsibility to protect the platform from outside interference, and to make sure that when people pay us for political ads we make it as transparent as possible. But it is not our role to intervene when politicians speak.

That’s why I want to be really clear today – we do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.

Of course, there are exceptions. Broadly speaking they are two-fold: where speech endangers people; and where we take money, which is why we have more stringent rules on advertising than we do for ordinary speech and rhetoric.

I was an elected politician for many years. I’ve had both words and objects thrown at me, I’ve been on the receiving end of all manner of accusations and insults.

It’s not new that politicians say nasty things about each other – that wasn’t invented by Facebook. What is new is that now they can reach people with far greater speed and at a far greater scale. That’s why we draw the line at any speech which can lead to real world violence and harm.

I know some people will say we should go further. That we are wrong to allow politicians to use our platform to say nasty things or make false claims. But imagine the reverse.

Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge what politicians say themselves.  

Conclusion

So, in conclusion, I understand the debate about big tech companies and how to tackle the real concerns that exist about data, privacy, content and election integrity. But I firmly believe that simply breaking them up will not make the problems go away. The real solutions will only come through new, smart regulation instead.

And I hope I have given you some reassurance about our approach to preventing election interference, and some clarity over how we will treat political speech in the run up to 2020 and beyond.

Thank you.




Removing Coordinated Inauthentic Behavior in Thailand, Russia, Ukraine and Honduras


By Nathaniel Gleicher, Head of Cybersecurity Policy

In the past week, we removed multiple Pages, Groups and accounts that were involved in coordinated inauthentic behavior on Facebook and Instagram. We found four separate, unconnected operations that originated in Thailand, Russia, Ukraine and Honduras. We didn’t find any links between the campaigns we’ve removed, but all created networks of accounts to mislead others about who they were and what they were doing.

We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. We’re taking down these Pages, Groups and accounts based on their behavior, not the content they posted. In each of these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action. We have shared information about our analysis with law enforcement, policymakers and industry partners.

We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge. We’re committed to continually improving to stay ahead. That means building better technology, hiring more people and working more closely with law enforcement, security experts and other companies.

What We’ve Found So Far

We removed 12 Facebook accounts and 10 Facebook Pages for engaging in coordinated inauthentic behavior that originated in Thailand and focused primarily on Thailand and the US. The people behind this small network used fake accounts to create fictitious personas and run Pages, increase engagement, disseminate content, and also to drive people to off-platform blogs posing as news outlets. They also frequently shared divisive narratives and comments on topics including Thai politics, geopolitical issues like US-China relations, protests in Hong Kong, and criticism of democracy activists in Thailand. Although the people behind this activity attempted to conceal their identities, our review found that some of this activity was linked to an individual based in Thailand associated with New Eastern Outlook, a Russian government-funded journal based in Moscow.

  • Presence on Facebook: 12 accounts and 10 Pages.
  • Followers: About 38,000 accounts followed one or more of these Pages.
  • Advertising: Less than $18,000 in spending for ads on Facebook paid for in US dollars.

We identified these accounts through an internal investigation into suspected Thailand-linked coordinated inauthentic behavior. Our investigation benefited from information shared by local civil society organizations.

Below is a sample of the content posted by some of these Pages:

Further, last week, ahead of the election in Ukraine, we removed 18 Facebook accounts, nine Pages, and three Groups for engaging in coordinated inauthentic behavior that originated primarily in Russia and focused on Ukraine. The people behind this activity created fictitious personas, impersonated deceased Ukrainian journalists, and engaged in fake engagement tactics. They also operated fake accounts to increase the popularity of their content, deceive people about their location, and to drive people to off-platform websites. The Page administrators and account owners posted content about Ukrainian politics and news, including topics like Russia-Ukraine relations and criticism of the Ukrainian government.

  • Presence on Facebook: 18 Facebook accounts, 9 Pages, and 3 Groups.
  • Followers: About 80,000 accounts followed one or more of these Pages, about 10 accounts joined at least one of these Groups.
  • Advertising: Less than $100 spent on Facebook ads paid for in rubles.

We identified these accounts through an internal investigation into suspected Russia-linked coordinated inauthentic behavior, ahead of the elections in Ukraine. Our investigation benefited from public reporting including by a Ukrainian fact-checking organization.

Below is a sample of the content posted by some of these Pages:

Caption: “When it seems that there is no bottom. Ukrainian TV anchor hosted a show dressed up as Hitler”

Caption: “Poroshenko’s advisor is accused of organizing sex business in Europe.”

Caption: “A journalist from the US: there is a complete collapse of people’s hopes in Ukraine after Maidan”

Caption: “The art of being a savage”

Also last week, ahead of the election in Ukraine, we removed 83 Facebook accounts, two Pages, 29 Groups, and five Instagram accounts engaged in coordinated inauthentic behavior that originated in Russia and the Luhansk region in Ukraine and focused on Ukraine. The people behind this activity used fake accounts to impersonate military members in Ukraine, manage Groups posing as authentic military communities, and also to drive people to off-platform sites. They also operated Groups — some of which shifted focus from one political side to another over time — disseminating content about Ukraine and the Luhansk region. The Page admins and account owners frequently posted about local and political news including topics like the military conflict in Eastern Ukraine, Ukrainian public figures and politics.

  • Presence on Facebook and Instagram: 83 Facebook accounts, 2 Pages, 29 Groups, and 5 Instagram accounts.
  • Followers: Fewer than 1,000 accounts followed one or more of these Pages, under 35,000 accounts joined at least one of these Groups, and around 1,400 people followed one or more of these Instagram accounts.
  • Advertising: Less than $400 spent on Facebook and Instagram ads paid for in US dollars.

We identified this activity through an internal investigation into suspected coordinated inauthentic behavior in the region, ahead of the elections in Ukraine. Our investigation benefited from information shared with us by local law enforcement in Ukraine.

Below is a sample of the content posted by some of these Pages:

Caption: “Ukrainians destroy their past! For many years now, as we have witnessed a deep crisis of the post-Soviet Ukrainian statehood. The republic, which in the early 1990s had the best chance of successful development among all the new independent states, turned out to be the most unsuccessful. And the reasons here are not economic and, I would even say, not objective. The root of Ukrainian problems lies in the ideology itself, which forms the basis of the entire national-state project, and in the identity that it creates. It is purely negativistic, and any social actions based on it are, in one way or another, directed not at creation, but at destruction. Ukrainian ship led the young state to a crisis, the destruction of the spiritual, cultural, historical and linguistic community of the Russian and Ukrainian peoples, affecting almost all aspects of public life. Do not forget that the West played a special role in this, which since 2014 has openly interfered in the politics of another state and in fact sponsored an armed coup d’état.”

Caption: “Suicide is the only way out for warriors of the Armed Forces of Ukraine To date, a low moral and psychological level in the ranks of the armed forces of the Armed Forces of Ukraine has not risen. Since the beginning of the year in the conflict zone in the Donbas a considerable number of suicide cases have been recorded among privates of the Armed Forces of Ukraine. Nobody undertakes to give the exact number because the command does not report on every such state of emergency and tries to hide the fact of its own incompetence. Despite all the assurances of Kiev about the readiness for the offensive, the mood of the warriors is not happy. The Ukrainian army is not morally prepared, as it were, beautifully told Poroshenko about full-scale hostilities. During the four years of the war, there were many promises on his part, but in fact nothing. Initially, warriors come to the place of deployment morally unstable. Inadequate drinking and drug use exacerbates the already deplorable state of the heroes. Many have opened their eyes to the real causes of the war and they do not want to kill their fellow citizens. But no one will listen to them. And in recent time, lack of staff in the Armed Forces of Ukraine is being addressed with recruiting rookies, who undergo 3-day training and seeing a machine gun for the first time only at the frontline. So the warriors of light are losing their temper and they see hanging or shooting themselves as the only way out. Only recently cases of suicide have been made public by the Armed Forces of Ukraine, and no one will know what happened before. It is not good to show the glorious heroes of Ukraine from the dark side.”

Caption: “…algorithm and peculiarities of providing medical assistance to Ukrainian military. In the end of the visit representatives of Lithuanian and Ukrainian sides discussed questions of joint interest and expressed opinions on particular aspects of developing the sphere of collaboration even further.”

Headline: “Breaking: In the Bakhmutki area, the Armed Forces of Ukraine destroyed a car with peaceful citizens in it, using an anti-tank guided missile”

Finally, we removed 181 accounts and 1,488 Facebook Pages that were involved in domestic-focused coordinated inauthentic activity in Honduras. The individuals behind this activity operated fake accounts. They also created Pages designed to look like user profiles — using false names and stock images — to comment on and amplify positive content about the president. Although the people behind this campaign attempted to conceal their identities, our review found that some of this activity was linked to individuals managing social media for the government of Honduras.

  • Presence on Facebook: 181 Facebook accounts and 1,488 Pages.
  • Followers: About 120,000 accounts followed one or more of these Pages.
  • Advertising: More than $23,000 spent on Facebook ads paid for in US dollars and Honduran lempiras.

We identified these accounts through an internal investigation into suspected coordinated inauthentic behavior in the region.

Below is a sample of the content posted by some of these Pages:

Caption: “Celebration of the first year of service of the national anti-gang force. We are celebrating the first anniversary of service of the national anti-gang force; all Hondurans must know what they face and what the future holds. We have been beaten by violence, but the state of Honduras must solve it. Such is the admiration and confidence of the Honduran people that, nowadays, the Fnamp HN receives the highest recognition that a security institution can have. We recognize its work by its level of commitment, to the point of losing one’s life for the cause of others.”

Caption: “Happy birthday, mother. I thank God because my mother, Elvira, celebrates one more year of life today. I will always be grateful to her for strengthening me and supporting me with her advice, and for being an example of faith and solidarity with others. God bless you, give you health and allow you to be with us for many more years!”

Caption: “Happy Sunday! May the first step you take in the day be to move forward and leave a mark, fill yourself with energy and optimism, shield yourself from negativity with hope, and a desire to change Honduras.”

 




Understanding Social Media and Conflict


At Facebook, a dedicated, multidisciplinary team is focused on understanding the historical, political and technological contexts of countries in conflict. Today we’re sharing an update on their work to remove hate speech, reduce misinformation and polarization, and inform people through digital literacy programs.

By Samidh Chakrabarti, Director of Product Management, Civic Integrity; and Rosa Birch, Director of Strategic Response

Last week, we were among the thousands who gathered at RightsCon, an international summit on human rights in the digital age, where we listened to and learned from advocates, activists, academics, and civil society. It also gave our teams an opportunity to talk about the work we’re doing to understand and address the way social media is used in countries experiencing conflict. Today, we’re sharing updates on: 1) the dedicated team we’ve set up to proactively prevent the abuse of our platform and protect vulnerable groups in future instances of conflict around the world; 2) fundamental product changes that attempt to limit virality; and 3) the principles that inform our engagement with stakeholders around the world.

We care about these issues deeply and write today’s post not just as representatives of Facebook, but also as concerned citizens who are committed to protecting digital and human rights and promoting vibrant civic discourse. Both of us have dedicated our careers to working at the intersection of civics, policy and tech.

Last year, we set up a dedicated team spanning product, engineering, policy, research and operations to better understand and address the way social media is used in countries experiencing conflict. The people on this team have spent their careers studying issues like misinformation, hate speech and polarization. Many have lived or worked in the countries we’re focused on. Here are just a few of them:

Ravi, Research Manager
With a PhD in social psychology, Ravi has spent much of his career looking at how conflicts can drive division and polarization. At Facebook, Ravi analyzes user behavior data and surveys to understand how content that doesn’t violate our Community Standards — such as posts from gossip pages — can still sow division. This analysis informs how we reduce the reach and impact of polarizing posts and comments.

Sarah, Program Manager
Beginning as a student in Cameroon, Sarah has devoted nearly a decade to understanding the role of technology in countries experiencing political and social conflict. In 2014, she moved to Myanmar to research the challenges activists face online and to support community organizations using social media. Sarah helps Facebook respond to complex crises and develop long-term product solutions to prevent abuse — for example, how to render Burmese content in a machine-readable format so our AI tools can better detect hate speech.

Abhishek, Research Scientist
With a masters in computer science and a doctorate in media theory, Abhishek focuses on issues including the technical challenges we face in different countries and how best to categorize different types of violent content. For example, research in Cameroon revealed that some images of violence being shared on Facebook helped people pinpoint — and avoid — conflict areas. Nuances like this help us consider the ethics of different product solutions, like removing or reducing the spread of certain content.

Emilar, Policy Manager
Prior to joining Facebook, Emilar spent more than a decade working on human rights and social justice issues in Africa, including as a member of the team that developed the African Declaration on Internet Rights and Freedoms. She joined the company to work on public policy issues in Southern Africa, including the promotion of affordable, widely available internet access and human rights both on and offline.

Ali, Product Manager
Born and raised in Iran in the 1980s and 90s, Ali and his family experienced violence and conflict firsthand as Iran and Iraq were involved in an eight-year conflict. Ali was an early adopter of blogging and wrote about much of what he saw around him in Iran. As an adult, Ali received his PhD in computer science but remained interested in geopolitical issues. His work on Facebook’s product team has allowed him to bridge his interest in technology and social science, effecting change by identifying technical solutions to root out hate speech and misinformation in a way that accounts for local nuances and cultural sensitivities.

In working on these issues, local groups have given us invaluable input on our products and programs. No one knows more about the challenges in a given community than the organizations and experts on the ground. We regularly solicit their input on our products, policies and programs, and last week we published the principles that guide our continued engagement with external stakeholders.

In the last year, we visited countries such as Lebanon, Cameroon, Nigeria, Myanmar, and Sri Lanka to speak with affected communities in these countries, better understand how they use Facebook, and evaluate what types of content might promote depolarization in these environments. These findings have led us to focus on three key areas: removing content and accounts that violate our Community Standards, reducing the spread of borderline content that has the potential to amplify and exacerbate tensions, and informing people about our products and the internet at large. To address content that may lead to offline violence, our team is particularly focused on combating hate speech and misinformation.

Removing Bad Actors and Bad Content

Hate speech isn’t allowed under our Community Standards. As we shared last year, removing this content requires supplementing user reports with AI that can proactively flag potentially violating posts. We’re continuing to improve our detection in local languages such as Arabic, Burmese, Tagalog, Vietnamese, Bengali and Sinhalese. In the past few months, we’ve been able to detect and remove considerably more hate speech than before. Globally, we increased our proactive rate — the percent of the hate speech Facebook removed that we found before users reported it to us — from 51.5% in Q3 2018 to 65.4% in Q1 2019.
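The proactive rate quoted above is simply the share of removed hate speech that automated systems flagged before any user reported it. A minimal sketch of the arithmetic (the function name and the sample counts are ours for illustration, not Facebook's internal figures):

```python
def proactive_rate(flagged_proactively: int, total_removed: int) -> float:
    """Share of removed posts that automated systems flagged first."""
    if total_removed == 0:
        return 0.0
    return flagged_proactively / total_removed

# Illustrative numbers only: if 654 of 1,000 removed posts were flagged
# before any user report, the proactive rate is 65.4% -- matching the
# Q1 2019 figure quoted above.
print(round(proactive_rate(654, 1000) * 100, 1))  # 65.4
```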

We’re also using new applications of AI to more effectively combat hate speech online. Memes and graphics that violate our policies, for example, get added to a photo bank so we can automatically delete similar posts. We’re also using AI to identify clusters of words that might be used in hateful and offensive ways, and tracking how those clusters vary over time and geography to stay ahead of local trends in hate speech. This allows us to remove viral text more quickly.
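The photo-bank approach can be pictured with a toy perceptual hash: similar images produce similar bit patterns, so a near-identical re-upload of a banned meme matches a stored hash even when the pixels differ slightly. This is a deliberately simplified sketch; production systems use far more robust hashes and infrastructure, and every name below is invented for illustration:

```python
def average_hash(pixels):
    """Hash a tiny grayscale image: 1 bit per pixel, above/below the mean.

    `pixels` is a 2D list of brightness values (0-255). Real systems use
    much more robust perceptual hashes; this is only a toy.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p >= mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A "photo bank" of hashes for known violating memes.
bank = [average_hash([[10, 200], [220, 30]])]

def matches_bank(pixels, max_distance=1):
    """True if the image is within `max_distance` bits of a banked hash."""
    h = average_hash(pixels)
    return any(hamming(h, known) <= max_distance for known in bank)

# A near-identical re-upload (slightly brightened) still matches:
print(matches_bank([[15, 205], [225, 35]]))  # True
```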

Still, we have a long way to go. Every time we want to use AI to proactively detect potentially violating content in a new country, we have to start from scratch and source a high volume of high quality, locally relevant examples to train the algorithms. Without this context-specific data, we risk losing language nuances that affect accuracy.

Globally, when it comes to misinformation, we reduce the spread of content that’s been deemed false by third-party fact-checkers. But in countries with fragile information ecosystems, false news can have more serious consequences, including violence. That’s why last year we updated our global violence and incitement policy such that we now remove misinformation that has the potential to contribute to imminent violence or physical harm. To enforce this policy, we partner with civil society organizations who can help us confirm whether content is false and has the potential to incite violence or harm.

Reducing Misinformation and Borderline Content

We’re also making fundamental changes to our products to address virality and reduce the spread of content that can amplify and exacerbate violence and conflict. In Sri Lanka, we have explored adding friction to message forwarding so that people can only share a message with a certain number of chat threads on Facebook Messenger. This is similar to a change we made to WhatsApp earlier this year to reduce forwarded messages around the world. It also delivers on user feedback that most people don’t want to receive chain messages.
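The forwarding friction described above amounts to capping how many distinct chat threads a single message can be forwarded into. A hypothetical sketch (the class, the API and the cap of five are our illustration, not Facebook's implementation):

```python
class ForwardLimiter:
    """Toy model of forwarding friction: each message may be forwarded
    to at most `max_threads` distinct chat threads. The cap of 5 mirrors
    the limit WhatsApp announced in 2019; everything else is invented."""

    def __init__(self, max_threads=5):
        self.max_threads = max_threads
        self.forwards = {}  # message_id -> set of thread ids

    def try_forward(self, message_id, thread_id):
        targets = self.forwards.setdefault(message_id, set())
        if thread_id in targets:
            return True   # already forwarded there; no extra cost
        if len(targets) >= self.max_threads:
            return False  # friction: block the next distinct thread
        targets.add(thread_id)
        return True

limiter = ForwardLimiter()
results = [limiter.try_forward("m1", t) for t in range(6)]
print(results)  # the first five succeed, the sixth is blocked
```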

And, as our CEO Mark Zuckerberg detailed last year, we have started to explore how best to discourage borderline content, or content that toes the permissible line without crossing it. This is especially true in countries experiencing conflict because borderline content, much of which is sensationalist and provocative, has the potential for more serious consequences in these countries. 

We are, for example, taking a more aggressive approach against people and groups who regularly violate our policies. In Myanmar, we have started to reduce the distribution of all content shared by people who have demonstrated a pattern of posting content that violates our Community Standards, an approach that we may roll out in other countries if it proves successful in mitigating harm. In cases where individuals or organizations more directly promote or engage in violence, we will ban them under our policy against dangerous individuals and organizations. Reducing the distribution of content is another lever we can pull to combat the spread of hateful content and activity.

We have also extended the use of artificial intelligence to recognize posts that may contain graphic violence and comments that are potentially violent or dehumanizing, so we can reduce their distribution while they undergo review by our Community Operations team. If this content violates our policies, we will remove it. By limiting visibility in this way, we hope to mitigate against the risk of offline harm and violence.
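One way to picture this reduce-while-under-review mechanism: a classifier score both queues a post for human review and temporarily shrinks its ranking score until reviewers decide. The thresholds, factor and names below are invented for illustration and are not Facebook's actual values:

```python
# Hypothetical values -- not Facebook's real thresholds.
REVIEW_THRESHOLD = 0.8  # score at which a post is queued for human review
DEMOTION_FACTOR = 0.25  # how much to shrink distribution in the meantime

def rank_adjustment(base_score, violence_probability):
    """Return (adjusted_score, needs_review) for one post.

    Posts the classifier flags as likely violent are demoted while they
    await review; everything else keeps its normal ranking score.
    """
    if violence_probability >= REVIEW_THRESHOLD:
        return base_score * DEMOTION_FACTOR, True
    return base_score, False

print(rank_adjustment(100.0, 0.9))  # (25.0, True)  -- demoted, queued
print(rank_adjustment(100.0, 0.2))  # (100.0, False) -- untouched
```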

Giving People Additional Tools and Information

Perhaps most importantly, we continue to meet with and learn from civil society who are intimately familiar with trends and tensions on the ground and are often on the front lines of complex crises. To improve communication and better identify potentially harmful posts, we have built a new tool for our partners to flag content to us directly. We appreciate the burden and risk that this places on civil society organizations, which is why we’ve worked hard to streamline the reporting process and make it secure and safe.

Our partnerships have also been instrumental in promoting digital literacy in countries where many people are new to the internet. Last week, we announced a new program with GSMA called Internet One-on-One (1O1). The program, which we first launched in Myanmar with the goal of reaching 500,000 people in three months, offers one-on-one training sessions that include a short video on the benefits of the internet and how to stay safe online. We plan to partner with other telecom companies and introduce similar programs in other countries. In Nigeria, we introduced a 12-week digital literacy program for secondary school students called Safe Online with Facebook. Developed in partnership with Re:Learn and Junior Achievement Nigeria, the program has worked with students at over 160 schools and covers a mix of online safety, news literacy, wellness tips and more, all facilitated by a team of trainers across Nigeria.

We know there’s more to do to better understand the role of social media in countries of conflict. We want to be part of the solution so that as we mitigate abuse and harmful content, people can continue using our services to communicate. In the wake of the horrific terrorist attacks in Sri Lanka, more than a quarter million people used Facebook’s Safety Check to mark themselves as safe and reassure loved ones. In the same vein, thousands of people in Sri Lanka used our crisis response tools to make offers and requests for help. These use cases — the good, the meaningful, the consequential — are ones that we want to preserve.

This is some of the most important work being done at Facebook and we fully recognize the gravity of these challenges. By tackling hate speech and misinformation, investing in AI and changes to our products, and strengthening our partnerships, we can continue to make progress on these issues around the world.




Day 1 of F8 2019: Building New Products and Features for a Privacy-Focused Social Platform


Today, more than 5,000 developers, creators and entrepreneurs from around the world came together for F8, our annual conference about the future of technology.  

Mark Zuckerberg opened the two-day event with a keynote on how we’re building a more privacy-focused social platform — giving people spaces where they can express themselves freely and feel connected to the people and communities that matter most. He shared how this is a fundamental shift in how we build products and run our company.

Mark then turned it over to leaders from Facebook, Instagram, WhatsApp, Messenger and AR/VR to share more announcements. Here are the highlights:

Messenger

As we build for a future of more private communications, Messenger announced several new products and features to help create closer ties between people, businesses and developers.

A Faster, Lighter App
People expect their messaging apps to be fast and reliable. We’re excited to announce that we’re re-building the architecture of Messenger from the ground up to be faster and lighter than ever before. This completely re-engineered Messenger will begin to roll out later this year.

A Way to Watch Together
When you’re not together with friends or family in your physical living room, Messenger will now let you discover and watch videos from Facebook together in real time. You’ll be able to seamlessly share a video from the Facebook app on Messenger and invite others to watch together while messaging or on a video chat. This could be your favorite show, a funny clip or even home videos. We are testing this now and will begin to roll it out globally later this year.

A Desktop App for Messenger
People want to seamlessly message from any device, and sometimes they just want a little more space to share and connect with the people they care about most. So today we’re announcing a Messenger Desktop app. You can download the app on your desktop — both Windows and MacOS — and have group video calls, collaborate on projects or multi-task while chatting in Messenger. We are testing this now and will roll it out globally later this year.

Better Ways to Connect with Close Friends
Close connections are built on messaging, which is why we are making it easier for you to find the content from the people you care about the most. In Messenger, we are introducing a dedicated space where you can discover Stories and messages with your closest friends and family. You’ll also be able to share snippets from your own day and can choose exactly who sees what you post. This will roll out later this year.

Helping Businesses Connect with Customers
We’re making it even easier for businesses to connect with potential customers by adding lead generation templates to Ads Manager. There, businesses can easily create an ad that drives people to a simple Q&A in Messenger to learn more about their customers. And to make it easier to book an appointment with businesses like car dealerships, stylists or cleaning services, we’ve created an appointment experience so people can book appointments within a Messenger conversation.

WhatsApp

Business Catalog
People and businesses are finding WhatsApp a great way to connect. In the months ahead people will be able to see a business catalog right within WhatsApp when chatting with a business. With catalogs, businesses can showcase their goods so people can easily discover them.

Facebook

People have always come to Facebook to connect with friends and family, but over time it’s become more than that – it’s also a place to connect with people who share your interests and passions. Today we’re making changes that put Groups at the center of Facebook and sharing new ways Facebook can help bring people together offline.

A Fresh Design
We’re rolling out FB5, a fresh new design for Facebook that’s simpler, faster, more immersive and puts your communities at the center. Overall, we’ve made it easier to find what you’re looking for and get to your most-used features.

People will start seeing some of these updates in the Facebook app right away, and the new desktop site will come in the next few months.

Putting Groups First
This redesign makes it easy for people to go from public spaces to more private ones, like Groups. There are tens of millions of active groups on Facebook. When people find the right one, it often becomes the most meaningful part of how they use Facebook. Today, more than 400 million people on Facebook belong to a group that they find meaningful. That’s why we’re introducing new tools that will make it easier for you to discover and engage with groups of people who share your interests:

  • Redesigned Groups tab to make discovery easier: We’ve completely redesigned the Groups tab and made discovery even better. The tab now shows a personalized feed of activity across all your groups. And the new discovery tool with improved recommendations lets you quickly find groups you might be interested in.
  • Making it easier to participate in Groups: We’re also making it easier to get relevant group recommendations elsewhere in the app like in Marketplace, Today In, the Gaming tab, and Facebook Watch. You may see more content from your groups in News Feed. And, you will be able to share content directly to your groups from News Feed, the same way you do with friends and family.
  • New features to support specific communities: Different communities have different needs, so we’re introducing new features for different types of groups. Through new Health Support groups, members can post questions and share information without their name appearing on a post. Job groups will have a new template for employers to post openings, and easier ways for job seekers to message the employer and apply directly through Facebook. Gaming groups will get a new chat feature so members can create threads for different topics within the group. And because we know people use Facebook Live to sell things in Buy and Sell groups, we’re exploring ways to let buyers easily ask questions and place orders without leaving the live broadcast.

Connecting with Your Secret Crush
On Facebook Dating, you can opt in to discover potential matches within your own Facebook communities: events, groups, friends of friends and more. It’s currently available in Colombia, Thailand, Canada, Argentina, and Mexico — and today, we’re expanding to 14 new countries: Philippines, Vietnam, Singapore, Malaysia, Laos, Brazil, Peru, Chile, Bolivia, Ecuador, Paraguay, Uruguay, Guyana, and Suriname.

We’re also announcing a new feature called Secret Crush. People have told us that they believe there is an opportunity to explore potential romantic relationships within their own extended circle of friends. So now, if you choose to use Secret Crush, you can select up to nine of your Facebook friends who you want to express interest in. If your crush has opted into Facebook Dating, they will get a notification saying that someone has a crush on them. If your crush adds you to their Secret Crush list, it’s a match! If your crush isn’t on Dating, doesn’t create a Secret Crush list, or doesn’t put you on their list, no one will know that you’ve entered a friend’s name.
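
The mutual-selection rule described above is simple enough to sketch. The following is a minimal, hypothetical illustration (the function, names and data structures are assumptions for illustration, not Facebook's implementation): a match is surfaced only when two people have each added the other to their list; a one-sided selection reveals nothing.

```python
# Hypothetical sketch of the Secret Crush mutual-selection rule.
# Names and data structures are illustrative assumptions only.

def find_matches(crush_lists):
    """crush_lists maps each user to the set of friends (up to nine)
    they have added to their Secret Crush list."""
    matches = set()
    for user, crushes in crush_lists.items():
        for crush in crushes:
            # A match is surfaced only when the selection is mutual.
            if user in crush_lists.get(crush, set()):
                matches.add(frozenset((user, crush)))
    return matches

lists = {
    "ana": {"ben", "cai"},
    "ben": {"ana"},          # mutual with ana: a match
    "cai": set(),            # never adds ana back: nothing is revealed
}
matches = find_matches(lists)  # only the ana/ben pair
```

One-sided entries, like ana's interest in cai, simply stay invisible, which mirrors the privacy behavior described above.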

A Way to Meet New Friends
We’ve created Meet New Friends to help people start friendships with new people from their shared communities like a school, workplace or city. It’s opt-in, so you will only see other people that are open to meeting new friends, and vice versa. We’ve started testing Meet New Friends in a few places, and we’ll roll it out wider soon. We will also be integrating Facebook Groups, making it possible to meet new friends from your most meaningful communities on Facebook.

Shipping on Marketplace
People will soon be able to ship Marketplace items anywhere in the continental US and pay for their purchases directly on Facebook. For sellers this means reaching more buyers and getting paid securely, and for buyers this means shopping more items — near or far.

A New Events Tab
This summer we’re introducing the new Events tab so you can see what’s happening around you, get recommendations, discover local businesses, and coordinate with friends to make plans to get together.

Instagram

We rolled out new ways to connect people with each other and their interests on Instagram.

The Ability to Shop from Creators
Starting next week, you can shop inspiring looks from the creators you love without leaving Instagram. Instead of taking a screenshot or asking for product details in comments or Direct, you can simply tap to see exactly what your favorite creators are wearing and buy it on the spot. Anyone in our global community will be able to shop from creators. We’ll begin testing this with a small group of creators next week, with plans to expand access over time. For more information on shopping from creators, click here.

A Way to Fundraise for Causes
Starting today, you can raise money for a nonprofit you care about directly on Instagram. Through a donation sticker in Stories, you can create a fundraiser and mobilize your community around a cause you care about — with 100% of the money raised on Instagram going to the nonprofit you’re supporting. This is available in the US now, and we’re working to bring it to more countries. To learn more, check out the Instagram Help Center here.

A New and Improved Camera
In the coming weeks, we’re introducing a new camera design including Create Mode, which gives you an easy way to share without a photo or video. This new camera will make it easier to use popular creative tools like effects and interactive stickers, so you can express yourself more freely.

AR/VR

We’re building technology around how we naturally interact with people. We announced a number of new ways we’re helping people connect more deeply in video calls through Portal. We shared more on our work to bring AR experiences to more people and platforms, and we opened pre-orders for Oculus Quest and Oculus Rift S.

Portal Expands Internationally this Fall
Beginning with an initial expansion from the US to Canada, we’ll also offer the Portal and Portal+ in Europe this fall. We’re bringing WhatsApp to Portal — and we’ll be bringing end-to-end encryption to all calls. You’ll be able to call any of your friends who use WhatsApp — or Messenger — on their Portal, or on their phone.

Beyond Video Calling
This summer we are adding new ways to connect on Portal. You’ll be able to say, “Hey Portal, Good Morning” to get updates on birthdays, events and more. We’re also adding the ability to send private video messages from Portal to your loved ones. And, through our collaboration with Amazon, we’re adding more visual features and Alexa skills to Portal, including Flash Briefings, smart home control and the Amazon Prime Video app later this year. You’ll also be able to use Facebook Live on Portal, so you can share special moments, with your closest friends, in real time.

SuperFrame to Display Your Favorite Photos
Portal’s SuperFrame lets you display your favorite photos when you’re not on a call. You can already add photos to SuperFrame from your Facebook feed, and starting today, you’ll be able to add your favorites from Instagram as well. And later this summer, our new mobile app will let you add photos to Portal’s SuperFrame directly from your camera roll.

Spark AR Expands to More People
Since last F8, we’ve seen over one billion people use AR experiences powered by Spark AR, with hundreds of millions using AR each month across Facebook, Messenger, Instagram and Portal. Starting today, the new Spark AR Studio supports both Windows and Mac and includes new features and functionality for creation and collaboration. We’re also opening Instagram to the entire Spark AR creator and developer ecosystem this summer.

Oculus Quest and Rift S Pre-Orders Open
Our two newest virtual reality headsets — Oculus Quest and Oculus Rift S — ship May 21. Oculus Quest, our first all-in-one VR gaming system, lets you pick up and play almost anywhere without being tethered to a PC. For those with a gaming PC, Rift S gets you into the most immersive content that VR has to offer. Both start at $399 USD and you can pre-order today at oculus.com.

We’re also launching the new Oculus for Business later this year. We’re adding Oculus Quest to the program and will provide a suite of tools designed to help companies reshape the way they do business through VR.

With each feature and product announced today, we want to help people discover their communities, deepen their connections, find new opportunities and simply have fun. We’re excited to see all the ways developers, creators and entrepreneurs use these tools as we continue to build more private ways for people to communicate. For more details on today’s news, see our Developer Blog, Engineering Blog, Oculus Blog, Messenger Blog, and Instagram Press Center. You can also watch all F8 keynotes on the Facebook for Developers Page.

Downloads:

You can find the full press kit here.




5 Words to Describe My Agency


Rachael Herman

Running a marketing and advertising agency in this digital age is not for the faint of heart.

The competition is extreme, and most clients look at marketing as an expense rather than an investment.  Locally, I’m up against about 15 agencies and countless individual consultants who claim to be able to do it all.  I don’t even consider national and international agencies as competition.  I’m also up against a consistently high level of doubt and confusion from my audience.

Many business owners I talk to say they’re looking for a marketing director, but they end up with an individual consultant without the expertise to successfully implement a strategy.

Marketing directors know the ins and outs of the industry well enough to teach the subject and process, as well as create and manage the campaigns; however, they cannot do everything and will need specialists such as graphic designers, web designers, and copywriters.  Individual consultants generally specialize in one or two areas of expertise, so small businesses should be cautious when hiring a consultant without a creative team to back them up.

My partner and I are account directors (same as marketing directors, but for an agency), so it’s always difficult to answer the question, “what do you do for a living?”  Usually, I smile and respond with: “That depends on the day of the week.”

I love the spark of curiosity that comes from that statement.  Monday and Wednesday are for client work, Tuesday is for Your Imprint work, and Thursday and Friday are for campaign scheduling, networking, and sales.   Weekends are dedicated to catch-up work if I use one of the weekdays for emergency meetings.

These conversations usually lead to discussions about what my company does, in which case I touch on our 5 cornerstones.

Marketing & Advertising Agency Cornerstones

Dynamic

We’re positive, competitive, and we adapt quickly to changes in the industry.  It’s why our agency has doubled in size over two years.  It’s also why our two biggest clients, from two separate industries and states, are currently opening another location.  The work we’ve done for them has brought new clients to our doorstep, three of whom are already starting to see growth (they’ve been with us 3-6 months).

Collaborative

To attract loyal advocates to your brand, we strive for transparency and strong collaboration with clear, friendly, and candid communication about what’s working, what’s not, and what to do about it.

Results-Driven

Everyone says that, but we define it as actively listening to the client and learning about their passions.  We create SMART goals, develop a plan, implement a strategy, and track those results.  Sometimes, the results are directly linked to a goal they had no idea how to put into words until we came along.

Effective & Efficient

To be effective in driving results, we put enormous effort into creating efficient processes that help us get the job done promptly.  To be efficient, we work smarter by using effective management skills.

Expansive Expertise

Websites, reviews, printing, SEO, social media, brand awareness, advertising & placement, e-commerce, email marketing, direct vs indirect marketing – who can keep up with it all on their own?  The real value of an agency is being able to span multiple areas of marketing to help streamline the process and improve brand awareness and return on investment.

The Sixth Element – Creative Genius

We can’t forget about the genius.  After all, this is a creative industry that thrives on innovative art across multiple mediums.  The creative genius is not a cornerstone, but a job requirement.  As professional artists who deal in results, we have to have some otherworldly intellectual support, and for most artists, that’s a Genius.

In ancient Rome, a Genius was the guiding spirit of a person, place, or family; the word’s root means “to bring into being, create, or produce.”

For us, creative genius is the ability to see the potential in a big-picture idea.  It’s the skill of intuitively understanding passions and defining unique goals after a friendly conversation with a client.  Genius means innovative originality.   While it’s not necessary to reinvent the wheel for every little thing, it is necessary to have a Genius in your corner as a marketing and advertising artist.

 

It’s hard to figure out how much and what to invest in your company. You can save 60-70% in costs by outsourcing your marketing. If you’re looking to accelerate your growth by investing in marketing and advertising, talk to me. It’s a free consultation, and you’ll walk away with several ideas or “homework” to move forward.

5 Words to Describe My Agency, April 29, 2019, Fort Collins Digital Marketing & SEO Agency




Remove, Reduce, Inform: New Steps to Manage Problematic Content


By Guy Rosen, VP of Integrity, and Tessa Lyons, Head of News Feed Integrity

Since 2016, we have used a strategy called “remove, reduce, and inform” to manage problematic content across the Facebook family of apps. This involves removing content that violates our policies, reducing the spread of problematic content that does not violate our policies and informing people with additional information so they can choose what to click, read or share. This strategy applies not only during critical times like elections, but year-round.

Today in Menlo Park, we met with a small group of journalists to discuss our latest remove, reduce and inform updates to keep people safe and maintain the integrity of information that flows through the Facebook family of apps:

REMOVE  (read more)

  • Rolling out a new section on the Facebook Community Standards site where people can track the updates we make each month.
  • Updating the enforcement policy for Facebook groups and launching a new Group Quality feature.

REDUCE (read more)

  • Kicking off a collaborative process with outside experts to find new ways to fight more false news on Facebook, more quickly.
  • Expanding the content the Associated Press will review as a third-party fact-checker.
  • Reducing the reach of Facebook Groups that repeatedly share misinformation.
  • Incorporating a “Click-Gap” signal into News Feed ranking to ensure people see less low-quality content in their News Feed.

INFORM (read more)

  • Expanding the News Feed Context Button to images. (Updated on April 10, 2019 at 11AM PT to include this news.)
  • Adding Trust Indicators to the News Feed Context Button on English and Spanish content.
  • Adding more information to the Facebook Page Quality tab.
  • Allowing people to remove their posts and comments from a Facebook Group after they leave the group.
  • Combatting impersonations by bringing the Verified Badge from Facebook into Messenger.
  • Launching Messaging Settings and an Updated Block feature on Messenger for greater control.
  • Launched Forward Indicator and Context Button on Messenger to help prevent the spread of misinformation.

Remove

Facebook

We have Community Standards that outline what is and isn’t allowed on Facebook. They cover things like bullying, harassment and hate speech, and we remove content that goes against our standards as soon as we become aware of it. Last year, we made it easier for people to understand what we take down by publishing our internal enforcement guidelines and giving people the right to appeal our decisions on individual posts.

The Community Standards apply to all parts of Facebook, but different areas pose different challenges when it comes to enforcement. For the past two years, for example, we’ve been working on something called the Safe Communities Initiative, with the mission of protecting people from harmful groups and harm in groups. By using a combination of the latest technology, human review and user reports, we identify and remove harmful groups, whether they are public, closed or secret. We can now proactively detect many types of violating content posted in groups before anyone reports them, and sometimes before more than a few people, if any, have seen them.

Similarly, Stories presents its own set of enforcement challenges when it comes to both removing and reducing the spread of problematic content. The format’s ephemerality means we need to work even faster to remove violating content. The creative tools that give people the ability to add text, stickers and drawings to photos and videos can be abused to mask violating content. And because people enjoy stringing together multiple Story cards, we have to view Stories holistically: if we evaluate individual story cards in a vacuum, we might miss standards violations.

In addition to describing this context and history, today we discussed how we will be:

  • Rolling out a new section on the Community Standards site where people can track the updates we make each month. We revisit existing policies and draft new ones for several reasons, including to improve our enforcement accuracy or to get ahead of new trends raised by content reviewers, internal discussion, expert critique or external engagement. We’ll track all policy changes in this new section and share specifics on why we made the more substantive ones. Starting today, in English.
  • Updating the enforcement policy for groups and launching a new Group Quality feature. As part of the Safe Communities Initiative, we will be holding the admins of Facebook Groups more accountable for Community Standards violations. Starting in the coming weeks, when reviewing a group to decide whether or not to take it down, we will look at admin and moderator content violations in that group, including member posts they have approved, as a stronger signal that the group violates our standards. We’re also introducing a new feature called Group Quality, which offers an overview of content removed and flagged for most violations, as well as a section for false news found in the group. The goal is to give admins a clearer view into how and when we enforce our Community Standards. Starting in the coming weeks, globally.

For more information on Facebook’s “remove” work, see these videos on the people and process behind our Community Standards development.

Reduce

Facebook

There are types of content that are problematic but don’t meet the standards for removal under our Community Standards, such as misinformation and clickbait. People often tell us that they don’t like seeing this kind of content and while we allow it to be posted on Facebook, we want to make sure it’s not broadly distributed.

Over the last two years, we’ve focused heavily on reducing misinformation on Facebook. We’re getting better at enforcing against fake accounts and coordinated inauthentic behavior; we’re using both technology and people to fight the rise in photo and video-based misinformation; we’ve deployed new measures to help people spot false news and get more context about the stories they see in News Feed; and we’ve grown our third-party fact-checking program to include 45 certified fact-checking partners who review content in 24 languages.

Today, members of the Facebook News Feed team discussed how we will be:

  • Kicking off a collaborative process with outside experts to find new ways to fight more false news, more quickly. Our professional fact-checking partners are an important piece of our strategy against misinformation, but they face challenges of scale: There simply aren’t enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time. One promising idea to bolster their work, which we’ve been exploring since 2017, involves groups of Facebook users pointing to journalistic sources to corroborate or contradict claims made in potentially false content. Over the next few months, we’re going to build on those explorations, continuing to consult a wide range of academics, fact-checking experts, journalists, survey researchers and civil society organizations to understand the benefits and risks of ideas like this. We need to find solutions that support original reporting, promote trusted information, complement our existing fact-checking programs and allow for people to express themselves freely — without having Facebook be the judge of what is true. Any system we implement must have safeguards from gaming or manipulation, avoid introducing personal biases and protect minority voices. We’ll share updates with the public throughout this exploratory process and solicit feedback from broader groups of people around the world. Starting today, globally.
  • Expanding the role of The Associated Press as part of the third-party fact-checking program. As part of our third-party fact-checking program, AP will be expanding its efforts by debunking false and misleading video misinformation and Spanish-language content appearing on Facebook in the US. Starting today, in the US.
  • Reducing the reach of Groups that repeatedly share misinformation. When people in a group repeatedly share content that has been rated false by independent fact-checkers, we will reduce that group’s overall News Feed distribution. Starting today, globally.
  • Incorporating a “Click-Gap” signal into News Feed ranking. Ranking uses many signals to ensure people see less low-quality content in their News Feed. This new signal, Click-Gap, relies on the web graph, a conceptual “map” of the internet in which domains with a lot of inbound and outbound links are at the center of the graph and domains with fewer inbound and outbound links are at the edges. Click-Gap looks for domains with a disproportionate number of outbound Facebook clicks compared to their place in the web graph. This can be a sign that the domain is succeeding on News Feed in a way that doesn’t reflect the authority they’ve built outside it and is producing low-quality content. Starting today, globally.
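
As a rough illustration of the Click-Gap idea, the toy sketch below compares each domain’s share of outbound Facebook clicks with its share of links in a small web graph; a large ratio flags a domain whose News Feed traffic outstrips its standing on the wider web. All names, numbers and the scoring formula here are made-up assumptions for illustration, not Facebook’s actual ranking code.

```python
# Toy illustration of a "Click-Gap"-style signal. The real system uses
# the structure of the web graph; this sketch reduces it to link counts.

def click_gap(fb_clicks, web_links):
    """Return a per-domain ratio of Facebook click share to web link share.
    A large ratio suggests a domain receives far more Facebook clicks
    than its standing on the wider web would predict."""
    total_clicks = sum(fb_clicks.values())
    total_links = sum(web_links.values())
    scores = {}
    for domain, clicks in fb_clicks.items():
        click_share = clicks / total_clicks
        link_share = web_links.get(domain, 1) / total_links  # smooth zeros
        scores[domain] = click_share / link_share
    return scores

clicks = {"example-news.com": 900, "clickbait.example": 800}
links = {"example-news.com": 5000, "clickbait.example": 50}
scores = click_gap(clicks, links)
# clickbait.example's score is far higher: a possible low-quality signal
```

Under this toy scoring, a well-linked news site with heavy Facebook traffic scores near 1, while a domain with almost no inbound links but outsized Facebook clicks stands out sharply.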

For more information about how we set goals for our “reduce” initiatives on Facebook, read this blog post.

Instagram

Today we discussed how Instagram is working to ensure that the content we recommend to people is both safe and appropriate for the community. We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines, limiting those types of posts from being recommended on our Explore and hashtag pages. For example, a sexually suggestive post will still appear in Feed if you follow the account that posts it, but this type of content may not appear for the broader community in Explore or hashtag pages.

Facebook

We’re investing in features and products that give people more information to help them decide what to read, trust and share. In the past year, we began offering more information on articles in News Feed with the Context Button, which shows the publisher’s Wikipedia entry, the website’s age, and where and how often the content has been shared. We helped Page owners improve their content with the Page Quality tab, which shows them which posts of theirs were removed for violating our Community Standards or were rated “False,” “Mixture” or “False Headline” by third-party fact-checkers. We also discussed how we will be:

  • Expanding the Context Button to images. Originally launched in April 2018, the Context Button feature provides people more background information about the publishers and articles they see in News Feed so they can better decide what to read, trust and share. We’re testing enabling this feature for images that have been reviewed by third-party fact-checkers. Testing now in the US. (Updated on April 10, 2019 at 11AM PT to include this news.)
  • Adding Trust Indicators to the Context Button. The Trust Indicators are standardized disclosures, created by a consortium of news organizations known as the Trust Project, that provide clarity on a news organization’s ethics and other standards for fairness and accuracy. The indicators we display in the context button cover the publication’s fact-checking practices, ethics statements, corrections, ownership and funding and editorial team. Started March 2019, on English and Spanish content.
  • Adding more information to the Page Quality tab. We’ll be providing more information in the tab over time, starting with more information in the coming months on a Page’s status with respect to clickbait. Starting soon, globally.
  • Allowing people to remove their posts and comments from a group after they leave the group. People will have this ability even if they are no longer a member of the group. With this update, we aim to bring greater transparency and personal control to groups. Starting soon, globally.

Messenger

At today’s event, Messenger highlighted new and updated privacy and safety features that give people greater control of their experience and help people stay informed.

  • Combatting impersonations by bringing the Verified Badge from Facebook into Messenger. This tool will help people avoid scammers who pretend to be high-profile people by providing a visible indicator of a verified account. Messenger continues to encourage use of the Report Impersonations tool, introduced last year, if someone believes they are interacting with someone pretending to be a friend. Starting this week, globally.
  • Launching Messaging Settings and an Updated Block feature for greater control. Messaging Settings allow you to control whether people you’re not connected to, such as friends of your friends, people with your phone number or people who follow you on Instagram can reach your Chats list. The Updated Block feature makes it easier to block and avoid unwanted contact. Starting this week, globally.
  • Launched Forward Indicator and Context Button to help prevent the spread of misinformation. The Forward Indicator lets someone know if a message they received was forwarded by the sender, while the Context Button provides more background on shared articles. Started earlier this year, globally.
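
The Messaging Settings and Updated Block controls above amount to a simple routing rule. The sketch below is a hypothetical illustration (the category names and return values are assumptions, not Messenger’s API): blocked senders are dropped, senders in an allowed category reach the Chats list, and everyone else lands in Message Requests.

```python
# Hypothetical routing sketch for the Messenger controls described above.
# Categories, settings keys and return values are illustrative only.

def route_message(category, settings, is_blocked):
    """category: e.g. 'friend_of_friend', 'has_phone_number',
    'instagram_follower'. settings maps category -> True (deliver to
    Chats) or False (hold in Message Requests)."""
    if is_blocked:
        return "blocked"            # Updated Block: no delivery at all
    if settings.get(category, False):
        return "chats"              # allowed straight to the Chats list
    return "message requests"       # held until the recipient approves

settings = {"friend_of_friend": True, "has_phone_number": False}
route_message("has_phone_number", settings, False)  # held for approval
```

The key design point the feature describes is that the recipient, not the sender, owns these settings: the same incoming message can land in different places for different people.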
