Mark Zuckerberg Stands for Voice and Free Expression

Today, Mark Zuckerberg spoke at Georgetown University about the importance of protecting free expression. He underscored his belief that giving everyone a voice empowers the powerless and pushes society to be better over time — a belief that’s at the core of Facebook.

In front of hundreds of students at the school’s Gaston Hall, Mark warned that we’re increasingly seeing laws and regulations around the world that undermine free expression and human rights. He argued that in order to make sure people can continue to have a voice, we should: 1) write policy that helps the values of voice and expression triumph around the world, 2) fend off the urge to define speech we don’t like as dangerous, and 3) build new institutions so companies like Facebook aren’t making so many important decisions about speech on our own. 

Read Mark’s full speech below.

Standing For Voice and Free Expression

Hey everyone. It’s great to be here at Georgetown with all of you today.

Before we get started, I want to acknowledge that today we lost an icon, Elijah Cummings. He was a powerful voice for equality, social progress and bringing people together.

When I was in college, our country had just gone to war in Iraq. The mood on campus was disbelief. It felt like we were acting without hearing a lot of important perspectives. The toll on soldiers, families and our national psyche was severe, and most of us felt powerless to stop it. I remember feeling that if more people had a voice to share their experiences, maybe things would have gone differently. Those early years shaped my belief that giving everyone a voice empowers the powerless and pushes society to be better over time.

Back then, I was building an early version of Facebook for my community, and I got to see my beliefs play out at smaller scale. When students got to express who they were and what mattered to them, they organized more social events, started more businesses, and even challenged some established ways of doing things on campus. It taught me that while the world’s attention focuses on major events and institutions, the bigger story is that most progress in our lives comes from regular people having more of a voice.

Since then, I’ve focused on building services to do two things: give people voice, and bring people together. These two simple ideas — voice and inclusion — go hand in hand. We’ve seen this throughout history, even if it doesn’t feel that way today. More people being able to share their perspectives has always been necessary to build a more inclusive society. And our mutual commitment to each other — that we hold each others’ right to express our views and be heard above our own desire to always get the outcomes we want — is how we make progress together.

But this view is increasingly being challenged. Some people believe giving more people a voice is driving division rather than bringing us together. More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous. Today I want to talk about why, and some important choices we face around free expression.

Throughout history, we’ve seen how being able to use your voice helps people come together. We’ve seen this in the civil rights movement. Frederick Douglass once called free expression “the great moral renovator of society”. He said “slavery cannot tolerate free speech”. Civil rights leaders argued time and again that their protests were protected free expression, and one noted: “nearly all the cases involving the civil rights movement were decided on First Amendment grounds”.

We’ve seen this globally too, where the ability to speak freely has been central in the fight for democracy worldwide. The most repressive societies have always restricted speech the most — and when people are finally able to speak, they often call for change. This year alone, people have used their voices to end multiple long-running dictatorships in Northern Africa. And we’re already hearing from voices in those countries that had been excluded just because they were women, or they believed in democracy.

Our idea of free expression has become much broader over even the last 100 years. Many Americans know about the Enlightenment history and how we enshrined the First Amendment in our constitution, but fewer know how dramatically our cultural norms and legal protections have expanded, even in recent history.

The first Supreme Court case to seriously consider free speech and the First Amendment was in 1919, Schenck vs the United States. Back then, the First Amendment only applied to the federal government, and states could and often did restrict your right to speak. Our ability to call out things we felt were wrong also used to be much more restricted. Libel laws used to impose damages if you wrote something negative about someone, even if it was true. The standard later shifted so you were protected as long as you could prove your critique was true. We didn’t get the broad free speech protections we have now until the 1960s, when the Supreme Court ruled in opinions like New York Times vs Sullivan that you can criticize public figures as long as you’re not doing so with actual malice, even if what you’re saying is false.

We now have significantly broader power to call out things we feel are unjust and share our own personal experiences. Movements like #BlackLivesMatter and #MeToo went viral on Facebook — the hashtag #BlackLivesMatter was actually first used on Facebook — and this just wouldn’t have been possible in the same way before. 100 years back, many of the stories people have shared would have been against the law to even write down. And without the internet giving people the power to share them directly, they certainly wouldn’t have reached as many people. With Facebook, more than 2 billion people now have a greater opportunity to express themselves and help others.

While it’s easy to focus on major social movements, it’s important to remember that most progress happens in our everyday lives. It’s the Air Force moms who started a Facebook group so their children and other service members who can’t get home for the holidays have a place to go. It’s the church group that came together during a hurricane to provide food and volunteer to help with recovery. It’s the small business on the corner that now has access to the same sophisticated tools only the big guys used to, and now they can get their voice out and reach more customers, create jobs and become a hub in their local community. Progress and social cohesion come from billions of stories like this around the world.

People having the power to express themselves at scale is a new kind of force in the world — a Fifth Estate alongside the other power structures of society. People no longer have to rely on traditional gatekeepers in politics or media to make their voices heard, and that has important consequences. I understand the concerns about how tech platforms have centralized power, but I actually believe the much bigger story is how much these platforms have decentralized power by putting it directly into people’s hands. It’s part of this amazing expansion of voice through law, culture and technology.

So giving people a voice and broader inclusion go hand in hand, and the trend has been towards greater voice over time. But there’s also a counter-trend. In times of social turmoil, our impulse is often to pull back on free expression. We want the progress that comes from free expression, but not the tension.

We saw this when Martin Luther King Jr. wrote his famous letter from Birmingham Jail, where he was unconstitutionally jailed for protesting peacefully. We saw this in the efforts to shut down campus protests against the Vietnam War. We saw this way back when America was deeply polarized about its role in World War I, and the Supreme Court ruled that socialist leader Eugene Debs could be imprisoned for making an anti-war speech.

In the end, all of these decisions were wrong. Pulling back on free expression wasn’t the answer and, in fact, it often ended up hurting the minority views we seek to protect. From where we are now, it seems obvious that, of course, protests for civil rights or against wars should be allowed. Yet the desire to suppress this expression was felt deeply by much of society at the time.

Today, we are in another time of social tension. We face real issues that will take a long time to work through — massive economic transitions from globalization and technology, fallout from the 2008 financial crisis, and polarized reactions to greater migration. Many of our issues flow from these changes.

In the face of these tensions, once again a popular impulse is to pull back from free expression. We’re at another cross-roads. We can continue to stand for free expression, understanding its messiness, but believing that the long journey towards greater progress requires confronting ideas that challenge us. Or we can decide the cost is simply too great. I’m here today because I believe we must continue to stand for free expression.

At the same time, I know that free expression has never been absolute. Some people argue internet platforms should allow all expression protected by the First Amendment, even though the First Amendment explicitly doesn’t apply to companies. I’m proud that our values at Facebook are inspired by the American tradition, which is more supportive of free expression than anywhere else. But even American tradition recognizes that some speech infringes on others’ rights. And still, a strict First Amendment standard might require us to allow terrorist propaganda, bullying young people and more that almost everyone agrees we should stop — and I certainly do — as well as content like pornography that would make people uncomfortable using our platforms.

So once we’re taking this content down, the question is: where do you draw the line? Most people agree with the principles that you should be able to say things other people don’t like, but you shouldn’t be able to say things that put people in danger. The shift over the past several years is that many people would now argue that more speech is dangerous than they would have before. This raises the question of exactly what counts as dangerous speech online. It’s worth examining this in detail.

Many arguments about online speech are related to new properties of the internet itself. If you believe the internet is completely different from everything before it, then it doesn’t make sense to focus on historical precedent. But we should be careful of overly broad arguments since they’ve been made about almost every new technology, from the printing press to radio to TV. Instead, let’s consider the specific ways the internet is different and how internet services like ours might address those risks while protecting free expression.

One clear difference is that a lot more people now have a voice — almost half the world. That’s dramatically empowering for all the reasons I’ve mentioned. But inevitably some people will use their voice to organize violence, undermine elections or hurt others, and we have a responsibility to address these risks. When you’re serving billions of people, even if a very small percent cause harm, that can still be a lot of harm.

We build specific systems to address each type of harmful content — from incitement of violence to child exploitation to other harms like intellectual property violations — about 20 categories in total. We judge ourselves by the prevalence of harmful content and what percent we find proactively before anyone reports it to us. For example, our AI systems identify 99% of the terrorist content we take down before anyone even sees it. This is a massive investment. We now have over 35,000 people working on security, and our security budget today is greater than the entire revenue of our company at the time of our IPO earlier this decade.
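
To make the measurement concrete: the proactive figure is simply the share of actioned content that automated systems flagged before any user report. A minimal Python sketch, with made-up numbers rather than Facebook’s actual pipeline or data:

    # Illustrative "proactive rate" metric: the share of removed content
    # that automated systems flagged before any user reported it.
    # Category names and counts are hypothetical, not Facebook's data.

    def proactive_rate(flagged_proactively, total_actioned):
        """Fraction of actioned items found before anyone reported them."""
        return flagged_proactively / total_actioned if total_actioned else 0.0

    actioned_by_category = {
        "terrorist_propaganda": {"proactive": 990, "total": 1000},
        "hate_speech": {"proactive": 800, "total": 1000},
    }

    for category, counts in actioned_by_category.items():
        rate = proactive_rate(counts["proactive"], counts["total"])
        print(f"{category}: {rate:.0%} found proactively")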

All of this work is about enforcing our existing policies, not broadening our definition of what is dangerous. If we do this well, we should be able to stop a lot of harm while fighting back against putting additional restrictions on speech.

Another important difference is how quickly ideas can spread online. Most people can now get much more reach than they ever could before. This is at the heart of a lot of the positive uses of the internet. It’s empowering that anyone can start a fundraiser, share an idea, build a business, or create a movement that can grow quickly. But we’ve seen this go the other way too — most notably when Russia’s IRA tried to interfere in the 2016 elections, but also when misinformation has gone viral. Some people argue that virality itself is dangerous, and we need tighter filters on what content can spread quickly.

For misinformation, we focus on making sure complete hoaxes don’t go viral. We especially focus on misinformation that could lead to imminent physical harm, like misleading health advice saying if you’re having a stroke, no need to go to the hospital.

More broadly though, we’ve found a different strategy works best: focusing on the authenticity of the speaker rather than the content itself. Much of the content the Russian accounts shared was distasteful but would have been considered permissible political discourse if it were shared by Americans — the real issue was that it was posted by fake accounts coordinating together and pretending to be someone else. We’ve seen a similar issue with these groups that pump out misinformation like spam just to make money.

The solution is to verify the identities of accounts getting wide distribution and get better at removing fake accounts. We now require you to provide a government ID and prove your location if you want to run political ads or a large page. You can still say controversial things, but you have to stand behind them with your real identity and face accountability. Our AI systems have also gotten more advanced at detecting clusters of fake accounts that aren’t behaving like humans. We now remove billions of fake accounts a year — most within minutes of registering and before they do much. Focusing on authenticity and verifying accounts is a much better solution than an ever-expanding definition of what speech is harmful.
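
The speech doesn’t describe how these clusters are detected, but one classic signal for coordinated fake accounts is a burst of registrations sharing a common fingerprint. A toy Python sketch of that idea, with hypothetical thresholds and fields; production systems rely on far richer behavioral signals:

    # Toy heuristic for spotting fake-account clusters: group signups by a
    # shared fingerprint (here, an IP block) and flag bursts that register
    # faster than humans plausibly would. Thresholds are hypothetical.
    from collections import defaultdict

    def flag_signup_bursts(signups, window_secs=3600, max_per_window=20):
        """signups: iterable of (unix_timestamp, ip_block) tuples."""
        by_block = defaultdict(list)
        for ts, ip_block in signups:
            by_block[ip_block].append(ts)

        flagged = []
        for ip_block, times in by_block.items():
            times.sort()
            for i in range(len(times)):
                # Count signups from this block inside the sliding window.
                j = i
                while j < len(times) and times[j] - times[i] <= window_secs:
                    j += 1
                if j - i > max_per_window:
                    flagged.append(ip_block)
                    break
        return flagged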

Another qualitative difference is the internet lets people form communities that wouldn’t have been possible before. This is good because it helps people find groups where they belong and share interests. But the flip side is this has the potential to lead to polarization. I care a lot about this — after all, our goal is to bring people together.

Much of the research I’ve seen is mixed and suggests the internet could actually decrease aspects of polarization. The most polarized voters in the last presidential election were the people least likely to use the internet. Research from the Reuters Institute also shows people who get their news online actually have a much more diverse media diet than people who don’t, and they’re exposed to a broader range of viewpoints. This is because most people watch only a couple of cable news stations or read only a couple of newspapers, but even if most of your friends online have similar views, you usually have some that are different, and you get exposed to different perspectives through them. Still, we have an important role in designing our systems to show a diversity of ideas and not encourage polarizing content.

One last difference with the internet is it lets people share things that would have been impossible before. Take live-streaming, for example. This allows families to be together for moments like birthdays and even weddings, schoolteachers to read bedtime stories to kids who might not be read to, and people to witness some very important events. But we’ve also seen people broadcast self-harm, suicide, and terrible violence. These are new challenges and our responsibility is to build systems that can respond quickly.

We’re particularly focused on well-being, especially for young people. We built a team of thousands of people and AI systems that can detect risks of self-harm within minutes so we can reach out when people need help most. In the last year, we’ve helped first responders reach people who needed help thousands of times.

For each of these issues, I believe we have two responsibilities: to remove content when it could cause real danger as effectively as we can, and to fight to uphold as wide a definition of freedom of expression as possible — and not allow the definition of what is considered dangerous to expand beyond what is absolutely necessary. That’s what I’m committed to.

But beyond these new properties of the internet, there are also shifting cultural sensitivities and diverging views on what people consider dangerous content.

Take misinformation. No one tells us they want to see misinformation. That’s why we work with independent fact checkers to stop hoaxes that are going viral from spreading. But misinformation is a pretty broad category. A lot of people like satire, which isn’t necessarily true. A lot of people talk about their experiences through stories that may be exaggerated or have inaccuracies, but speak to a deeper truth in their lived experience. We need to be careful about restricting that. Even when there is a common set of facts, different media outlets tell very different stories emphasizing different angles. There’s a lot of nuance here. And while I worry about an erosion of truth, I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true.

We recently clarified our policies to ensure people can see primary source speech from political figures that shapes civic discourse. Political advertising is more transparent on Facebook than anywhere else — we keep all political and issue ads in an archive so everyone can scrutinize them, and no TV or print does that. We don’t fact-check political ads. We don’t do this to help politicians, but because we think people should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won’t take it down even if it would otherwise conflict with many of our standards.
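
The archive mentioned above is publicly queryable through the Graph API’s Ad Library endpoint. A minimal sketch using Python’s requests library; the parameter and field names follow the Ad Library API documentation of this period (verify against the current docs), and the access token is a placeholder that requires identity verification to obtain:

    # Query the public political ad archive via the Ad Library API.
    # Parameter and field names per the API docs of this era; the token
    # is a placeholder.
    import requests

    resp = requests.get(
        "https://graph.facebook.com/v5.0/ads_archive",
        params={
            "access_token": "YOUR_ACCESS_TOKEN",
            "ad_type": "POLITICAL_AND_ISSUE_ADS",
            "ad_reached_countries": "['US']",
            "search_terms": "election",
            "fields": "page_name,funding_entity,ad_creative_body",
        },
    )
    for ad in resp.json().get("data", []):
        print(ad.get("page_name"), "|", ad.get("funding_entity"))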

I know many people disagree, but, in general, I don’t think it’s right for a private company to censor politicians or the news in a democracy. And we’re not an outlier here. The other major internet platforms and the vast majority of media also run these same ads.

American tradition also has some precedent here. The Supreme Court case I mentioned earlier that gave us our current broad speech rights, New York Times vs Sullivan, was actually about an ad with misinformation, supporting Martin Luther King Jr. and criticizing an Alabama police department. The police commissioner sued the Times for running the ad, the jury in Alabama found against the Times, and the Supreme Court unanimously reversed the decision, creating today’s speech standard.

As a principle, in a democracy, I believe people should decide what is credible, not tech companies. Of course there are exceptions, and even for politicians we don’t allow content that incites violence or risks imminent harm — and of course we don’t allow voter suppression. Voting is voice. Fighting voter suppression may be as important for the civil rights movement as free expression has been. Just as we’re inspired by the First Amendment, we’re inspired by the 15th Amendment too.

Given the sensitivity around political ads, I’ve considered whether we should stop allowing them altogether. From a business perspective, the controversy certainly isn’t worth the small part of our business they make up. But political ads are an important part of voice — especially for local candidates, up-and-coming challengers, and advocacy groups that may not get much media attention otherwise. Banning political ads favors incumbents and whoever the media covers.

Even if we wanted to ban political ads, it’s not clear where we’d draw the line. There are many more ads about issues than there are directly about elections. Would we ban all ads about healthcare or immigration or women’s empowerment? If we banned candidates’ ads but not these, would that really make sense to give everyone else a voice in political debates except the candidates themselves? There are issues any way you cut this, and when it’s not absolutely clear what to do, I believe we should err on the side of greater expression.

Or take hate speech, which we define as someone directly attacking a person or group based on a characteristic like race, gender or religion. We take down content that could lead to real world violence. In countries at risk of conflict, that includes anything that could lead to imminent violence or genocide. And we know from history that dehumanizing people is the first step towards inciting violence. If you say immigrants are vermin, or all Muslims are terrorists — that makes others feel they can escalate and attack that group without consequences. So we don’t allow that. I take this incredibly seriously, and we work hard to get this off our platform.

American free speech tradition recognizes that some speech can have the effect of restricting others’ right to speak. While American law doesn’t recognize “hate speech” as a category, it does prohibit racial harassment and sexual harassment. We still have a strong culture of free expression even while our laws prohibit discrimination.

But still, people have broad disagreements over what qualifies as hate and shouldn’t be allowed. Some people think our policies don’t prohibit content they think qualifies as hate, while others think what we take down should be a protected form of expression. This area is one of the hardest to get right.

I believe people should be able to use our services to discuss issues they feel strongly about — from religion and immigration to foreign policy and crime. You should even be able to be critical of groups without dehumanizing them. But even this isn’t always straightforward to judge at scale, and it often leads to enforcement mistakes. Is someone re-posting a video of a racist attack because they’re condemning it, or glorifying and encouraging people to copy it? Are they using normal slang, or using an innocent word in a new way to incite violence? Now multiply those linguistic challenges by more than 100 languages around the world.

Rules about what you can and can’t say often have unintended consequences. When speech restrictions were implemented in the UK in the last century, parliament noted they were applied more heavily to citizens from poorer backgrounds because the way they expressed things didn’t match the elite Oxbridge style. In everything we do, we need to make sure we’re empowering people, not simply reinforcing existing institutions and power structures.

That brings us back to the cross-roads we all find ourselves at today. Will we continue fighting to give more people a voice to be heard, or will we pull back from free expression?

I see three major threats ahead:

The first is legal. We’re increasingly seeing laws and regulations around the world that undermine free expression and people’s human rights. These local laws are each individually troubling, especially when they shut down speech in places where there isn’t democracy or freedom of the press. But it’s even worse when countries try to impose their speech restrictions on the rest of the world.

This raises a larger question about the future of the global internet. China is building its own internet focused on very different values, and is now exporting their vision of the internet to other countries. Until recently, the internet in almost every country outside China has been defined by American platforms with strong free expression values. There’s no guarantee these values will win out. A decade ago, almost all of the major internet platforms were American. Today, six of the top ten are Chinese.

We’re beginning to see this in social media. While our services, like WhatsApp, are used by protesters and activists everywhere due to strong encryption and privacy protections, on TikTok, the Chinese app growing quickly around the world, mentions of these protests are censored, even in the US.

Is that the internet we want?

It’s one of the reasons we don’t operate Facebook, Instagram or our other services in China. I wanted our services in China because I believe in connecting the whole world and I thought we might help create a more open society. I worked hard to make this happen. But we could never come to agreement on what it would take for us to operate there, and they never let us in. And now we have more freedom to speak out and stand up for the values we believe in and fight for free expression around the world.

This question of which nation’s values will determine what speech is allowed for decades to come really puts into perspective our debates about the content issues of the day. While we may disagree on exactly where to draw the line on specific issues, we at least can disagree. That’s what free expression is. And the fact that we can even have this conversation means that we’re at least debating from some common values. If another nation’s platforms set the rules, our discourse will be defined by a completely different set of values.

To push back against this, as we all work to define internet policy and regulation to address public safety, we should also be proactive and write policy that helps the values of voice and expression triumph around the world.

The second challenge to expression is the platforms themselves — including us. Because the reality is we make a lot of decisions that affect people’s ability to speak.

I’m committed to the values we’re discussing today, but we won’t always get it right. I understand people are concerned that we have so much control over how they communicate on our services. And I understand people are concerned about bias and making sure their ideas are treated fairly. Frankly, I don’t think we should be making so many important decisions about speech on our own either. We’d benefit from a more democratic process, clearer rules for the internet, and new institutions.

That’s why we’re establishing an independent Oversight Board for people to appeal our content decisions. The board will have the power to make final binding decisions about whether content stays up or comes down on our services — decisions that our team and I can’t overturn. We’re going to appoint members to this board who have a diversity of views and backgrounds, but who each hold free expression as their paramount value.

Building this institution is important to me personally because I’m not always going to be here, and I want to ensure the values of voice and free expression are enshrined deeply into how this company is governed.

The third challenge to expression is the hardest because it comes from our culture. We’re at a moment of particular tension here and around the world — and we’re seeing the impulse to restrict speech and enforce new norms around what people can say.

Increasingly, we’re seeing people try to define more speech as dangerous because it may lead to political outcomes they see as unacceptable. Some hold the view that since the stakes are so high, they can no longer trust their fellow citizens with the power to communicate and decide what to believe for themselves.

I personally believe this is more dangerous for democracy over the long term than almost any speech. Democracy depends on the idea that we hold each others’ right to express ourselves and be heard above our own desire to always get the outcomes we want. You can’t impose tolerance top-down. It has to come from people opening up, sharing experiences, and developing a shared story for society that we all feel we’re a part of. That’s how we make progress together.

So how do we turn the tide? Someone once told me our founding fathers thought free expression was like air. You don’t miss it until it’s gone. When people don’t feel they can express themselves, they lose faith in democracy and they’re more likely to support populist parties that prioritize specific policy goals over the health of our democratic norms.

I’m a little more optimistic. I don’t think we need to lose our freedom of expression to realize how important it is. I think people understand and appreciate the voice they have now. At some fundamental level, I think most people believe in their fellow people too.

As long as our governments respect people’s right to express themselves, as long as our platforms live up to their responsibilities to support expression and prevent harm, and as long as we all commit to being open and making space for more perspectives, I think we’ll make progress. It’ll take time, but we’ll work through this moment. We overcame deep polarization after World War I, and intense political violence in the 1960s. Progress isn’t linear. Sometimes we take two steps forward and one step back. But if we can’t agree to let each other talk about the issues, we can’t take the first step. Even when it’s hard, this is how we build a shared understanding.

So yes, we have big disagreements. Maybe more now than at any time in recent history. But part of that is because we’re getting our issues out on the table — issues that for a long time weren’t talked about. More people from more parts of our society have a voice than ever before, and it will take time to hear these voices and knit them together into a coherent narrative. Sometimes we hope for a singular event to resolve these conflicts, but that’s never been how it works. We focus on the major institutions — from governments to large companies — but the bigger story has always been regular people using their voice to take billions of individual steps forward to make our lives and our communities better.

The future depends on all of us. Whether you like Facebook or not, we need to recognize what is at stake and come together to stand for free expression at this critical moment.

I believe in giving people a voice because, at the end of the day, I believe in people. And as long as enough of us keep fighting for this, I believe that more people’s voices will eventually help us work through these issues together and write a new chapter in our history — where from all of our individual voices and perspectives, we can bring the world closer together.

Facebook, Elections and Political Speech

Speaking at the Atlantic Festival in Washington, DC today, I set out the measures that Facebook is taking to prevent outside interference in elections, and Facebook’s attitude towards political speech on the platform. This is grounded in Facebook’s fundamental belief in free expression and respect for the democratic process, as well as the fact that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is.

You can read the full text of my speech below, but as I know there are often lots of questions about our policies and the way we enforce them, I thought I’d share the key details.

We rely on third-party fact-checkers to help reduce the spread of false news and other types of viral misinformation, like memes or manipulated photos and videos. We don’t believe, however, that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny. That’s why Facebook exempts politicians from our third-party fact-checking program. We have had this policy on the books for over a year now, posted publicly on our site under our eligibility guidelines. This means that we will not send organic content or ads from politicians to our third-party fact-checking partners for review. However, when a politician shares previously debunked content including links, videos and photos, we plan to demote that content, display related information from fact-checkers, and reject its inclusion in advertisements. You can find more about the third-party fact-checking program and content eligibility here.
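
Stated as a decision procedure, the routing rules in that paragraph look roughly like the following Python sketch. This is a paraphrase of the prose, not Facebook’s actual code, and the action names are invented:

    # Sketch of the fact-checking eligibility rules described above.
    # A paraphrase of the prose; action names are invented.

    def route_content(author_is_politician, is_ad, matches_debunked_content):
        actions = []
        if author_is_politician:
            # Politicians' organic posts and ads skip third-party review.
            if matches_debunked_content:
                # Previously debunked content they share is still demoted,
                # annotated with fact-check context, and barred from ads.
                actions += ["demote", "show_fact_check_context"]
                if is_ad:
                    actions.append("reject_ad")
        else:
            actions.append("send_to_third_party_fact_checkers")
        return actions or ["no_action"]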

Facebook has had a newsworthiness exemption since 2016. This means that if someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm. Today, I announced that from now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard. However, in keeping with the principle that we apply different standards to content for which we receive payment, this will not apply to ads – if someone chooses to post an ad on Facebook, they must still fall within our Community Standards and our advertising policies.

When we make a determination as to newsworthiness, we evaluate the public interest value of the piece of speech against the risk of harm. When balancing these interests, we take a number of factors into consideration, including country-specific circumstances, like whether there is an election underway or the country is at war; the nature of the speech, including whether it relates to governance or politics; and the political structure of the country, including whether the country has a free press. In evaluating the risk of harm, we will consider the severity of the harm. Content that has the potential to incite violence, for example, may pose a safety risk that outweighs the public interest value. Each of these evaluations will be holistic and comprehensive in nature, and will account for international human rights standards. 
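
The balancing test is described qualitatively, so purely to make its structure concrete, here is an illustrative weighted comparison in Python. The factor names come from the paragraph above; the weights and scoring are invented and carry no official meaning:

    # Illustrative structure of the newsworthiness balancing test: weigh
    # public-interest factors against the severity of harm. Weights and
    # scores are invented for illustration only.

    def is_newsworthy(public_interest_factors, harm_severity):
        """Factor scores and harm_severity are floats in [0, 1]."""
        weights = {
            "election_underway": 0.3,
            "relates_to_governance": 0.4,
            "country_has_free_press": 0.3,
        }
        public_interest = sum(
            weights[k] * public_interest_factors.get(k, 0.0) for k in weights
        )
        # Content that risks inciting violence can outweigh public interest.
        return public_interest > harm_severity

    print(is_newsworthy({"election_underway": 1.0, "relates_to_governance": 1.0}, 0.4))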

Read the full speech below.

Facebook

For those of you who don’t know me, which I suspect is most of you, I used to be a politician – I spent two decades in European politics, including as Deputy Prime Minister in the UK for five years.

And perhaps because I acquired a taste for controversy in my time in politics, a year ago I came to work for Facebook.

I don’t have long with you, so I just want to touch on three things: I want to say a little about Facebook; about how we are getting ourselves ready for the 2020 election; and about our basic attitude towards political speech.

So…Facebook. 

As a European, I’m struck by the tone of the debate in the US around Facebook. Here you have this global success story, invented in America, based on American values, that is used by a third of the world’s population.

A company that has created 40,000 US jobs in the last two years, is set to create 40,000 more in the coming years, and contributes tens of billions of dollars to the economy. And with plans to spend more than $250 billion in the US in the next four years.

And while Facebook is subject to a lot of criticism in Europe, in India where I was earlier this month, and in many other places, the only place where it is being proposed that Facebook and other big Silicon Valley companies should be dismembered is here.

And whilst it might surprise you to hear me say this, I understand the underlying motive which leads people to call for that remedy – even if I don’t agree with the remedy itself.

Because what people want is that there should be proper competition, diversity, and accountability in how big tech companies operate – with success comes responsibility, and with power comes accountability.

But chopping up successful American businesses is not the best way to instill responsibility and accountability. For a start, Facebook and other US tech companies not only face fierce competition from each other for every service they provide – for photo and video sharing and messaging there are rival apps with millions or billions of users – but they also face increasingly fierce competition from their Chinese rivals. Giants like Alibaba, TikTok and WeChat.

More importantly, pulling apart globally successful American businesses won’t actually do anything to solve the big issues we are all grappling with – privacy, the use of data, harmful content and the integrity of our elections. 

Those things can and will only be addressed by creating new rules for the internet, new regulations to make sure companies like Facebook are accountable for the role they play and the decisions they take.

That is why we argue in favor of better regulation of big tech, not the break-up of successful American companies. 

Elections

Now, elections. It is no secret that Facebook made mistakes in 2016, and that Russia tried to use Facebook to interfere with the election by spreading division and misinformation. But we’ve learned the lessons of 2016. Facebook has spent the three years since building its defenses to stop that happening again.

  • Cracking down on fake accounts – the main source of fake news and malicious content – preventing millions from being created every day;
  • Bringing in independent fact-checkers to verify content;
  • Recruiting an army of people – now 30,000 – and investing hugely in artificial intelligence systems to take down harmful content.

And we are seeing results. Last year, a Stanford report found that interactions with fake news on Facebook were down by two-thirds since 2016.

I know there’s also a lot of concern about so-called deepfake videos. We’ve recently launched an initiative called the Deepfake Detection Challenge, working with the Partnership on AI, companies like Microsoft and universities like MIT, Berkeley and Oxford, to find ways to detect this new form of manipulated content so that we can identify them and take action.

But even when the videos aren’t as sophisticated – such as the now infamous Speaker Pelosi video – we know that we need to do more.

As Mark Zuckerberg has acknowledged publicly, we didn’t get to that video quickly enough and too many people saw it before we took action. We must and we will get better at identifying lightly manipulated content before it goes viral and provide users with much more forceful information when they do see it.

We will be making further announcements in this area in the near future.

Crucially, we have also tightened our rules on political ads. Political advertising on Facebook is now far more transparent than anywhere else – including TV, radio and print advertising.

People who want to run these ads now need to submit ID and information about their organization. We label the ads and let you know who’s paid for them. And we put these ads in a library for seven years so that anyone can see them.

Political speech

Of course, stopping election interference is only part of the story when it comes to Facebook’s role in elections. Which brings me to political speech.

Freedom of expression is an absolute founding principle for Facebook. Since day one, giving people a voice to express themselves has been at the heart of everything we do. We are champions of free speech and defend it in the face of attempts to restrict it. Censoring or stifling political discourse would be at odds with what we are about.

In a mature democracy with a free press, political speech is a crucial part of how democracy functions. And it is arguably the most scrutinized form of speech that exists.

In newspapers, on network and cable TV, and on social media, journalists, pundits, satirists, talk show hosts and cartoonists – not to mention rival campaigns – analyze, ridicule, rebut and amplify the statements made by politicians.

At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves.

To use tennis as an analogy, our job is to make sure the court is ready – the surface is flat, the lines painted, the net at the correct height. But we don’t pick up a racket and start playing. How the players play the game is up to them, not us.

We have a responsibility to protect the platform from outside interference, and to make sure that when people pay us for political ads we make it as transparent as possible. But it is not our role to intervene when politicians speak.

That’s why I want to be really clear today – we do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.

Of course, there are exceptions. Broadly speaking they are two-fold: where speech endangers people; and where we take money, which is why we have more stringent rules on advertising than we do for ordinary speech and rhetoric.

I was an elected politician for many years. I’ve had both words and objects thrown at me, I’ve been on the receiving end of all manner of accusations and insults.

It’s not new that politicians say nasty things about each other – that wasn’t invented by Facebook. What is new is that now they can reach people with far greater speed and at a far greater scale. That’s why we draw the line at any speech which can lead to real world violence and harm.

I know some people will say we should go further. That we are wrong to allow politicians to use our platform to say nasty things or make false claims. But imagine the reverse.

Would it be acceptable to society at large to have a private company in effect become a self-appointed referee for everything that politicians say? I don’t believe it would be. In open democracies, voters rightly believe that, as a general rule, they should be able to judge what politicians say themselves.  

Conclusion

So, in conclusion, I understand the debate about big tech companies and how to tackle the real concerns that exist about data, privacy, content and election integrity. But I firmly believe that simply breaking them up will not make the problems go away. The real solutions will only come through new, smart regulation instead.

And I hope I have given you some reassurance about our approach to preventing election interference, and some clarity over how we will treat political speech in the run up to 2020 and beyond.

Thank you.

You can’t advertise that: The big list of prohibited ads across social and search platforms

Amy Gesenhues

Part of managing successful ad campaigns involves knowing what types of ad content are disallowed and what’s restricted across the social and search ad landscape. Most prohibited content (counterfeit goods, illegal products and services, etc.) and restricted content (political ads, alcohol, etc.) follow similar standards from one platform to the next, but each company has its own set of rules.

For marketers who are often tasked with getting ad campaigns up and running at a moment’s notice, knowing what ad content may be blocked by an automatic system could be a lifesaver, especially for the social media ad manager who spends her time in the trenches, uploading creative, setting ad filters and waiting for approval.

Across all social and search ad platforms, the standard rules apply for prohibited ads: no promoting counterfeit goods, tobacco, illegal products or services. No promotions that include trademark or copyright infringement or fraudulent and deceptive practices. Restricted ad content – ads you can run, but with certain limitations – is a bit more varied from platform to platform. Some platforms make their rules easy to follow or refrain from getting too much into the minutiae of things, while others are very specific. The following list gives marketers a general idea of each platform’s prohibited and restricted ad guidelines, while also calling out the more unique policies from site to site.
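
Since the prohibited categories largely overlap from platform to platform, a marketer could encode the common denominators as a quick pre-flight screen before uploading creative. A hypothetical Python sketch with abbreviated keyword lists; platform review remains the final arbiter:

    # Hypothetical pre-flight screen of ad copy against prohibited
    # categories that recur across platforms. Keyword lists are
    # abbreviated stand-ins, not any platform's real rules.

    COMMON_PROHIBITED = {
        "counterfeit": ["counterfeit", "replica", "knock-off"],
        "tobacco": ["cigarette", "tobacco", "vape"],
        "weapons": ["firearm", "ammunition"],
    }

    def preflight(ad_copy):
        """Return the prohibited categories the copy appears to touch."""
        text = ad_copy.lower()
        return [cat for cat, terms in COMMON_PROHIBITED.items()
                if any(term in text for term in terms)]

    print(preflight("Premium replica watches, shipped worldwide"))
    # -> ['counterfeit']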

Facebook and Instagram

Facebook’s prohibited ad content across its family of apps includes the standards: no ads promoting illegal products or services, tobacco products or firearms and weapons. It also prohibits ads for surveillance equipment or any ad content that includes third-party infringement (no ads that violate copyright, trademark, privacy, publicity or other personal or proprietary rights).  

But there are a few topics worth noting. For example, the company does not allow ads that lead to a non-functioning landing page: “This includes landing page content that interferes with a person’s ability to navigate away from the page.” You cannot advertise for payday loans, paycheck advancement services or bail bonds. And here’s one that makes anyone wonder if there was a specific instance that inspired the rule: Facebook does not allow the sale of body parts.

For restricted content, advertisers wanting to promote online dating services must receive permission from the platform before running ads; the same goes for political and issue-related ads and for cryptocurrency products and services. Promotions around gambling, state lotteries, OTC drugs and online pharmacies also come with restrictions.

Drug and alcohol addiction treatment programs in the U.S. must first be certified via LegitScript before they can apply to run ads on Facebook’s platforms. And ads for weight loss products and plans must be targeted to users age 18 years and older.

Google and YouTube

Google has recently taken efforts to simplify and standardize its content policies. It didn’t actually change or update its rules around allowed and disallowed ads, but instead reorganized how it presents content policies and restrictions across AdSense, AdMob and Ad Manager.

“One consistent piece of feedback we’ve heard from our publishers is that they want us to further simplify our policies, across products, so that they are easier to understand and follow,” wrote Google’s Director of Sustainable Ads Scott Spencer on the Inside AdSense blog.

Google keeps its prohibited and restricted ads general, outlining a high-level overview of what’s prohibited and what’s restricted. Prohibited ad content includes:

  • Counterfeit goods
  • Dangerous products or services
  • Ads that enable dishonest behavior
  • Inappropriate content

Google also separates out ad practices it prohibits: abusing the ad network, misrepresentation, and data collection and use (“Our advertising partners should not misuse this information, nor collect it for unclear purposes or without appropriate security measures”).

The company’s list of restricted ad content is more comprehensive, but still stays within the usual parameters without any odd items – like human body parts. Google’s restricted content ad policies include:

  • Adult content
  • Alcohol
  • Copyrights
  • Gambling and games
  • Healthcare and medicines
  • Political content
  • Financial services
  • Trademarks
  • Legal requirements (all ads must comply with the laws and regulations pertaining to the location where the ad is displayed).

Google keeps its ad policies at a high level for the most part, a tactic that gives the company more control to decide on a case-by-case basis what’s allowed and what’s not.

LinkedIn

LinkedIn is a Microsoft-owned platform, but its prohibited and restricted ad policies are separate from the rules outlined for Microsoft and Bing. In addition to the usual disallowed content, LinkedIn’s list of prohibited ads has some interesting entries. For example, the site does not allow ads for downloadable ringtones and occult pursuits (“Ads for fortune telling, dream interpretations and individual horoscopes are prohibited, except when the emphasis is on amusement rather than serious interpretation”).

Also, instead of having restrictive measures around political ads, LinkedIn prohibits any political ad content, same as its parent company: no ads advocating for or against a political candidate or promoting ballot propositions.

LinkedIn’s restricted ad content includes the following:

  • Alcohol
  • Animal products
  • Dating services
  • Soliciting funds
  • Medical devices
  • Short-term loans and financial services.

One side note about LinkedIn’s ad policies: the company specifically states that it prohibits ads that are offensive to good taste. “This means ads must not be, for example, hateful, vulgar, sexually suggestive or violent. In special circumstances, LinkedIn may determine that an ad that was acceptable is no longer appropriate as we update our policies to reflect new laws or clarify our position.”

Microsoft (Bing)

Microsoft’s disallowed and restricted ad policies, which include rules for Bing search ads, can be confusing to follow. The company has published a one-page list of “Restricted and disallowed content policies,” but within that page are links to more detailed pages for “Disallowed Content Policies” and “Disallowed and restricted products and services policies.”

Microsoft disallows any election-related content, including ads for political parties, candidates and ballot measures. Ads promoting fundraising efforts for political candidates, parties, PACs and ballot measures are also prohibited.

As with other platforms, Microsoft doesn’t allow weapons to be advertised on its platforms. This includes firearms and ammunition, but also knives: “Knives that are positioned as weapons or whose primary use is violence, including switchblade knives, disguised knives, buckle knives, lipstick case knives, air gauge knives, knuckle knives and writing pen knives.”

In Brazil, India and Vietnam, Microsoft does not allow advertising that promotes infant feeding products such as baby formula, feeding bottles, rubber nipples or baby food of any kind.

To get a clear understanding of Microsoft’s disallowed ads versus the ads that can run but only with restrictions, advertisers need to review the company’s “Restricted and disallowed content policies” — as opposed to its “Disallowed Content Policies” page — where it outlines specific rules and regulations.

Pinterest

Pinterest’s prohibited ad content guidelines follow the standard themes. No ads for:

  • Drugs and paraphernalia
  • Endangered species and live animals
  • Illegal products and services
  • Counterfeit goods
  • Sensitive content
  • Tobacco
  • Unacceptable business practices
  • Weapons and explosives

Pinterest defines “Sensitive content” as anything it deems divisive or disturbing. For example, language or imagery that is offensive or profane, excessively violent or gory, vulgar or sickening, or politically, culturally or racially divisive or insensitive. It also does not allow content that capitalizes on controversial or tragic events – or references to sensitive health and medical conditions.

Pinterest does not allow any “Adult and nudity content” in ads on its platform. Also, ads containing clickbait are disallowed. Like LinkedIn, it prohibits political campaign ads.

The company keeps its list of restricted ad content simple, with a detailed outline of what it will and won’t allow in each category. For example, ads that include contests, sweepstakes and Pinterest incentives are restricted. Advertisers are asked not to require users to save a specific image or suggest that Pinterest in any way sponsors or endorses the promotion. It does state specifically that advertisers are not allowed to promote anything that “Directs people to click on Pinterest buttons to get money, prizes or deals.”

Pinterest’s other restricted ad content includes:

  • Alcohol
  • Financial products and services (ads promoting cryptocurrencies and payday loans are prohibited)
  • Gambling products and services (no ads for lotteries, gambling gaming apps or gambling websites)
  • Healthcare products and services

In terms of its healthcare-related ads, Pinterest does allow ads promoting products like eyeglasses or contact lenses, Class I and II medical devices and OTC, non-prescription medicines. It does not allow ads for weight loss or appetite suppressant pills and supplements, or promotions that claim unrealistic cosmetic results.

Reddit

Reddit’s list of prohibited and restricted ads follows suit with the other social platforms. Disallowed ads include promotions for counterfeit goods, hazardous products or services, illegal or fraudulent products or services and more of the same standard policies. It states specifically that advertisers are prohibited from using inappropriate targeting: “All targeting must be relevant, appropriate, and in compliance with relevant legal obligations of the advertiser.”

Reddit does not allow advertisements for addiction treatment centers and services, and it does not accept advertising pertaining to political issues, elections or candidates outside of the U.S. It has a very specific list of financial services and products that are disallowed, including bail bonds, payday loans, debt assistance programs, cryptocurrency wallets, credit and debit cards, and “get rich quick schemes.”

Any advertiser wanting to promote gambling-related services must have their ads manually approved and certified by Reddit: “In order to be approved, the advertiser must be actively working with a Reddit Sales Representative.” This does not include ads for gaming promotions where nothing of value is exchanged, gambling-related merchandise or hotel-casinos where the ad is focused on the hotel.

And while Reddit does allow ads for dating sites, apps and services, it prohibits any centered on infidelity, fetish communities or any that discriminate by excluding persons of specific races, sexualities, religions or political affiliations.

Snapchat

Snapchat’s prohibited ads include the usual suspects, but there are also entries that appear designed with its younger audience in mind. For example, the platform states specifically that it does not allow ads that “Encourage Snapping and driving or other dangerous behaviors.”

Also, it disallows ads intended to “shock the user” and ads for app installs from sources other than the official app store for the user’s device. Other prohibited ads include any promotions that involve:

  • Infringing content
  • Deceptive content
  • Hateful or discriminatory content
  • Inappropriate content

Snapchat’s restricted ads for alcohol include a list of 18 countries where alcohol ads cannot be placed. Alcohol ad campaigns that run in allowed countries must not appeal particularly to minors or encourage excessive consumption of alcoholic beverages. They must also refrain from glamorizing alcohol, “Or otherwise misrepresent the effects of consuming alcohol.” Snapchat requires alcohol advertisers to include warning labels such as “Please drink responsibly” within their ad copy.

Also, alcohol promotions must be targeted to users who meet the legal drinking age requirement within the country where the ads run. The same goes for gambling and lottery-related ads – they must be targeted to users who meet the legal age requirement to gamble.

Same as Reddit, Snapchat allows ads for dating services, but they must be targeted to users age 18 and over, and cannot include provocative, overtly sexual content or reference prostitution. Also, Snapchat does not allow ads that promote infidelity.

Many of Snapchat’s restrictive ad policies are by country. For example, it only permits targeting lottery-related ads to 14 countries, including Brazil, Iraq, Italy, Poland and Russia – but not the U.S. Snapchat does not permit targeting ads for online dating services to Bahrain, Egypt, Kuwait, Oman, Qatar, Saudi Arabia and the United Arab Emirates. Advertisers cannot target ads for OTC medicines in Colombia, Iraq, Lebanon, Romania, Spain and Turkey. It also does not permit targeting ads for condoms in Bahrain, Ireland, Kuwait, Lebanon, Monaco, Oman, Poland and Qatar.

In other words, if you are an advertiser managing multiple ad campaigns for various brands across multiple countries, you should probably bookmark Snapchat’s ad restrictions page.
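
Country-keyed restrictions like these map naturally onto a small lookup table. An illustrative Python sketch built from the country lists above; the category keys and overall structure are hypothetical:

    # Lookup table for the country-based Snapchat restrictions described
    # above (ISO country codes; category keys are hypothetical).
    SNAPCHAT_GEO_BLOCKS = {
        "dating": {"BH", "EG", "KW", "OM", "QA", "SA", "AE"},
        "otc_medicine": {"CO", "IQ", "LB", "RO", "ES", "TR"},
        "condoms": {"BH", "IE", "KW", "LB", "MC", "OM", "PL", "QA"},
    }

    def can_target(category, country_code):
        return country_code not in SNAPCHAT_GEO_BLOCKS.get(category, set())

    print(can_target("dating", "QA"))  # False: Qatar is blocked for dating ads
    print(can_target("dating", "US"))  # True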

Twitter

Twitter’s list of prohibited and restricted ads is arguably the easiest to follow. There are no out-of-the-ordinary ads disallowed on the platform, and its restrictive policies are the same standard rules applied across the social ad landscape.

The one area where Twitter distinguishes its policies from other platforms is by stating specifically that it prohibits ads promoting malware products and has restrictions around promotions for software downloads.

It’s worth noting that the ad content policies listed here for each of the platforms are as they stand now, but social platforms have a history of regularly changing their ad content policies. This has happened most notably during the past year and a half with political advertisements. Facebook has gone back and forth with its rules around cryptocurrency ads, totally banning them in January 2018 and then reversing its policies six months later. And it wasn’t until last year that Facebook began restricting weapon accessory ads to users 18 years and older.

As ad policies change across platforms, Marketing Land will be sure to update our list.


About The Author

Amy Gesenhues is a senior editor for Third Door Media, covering the latest news and updates for Marketing Land, Search Engine Land and MarTech Today. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs, SoftwareCEO, and Sales and Marketing Management Magazine.

16 social media updates for marketers in 2019… so far

Aleh Barysevich

Social media is a living organism. Things change all the time. They change in response to developers’ ideas, user requests, scandals, politics and the rise of social awareness. People behind social media networks never sleep. They test new features, algorithms, ads and designs. They are doing their best to keep you attached to your phone, even though one would assume it’s literally impossible to spend more time on social media than we do already.

For marketers, it’s vital to stay up-to-date with how social media develops. Every feature and every update might become crucial to us. Then again, it might not. A lot about social media is pointlessly hyped and has no real significance. So it’s also important to filter out the noise and stay aware of just the things that might affect your marketing campaigns, your paid and organic reach, and your choice of social network.

This is a post of exactly such updates. We’re only halfway through 2019, but as you’ll see, there have already been enough changes to start rethinking your social media marketing strategy, and more are rolling out all the time.

Google+

First, as of April 2019, Google+ is gone. All profiles and pages have been deleted. I’m sure you already know that, but it’s something that couldn’t be left out of a list of social media updates.

Google explained that it decided to close down the network “due to low usage and challenges involved in maintaining a successful product that meets consumers’ expectations.”

I bet you don’t care since you never used it in the first place. I am with you on that.

TikTok

In case you’re not 100% sure what this is, TikTok is a video-sharing app that evolved from Musical.ly in 2017. In China, it’s branded as Douyin. The app lets users create and share 15-second videos, much like Vine used to. Right now, the app boasts more than 500 million users and is especially popular with younger audiences. Kids and teenagers make short skits, lip-sync, and create cringe-inducing videos and cooking instructables.

This year, TikTok dived into advertising. Just like the big players, it now offers interest-based targeting, custom audiences and pixel tracking, as well as targeting by age, gender, location, network and operating system.

TikTok marketing campaigns were getting popular even before the ads were out. It’s only logical that marketing use of the platform is about to blow up.

Facebook

Through scandals, scandals and more scandals, a lot has changed in the way Facebook works this year. And even more is about to change. Let’s go through the most significant updates.

Custom Ads become more transparent

In the spirit of transparency, Facebook decided to let users know more about why they are targeted by a specific Facebook Ad.

The “Why am I seeing this?” explanation now includes the name of the business that uploaded the user’s information to Facebook, potential involvement from agencies, Facebook Marketing Partners or other partners and any other sharing of custom audiences that may have taken place.

You can also see all the active ads a Page is running in the Ad Library, which lets you discover what your competitors are up to in terms of Facebook advertising.

Facebook has also expanded its testing with ads in News Feed and Marketplace search results. Yet, marketers point out that Facebook is unlikely to give its users much control over sharing their data – after all, that’s how the company is making money.

Targeting for certain ads is restricted

In pursuing the noble goal to fight discrimination, Facebook restricted targeting options for some companies. Housing, employment and credit ads can no longer be targeted by age, race or gender. Facebook explained that these changes came as the result of settlement agreements with civil rights organizations and will help protect people from discrimination.

Changes in the News Feed algorithm

Apart from Mark’s goal to make Facebook transparent but not too transparent, there’s also another one: to make the social media platform more like a “living room.” This is why we’re seeing the 2019 algorithm update that prioritizes close friends.

Facebook knows how close you are with your friends (in fact, it knows much more). The social network analyzes what you have in common with each person and how you interact to determine whose posts you care about the most. Here’s what Facebook has to say about the algorithm update:

“We look at the patterns that emerge from the results, some of which include being tagged in the same photos, continuously reacting and commenting on the same posts and checking-in at the same places — and then use these patterns to inform our algorithm. This direct feedback helps us better predict which friends people may want to hear from most.”
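
As a rough illustration of how signals like these could feed a ranking, here is a toy closeness score. The signal names and weights are invented for the example; Facebook’s actual model is learned from data rather than hand-tuned like this.

```python
# Toy "close friends" score: a weighted sum of interaction signals.
# Signals and weights are invented for illustration only.
WEIGHTS = {
    "co_tagged_photos": 3.0,   # tagged in the same photos
    "mutual_engagement": 1.0,  # reacting/commenting on the same posts
    "shared_checkins": 2.0,    # checking in at the same places
}

def closeness(signals: dict) -> float:
    return sum(WEIGHTS.get(name, 0.0) * count for name, count in signals.items())

friends = {
    "alice": {"co_tagged_photos": 12, "mutual_engagement": 40, "shared_checkins": 3},
    "bob":   {"co_tagged_photos": 1,  "mutual_engagement": 5},
}
# Rank friends so posts from the closest ones surface first.
ranked = sorted(friends, key=lambda name: closeness(friends[name]), reverse=True)
print(ranked)  # ['alice', 'bob']
```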

However, there’s also another update to the algorithm: prioritizing certain Pages and Groups.

Turns out, even though the people behind Facebook (robots behind Facebook?) genuinely want you to feel surrounded by friends and relatives on the platform, they also don’t mind you seeing some Business Pages and Groups – as long as you’re truly interested in their information and will click on the links. Here’s how Facebook puts it:

“Similar to the above update, we use these responses to identify signs that someone might find a link worth their time. We then combine these factors with information we have about the post, including the type of post, who it’s from and the engagement it’s received, to more accurately predict whether people are likely to find a link valuable.”

Video algorithm updated

Videos didn’t go unnoticed by Facebook either. Since the beginning of May, the following factors impact video rankings (a toy scoring sketch follows the list):

  • Loyalty and intent: videos that people seek out and return to are prioritized.
  • Viewing duration: videos that hold the user’s attention for at least one minute are prioritized.
  • Originality: unoriginal and repurposed content is limited, as is content from Pages involved in sharing schemes.
  • Length: longer videos (three minutes or more) are favored.
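
The toy score below shows how factors like these might combine. The weights, thresholds and formula are assumptions made for illustration; Facebook has not published the actual math.

```python
# Toy video ranking score based on the four factors above.
# All weights and thresholds are invented for illustration.
def video_score(return_views: int, avg_view_seconds: float,
                is_original: bool, length_seconds: float) -> float:
    score = 2.0 * return_views            # loyalty/intent: people come back to it
    if avg_view_seconds >= 60:            # attention held for at least one minute
        score += 5.0
    if length_seconds >= 180:             # three minutes or longer is favored
        score += 3.0
    if not is_original:                   # repurposed content is demoted
        score *= 0.3
    return score

print(video_score(return_views=4, avg_view_seconds=95,
                  is_original=True, length_seconds=240))  # 16.0
```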

New tools for small businesses introduced

Recently Facebook announced new tools that are meant to simplify the management and promotion of small businesses on Facebook.

Automated Ads

Automated Ads let you create a marketing plan for your business with little time or effort on your side. All you have to do is answer a few questions about your business and your goals, and the tool will create up to six different versions of your ad, provide tailored audience suggestions, recommend a budget and send you timely notifications about your ad.

Appointments management

This feature lets customers book a business’s services through Facebook and Instagram. You can also send appointment reminders, customize your menu of services, display availability and manage appointments from your business Page. All your appointments can be synced with your calendar or another appointment management tool.

Video editing tools

Facebook videos also got a bunch of editing additions:

  • Automatic cropping
  • Video trimming
  • Image and text overlays
  • Single image templates
  • New fonts
  • New stickers and templates for seasonal ads
  • Creating multiple videos with various aspect ratios for News Feed and Stories from a single video

New ways to manage notifications are introduced

While it doesn’t seem like a big thing, Facebook notifications used to play a significant role in user experience as well as in marketing practices. Marketers preferred Facebook Groups to Facebook Pages and Live videos to recorded videos, and used other little tricks so that users would get notifications of their activity.

Now, however, things have changed. You can clear and mute all push notifications and also choose whether you want to see Notification Dots and for which categories.

Facebook cryptocurrency is announced

In June, Facebook officially announced Calibra, a new Facebook subsidiary for financial services, and its first project: the Libra cryptocurrency, along with a digital wallet.

The announcement on the topic says that the aim is to enable money transfers for “almost anyone with a smartphone, as easily and instantly as a text message and at a low to no cost.” It also points out that Calibra is a separate entity from Facebook, which means that Facebook won’t have access to your financial data if you use Libra.

Yet, Facebook faces many challenges trying to become a part of the financial sector, and it’s too early to predict whether it will overcome them.

Instagram

Brands can promote sponsored posts from influencers

On Instagram, you can now turn an influencer’s sponsored post into an ad with a notation that says, “Paid partnership with XX.” This allows brands to set ad targeting for their influencers’ posts. The feature will require some setup on both the brand’s and the influencer’s side.

Other Instagram updates didn’t make our top list but are worth a look if your marketing centers on this platform specifically.

Twitter 

Twitter rolled out a new desktop design. It has more customization options and is generally very different from what it used to be. The biggest change is that the top navigation bar has been moved to the left sidebar. It contains bookmarks, lists, your profile, and a new explore tab. Direct messages have also changed: they now show conversations and sent messages in the same window. You also get more options for themes and color schemes. Twitter ArtHouse also recently launched to give brands more access to creators and influencers.

Limiting the number of accounts users can follow

In its quest against spammers, Twitter has lowered the number of accounts a user can follow per day from 1,000 to 400. Twitter claims that you’ll be just fine with this change, and truth be told, unless spamming is your best marketing method, you probably will be.

LinkedIn

LinkedIn decided to improve ad targeting and give its ads a fresh look. So in 2019, it released a number of important updates:

Lookalike audiences

LinkedIn now combines your customer persona with LinkedIn’s own data to find your target audience. It finds people similar to those who have already shown an interest in your brand, for example by visiting your website.
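
Conceptually, a lookalike audience is a similarity search: find users whose features resemble a seed audience of people who already engaged with you. A minimal sketch with invented feature vectors and plain cosine similarity might look like this; LinkedIn’s actual method is proprietary.

```python
import math

# Toy lookalike expansion: rank candidate users by cosine similarity
# to the centroid of a seed audience. Feature vectors are invented.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

seed = [[0.9, 0.1, 0.8],   # users who showed interest, e.g. visited your site
        [0.8, 0.2, 0.9]]
centroid = [sum(col) / len(seed) for col in zip(*seed)]

candidates = {"u1": [0.85, 0.15, 0.9], "u2": [0.1, 0.9, 0.05]}
lookalikes = sorted(candidates, key=lambda u: cosine(candidates[u], centroid),
                    reverse=True)
print(lookalikes)  # ['u1', 'u2']: u1 most resembles the seed audience
```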

Interest targeting

LinkedIn interest targeting was introduced in January 2019, and it was recently updated to include Bing integration. Now the social network targets users based on “a combination of your audience’s professional interests on LinkedIn and the professional topics and content your audience engages with through Microsoft’s Bing search engine, in a way that respects member privacy.”

Audience templates

Templates provide any user with a selection of over 20 predefined B2B audiences. They include audience characteristics such as member skills, job titles, groups and so on. The feature will be especially helpful to beginner marketers.

New post reactions

New post reactions didn’t come out of nowhere: according to LinkedIn, they reflect the kinds of reactions people most often express on the platform. The reactions are designed to raise overall engagement on LinkedIn and to help users better understand the feedback on their posts.

Algorithm update

In 2019, LinkedIn began emphasizing posts that trigger constructive dialogue over frantic sharing, which often just means a post has gone viral. The algorithm also prioritizes people you know and topics you care about.

Pinterest

Pinterest is probably the tamest social media platform out there: you don’t hear about data leaks, political scandals or Trump’s Pinterest tweets, because (luckily) they don’t exist. Yet for many businesses, it’s the platform a significant share of their customers come from. So Pinterest decided to improve the customer experience.

New customer experience features

New features have been added to help sellers reach and convert more customers:

  • Shop a brand feature: beneath Product Pins, users can now view a new section with more products from that specific brand.
  • Personalized shopping recommendations.
  • Catalog: brands can now upload their full catalog to Pinterest and turn their products into shoppable Pins.

Conclusion

These were the most significant social media updates of 2019 so far. Hopefully, you’ll be able to adjust your strategy to these updates and even benefit from some of them. 


Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.


About The Author

Aleh Barysevich is Founder and Chief Marketing Officer at companies behind SEO PowerSuite, professional software for full-cycle SEO campaigns, and Awario, a social media monitoring app. He is a seasoned SEO expert and speaker at major industry conferences, including 2018’s SMX London, BrightonSEO and SMX East.



Removing Coordinated Inauthentic Behavior in Thailand, Russia, Ukraine and Honduras

By Nathaniel Gleicher, Head of Cybersecurity Policy

In the past week, we removed multiple Pages, Groups and accounts that were involved in coordinated inauthentic behavior on Facebook and Instagram. We found four separate, unconnected operations that originated in Thailand, Russia, Ukraine and Honduras. We didn’t find any links between the campaigns we’ve removed, but all created networks of accounts to mislead others about who they were and what they were doing.

We’re constantly working to detect and stop this type of activity because we don’t want our services to be used to manipulate people. We’re taking down these Pages, Groups and accounts based on their behavior, not the content they posted. In each of these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action. We have shared information about our analysis with law enforcement, policymakers and industry partners.

We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing challenge. We’re committed to continually improving to stay ahead. That means building better technology, hiring more people and working more closely with law enforcement, security experts and other companies.

What We’ve Found So Far

We removed 12 Facebook accounts and 10 Facebook Pages for engaging in coordinated inauthentic behavior that originated in Thailand and focused primarily on Thailand and the US. The people behind this small network used fake accounts to create fictitious personas and run Pages, increase engagement, disseminate content, and also to drive people to off-platform blogs posing as news outlets. They also frequently shared divisive narratives and comments on topics including Thai politics, geopolitical issues like US-China relations, protests in Hong Kong, and criticism of democracy activists in Thailand. Although the people behind this activity attempted to conceal their identities, our review found that some of this activity was linked to an individual based in Thailand associated with New Eastern Outlook, a Russian government-funded journal based in Moscow.

  • Presence on Facebook: 12 accounts and 10 Pages.
  • Followers: About 38,000 accounts followed one or more of these Pages.
  • Advertising: Less than $18,000 in spending for ads on Facebook paid for in US dollars.

We identified these accounts through an internal investigation into suspected Thailand-linked coordinated inauthentic behavior. Our investigation benefited from information shared by local civil society organizations.

Below is a sample of the content posted by some of these Pages:

Further, last week, ahead of the election in Ukraine, we removed 18 Facebook accounts, nine Pages, and three Groups for engaging in coordinated inauthentic behavior that originated primarily in Russia and focused on Ukraine. The people behind this activity created fictitious personas, impersonated deceased Ukrainian journalists, and engaged in fake engagement tactics. They also operated fake accounts to increase the popularity of their content, deceive people about their location, and to drive people to off-platform websites. The Page administrators and account owners posted content about Ukrainian politics and news, including topics like Russia-Ukraine relations and criticism of the Ukrainian government.

  • Presence on Facebook: 18 Facebook accounts, 9 Pages, and 3 Groups.
  • Followers: About 80,000 accounts followed one or more of these Pages, about 10 accounts joined at least one of these Groups.
  • Advertising: Less than $100 spent on Facebook ads paid for in rubles.

We identified these accounts through an internal investigation into suspected Russia-linked coordinated inauthentic behavior, ahead of the elections in Ukraine. Our investigation benefited from public reporting including by a Ukrainian fact-checking organization.

Below is a sample of the content posted by some of these Pages:

Caption: “When it seems that there is no bottom. Ukrainian TV anchor hosted a show dressed up as Hitler“

Caption: “Poroshenko’s advisor is accused of organizing sex business in Europe.”

Caption: “A journalist from the US: there is a complete collapse of people’s hopes in Ukraine after Maidan“

Caption: “The art of being a savage”

Also last week, ahead of the election in Ukraine, we removed 83 Facebook accounts, two Pages, 29 Groups, and five Instagram accounts engaged in coordinated inauthentic behavior that originated in Russia and the Luhansk region in Ukraine and focused on Ukraine. The people behind this activity used fake accounts to impersonate military members in Ukraine, manage Groups posing as authentic military communities, and also to drive people to off-platform sites. They also operated Groups — some of which shifted focus from one political side to another over time — disseminating content about Ukraine and the Luhansk region. The Page admins and account owners frequently posted about local and political news including topics like the military conflict in Eastern Ukraine, Ukrainian public figures and politics.

  • Presence on Facebook and Instagram: 83 Facebook accounts, 2 Pages, 29 Groups, and 5 Instagram accounts.
  • Followers: Fewer than 1,000 accounts followed one or more of these Pages, under 35,000 accounts joined at least one of these Groups, and around 1,400 people followed one or more of these Instagram accounts.
  • Advertising: Less than $400 spent on Facebook and Instagram ads paid for in US dollars.

We identified this activity through an internal investigation into suspected coordinated inauthentic behavior in the region, ahead of the elections in Ukraine. Our investigation benefited from information shared with us by local law enforcement in Ukraine.

Below is a sample of the content posted by some of these Pages:

Caption: “Ukrainians destroy their past! For many years now, as we have witnessed a deep crisis of the post-Soviet Ukrainian statehood. The republic, which in the early 1990s had the best chance of successful development among all the new independent states, turned out to be the most unsuccessful. And the reasons here are not economic and, I would even say, not objective. The root of Ukrainian problems lies in the ideology itself, which forms the basis of the entire national-state project, and in the identity that it creates. It is purely negativistic, and any social actions based on it are, in one way or another, directed not at creation, but at destruction. Ukrainian ship led the young state to a crisis, the destruction of the spiritual, cultural, historical and linguistic community of the Russian and Ukrainian peoples, affecting almost all aspects of public life. Do not forget that the West played a special role in this, which since 2014 has openly interfered in the politics of another state and in fact sponsored an armed coup d’état.”

Caption: “Suicide is the only way out for warriors of the Armed Forces of Ukraine To date, a low moral and psychological level in the ranks of the armed forces of the Armed Forces of Ukraine has not risen. Since the beginning of the year in the conflict zone in the Donbas a considerable number of suicide cases have been recorded among privates of the Armed Forces of Ukraine. Nobody undertakes to give the exact number because the command does not report on every such state of emergency and tries to hide the fact of its own incompetence. Despite all the assurances of Kiev about the readiness for the offensive, the mood of the warriors is not happy. The Ukrainian army is not morally prepared, as it were, beautifully told Poroshenko about full-scale hostilities. During the four years of the war, there were many promises on his part, but in fact nothing. Initially, warriors come to the place of deployment morally unstable. Inadequate drinking and drug use exacerbates the already deplorable state of the heroes. Many have opened their eyes to the real causes of the war and they do not want to kill their fellow citizens. But no one will listen to them. And in recent time, lack of staff in the Armed Forces of Ukraine is being addressed with recruiting rookies, who undergo 3-day training and seeing a machine gun for the first time only at the frontline. So the warriors of light are losing their temper and they see hanging or shooting themselves as the only way out. Only recently cases of suicide have been made public by the Armed Forces of Ukraine, and no one will know what happened before. It is not good to show the glorious heroes of Ukraine from the dark side.”

Caption: “…algorithm and peculiarities of providing medical assistance to Ukrainian military. In the end of the visit representatives of Lithuanian and Ukrainian sides discussed questions of joint interest and expressed opinions on particular aspects of developing the sphere of collaboration even further.”

Headline: “Breaking: In the Bakhmutki area, the Armed Forces of Ukraine destroyed a car with peaceful citizens in it, using an anti-tank guided missile”

Finally, we removed 181 accounts and 1,488 Facebook Pages that were involved in domestic-focused coordinated inauthentic activity in Honduras. The individuals behind this activity operated fake accounts. They also created Pages designed to look like user profiles — using false names and stock images — to comment and amplify positive content about the president. Although the people behind this campaign attempted to conceal their identities, our review found that some of this activity was linked to individuals managing social media for the government of Honduras.

  • Presence on Facebook: 181 Facebook accounts and 1,488 Pages.
  • Followers: About 120,000 accounts followed one or more of these Pages.
  • Advertising: More than $23,000 spent on Facebook ads paid for in US dollars and Honduran lempiras.

We identified these accounts through an internal investigation into suspected coordinated inauthentic behavior in the region.

Below is a sample of the content posted by some of these Pages:

Caption: “Celebration first year of service of the national force and gangs. We are celebrating the first anniversary of service of the national force and gangs; all Hondurans must know what they face and what is the future. We have been beaten by violence, but the state of Honduras must solve it. It is so much the admiration and confidence of the Honduran people, that nowadays, the Fnamp HN receives the highest recognition that an institution in security can have. We recognize its work by the level of commitment, to the point of losing their lives for the cause of others.”

Caption: “Happy birthday, mother. I thank God because my mother – Elvira -, birthday today one more year of life. I will always be grateful to her for strengthening me and supporting me with her advice, and for being an example of faith and solidarity with the neighbor. God bless you, give you health and allow you to be many more years with us!”

Caption: “Happy Sunday! May the first step you take in the day be to move forward and leave a mark, fill yourself with energy and optimism, shield yourself from negativity with hope, and a desire to change Honduras.”

 




Understanding Social Media and Conflict

At Facebook, a dedicated, multidisciplinary team is focused on understanding the historical, political and technological contexts of countries in conflict. Today we’re sharing an update on their work to remove hate speech, reduce misinformation and polarization, and inform people through digital literacy programs.

By Samidh Chakrabarti, Director of Product Management, Civic Integrity; and Rosa Birch, Director of Strategic Response

Last week, we were among the thousands who gathered at RightsCon, an international summit on human rights in the digital age, where we listened to and learned from advocates, activists, academics, and civil society. It also gave our teams an opportunity to talk about the work we’re doing to understand and address the way social media is used in countries experiencing conflict. Today, we’re sharing updates on: 1) the dedicated team we’ve set up to proactively prevent the abuse of our platform and protect vulnerable groups in future instances of conflict around the world; 2) fundamental product changes that attempt to limit virality; and 3) the principles that inform our engagement with stakeholders around the world.

We care about these issues deeply and write today’s post not just as representatives of Facebook, but also as concerned citizens who are committed to protecting digital and human rights and promoting vibrant civic discourse. Both of us have dedicated our careers to working at the intersection of civics, policy and tech.

Last year, we set up a dedicated team spanning product, engineering, policy, research and operations to better understand and address the way social media is used in countries experiencing conflict. The people on this team have spent their careers studying issues like misinformation, hate speech and polarization. Many have lived or worked in the countries we’re focused on. Here are just a few of them:

Ravi, Research Manager
With a PhD in social psychology, Ravi has spent much of his career looking at how conflicts can drive division and polarization. At Facebook, Ravi analyzes user behavior data and surveys to understand how content that doesn’t violate our Community Standards — such as posts from gossip pages — can still sow division. This analysis informs how we reduce the reach and impact of polarizing posts and comments.

Sarah, Program Manager
Beginning as a student in Cameroon, Sarah has devoted nearly a decade to understanding the role of technology in countries experiencing political and social conflict. In 2014, she moved to Myanmar to research the challenges activists face online and to support community organizations using social media. Sarah helps Facebook respond to complex crises and develop long-term product solutions to prevent abuse — for example, how to render Burmese content in a machine-readable format so our AI tools can better detect hate speech.

Abhishek, Research Scientist
With a masters in computer science and a doctorate in media theory, Abhishek focuses on issues including the technical challenges we face in different countries and how best to categorize different types of violent content. For example, research in Cameroon revealed that some images of violence being shared on Facebook helped people pinpoint — and avoid — conflict areas. Nuances like this help us consider the ethics of different product solutions, like removing or reducing the spread of certain content.

Emilar, Policy Manager
Prior to joining Facebook, Emilar spent more than a decade working on human rights and social justice issues in Africa, including as a member of the team that developed the African Declaration on Internet Rights and Freedoms. She joined the company to work on public policy issues in Southern Africa, including the promotion of affordable, widely available internet access and human rights both on and offline.

Ali, Product Manager
Born and raised in Iran in the 1980s and 90s, Ali and his family experienced violence and conflict firsthand as Iran and Iraq were involved in an eight-year conflict. Ali was an early adopter of blogging and wrote about much of what he saw around him in Iran. As an adult, Ali received his PhD in computer science but remained interested in geopolitical issues. His work on Facebook’s product team has allowed him to bridge his interest in technology and social science, effecting change by identifying technical solutions to root out hate speech and misinformation in a way that accounts for local nuances and cultural sensitivities.

In working on these issues, local groups have given us invaluable input on our products and programs. No one knows more about the challenges in a given community than the organizations and experts on the ground. We regularly solicit their input on our products, policies and programs, and last week we published the principles that guide our continued engagement with external stakeholders.

In the last year, we visited countries such as Lebanon, Cameroon, Nigeria, Myanmar and Sri Lanka to speak with affected communities, better understand how they use Facebook, and evaluate what types of content might promote depolarization in these environments. These findings have led us to focus on three key areas: removing content and accounts that violate our Community Standards; reducing the spread of borderline content that has the potential to amplify and exacerbate tensions; and informing people about our products and the internet at large. To address content that may lead to offline violence, our team is particularly focused on combating hate speech and misinformation.

Removing Bad Actors and Bad Content

Hate speech isn’t allowed under our Community Standards. As we shared last year, removing this content requires supplementing user reports with AI that can proactively flag potentially violating posts. We’re continuing to improve our detection in local languages such as Arabic, Burmese, Tagalog, Vietnamese, Bengali and Sinhalese. In the past few months, we’ve been able to detect and remove considerably more hate speech than before. Globally, we increased our proactive rate — the percent of the hate speech Facebook removed that we found before users reported it to us — from 51.5% in Q3 2018 to 65.4% in Q1 2019.
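
The proactive rate itself is just a ratio. Here is the arithmetic behind the quoted figures; the absolute counts below are invented solely to reproduce the percentages.

```python
# Proactive rate = share of removed hate speech that was flagged by
# Facebook's systems before any user reported it.
# The counts below are invented to match the quoted percentages.
def proactive_rate(flagged_proactively: int, total_removed: int) -> float:
    return flagged_proactively / total_removed

print(f"{proactive_rate(515, 1000):.1%}")  # 51.5% (Q3 2018)
print(f"{proactive_rate(654, 1000):.1%}")  # 65.4% (Q1 2019)
```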

We’re also using new applications of AI to more effectively combat hate speech online. Memes and graphics that violate our policies, for example, get added to a photo bank so we can automatically delete similar posts. We’re also using AI to identify clusters of words that might be used in hateful and offensive ways, and tracking how those clusters vary over time and geography to stay ahead of local trends in hate speech. This allows us to remove viral text more quickly.
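
As a sketch of the “clusters over time and geography” idea, the toy tracker below counts how often terms from one flagged cluster appear per region and week, the kind of tally that could surface a local spike. The cluster contents and post stream are invented placeholders.

```python
from collections import Counter, defaultdict

# Toy trend tracker for a cluster of flagged terms, bucketed by
# (region, week). All terms and posts are invented placeholders.
FLAGGED_CLUSTER = {"termA", "termB", "codewordC"}

posts = [
    {"region": "XX", "week": 23, "tokens": ["hello", "termA"]},
    {"region": "XX", "week": 23, "tokens": ["codewordC", "termA"]},
    {"region": "YY", "week": 23, "tokens": ["hello", "world"]},
]

trend = defaultdict(Counter)
for post in posts:
    hits = FLAGGED_CLUSTER.intersection(post["tokens"])
    if hits:
        trend[(post["region"], post["week"])].update(hits)

print(dict(trend))  # {('XX', 23): Counter({'termA': 2, 'codewordC': 1})}
```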

Still, we have a long way to go. Every time we want to use AI to proactively detect potentially violating content in a new country, we have to start from scratch and source a high volume of high quality, locally relevant examples to train the algorithms. Without this context-specific data, we risk losing language nuances that affect accuracy.

Globally, when it comes to misinformation, we reduce the spread of content that’s been deemed false by third-party fact-checkers. But in countries with fragile information ecosystems, false news can have more serious consequences, including violence. That’s why last year we updated our global violence and incitement policy such that we now remove misinformation that has the potential to contribute to imminent violence or physical harm. To enforce this policy, we partner with civil society organizations who can help us confirm whether content is false and has the potential to incite violence or harm.

Reducing Misinformation and Borderline Content

We’re also making fundamental changes to our products to address virality and reduce the spread of content that can amplify and exacerbate violence and conflict. In Sri Lanka, we have explored adding friction to message forwarding so that people can only share a message with a certain number of chat threads on Facebook Messenger. This is similar to a change we made to WhatsApp earlier this year to reduce forwarded messages around the world. It also delivers on user feedback that most people don’t want to receive chain messages.
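
A minimal sketch of that kind of forwarding friction follows. The limit value is an assumption; the post doesn’t say what number is being tested.

```python
# Toy forwarding friction: a message may be shared with at most
# MAX_FORWARD_THREADS chat threads. The limit of 5 is an assumption.
MAX_FORWARD_THREADS = 5

class Message:
    def __init__(self) -> None:
        self.forwarded_to: set[str] = set()

    def forward(self, thread_id: str) -> bool:
        if (thread_id not in self.forwarded_to
                and len(self.forwarded_to) >= MAX_FORWARD_THREADS):
            return False                    # friction: block further forwarding
        self.forwarded_to.add(thread_id)
        return True

msg = Message()
print([msg.forward(f"thread-{i}") for i in range(7)])
# [True, True, True, True, True, False, False]
```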

And, as our CEO Mark Zuckerberg detailed last year, we have started to explore how best to discourage borderline content, or content that toes the permissible line without crossing it. This is especially true in countries experiencing conflict because borderline content, much of which is sensationalist and provocative, has the potential for more serious consequences in these countries. 

We are, for example, taking a more aggressive approach against people and groups who regularly violate our policies. In Myanmar, we have started to reduce the distribution of all content shared by people who have demonstrated a pattern of posting content that violates our Community Standards, an approach that we may roll out in other countries if it proves successful in mitigating harm. In cases where individuals or organizations more directly promote or engage in violence, we will ban them under our policy against dangerous individuals and organizations. Reducing the distribution of content is, however, another lever we can pull to combat the spread of hateful content and activity.

We have also extended the use of artificial intelligence to recognize posts that may contain graphic violence and comments that are potentially violent or dehumanizing, so we can reduce their distribution while they undergo review by our Community Operations team. If this content violates our policies, we will remove it. By limiting visibility in this way, we hope to mitigate against the risk of offline harm and violence.

Giving People Additional Tools and Information

Perhaps most importantly, we continue to meet with and learn from civil society who are intimately familiar with trends and tensions on the ground and are often on the front lines of complex crises. To improve communication and better identify potentially harmful posts, we have built a new tool for our partners to flag content to us directly. We appreciate the burden and risk that this places on civil society organizations, which is why we’ve worked hard to streamline the reporting process and make it secure and safe.

Our partnerships have also been instrumental in promoting digital literacy in countries where many people are new to the internet. Last week, we announced a new program with GSMA called Internet One-on-One (1O1). The program, which we first launched in Myanmar with the goal of reaching 500,000 people in three months, offers one-on-one training sessions that include a short video on the benefits of the internet and how to stay safe online. We plan to partner with other telecom companies and introduce similar programs in other countries. In Nigeria, we introduced a 12-week digital literacy program for secondary school students called Safe Online with Facebook. Developed in partnership with Re:Learn and Junior Achievement Nigeria, the program has worked with students at over 160 schools and covers a mix of online safety, news literacy, wellness tips and more, all facilitated by a team of trainers across Nigeria.

We know there’s more to do to better understand the role of social media in countries experiencing conflict. We want to be part of the solution so that as we mitigate abuse and harmful content, people can continue using our services to communicate. In the wake of the horrific terrorist attacks in Sri Lanka, more than a quarter million people used Facebook’s Safety Check to mark themselves as safe and reassure loved ones. In the same vein, thousands of people in Sri Lanka used our crisis response tools to make offers and requests for help. These use cases — the good, the meaningful, the consequential — are ones that we want to preserve.

This is some of the most important work being done at Facebook and we fully recognize the gravity of these challenges. By tackling hate speech and misinformation, investing in AI and changes to our products, and strengthening our partnerships, we can continue to make progress on these issues around the world.




Day 1 of F8 2019: Building New Products and Features for a Privacy-Focused Social Platform

Today, more than 5,000 developers, creators and entrepreneurs from around the world came together for F8, our annual conference about the future of technology.  

Mark Zuckerberg opened the two-day event with a keynote on how we’re building a more privacy-focused social platform — giving people spaces where they can express themselves freely and feel connected to the people and communities that matter most. He shared how this is a fundamental shift in how we build products and run our company.

Mark then turned it over to leaders from Facebook, Instagram, WhatsApp, Messenger and AR/VR to share more announcements. Here are the highlights:

Messenger

As we build for a future of more private communications, Messenger announced several new products and features to help create closer ties between people, businesses and developers.

A Faster, Lighter App
People expect their messaging apps to be fast and reliable. We’re excited to announce that we’re re-building the architecture of Messenger from the ground up to be faster and lighter than ever before. This completely re-engineered Messenger will begin to roll out later this year.

A Way to Watch Together
When you’re not together with friends or family in your physical living room, Messenger will now let you discover and watch videos from Facebook together in real time. You’ll be able to seamlessly share a video from the Facebook app on Messenger and invite others to watch together while messaging or on a video chat. This could be your favorite show, a funny clip or even home videos. We are testing this now and will begin to roll it out globally later this year.

A Desktop App for Messenger
People want to seamlessly message from any device, and sometimes they just want a little more space to share and connect with the people they care about most. So today we’re announcing a Messenger Desktop app. You can download the app on your desktop — both Windows and MacOS — and have group video calls, collaborate on projects or multi-task while chatting in Messenger. We are testing this now and will roll it out globally later this year.

Better Ways to Connect with Close Friends
Close connections are built on messaging, which is why we are making it easier for you to find the content from the people you care about the most. In Messenger, we are introducing a dedicated space where you can discover Stories and messages with your closest friends and family. You’ll also be able to share snippets from your own day and can choose exactly who sees what you post. This will roll out later this year.

Helping Businesses Connect with Customers
We’re making it even easier for businesses to connect with potential customers by adding lead generation templates to Ads Manager. There, businesses can easily create an ad that drives people to a simple Q&A in Messenger to learn more about their customers. And to make it easier to book an appointment with businesses like car dealerships, stylists or cleaning services, we’ve created an appointment experience so people can book appointments within a Messenger conversation.

WhatsApp

Business Catalog
People and businesses are finding WhatsApp a great way to connect. In the months ahead people will be able to see a business catalog right within WhatsApp when chatting with a business. With catalogs, businesses can showcase their goods so people can easily discover them.

Facebook

People have always come to Facebook to connect with friends and family, but over time it’s become more than that – it’s also a place to connect with people who share your interests and passions. Today we’re making changes that put Groups at the center of Facebook and sharing new ways Facebook can help bring people together offline.

A Fresh Design
We’re rolling out FB5, a fresh new design for Facebook that’s simpler, faster, more immersive and puts your communities at the center. Overall, we’ve made it easier to find what you’re looking for and get to your most-used features.

People will start seeing some of these updates in the Facebook app right away, and the new desktop site will come in the next few months.

Putting Groups First
This redesign makes it easy for people to go from public spaces to more private ones, like Groups. There are tens of millions of active groups on Facebook. When people find the right one, it often becomes the most meaningful part of how they use Facebook. Today, more than 400 million people on Facebook belong to a group that they find meaningful. That’s why we’re introducing new tools that will make it easier for you to discover and engage with groups of people who share your interests:

  • Redesigned Groups tab to make discovery easier: We’ve completely redesigned the Groups tab and made discovery even better. The tab now shows a personalized feed of activity across all your groups. And the new discovery tool with improved recommendations lets you quickly find groups you might be interested in.
  • Making it easier to participate in Groups: We’re also making it easier to get relevant group recommendations elsewhere in the app like in Marketplace, Today In, the Gaming tab, and Facebook Watch. You may see more content from your groups in News Feed. And, you will be able to share content directly to your groups from News Feed, the same way you do with friends and family.
  • New features to support specific communities: Different communities have different needs, so we’re introducing new features for different types of groups. Through new Health Support groups, members can post questions and share information without their name appearing on a post. Job groups will have a new template for employers to post openings, and easier ways for job seekers to message the employer and apply directly through Facebook. Gaming groups will get a new chat feature so members can create threads for different topics within the group. And because we know people use Facebook Live to sell things in Buy and Sell groups, we’re exploring ways to let buyers easily ask questions and place orders without leaving the live broadcast.

Connecting with Your Secret Crush
On Facebook Dating, you can opt in to discover potential matches within your own Facebook communities: events, groups, friends of friends and more. It’s currently available in Colombia, Thailand, Canada, Argentina, and Mexico — and today, we’re expanding to 14 new countries: Philippines, Vietnam, Singapore, Malaysia, Laos, Brazil, Peru, Chile, Bolivia, Ecuador, Paraguay, Uruguay, Guyana, and Suriname.

We’re also announcing a new feature called Secret Crush. People have told us that they believe there is an opportunity to explore potential romantic relationships within their own extended circle of friends. So now, if you choose to use Secret Crush, you can select up to nine of your Facebook friends who you want to express interest in. If your crush has opted into Facebook Dating, they will get a notification saying that someone has a crush on them. If your crush adds you to their Secret Crush list, it’s a match! If your crush isn’t on Dating, doesn’t create a Secret Crush list, or doesn’t put you on their list, no one will know that you’ve entered a friend’s name.

A Way to Meet New Friends
We’ve created Meet New Friends to help people start friendships with new people from their shared communities like a school, workplace or city. It’s opt-in, so you will only see other people that are open to meeting new friends, and vice versa. We’ve started testing Meet New Friends in a few places, and we’ll roll it out wider soon. We will also be integrating Facebook Groups, making it possible to meet new friends from your most meaningful communities on Facebook.

Shipping on Marketplace
People will soon be able to ship Marketplace items anywhere in the continental US and pay for their purchases directly on Facebook. For sellers this means reaching more buyers and getting paid securely, and for buyers this means shopping more items — near or far.

A New Events Tab
This summer we’re introducing the new Events tab so you can see what’s happening around you, get recommendations, discover local businesses, and coordinate with friends to make plans to get together.

Instagram

We rolled out new ways to connect people with each other and their interests on Instagram.

The Ability to Shop from Creators
Starting next week, you can shop inspiring looks from the creators you love without leaving Instagram. Instead of taking a screenshot or asking for product details in comments or Direct, you can simply tap to see exactly what your favorite creators are wearing and buy it on the spot. Anyone in our global community will be able to shop from creators. We’ll begin testing this with a small group of creators next week, with plans to expand access over time. For more information on shopping from creators, click here.

A Way to Fundraise for Causes
Starting today, you can raise money for a nonprofit you care about directly on Instagram. Through a donation sticker in Stories, you can create a fundraiser and mobilize your community around a cause you care about — with 100% of the money raised on Instagram going to the nonprofit you’re supporting. This will be available in the US now and we’re working to bring it to more countries. To learn more, check out the Instagram Help Center here.

A New and Improved Camera
In the coming weeks, we’re introducing a new camera design including Create Mode, which gives you an easy way to share without a photo or video. This new camera will make it easier to use popular creative tools like effects and interactive stickers, so you can express yourself more freely.

AR/VR

We’re building technology around how we naturally interact with people. We announced a number of new ways we’re helping people connect more deeply in video calls through Portal. We shared more on our work to bring AR experiences to more people and platforms, and we opened pre-orders for Oculus Quest and Oculus Rift S.

Portal Expands Internationally this Fall
Beginning with an initial expansion from the US to Canada, we’ll also offer the Portal and Portal+ in Europe this fall. We’re bringing WhatsApp to Portal — and we’ll be bringing end-to-end encryption to all calls. You’ll be able to call any of your friends who use WhatsApp — or Messenger — on their Portal, or on their phone.

Beyond Video Calling
This summer we are adding new ways to connect on Portal. You’ll be able to say, “Hey Portal, Good Morning” to get updates on birthdays, events and more. We’re also adding the ability to send private video messages from Portal to your loved ones. And, through our collaboration with Amazon, we’re adding more visual features and Alexa skills to Portal, including Flash Briefings, smart home control and the Amazon Prime Video app later this year. You’ll also be able to use Facebook Live on Portal, so you can share special moments with your closest friends in real time.

SuperFrame to Display Your Favorite Photos
Portal’s SuperFrame lets you display your favorite photos when you’re not on a call. You can already add photos to SuperFrame from your Facebook feed, and starting today, you’ll be able to add your favorites from Instagram as well. And later this summer, our new mobile app will let you add photos to Portal’s SuperFrame directly from your camera roll.

Spark AR Expands to More People
Since last F8, we’ve seen over one billion people use AR experiences powered by Spark AR, with hundreds of millions using AR each month across Facebook, Messenger, Instagram and Portal. Starting today, the new Spark AR Studio supports both Windows and Mac and includes new features and functionality for creation and collaboration. We’re also opening Instagram to the entire Spark AR creator and developer ecosystem this summer.

Oculus Quest and Rift S Pre-Orders Open
Our two newest virtual reality headsets — Oculus Quest and Oculus Rift S — ship May 21. Oculus Quest, our first all-in-one VR gaming system, lets you pick up and play almost anywhere without being tethered to a PC. For those with a gaming PC, Rift S gets you into the most immersive content that VR has to offer. Both start at $399 USD and you can pre-order today at oculus.com.

We’re also launching the new Oculus for Business later this year. We’re adding Oculus Quest to the program and will provide a suite of tools designed to help companies reshape the way they do business through VR.

With each feature and product announced today, we want to help people discover their communities, deepen their connections, find new opportunities and simply have fun. We’re excited to see all the ways developers, creators and entrepreneurs use these tools as we continue to build more private ways for people to communicate. For more details on today’s news, see our Developer Blog, Engineering Blog, Oculus Blog, Messenger Blog, and Instagram Press Center. You can also watch all F8 keynotes on the Facebook for Developers Page.

Downloads:

You can find the full press kit here.




Remove, Reduce, Inform: New Steps to Manage Problematic Content

By Guy Rosen, VP of Integrity, and Tessa Lyons, Head of News Feed Integrity

Since 2016, we have used a strategy called “remove, reduce, and inform” to manage problematic content across the Facebook family of apps. This involves removing content that violates our policies, reducing the spread of problematic content that does not violate our policies and informing people with additional information so they can choose what to click, read or share. This strategy applies not only during critical times like elections, but year-round.

Today in Menlo Park, we met with a small group of journalists to discuss our latest remove, reduce and inform updates to keep people safe and maintain the integrity of information that flows through the Facebook family of apps:

REMOVE  (read more)

  • Rolling out a new section on the Facebook Community Standards site where people can track the updates we make each month.
  • Updating the enforcement policy for Facebook groups and launching a new Group Quality feature.

REDUCE (read more)

  • Kicking off a collaborative process with outside experts to find new ways to fight more false news on Facebook, more quickly.
  • Expanding the content the Associated Press will review as a third-party fact-checker.
  • Reducing the reach of Facebook Groups that repeatedly share misinformation.
  • Incorporating a “Click-Gap” signal into News Feed ranking to ensure people see less low-quality content in their News Feed.

INFORM (read more)

  • Expanding the News Feed Context Button to images. (Updated on April 10, 2019 at 11AM PT to include this news.)
  • Adding Trust Indicators to the News Feed Context Button on English and Spanish content.
  • Adding more information to the Facebook Page Quality tab.
  • Allowing people to remove their posts and comments from a Facebook Group after they leave the group.
  • Combatting impersonations by bringing the Verified Badge from Facebook into Messenger.
  • Launching Messaging Settings and an Updated Block feature on Messenger for greater control.
  • Launched Forward Indicator and Context Button on Messenger to help prevent the spread of misinformation.

Remove

Facebook

We have Community Standards that outline what is and isn’t allowed on Facebook. They cover things like bullying, harassment and hate speech, and we remove content that goes against our standards as soon as we become aware of it. Last year, we made it easier for people to understand what we take down by publishing our internal enforcement guidelines and giving people the right to appeal our decisions on individual posts.

The Community Standards apply to all parts of Facebook, but different areas pose different challenges when it comes to enforcement. For the past two years, for example, we’ve been working on something called the Safe Communities Initiative, with the mission of protecting people from harmful groups and harm in groups. By using a combination of the latest technology, human review and user reports, we identify and remove harmful groups, whether they are public, closed or secret. We can now proactively detect many types of violating content posted in groups before anyone reports them, and sometimes before anyone sees them at all.

Similarly, Stories presents its own set of enforcement challenges when it comes to both removing and reducing the spread of problematic content. The format’s ephemerality means we need to work even faster to remove violating content. The creative tools that let people add text, stickers and drawings to photos and videos can be abused to mask violating content. And because people enjoy stringing together multiple Story cards, we have to view Stories holistically: if we evaluate individual story cards in a vacuum, we might miss standards violations.
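
A toy version of that holistic review might score a Story’s cards together rather than one by one, as below; the scores and thresholds are invented for illustration.

```python
# Toy holistic Story review: a single clearly violating card is enough,
# but several borderline cards can also add up to a violation.
# Scores and thresholds are invented for illustration.
def story_violates(card_scores: list[float],
                   card_threshold: float = 0.9,
                   story_threshold: float = 1.5) -> bool:
    if any(score >= card_threshold for score in card_scores):
        return True                             # one card crosses the line alone
    return sum(card_scores) >= story_threshold  # cards judged as a whole

print(story_violates([0.5, 0.6, 0.7]))  # True: borderline cards add up
print(story_violates([0.2, 0.1]))       # False
```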

In addition to describing this context and history, today we discussed how we will be:

  • Rolling out a new section on the Community Standards site where people can track the updates we make each month. We revisit existing policies and draft new ones for several reasons, including to improve our enforcement accuracy or to get ahead of new trends raised by content reviewers, internal discussion, expert critique or external engagement. We’ll track all policy changes in this new section and share specifics on why we made the more substantive ones. Starting today, in English.
  • Updating the enforcement policy for groups and launching a new Group Quality feature. As part of the Safe Communities Initiative, we will be holding the admins of Facebook Groups more accountable for Community Standards violations. Starting in the coming weeks, when reviewing a group to decide whether or not to take it down, we will look at admin and moderator content violations in that group, including member posts they have approved, as a stronger signal that the group violates our standards. We’re also introducing a new feature called Group Quality, which offers an overview of content removed and flagged for most violations, as well as a section for false news found in the group. The goal is to give admins a clearer view into how and when we enforce our Community Standards. Starting in the coming weeks, globally.

For more information on Facebook’s “remove” work, see these videos on the people and process behind our Community Standards development.

Reduce

Facebook

There are types of content that are problematic but don’t meet the standards for removal under our Community Standards, such as misinformation and clickbait. People often tell us that they don’t like seeing this kind of content and while we allow it to be posted on Facebook, we want to make sure it’s not broadly distributed.

Over the last two years, we’ve focused heavily on reducing misinformation on Facebook. We’re getting better at enforcing against fake accounts and coordinated inauthentic behavior; we’re using both technology and people to fight the rise in photo and video-based misinformation; we’ve deployed new measures to help people spot false news and get more context about the stories they see in News Feed; and we’ve grown our third-party fact-checking program to include 45 certified fact-checking partners who review content in 24 languages.

Today, members of the Facebook News Feed team discussed how we will be:

  • Kicking off a collaborative process with outside experts to find new ways to fight more false news, more quickly. Our professional fact-checking partners are an important piece of our strategy against misinformation, but they face challenges of scale: there simply aren’t enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time. One promising idea to bolster their work, which we’ve been exploring since 2017, involves groups of Facebook users pointing to journalistic sources to corroborate or contradict claims made in potentially false content. Over the next few months, we’re going to build on those explorations, continuing to consult a wide range of academics, fact-checking experts, journalists, survey researchers and civil society organizations to understand the benefits and risks of ideas like this. We need to find solutions that support original reporting, promote trusted information, complement our existing fact-checking programs and allow people to express themselves freely — without having Facebook be the judge of what is true. Any system we implement must have safeguards against gaming and manipulation, avoid introducing personal biases and protect minority voices. We’ll share updates with the public throughout this exploratory process and solicit feedback from broader groups of people around the world. Starting today, globally.
  • Expanding the role of The Associated Press as part of the third-party fact-checking program. As part of our third-party fact-checking program, AP will be expanding its efforts by debunking false and misleading video misinformation and Spanish-language content appearing on Facebook in the US. Starting today, in the US.
  • Reducing the reach of Groups that repeatedly share misinformation. When people in a group repeatedly share content that has been rated false by independent fact-checkers, we will reduce that group’s overall News Feed distribution. Starting today, globally.
  • Incorporating a “Click-Gap” signal into News Feed ranking. Ranking uses many signals to ensure people see less low-quality content in their News Feed. This new signal, Click-Gap, relies on the web graph, a conceptual “map” of the internet in which domains with many inbound and outbound links sit at the center of the graph and domains with fewer links sit at the edges. Click-Gap looks for domains that receive a disproportionate number of outbound clicks from Facebook relative to their place in the web graph. This can be a sign that a domain is succeeding on News Feed in a way that doesn’t reflect the authority it has built outside Facebook, and that it is producing low-quality content. A simplified sketch of this scoring idea follows this list. Starting today, globally.
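
As promised above, here is a simplified sketch of the scoring idea behind Click-Gap: compare a domain’s share of outbound Facebook clicks to its share of inbound links in the wider web graph. The domain names, counts, smoothing and threshold are all invented for illustration; the production signal is certainly more sophisticated.

```python
# Toy Click-Gap scoring: a domain whose share of Facebook clicks far
# exceeds its share of inbound web links gets a large score.
def click_gap_scores(fb_clicks: dict[str, int],
                     inbound_links: dict[str, int]) -> dict[str, float]:
    total_clicks = sum(fb_clicks.values())
    total_links = sum(inbound_links.values())
    scores = {}
    for domain in fb_clicks:
        click_share = fb_clicks[domain] / total_clicks
        # Add-one smoothing so domains with no known links don't divide by zero.
        link_share = (inbound_links.get(domain, 0) + 1) / (total_links + len(fb_clicks))
        scores[domain] = click_share / link_share  # >> 1 suggests a click gap
    return scores

fb_clicks = {"established-news.example": 900_000, "clickbait.example": 850_000}
inbound_links = {"established-news.example": 120_000, "clickbait.example": 300}

for domain, score in click_gap_scores(fb_clicks, inbound_links).items():
    flag = "candidate for demotion" if score > 10 else "ok"
    print(f"{domain}: {score:.1f} ({flag})")
```
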

For more information about how we set goals for our “reduce” initiatives on Facebook, read this blog post.

Instagram

Today we discussed how Instagram is working to ensure that the content we recommend to people is both safe and appropriate for the community. We have begun reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines, limiting recommendations of those types of posts on our Explore and hashtag pages. For example, a sexually suggestive post will still appear in Feed if you follow the account that posts it, but this type of content may not appear for the broader community in Explore or on hashtag pages.

Inform

Facebook

We’re investing in features and products that give people more information to help them decide what to read, trust and share. In the past year, we began offering more information on articles in News Feed with the Context Button, which shows the publisher’s Wikipedia entry, the website’s age, and where and how often the content has been shared. We helped Page owners improve their content with the Page Quality tab, which shows them which posts of theirs were removed for violating our Community Standards or were rated “False,” “Mixture” or “False Headline” by third-party fact-checkers. We also discussed how we will be:

  • Expanding the Context Button to images. Originally launched in April 2018, the Context Button gives people more background information about the publishers and articles they see in News Feed so they can better decide what to read, trust and share. We’re now testing this feature on images that have been reviewed by third-party fact-checkers. Testing now in the US. (Updated on April 10, 2019 at 11AM PT to include this news.)
  • Adding Trust Indicators to the Context Button. The Trust Indicators are standardized disclosures, created by a consortium of news organizations known as the Trust Project, that provide clarity on a news organization’s ethics and other standards for fairness and accuracy. The indicators we display in the Context Button cover the publication’s fact-checking practices, ethics statements, corrections, ownership and funding, and editorial team. Started March 2019, on English and Spanish content.
  • Adding more information to the Page Quality tab. We’ll be providing more information in the tab over time, starting with more information in the coming months on a Page’s status with respect to clickbait. Starting soon, globally.
  • Allowing people to remove their posts and comments from a group even after they have left it. With this update, we aim to bring greater transparency and personal control to groups. Starting soon, globally.

Messenger

At today’s event, Messenger highlighted new and updated privacy and safety features that give people greater control of their experience and help people stay informed.

  • Combating impersonation by bringing the Verified Badge from Facebook into Messenger. This tool will help people avoid scammers who pretend to be high-profile people by providing a visible indicator of a verified account. Messenger continues to encourage use of the Report Impersonations tool, introduced last year, if someone believes they are interacting with a person pretending to be a friend. Starting this week, globally.
  • Launching Messaging Settings and an Updated Block feature for greater control. Messaging Settings allow you to control whether people you’re not connected to, such as friends of your friends, people with your phone number or people who follow you on Instagram can reach your Chats list. The Updated Block feature makes it easier to block and avoid unwanted contact. Starting this week, globally.
  • Launched the Forward Indicator and Context Button to help prevent the spread of misinformation. The Forward Indicator lets someone know if a message they received was forwarded by the sender, while the Context Button provides more background on shared articles. Started earlier this year, globally.


The Hunt for False News: EU Edition

By Antonia Woodford, Product Manager

Reducing the amount of false news on Facebook is always important, and critically so during times of heightened civic discourse, such as the lead-up to major elections. That’s why limiting the spread of misinformation has been a key pillar of our investments in election integrity.

As a company, when it comes to misinformation, we prioritize reducing the harm it causes and often look at its impacts in the aggregate. However, we can learn a lot about trends and nuances by examining specific cases and how they spread. In this second edition of “The Hunt for False News,” we travel to the EU, ahead of May’s parliamentary elections, to take a deeper look at some examples of misinformation that circulated there recently.

What we saw
In January, a photo of a letter supposedly written by the headmistress of a Dresden primary school was posted to Facebook. The letter announces that in the following week, four imams would be visiting the school to introduce the children to the Koran and Islam. This “theme week” would include a compulsory visit to a mosque, and parents were encouraged to buy a Koran and avoid giving their children pork for breakfast on the day of the imams’ visit. The letter closed by saying that the school was pleased to be bringing parents and children closer to Islam, as it is an important topic in Germany.

Was it true?
No. German fact-checker Correctiv used image editing software to take a closer look at the letterhead, which had been blacked out in the photo. By increasing the contrast and brightness, they showed that the blacked-out section was not, in fact, a school address, but a nonsense string of letters. Correctiv also noted that the letter had circulated on various social networks, and when a Twitter user asked several German officials for comment on the photo, the Saxon Ministry of Education tweeted back that the letter was a fake.
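
For readers curious how such an enhancement works in practice, here is a minimal sketch using the Pillow imaging library. The file name is a placeholder, and Correctiv’s actual tooling is not specified in their article; this simply demonstrates the brightness-and-contrast technique they describe.

```python
# Boost brightness and contrast so that text hidden under a weak
# "black-out" becomes readable. Requires Pillow (pip install Pillow).
from PIL import Image, ImageEnhance

img = Image.open("letter.jpg").convert("L")       # grayscale simplifies inspection
img = ImageEnhance.Brightness(img).enhance(2.0)   # lighten the dark regions
img = ImageEnhance.Contrast(img).enhance(3.0)     # spread the remaining tonal range
img.save("letter_enhanced.jpg")                   # inspect the blacked-out area
```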

What to know
False news often gains traction when it feeds off hot-button political and social issues — in this case, the growing population of Muslims in Germany. As we noted in an example about migrants and refugees in the last edition of “The Hunt for False News,” content that disparages or stirs up distrust of distinct groups of people, as this letter does, is another key trend in misinformation.

A few months ago, we expanded our fact-checking efforts to include photos, like this one, and videos. This fake letter falls into the “manipulated or fabricated” category of photo and video misinformation. (The other two major categories are out-of-context media and media with false audio or text claims.) In general, we see that photos and videos make up a greater share of fact-checked posts than article links do. In fact, in the lead-up to the US midterm elections, photos and videos made up two-thirds of fact-checker ratings in the US.

How we caught it
There are two primary ways we find stories that are likely to be false: either our machine learning systems detect potentially false stories on Facebook, or our third-party fact-checkers identify them directly. Once a potentially false story has been found — regardless of how it was identified — fact-checkers review the claims in the story, rate their accuracy and provide an explanation of how they arrived at their rating. This photo was identified via machine learning.

What we saw
A video shared on Facebook in February shows a man in a suit walking through what looks like a government assembly hall, shaking hands and dropping small cards at a number of empty seats. The caption claims that the man is using Spanish national ID cards to register absent congresspeople as “present” so they can collect their per diem payments.

Was it true?
No. First off, as fact-checker Maldito Bulo notes, the assembly hall shown in the video isn’t that of the Spanish Parliament — it’s the Ukrainian Parliament. In the Ukrainian Parliament, members must use identification cards to vote, but that is not true in the Spanish Parliament.

What to know
This is a classic example of an “out of context” video, another major category of misinformation. In the past, this video has circulated with claims that it shows the French or Brazilian parliaments, according to Maldito Bulo.

How we caught it
The video was identified by Maldito Bulo, who rated it false, leading us to downrank it in News Feed and show Maldito Bulo’s debunking article alongside the video in Related Articles. Our machine learning models picked up additional videos making the same claim and surfaced them to our fact-checkers. Maldito Bulo and another fact-checker, Newtral, rated them false, leading us to take action on them, as well.

These videos were posted in February, before we had expanded our fact-checking partnership to Spain. They were rated soon after the expansion and quickly acted on, but in the intervening time they had been shared tens of thousands of times. This underscores why it’s important for us to keep developing new methods for fighting misinformation faster and at a larger scale.

What we saw
An article from a now-defunct Dutch site citing 11 reasons to avoid getting a flu shot — including a claim that the flu shot can cause Alzheimer’s — was shared to Facebook in December 2018. The article cited research by Dr. Hugh Fudenberg supposedly showing that people who regularly get a flu shot are 10 times more likely to develop Alzheimer’s. The article link was caught early; it had been shared only about 23 times before it was fact-checked.

Was it true?
Our fact-checking partner Nu.nl determined that one of the claims in the story — that the flu shot increases the risk of Alzheimer’s — was very unlikely, and gave the overall article a “mixture” rating on that basis. Nu.nl noted that there is no published research showing the flu vaccine affects one’s chance of getting Alzheimer’s: Fudenberg is reported to have spoken about research linking flu shots and the disease at a 1997 conference, but those findings were never published, and no supporting scientific theory makes it plausible that the flu shot would affect one’s risk of the disease. Further, while it has been claimed that aluminum and mercury in flu vaccines lead to the disease, Nu.nl reports that neither substance is found in Dutch flu shots.

What to know
While we’ve been working with fact-checkers to rate articles across a range of topics, we also recently announced that we’re taking additional actions to reduce the spread of vaccine hoaxes verified as false by global health organizations, because the spread of health misinformation online can have dangerous consequences offline. We will also be informing people with authoritative information on the topic. (Learn more about our “remove, reduce, inform” framework for cleaning up your News Feed.)

How we caught it
This one was identified via machine learning. Nu.nl matched the claim about the flu vaccine and Alzheimer’s to an article they’d written on the topic in late October, which led to our downranking this Dutch article in News Feed and showing Nu.nl’s debunking article alongside it in Related Articles.

What we saw
In February 2019, a French website published an article claiming that the UN was seeking to legalize pedophilia. The text of the article was copy-and-pasted from an earlier article that has been floating around the internet for several years. The article, which was shared to Facebook the same month, suggests that the UN is demanding sexual rights for children as young as 10 years old, which would protect pedophiles from criminal prosecution and imprisonment.

Was it true?
No. The much-copied article appears to refer to a 2008 declaration by the International Planned Parenthood Federation, an advocate of sexual and reproductive health and rights that has participated in several UN commissions. According to a 2017 article by our fact-checking partner 20 Minutes, the declaration has no legal force; it asserts that “sexual rights are human rights” and proposes a framework of general principles about sexuality, as well as 10 “sexual rights.”

The declaration does contain material related to the sexuality of children, such as the principle that “the rights and protections guaranteed to people under age eighteen differ from those of adults, and must take into account the evolving capacities of the individual child to exercise rights on his or her own behalf.” However, as 20 Minutes notes, it contains nothing in favor of the legalization of pedophilia. In fact, it asserts that “all children and adolescents are entitled to enjoy the right to special protection from all forms of exploitation.”

What to know
Digging a bit further, it seems that the claims in this copy-and-pasted meme stem from multiple sources, including an interview with the writer Marion Sigaut and a 2012 article from the Center for Family and Human Rights titled “UN May Recognize Sex Rights for Ten-Year Old Children.” As with rumors offline, misinformation can get distorted as it travels across the internet, which is one of the reasons truth can be hard to ascertain online.

How we caught it
This article was found via our machine learning model, which detected it based on the similarity of its central claim to a claim that had been previously debunked by 20 Minutes. When we find possible matches like these, we surface them to fact-checkers to confirm that they are in fact the same claim. 20 Minutes reviewed this new French article and connected it to a fact-checking article they’d written in 2017, which led to our downranking the article in News Feed and showing the 20 Minutes debunk in Related Articles.
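
As a rough illustration of this kind of claim matching — surfacing new posts whose central claim resembles one a fact-checker has already debunked — here is a small sketch. Facebook’s actual models are not public; this stand-in uses TF-IDF cosine similarity from scikit-learn, and the example claims, post text and threshold are invented.

```python
# Flag a new post whose text resembles a previously debunked claim,
# so it can be surfaced to fact-checkers for confirmation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked_claims = [
    "The UN is demanding sexual rights for children as young as 10.",
    "Flu shots make you 10 times more likely to develop Alzheimer's.",
]
new_post = "Article claims the UN wants to legalize sexual rights for 10-year-olds."

vectorizer = TfidfVectorizer().fit(debunked_claims + [new_post])
claim_vecs = vectorizer.transform(debunked_claims)
post_vec = vectorizer.transform([new_post])

similarities = cosine_similarity(post_vec, claim_vecs)[0]
best = similarities.argmax()
print(f"Best match: claim #{best} (similarity {similarities[best]:.2f})")
if similarities[best] > 0.3:  # threshold is an assumption for this sketch
    print("Possible match: queue for fact-checker confirmation")
```

A production system would likely use learned text embeddings rather than raw TF-IDF, but the flow is the same: score candidate matches, then hand plausible ones to human fact-checkers rather than acting on similarity alone.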

What we saw
In January 2018, a Twitter account purporting to belong to Ebba Busch Thor, leader of the Swedish Christian Democrats or Kristdemokraterna (KD), tweeted disapprovingly of those who criticize the Swedish pension system. The post alluded to people’s concern for poor pensioners, saying that in Sweden people get the pension they deserve and that the undesirable alternative would be socialism. The account, @EbbaBuschThorKD, was revealed to be a fake and shut down, but screenshots of the tweet continued to circulate. The screenshot was shared to Facebook in January 2018 by a Page called Nej till EU-Skatt (“No to EU Tax”) and the post began recirculating in January 2019.

Was it true?
No — as our fact-checking partner Viralgranskaren (Viral Examiner) noted in their article debunking the screenshot, @EbbaBuschThorKD was a fake Twitter account that has since been shut down.

What to know
False news can have a long shelf life when its subjects remain in the spotlight. Even though this screenshot first surfaced in 2018, it saw another spike a full year later, during a months-long government deadlock following parliamentary elections in September 2018. As we noted above, another major trend in misinformation is content that disparages distinct groups of people — in this case, low-income people.

Though this particular ruse started on Twitter, fake accounts are a major vector of misinformation on Facebook, too. Blocking fake accounts is one of the most impactful steps in our fight to curb false news. In the third quarter of 2018 — the time period covered by our most recent Community Standards Enforcement Report — we disabled 754 million fake accounts, having found 99.6% of them before users reported them.

How we caught it
This image was detected via machine learning. Viralgranskaren investigated it and submitted a “false” rating and an explainer article, which led us to reduce its distribution in News Feed and show the Viralgranskaren debunk in Related Articles.


