
Facebook ends platform policy banning apps that copy its features

Facebook will now freely allow developers to build competitors to its features on its own platform. Today Facebook announced it will drop Platform Policy section 4.1, which stipulated: “Add something unique to the community. Don’t replicate core functionality that Facebook already provides.”

Facebook had previously enforced that policy selectively to hurt competitors that had used its Find Friends or viral distribution features. Apps like Vine, Voxer, MessageMe, Phhhoto and more had been cut off from Facebook’s platform for too closely replicating its video, messaging, or GIF creation tools. Find Friends is a vital API that lets users find their Facebook friends within other apps.

The move will significantly reduce the risk of building on the Facebook platform. It could also cast Facebook in a better light in the eyes of regulators: anyone seeking ways Facebook abuses its dominance will lose a talking point. And by creating a fairer and more open platform where developers can build without fear of straying too close to Facebook’s history or roadmap, it could reinvigorate the developer ecosystem.

A Facebook spokesperson provided this statement to TechCrunch:

“We built our developer platform years ago to pave the way for innovation in social apps and services. At that time we made the decision to restrict apps built on top of our platform that replicated our core functionality. These kind of restrictions are common across the tech industry with different platforms having their own variant including YouTube, Twitter, Snap and Apple. We regularly review our policies to ensure they are both protecting people’s data and enabling useful services to be built on our platform for the benefit of the Facebook community. As part of our ongoing review we have decided that we will remove this out of date policy so that our platform remains as open as possible. We think this is the right thing to do as platforms and technology develop and grow.”

The change comes after Facebook locked down parts of its platform in April for privacy and security reasons in the wake of the Cambridge Analytica scandal. Diplomatically, Facebook said it didn’t expect the change to impact its standing with regulators but it’s open to answering their questions.

Facebook shouldn’t block you from finding friends on competitors

Earlier in April, I wrote a report on how Facebook used Policy 4.1 to attack competitors it saw gaining traction. The article “Facebook shouldn’t block you from finding friends on competitors” advocated for Facebook to make its social graph more portable and interoperable so users could decamp to competitors if they felt they weren’t treated right, thereby pressuring Facebook to act better.

The policy change will apply retroactively. Old apps that lost Find Friends or other functionality will be able to submit their app for review and once approved, will regain access.

Friend lists still can’t be exported in a truly interoperable way. But at least now Facebook has enacted the spirit of that call to action. Developers won’t be in danger of losing access to the Find Friends API for treading in Facebook’s path.

Below is an excerpt from our previous reporting on how Facebook enforced Platform Policy 4.1, which until today’s change was used to hamper competitors:

  • Voxer was one of the hottest messaging apps of 2012, climbing the charts and raising a $30 million round with its walkie-talkie-style functionality. In early January 2013, Facebook copied Voxer by adding voice messaging into Messenger. Two weeks later, Facebook cut off Voxer’s Find Friends access. Voxer CEO Tom Katis told me at the time that Facebook stated his app with tens of millions of users was a “competitive social network” and wasn’t sharing content back to Facebook. Katis told us he thought that was hypocritical. By June, Voxer had pivoted toward business communications, tumbling down the app charts and leaving Facebook Messenger to thrive.
  • MessageMe had a well-built chat app that was growing quickly after launching in 2013, posing a threat to Facebook Messenger. Shortly before reaching 1 million users, Facebook cut off MessageMe’s Find Friends access. The app ended up selling for a paltry double-digit-millions price tag to Yahoo before disintegrating.
  • Phhhoto and its fate show how Facebook’s data protectionism encompasses Instagram. Phhhoto’s app that let you shoot animated GIFs was growing popular. But soon after it hit 1 million users, it got cut off from Instagram’s social graph in April 2015. Six months later, Instagram launched Boomerang, a blatant clone of Phhhoto. Within two years, Phhhoto shut down its app, blaming Facebook and Instagram. “We watched [Instagram CEO Kevin] Systrom and his product team quietly using PHHHOTO almost a year before Boomerang was released. So it wasn’t a surprise at all . . . I’m not sure Instagram has a creative bone in their entire body.”
  • Vine had a real shot at being the future of short-form video. The day the Twitter-owned app launched, though, Facebook shut off Vine’s Find Friends access. Vine let you share back to Facebook, and its six-second loops you shot in the app were a far cry from Facebook’s heavyweight video file uploader. Still, Facebook cut it off, and by late 2016, Twitter announced it was shutting down Vine.

Where Facebook AI research moves next

Five years is an awful lot of time in the tech industry. Darling startups find ways to crash and burn. Trends that seem unstoppable peter out. In the field of artificial intelligence, the past five years have been nothing short of transformative. Facebook’s AI Research lab (FAIR) turns five years old this month, and just as the social media giant has left an indelible mark on the broader culture — for better or worse — the work coming out of FAIR has had a major impact on the AI research community and entrenched itself in the way Facebook operates.

“You wouldn’t be able to run Facebook without deep learning,” Facebook Chief AI Scientist Yann LeCun tells TechCrunch. “It’s very, very deep in every aspect of the operation.”

Reflecting on the formation of his team, LeCun recalls that his central task in creating the research group was “inventing what it meant to do research at Facebook.” “Facebook didn’t have any research lab before FAIR, it was the first one, until then the company was very much focused on short-term engineering projects with 6 month deadlines if not less,” he says.

Five years after its formation, FAIR’s influence permeates the company. The group has labs in Menlo Park, New York, Paris, Montreal, Tel Aviv, Seattle, Pittsburgh and London. They’ve partnered with academic institutions and published countless papers and studies, many of which the group has enumerated in a handy five-year anniversary timeline.

“I said ‘No’ to creating a research lab for my first five years at Facebook,” CTO Mike Schroepfer wrote in a Facebook post. “In 2013, it became clear AI would be critical to the long-term future of Facebook. So we had to figure this out.”

The research group’s genesis came shortly after LeCun stopped by Mark Zuckerberg’s house for dinner. “I told [Zuckerberg] how research labs should be organized, particularly the idea of practicing open research,” LeCun said. “What I heard from him, I liked a lot, because he said openness is really in the DNA of the company.”

FAIR has the benefit of longer timelines that allow it to stay focused on maintaining its ethos. There is no “War Room” in the AI labs, and much of the group’s most substantial research ends up as published work that benefits the broader AI community. Nevertheless, in many ways AI is very much an arms race among Silicon Valley tech companies. The separation between FAIR and Facebook’s Applied Machine Learning (AML) team, which focuses more on imminent product needs, gives the group a “huge, huge amount of leeway to really think about the long-term,” LeCun says.

I chatted with LeCun about some of these long-term visions for the company, which evolved into him spitballing about what he’s working on now and where he’d like to see improvements. “First, there’s going to be considerable progress in things that we already have quite a good handle on…”

A big trend for LeCun seems to be FAIR doubling down on work that affects how seamlessly people can interact with data systems and get meaningful feedback. “We’ve had this project that is a question-and-answer system that basically can answer any question if the information is somewhere in Wikipedia. It’s not yet able to answer really complicated questions that require extracting information from multiple Wikipedia articles and cross-referencing them,” LeCun says. “There’s probably some progress there that will make the next generation of virtual assistants and data systems considerably less frustrating to talk to.”

Some of the biggest strides in machine learning over the past five years have taken place in the vision space, where machines are able to parse out what’s happening in an image frame. LeCun predicts greater contextual understanding is on its way. “You’re going to see systems that can not just recognize the main object in an image but basically will outline every object and give you a textual description of what’s happening in the image, kind of a different, more abstract understanding of what’s happening.”

FAIR has found itself tackling disparate and fundamental problems that have wide impact on how the rest of the company functions, but a lot of these points of progress sit deeper in the five-year timeline. FAIR has already made some progress in unsupervised learning: the company has published work on using some of these techniques to translate between languages that lack sufficient training data, so that, in practical terms, users needing translations from something like Icelandic to Swahili aren’t left in the cold.

As FAIR looks to its next five years, LeCun contends that there are some much bigger challenges looming on the horizon that the AI community is just beginning to grapple with. “Those are all relatively predictable improvements,” he says. “The big prize we are really after is this idea of self-supervised learning — getting machines to learn more like humans and animals and requiring that they have some sort of common sense.”

