
Facebook says it gave ‘identical support’ to Trump and Clinton campaigns

Facebook’s hundreds of pages of follow-up answers to Senators make for decidedly uninteresting reading. Give lawyers a couple of months and they will always find a way to respond non-substantively to even the most penetrating questions. One section may at least help put a few rumors to rest about Facebook’s role in the 2016 presidential campaigns, though of course much is still left to the imagination.

Senator Kamala Harris (D-CA), whose dogged questioning put Mark Zuckerberg on the back foot during the hearing, sent over several pages of follow-up questions afterwards. Among the many topics was the 2016 campaign and reports that Facebook employees were “embedded” in the Trump campaign specifically, as claimed by the person who ran the digital side of that campaign.

This has raised questions as to whether Facebook was offering some kind of premium service to one candidate or another, or whether one candidate got tips on how to juice the algorithm, how to target better, and so on.

Here are the takeaways from the answers, which you can find in full on page 167 of the document at the bottom of this post.

  • The advice to the campaigns is described as similar to that given to “other, non-political” accounts.
  • No one was “assigned full-time” on either the Trump or Clinton campaign.
  • Campaigns did not get to hand pick who from Facebook came to advise them.
  • Facebook provided “identical support” and tools to both campaigns.
  • Sales reps are trained to comply with federal election law, and to report “improper activity.”
  • No such “improper activity” was reported by Facebook employees on either campaign.
  • Facebook employees did work directly with Cambridge Analytica employees.
  • No one identified any issues with Cambridge Analytica, its data, or its intended use of that data.
  • Facebook did not work with Cambridge Analytica or related companies on other campaigns (e.g. Brexit).

It’s not exactly fire, but we don’t really need more fire these days. This at least is on the record and relatively straightforward; whatever Facebook’s sins during the election cycle may have been, it does not appear that preferential treatment of the two major campaigns was among them.

Incidentally, if you’re curious whether Facebook finally answered Sen. Harris’s questions about who made the decision not to inform users of the Cambridge Analytica issue back in 2015, or how that decision was made — no, it didn’t. In fact the silence here is so deafening it almost certainly indicates a direct hit.

Harris asked how and when Facebook came to the decision not to inform users that their data had been misappropriated, who made that decision and why, and when Zuckerberg entered the loop. Facebook’s response does not even come close to answering any of these questions:

When Facebook learned about Kogan’s breach of Facebook’s data use policies in December 2015, it took immediate action. The company retained an outside firm to assist in investigating Kogan’s actions, to demand that Kogan and each party he had shared data with delete the data and any derivatives of the data, and to obtain certifications that they had done so. Because Kogan’s app could no longer collect most categories of data due to changes in Facebook’s platform, the company’s highest priority at that time was ensuring deletion of the data that Kogan may have accessed before these changes took place. With the benefit of hindsight, we wish we had notified people whose information may have been impacted. Facebook has since notified all people potentially impacted with a detailed notice at the top of their newsfeed.

This answer has literally nothing to do with the questions.

It seems likely from the company’s careful and repeated refusal to answer this question that the story is an ugly one — top executives making a decision to keep users in the dark for as long as possible, if I had to guess.

At least on the campaign questions Facebook was more forthcoming, and its answers should put to rest several lines of speculation. Not so with this evasive maneuver.

Embedded below are Facebook’s answers to the Senate Judiciary Committee, and the other set is here:

(View this document on Scribd)

