Cambridge University hits back at Zuckerberg’s shade

Facebook CEO Mark Zuckerberg’s testimony to the House yesterday was a mostly bland performance, punctuated by frequent claims not to know or remember fundamental aspects of his own business. But he gave a curiously specific and aggressive response to a question from congressman Eliot Engel.

Starting from the premise that Facebook had been “deceived” by other players in the data misuse scandal it’s embroiled in, the congressman wondered whether Facebook intends to sue Cambridge Analytica, Professor Aleksandr Kogan and Cambridge University, perhaps for unauthorized access to computer networks or breach of contract.

“It’s something that we’re looking into,” replied Zuckerberg. “We already took action by banning [Kogan] from the platform and we’re going to be doing a full audit to make sure he gets rid of all the data that he has as well.”

But the Facebook founder also seized on the opportunity to indulge in a little suggestive shade-throwing, in what looked very much like an attempt to shift responsibility for the massive data scandal onto, of all things, one of the UK’s most prestigious universities. (Which, full disclosure, is my own alma mater.)

“To your point about Cambridge University what we’ve found now is that there’s a whole program associated with Cambridge University where a number of researchers — not just Aleksandr Kogan, although to our current knowledge he’s the only one who sold the data to Cambridge Analytica — there are a number of the researchers who are building similar apps,” said Zuckerberg.

“So we do need to understand whether there is something bad going on at Cambridge University overall that will require a stronger action from us.”

What’s curious about this response is that Zuckerberg neglects to mention that Facebook’s own staff have worked with the program he says his company has “found now”. He speaks as if Facebook only discovered the existence of the Cambridge University Psychometrics Centre, whose researchers have in fact been working with Facebook data since at least 2007, once the Cambridge Analytica story snowballed into a major public scandal last month.

One Facebook data-related project the center is involved with, the myPersonality Project, started as a student side project of David Stillwell, now deputy director of the Psychometrics Centre. According to testimony given to the UK parliament last month by former Cambridge Analytica employee Chris Wylie, it was essentially the accidental inspiration for Kogan’s thisismydigitallife quiz app.

Here’s how the project is described on the Centre’s website:

myPersonality was a popular Facebook application that allowed users to take real psychometric tests, and allowed us to record (with consent!) their psychological and Facebook profiles. Currently, our database contains more than 6,000,000 test results, together with more than 4,000,000 individual Facebook profiles. Our respondents come from various age groups, backgrounds, and cultures. They are highly motivated to answer honestly and carefully, as the only gratification that they receive for their participation is feedback on their results.

The center itself has been active within Cambridge University since 2005, conducting research, teaching and product development in pure and applied psychological assessment — and claiming to have seen “significant growth in the past twelve years as a consequence of the explosion of activity in online communication and social networks”.

And while it’s of course possible that Zuckerberg and his staff were unaware of the myPersonality Facebook app (after all, 4M harvested Facebook profiles is rather less than the up to 87M Kogan was able to extract, also apparently without Facebook noticing), it’s rather harder for Zuckerberg to deny knowledge of the fact that his company’s own staff have worked with Cambridge University researchers on projects analyzing Facebook data for psychological profiling purposes since at least 2015.

In a statement provided to TechCrunch yesterday, the University expressed surprise at Zuckerberg’s remarks to the House.

“We would be surprised if Mr Zuckerberg was only now aware of research at the University of Cambridge looking at what an individual’s Facebook data says about them,” a spokesperson told us. “Our researchers have been publishing such research since 2013 in major peer-reviewed scientific journals, and these studies have been reported widely in international media. These have included one study in 2015 led by Dr Aleksandr Spectre (Kogan) and co-authored by two Facebook employees.”

The two Facebook employees who worked alongside Kogan (who was using the surname Spectre at the time) on that 2015 study — which looked at international friendships as a class marker by examining Facebook users’ friend networks — are named in the paper as Charles Gronin and Pete Fleming.

It’s not clear whether Gronin still works for Facebook. But a LinkedIn search suggests Fleming is now head of research for Facebook-owned Instagram.

We’ve asked Facebook to confirm whether the two researchers are still on its payroll and will update this story with any response.

In its statement, Cambridge University also said it’s still waiting for Facebook to provide it with evidence regarding Kogan’s activities. “We wrote to Facebook on 21 March to ask it to provide evidence to support its allegations about Dr Kogan. We have yet to receive a response,” it told us.

For his part, Kogan has maintained he did nothing illegal, telling the Guardian last month that he’s being used as a scapegoat by Facebook.

We’ve asked Facebook to confirm what steps it’s taken so far to investigate Kogan’s actions regarding the Cambridge Analytica misuse of Facebook data — and will update this story with any response.

During his testimony to the House yesterday Zuckerberg was asked by congressman Mike Doyle when exactly Facebook had first learned about Cambridge Analytica using Facebook data — and whether specifically it had learned about it as a result of the December 2015 Guardian article.

In his testimony to the UK parliament last month, Wylie suggested Facebook might have known about the app as early as July 2014, because he said Kogan had told him he’d been in touch with some Facebook engineers to try to resolve problems with the rate at which data could be pulled off the platform by his app.

But giving a “yes” response to Doyle, Zuckerberg reiterated Facebook’s claim that the company first learned about the issue at the end of 2015, when the Guardian broke the story.

At another point during this week’s testimony Zuckerberg was also asked whether any Facebook staff had worked alongside Cambridge Analytica while the firm was embedded with the Trump campaign in 2016. On that he responded that he didn’t know.

Yet another curious aspect to this story is that Facebook hired the co-director of GSR, the company Kogan set up to license data to Cambridge Analytica — as the Guardian reported last month.

According to that report, which cited Chancellor’s LinkedIn profile (since deleted), Joseph Chancellor was hired by Facebook around November 2015, about two months after he had left GSR.

Chancellor remains listed as an employee at Facebook Research, working on human-computer interaction & UX, where his biography confirms he also used to be a researcher at the University of Cambridge…

I am a quantitative social psychologist on the User Experience Research team at Facebook. Before joining Facebook, I was a postdoctoral researcher at the University of Cambridge, and I received my Ph.D. in social and personality psychology from the University of California, Riverside. My research examines happiness, emotions, social influences, and positive character traits.

We’ve asked Facebook when exactly it hired Chancellor; for what purposes; and whether it had any concerns about employing someone who had worked for a company that had misused its own users’ data.

At the time of writing the company had not responded to these questions either.
