Facebook’s dishonesty is apparent: the company changed its data policy four months after conducting its social experiment on the news feed and site algorithm.
Facebook is now under fire for its social media experiment, which altered the site’s algorithm (and thus users’ news feeds) for about 700,000 users over a seven-day period in January 2012. The company claims that what it did was completely legal and in step with its user policy – which all Facebook users must consent to before joining. This, of course, assumes that the policy language covering experiments of this kind was in place back in 2012.
Unfortunately, Facebook’s policy included no such statement in January 2012, when the experiment took place.
Tech journalist Kashmir Hill did the research that uncovered the latest criticism of Facebook. What did she find? Facebook lied about its policy. The January 2012 policy said nothing about consenting to social media experiments; it was only updated in May 2012 – four months later – to reflect Facebook’s claim. Hill says that we’ve made an error in judgment:
“We were all relying on what Facebook’s data policy says now. In January 2012, they did not say anything about users potentially being guinea pigs made to have a crappy day for science, nor that ‘research’ is something that might happen on the platform.
Four months after this study happened, in May 2012, Facebook made changes to its data use policy, and that’s when it introduced this line about how it might use your information: ‘For internal operations, including troubleshooting, data analysis, testing, research and service improvement.’ Facebook helpfully posted a ‘red line’ version of the new policy, contrasting it with the prior version from September 2011 – which did not mention anything about user information being used in ‘research’.”
A cursory glance at Facebook’s 2011 data use policy reveals no testing language, except a statement that Facebook uses the data to introduce future “features and services” that will help Facebook learn more about its users. Facebook also claimed in its 2011 policy that it would only use user data to keep Facebook secure and protected, to alert you to a nearby event via location services, to measure the effectiveness of ads, or to encourage you to tag friends, add someone to your friend list, and so on. The language of the 2011 policy was framed around helping Facebook users. How does Facebook’s recent social experiment help users? If anything, Facebook is the only one that benefited from the study.
By May 2012, Facebook had altered the language of its data use policy to allow for “internal operations.” The company also posted a document (called “Redline of Proposed Data Use Policy”) that marks the added internal-operations statement in red, so you can see that it was added in 2012, not 2011. You can view the 2011 and 2012 policies below:
What are we to make of this? Facebook’s updated policy adds access to lots of information that, prior to 2012, was kept out of Facebook’s reach. We can understand Facebook gathering information about ads and users’ experience with ads, but how does “internal operations” factor into that? And as for Facebook’s decision to share that data with university researchers, where does the policy grant that permission? If anything, Facebook’s policy only discusses third-party advertisers sharing information about you with Facebook – not the other way around.
In short, Facebook’s claims don’t match the evidence. Is it any wonder that the FTC sternly warned Facebook about honoring the privacy of WhatsApp users during the WhatsApp acquisition?