
What Facebook's Security Double Standard Tells Us About How It Views User Privacy


Last week was a busy one for the conversation around digital privacy. Facebook fired an employee of its security division for allegedly misusing his privileged access to stalk women, while the Wall Street Journal broke the story that the company’s own employees are protected by a special privacy feature that alerts them when security staff access their profiles. In the space of a single week, it emerged that Facebook’s security staff can access any of the platform’s two billion ordinary profiles and rummage through them without the user ever knowing their information has been accessed, while if those same security staff access a fellow Facebook employee’s profile to investigate misconduct or illegal activity, that employee receives a notice that they are being investigated. What does this double standard teach us about how social media platforms view their users?

Perhaps the most interesting element of Facebook’s system, originally called “Sauron alert” and now known simply as “Security Watchdog,” is that alerts are sent directly to the employees whose profiles are accessed, rather than to corporate legal, human resources or some other watchdog division. One could imagine a process in which all accesses to employee profiles are routed to a dedicated staff in the legal department to review for appropriateness. Under this model, the employee whose profile is accessed by security would never know it had been viewed; instead, the legal department would serve as a check and balance against security misdeeds.

Indeed, I’ve seen variants of this approach at other companies I’ve worked with. At those companies the understanding was that if security needed to access an employee’s information, there was already strong evidence of a violation of corporate policy or of illegal behavior, so you would not want to tip the employee off; in some cases you might even want to observe them in the act to document it. Instead, legal had to approve each access and reviewed all activity, ensuring there was reasonable suspicion to warrant the accesses and that they were restricted to the bare minimum of information required.
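To make the contrast between these two routing models concrete, here is a minimal sketch. Every name in it is hypothetical, invented purely for illustration; nothing here reflects Facebook’s actual internals.

    from dataclasses import dataclass

    @dataclass
    class AccessEvent:
        investigator: str   # security staffer performing the access
        subject: str        # employee whose profile was viewed
        reason: str         # stated justification for the access

    def notify(recipient, message):
        print(f"[alert to {recipient}] {message}")

    def route_to_subject(event):
        # Facebook's reported model: the employee being accessed is
        # alerted directly and serves as their own watchdog.
        notify(event.subject, f"Security accessed your profile ({event.reason}).")

    def route_to_legal(event):
        # The alternative model: a legal review queue sees every access
        # for appropriateness; the subject is never tipped off.
        notify("legal-review-queue",
               f"{event.investigator} accessed {event.subject}'s profile ({event.reason}).")

    event = AccessEvent("security_analyst", "employee_jane", "reported harassment")
    route_to_subject(event)   # the employee herself receives the alert
    route_to_legal(event)     # or legal reviews it and the employee never knows

The design choice is entirely in which recipient the event is routed to: the same access record either empowers the subject or feeds a silent oversight process.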

Facebook’s approach is intriguing for the way it seems to place the burden on its rank-and-file employees to serve as a check and balance against stalking, misuse and other unauthorized behavior by its security division. The fact that alerts are sent to the employees whose profiles are accessed, rather than to legal, suggests the company views such unauthorized accesses as a relatively minor privacy violation rather than the severe misuse they represent.

More to the point, it suggests that the majority of accesses are false positives or unauthorized. The reasoning is simple: if the only time security accessed employee profiles was in the investigation of a serious violation of corporate policy leading to termination, or in the collection of evidence on behalf of law enforcement for criminal proceedings, it is unlikely that the system would be designed from the ground up to alert the subject of those accesses that security is on to them. Placing employees themselves as the arbiters of what constitutes acceptable access makes it difficult to reach any conclusion other than that misuse is more common than legitimate investigation.

When asked about this, a company spokesperson declined to comment beyond emphasizing that Facebook had considered providing similar access alerts to ordinary users but did not want to tip off bad actors when investigating spamming, bullying or criminal behavior on the platform.

Yet this raises the question of false positive rates again. If the overwhelming majority of accesses to user data come from legitimate investigations with probable cause to believe a policy violation or criminal activity is occurring, then the user will learn of the access shortly anyway, when their account is terminated or legal action is taken against them.

Imagine a user who is spamming others with venomous hate speech. If they receive an alert that Facebook security has just accessed their account, there is little action they could take: one might assume Facebook would make a backup of the entire account prior to sending the alert, so even if the person deleted all of their offending content, the company would still have a full record of it. Even without an alert, they would learn they had been investigated when notified that their account had been suspended or terminated for violating policy.
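A snapshot-before-alert flow of this kind is straightforward to sketch. This is a hypothetical illustration of the assumed ordering, not a description of Facebook’s actual pipeline:

    import copy

    def investigate(account, evidence_store):
        # 1. Preserve a full copy of the account before the user can react.
        evidence_store.append(copy.deepcopy(account))
        # 2. Only then alert the user that security accessed their account.
        print(f"[alert to {account['owner']}] Security has accessed your account.")

    evidence = []
    account = {"owner": "suspected_spammer", "posts": ["offending post"]}
    investigate(account, evidence)

    account["posts"].clear()     # the user deletes everything after the alert...
    assert evidence[0]["posts"]  # ...but the archived copy is untouched

Because the archive step precedes the alert, the notification costs the investigation nothing in terms of evidence.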

In short, if the majority of accesses stemmed from legitimate investigations with probable cause that largely result in action being taken against the account holder, then alerting the user at the time of access, versus an hour later when their account is suspended or terminated, makes little difference.

Alternatively, one could imagine a policy in which confirmed spammers or bullies are observed by security over time to learn more about their networks and how they exploit the site’s features. In this case security would want to let them continue attacking other users in the interim to gather further detail, rather than immediately terminating the account and relying exclusively on an archived copy to perform post-mortem analyses. However, this would stand in stark contrast to the site’s stated goal of ridding itself of such activity, and so seems unlikely.

When asked for comment on all of these issues, the company declined to respond beyond reiterating that it had considered offering warnings to users, but did not want to tip them off to investigations.

Yet there is another reason it is imperative that the company find a way of either notifying users when their accounts are accessed or providing that information to a neutral third-party arbitrator: bias. Today we have no choice but to accept Facebook’s assurance that its security team is completely neutral and 100% bias-free, and that implicit and explicit biases or discriminatory views play no role in its actions or accesses.

Towards this end, perhaps Facebook could take a page from the NYPD, which recently announced that anyone who is stopped by one of its police officers, but not arrested or issued a summons, will be given the officer’s business card with information on how to request the bodycam footage of their interaction.

In the context of Facebook’s platform, one could imagine a policy whereby if the company’s security team accesses a user’s account and determines that there is a policy violation warranting suspension or termination, the user will know about the access by virtue of receiving the account action notification. Investigations that result in law enforcement referral would similarly not result in an alert to the user being investigated, to avoid tipping them off in case law enforcement needs to conduct additional surveillance.

However, similar to the NYPD’s model, in all cases where Facebook security accessed a user’s account but took no action against it, the user would receive a notification that their account had been accessed. Such a notification would let users know their private information had been viewed. Over time, users in specific communities could determine whether their members were being accessed more often than other users, giving visibility for the first time into potential implicit bias in the company’s processes and helping it improve them.
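The decision logic of this proposed policy fits in a few lines. The outcome names below are invented for illustration; the point is simply which outcomes would and would not generate an access alert under the scheme described above:

    def should_send_access_alert(outcome):
        if outcome in ("suspension", "termination"):
            return False  # the account-action notice itself reveals the access
        if outcome == "law_enforcement_referral":
            return False  # avoid tipping off a subject of further surveillance
        return True       # no action taken: the user deserves to know

    for outcome in ("suspension", "law_enforcement_referral", "no_action"):
        print(outcome, "->",
              "alert" if should_send_access_alert(outcome) else "no alert")

Under this scheme, only the accesses that produced no action, which are exactly the ones most likely to reflect error or bias, become visible to the people affected.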

Putting this all together, Facebook’s choice to alert employees when security accesses their accounts, but not ordinary users, worryingly suggests three things: that the majority of those accesses are false positives rather than investigations leading to employment or criminal action; that the company believes such false positive accesses are so common it was worth investing the resources to build a notification system warning employees when their profiles have been viewed; and that ordinary users don’t deserve such privacy protections. Until Facebook becomes more transparent, all we can do is sit quietly and blindly trust it until the next privacy scandal.

