I worked at Facebook. I know how Cambridge Analytica could have happened.

Here’s an example of how an investigation into a data protection issue played out during my time at the company: In late 2011, it was revealed that an app called Klout was creating “ghost” profiles of children. These public profiles were not created or authorized by the children and were reportedly based on friend data from adults who had authorized the Klout app. As the lead for platform data protection issues, I had to call Klout’s leadership and ask whether the company was violating any Facebook policies, because we couldn’t see what it was actually doing with the data. The leadership swore it was not in violation. I reiterated the importance of following the policies, and that was the end of our call. Facebook took no further action, and Klout continued to access Facebook data, though it turned off the ghost-profiles feature.


While Klout was an unusual case because the alleged violation was publicly visible, other, less visible data protection issues happened regularly during my tenure. Facebook had several tools to deal with these cases: It could call the developer and demand answers; it could demand an audit of the developer’s application and associated data storage, a right granted in the platform policies; it could ban the developer from the platform; it could sue the developer for breach of the policies; or it could do some combination of the above. During my 16 months at Facebook, I called many developers and demanded compliance, but I don’t recall the company conducting a single audit in which it inspected a developer’s data storage. Lawsuits and outright bans were also very rare. I believe the reason for the lax enforcement was simple: Facebook didn’t want to make the public aware of huge weaknesses in its data security.
