A daily stampede of 1.2 billion users makes Facebook the world’s biggest social media platform. A landscape this massive comes with the usual creepy users, but if they had a super-bizarre king, it would be Facebook itself. Recent investigations found that the company holds information on people who do not even use the platform and that violence is tolerated to keep Facebook interesting. While its moderators struggle with PTSD, the company also wants users’ bank details and phone calls, and knows the last time they had sex.
10 Brain Scans Can See Facebook
Most people have experienced the obsessive compulsion to check their Facebook page. Some might even experience withdrawal symptoms. Researchers decided to see if this so-called “Facebook addiction” showed up in the brain like a drug. The study was small and by no means conclusive, but the results were interesting.
In 2014, 20 volunteers answered questions about their Facebook habits and were diagnosed with a mild addiction. Afterward, each had their brain scanned while viewing images and pushing a button. The images alternated between Facebook-related pictures and road signs. The participants could choose when to push the button, but those who scored higher on the addiction survey turned trigger-happy when Facebook popped up.
The scans showed that the platform triggered a powerful response in the brain: the impulsive regions lit up, much as they do in cocaine users. However, there was a big difference between people addicted to drugs and those hooked on Facebook. A cocaine addict’s prefrontal cortex, the region that reins in the brain’s impulsive areas, is underactive. In the Facebook volunteers, it worked just fine. In other words, social media addicts are not driven by real cravings they cannot control but by a complex mix of habit and cultural and social factors.
9 Users Have A Reputation Score
Reporting legitimate posts as false news is a problem on social media. To fight this, Facebook devised its own system to identify guilty users. What content a user flags, for what reason, and how many times determines that person’s “trustworthiness.” For some reason, this score runs only on a scale from zero to 1.
The system took a year to design before being implemented in 2018. The main purpose is to find the trolls and bullies. Many people maliciously report content as violations just to get a thrill. Others genuinely disagree with the material but because of personal preference and not because the post is inherently wrong.
The score is not an all-powerful fly swatter. It is merely one tool, used in conjunction with thousands of behavioral clues mined by Facebook to separate genuine flags from false ones. While this is clearly a great idea, the process that ultimately gives a user a score remains mysterious. Even more mysterious is how the scores are being used.
8 Shadow Profiles
Mark Zuckerberg, chief executive and founder of Facebook, appeared in front of Congress in 2018. During the proceedings, he answered tough questions about user privacy. In addition, he was grilled about shadow profiles, which collect data on people who are not even using Facebook. Although the issue had been making the rounds for five years, Zuckerberg claimed he was not familiar with the term “shadow profile.”
This feature figures out the social circle and contact details of people who avoid Facebook but have friends using the platform. When these friends upload their own phone details, a shadow profile is created for every mobile contact without a Facebook account. Eventually, users with mutual friends give the site a perfect view of their social group.
Should somebody with a shadow profile join, Facebook will send them friend suggestions based on their social circle which, by then, is already known to the company. The idea seems to be a legitimate way that Facebook tries to connect people on the site. Even so, some people might not appreciate any company using their number, after purloining it from a friend’s phone, to create a ghost membership for them.
7 Secret Transcripts
In 2019, a group of third-party transcriptionists broke their silence. Facebook had hired them to transcribe recordings, most of which were voice conversations between Facebook Messenger users. The work was already strange, considering that some of the files contained vulgar topics and language. Additionally, the social media giant never provided an explanation as to why it needed the countless transcriptions.
Smelling a rat, the contractors went public. Facebook had no choice but to admit that users were being secretly recorded through their phone microphones, though it added that permission was always given. That permission was indeed required, and hidden in the small print, for anyone wishing to use voice messaging. Yet when investigators combed through all the policies, they found nothing that let users agree to have their conversations recorded and farmed out to third-party transcriptionists. In other words, Facebook users could not have agreed even if they wanted to.
The company said it ended the project and that the transcriptions’ only purpose was to test Facebook’s speech-recognizing AI. This pedestrian explanation, whether true or not, was an admission that user privacy had been violated.
6 Moderators Have PTSD
A lot of users ignore Facebook’s policy against posting graphic, offensive and illegal content. That is where the moderators come in. Their job is to weed out these posts. In 2019, The Verge published an investigation into their working conditions. The original report ran 7,500 words and offered an in-depth look at one of Facebook’s moderation offices in Arizona. During interviews, moderators claimed that the stress of viewing disturbing material, Facebook’s crappy employee rules and low pay created an unhappy environment.
Employees’ coping strategies apparently included smoking marijuana and having sex, the latter attributed to “trauma bonding.” Apart from struggling financially and working under what they described as Facebook’s “inhumane” rules, moderators buckled under a huge load of unsavory posts, including child abuse and exploitation, racism and graphic violence. Several moderators broke down and developed PTSD symptoms, while others were made paranoid by the conspiracy theories they reviewed over and over.
5 The Brazilian Witch Hunt
In 2014, a tabloid posted a sketch on Facebook. The article claimed that the woman in the picture abducted children and used their organs for witchcraft. A group of people became convinced that the image “showed” Fabiane Maria de Jesus. As a result, the 33-year-old housewife from Brazil was attacked by a mob.
A graphic video showed the unconscious woman having her head smashed into the ground. She was then tied to a bicycle and dragged through the streets while a crowd cheered. The Military Police cleared Fabiane of any crimes related to black magic, organ trafficking, and kidnapping. Several people were also arrested in connection with the mob attack. However, none of this helped the victim. She was so brutalized that two days later, she died.
Facebook responded by saying the company did nothing wrong and accepted no responsibility for the murder. However, the attack may have been sparked by Facebook’s ignorance of Brazil’s fear of witchcraft and organ theft, both real concerns, especially in poorer communities. Facebook’s moderators allowed the post because it seemed mild, but for its audience, it was a cultural trigger that ended in tragedy.
4 FBI Recruits Spies On Facebook
In 2019, Business Insider reported that the FBI had placed spy ads on Facebook. True enough, according to the platform’s Ad Library, the three ads went live on September 11. Despite the conspicuous date and the cloak-and-dagger theme, the FBI was open about its intent: it was holding the door open for Russian informants.
When a person feels like giving Putin the finger, they can click on the image. The ad leads the future spy to another website, a page belonging to the FBI’s Washington Field Office Counterintelligence Program. A message, written in both English and Russian, informs the reader that the United States gathers intelligence on foreign nations to protect its own citizens. Anyone with useful information is invited to meet with the FBI in person. This may seem like a strange suggestion, but the fact is that 99 percent of Russian spies simply walk into the relevant building and start spilling. When asked why it targeted the Russians, the Bureau would only reveal that the vast number of active Russian operatives posed a security risk.
3 Facebook Follows Sex And Periods
Privacy International is a Britain-based watchdog. In 2019, it discovered that Facebook had a creepy habit of following women’s periods and the last time they had sex. Of course, none of the women involved volunteered the information or shared anything on social media. Their mistake was trusting two period-tracking apps.
Into the apps, named Maya and MIA Fem, users entered information to manage their health and birth control and even to help them conceive. The apps, however, shared these details without permission with third parties, including Facebook. In other words, those third parties knew when the women last had sex or a period, when they were fertile and what birth control they used.
The apps started gathering details the moment a user installed and opened the tracker, before any privacy terms were agreed to. The information was passed on through the Facebook Software Development Kit, a product linked to the platform’s advertising network.
The unauthorized sharing of intimate details can be life-destroying. Experts are concerned that insurance companies and employers could use the information to discriminate against women, denying them leadership roles or inflating their premiums. Why Facebook likes periods is anyone’s guess.
2 An Attempt To Grab Bank Details
In 2018, Facebook sidled up to a few banks. The company asked them to turn over the financial information of their clients. More specifically, Facebook wanted to eyeball people’s account balances and what they purchased with their credit cards. The Wall Street Journal broke the story and named some of the banks that Facebook approached. They included Wells Fargo, JP Morgan Chase, and Citigroup.
In return for the financial data, banks would be featured on the Messenger app. This was enticing. Banks face stiff competition from companies like PayPal, and Messenger, with over a billion users, is both a massive market and Facebook’s commerce center. Should the app steer users toward the banks, the latter would gain an edge over their competition.
Give credit where it is due: Facebook shoved a delicious deal across the negotiation table. Yet every bank so far has said, “No, thank you.” One bank openly refused out of concern for its clients’ privacy, despite promises that the information would not be misused. Facebook’s shoddy track record with privacy simply does not inspire trust, least of all in the idea that its staff members could eyeball every product or service you just paid for with a credit card.
1 Facebook Profits From Violence
In 2018, a journalist applied for a job as a Facebook moderator. The reporter took the position at a Dublin-based company called CPL Resources. This contractor has worked with Facebook since 2010. The undercover investigator was gathering material for a Channel 4 documentary called “Inside Facebook: Secrets of the Social Network.”
The training CPL provided was based on Facebook’s community standards, and it soon became clear that not all abuse was created equal. The undercover employee was told to allow certain offensive posts, and the training officer provided two examples. One was real footage of a toddler being beaten by a grown man; the video had been flagged as inappropriate in 2012 but still floated around on Facebook. The other was a meme showing a little girl being drowned, captioned, “when your daughter’s first crush is a little negro boy.” When Channel 4 brought the posts to Facebook’s attention, the platform said that both should have been deleted.
It beggars belief that two graphic posts slipped through the cracks before being chosen as training material. Indeed, as much as Facebook protested that users were not being lured with violence to bolster their numbers and ad revenue, a CPL trainer told the journalist, “If you start censoring too much, then people lose interest in the platform. It’s all about making money at the end of the day.”