For LGBTQ users, social media is neither a welcoming nor a safe place. According to a GLAAD report, 84% of LGBTQ adults say social media platforms lack sufficient protections against discrimination, harassment and disinformation, and 40% of LGBTQ adults, rising to 49% among transgender and nonbinary people, do not feel welcome and safe on social media.
GLAAD attributes this to the levels of hate and harassment LGBTQ users face on platforms such as Twitter, TikTok, Instagram, Facebook and YouTube.
In the report, titled “Social Media Safety Index”, GLAAD graded each social media company on the measures its platform takes to protect LGBTQ users. Various features were considered, particularly whether a platform offers gender-pronoun options on profiles or blocks ads that could be harmful or discriminatory.
Instagram, Facebook, Twitter, YouTube and TikTok all received failing grades, each scoring under 50 out of a possible 100. TikTok scored the lowest at 43%, while Instagram scored the highest at 48%.
GLAAD warned that online hate and harassment quickly turn into real-world problems for LGBTQ people, and said these social media platforms must do more to stop them.
“Today’s political and cultural landscapes demonstrate the real-life harmful effects of anti-LGBTQ rhetoric and misinformation online,” GLAAD’s president and CEO, Sarah Kate Ellis, said in a statement. “The hate and harassment, as well as misinformation and flat-out lies about LGBTQ people, that go viral on social media are creating real-world dangers, from legislation that harms our community to the recent threats of violence at Pride gatherings.”
Ellis added: “Social media platforms are active participants in the rise of anti-LGBTQ cultural climate and their only response can be to urgently create safer products and policies, and then enforce those policies.”
GLAAD’s recommendations included:
- protecting LGBTQ users’ data privacy, especially in areas where they are particularly vulnerable to serious harm and violence
- improving algorithms so that they do not amplify harmful content, extremism and hate
- training content moderators to better understand the needs of LGBTQ users so that they can remove hurtful content
