United States – On August 26, 2020, LinkedIn announced that it will take a tougher stance on inappropriate content on its platform to protect users and make them feel secure. The company plans to make its policies stricter and clearer. Read on for the other updates.
LinkedIn said: “Every LinkedIn member has the right to a safe, trusted, and professional experience on our platform. We’ve heard from some of you that we should set a higher bar for safe conversations given the professional context of LinkedIn. We could not agree more. We’re committed to making sure conversations remain respectful and professional.”
Many people today are not aware of each social media platform’s policies. After all, who reads a long and tedious policy document? Because of this, users sometimes post things that may offend others. For example, one person posted a particularly nasty comment to a celebrity with millions of followers, potentially exposing all of those people to that hate speech. His own friends, connections, and followers will also see the post and, in all likelihood, find it just as offensive.
Harassment of social media users is becoming more common. One particularly brutal form is non-consensual pornography, commonly called “revenge porn”: the act of distributing sexually explicit images or videos of people without their permission. It is an especially abusive tactic, and many lives have been destroyed by it. Victims have suffered psychological harm, severe anxiety, and depression, and unfortunately it has also led to suicide.
So what are social media platforms doing to address this growing problem? As of August 2020, LinkedIn has decided to raise awareness of its policies across all of its communication channels and to enforce stricter guidelines to eliminate harassing, hateful, and offensive content on its platform.
The following are their initiatives:
- They made their policies stricter and clearer for users
- They added a policy pop-up notification that appears when users post or send messages. The notification continues to appear even when users rely on an automation tool such as Biglinker
LinkedIn said: “In this ever-changing world, people are bringing more conversations about sensitive topics to LinkedIn and it’s critical these conversations stay constructive and respectful, never harmful. When we see content or behavior that violates our Policies, we take swift action to remove it.”
- LinkedIn is now using AI models to remove profiles with inappropriate content
- They will be more transparent with users, letting them know what action has been taken on every reported violation on the platform
LinkedIn said: “We’ll close the loop with members who report inappropriate content, letting them know the action we’ve taken on their report. And, for members who violate our policies, we’ll inform them about which policy they violated and why their content was removed.”
LinkedIn is taking a proactive approach for the benefit of its members.