- Turkey has banned Discord, citing concerns over child safety and crimes involving minors.
- The ban follows the arrest of two minors for distributing criminal content on Telegram and Discord.
- Globally, governments are implementing measures to increase online safety for children, such as the Kids Online Safety Act in the U.S. and the Online Safety Act in the U.K.
- Social media platforms are also taking steps to address child safety, but these efforts have sparked debates over user privacy and the effectiveness of AI in detecting harmful content.
In a move that has drawn international attention, Turkey has banned the popular messaging platform Discord. The decision, announced by the country's Information and Communication Technologies Authority (BTK), followed a court ruling that cited concerns over platform safety and crimes involving minors. It is part of a broader effort by the Turkish government to shield young people from harmful online content that promotes criminal activity.
Justice Minister Yilmaz Tunc stated that an Ankara court had found sufficient grounds for suspicion of child sexual abuse and obscenity to justify the move. "We are committed to protecting our youth and children from harmful content on social media and the internet that promotes criminal activity. We will not tolerate attempts to undermine our social structure," Tunc said in a post on social media.
Interior Minister Ali Yerlikaya further revealed that police had detained two minors for distributing criminal content in groups on Telegram and Discord. The decision to block Discord also comes in the wake of public outrage over the murder of two women by a 19-year-old man earlier this month, after offensive content related to the femicide, including praise for the killings, was posted on Discord.
Global Response to Online Child Safety
This move by Turkey is not an isolated incident. Across the globe, governments and international bodies are grappling with the challenge of ensuring online safety for children, particularly on social media platforms. In the United States, proposals like the Kids Online Safety Act (KOSA) aim to increase protections for children online, including by requiring platforms to give parents tools to manage their children's online experiences and by mandating safety-by-design standards.
In the European Union, the General Data Protection Regulation (GDPR) includes specific provisions for protecting children's data online. The Digital Services Act (DSA) also addresses online safety, requiring platforms to remove illegal content, including material that is harmful to children, while the Digital Markets Act (DMA) targets the market power of the largest platforms.
The United Kingdom has enacted the Online Safety Act, which makes social media companies legally responsible for protecting users, especially children, from harmful content, including cyberbullying, sexual exploitation, and disinformation. Australia has introduced laws that hold social media companies liable for failing to promptly remove abhorrent violent material.
Social Media Platforms' Measures and Controversies
Meanwhile, social media platforms themselves have taken various measures to address child safety concerns. These include stricter age verification, enhanced content moderation algorithms, safety features such as parental controls, and campaigns to educate children, parents, and educators about online safety. Platforms also cooperate with law enforcement agencies by reporting and assisting in investigations related to child safety.
However, these efforts are not without controversy. Some platforms have implemented end-to-end encryption to protect user privacy, a move critics argue can shield illegal activity from detection. Concerns also persist about how effectively AI and machine learning can detect harmful content and behavior, and about the potential for these technologies to infringe on user privacy.