US Senators Richard Blumenthal of Connecticut and Marsha Blackburn of Tennessee have introduced the Kids Online Safety Act (KOSA), legislation that aims to enhance children’s safety online.
This follows reporting by The Wall Street Journal (WSJ) on the harm Instagram can inflict on teens, based on internal Facebook documents that whistleblower Frances Haugen leaked to the paper, as well as multiple hearings in which social media companies were pressed on their failures to protect kids online.
“Protecting our kids and teens online is critically important, particularly since COVID increased our reliance on technology,” said Blackburn in a press release.
“Big Tech has brazenly failed children and betrayed its trust, putting profits above safety,” said Blumenthal. “Seared in my memory—and motivating my passion—are countless harrowing stories from Connecticut and across the country about heartbreaking loss, destructive emotional rabbit holes, and addictive dark places rampant on social media. The Kids Online Safety Act would finally give kids and their parents the tools and safeguards they need to protect against toxic content—and hold Big Tech accountable for deeply dangerous algorithms. Algorithms driven by eyeballs and dollars will no longer hold sway.”
In a one-page summary document, KOSA is presented as a solution to a longstanding problem: social media platforms harming the mental health and well-being of their most vulnerable users, children and teens. According to the senators, KOSA would benefit parents, carers, and young social media users by:
- Requiring social media platforms to give their young users (age 16 and under) options to protect their personal information, disable addictive features, and opt out of algorithmic recommendations. These algorithms draw on a user’s personal data to suggest content that keeps them scrolling.
- Requiring platforms to enable the strongest possible settings for minors by default.
- Giving parents tools to support the children under their care and identify harmful behavior. Platforms would also be required to provide parents and kids a dedicated channel for reporting harms.
- Holding social media platforms accountable for preventing and mitigating content that could harm minors, including the promotion of products that are unlawful for minors (e.g., gambling and alcohol), self-harm, substance abuse, eating disorders, sexual exploitation, and suicide.
- Requiring social media platforms to undergo an annual independent audit assessing risks to minors, compliance with the legislation, and whether they are taking meaningful steps to prevent those harms. The findings would be published in an annual report.
- Providing academic researchers and non-profit organizations access to critical social media platform datasets to foster research on the safety and well-being of minors. This would also require the National Telecommunications and Information Administration to set up a program through which researchers could apply for datasets from these platforms.
Meanwhile in California, lawmakers introduced a bill on Thursday that would require Meta and YouTube to limit the collection of children’s data on their platforms. If passed into law, the bill would restrict the profiling of young users for targeted advertising, mandate age-appropriate content policies, and ban behavioral nudges designed to get children and teens to weaken their privacy protections.
This California bill is said to be modeled after the UK’s Age Appropriate Design Code (aka “Children’s Code”), which came into force in September 2021.