Social media companies should face sanctions if they do not prevent adults from directly messaging children, the head of Ofcom has reportedly said.
The communications watchdog will regulate the sector under the Online Harms Bill and have the power to fine companies and block access to sites.
The Times reported that Dame Melanie Dawes will encourage the regulator to examine direct messaging closely when the new regulations are introduced in 2023.
Her colleague Mark Bunting, Ofcom’s director of online safety policy, was quoted by the paper as saying that cutting grooming off at source was a “blindingly obvious” solution.
Today, with @ICOnews and @CMAgovUK, we have published the Digital Regulation Cooperation Forum’s first annual plan of work, which highlights our priorities for the coming year, as we prepare to take on our new role as the regulator for online harms: https://t.co/EgfGPjbTEx
— Ofcom (@Ofcom) March 10, 2021
Speaking about the industry and the bill, Dame Melanie said: “I don’t think it’s sustainable for them to carry on as we are. Something’s got to change.
“What regulation offers them is a way to have consistency across the industry, to persuade users that they’re putting things right, and to prevent what could be a real erosion of public trust.
“They really need to persuade us that they understand who’s actually using their platforms, and that they are designing for the reality of their users and not just the older age group that they all say they have in their terms and conditions.”
The proposals in the Online Harms Bill include punishments for non-compliant firms, such as fines of up to £18 million or 10% of their global turnover, whichever is higher.
Andy Burrows, head of child safety online policy at the NSPCC, told The Times: “We’re not seeing responses that are anywhere near proportionate to the problem.
“If you’re going to meaningfully protect children, you have to be disrupting child abuse at the earliest point you can, and that is direct messaging.”
In August, Instagram announced it would require all users to provide their date of birth, while Google has introduced a raft of privacy changes for children who use its search engine and YouTube platform.
TikTok also began limiting the direct messaging abilities of accounts belonging to 16- and 17-year-olds, as well as offering advice to parents and caregivers on how to support teenagers when they sign up.