Britain's communications regulator Ofcom announced on Wednesday that social media platforms face fines of up to $22.5 million if they don't take steps to prevent their algorithms from directing children to harmful content.
The UK's new online safety laws place new legal responsibilities on platforms to protect children, and Ofcom has published a draft code of practice setting out how platforms should respond.
Melanie Dawes, chief executive of Ofcom, said: “In line with new online safety legislation, our proposed provisions firmly place the onus on technology companies to keep children safe.”
“We will need to tame aggressive algorithms that push harmful content to children in personalized feeds and introduce age checks to ensure children are getting an age-appropriate experience,” she added.
The regulator said the draft code outlined 40 practical measures that would make a “major difference to children's online safety”.
“Once it takes effect, we will not hesitate to use all of our enforcement powers to hold platforms accountable,” she warned.
The new measures are due to come into force next year, and platforms that break the rules will face fines of up to 18 million pounds ($22.5 million) or 10% of their global revenue.
Dawes said non-compliant platforms would be “named and shamed” and could even be barred from being used by children.
In addition to robust age checks, the regulations require companies to implement content moderation systems to ensure that harmful content is quickly removed.
Peter Wanless, chief executive of children's charity NSPCC, said the draft code was a “welcome step in the right direction”.
He added: “Tech companies will be legally required to ensure that their platforms are fundamentally safe by design for children when the final code takes effect.”
jwp/phz/tw