Artificial intelligence has been a target for governments concerned about its potential use in fraud, disinformation, and other malicious online activity. Now, in the UK, a regulator is preparing to investigate how AI can be used to combat content that is particularly harmful to children.
Ofcom, the regulator responsible for enforcing the UK's online safety laws, announced that it plans to launch a consultation on how AI and other automated tools are used today, and could be used in the future, to proactively detect and remove illegal content online, specifically to protect children from harmful content and to identify child sexual abuse material that was previously difficult to detect.
These tools will form part of a wider set of proposals Ofcom is putting together focused on children's online safety. Consultation on the comprehensive proposals will begin in the coming weeks, with the consultation on AI coming later this year, Ofcom said.
Mark Bunting, director of Ofcom's Online Safety Group, said the interest in AI stemmed from seeing how commonly AI is used as a screening tool today.
“Some services are already using these tools to identify and shield children from this content,” he said in an interview with TechCrunch. “However, there isn't much information about how accurate and effective these tools are. We want to consider ways the industry can reliably assess [that] when using them, making sure that risks to freedom of expression and privacy are managed.”
One possible outcome is that Ofcom will recommend how and what platforms should assess. That could lead not only to platforms adopting more sophisticated tools, but potentially to fines if they fail to deliver improvements in blocking content, or in creating better ways to keep younger users from seeing it.
“As with many online safety regulations, the onus is on businesses to ensure they take appropriate steps and use the appropriate tools to protect their users,” he said.
The move will have both critics and supporters. AI researchers are finding ever more sophisticated ways to use AI to detect things like deepfakes and to authenticate users online. But there are just as many skeptics who point out that AI detection is far from foolproof.
Ofcom announced the consultation on AI tools alongside the publication of its latest research into children's online usage in the UK, which found that, overall, more young children are connected to the internet than ever before, so much so that Ofcom is now tracking activity among ever-younger age groups.
According to parents surveyed, nearly a quarter (24%) of children aged 5 to 7 now own their own smartphone, rising to 76% when tablets are included. Children in the same age group are also increasingly using media on these devices: 65% have made voice and video calls (up from 59% a year ago), and half now watch streamed media (up from 39% a year ago).
Age restrictions on some mainstream social media apps are getting lower, but whatever the limits are, in the UK they do not appear to be heeded anyway. According to Ofcom research, around 38% of children aged 5 to 7 use social media. The most popular app among them is Meta's WhatsApp, at 37%. And in perhaps the first instance of Meta's flagship image app being relieved at being less popular than ByteDance's viral sensation, TikTok was found to be used by 30% of 5- to 7-year-olds, while Instagram was used by “only” 22%. Discord rounded out the list but was significantly less popular, at just 4%.
Around a third (32%) of children this age go online on their own, and 30% of parents said they were fine with their underage children having a social media profile. YouTube Kids remains the most popular network among young users, at 48%.
Gaming, a firm favorite among children, is used by 41% of 5- to 7-year-olds, with 15% of children in this age group playing shooter games.
While 76% of parents surveyed said they talk to their young children about staying safe online, Ofcom points out that there is a question mark between what children see and what they report. In researching older children aged 8 to 17, Ofcom interviewed them directly and found that 32% reported having seen worrying content online, but only 20% of their parents said their child had reported anything.
Even accounting for inconsistencies in some reports, “the findings suggest a disconnect between older children's exposure to potentially harmful content online and what they share with their parents about their online experiences,” Ofcom wrote. And worrying content is just one issue: deepfakes are also a problem. According to Ofcom, 25% of 16- to 17-year-olds say they are not confident in telling the difference between fake and real online.