Two recent news stories caught my attention this weekend, both about protecting children online. The first discussed how artificial intelligence is revolutionizing child safety online, and the second covered the pending federal legislation, the Kids Online Safety Act (KOSA).

AI Shield Protection for Children Online

Neil Sahota's Forbes article, "AI Shields Kids By Revolutionizing Child Safety And Online Protection," notes:

The Internet, with all its resources and opportunities for learning and connection, also harbors risks for young users. AI technology steps in as a vigilant protector, monitoring and analyzing online content at a scale and speed impossible for human moderators alone. By filtering harmful content, detecting predatory behavior, and providing educational resources, AI acts as a guardian of the digital playground, ensuring it remains a space of safety and growth for children.

He notes that companies like Google and Facebook use AI to scan millions of posts, images, and videos daily, flagging and removing content harmful to children. Similarly, the National Center for Missing & Exploited Children (NCMEC) uses AI to identify and rescue victims of child trafficking and sexual exploitation. AI can also detect bullying language in texts, emails, and social media interactions before they are sent, prompting users to reconsider, and it aids in parental monitoring and online safety education.

However, there are practical problems. Mega platforms like Facebook and Instagram still struggle to eliminate fake profiles, as well as real profiles advertising porn, despite both being against their community standards. For instance, while they restrict accounts like mine from gaining followers for posting content such as enjoying a cigar, they continue to recommend profiles promoting OnlyFans links, which lead to porn. Another example: when I reposted an age-progression photo of my daughter, her joking online reply, "I am going to kill you," got her temporarily banned. These incidents indicate that AI moderation still needs improvement.

AI on social media platforms will likely improve against online threats. But what about trusted individuals, like teachers or coaches, who use other tech means, such as cell phone texts, to groom and harm children? AI on Facebook or Instagram won't prevent that. And while AI is also being developed for more effective parental monitoring, these tools only work on devices where they are installed. Having supervised federal sex offenders, I know predators exploit that gap: if a predator gives their target a cheap phone, even the best AI parental monitoring tools won't detect those texts.

The Kids Online Safety Act (KOSA)

Associated Press writer Barbara Ortutay, in "What to Know About the Kids Online Safety Act and Its Chances of Passing," discusses KOSA, introduced in 2022 by Senators Richard Blumenthal (D-Conn.) and Marsha Blackburn (R-Tenn.). With 68 Senate cosponsors, KOSA has enough support to pass if brought to a vote. Ortutay notes:

If passed, KOSA would create a “duty of care” — a legal term that requires companies to take reasonable steps to prevent harm — for online platforms minors will likely use. They would have to “prevent and mitigate” harms to children, including bullying and violence, the promotion of suicide, eating disorders, substance abuse, sexual exploitation and advertisements for illegal products such as narcotics, tobacco or alcohol.

Social media platforms would also have to provide minors with options to protect their information, disable addictive product features, and opt out of personalized algorithmic recommendations. They would also be required to limit other users from communicating with children and limit features that “increase, sustain, or extend the use” of the platform — such as autoplay for videos or platform rewards. In general, online platforms would have to default to the safest settings possible for accounts it believes belong to minors.

Again, I am happy about this, despite the concerns Ortutay notes from groups worried about possible overbroad censorship. But as with the AI discussion above, I am going to be pragmatic again. Does anyone remember the Do-Not-Call Implementation Act of 2003, signed by President George W. Bush? Well, I am on the "do not call" list, and I still get inundated with calls on my cell phone. The really annoying part is that my provider lets the call through but labels it a "Spam Call." (I know: switch providers and pay for spam protection.) The point is that a 21-year-old law on the books hasn't stopped this behavior.

KOSA is also great for helping protect kids, but what about our elderly? In Elderly Cyber-Crime Victims, I reported that the FBI Internet Crime Complaint Center (IC3) disclosed that total reported losses in 2023 by those over the age of 60 topped $3.4 billion, an almost 11% increase in reported losses from 2022. What law is improving the elderly's protection from pig butchering and other online frauds?

Laws and better technology are important pieces of protecting our families online. Todd and I wrote our upcoming book to give individuals personal guidance in not only protecting their families but surviving those attacks. One area we stress is the need to be active in our loved ones' lives to prevent and detect these dangers. No amount of technology or legislation can improve on human involvement to protect those we cherish.

Cyber Security quote

The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards – and even then I have my doubts.

~Gene Spafford

Copyright 2024 – The Cyber Safety Guys