Yesterday afternoon, as I am loitering in the playground, a friend tells me she has read my blog ‘Our Monsters on Moshi’. “Wow, really good,” she says. “It even prompted me to ask [daughter’s name] if she ever talks to people she doesn’t know online. Of course she looked at me and said ‘No, Mum’, and that was the end of the conversation.”
My friend admits her 10-year-old daughter spends a lot of time online, probably too much, and that she doesn’t really know what her daughter is doing there. Judging by the research for my book Is Your Child Safe Online?, she is not alone. Modern life is busy and it is simply impossible to monitor every click children make. It is also difficult to stop them communicating with others online, because communicating is what humans do.
Anyway, said friend’s daughter is a lovely, bright and kind girl (who I know plays on MoshiMonsters because my daughter does too) and she knows how to behave in the playground, so one hopes the same is true of her online life. In fact Rebecca Newton, chief community and safety officer at Mind Candy, the company behind MoshiMonsters, tells me that only a tiny minority (2-3%) of the children playing on Moshi don’t know how to behave. For that minority, bad behaviour generally falls into three categories: sexual content, bullying or swearing.
So what sort of things might this small percentage of under-12s get up to? Most of the time the posts are entirely innocent; 98% of them, says Newton, “are nothing more than hi, or I love my monster.” But, as in the real world, children can be mean, posting something like ‘you’re fat and ugly’. Or they may set up a false account and then post a message to real-world friends saying something like ‘I know where you live [giving a nearby landmark], don’t tell anybody I know’. Such posts can be alarming for children who have been taught internet safety messages at school. In rare cases children may even do something inappropriately sexual, like setting up a new account with the user name paedo38. Depending on the seriousness of the bad behaviour, children will be given a warning. However, accounts behind posts on Moshi that are sexual or involve cybersex are immediately and permanently suspended: no questions, no warnings. “We shut them down and blacklist their email address from the system,” says Newton, herself a victim of paedophilia.
But it seems that it is not just children who do not know how to behave in a children’s online game. On Moshi, for example, a group of parents have been known to arrange their real-world encounters (I’ll leave that to your imagination!) on the pin board. Each to his own, some might say, but why choose a children’s game to do it?
All the above are examples of real-life incidents. But like many other big names in the business (Lego, Sony’s Free Realms, Nickelodeon and Cartoon Network), MoshiMonsters uses online community management software from Crisp Thinking. In nanoseconds Crisp’s NetModerator software analyses every single word exchanged in real time, generating thousands of automated reports each day. According to Crisp, the system “analyses context as well as content in order to catch invented words and slang, for instance”. Newton says it does what the human eye could never see and even picks up the so-called ‘under the radar’ activity of predators. Many real predators know what they are doing and disguise their language to avoid being caught. At Moshi human moderators also weigh context, and these two factors combined have helped UK law enforcement make an arrest, the only arrest of a Moshi user in four years, even though the site now has 53 million users. “He wasn’t doing anything illegal or offensive on the site, but we just didn’t like the under-the-radar behaviour we noticed, so we reported him,” says Newton. As it turned out, the man was engaging in criminal activity outside Moshi and so an arrest was made.
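To see why disguised language defeats naive filters, consider a toy example: a simple blocklist would catch ‘paedo’ but miss ‘p43do’. Here is a minimal sketch in Python of the normalisation idea. To be clear, the substitution map, blocklist terms and function names are my own illustrative assumptions, not Crisp’s actual approach, which analyses context with far more sophistication than any word list can.

```python
import re

# Toy illustration only: undo common letter/number substitutions
# so that disguised words match a blocklist after normalisation.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e",
                          "4": "a", "5": "s", "7": "t",
                          "@": "a", "$": "s"})

# Illustrative terms only, not a real moderation list.
BLOCKLIST = {"paedo", "meetme"}

def normalise(text: str) -> str:
    """Lower-case, reverse leetspeak substitutions, drop non-letters."""
    text = text.lower().translate(LEET_MAP)
    return re.sub(r"[^a-z]", "", text)

def flags_message(message: str) -> bool:
    """Return True if any blocklisted term survives normalisation."""
    cleaned = normalise(message)
    return any(term in cleaned for term in BLOCKLIST)
```

With this sketch, ‘p43do38’ normalises to ‘paedoe’ and is flagged, while ‘hi, I love my monster!’ passes. Real systems go much further, scoring conversations over time rather than single messages, which is how ‘under the radar’ patterns can be flagged even when no individual post is offensive.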
While some would argue that one arrest is one too many, one could counter that this is in fact progress. The sophisticated technology used by some online gaming companies whose customers are children means there is a greater chance of sexual predators being caught. But this does not mean that parents should be complacent, as there are many age-restricted online services, like Facebook and YouTube for example, which can very quickly become popular with young children and which are not monitored or controlled in the same way.
So, can the industry do more? Newton would like to see the industry go further, with some sort of age verification system introduced – a bit like that now required by law in the online gambling industry, which can verify in real time whether somebody is over 18. But as any expert in the field will point out, it’s complicated, not to mention costly, and everybody is far from being on the same page.
But what about parents? What do they want or need?
“You know what would be really useful,” says my friend in the playground. “A website which explained clearly what is possible on all these sites and what parents need to know in terms of how they are monitored, what they comply with and so on.” Of course she could always get my book Is Your Child Safe Online?, I tell her. “Book!” she snorts. “I don’t have time to read a book.”
UK parents have already benefited from strong educational initiatives around internet safety. But there still seems to be a tendency, on the one hand, to bury heads in the sand, letting children loose in online worlds and hoping for the best. My friend in the playground, for example, admits she doesn’t have any parental control software installed on her home computers. Another, whose six-year-old is adept at downloading YouTube videos on her iPhone, has not yet worked out how to change her settings.
Other parents, on the other hand, seem to want to monitor every move, and may then overreact to behaviour they do not fully understand, denying their children access and even reporting incidents to the police, some would argue at taxpayers’ expense. Take Moshi, for example. Each month the company receives a call from the UK’s Child Exploitation and Online Protection (CEOP) Centre, which receives about five reports a week from parents asking for an investigation into an incident on MoshiMonsters. But Newton says that in virtually every instance these are childish pranks in which those involved actually know each other. Interestingly, though, when parents are notified (assuming a parent’s email address was used when the account was set up!) they almost always find it difficult to believe that their child was implicated.
One thing seems certain: the debate around the safety of children online, and indeed around how parents and educators can help their children make the most of this wild, wonderful, living, breathing world, is far from over.
Tomorrow I’m attending a Westminster Media Forum discussion where a panel of experts from government and industry will debate the ‘Next Steps for Child Online Safety’. I’ll let you know what the powers-that-be are thinking.