Sometimes, those of us working in and around social media every day can forget just how many people take the internet for granted.

This is especially true of teens and tweens who, posting from the safety and security of their own bedrooms, can feel free to say, do and broadcast what they like without worrying too much about the consequences.

Children online: the stats

Recent research by the BBC showed that only 40% of youngsters knew that details shared online stay online forever, while the EU Kids Online report found that 50% of 11- to 16-year-olds found it easier to be themselves on the internet, with all the potential for over-sharing that implies.

Pew Research found that although most teens still had positive social experiences online, nine out of ten of them had witnessed cyberbullying, and of those, over 20% had joined in the harassment.

It’s a fact that young people perceive risk differently from adults. Studies show that the parts of the brain responsible for decision-making, impulse control, judgement and the processing of emotional information are not fully developed until the 20s.

Teens may genuinely be unable to foresee the impact their sexting or cruel words may have on a victim, or to judge an approach as potentially dangerous.

In 2011, the LSE’s EU Kids Online project conducted research into online risks faced by children.

The resulting report examined major risk factors from pornography and bullying to receiving sexual messages, contact with people not known face-to-face, offline meetings with online contacts, harmful user-generated content and personal data misuse.

It’s important to note that risk doesn’t automatically mean harm, but represents a vulnerable point where harm can be caused. Risk, as judged by the adult world, can be seen in a different light by children, who may seek out certain experiences as an act of rebellion without being mature enough to recognise the danger.

The LSE report revealed that risk increased with the age of the child, with 14% of 9- to 10-year-olds encountering risk, rising to 69% of 15- to 16-year-olds.

What should brands do?

Brands have to realise that for many teens a combination of lack of judgement, the urge to take risks and, in some cases, strong peer pressure will counteract any positive reinforcement provided by parents and schools.

This is why brands need to ensure that the environments they create online are safe.

Although children should be educated and advised about online safety by parents and teachers, it is the responsibility of the brands marketing to them to provide a safe environment for children to communicate in online.

The UK relies on self-regulation, but the UK Council for Child Internet Safety (UKCCIS) provides strong guidance on keeping children safe online. In the US, brands must comply with COPPA (currently under review) if they collect data from children under the age of 13 – a requirement with inevitable global impact.

Brands which target children and want them to join community sites, ‘like’ their Facebook pages or submit content to competitions need to ensure that those children won’t be bullied or exploited while participating – not only for the child’s sake, but to safeguard the brand’s own reputation as well.

Identify risks

As best practice, brands must identify risks before establishing the site. By identifying risk beforehand, they can plan, prepare and have safeguards in place.

Brands can demonstrate their commitment to child safety by establishing clear and easily accessible community guidelines. Not only do these document the brand’s commitment to protecting its online community, they can also encourage a strong community to develop, one in which members encourage each other to behave responsibly.

Child protection policies

All project and community managers should know how to implement the brand’s child protection policies. Having a child protection policy in place (which all employees involved should sign up to) will ensure that child safety is at the core of the community structure.

The policy should set out registration and user-validation procedures, describe how children are brought into the site, outline what data they are required to give and state what parental permission is required.
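As a purely illustrative sketch of how such a registration rule might be enforced in practice, the snippet below gates sign-up on age and parental permission. The age threshold of 13 reflects COPPA’s scope; the field names and `can_register` function are invented for this example, not taken from any specific guideline.

```python
# Hypothetical registration gate: the threshold of 13 follows COPPA's
# scope (children under 13); field and function names are illustrative.
from dataclasses import dataclass

COPPA_AGE_THRESHOLD = 13


@dataclass
class Registration:
    age: int
    parental_consent: bool = False  # verifiable parental permission on file


def can_register(reg: Registration) -> bool:
    """Allow registration if the user is 13 or over, or has parental consent."""
    if reg.age >= COPPA_AGE_THRESHOLD:
        return True
    return reg.parental_consent
```

Under this sketch, a 12-year-old would be blocked at registration until verifiable parental permission is recorded.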

Ensure that moderators are in place, and that they have clear terms of reference for acceptable language and behaviour. Guidelines provided to moderators must include an escalation procedure detailing how and when a moderator should pass activity that is of particular concern on to higher management or authorities.
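An escalation procedure of the kind described above could be encoded roughly as follows. This is a minimal sketch: the severity labels and routing destinations are assumptions made for illustration, not drawn from any published moderation guideline.

```python
# Hypothetical escalation routing for flagged community activity.
# Severity labels and destinations are illustrative assumptions.
def escalate(report: dict) -> str:
    """Route a moderator's report to the appropriate level of response."""
    severity = report.get("severity", "low")
    if severity == "critical":   # e.g. suspected grooming or abuse
        return "authorities"     # refer to the relevant authority
    if severity == "high":       # e.g. persistent bullying
        return "management"      # pass to higher management
    return "moderator"           # handled within the moderation team
```

The point of writing the procedure down this explicitly, whether in code or in a document, is that moderators never have to improvise when deciding who needs to hear about a worrying incident.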

Even brands which don’t intend to set up a virtual world, forum or fan site can find themselves suddenly needing to consider a whole host of factors.

For example, the marketing team may come up with a brilliant idea to launch a video competition using YouTube. But before they do this, the brand has to consider how it will manage and moderate content on a social network site that they do not own.

Can they pre-moderate or do they have to post-moderate? Can they delete offensive comments or merely flag them?


This may seem like a lot for brands to consider before creating a community presence for children, but there is already a considerable body of expert knowledge available to help.

Various government and charitable agencies around the world publish useful guidelines on how to set up your website to work safely and ethically with young people.

They cover legal requirements on recruiting and registering children, extracting and using their personal data, and advertising and marketing to kids, as well as guidance on monitoring and moderating user-generated content to protect children from harm.

It’s not necessary to start from scratch either: organisations like KidsOKOnline exist, which can help brands create and run social networks and websites for children and teens.

(Thanks to Carole Hart-Fletcher from KidsOKOnline for her help and advice on compiling this article.)