eModeration, launched in 2002, provides user-generated content moderation for a number of clients in the US, from children’s virtual worlds to ad campaigns that have a UGC element.

I’ve been talking to CEO and founder Tamara Littleton about her approach to the issue of online moderation, and the work she has been doing for advertisers…

Can you tell me a little about eModeration?

eModeration is a moderation and community management company. We help companies set up and manage online communities, and moderate user-generated content within those communities. We also work with advertising agencies who are including user-generated content within their ad campaigns. We have offices in London, New York and LA.

How big is the team?

We have 80 people working for us.

How does your service work? Do you use technology, manual review, or a mixture of both?

We use technology and human review techniques. We provide mostly human moderation, but also work with technology partners including Crisp Thinking and Reality Digital to provide automated moderation. Where technology works particularly well is in automatically filtering things like personally identifiable information that children will often post to communities, or in identifying and preventing potential grooming behaviour by being able to track the actions of a potential abuser.

However, human moderation is vital in understanding nuance, and also in providing a more visible moderation service, for example in helping online users engage with their community.
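As a rough illustration of the kind of automated filtering described above (the interview doesn’t detail eModeration’s or its partners’ actual tooling, so the patterns and function names below are purely hypothetical), a simple pattern-based PII check might flag posts for human review along these lines:

```python
import re

# Hypothetical patterns for the kind of personally identifiable information
# children often post: email addresses and phone numbers. Real moderation
# platforms use far more sophisticated detection than these simple regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def flag_pii(message: str) -> list[str]:
    """Return the types of possible PII found in a user-submitted message."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(message)]


if __name__ == "__main__":
    post = "msg me at kid123@example.com or call 07700 900123"
    found = flag_pii(post)
    if found:
        print("Hold for human review; possible PII:", ", ".join(found))
    else:
        print("No obvious PII detected; route through normal moderation.")
```

Anything a filter like this catches would still go to a human moderator, which matches the point above: automation handles the obvious patterns, while people handle the nuance.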

How has the market for, and demand for, online moderation services changed over the last few years?

We’ve seen an enormous boom over the last few years in user-generated content, and in brands engaging with audiences online, be it through online communities, live comment streams (including Twitter) and virtual worlds, particularly for children. Brands have realised that if they are going to include content from users within these communities or on their websites, they need not only to protect their users from harmful, illegal or abusive content, but also their own reputations.

Brands are much more sophisticated in the way they engage now, and we’re seeing a rise in advertising that includes user content as part of the ad, where of course moderation is critical, as no brand wants its advertising messages to be associated with abusive content, for example.

What kind of clients do you have?

We work with a number of household-name brands, such as ITV (we’re currently moderating comments on the X Factor websites), MTV, O2, Hyundai and Procter & Gamble, but also with a number of ad agencies, including Publicis, Digital Outlook, Wieden+Kennedy, AKQA, and Crispin Porter + Bogusky.

Do you think governments are dealing effectively with the issue of child safety online? Is there adequate legislation?

There is some great work being done by the UK government (I was part of the Home Office sub-committee that advised the Government on moderation of communities to help safeguard children, and am currently part of the moderation sub-group of the UK Council for Child Internet Safety), and by government-supported bodies such as the Internet Watch Foundation and CEOP (the Child Exploitation and Online Protection Centre), which is part of UK policing. Where I don’t think there is adequate legislation or action is in cross-border policing, particularly outside the EU.

What are the biggest problems for brands and websites which target children online?

Child safety is obviously the biggest concern. Children will often freely share contact or personally identifiable information (and will often try pretty hard to get round automated content filters to do so), so moderators have to prevent children from inadvertently putting themselves at risk. ‘Flaming’ and bullying are also problems.

Children will often be much bolder in their actions online than they would be face to face, and so bullying rates online are high (the charity Beatbullying has some great research in this area). Knowing where to draw the line (and seeing the difference between banter and bullying) can be hard for brands running communities for kids.

Engaging children is something some brands find difficult to do well. Understanding how a child’s mind works can be tricky for an adult! Try to be a ‘cool kid’ and you risk looking ridiculous, but be too heavy-handed and kids will leave the site. We’ve written some white papers addressing this area.

What kind of work do you do for advertising clients? Can you give me some examples?

We’re doing more moderation of advertising campaigns where advertisers are engaging with consumers (getting them to help ‘create’ the ad, or to submit content that then becomes part of the ad, for example). We’ve recently completed a project with Goodby, Silverstein & Partners for its client, Sprint, which won a number of awards. Sprint took over Google’s YouTube home page for 24 hours, creating a ‘human clock’ from videos submitted by users. Users were given a number and asked to shoot a video of that number. The videos were then used to create a ‘digital clock’ – so for example, at 12.09, the ad would show four people, each holding a number ‘1’, ‘2’, ‘0’, ‘9’.

Another recent campaign was Cesar’s ‘I promise’ campaign, where we worked with the agency Catapult Marketing. Pet owners were asked to make a ‘promise’ to their pet, which then went up on a live display.

Is there a growing demand for this sort of work?

Yes. Advertising and communications are now all about engaging with an audience, rather than just displaying a message in front of an audience. Web 2.0 has had a huge impact on this, but so have shows like the X Factor which rely on user participation. This trend has moved into ad campaigns, where advertisers are looking to get users involved with their brand. The moderation can be slightly different here – for example, if a client is running an ad campaign that invites consumers to upload photos of a specific subject, our job might be to check that the submissions meet the competition criteria.

How strict are brands about moderation of UGC? What is best practice in this area?

Brands are getting stricter about moderation. It’s a big subject, but broadly, moderation should be about protecting users from harmful content, protecting the brand from association with that harmful content, and engaging users in a positive way. It is not about censorship. Brands who engage with users successfully are those who allow and listen to user comments – good and bad – and respond to them quickly. This applies across all audiences. There are obviously best practice techniques and specific requirements that depend on the audience (for example, what you could allow onto a site like MTV would be different from what you’d tolerate on a children’s virtual world).

The important thing is to create clear site terms of use, and enforce them. If a site says it will not tolerate any swearing, for example, it must enforce that.

The Daily Mail removed pre-moderation recently. Does this arguably present fewer risks from a legal point of view?

The legal question is really one for a lawyer. But the Mail’s approach is to expect users to do its work for it. Success depends entirely on its trust in its users to stick to the rules, and to report anyone who doesn’t.