A good one to start with, obviously. Analytics is what we call the process of discovering meaningful patterns in data.
When we talk about ‘using analytics’ we are often talking about a specific platform (Google Analytics, Omniture) built to measure the traffic coming to our websites. It can tell us how much traffic is coming in, where it’s coming from, how long visitors spend on specific pages and whether they leave immediately amongst many other insights.
You can therefore use analytics tactically. For instance, by seeing what time of day your site receives the most traffic, you'll learn when it's best to publish articles.
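As a minimal sketch of that tactical use, here's how you might find the peak hour from an exported list of visit timestamps. The visit data is entirely invented for illustration; a real analytics platform would give you this breakdown directly.

```python
from collections import Counter

# Hypothetical export of visit hours (hour of day, 24h clock) from an
# analytics platform -- these numbers are made up for illustration.
visit_hours = [9, 9, 10, 13, 13, 13, 14, 18, 18, 21]

hourly_traffic = Counter(visit_hours)

# The busiest hour is a reasonable candidate slot for publishing articles.
peak_hour, peak_visits = hourly_traffic.most_common(1)[0]
print(f"Peak traffic at {peak_hour}:00 with {peak_visits} visits")
```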
This takes a more human, connected approach to reading data. You may also see this written as a ‘holistic approach’. Try not to cower away when you hear it.
Basically, behavioural analytics helps us build a bigger picture of how and why online users behave, by analysing seemingly unrelated data points together.
For instance, social media interactions, the amount of money spent on site, where the customer navigated from and where they went next can all be looked at together as one whole experience. This helps you understand a visitor's intent and predict future actions or trends.
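A rough sketch of the idea: stitch events from separate systems into one journey per visitor. The event stream, visitor IDs and field names below are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical events captured by separate systems (web, social, email).
events = [
    {"visitor": "v1", "channel": "social", "action": "liked post"},
    {"visitor": "v1", "channel": "web", "action": "viewed product"},
    {"visitor": "v2", "channel": "email", "action": "opened newsletter"},
    {"visitor": "v1", "channel": "web", "action": "purchased", "spend": 40.0},
]

# Group seemingly unrelated data points into one journey per visitor.
journeys = defaultdict(list)
for event in events:
    journeys[event["visitor"]].append(event)

# Now questions span channels: what did v1 do, and how much did they spend?
total_spend = sum(e.get("spend", 0) for e in journeys["v1"])
print(f"v1 had {len(journeys['v1'])} touchpoints, spent {total_spend}")
```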
This warrants an article in itself, and it is one of the buzziest of buzzwords, but sometimes it's important to remember that shorthand phrases are necessary because they save time.
That’s why we talk about ‘big data’ rather than ‘the massive volume of data that has suddenly become available to us thanks to the rise of online traffic, which has in turn led to the need for accurate analytics packages and improvements in understanding the data accrued.’
Think about Amazon. Think about how much traffic the site gets. Then think about the fact that Amazon not only has access to traffic data, but also has information on every single one of its 152m customers who have ever ordered anything from its site, including name, location, purchase history and browsing history.
Now that’s ‘big data’. So big it needs the world’s top three largest Linux databases to help its operations. Amazon of course doesn’t just sit on top of all of that data, it uses it to tailor custom experiences for all of its registered users, from the personalised homepage, to its recommended products to its targeted email marketing.
The percentage of visitors who enter your site then leave after that one page, rather than continuing to view other pages on your site.
Generally, low bounce rates are good. A high bounce rate suggests your site gives the visitor very little reason to explore further.
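The calculation itself is simple: single-page sessions divided by total sessions, expressed as a percentage. The session data below is invented for illustration.

```python
# Pages viewed per session -- hypothetical numbers for illustration.
# A session of exactly one page counts as a bounce.
session_page_counts = [1, 1, 3, 5, 1, 2]

bounces = sum(1 for pages in session_page_counts if pages == 1)
bounce_rate = bounces / len(session_page_counts) * 100
print(f"Bounce rate: {bounce_rate:.1f}%")
```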
The ability to analyse data from a variety of channels (website, social, email, mobile apps, etc.), which can often be housed in different places (or ‘siloed’ if you want to be annoying), to better understand the customer experience as a complete whole.
Data cleansing (or data scrubbing)
The process of detecting and removing inaccurate records from your database. This data can be incorrect, incomplete, duplicated or just out of date.
Mistakes can be caused by something as simple as human error. Somebody forgets to include a postcode, somebody misspells their email address. Certain inaccurate data can be deleted using an automated tool, such as duplicates or invalid email addresses.
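Here's a minimal sketch of automated cleansing, assuming a simple rule set: drop duplicates and invalid email addresses automatically, and flag missing postcodes for manual review. The records and the (deliberately crude) email pattern are invented for illustration.

```python
import re

# Hypothetical contact records -- two problems a tool can fix on its own
# (duplicates, invalid emails) and one to flag (missing postcode).
records = [
    {"email": "anna@example.com", "postcode": "N1 9GU"},
    {"email": "anna@example.com", "postcode": "N1 9GU"},  # duplicate
    {"email": "bob@@example", "postcode": "SW1A 1AA"},    # invalid email
    {"email": "carol@example.com", "postcode": ""},       # missing postcode
]

# A rough email shape check, not a full RFC-compliant validator.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

seen = set()
clean = []
for record in records:
    key = record["email"].lower()
    if key in seen:
        continue  # drop exact duplicates automatically
    if not EMAIL_RE.match(record["email"]):
        continue  # drop invalid email addresses automatically
    seen.add(key)
    clean.append(record)

# Incomplete records are kept but flagged for a human to fix.
flagged = [r for r in clean if not r["postcode"]]
```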
The reliability of data. Is it accurate, complete, up-to-date and accessible? If it isn’t, then it’s not going to do you any good.
I’m including this just to give your eyes a break from all this text.
A data visualisation (or data vis as nobody should call it) brings your dry-bones data to life in a captivating, image-based and occasionally shareable way.
If you find football statistics as boring as I do, you might find these 12 World Cup data visualisations a little more palatable.
Visitors to your site who have typed in your URL or clicked on their bookmark. Direct traffic indicates how many visitors already know your site or brand and is a good indication of loyalty.
This is the process which allows a system to analyse data and ‘learn’ what action to take, as opposed to programming a computer to carry out a specific function.
As Jeff Rajeck states in his article on machine learning it is typically used to solve problems by finding patterns that we cannot see ourselves.
Jeff uses Target as an example of machine learning. The US retailer wanted to give expectant couples discounts on the items they would need as new parents, but without having access to that information, Target had to actively find those customers first.
Target hired a machine learning expert to help identify the buying habits of someone who has just become pregnant. Once it knew this, Target was able to… uh… target these people with special offers for pregnancy products.
Which basically led to Target finding out teenage girls were pregnant before their own fathers did.
Whether this will further lead to The Terminator style sentience we can only cower in our bunkers and wonder.
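Terminator fears aside, the core idea is simple: instead of hand-coding rules, let the system learn which signals correlate with an outcome from labelled examples. Below is a toy, stdlib-only sketch in the spirit of the Target story; the baskets, labels and items are entirely invented, and real systems would use proper statistical models rather than this naive count-based scoring.

```python
from collections import Counter

# Invented training data: shopping baskets labelled True if the shopper
# later signed up to a baby registry, False otherwise.
training = [
    (["unscented lotion", "vitamin supplements"], True),
    (["unscented lotion", "cotton balls"], True),
    (["beer", "crisps"], False),
    (["crisps", "magazine"], False),
]

# "Learning": count how often each item appears with each label.
positive = Counter()
negative = Counter()
for basket, label in training:
    for item in basket:
        (positive if label else negative)[item] += 1

def score(basket):
    # Sum the evidence: items seen with positive labels push the score up,
    # items seen with negative labels push it down.
    return sum(positive[i] - negative[i] for i in basket)

likely = score(["unscented lotion", "cotton balls"])   # positive score
unlikely = score(["beer", "magazine"])                 # negative score
```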
All the data you have on the search terms people use on search engines, which leads to people clicking on your site from the results page.
Unfortunately this information is becoming harder and harder to access. Check out this further reading on the dreaded (not provided).
The traffic that visits your website after clicking on your URL from a search engine results page (SERP).
This can be organic traffic: visitors who click on unpaid, natural search engine results.
Or it can be paid traffic: visitors who click on your paid-for listings.
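A rough sketch of how you might split search visits into paid and organic, assuming paid clicks carry a UTM tag on the landing URL. The URLs, referrer hosts and tag values are illustrative; real analytics tools combine several signals (referrer, ad-click IDs, campaign tags) to do this.

```python
from urllib.parse import urlparse, parse_qs

def classify_search_visit(landing_url, referrer_host):
    """Crudely classify a visit as paid, organic or other search traffic."""
    params = parse_qs(urlparse(landing_url).query)
    medium = params.get("utm_medium", [""])[0]
    if medium in ("cpc", "ppc", "paidsearch"):
        return "paid"       # tagged as a paid-for listing
    if referrer_host in ("www.google.com", "www.bing.com"):
        return "organic"    # untagged click from a search results page
    return "other"

paid = classify_search_visit("https://example.com/?utm_medium=cpc",
                             "www.google.com")
organic = classify_search_visit("https://example.com/article",
                                "www.google.com")
```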
In the example above you can see the results that have been paid to appear there via a PPC campaign or similar, as they carry a little yellow ‘Ad’ symbol.
This is data that abides by a certain set of rules. The conformist squares! This data can easily be read by different programs, browsers or search engine crawlers.
Schema markup is an example of structured data.
Structured data resides in a fixed file, such as a database or spreadsheet, and depending on your data model will have very specific parameters for the type of data that can be entered (numeric, alphabetic, name, address) and whether it’s restricted in other ways, such as number of characters or certain prefixes like Mr, Ms or Dr.
Structured data can be easily entered, stored and analysed.
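To make those "specific parameters" concrete, here's a minimal sketch of validating records against a fixed schema before they enter a database. The field names, constraints and records are invented for illustration.

```python
import re

# Invented schema: each field has a fixed type and constraints, which is
# exactly what makes the data "structured" and easy to store and query.
SCHEMA = {
    "title":    lambda v: v in ("Mr", "Ms", "Dr"),            # fixed prefixes
    "name":     lambda v: str(v).isalpha() and len(str(v)) <= 50,
    "age":      lambda v: isinstance(v, int) and 0 <= v < 130,
    "postcode": lambda v: re.fullmatch(r"[A-Z0-9 ]{5,8}", str(v)) is not None,
}

def validate(record):
    """Return the names of any fields that break the schema's rules."""
    return [field for field, check in SCHEMA.items()
            if not check(record.get(field, ""))]

ok = {"title": "Dr", "name": "Ada", "age": 36, "postcode": "N1 9GU"}
bad = {"title": "Captain", "name": "Ada", "age": 36, "postcode": "N1 9GU"}
```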
Like the errant, rebellious sibling of the above.
Unstructured data has no pre-defined model or organisation, and is difficult to search, store and analyse.
An example of unstructured data is email. Although an inbox can be sorted by date or time or sender, it can never be sorted exactly by precise content. God knows how much I waffle away in the ones I send.
Multimedia content also helps create unstructured data. A single document can contain a mixture of videos, images, audio and slideshares. All data that can’t fit neatly into a database.
Econsultancy offers an Advanced Data & Analytics Training course.