Thursday, April 30, 2015

You Have $100 to Spend on Social Media Marketing. Here’s One Way to Spend It.

How big is your marketing budget?

I’ve heard of companies that spend millions on marketing. I’ve heard of others who spend zero (we skew toward the zero side at Buffer).

Regardless of how much you spend, you aim to spend it well. That’s why a hypothetical situation like the one here—what would you do with $100 to spend on social media marketing?—can be an extremely valuable exercise.

I have some ideas on what I’d do with the $100, ways to wring the most value ...

The post You Have $100 to Spend on Social Media Marketing. Here’s One Way to Spend It. appeared first on Social.

Tuesday, April 28, 2015

Search Trends: Are Compound Queries the Start of the Shift to Data-Driven Search?

Posted by Tom-Anthony

The Web is an ever-diminishing aspect of our online lives. We increasingly use apps, wearables, smart assistants (Google Now, Siri, Cortana), smart watches, and smart TVs for searches, and none of these are returning 10 blue links. In fact, we usually don't end up on a website at all.

Apps are the natural successor, and an increasing amount of time spent optimising search is going to be spent focusing on apps. However, whilst app search is going to be very important, I don't think it is where the trend stops.

This post is about where I think the trends take us—towards what I am calling "Data-Driven Search". Along the way I am going to highlight another phenomenon: "Compound Queries". I believe these changes will dramatically alter the way search and SEO work over the next 1-3 years, and it is important we begin now to think about how that future could look.

App indexing is just the beginning

With App Indexing Google is moving beyond the bounds of the web-search paradigm which made them famous. On Android, we are now seeing blue links which are not to web pages but are deep links to open specific pages within apps:


This is interesting in and of itself, but it is also part of a larger pattern which began with things like the answer box and knowledge graph. With these, we saw that Google was shifting away from sending you somewhere else and starting to provide the answer you were looking for right there in the SERPs. App Indexing is the next step, which moves Google from simply providing answers to enabling actions—allowing you to do things.

App Indexing is going to be around for a while—but here I want to focus on this trend towards providing answers and enabling actions.

Notable technology trends

Google's mission is to build the "ultimate assistant"—something that anticipates your needs and facilitates fulfilling them. Google Now is just the beginning of what they are dreaming of.

So many of the projects and technologies that Google, and their competitors, are working on are converging with the trend towards "answers and actions", and I think this is going to lead to a really interesting evolution in searches—namely what I am calling "Data-Driven Search".

Let's look at some of the contributing technologies.

Compound queries: query revisions & chained queries

There is a lot of talk about conversational search at the moment, and it is fascinating for many reasons, but in this instance I am mostly interested in two specific facets:

  • Query revision
  • Chained queries

The current model for multiple queries looks like this:

You do one query (e.g. "recipe books") and then, after looking at the results of that search, you have a better sense of exactly what it is you are looking for and so you refine your query and run another search (e.g. "vegetarian recipe books"). Notice that you do two distinct searches—with the second one mostly completely separate from the first.

Conversational search is moving us towards a new model which looks more like this, which I'm calling the Compound Query model:

In this instance, after evaluating the results I got, I don't make a new query but instead a Query Revision which relates back to that initial query. After searching "recipe books", I might follow up with "just show me the vegetarian ones". You can already do this with conversational search:

Example of a "Query Revision"—one type of Compound Query

Currently, we only see this intent revision model working in conversational search, but I expect we will see it migrate into desktop search as well. There will be a new generation of searchers who won't have been "trained" to search in the unnatural, stilted, keyword-oriented way that we have. They'll be used to conversational search on their phones and will apply the same patterns on desktop machines. I suspect we'll also see other changes to desktop-based search which will merge in other aspects of how conversational search results are presented. There are also other companies working on radical new interfaces, such as Scinet by Etsimo (their interface is quite radical, but the problems it solves and addresses are ones Google will likely also be working on).

So many SEO paradigms don't begin to apply in this scenario; things like keyword research and rankings are not compatible with a query model that has multiple phases.

This new query model has a second application, namely Chained Queries, where you perform an initial query, and then on receiving a response you perform a second query on the same topic (the classic example is "How tall is Justin Bieber?" followed by "How old is he?"—the second query is dependent upon the first):

Example of a Chained Query—the second type of Compound Query

It might be that in the case of chained queries, the latter queries could be converted to be standalone queries, such that they don't muddy the SEO waters quite as much as queries that have revisions. However, I'm not sure that this necessarily stands true, because every query in a chain adds context that makes it much easier for Google to accurately determine your intent in later queries.

If you are not convinced, consider that in the example above, as is often the case in examples (such as the Justin Bieber example), it is usually clear from the formulation that this is explicitly a chained query. However—there are chained queries where it is not necessarily clear that the current query is chained to the previous. To illustrate this, I've borrowed an example which Behshad Behzadi, Director of Conversational Search at Google, showed at SMX Munich last month:

Example of a "hidden" Chained Query—it is not explicit that the last search refers to the previous one.

If you didn't see the first search for "pictures of mario" before the second and third examples, it might not be immediately obvious that the second "pictures of mario" query has taken into account the previous search. There are bound to be far more subtle examples than this.

New interfaces

The days of all Google searches coming solely via a desktop-based web browser are already long since dead, but mobile users using voice search are just the start of the change—there is an ongoing divergence of interfaces. I'm focusing here on the output interfaces—i.e., how we consume the results from a search on a specific device.

The primary device category that springs to mind is that of wearables and smart watches, which have a variety of ways in which they communicate with their users:

  • Compact screens—devices like the Apple Watch and Microsoft Band have compact form factor screens, which allow for visual results, but not in the same format as days gone by—a list of web links won't be helpful.
  • Audio—with Siri, Google Now, and Cortana all becoming available via wearable interfaces (that pair to smart phones) users can also consume results as voice.
  • Vibrations—the Apple Watch can give users directions using vibrations to signal left and right turns without needing to look or listen to the device. Getting directions already covers a number of searches, but you could imagine this also being useful for various yes/no queries (e.g. "is my train on time?").

Each of these methods is incompatible with the old "title & snippet" format that made up the 10 blue links, and furthermore they are all different from one another.

What is clear is that there is going to need to be an increase in the forms in which search engines can respond to an identical query, with responses being adaptive to the way in which the user will consume their result.

We will also see queries where the query may be "handed off" to another device: imagine me doing a search for a location on my phone and then using my watch to get directions. Apple already has "Handoff", which does this in various contexts, and I expect we'll see the concept taken further.

This is related to Google increasingly providing us with encapsulated answers, rather than links to websites—especially true on wearables and smart devices. The interesting phenomenon here is that these answers don't specify a specific layout, like a webpage does. The data and the layout are separated.

Which leads us to...

Cards

Made popular by Google Now, cards are prevalent in both iOS and Android, as well as on social platforms. They are a growing facet of the mobile experience:

Cards provide small units of information in an accessible chunk, often with a link to dig deeper by flipping a card over or by linking through to an app.

Cards exactly fit into the paradigm above—they are more concerned with the data you will see and less so about the way in which you will see it. The same cards look different in different places.

Furthermore, we are entering a point where you can now do more and more from a card, rather than it leading you into an app to do more. You can respond to messages, reply to tweets, like and re-share, and do all sorts of things from cards, without opening an app; I highly recommend this blog post, which explores this phenomenon.

It seems likely we'll see Google Now (and mobile search as it becomes more like Google Now) allowing you to do more and more right from cards themselves—many of these things will be actions facilitated by other parties (by way of APIs or schema.org actions). In this way Google will become a "junction box" sitting between us and third parties who provide services; they'll find an API/service provider and return us a snippet of data showing us options, and then enable us to pass back data representing our response to the relevant API.

Shared screens

The next piece of the puzzle is "shared screens", which covers several things. This starts with Google Chromecast, which has popularised the ability to "throw" things from one screen to another. At home, any guests I have over who join my wifi are able to "throw" a YouTube video from their mobile phone to my TV via the Chromecast. The same is true for people in the meeting rooms at Distilled offices and in a variety of other public spaces.

I can natively throw a variety of things: photos, YouTube videos, movies on Netflix, and so on. How long until that includes searches? How long until I can throw the results of a search on an iPad onto the TV to show my wife the holiday options I'm looking at? Sure, we can do that by sharing the whole screen now, but how long until, like photos or YouTube videos, the search results I throw to the TV take on a new layout that is suitable for that larger screen?

You can immediately see how this links back to the concept of cards and interfaces outlined above; I'm moving data from screen to screen, and between devices that provide different interfaces.

These concepts are all very related to the concept of "fluid mobility" that Microsoft recently presented in their Productivity Future Vision released in February this year.

An evolution of this is if we reach the point that some people have envisioned, whereby many office workers, who don't require huge computational power, no longer have computers at their desks. Instead, their desks just house dumb terminals: a display, keyboard, and mouse that connect to the phone in their pocket, which provides the processing power.

In this scenario, it becomes even more usual for people to be switching interfaces "mid task" (including searches)—you do a search at your desk at work (powered by your phone), then continue to review the results on the train home on the phone itself before browsing further on your TV at home.

Email structured markup

This deserves a quick mention—it is another data point in the trend of "enabling action". It doesn't seem to be common knowledge that you can use structured markup and schema.org markup in emails, which works in both Gmail and Google Inbox.

Editor's note: Stay tuned for more on this in tomorrow's post!

The main concepts they introduce are "highlights" and "actions"—sound familiar? You can define actions that become buttons in emails allowing people to confirm, save, review, RSVP, etc. with a single click right in the email.

Currently, you have to apply to Google for them to whitelist emails you send out in order for them to mark the emails up, but I expect we'll see this rolling out more and more. It may not seem directly search-related but if you're building the "ultimate personal assistant", then merging products like Google Now and Google Inbox would be a good place to start.
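
To make the idea concrete, here's a minimal sketch of the kind of schema.org action markup that can sit in an email body for Gmail/Inbox to pick up. The action type (ConfirmAction) is one Google documents, but the name, description, and handler URL below are purely illustrative:

    <script type="application/ld+json">
    {
      "@context": "http://schema.org",
      "@type": "EmailMessage",
      "description": "Approval request for an expense report",
      "potentialAction": {
        "@type": "ConfirmAction",
        "name": "Approve expense",
        "handler": {
          "@type": "HttpActionHandler",
          "url": "https://example.com/approve?requestId=abc123"
        }
      }
    }
    </script>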

The rise of data-driven search

There is a common theme running through all of the above technologies and trends, namely data:

  • We are increasingly requesting from Search Engines snippets of data, rather than links to strictly formatted web content
  • We are increasingly being provided the option for direct action without going to an app/website/whatever by providing a snippet of data with our response/request

I think in the next 2 years small payloads of data will be the new currency of Google. Web search won't go away anytime soon, but large parts of it will be subsumed into the data-driven paradigm. Projects like Knowledge Vault, which aims to dislodge the Freebase/Wikipedia-powered (i.e. manually curated) Knowledge Graph by pulling facts directly from the text of all pages on the web, will mean that mining the web for parcels of data becomes feasible at scale. This will mean that Google knows where to look for specific bits of data and can extract and return this data directly to the user.

How all this might change the way users and search engines interact:

  1. The move towards compound queries will mean it becomes more natural for people to use Google to "interact" with data in an iterative process; Google won't just send us to a set of data somewhere else but will help us sift through it all.
  2. Shared screens will mean that search results will need to be increasingly device agnostic. The next generation of technologies such as Apple Handoff and Google Chromecast will mean we increasingly pass results between devices where they may take on a new layout.
  3. Cards will be one part of making that possible by ensuring that results can be rendered in various formats. Users will become more and more accustomed to interacting with sets of cards.
  4. The focus on actions will mean that Google plugs directly into APIs such that they can connect users with third party backends and enable that right there in their interface.

What we should be doing

I don't have a good answer to this—which is exactly why we need to talk about it more.

Firstly, what is obvious is that lots of the old facets of technical SEO are already breaking down. For example, as I mentioned above, things like keyword research and rankings don't fit well with the conversational search model where compound queries are prevalent. This will only become more and more the case as we go further down the rabbit hole. We need to educate clients and work out what new metrics help us establish how Google perceives us.

Secondly, I can't escape the feeling that APIs are not only going to increase further in importance, but also become more "mainstream". Think how over the years ownership of company websites started in the technical departments and migrated to marketing teams—I think we could see a similar pattern with more core teams being involved in APIs. If Google wants to connect to APIs to retrieve data and help users do things, then more teams within a business are going to want to weigh in on what it can do.

APIs might seem out of the reach and unnecessary for many businesses (exactly as websites used to...), but structured markup and schema.org are like a "lite API"—enabling programmatic access to your data and even now to actions available via your website. This will provide a nice stepping stone where needed (and might even be sufficient).

Lastly, if this vision of things does play out, then much of our search behaviour could be imagined to be a sophisticated take on faceted navigation—we do an initial search and then sift through and refine the data we get back to drill down to the exact morsels we were looking for. I could envision "Query Revision" queries where the initial search happens within Google's index ("science fiction books") but subsequent searches happen in someone else's, for example Amazon's, "index" ('show me just those with 5 stars and more than 10 reviews that were released in the last 5 years').

If that is the case, then what I will be doing is ensuring that Distilled's clients have thorough and accurate "indexes" with plenty of supplementary information that users could find useful. A few years ago we started worrying about ensuring our clients' websites have plenty of unique content, and this would see us worrying about ensuring they have a thorough "index" for their product/service. We should be doing that already, but suddenly it isn't going to be just a conversion factor, but a ranking factor too (following the same trend as many other signals, in that regard).

Discussion

Please jump in the comments, or tweet me at @TomAnthonySEO, with your thoughts. I am sure many of the details for how I have envisioned this may not be perfectly accurate, but directionally I'm confident and I want to hear from others with their ideas.



Monday, April 27, 2015

Introducing Buffer for Pinterest: Easily Schedule Your Pins, Manage and Measure

Pinterest is a happening place.

With more than 70 million users and 50 billion Pins, there’s always something new to cook, craft, buy, read or be inspired by on the visual social network.

For businesses or individuals looking to build or grow a presence on Pinterest, consistently posting valuable and interesting Pins is a great strategy to help people discover and share your Pins.

And today we’re thrilled to announce that Buffer is officially partnering with Pinterest to make it even easier to Pin ...

The post Introducing Buffer for Pinterest: Easily Schedule Your Pins, Manage and Measure appeared first on Social.

​The 3 Most Common SEO Problems on Listings Sites

Posted by Dom-Woodman

Listings sites have a very specific set of search problems that you don't run into everywhere else. By day I'm one of Distilled's analysts, but by night I run a job listings site, teflSearch. So, for my first Moz Blog post I thought I'd cover the three search problems with listings sites that I spent far too long agonising about.

Quick clarification time: What is a listings site (i.e. will this post be useful for you)?

The classic listings site is Craigslist, but plenty of other sites act like listing sites:

  • Job sites like Monster
  • E-commerce sites like Amazon
  • Matching sites like Spareroom

1. Generating quality landing pages

The landing pages on listings sites are incredibly important. These pages are usually the primary drivers of converting traffic, and they're usually generated automatically (or are occasionally custom category pages).

For example, if I search "Jobs in Manchester", you can see nearly every result is an automatically generated landing page or category page.

There are three common ways to generate these pages (occasionally a combination of more than one is used):

  • Faceted pages: These are generated by facets—groups of preset filters that let you filter the current search results. They usually sit on the left-hand side of the page.
  • Category pages: These pages are listings which have already had a filter applied and can't be changed. They're usually custom pages.
  • Free-text search pages: These pages are generated by a free-text search box.

Those definitions are still a bit general; let's clear them up with some examples:

Amazon uses a combination of categories and facets. If you click on browse by department you can see all the category pages. Then on each category page you can see a faceted search. Amazon is so large that it needs both.

Indeed generates its landing pages through free text search, for example if we search for "IT jobs in manchester" it will generate: IT jobs in manchester.

teflSearch generates landing pages using just facets. The jobs in China landing page is simply a facet of the main search page.

Each method has its own search problems when used for generating landing pages, so let's tackle them one by one.


Aside

Facets and free text search will typically generate pages with parameters e.g. a search for "dogs" would produce:

www.mysite.com?search=dogs

But to make the URL user-friendly, sites will often alter the URLs to display them as folders:

www.mysite.com/results/dogs/

These are still just ordinary free-text searches and facets; the URLs are simply more user-friendly. (They're a lot easier to work with in robots.txt too, as sketched below!)
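
As a rough illustration of the robots.txt point (the paths and parameter name here are hypothetical, and Google's wildcard support is assumed):

    User-agent: *
    # Parameterised search URLs usually have to be blocked by pattern:
    Disallow: /*?search=
    # Folder-style search URLs can be blocked (or left open) by a simple path:
    # Disallow: /results/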


Free search (& category) problems

If you've decided the base of your search will be a free text search, then we'll have two major goals:

  • Goal 1: Helping search engines find your landing pages
  • Goal 2: Giving them link equity.

Solution

Search engines won't use search boxes and so the solution to both problems is to provide links to the valuable landing pages so search engines can find them.

There are plenty of ways to do this, but two of the most common are:

  • Category links alongside a search

    Photobucket uses a free text search to generate pages, but if we look at an example search for photos of dogs, we can see the categories which define the landing pages along the right-hand side. (This is also an example of URL-friendly searches!)

  • Putting the main landing pages in a top-level menu

    Indeed also uses free text to generate landing pages, and they have a browse jobs section which contains the URL structure to allow search engines to find all the valuable landing pages.

Breadcrumbs are also often used in addition to the two above and in both the examples above, you'll find breadcrumbs that reinforce that hierarchy.

Category (& facet) problems

Categories, because they tend to be custom pages, don't actually have many search disadvantages. Instead it's the other attributes that make them more or less desirable. You can create them for the purposes you want and so you typically won't have too many problems.

However, if you also use a faceted search in each category (like Amazon) to generate additional landing pages, then you'll run into all the problems described in the next section.

At first, facets seem great: an easy way to generate multiple strong, relevant landing pages without doing much at all. The problems appear because people don't put limits on facets.

Let's take the job page on teflSearch. We can see it has 18 facets, each with many options. Some of these options will generate useful landing pages:

The China facet in countries will generate "Jobs in China", which is a useful landing page.

On the other hand, the "Conditional Bonus" facet will generate "Jobs with a conditional bonus," and that's not so great.

We can also see that the options within a single facet aren't always useful. As of writing, I have a single job available in Serbia. That's not a useful search result, and the poor user engagement combined with the tiny amount of content will be a strong signal to Google that it's thin content. Depending on the scale of your site it's very easy to generate a mass of poor-quality landing pages.

Facets generate other problems too, the primary one being that they can create a huge amount of duplicate content and pages for search engines to get lost in. This is caused by two things: the first is the sheer number of possibilities they generate, and the second is that selecting facets in different orders creates identical pages with different URLs.

We end up with four goals for our facet-generated landing pages:

  • Goal 1: Make sure our searchable landing pages are actually worth landing on, and that we're not handing a mass of low-value pages to the search engines.
  • Goal 2: Make sure we don't generate multiple copies of our automatically generated landing pages.
  • Goal 3: Make sure search engines don't get caught in the metaphorical plastic six-pack rings of our facets.
  • Goal 4: Make sure our landing pages have strong internal linking.

The first goal needs to be set internally; you're always going to be the best judge of the number of results that need to be present on a page in order for it to be useful to a user. I'd argue you can rarely ever go below three, but it depends both on your business and on how much content fluctuates on your site, as the useful landing pages might also change over time.

We can solve the next three problems as a group. There are several possible solutions depending on what skills and resources you have access to; here are two possible solutions:

Category/facet solution 1: Blocking the majority of facets and providing external links
  • Easiest method
  • Good if your valuable category pages rarely change and you don't have too many of them.
  • Can be problematic if your valuable facet pages change a lot

Nofollow all your facet links, and noindex and block (via robots.txt) any category pages which aren't valuable or which sit deeper than x facet/folder levels into your search.

You set x by looking at how deep your useful facet pages with search volume exist. So, for example, if you have three facets for televisions: manufacturer, size, and resolution, and even combinations of all three have multiple results and search volume, then you could set x to three and index everything up to three levels.

On the other hand, if people are searching for three levels (e.g. "Samsung 42" Full HD TV") but you only have one or two results for three-level facets, then you'd be better off indexing two levels and letting the product pages themselves pick up long-tail traffic for the third level.

If you have valuable facet pages that exist deeper than one facet or folder into your search, then this creates some duplicate content problems, dealt with in the aside "Indexing more than one level of facets" below.
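
As a sketch of what the depth rule might look like in robots.txt, assuming facet selections are appended as extra folders under a hypothetical /search/ path and that only one facet level should be crawlable:

    User-agent: *
    # One facet level stays crawlable, e.g. /search/china/
    # Anything two or more facet levels deep is blocked, e.g. /search/china/full-time/
    Disallow: /search/*/*/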



The immediate problem with this set-up, however, is that in one stroke we've removed most of the internal links to our category pages, and by no-following all the facet links, search engines won't be able to find your valuable category pages.

In order to re-create the linking, you can add a top-level drop-down menu to your site containing the most valuable category pages, add category links elsewhere on the page, or create a separate part of the site with links to the valuable category pages.

You can see the top-level drop-down menu on teflSearch (it's the "search jobs" menu); the other two examples are demonstrated by Photobucket and Indeed respectively in the previous section.

The big advantage of this method is how quick it is to implement: it doesn't require any fiddly internal logic, and adding an extra menu option is usually minimal effort.

Category/facet solution 2: Creating internal logic to work with the facets

  • Requires new internal logic
  • Works for large numbers of category pages with value that can change rapidly

There are four parts to the second solution:

  1. Select valuable facet categories and allow those links to be followed. No-follow the rest.
  2. No-index all pages that return a number of items below the threshold for a useful landing page.
  3. No-follow all facets on pages with a search depth greater than 1.
  4. Block all facet pages deeper than x levels in robots.txt.

As with the last solution, x is set by looking at where your useful facet pages exist that have search volume (full explanation in the first solution), and if you're indexing more than one level you'll need to check out the aside below to see how to deal with the duplicate content it generates.

This will generate landing pages for the facets you've decided are valuable and noindex the landing pages which are low-quality. It will only create pages for a single level of facets, which prevents duplicate content.
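
Here's a minimal sketch of that internal logic in Python. The facet names, thresholds, and function shapes are assumptions for illustration, not teflSearch's actual implementation; rule 4 (the robots.txt depth block) lives outside the application code:

    # Illustrative values only
    VALUABLE_FACETS = {"country", "contract_type"}  # facets worth landing pages
    MIN_RESULTS = 3        # below this, a landing page is too thin to index
    MAX_FOLLOW_DEPTH = 1   # only follow facet links from unfiltered pages

    def facet_link_rel(current_depth, facet_name):
        """Rules 1 & 3: follow links only for valuable facets on shallow pages."""
        if current_depth < MAX_FOLLOW_DEPTH and facet_name in VALUABLE_FACETS:
            return "follow"
        return "nofollow"

    def robots_meta(result_count):
        """Rule 2: noindex any landing page that is too thin to be useful."""
        return "noindex, follow" if result_count < MIN_RESULTS else "index, follow"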


Aside: Indexing more than one level of facets

If you want a second level of facets to be indexable, e.g. Televisions - Facet 1 (46"), Facet 2 (Samsung), then the easiest option is to remove the fourth rule from above and either add links to them using one of the methods in Solution 1, or add the pages to your sitemap.

The alternative is to set robots.txt to allow category pages up to 2 levels to be indexed and all facets to be followed up to two levels.

This will, however, create duplicate content, because now search engines will be able to create:

  • Televisions - 46" - Samsung
  • Televisions - Samsung - 46"

You'll have to either rel canonical your duplicate pages with another rule, or set up your facets so they create a single unique URL.
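
The "single unique URL" option can be as simple as ordering the facet selections deterministically before building the URL. A small sketch in Python (the path and facet names are hypothetical):

    from urllib.parse import urlencode

    def canonical_facet_url(base_path, facets):
        """Sort facet key/value pairs so selection order never changes the URL."""
        return base_path + "?" + urlencode(sorted(facets.items()))

    a = canonical_facet_url("/televisions/", {"size": "46", "brand": "samsung"})
    b = canonical_facet_url("/televisions/", {"brand": "samsung", "size": "46"})
    assert a == b  # both become /televisions/?brand=samsung&size=46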

You'll also need to be aware that unless you set up more complicated logic, all of your followable facets will multiply. Depending on your setup you might need to block more paths in robots.txt or set up more logic.

Letting search engines index more than one level of facets adds a lot of possible problems; make sure you're keeping track of them.


2. User-generated content cannibalization

This is a common problem for listings sites (assuming they allow user-generated content). If you're reading this as an e-commerce site that only lists its own products, you can skip this one.

As we covered in the first area, category pages on listings sites are usually the landing pages aiming for the valuable search terms, but as your users start generating pages they can often create titles and content that cannibalise your landing pages.

Suppose you're a job site with a category page for PHP Jobs in Greater Manchester. If a recruiter then creates a job advert for PHP Jobs in Greater Manchester for the 4 positions they currently have, you've got a duplicate content problem.

This is less of a problem when your site is large and your categories are mature, because it will be obvious to any search engine which pages are your high-value category pages. At the start, though, when you're lacking authority and individual listings might contain more relevant content than your own search pages, this can be a problem.

Solution 1: Create structured titles

Set the <title> differently from the on-page title. Depending on the variables you have available, you can set the title tag programmatically, without changing the page title, using other information given by the user.

For example, on our imaginary job site, suppose the recruiter also provided the following information in other fields:

  • The no. of positions: 4
  • The primary area: PHP Developer
  • The name of the recruiting company: ABC Recruitment
  • Location: Manchester

We could set the <title> pattern to be: *No of positions* *The primary area* with *recruiter name* in *Location* which would give us:

4 PHP Developers with ABC Recruitment in Manchester
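
A minimal sketch of how such a title could be assembled programmatically; the field names and the naive pluralisation are illustrative only:

    def build_title(positions, role, recruiter, location):
        """Assemble a structured <title> from fields the recruiter provided."""
        plural = "s" if positions != 1 else ""  # naive pluralisation
        return f"{positions} {role}{plural} with {recruiter} in {location}"

    print(build_title(4, "PHP Developer", "ABC Recruitment", "Manchester"))
    # -> 4 PHP Developers with ABC Recruitment in Manchester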

Setting a <title> tag allows you to target long-tail traffic by constructing detailed descriptive titles. In our above example, imagine the recruiter had specified "Castlefield, Manchester" as the location.

All of a sudden, you've got a perfect opportunity to pick up long-tail traffic for people searching in Castlefield in Manchester.

On the downside, you lose the ability to pick up long-tail traffic where your users have chosen keywords you wouldn't have used.

For example, suppose Manchester has a jobs program called "Green Highway." A job advert title containing "Green Highway" might pick up valuable long-tail traffic. Being able to discover this, however, and find a way to fit it into a dynamic title is very hard.

Solution 2: Use regex to noindex the offending pages

Perform a regex (or string contains) search on your listings' titles and no-index the ones which cannibalise your main category pages.

If it's not possible to construct titles with variables, or your users provide a lot of additional long-tail traffic with their own titles, then this is a great option. On the downside, you miss out on possible structured long-tail traffic that you might've been able to aim for.
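
A small sketch of that check in Python; the pattern and the meta robots values are illustrative:

    import re

    # Hypothetical category pages we don't want individual listings to compete with
    CATEGORY_PATTERNS = [
        re.compile(r"php\s+jobs\s+in\s+(greater\s+)?manchester", re.IGNORECASE),
    ]

    def listing_robots_meta(listing_title):
        """Noindex listings whose titles duplicate a category landing page."""
        if any(p.search(listing_title) for p in CATEGORY_PATTERNS):
            return "noindex, follow"
        return "index, follow"

    print(listing_robots_meta("PHP Jobs in Greater Manchester (4 positions)"))
    # -> noindex, follow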

Solution 3: De-index all your listings

It may seem rash, but if you're a large site with a huge number of very similar or low-content listings, you might want to consider this. There is no common standard: some sites, like Indeed, choose to no-index all their job adverts, whereas other sites, like Craigslist, index all their individual listings because they'll drive long-tail traffic.

Don't de-index them all lightly!

3. Constantly expiring content

Our third and final problem is that user-generated content doesn't last forever. Particularly on listings sites, it's constantly expiring and changing.

For most use cases I'd recommend 301'ing expired content to a relevant category page, with a message triggered by the redirect notifying the user of why they've been redirected. It typically comes out as the best combination of search and UX.
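
As a sketch of that behaviour (using Flask purely for illustration; the routes, the in-memory data, and the ?expired flag are all assumptions, not a recommendation of any particular framework):

    from flask import Flask, redirect, request

    app = Flask(__name__)

    # Hypothetical lookup; a real site would query its database.
    LISTINGS = {"php-developer-123": {"expired": True,
                                      "category_url": "/jobs/manchester/php/"}}

    @app.route("/listing/<slug>/")
    def listing(slug):
        job = LISTINGS.get(slug)
        if job and job["expired"]:
            # 301 to the most relevant category page, flagging the redirect
            # so the category template can explain why the user ended up there.
            return redirect(job["category_url"] + "?expired=1", code=301)
        return "...live listing page..."

    @app.route("/jobs/manchester/php/")
    def category():
        notice = ("That listing has expired; here are similar current jobs. "
                  if request.args.get("expired") else "")
        return notice + "...category results..."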

For more information or advice on how to deal with the edge cases, there's a previous Moz blog post on how to deal with expired content which I think does an excellent job of covering this area.

Summary

In summary, if you're working with listings sites, all three of the following need to be kept in mind:

  • How are the landing pages generated? If they're generated using free text or facets, have the potential problems been solved?
  • Is user generated content cannibalising the main landing pages?
  • How has constantly expiring content been dealt with?

Good luck listing, and if you've had any other tricky problems or solutions you've come across working on listings sites, let's chat about them in the comments below!



Thursday, April 23, 2015

2 Days After Mobilegeddon: How Far Did the Sky Fall?

Posted by Dr-Pete

Even clinging to the once towering bridge, the only thing Kayce could see was desert. Yesterday, San Francisco hummed with life, but now there was nothing but the hot hiss of the wind. Google’s Mobilegeddon blew out from Mountain View like Death’s last exhale, and for the first time since she regained consciousness, Kayce wondered if she was the last SEO left alive.

We have a penchant for melodrama, and the blogosphere loves a conspiracy, but after weeks of speculation bordering on hysteria, it’s time to see what the data has to say about Google’s Mobile Update. We’re going to do something a little different – this post will be updated periodically as new data comes in. Stay tuned to this post/URL.

If you watch MozCast, you may be unimpressed with this particular apocalypse:

Temperatures hit 66.1°F on the first official day of Google's Mobile Update (the system is tuned to an average of 70°F), and then dropped to 62.1° on day 2. Of course, the problem is that this system only measures desktop temperatures, and as we know, Google's Mobile Update should only impact mobile SERPs. So, we decided to build MozCast Mobile, which would separately track mobile SERPs (Android, specifically) across the same 10K keyword set. Here's what we saw for the past 8 days on MozCast Mobile:

Across the board, mobile temperatures run a little hotter (which could just be quirks in how we measure). On April 21st, mobile temps were slightly higher, but nothing to write home about. On April 22nd, though, temperatures between desktop and mobile diverged, with a difference of almost 18°. Day 2 is looking a lot more like an algorithm change.

There's another metric we can look at, though. Since building MozCast Mobile, we've also been tracking how many page-1 URLs show the "Mobile-friendly" tag. Presumably, if mobile-friendly results are rewarded, we'll expect that number to jump. Here are the last 8 days of that stat:

Even before April 21st, a surprisingly high number of the URLs we track carried the "Mobile-friendly" tag. We don't have a lot of historical data, but the low point was around 66.3%. The number has steadily crept up over the past 2 weeks, but it's unclear whether this is an algorithmic change, data being updated by Google, or sites being updated last-minute to be more mobile friendly.

On April 22nd (day 2), the number of sites with "Mobile-friendly" tagging crept up again, to 72.3%. Again, we can't really determine the cause for this increase, but, one way or another, Google seems to be getting what they wanted.

Tracking a long roll-out

Although Google has repeatedly cited April 21st, they've also said that this update could take days or weeks. If an update is spread out over weeks, can we accurately measure the flux? The short answer is: not very well. We can measure flux over any time-span, but search results naturally change over time – we have no real guidance to tell us what's normal over longer periods.

The "Mobile-friendly" tag tracking is one solution – this should gradually increase – but there's another metric we can look at. If mobile results continue to diverge from desktop results, than the same-day flux between the two sets of results should increase. In other words, mobile results should get increasingly different from desktop results with each day of the roll-out. Here's what that cross-flux looks like:

I'm using raw flux data here, since the temperature conversion isn't calibrated to this data. This comparison is tricky, because many sites use different URLs for mobile vs. desktop. I've stripped out the obvious cases ("m." and "mobile." sub-domains), but that still leaves a lot of variants.
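
To make "cross-flux" a little more concrete, here's a rough sketch of the comparison described above. The normalisation rules and the flux formula are my assumptions for illustration, not MozCast's actual methodology:

    from urllib.parse import urlparse

    def normalise(url):
        """Strip obvious mobile sub-domains so desktop/mobile URLs can be compared."""
        parts = urlparse(url)
        host = parts.netloc.lower()
        for prefix in ("www.m.", "m.", "mobile.", "www."):
            if host.startswith(prefix):
                host = host[len(prefix):]
                break
        return host + parts.path

    def cross_flux(desktop_urls, mobile_urls):
        """Fraction of page-1 desktop URLs missing from the mobile SERP (0 = identical)."""
        desktop = {normalise(u) for u in desktop_urls}
        mobile = {normalise(u) for u in mobile_urls}
        return 1 - len(desktop & mobile) / max(len(desktop), 1)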

Although April 21st was quiet, we are seeing a decent bump around April 22nd. If this pattern of divergence continues and grows over time, we'll know something is happening. The bump on April 15th is probably an error – Google made a change to In-depth Articles on mobile that created some bad data.

Tracking potential losers

No sites are reporting major hits yet, but by looking at the "Mobile-friendly" tag for the top domains in MozCast Mobile, we can start to piece together who might get hit by the update. Here are the top 20 domains (in our 10K data set) as of April 21st, along with the percent of their ranking URLs that are tagged as mobile-friendly:

    1. en.m.wikipedia.org — 96.3%
    2. www.amazon.com — 62.3%
    3. m.facebook.com — 100.0%
    4. m.yelp.com — 99.9%
    5. m.youtube.com — 27.8%
    6. twitter.com — 99.8%
    7. www.tripadvisor.com — 92.5%
    8. www.m.webmd.com — 100.0%
    9. mobile.walmart.com — 99.5%
    10. www.pinterest.com — 97.5%
    11. www.foodnetwork.com — 69.9%
    12. www.ebay.com — 97.7%
    13. www.mayoclinic.org — 100.0%
    14. m.allrecipes.com — 97.1%
    15. m.medlineplus.gov — 100.0%
    16. www.bestbuy.com — 90.2%
    17. www.overstock.com — 98.6%
    18. m.target.com — 41.4%
    19. www.zillow.com — 99.6%
    20. www.irs.gov — 0.0%

I've bolded any site under 75% – the IRS is our big Top 20 trouble spot, although don't expect IRS.gov to stop ranking at tax-time soon. Interestingly, YouTube's mobile site only shows as mobile-friendly about a quarter of the time in our data set – this will be a key case to watch. Note that Google could consider a site mobile-friendly without showing the "Mobile-friendly" tag, but it's the simplest/best proxy we have right now.

Changes beyond rankings

It's important to note that, in many ways, mobile SERPs are already different from desktop SERPs. The most striking difference is design, but that's not the only change. For example, Google recently announced that they would be dropping domains in mobile display URLs. Here's a sample mobile result from my recent post:

Notice the display URL, which starts with the brand name ("Moz") instead of our domain name. That's followed by a breadcrumb-style URL that uses part of the page name. Expect this to spread, and possibly even hit desktop results in the future.

While Google has said that vertical results wouldn't change with the April 21st update, that statement is a bit misleading when it comes to local results. Google already uses different styles of local pack results for mobile, and those pack results appear in different proportions. For example, here's a local "snack pack" on mobile (Android):

Snack packs appear in only 1.5% of the local rankings we track for MozCast Desktop, but they're nearly 4X as prevalent (6.0%) on MozCast Mobile (for the same keywords and locations). As these new packs become more prevalent, they take away other styles of packs, and create new user behavior. So, to say local is the same just because the core algorithm may be the same is misleading at best.

Finally, mobile adds entirely new entities, like app packs on Android (from a search for "jobs"):

These app packs appear on a full 8.4% of the mobile SERPs we're tracking, including many high-volume keywords. As I noted in my recent post, these app packs also consume page-1 organic slots.

A bit of good news

If you're worried that you may be too late to the mobile game, it appears there is some good news. Google will most likely reprocess new mobile-friendly pages quickly. Just in the past few days, Moz redesigned our blog to be mobile friendly. In less than 24 hours, some of our main blog pages were already showing the "Mobile-friendly" tag:

However big this update ultimately ends up being, Google's push toward mobile-first design and their clear public stance on this issue strongly signal that mobile-friendly sites are going to have an advantage over time.

One other bit of good news: we are actively exploring mobile rank-tracking for Moz Analytics. More details are in this Q&A from MA's Product Manager, Jon White.

Stay tuned to this post (same URL) for the next week or two - I'll be updating charts and data as the Mobile Update continues to roll out. If the update really does take days or weeks, we'll do our best to measure the long-term impact and keep you informed.



Wednesday, April 22, 2015

​1 Day After Mobilegeddon: How Far Did the Sky Fall?

Posted by Dr-Pete

Even clinging to the once towering bridge, the only thing Kayce could see was desert. Yesterday, San Francisco hummed with life, but now there was nothing but the hot hiss of the wind. Google’s Mobilegeddon blew out from Mountain View like Death’s last exhale, and for the first time since she regained consciousness, Kayce wondered if she was the last SEO left alive.

We have a penchant for melodrama, and the blogosphere loves a conspiracy, but after weeks of speculation bordering on hysteria, it’s time to see what the data has to say about Google’s Mobile Update. We’re going to do something a little different – this post will be updated periodically as new data comes in. Stay tuned to this post/URL.

If you watch MozCast, you may be unimpressed with this particular apocalypse:

Temperatures hit 66.1°F on the first official day of Google's Mobile Update (the system is tuned to an average of 70°F). Of course, the problem is that this system only measures desktop temperatures, and as we know, Google's Mobile Update should only impact mobile SERPs. So, we decided to build MozCast Mobile, which would separately track mobile SERPs (Android, specifically) across the same 10K keyword set. Here's what we saw for the past 7 days on MozCast Mobile:

While the temperature across mobile results on April 21st was slightly higher (73.7°F), you'll also notice that most of the days are slightly higher and the pattern of change is roughly the same. It appears that the first day of the Mobile Update was a relatively quiet day.

There's another metric we can look at, though. Since building MozCast Mobile, we've also been tracking how many page-1 URLs show the "Mobile-friendly" tag. Presumably, if mobile-friendly results are rewarded, we'll expect that number to jump. Here's the last 7 days of that stat:

As of the morning of April 22nd, 70.1% of the URLs we track carried the "Mobile-friendly" tag. That sounds like a lot, but that number hasn't changed much the past few days. Interestingly, the number has crept up over the past 2 weeks from a low of 66.3%. It's unclear whether this is due to changes Google made or changes webmasters made, but I suspect this small uptick indicates sites making last-minute changes to meet the mobile deadline. It appears Google is getting what they want from us, one way or another.

Tracking a long roll-out

Although Google has repeatedly cited April 21st, they've also said that this update could take days or weeks. If an update is spread out over weeks, can we accurately measure the flux? The short answer is: not very well. We can measure flux over any time-span, but search results naturally change over time – we have no real guidance to tell us what's normal over longer periods.

The "Mobile-friendly" tag tracking is one solution – this should gradually increase – but there's another metric we can look at. If mobile results continue to diverge from desktop results, than the same-day flux between the two sets of results should increase. In other words, mobile results should get increasingly different from desktop results with each day of the roll-out. Here's what that cross-flux looks like:

I'm using raw flux data here, since the temperature conversion isn't calibrated to this data. This comparison is tricky, because many sites use different URLs for mobile vs. desktop. I've stripped out the obvious cases ("m." and "mobile." sub-domains), but that still leaves a lot of variants.

Historically, we're not seeing much movement on April 21st. The bump on April 15-16 is probably an error – Google made a change to In-depth Articles on mobile that created some bad data. So, again, not much going on here, but this should give us a view to see compounding changes over time.

Tracking potential losers

No sites are reporting major hits yet, but by looking at the "Mobile-friendly" tag for the top domains in MozCast Mobile, we can start to piece together who might get hit by the update. Here are the top 20 domains (in our 10K data set) as of April 21st, along with the percent of their ranking URLs that are tagged as mobile-friendly:

    1. en.m.wikipedia.org -- 96.3%
    2. www.amazon.com -- 62.3%
    3. m.facebook.com -- 100.0%
    4. m.yelp.com -- 99.9%
    5. m.youtube.com -- 27.8%
    6. twitter.com -- 99.8%
    7. www.tripadvisor.com -- 92.5%
    8. www.m.webmd.com -- 100.0%
    9. mobile.walmart.com -- 99.5%
    10. www.pinterest.com -- 97.5%
    11. www.foodnetwork.com -- 69.9%
    12. www.ebay.com -- 97.7%
    13. www.mayoclinic.org -- 100.0%
    14. m.allrecipes.com -- 97.1%
    15. m.medlineplus.gov -- 100.0%
    16. www.bestbuy.com -- 90.2%
    17. www.overstock.com -- 98.6%
    18. m.target.com -- 41.4%
    19. www.zillow.com -- 99.6%
    20. www.irs.gov -- 0.0%

I've bolded any site under 75% – the IRS is our big Top 20 trouble spot, although don't expect IRS.gov to stop ranking at tax-time soon. Interestingly, YouTube's mobile site only shows as mobile-friendly about a quarter of the time in our data set – this will be a key case to watch. Note that Google could consider a site mobile-friendly without showing the "Mobile-friendly" tag, but it's the simplest/best proxy we have right now.

Changes beyond rankings

It's important to note that, in many ways, mobile SERPs are already different from desktop SERPs. The most striking difference is design, but that's not the only change. For example, Google recently announced that they would be dropping domains in mobile display URLs. Here's a sample mobile result from my recent post:

Notice the display URL, which starts with the brand name ("Moz") instead of our domain name. That's followed by a breadcrumb-style URL that uses part of the page name. Expect this to spread, and possibly even hit desktop results in the future.

While Google has said that vertical results wouldn't change with the April 21st update, that statement is a bit misleading when it comes to local results. Google already uses different styles of local pack results for mobile, and those pack results appear in different proportions. For example, here's a local "snack pack" on mobile (Android):

Snack packs appear in only 1.5% of the local rankings we track for MozCast Desktop, but they're nearly 4X as prevalent (6.0%) on MozCast Mobile (for the same keywords and locations). As these new packs become more prevalent, they take away other styles of packs, and create new user behavior. So, to say local is the same just because the core algorithm may be the same is misleading at best.

Finally, mobile adds entirely new entities, like app packs on Android (from a search for "jobs"):

These app packs appear on a full 8.4% of the mobile SERPs we're tracking, including many high-volume keywords. As I noted in my recent post, these app packs also consume page-1 organic slots.

A bit of good news

If you're worried that you may be too late to the mobile game, it appears there is some good news. Google will most likely reprocess new mobile-friendly pages quickly. Just in the past few days, Moz redesigned our blog to be mobile friendly. In less than 24 hours, some of our main blog pages were already showing the "Mobile-friendly" tag:

However big this update ultimately ends up being, Google's push toward mobile-first design and their clear public stance on this issue strongly signal that mobile-friendly sites are going to have an advantage over time.

Stay tuned to this post (same URL) for the next week or two - I'll be updating charts and data as the Mobile Update continues to roll out. If the update really does take days or weeks, we'll do our best to measure the long-term impact and keep you informed.



Friday, April 17, 2015

How Google's Evolution is Forcing Marketers to Invest in Loyal Audiences - Whiteboard Friday

Posted by randfish

Given Google's recent changes to SERPs and their April 21 mobile deadline, does SEO still come first? In today's Whiteboard Friday, Rand walks you through tactics you can use to build a loyal audience before you need to do SEO.

For reference, here's a still of this week's whiteboard.

How Google's Evolution is Forcing Marketers to Invest in Loyal Audiences Whiteboard

Click on it to open a high resolution image in a new tab!

Video transcription

Howdy Moz fans and welcome to another edition of Whiteboard Friday. This week we're chatting on some of the changes that Google has made that are forcing marketers to invest more and more in building loyal audiences before they do SEO. This is kind of a reverse of years past, where we could use SEO as that initial channel to attract visitors who would become our customers, our email subscribers, our social media fans and followers. All of these things have kind of switched direction.

Why move SEO later in the process?

There are some reasons why. First off, for a lot of broad, head-of-the-demand-curve queries, Google has taken some of the value and equity away with things like instant answers and the Knowledge Graph, along with lots and lots of other verticals.

Knowledge Graph

I do a search for "plaid shirts" and I get this instant answer showing me what a plaid shirt looks like and a Knowledge Graph. This is a fake example. I don't think they actually do this for plaid shirts yet, but they will.

Personalization

Personalization by history, we're seeing a ton of personalization. I think history is one of the biggest influencers on personalization. Google+ still is a little bit, but your search history and what you've clicked on in the past tends to be big predictors of this. You can see this in two areas, not just in the results that Google shows, but also in what they're suggesting to you in your Search Suggest as you type.

Now, where Google is trying to predictively say, "Hey, we think you're going to want coffee right now because we see that you stepped out of your office and you live in Seattle, and you are a human being. So you must want coffee." They have these ranking signals, that are relatively new over the past few years and certainly much stronger than in years past around user and usage data, around search volume and what you searched for using quality raters and human and manual controls. Signals that are heavily correlated with brand, even if brand itself isn't necessarily a ranking factor.

Fewer results

Of course, there are fewer results now. I don't know if you guys caught this, but I thought one of the most fascinating things that Dr. Pete showed off recently in his MozCast data set was that it used to be the case that Google would show 10 results even if they had a set of images, a news result, and a local pack. Now basically these count as individual results. So you're not getting 10 results on a page. If you've got images and a couple of news things, you're getting seven results that are web results. Ten big, powerful domains, places like Amazon and Yelp and those kinds of things, appear on 17% of all page one queries, at least for U.S. search results. There are a little fewer results to work with and more results biased to these bigger, better-known sites.

All of these things are contributing to this world in which doing SEO first and then earning loyalty through those other channels is really, really hard. It's making the value of having a loyal audience before you need to do SEO that much more valuable, which is why I figured we'd run through some of the tactics that you can use to build a loyal audience.

This is actually a question from one of our Whiteboard Friday loyal audience members. Thank you very much. Much appreciated.

How to build a loyal audience

Some tactics to build loyalty, we talked about a few of these, but creating an expectation that you can consistently deliver upon is a huge part of how loyalty is created. Humans love to form habits. Thankfully for marketers, we're terrible at breaking those habits.

Consistency

If you can form a habit, you can create a loyal member of your audience, but this is very challenging unless you deliver consistency. That consistency needs to be created through an expectation. That could be when you publish. That could be what you're going to do. That could be the format of the content that you're providing. That could be how your solution or problem or product is delivered. But it needs to create those things in order to build that loyal audience.

Reach your audience where they are

Secondly, provide your content through the channels, the apps, the accounts, the formats that your audience is already using. If I say, "Hey, in order to get Whiteboard Friday, you need to sign up for a Moz account first," the viewability of Whiteboard Friday is going to go down. If on the other hand, which we don't have this but we really should have it, there was a subscribe on iTunes and you could get each Whiteboard Friday as a podcast, gosh, that is something that many Whiteboard Friday viewers, in fact, many people in the technology and marketing worlds already have access to. Therefore it reduces the friction of subscribing to Whiteboard Friday. We might build more people into our loyal audience.

This is definitely something to think about. You need to be able to identify those channels and then be there.

Where SEO fits

I'm saying don't start with SEO as your primary web marketing tactic anymore. I think we have to build into it. These challenges are too great, and even if they could be overcome today, they are growing. All of them are growing substantially: instant answers and the Knowledge Graph are becoming a bigger and bigger part of search results. Google Now is something that Google is pushing incredibly hard, and I think they're going to keep pushing it with new devices; they're clearly pushing it with app results inside of search results. I think these ranking signals are only going to get stronger. I think there's going to be more personalization. For every one of these, you can see an up-and-to-the-right trend.

Therefore, when we do SEO, we have to think about it as, "How do I earn a loyal audience and then use their amplification to help me perform in search?" rather than, "How do I do SEO for my website to earn visitors that I can convert into a loyal audience?" That's a new challenge, a new paradigm for us.

Be unique and memorable

Craft a stylistically unique and memorable approach to solving your audience's problem. One of the things that I find challenging with a lot of the businesses we talk to and that I get to interact with is that they think, "Hey, we're the best player in this field. We're the best at doing this. Therefore, we should be able to earn a great customer audience." I think this ignores why marketing exists, the power that marketing has, and the power of influencing human beings overall.

Being the best really is not necessarily enough. We are not perfectly logical creatures who go, "Hey, I am thinking about a new social media monitoring solution. I need to watch Twitter, Facebook, Google+, LinkedIn, and Instagram for my business. Therefore I'm going to create my criteria, I'm going to evaluate all 716 providers that are in the market today that fit my price range and those criteria, and then I'm going to choose, effectively, the best one." No, we're biased by the ones we've heard of, the ones our friends recommend, the ones we stumble across versus don't stumble across, the ones that have a loud voice, the ones that have a credible voice. These things bias us. Therefore, being stylistically unique and memorable has outsized power to determine whether people will become part of your loyal audience.

More isn't necessarily better

I've talked about this a few times, but I'm strongly of the opinion, especially when it comes to loyalty, that more content may actually be worse than better content. Moz publishes between 7 and 10 blog posts a week. That's a lot of content; I think there are weeks where we've published 12 blog posts. So for me to say this is a little odd. But the challenge applies mostly before you've built a loyal audience. Once you have a loyal audience, you can start to expand that audience by reaching out and broadening the spectrum of content that you create, and you can afford to be a little more risk-taking in that. When you are trying to build loyalty early on, you need to have that consistency of quality.

People are going to return because you keep delivering great stuff again and again. When that suffers, your audience will suffer as well. If someone watches their first three Whiteboard Fridays and the fourth one is not great, I expect to lose a ton of those viewers. But if I have tens of thousands of people who are watching Whiteboard Friday and I deliver one bad one out of twenty, maybe I have a little more room to play there.

Focus your efforts

Focus. This is a big challenge because I think a lot of us think very broadly about who we want to appeal to, the types of content we want to create, and the types of marketing we want to do. This is very challenging from a loyalty perspective because passionate fans tend to congregate around very, very focused causes, very focused creators of content, focused brands, and focused organizations. It's much tougher to build that passion into a group of users if you're trying to appeal to a very broad set. That's just how it is.

Don't forget engagement

Last, but not least, this is very tactical, but I've found it extremely powerful when a brand or a project is starting out to engage and respond as much as possible with your customers. That could be over social channels, in comments, in emails, or in direct outreach, whatever it is. If you see someone engaging with you who you can reach out to, then replying to them, talking to them, conversing with them in some way, and forming a connection is extremely powerful. It's especially important for first interactions.

I'm not going to say, "You need to respond to everything all the time, always." But if you can identify, "This is the first interaction that we've had with this person," and that interaction is positive, it can create loyalty just on its own. That's a lovely way to start scaling up from a small starting point.

All right everyone, hope you've enjoyed this edition of Whiteboard Friday. We'll see you again next week. Take care.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Monday, April 13, 2015

How Content Promotion Works for Blogs Big and Small: Our 11 Favorite Content Distribution Strategies

Did you know: Some bloggers recommend you spend as much time promoting your content as you do writing it.

(Derek Halpern of Social Triggers has an 80/20 split: 80 percent promotion, 20 percent writing.)

Wow, this is an area I fall well short on. I’m so impressed by those who hustle to get their content out there and in front of as many people as possible who can gain value from it.

Over the past few months, I’ve learned a lot from content promotion experts and am starting (slowly) ...

The post How Content Promotion Works for Blogs Big and Small: Our 11 Favorite Content Distribution Strategies appeared first on Social.

Friday, April 10, 2015

Elements of Personalization & How to Perform Better in Personalized Search - Whiteboard Friday

Posted by randfish

From information about your location and device to searches you've performed in the past, Google now has a great deal of information it can use to personalize your search results. In today's Whiteboard Friday, Rand explains to what extent they're likely using that information and offers five ways in which you can improve your performance in personalized search.

For reference, here's a still of this week's whiteboard.

Elements of Personalization Whiteboard

Click on it to open a high resolution image in a new tab!

Video transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about personalization, talking about the elements that can influence it as well as some of the tactical things that web marketers and SEOs specifically can do to make their sites and their content more personalization-friendly.

How personalization works

So, what are we talking about when we're talking about personalization? Well, Google is actually personalizing by a large number of things and probably even a few things I have not listed here that they have not been totally transparent or forthcoming about.

Logged-in visitors

The things that we know about include:

  • Location. Where is the searcher?
  • Device. What type of device and operating system is the searcher using?
  • Browser. We have seen some browser-specific and operating-system-specific variations in results.
  • Search history. Things that you have searched for before, and potentially what you've clicked on in the results.
  • Your email and calendar. If you're using Gmail and Google Calendar, Google will pull in things that it finds on your calendar and data from your email, and potentially show that to you inside of search results when you search for very particular things. For example, if you have an upcoming plane flight and you search for that flight number or around that airline, they may show you that you have an upcoming flight tomorrow at 2:07 p.m. with Delta Airlines.
  • Google+. A lot of folks think of it as dead, but it's not particularly dead, no more so than it has been for the last year and a half or so. Google+ results will still appear at the bottom of your search results very frequently if you're logged in and anyone you follow in your Google+ stream has shared a link or post on Google+ containing the keywords that you've searched for. That's still very broad matching. Those results can appear higher if Google determines that there's more relevancy behind them. You'll also see Google+ data for people you're connected to when you search for them, that kind of thing.
  • Visit history. If you have visited a domain while logged into an account many times in the past, I'm not exactly sure how many times or what sort of engagement they look at precisely, but they may bias those results higher. So they might say, "Gosh, you know, you really seem to like eBay when you do shopping. We're going to show eBay's results for you higher than we would normally show them in an incognito window or for someone who's not logged in or someone who isn't as big an eBay fan as you are."
  • Bookmarks. It's unclear whether they're using just the bookmarks from Google Chrome, the personalization that carries over between Chrome instances, or the fact that bookmarked pages are also pages people visit frequently. There's some discussion about what the overlap is there. It's not too important for our purposes.

Logged-out visitors

If you are logged out, they still have a number of ways of personalizing, and you can still observe plenty of personalization. Your results may be very different from what you see in a totally new browser with no location applied to it, on a different device with different search and visit history.

Now, remember when I say "logged out," I'm not talking about an incognito window. An incognito window would bias against showing anything based on search history or visit history. However, location and device appear to still remain intact. So a mobile device is sometimes going to get different results than a desktop device, and different locations will get different results than other locations. All that kind of stuff.

Now you might ask, "Quantify this for me, Rand." Like let's say we took a sample set of 500 keywords and we ran them through personalized versus non-personalized kinds of searches. What's the real delta in the results ordering and the difference of the results that we see?

Well, we actually did this. It's almost 18 months old at this point, but Dr. Pete did this in late 2013. Using the MozCast data set, he compared results from crawlers, Google Webmaster Tools, personalized (logged-in) searches, and incognito searches. You know what? The delta was very small for personalized versus incognito. I suspect the amount of personalization has probably gone up since, but the correlation was 0.977, where 1.0 would be perfect correlation, so very, very high. We were seeing really similar results for personalized versus incognito, at least 18 months ago.

I suspect that's probably changed, and it'll probably continue to change a little bit. However, I would also say that it probably won't drop very far. I would not expect it to ever go lower than 0.8, maybe even 0.9, just because so much of search is intentional navigation and so much of it can't really be personalized in truly intelligent ways; the results are already the best results. There's not a whole lot of personalization that might be added in beyond potentially showing your Google+ follows at the bottom and biasing things based on your visit history.
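To make that kind of comparison concrete, here is a minimal sketch of how you might quantify the overlap between a personalized and an incognito result set for the same query. The two URL lists and the choice of Spearman rank correlation are illustrative assumptions on my part, not a description of the methodology used in the MozCast study.

```python
# Minimal sketch: quantify how similarly two SERPs order the same results.
# The URL lists below are made up, and Spearman correlation is just one
# reasonable statistic; this is not the exact method used in the study.
from scipy.stats import spearmanr

personalized = ["a.com", "b.com", "c.com", "d.com", "e.com",
                "f.com", "g.com", "h.com", "i.com", "j.com"]
incognito    = ["a.com", "c.com", "b.com", "d.com", "e.com",
                "f.com", "h.com", "g.com", "i.com", "j.com"]

# Compare only the domains that appear in both result sets.
shared = [d for d in personalized if d in incognito]
rank_personalized = [personalized.index(d) for d in shared]
rank_incognito = [incognito.index(d) for d in shared]

rho, _ = spearmanr(rank_personalized, rank_incognito)
print(f"Rank correlation across {len(shared)} shared results: {rho:.3f}")
```

Averaged over a few hundred keywords, a number like this gives you a single figure you can compare against the 0.977 mentioned above.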

Performing better in personalized search

So let's say you want to perform better in personalized search. You have a belief that, hey, a lot of people are getting personalized bias in my particular SERP sets; we're very local-focused, or we're very biased by social kinds of data, or we're seeing a lot of people being biased toward our competitors in their results because of their search history and visit history. What are the things I need to think about?

Get potential searchers to know and love your brand before the query

The answer is you can perform better in personalized search overall by thinking about things like getting potential searchers to know and love your brand and your domain before they ever make the query. It turns out that if you've gotten people to your site previously, through other forms of navigation and through searches, you may very well find yourself higher up in their personalized results as a consequence of the fact that they visited you in the past. We don't know all the metrics that go into that or what precisely Google uses, but we can surmise that there are probably some thresholds around engagement and visit history, how many times, how frequently, within what time frame, all that kind of stuff that goes into that search and visit history.

Likewise, if you can bias people here and rank higher, you may be getting more and more benefit; it can be a snowball effect. If you keep showing up higher in their rankings, they keep clicking you, they keep finding information that's useful, and they don't need to go back to the search results and click somebody else. You're just going to keep ranking in more and more of their queries as they investigate things. For those of you who serve full-funnel content, thinking about people as they do research and educate themselves all the way down to the transaction level with their searches, this is a very exciting opportunity.

Be visible in all the relevant locations for your business

For location bias, you want to make sure that you are relevant in all the locations for your business or your service. A lot of times that means getting registered with Google Maps and Google+ Local for Business -- I can't remember what it's called exactly -- and making sure not only that you are registered with those places but also that your content is helping to serve the areas that you serve. Sometimes that can even mean a larger radius than what Google Maps might give you. You can rank well outside of your specific geographies with content that serves those regions, even if Google is not perfectly connecting you to those locations via your address or your Maps registration, those kinds of things.

Get those keyword targets dialed in

Getting keyword targeting dialed in, this is important all the time. Where a lot of people fall down in this is they think, "Hey, I only need to worry about keyword targeting on the pages that are specifically intended to be search landing pages. I'm trying to get search traffic to these pages." But personalization bias means that if you can get keyword targeting dialed in even on pages that are not necessarily search landing pages, Google might say, "Hey, this wouldn't normally rank for someone, but because you've already earned that traffic, because that person is already biased to your brand, your domain, we're going to surface that higher than we ordinarily would." That is a powerful potential tool in your arsenal, hence it's useful to think about keyword targeting on a page specific level even for pages that you might not think would earn search traffic normally.

Share content on Google+ and connect with your potential customers

Google+ is still, in my opinion, a very valuable place to earn personalized traffic for two reasons. One, of course, you can get people actually over to your site; you may be able to get potential traffic through Google+. Two, you can appear in those search results right at the bottom for anyone who follows you or anyone who's connected to you via email and other Google apps. You may have also noticed that when you email with someone who's using Gmail and has their Google+ account connected, you'll see in the little right-hand corner their last post, or sometimes their last few posts, on Google+. That's also a powerful way to connect with folks and share content as you're emailing back and forth with them.

For brands, that also shows up in search results sometimes. There's the brand box on the right-hand side, kind of like Knowledge Graph, and it'll show your last few posts from Google+. So again, more and more opportunities to be visible if you're doing Google+.

I am also going to surmise that, in the future, Google might do similar things with Twitter. They just finished re-inking the deal where Twitter gives Google access to its full fire hose, and Google starts displaying more and more of that content in search results. So I think it's probably still valuable to think about how that connection might form. It's definitely still valuable to do this directly on Google+, even if you're not getting any traffic from Google+.

Be multi-device friendly and usable

Then the last one, of course, being multi-device friendly and usable. This is something where Moz has historically fallen down, and obviously we're going to be fixing that in the months ahead. I actually hope we fix it after April 21st so we can see whether we really take a hit when they do that mobile thing. I think that would be a noble sacrifice, and then we can see how we perform thereafter and then fix it and see if we can get back in Google's good graces after that.

So given these tactics and some of this knowledge about how personalized search works, hopefully you can take advantage of personalized search and help inform your teams, your bosses, your clients about personalization and the potential impacts. Hopefully we'll be redoing some of those studies, too, to be able to tell you, hey, how much more is personalization affecting SEO over the last 18 months and in the years ahead.

All right, everyone. Thanks again for joining us, and we'll see you again next time for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Wednesday, April 8, 2015

Off with Your Head Terms: Leveraging Long-Tail Opportunity with Content

Posted by SimonPenson

Running an agency comes with many privileges, including a first-hand look at large amounts of data on how clients' sites behave in search, and especially how that behavior changes day-to-day and month-to-month.

While every niche is different and can have subtle nuances that frustrate even the most hardened SEOs or data analysts, there are undoubtedly trends that stick out every so often which are worthy of further investigation.

In the past year, the Zazzle Media team has been monitoring one in particular, and today's post is designed to shed some light on it in hopes of creating a wider debate.

What is this trend, you ask? In simple terms, it's what we see as a major shift in the way results are presented, and it's resulting in more traffic for the long tail.

2014 growth

It's a conclusion supported by a number of client growth stories throughout the last 12 months, clients who have all seen significant growth coming not from head terms, but from an increasing number of URLs earning organic search traffic.

The Searchmetrics visibility chart below is just one example: a brand in the finance space seeing digital growth year-over-year as a direct result of this phenomenon. They've even seen some head terms drop back by a couple of places while still seeing this overall growth.

To understand why this may be happening we need to take a very quick crash course into how Google has evolved over the past two years.

Keyword matching

Google built its empire on a smart system; one which was able to match "documents" (webpages) to keywords by scanning and organizing those documents based upon keyword mentions.

It's an approach that has come to look increasingly simplistic in a "big data" world.

The answer, it seems, is to focus more on the user intent behind that query and get at exactly what it is the searcher is actually looking for.

Hummingbird

The solution to that challenge is Hummingbird, Google's new "engine" for sorting the results we see when we search.

In the same way that Caffeine, the former search architecture, allowed the company to produce fresher results and roll worldwide algorithm changes (such as Panda and Penguin) out faster, Hummingbird is designed to do the same for personalized results.

And while we are only at the very beginning of that journey, from the data we have seen over the past year it seems to be crystallizing into more traffic for deeper pages.

Why is this happening? The answer lies in further analysis of what Google is trying to achieve.

Implicit vs. explicit

To better explain this change let's look at how it is affecting a search for something obvious, like "coffee shop."

Go back two or so years and a search for this may well have presented 10 blue links of the obvious chains and their location pages.

For the user, however, this isn't useful—and the search giant knows it. Instead, they want to understand the user intent behind the query, or the "implicit query," as previously explained by Tom Anthony on this blog.

What that means, in practice, is that a search for "coffee shop" will actually have context, and one of the reasons for wanting you signed in is to allow the search engine to collect further signals from you to help understand that query in detail. That means things like your location, perhaps even your brand preferences, etc.

Knowing these things allows the search to be personalized to your exact needs, throwing up the details of the closest Starbucks to your current location (if that is your favourite coffee).

If you then expand this trend out into billions of other searches you can see how deeper-level pages, or even articles, present a better, more refined option for Google.

Here we see how a result for something like "Hotels" may change if Google knows where you are, what you do for a living and therefore what kind of disposable income you have. The result may look completely different, for instance, if Google knows you are a company CEO who stays in nice hotels and has a big meeting the following day, thus requiring a quiet room so you can get some sleep.

Instead of the usual "best hotels in London" result we get something much more personalised and, critically, something more useful.

The new long-tail curve

What this appears to be doing is reshaping the traditional long-tail curve we all know so well. It is beginning to change shape along the lines of the chart below:

That's a noteworthy shift. With another client of ours, we have seen a 135% increase in the number of pages receiving traffic from search, delivering a 98% increase in overall organic traffic because of it.

The primary factor behind this rise is the creation of the "right" content to take advantage of this changing marketplace. Getting that right requires an approach reminiscent of the way traditional marketing has worked for decades—before the web even existed.

In practice, that means understanding the audience you are attempting to capture and, in doing so, outlining the key questions they are asking every day.

This audience-centric marketing approach is something I have written about previously on this blog and others, as it is critical to understanding that "context" and what your customers or clients are actually looking for.

The way to do that? Dive into data, and also speak to those who may already be buying from or working with you.

Digging into available data

The first step of any marketing process is to collect and process any and all available information about your existing audience and those you may want to attract in the future.

This is a huge subject area—one I could easily spend the next 10,000 words writing about—but it has been covered brilliantly on the more traditional research side by sites like this and this.

The latter of those two links breaks this side of the research process into the two key critical elements you will need to master to ensure you have a thorough understanding of who you are "talking" to in search.

Quantitative research concentrates on the numbers. The focus is on larger data sets and statistical information, as opposed to painting a rich picture of the likes and dislikes of your audience.

Qualitative research focuses on the words and on painting in that "richness": the way your customers speak and explain problems, likes, and dislikes. It's more of a study of human behavior than of stats.

This information can be combined with a plethora of other data sources from CRMs, email lists, and other customer insight pots, but where we are increasingly seeing more opportunity is in the social data arena.

Platforms such as Facebook can give all brands access to hugely valuable big-data insight about almost any audience you could possibly imagine.

What I'd like to do here is explain how to go about extracting that data to form rich pictures of those we are either already speaking to or the very people we want to attract.

There is also little doubt that the amount of insight you have into your audience is directly proportional to the success of your content, hence the importance of this research cycle.

Persona creation

Your data comes to life through the creation of personas, which are designed to put a human face on that data and group it into a small number of shared interest sets.

Again, the point of this post is not to explain how to best manage this process. Posts like this one and this one go over that in great detail—the point here is to go over what having them in place allows you to do.

We've also created a free persona template, which can help make the process of pulling them together much easier.

When you've got them created, you will soon realize that your personas each have very different needs from a content perspective.

To give you an example of that let's look at these example profiles below:

Here we can see three very distinct segments of the audience, and immediately it is easy to see how each of them is looking for a different experience from your brand.

Take the "Maturing Spender" for example. In this fictional example for a banking brand we can see he not only has very different content needs but is actually "activated" by a different approach to the buying cycle too.

While the traditional buyer will follow a process of awareness, research, evaluation and purchase, a new kind of purchase behaviour is materializing that's driven by social.

In this new world we are seeing consumers driven to more impulsive purchases that are often driven by social sharing. They'll see something in their social feeds and are more likely to purchase there and then (or at least within a few days), especially if there is a limited offer on.

Much of this is driven by our increasingly "disposable" culture that creates an accelerated buying process.

You can learn this and other data-driven insights from the personas, and we recommend using a good persona template, then adding further descriptive detail and "colour" to each one so that everyone understands whom it is they are writing for.

It can also work well to align those characters to famous people, if possible, as doing so makes it much easier to scale understanding across whole organizations.

Having them in place and universally adopted allows you to do many things, including:

  • Create focus on the customer
  • Allow teams to make and defend decisions
  • Create empathy with the audience

Ultimately, however, all of this is designed to ensure you have a better understanding of those you want to converse with, and in doing so you can map out the key questions they ask and understand their individual needs.

If you want to dig into this area more then I highly recommend Mike King's post from 2014 here on Moz for further background.

New keyword research – personas

Understanding the specific questions your audience is asking is where the real win can be found, and the next stage is to utilize the info gleaned from the persona process in the next phase: keyword research.

To do that, let's walk through an example for our Happy Couple persona (the first from the above graphic) and see how things play out for this fictional banking brand.

The first step is to gather a list of tools to help unearth related keywords. Here are the ones we use, each covered below:

There are many more that can help, but it is very easy to complicate the process with data, so we like to limit that as much as possible and focus on where we can get the most benefit quickly.

Before we get into the data mining process, however, we begin with a group brainstorm to surface as many initial questions as possible.

To do this, we will gather four people for a quick 15-minute stand-up conversation around each persona. The aim is to gather five questions from which the main research phase can be constructed.

Some possibilities for our Happy Couple example may include:

  • How much can I borrow for a mortgage?
  • How do I buy a house?
  • How large a deposit do I need to buy a house?
  • What is the best regular savings account?

From here we can use this framework as a starting point for the keyword research and there is no better place to start than with our first tool.

SEMRush

For those unfamiliar with this tool it is designed to make it easier to accurately assess competitor and market opportunity by plugging into search data. In this example we will use it to highlight longer-tail keyword opportunity based upon the example questions we have just unearthed.

To uncover related keyword opportunity around the first question we type in something similar to the below:

This will highlight a number of phrases related to our question:

As you can see, this gives us a lot of ammunition from a content perspective to enable us to write about this critical subject consistently without repeating the same titles.

Each of those long-tail terms can be analyzed even deeper by clicking on it individually, which will generate a further list of even more specifically related terms.
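If you export the related-phrases report to CSV, a short script can pull out the longer-tail entries for you. The sketch below is hypothetical: it assumes the export has columns named "Keyword" and "Search Volume", so adjust those names to match whatever your actual export contains.

```python
import csv

# Minimal sketch: list long-tail phrases (four or more words) from an
# exported related-keywords CSV, sorted by search volume.
# The column names "Keyword" and "Search Volume" are assumptions; rename
# them to match the real export.
def long_tail_phrases(path, min_words=4):
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            phrase = row["Keyword"].strip()
            if len(phrase.split()) >= min_words:
                rows.append((phrase, int(row["Search Volume"])))
    return sorted(rows, key=lambda r: r[1], reverse=True)

for phrase, volume in long_tail_phrases("related_keywords.csv")[:20]:
    print(f"{volume:>7,}  {phrase}")
```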

Soovle

The next stage is to use this vastly underrated tool to further mine user search data. It allows you to gather regular search phrases from sites such as YouTube, Yahoo, Bing, Answers.com and Wikipedia in one place.

The result is something a little like the below. It may not be the prettiest but it can save a lot of time and effort as you can download the results in a single CSV.

Google Autocomplete / KeywordTool.io

There are several ways you can tap into Google's Autocomplete data, and with an API in existence there are a number of tools making good use of it. My current favourite is KeywordTool.io, which actually has its own API and mashes up data from Google, YouTube, Bing, and the Apple App Store.

The real value is in how it spits out that data, as you are able to see suggestions by letter or number, creating hundreds of potential areas for content development. The App Store data is particularly useful, as you will often see greater refinement in search behavior here and as a result very specific 'questions' to answer.

A great example for this would be "how to prequalify yourself for a mortgage," a phrase which would be very hard to surface using Google Autocomplete tools alone.
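If you want to reproduce that letter-by-letter expansion yourself, the sketch below appends each letter of the alphabet to a seed question and collects Google's autocomplete suggestions. It relies on the unofficial suggest endpoint that many of these tools use behind the scenes; it's undocumented and may change or rate-limit you, so treat it as an illustration rather than a supported API.

```python
import json
import string
import time
import urllib.parse
import urllib.request

# Minimal sketch of letter-by-letter autocomplete expansion.
# The suggest endpoint is unofficial and undocumented; the response format
# may change, and heavy use is likely to be rate-limited or blocked.
SUGGEST_URL = "https://suggestqueries.google.com/complete/search?client=firefox&q="

def suggestions(query):
    url = SUGGEST_URL + urllib.parse.quote(query)
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8", errors="replace"))
    return data[1]  # response shape: [query, [suggestion, suggestion, ...]]

seed = "how much can i borrow for a mortgage"
found = set(suggestions(seed))
for letter in string.ascii_lowercase:
    found.update(suggestions(f"{seed} {letter}"))
    time.sleep(1)  # be polite; this endpoint is not meant for bulk use

for phrase in sorted(found):
    print(phrase)
```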

Forum searches

Another fantastic area worthy of research focus is forums. We use these to ask our peers and topic experts questions, so spending some time understanding what is being asked within the key ones for your market can be very helpful.

One of the best ways of doing this is to perform a simple advanced Google search as outlined below:

"keyword" + "forum"

For our example we might type something like "mortgage" + "forum":

This then presents us with more than 85,000 results, many of which will be questions that have been asked on this subject.

Examples include:

  • First-time buyer's mortgage guide
  • Getting a Mortgage: Boost your Mortgage Chances
  • Mortgage Arrears: What help is available?
  • Are Fixed Rate Mortgages best?

As you can see, this also opens up a myriad of content opportunities.
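If you'd rather mine those forum questions programmatically than scroll through the SERP, one option is to run the same kind of query through Google's Custom Search JSON API and collect the result titles. This is a sketch under the assumption that you've set up a Custom Search Engine and have an API key and engine ID (the placeholder values below are yours to replace); the free quota is small, so it only suits modest keyword lists.

```python
import json
import urllib.parse
import urllib.request

# Minimal sketch: fetch result titles for a query via the Google Custom
# Search JSON API. API_KEY and CX are placeholders you must replace with
# your own credentials.
API_KEY = "YOUR_API_KEY"
CX = "YOUR_SEARCH_ENGINE_ID"

def search_titles(query, num=10):
    params = urllib.parse.urlencode(
        {"key": API_KEY, "cx": CX, "q": query, "num": num})
    url = f"https://www.googleapis.com/customsearch/v1?{params}"
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return [item["title"] for item in data.get("items", [])]

for title in search_titles('"mortgage" "forum"'):
    print(title)
```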

Competitive research

Another way of laterally expanding your reach is to look at the content your best competitors are producing.

In this example we will look at two ways of doing that, firstly by analyzing top content and then by looking at what those competitors rank for that you don't.

Most shared content

There are several tools that can give you a view of the most-shared content, but my personal favourites are Buzzsumo and the awesome new ahrefs Content Explorer.

Below, we see a search for "mortgages" using the tool, and we are presented with a list of content on that subject sorted by "most shared." The result can be filtered by time frame, language, or even by specific domain inclusions or exclusions.

This data can be exported and titles extracted to be used as the basis of further keyword research around that specific topic area, or within a brainstorm.

For example, I might want to look at where the volume is from an organic search perspective for something like "mortgage paperwork."

I can type this term into SEMRush and search through related phrases for long-tail opportunity on that specific area.

Competitor terms opportunity

A smart way of working out where you can gain further market share is to dive a little deeper into your key competitors and understand what they rank for and, critically, what you don't.

To do this, we return to SEMRush and make use of a little-publicized but hugely useful tool within the suite called Domain Comparison Tool.

It allows you to compare two domains and visualize the overlap they have from a keyword ranking perspective. For this example, we will choose to compare two UK banks – Lloyds and HSBC.

To do that simply type both domains into the tool as below:

Next, click on the chart button and you will be presented with two overlapping circles, representing the keywords that each domain ranks for. As we can see, both rank for a similar number of keywords (the overall number affects the size of the circles) with some overlap but there are keywords from both sides that could be exploited.

If we were working for HSBC, for instance, it would be the blue portion of the chart we would be most interested in for this scenario. We can download a full list of keywords that both banks rank for, and then isolate those that HSBC doesn't rank for.

You can see in the snapshot below that the data includes columns on where each site ranks for each keyword, so sorting is easy.

Once you have the raw data in spreadsheet format, sort by the "HSBC" column so the terms at the top are those HSBC doesn't rank for, and then strip away the rest. This leaves you with the opportunity terms that you can create content to cover, and these can be prioritized by search volume, or by topic area if there are specific sub-topics that are more important than others within your wider plan.
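When the keyword list runs into the thousands, that sort-and-strip step is easier to script. Here's a minimal sketch using pandas; the column names ("Keyword", "Lloyds rank", "HSBC rank", "Search Volume") are assumptions about the export, so map them onto whatever the downloaded file actually uses.

```python
import pandas as pd

# Minimal sketch: keywords the competitor ranks for that we don't,
# prioritized by search volume. Column names are assumptions; adjust them
# to match the real export from the domain comparison report.
df = pd.read_csv("lloyds_vs_hsbc_keywords.csv")

gap = df[df["HSBC rank"].isna() & df["Lloyds rank"].notna()]
gap = gap.sort_values("Search Volume", ascending=False)

top = gap[["Keyword", "Lloyds rank", "Search Volume"]].head(25)
print(top.to_string(index=False))
```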

Create the calendar

By this point in the process you should have hundreds, if not thousands, of title ideas, and the next job is to ensure that you organise them in a way that makes sense for your audience and also for your brand.

Content flow

Doing this properly requires not just knowledge of your audience via extensive research, but also a sound content strategy.

One of the biggest rules is something we call content flow. In a nutshell, it is the discipline of creating a content calendar that delivers variation over time in a way that keeps the audience engaged.

If you create the same content all of the time it can quickly become a turn-off, and so varying the type (video, image-led piece, infographics, etc.) and read time, or the amount of time you put into creating the piece, will produce that "flow."

This handy tool can help you sense check it as you go.

Clearly your "other" content requirements as part of your wider strategy will need to fit into this strategy, too. The vast majority of the output here will be article-focused, and it is critical to ensure that other elements of your strategy are also covered to round out your content output.

This free content strategy toolkit download gives you everything you need to ensure you get the rest of it right.

The result

This is a strategy we have followed for many of our search-focused clients over the last 18 months, and we have some great real-world case studies to prove that it works.

Below you can see how just one of those has played out in search visibility improvement terms over that period as proof of its effectiveness.

All of that growth directly correlates with a huge growth in the number of URLs receiving traffic from search and that is a key metric in measuring the effectiveness of this strategy.

In this example we saw a 15% monthly increase in the number of URLs receiving traffic from search, with organic traffic up 98% year-on-year despite head terms staying relatively static.

Give it a go for yourself as part of your wider strategy and see what it can do for your brand.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!