Troubleshooting Uncommon SEO Issues

Every Milwaukee SEO specialist has had their turn evaluating a website and trying to determine what issues are causing fundamental problems with its search engine optimization. Whether it is a large drop in rankings due to a Google penalty or a decrease in traffic and/or conversions, your customers are going to want to know what the problem is and when you are going to fix it.

While some issues, such as on-page SEO, might be very apparent, deeper issues may not be. When issues don’t readily present themselves after you perform a detailed site analysis, you may be left wondering where to turn next.

Author Mark Munroe offers up some advice on this exact topic in his article “SEO Disasters: Preventing the Unthinkable,” posted on Search Engine Land.

Reality Is Stranger Than Fiction

Some cases are so strange that you would have trouble believing them:

  • Imagine a site that has built a nice, friendly m.dot implementation. However, the mobile crawler was not seeing it. Apparently, some old cloaking code served up the old “dumb” phone/feature phone version of the site to the spiders. Yes, they went to the trouble of building a great mobile site and then hid it from the spiders.
  • Another site, a software service, had an opt-out page to cancel their service. Somehow, an entire content section of the site had rel="canonical" links pointing back to the opt-out page (ironically strengthening the one page on the site they did not want users to actually see).
  • On one site we looked at, there was some cloaking logic serving bad rel=canonicals to the crawler. If you looked at the page source from your browser, the rel=canonicals looked fine. To catch this, you had to compare the rel=canonicals in Google’s cache to what was on the user’s page. (A quick script like the sketch after this list can compare what different user agents are served.)
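
That last example is worth automating. Here is a minimal sketch, assuming the requests library and a placeholder URL: fetch the same page as a desktop browser and as Googlebot’s smartphone crawler, then compare the rel=canonical each one is served. A site that cloaks by IP address rather than user agent would slip past this check, which is exactly why Munroe’s cache comparison still matters.

    # Fetch the same URL as a desktop browser and as Googlebot's smartphone
    # crawler, then compare the rel=canonical each one is served.
    # NOTE: the URL is a placeholder; the regex is naive (it assumes rel=
    # appears before href= inside the tag) -- fine for a spot check only.
    import re
    import requests

    URL = "https://www.example.com/some-page/"

    USER_AGENTS = {
        "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "googlebot-smartphone": (
            "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
            "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 "
            "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
            "+http://www.google.com/bot.html)"
        ),
    }

    CANONICAL_RE = re.compile(
        r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
        re.IGNORECASE,
    )

    def canonical_seen_by(user_agent):
        html = requests.get(URL, headers={"User-Agent": user_agent}, timeout=10).text
        match = CANONICAL_RE.search(html)
        return match.group(1) if match else "(no canonical found)"

    results = {name: canonical_seen_by(ua) for name, ua in USER_AGENTS.items()}
    for name, canonical in results.items():
        print(f"{name:22s} -> {canonical}")
    if len(set(results.values())) > 1:
        print("WARNING: canonical differs by user agent -- possible cloaking bug.")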

You can’t make this stuff up!

Why Do So Many Things Break?

How can these things possibly happen?

  • Complexity and “if” statements. Consider the meta robots “noindex” tag:
    • Some content you may want indexed, and some you may not. In that case, there is often logic that is executed to determine whether or not to insert the “noindex” tag. Any time there is logic and “if” statements, you run a risk of a bug.
    • Typically, sites have staging environments or subdomains they don’t want to be indexed. In this case, logic is needed to detect the environment — another opportunity for a bug if that logic gets pushed live (the sketch after this list shows how easily such a check can misfire).
    • Sometimes, developers copy templates to build a new page. That old template may have had a noindex.
  • Frequent releases. Most websites have at least weekly updates, as well as patch releases. Every update presents a risk.
  • CMS updates. Manual updates to content via CMS can completely de-optimize a page. For sites that use WordPress, it is very easy to accidentally noindex a page if you use the Yoast plugin. In fact, I know one very prominent site that noindexed their most visited blog post.
  • Competing interests. Many hands with competing interests all have the potential to muck up your SEO. Whether it’s designers deleting some important text, product managers deleting navigation, or engineers using AJAX and hiding important content, the risk is ever-present that something can go wrong.
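
To make the “if statement” risk concrete, here is a hypothetical, deliberately simplified helper of the kind that decides whether to emit a noindex tag. Nothing here comes from Munroe’s article; it just shows how environment-detection logic can misfire:

    # Hypothetical (simplified) helper deciding whether a page gets a noindex tag.
    # Intent: noindex everything on staging hosts and any "thin" page.
    def robots_meta(hostname, page_is_thin):
        # The bug: substring matching. A production hostname that happens to
        # contain "staging" gets noindexed along with the real staging site.
        is_staging = "staging" in hostname
        if is_staging or page_is_thin:
            return '<meta name="robots" content="noindex, nofollow">'
        return ""  # no tag: page is indexable

    print(robots_meta("staging.example.com", False))        # intended: noindex
    print(robots_meta("www.example.com", False))            # intended: indexable
    print(robots_meta("www.stagingpost-hotel.com", False))  # oops: silently noindexed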

What Happened To My Site?

Most websites do not have a good handle on what updates and changes have been made to their website (and when). Sure, there might be some well-written release notes (although cryptic seems more common), but that won’t tell you exactly what changed on a page.

How can you research a traffic drop if you don’t know what has changed? The current modus operandi is to go to the Internet Archive and hope they have a version of the page you are interested in.
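
That lookup step is easy to script, by the way. This sketch queries the Internet Archive’s public availability endpoint for the snapshot closest to a given date; the page URL and timestamp are placeholders:

    # Script the "check the Wayback Machine" step with the Internet Archive's
    # public availability API. The page URL and timestamp are placeholders.
    import requests

    params = {"url": "www.example.com/some-page/", "timestamp": "20240101"}
    resp = requests.get("https://archive.org/wayback/available", params=params, timeout=10)
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        print("Closest snapshot:", closest["url"], "captured", closest["timestamp"])
    else:
        print("No archived copy of that page.")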

This is not just an SEO issue. It impacts conversion, monetization, UX metrics — in fact, all website KPIs. How can you understand what’s causing a shift in your KPIs if you don’t know exactly what changed and when?

SEO Testing

SEO testing is also a big problem.

Let’s say there are 10 important page templates for a site and 20 things you want to verify on each page template. That’s 200 verifications you must do with every release.
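
One way to keep those 200 verifications manageable is to treat the template list and the check list as data and loop over both. The URLs and the three sample checks below are placeholders for your own:

    # Treat templates and checks as data: one representative URL per template,
    # one small function per check. URLs and checks here are placeholders.
    import requests

    TEMPLATES = {
        "home":     "https://www.example.com/",
        "category": "https://www.example.com/widgets/",
        "detail":   "https://www.example.com/widgets/blue-widget/",
        # ... one entry per important page template
    }

    def returns_200(response):
        return response.status_code == 200

    def has_title(response):
        return "<title>" in response.text.lower()

    def has_no_noindex(response):
        # Crude: this also matches the word "noindex" in visible copy.
        return "noindex" not in response.text.lower()

    CHECKS = [returns_200, has_title, has_no_noindex]  # ... up to 20 per template

    for name, url in TEMPLATES.items():
        response = requests.get(url, timeout=10)
        for check in CHECKS:
            print(f"{name:10s} {check.__name__:16s} {'PASS' if check(response) else 'FAIL'}")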

These tests are not black and white. The existence of a noindex tag on a page is not necessarily a problem. Just because a page has a title doesn’t mean it has the right title, or that it hasn’t changed. Just because there is a rel=canonical doesn’t mean it is pointing to the right destination. It’s easy to write a test script that tells you a title is missing. It is difficult to tell that the title has had its keywords removed.
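
That is why presence checks only get you so far. A more useful pattern is to diff each release against a stored known-good baseline; the baseline file format below is my own assumption, not anything from the article:

    # Diff the live title against a saved known-good baseline instead of just
    # checking that a title exists. Assumed baseline format (my invention):
    # {"home": {"url": "https://www.example.com/", "title": "Example Widgets"}}
    import json
    import re
    import requests

    TITLE_RE = re.compile(r"<title>(.*?)</title>", re.IGNORECASE | re.DOTALL)

    def live_title(url):
        html = requests.get(url, timeout=10).text
        match = TITLE_RE.search(html)
        return match.group(1).strip() if match else None

    with open("seo_baseline.json") as f:
        baseline = json.load(f)

    for name, expected in baseline.items():
        title = live_title(expected["url"])
        if title != expected["title"]:
            print(f"{name}: title changed: {expected['title']!r} -> {title!r}")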

As you can see from the article, sometimes the issue may run much deeper than surface-level SEO. It is always best to know what changes and updates are being made to your website (even if you are not a part of all of them). That way, you will have a starting point for your analysis when an issue arises.

In conclusion, search engine optimization is an ever-fluid profession. To really know how to best optimize your clients’ websites, you will need to stay on top of the most recent Google trends and changes and make modifications as needed. If you can stay ahead of the curve, you can keep your clients happy for a long time to come!