Posted by Danny Dover

SEO is a tricky skill. It is based on computer science but requires a certain level of artistry. It is a mixture of working with an enormous number of search-engine-operated machines and an even larger group of internet users. While I don't believe the art can be taught, I do believe the science can.

Below are the best practice methods we use at SEOmoz for our consulting work. They provide a tested foundation for crafting a solid, search-engine-optimized website.

Note: I can't speak for the entire industry. The list below reflects the beliefs of those of us at SEOmoz. It is based on new data we have obtained in the field and on a new rankings-correlation report written by our team member Ben Hendrickson after statistically analyzing search engine rankings and our index of the World Wide Web.


Title Tag Format

Best Practice:

Primary Keyword – Secondary Keywords | Brand
Or
Brand Name | Primary Keyword and Secondary Keywords

Reasoning:

We recently finished our first round of intensive search engine ranking factors correlation testing, and the results were relatively clear. If you are trying to rank for a very competitive term, it is best to include the keyword at the beginning of the title tag. If you are targeting a less competitive term and branding can help make a difference in click-through rates, it is best to put the brand name first. With regard to separator characters, we prefer pipes for aesthetic value, but hyphens, en dashes, em dashes, and minus signs are all fine.
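
To make the two formats concrete, here is a minimal sketch of each in HTML; the keywords and brand name are hypothetical placeholders.

    <!-- Competitive term: lead with the primary keyword -->
    <title>Blue Widgets - Discount Widget Parts | ExampleBrand</title>

    <!-- Less competitive term where branding can lift click-through rates -->
    <title>ExampleBrand | Blue Widgets and Widget Parts</title>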

The Usefulness of H1 Tags

Best Practice:

H1s are important for users but not necessarily for search engines anymore.

Reasoning:

Our correlation data shows that H1 tags do not carry the ranking weight we had originally presumed. We think they are very important for establishing information hierarchy and helping with algorithmically determined semantics, but they seem to be less important for search engine optimization. We recommend them on all pages as an aid for users, but we don't stress their importance when other opportunities for SEO improvement are available.
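
A minimal sketch of the heading hierarchy we mean, with hypothetical page content:

    <!-- One H1 describing the page as a whole -->
    <h1>Blue Widgets Buying Guide</h1>
    <!-- H2s (and below) establish the information hierarchy for users -->
    <h2>Choosing the Right Size</h2>
    <h2>Caring for Your Widget</h2>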


The Usefulness of Nofollow


Best Practice:

We recommend using rel=nofollow to thwart would-be spammers of user-generated content. We also recommend using its removal as an incentive for users to become active contributors. (At SEOmoz, we remove the nofollow from profile links after the user has earned 100 mozPoints.)

We DO NOT recommend using nofollow for PageRank sculpting anymore.

Reasoning:

The recent announcement from Matt Cutts changed our policy. We think the new policy detracts from the overall health of the internet, but we feel obligated to go along with it to make sure our customers get the best rankings in the search engines. We have theories and tests running to help determine whether this is, in fact, the best course of action to recommend, and we're also looking into alternatives for sculpting the flow of link juice, such as complex JavaScript redirection systems, iframes, etc.

Nofollow can still be useful for preventing comment spam (its original purpose), but it is no longer useful as an aid for sculpting link flow within your information architecture.
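
As a sketch, here is how a link in user-generated content might be nofollowed, and the same link once the user has earned enough trust to have the attribute removed (the URL is hypothetical):

    <!-- Untrusted user-generated link: nofollowed to thwart would-be spammers -->
    <a href="http://www.example.com/" rel="nofollow">a user-submitted link</a>

    <!-- After the user earns trust (e.g. 100 mozPoints), the nofollow is removed -->
    <a href="http://www.example.com/">a user-submitted link</a>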

The Usefulness of the Canonical Tag

Best Practice:

The canonical tag is still young and is only useful as a hint to the search engines to help prevent duplicate content. It is not the silver bullet that webmasters are looking for (nor the druids, for that matter).

Reasoning:

As with the nofollow attribute when it was first released, it will take a while before we can measure the canonical tag's effects, and the search engines are likely still tweaking how they treat it. We know from public statements that this tag depletes link juice in the same way 301 redirects do, but it is too soon to judge its importance and value. When possible, we still recommend architectural solutions to prevent duplicate content (potentially employing solutions like the hash (#) in URLs).
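
For reference, the tag itself is a single line placed in the <head> of the duplicate page, pointing at the preferred URL (the URLs here are hypothetical):

    <!-- In the <head> of a duplicate such as www.example.com/product?sessionid=123 -->
    <link rel="canonical" href="http://www.example.com/product" />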

The Use of Alt text with Images

Best Practice:

We recommend including alt text for all images on all publicly accessible pages. We also suggest adding images with good alt text to pages targeting competitive rankings.

Reasoning:

We have two reasons for this. First, we believe that all users, regardless of limitations, should be able to use the internet; this includes people with disabilities and machines trying to use semantics to make information more useful. Second, our correlation data showed that alt text was a much more important ranking metric than we would have thought. While correlation is not causation, it seems unwise to ignore the data, and we are therefore recommending good images with good alt text for pages seeking to rank on competitive queries.
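
A minimal sketch of what we mean by good alt text; the file name and description are hypothetical:

    <!-- Descriptive alt text helps users with screen readers and gives engines semantic context -->
    <img src="/images/blue-widget.jpg" alt="Blue widget with chrome trim, side view" />
    <!-- Avoid keyword stuffing such as alt="blue widgets cheap blue widgets buy blue widgets" -->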


The Use of the Meta Keywords tag

Best Practice:

If it is not a problem to let your competitors know your keywords and you are trying to rank highly in Yahoo!, the meta keywords tag can be useful. Note: this is different from what we have recommended in the past.

Reasoning:

We recently updated our policy on this after DJ Paisley sent us a rather convincing e-mail (and we subsequently re-tested). Initially, we had suggested not using the meta keywords tag at all. Our argument was that the tag was abused in the early days of the internet and was no longer useful: we thought it was not used by the modern search engines and simply provided a way for competitors to automate the process of competitive analysis.
After running some tests (they are still running, so this data is preliminary), we have seen that Yahoo! does indeed use this tag for ranking, although it is a minor factor. That said, we believe Google and Bing ignore the tag and it does not affect their rankings. We are also still concerned about the competitive-intelligence aspect, and that concern, combined with Yahoo!'s smaller market share and the seemingly low value the tag provides even for that engine, keeps us from recommending it outright for now.
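
For completeness, the tag looks like this (the keywords are hypothetical); note that anything listed here is visible to any competitor who views your source:

    <meta name="keywords" content="blue widgets, widget parts, discount widgets" />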


The Use of Parameter-Driven URLs (e.g. www.example.com/product?param=1&param=2)


Best Practice:

We don't recommend using them. If they are absolutely necessary (due to something like an established CMS configuration), we recommend no more than two parameters.

Reasoning:

The search engines have been very clear on this: their crawlers can parse and crawl parameter-driven URLs, but it is much more difficult and often leads to duplicate content issues. This is backed up by our correlation data, which showed that pages with static URLs tend to rank higher.
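
A sketch of the difference as it shows up in internal links, with hypothetical URLs; the static, keyword-bearing form is the one we recommend:

    <!-- Parameter-driven URL: crawlable, but harder to parse and prone to duplicate content -->
    <a href="http://www.example.com/product?cat=4&amp;id=12">Blue Widgets</a>

    <!-- Static equivalent (requires URL rewriting on the server) -->
    <a href="http://www.example.com/products/blue-widgets/">Blue Widgets</a>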

The Usefulness of Footer Links

Best Practice:

Use footer links sparingly. We recommend no more than 25 relevant internal navigational links.

Reasoning:

We have seen many examples of Google penalties tied directly to abusive footer links (penalties that "magically" lifted once the keyword-stuffed anchor text was removed from the footer). Manipulative links in footers are easily detected algorithmically and appear to trigger automated penalties from Google.
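
A hedged sketch of the kind of footer we have in mind: a short list of relevant internal links with descriptive (not keyword-stuffed) anchor text. The pages are hypothetical.

    <div id="footer">
      <a href="/about/">About Us</a>
      <a href="/contact/">Contact</a>
      <a href="/blog/">Blog</a>
      <a href="/privacy/">Privacy Policy</a>
      <!-- keep the total well under 25 relevant internal links -->
    </div>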

   
The Use of JavaScript and Flash on Websites

Best Practice:

We do not recommend using JavaScript or Flash for any navigation important to search engines.

Reasoning:

Although we believe the search engines can crawl JavaScript and Flash in a limited capacity, we choose not to take the risk. Their ability to parse these formats is inferior to their ability to parse HTML, and coding important navigation in them can lead to lower search engine rankings.
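
A sketch of the distinction, with hypothetical pages; the plain HTML anchor is the form we recommend for any navigation that matters to search engines:

    <!-- Recommended: a plain HTML anchor any crawler can follow -->
    <a href="/products/">Products</a>

    <!-- Not recommended for important navigation: the destination is only reachable through script -->
    <span onclick="window.location='/products/'">Products</span>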

   
The Use of 301 Redirects

Best Practice:

We recommend 301 redirects as the best way to redirect webpages but warn that they do have disadvantages.

Reasoning:

Our tests, and public statements from search engineers, have made us reasonably certain that 301 redirects deplete between 1% and 10% of link juice. This is an acceptable cost when one URL must lead to another and other options are unavailable. It is also much better than the alternatives (JavaScript and 302 redirects), which pass very little juice, if any at all. Meta refreshes, in our testing, appear to function similarly to 301s in terms of juice/rank-passing ability; however, since the engines recommend one over the other, we do too.
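
The 301 itself is issued at the server level as an HTTP status code, so there is no HTML to show for it. For comparison, this is the meta refresh alternative discussed above, which we would only fall back on when a server-side 301 is unavailable (the URL is hypothetical):

    <!-- Zero-second meta refresh; appears to pass juice similarly to a 301, but a true server-side 301 remains the recommended option -->
    <meta http-equiv="refresh" content="0; url=http://www.example.com/new-page/" />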


Blocking pages from Search Engines

Best Practice:

The Meta Robots tag (noindex, follow) is generally a better option than robots.txt. Robots.txt files are useful but should be used sparingly and only if a meta robots tag is not an option.

Reasoning:

Robots.txt files do stop search engine crawlers from visiting a web page, but they do not keep that page from being indexed (see DaveN's recent post on this topic). They also create a black hole for link juice, as the engines cannot crawl blocked pages to see the links on them and pass that juice along. Thus, we strongly prefer the meta robots tag with the noindex and follow parameters: it keeps pages out of the search engine indices AND allows link juice to be passed. 🙂
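
The tag we are recommending looks like this, placed in the <head> of any page you want kept out of the indices while still passing link juice through its links:

    <!-- Keeps the page out of the index but lets crawlers follow its links -->
    <meta name="robots" content="noindex, follow" />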

Google SearchWiki's Effect on Rankings

Best Practice:

We don't recommend spending any time or resources on SearchWiki.

Reasoning:

We think it has very little effect, if any, on rankings. We have not seen any evidence of it affecting global results. We think it might help Google identify some spammy queries, but it is likely just another data source Google is using to separate itself from its competition.

The Effect of Negative Links from "Bad Link Neighborhoods"

Best Practice:

Link neighborhoods are a real thing, but the effect of links from bad neighborhoods on good neighborhoods is minimal as long as the links are not reciprocal.

Reasoning:

We have been able to gain an excellent perspective on the internet through the creation and manipulation of Linkscape. We found it was very easy to algorithmically detect neighborhoods (or hubs). We think it is highly likely that the search engines use these to establish subject authorities.

That said, the internet is a very messy place. Legitimate websites receive spammy links all of the time (SEOmoz itself receives hundreds every month). The engines know about this phenomenon and take it into account. It is still possible to get negatively affected by bad links but the links must make up a large percentage of the total inbound links for a given site and the site must be relatively poorly linked-to by legitimate, trusted resources.


The Importance of Traffic on Rankings

Best Practice:

The metric of visitors to a given site is not used to help determine rankings.

Reasoning:

While traffic and rankings correlate (websites with more visitors do usually have higher rankings), neither causes the other. It is simply that more popular websites receive more links, and more links cause higher rankings. We have heard statements from search engineers that "time on page" was used for a short time as a ranking metric but turned out to be a bad signal. Instead, modern search engines prefer the "absence of a click" on a search engine results page as a better metric for detecting when their results need upgrading.

We also have reason to believe that the "unique visitors" metric (as opposed to total visits) from web analytics software is fundamentally flawed. It, like Alexa.com, is rarely accurate but frequently massaged. Our suggestion is to trust raw visits rather than unique visitor counts when comparing the traffic sent to a site from various sources or comparing traffic growth from month to month.


If you have any other best practices that you think are worth sharing, feel free to post them in the comments. This post is very much a work in progress. As always, feel free to e-mail me or send me a private message if you have any suggestions on how I can make my posts more useful. If that's not your style, feel free to contact me on Twitter (DannyDover). Thanks!

