Are Site-wide H1 Tags in WordPress Good or Bad?

Questions from Readers

The great thing about writing our book, WordPress 3.0 Search Engine Optimization, is we get to hear from all those readers who have taken our material and put it to work in the field. Today, we’ve got a fascinating question from Robert, who asks that question we confront every day in one way or another: Just how far should I trust Google’s sophistication?

Hi Michael,

I’m currently reading your Packt book on WordPress SEO, and I have a quick question about HTML5 and the way it uses header tags. Your book says to use only one H1 tag per page, which makes sense. However, HTML5 advocates multiple H1 tags per page, as long as each is contained in a separate section/header.

Worse yet, the first H1 tag on a page is usually a wrapper around the home-link logo and contains the same meaningless title text on every page. You can see a typical example at CSS3maker.com:

<header>

<h1 id="logo"><a href="index.html" title="CSS 3.0 Maker">Css 3.0 Maker</a></h1>

</header>

Most SEO bloggers assume single H1 tags are a thing of the past. Based on your experience, has there been any evidence that Google/Yahoo interpret HTML5 content any differently than HTML/XHTML?

If not, should I remove the header and h1 tags around my logo anchor tag? My site looks like the CSS3maker code above. And like them, I don’t have anything else in my header, so if I remove the H1 tag, wouldn’t I also just scrap the header tag? I have a meaningful H2 tag in my content section, which could be elevated to an H1 tag.

Thanks,
Robert

BTW, I’m really enjoying your book.

 

Robert,

This may be a cop-out…but does this help?

I think Google is tuned in enough to ignore site-wide H1 tags. One of my philosophies is "packaging": make it so brain-dead easy for a search engine that it can't POSSIBLY get confused. We are on-page nerds when it comes to that stuff; most of the pages we create are pretty close to perfect, at least on the page.

Do we, in our SEO business, remove site-wide H1 tags around logos and site names in the header? Absolutely we do, but I don't think it's the kiss of death if you don't. Remember one thing: Google has to tune its algorithm so that it doesn't punish sites for small mistakes; otherwise, it would punish 80% of the web or more.

I am very glad you are enjoying the book!

Michael
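For readers making the same change Robert describes, one common WordPress approach is to reserve the H1 for the front page only and demote the logo wrapper everywhere else. The snippet below is only a sketch of a typical theme's header.php; the IDs and markup are illustrative, so adapt them to your own theme.

<?php // header.php (sketch): output an H1 around the logo only on the front page ?>
<header>
  <?php if ( is_front_page() ) : ?>
    <h1 id="logo"><a href="<?php echo esc_url( home_url( '/' ) ); ?>"><?php bloginfo( 'name' ); ?></a></h1>
  <?php else : ?>
    <div id="logo"><a href="<?php echo esc_url( home_url( '/' ) ); ?>"><?php bloginfo( 'name' ); ?></a></div>
  <?php endif; ?>
</header>

With the logo demoted to a div on interior pages, the meaningful H2 in the content area can be promoted to the page's single H1, which matches the advice in the book.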

Buy the Book Today at Amazon

From the WordPress SEO book

Book Excerpt: Creating Keyword-Rich Content

Our book, WordPress Search Engine Optimization (now in second edition!) is out on the stands of all the upscale local bookstores and online retailers in your neighborhood. But why buy before you try? Here’s one page out of the whole volume to give you a taste of the SEO tips and strategies that you’re missing. You can buy the book at Amazon.

Creating Keyword-Rich Content

It may seem unnatural to focus on a keyword when writing content for your website, but it is absolutely essential to write your pages in a manner that will get them ranked highly in the search engines. No matter how well-written your content is, if it doesn’t contain the keywords and phrases that people use to search for your product or service, it won’t show up in the search engine results pages and no one will ever see it.

For this reason, the first step to creating content for your site is to begin with the right keywords. We learned in Chapter 3 how to research keywords, find the big-money keywords and key phrases, and organize and prioritize them. With sound keyword research, writing flows naturally: start with the high-volume, high-value keywords and write high-quality content for your site that focuses on those keywords.

It’s best to target one keyword phrase or group of phrases per content page. Recall that keyword overlap can give us a close group of keywords such as “Miami AC” and “Miami AC repair.” In any case, keep your content very focused on a small group of words.

Whichever phrase or phrases you are targeting should be used several times within the body content. Make sure to include the keyword phrase in the title and headings, as well as a few times throughout the actual content. It is especially important to include your keyword phrase near the beginning of your content. Most search engines tend to give more weight to words and phrases that appear in the first few paragraphs of a web page. Remember that search engines determine the subject of your page from the words you use on the page. If you don't use the keyword phrase often enough, your page will not rank for that phrase.

What this means is that if your page is selling book covers and you are targeting the keyword phrase “buy book covers,” that phrase needs to appear on the page in several places. First of all, it must be included in the title and somewhere in the first paragraph of the copy. In addition, you should try to work it into the rest of the copy at least two to three more times and into the headings that separate different sections of copy. You can also add the keyword phrase to the alt text for any photos that appear on the page.
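To make the placement concrete, here is a bare-bones sketch of how the "buy book covers" phrase might be worked into a page's markup; the shop name and file names are made up for illustration.

<title>Buy Book Covers | Example Book Shop</title>

<h1>Buy Book Covers in Any Size</h1>
<p>Looking to buy book covers? Our shop stocks covers for hardbacks, paperbacks, and textbooks.</p>
<img src="leather-cover.jpg" alt="Buy book covers - leather editions" />
<h2>More Reasons to Buy Book Covers Here</h2>

The phrase appears in the title, the first paragraph, a heading, and the image alt text, exactly as described above, without being stuffed into every sentence.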

Buy the Book Today at Amazon

Tutorial: How to Remove link rel=’prev’ and link rel=’next’ from WordPress Head

How to Remove link rel=’prev’ and link rel=’next’ from WordPress Head (in WP 3.0+)

WordPress, in its default state, prints a lot of excess code to the head section of webpages. Two entries in particular always annoyed me:

<link rel='prev' title='' href='' />
<link rel='next' title='' href='' />

These entries are recommended for web usability for disabled persons, so consider that before removing them. We were looking for a way to trim down our pages, though, so we wanted to remove these entries. There are some outdated instructions in the WP forums that will not work in WP 3.0; we tried several approaches, but nothing worked.

In your WordPress theme folder, you'll find your functions.php file. Open that file and add the following line:

remove_action( 'wp_head', 'adjacent_posts_rel_link_wp_head', 10, 0 );

This call removes the action that generates the link rel='prev' and link rel='next' lines in the WordPress head.

A quick note on why those outdated instructions don't work with WP 3.0: the call above instructs WordPress to turn off the action titled "adjacent_posts_rel_link_wp_head." Our command works in WP 3.0 and above because, prior to 3.0, the same action was titled "adjacent_posts_rel_link."
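If you want to keep trimming the head, the same technique works for several other default entries WordPress prints there. The calls below are optional extras, not part of the prev/next fix above; review each one before removing it from your site.

// Optional extra head cleanup (add to functions.php alongside the line above)
remove_action( 'wp_head', 'rsd_link' );          // Really Simple Discovery link
remove_action( 'wp_head', 'wlwmanifest_link' );  // Windows Live Writer manifest link
remove_action( 'wp_head', 'wp_generator' );      // WordPress version meta tag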

 

From the WordPress SEO book

Book Excerpt: The Ultimate WordPress Robots.txt File

Let's whet your appetite for our book SEO for WordPress in advance of the release date. This is a truly awesome excerpt because it talks about robots. The robots.txt file, that is. You can buy the book at Amazon.

The Ultimate WordPress Robots.txt File

We learned in Chapter 2 that WordPress generates archive, tag, comment, and category pages that raise duplicate content issues. We can signal to search engines to ignore these duplicate content pages with a robots.txt file. In this section, we'll kill a few birds with one ultimate robots.txt file. We'll tell search engines to ignore our duplicated pages. We'll go further: we'll instruct search engines not to index our admin area or other non-essential folders on our server. As an option, we can also ask bad bots not to index any pages on our site, although bad bots usually do as they wish.

You can create a robots.txt file in any text editor. Place the file in the root directory/folder of your website (not the WordPress template folder) and the search engines will find it automatically.

The following robots.txt is quite simple, but can accomplish much in a few lines:

User-agent: *
Disallow: /cgi-bin
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /trackback
Disallow: /comments
Disallow: /category/*/*
Disallow: /tag/
Disallow: */trackback
Disallow: */comments

Line one, "User-agent: *," means that this robots.txt file applies to any and all spiders and bots. The next twelve lines all begin with "Disallow." The Disallow directive simply means "don't index this location." The first Disallow directive tells spiders not to index our /cgi-bin folder or its contents. The next five Disallow directives tell spiders to stay out of our WordPress admin area and other core WordPress folders. The last six Disallow directives cure the duplicate content generated through trackback, comment, category, and tag pages.

We can also disable indexing of historical archive pages by adding a few more lines, one for each year of archives.

Disallow: /2006/
Disallow: /2007/
Disallow: /2008/
Disallow: /2009/
Disallow: /2010/
Disallow: /2011/

We can also direct email harvesting programs, link exchanges schemes, worthless search engines and other undesirable website visitors not to index our site:

User-agent: SiteSnagger
Disallow: /
User-agent: WebStripper
Disallow: /

These lines instruct the named bots not to index any pages on your site. You can create new entries if you know the name of the user agent that you wish to disallow. SiteSnagger and WebStripper are both services that crawl and copy entire websites so that their users can view them offline. These bots are very unpopular with webmasters because they crawl thoroughly, aggressively, and without pausing, increasing the burden on web servers and diminishing performance for legitimate users.

Tip:

Check out Wikipedia’s robots.txt file for an example of a complex, educational, and entertaining use of the tool. Dozens of bad bots are restricted by the file, with some illustrative commentary.

Buy the Book Today at Amazon

WordPress Stripping iFrame Elements? Here’s the Fix.

Elements like Google Map embeds get stripped out. Here’s the Fix.

If you have ever tried to enter a Google Map embed into a WordPress page or post, you've noticed that switching between "Visual" and "HTML" view in the page or post edit window strips the iFrame out, leaving you with broken code that displays nothing. Luckily, there is a fix.

You'll need to find the functions.php file in your active theme folder. It's a standard WordPress file, so it'll be there. Next, we are going to add a short function, plus the filter hook that registers it, to change the way the WordPress editor handles iFrame code. Insert the following lines before the closing "?>" of your functions.php file.

// this function adds the iframe tag (and its allowed attributes) to the editor's list of valid elements
function add_iframe($initArray) {
    $initArray['extended_valid_elements'] = "iframe[id|class|title|style|align|frameborder|height|longdesc|marginheight|marginwidth|name|scrolling|src|width]";
    return $initArray;
}

// this filter hook tells the WordPress (TinyMCE) editor to run the function above when it initializes
add_filter('tiny_mce_before_init', 'add_iframe');

That's it. You can test your mod by entering some iFrame code in the editor window and switching between the Visual and HTML editors.
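If you need something to paste in for the test, any simple iframe will do. The example below uses a placeholder URL rather than a real embed; its attributes match the ones allowed by the function above.

<iframe src="https://example.com/embed" width="600" height="450" frameborder="0" scrolling="no"></iframe>

If the tag survives a round trip between the Visual and HTML views, the filter is working.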

How to Diagnose a Google Penalty

How to Diagnose a Google Ranking Ban, Penalty, or Filter

The following is an excerpt (with some recent modifications and editorial comments)  from our book WordPress Search Engine Optimization (now in second edition!). You can buy the book at Amazon.

If you undertake black hat or gray hat techniques, you run a fair chance of having your site penalized in the search results. But even if you are not engaged in these techniques yourself, your site may be punished for associating with black hat purveyors. Hosting on a shared server with bad neighborhoods, or sharing domain registration information with them, can lead to ranking problems, if not outright punishment. Certainly linking to a bad neighborhood can lead to discipline. If you purchase a domain, you'll inherit any penalties or bans imposed on the prior version of the website.

There is a wide range of penalties and ranking filters that search engines impose, and a still-wider range of effects that those penalties produce. In diagnosing and correcting ranking problems, more than half the battle is figuring out which penalty, if any, has been imposed and for what violations. Ranking problems are easy to fix but arduous to diagnose with precision. Sudden drops in rankings might lead you to suspect that you've received a penalty, but it might not be a penalty at all.

In the following section we’ll look at some specific penalties, filters, conditions, and false conditions, and how to diagnose ranking problems.

Google Ban

The worst punishment that Google serves upon webmasters is a total ban: the removal of all pages on a given domain from Google's index. A ban is not always a punishment: Google "may temporarily or permanently remove sites from its index and search results if it believes it is obligated to do so by law." Google also warns that punishment bans can be meted out for bad behavior: "certain actions such as cloaking, writing text in such a way that it can be seen by search engines but not by users, or setting up pages/links with the sole purpose of fooling search engines may result in removal from our index."

One of the most newsworthy instances of a total ban was when Google, in 2006, issued a total ban to the German website of carmaker BMW (http://www.bmw.de). The offense? Cloaked doorway pages stuffed with keywords that were shown only to search engines, and not to human visitors. The incident became international news, ignited at least partially by the SEO blogging community. BMW immediately removed the offending pages and within a few weeks, Google rescinded the ban.

How to Diagnose a Total or Partial Ban

To diagnose a full or partial ban penalty, run the following tests and exercises:

  • Check Google’s index. In the Google search field, enter the following specialized search query: “site:yourdomain.com.” Google then returns a list of all of your site’s pages that appear in Google’s index. If your site was formerly indexed and now the pages are removed, there is at least a possibility that your site has been banned from Google.
  • Check if Google has blacklisted your site as unsafe for browsing (type http://www.google.com/safebrowsing/diagnostic?site=mysite.com with your domain at the end).
  • Check for Nofollow/Noindex settings. It might seem obvious, but check to make sure you haven't accidentally set your WordPress site to noindex. To check, go to your WordPress Dashboard and click the "Privacy" option under "Settings." If the second setting, "I would like to block search engines, but allow normal visitors," is selected, your site will promptly fall out of the index. A stray entry in a robots.txt file or in your WordPress theme files can also instruct search engines not to index your entire site (examples appear just after the screenshot below).
  • Check Google Webmaster Tools. Sometimes, but not always, Google will notify you through your Webmaster Tools account that your site has been penalized. Because the message isn't guaranteed, you can still be penalized even if you never receive it. See the image below for an example message.

Google Webmaster Tools penalty message. In this example, the message notes, “we detected hidden text….”
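As for the stray noindex signals mentioned in the checklist above, they usually take one of two forms; both examples below are generic illustrations, not taken from any particular site. In robots.txt, a site-wide block looks like this:

User-agent: *
Disallow: /

And in a theme's header, a blanket noindex meta tag looks like this:

<meta name="robots" content="noindex,nofollow" />

Either one is enough to pull a site out of the index over time, so rule both out before assuming a penalty.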

PageRank Adjustment/PageRank Penalty

An alternative penalty short of an outright ban is a PageRank adjustment. The adjustment can be partial (a drop from a PR4 to a PR2) or full (a drop to PR0). With a PageRank adjustment, Google simply reduces or removes the PageRank value for a site. Google often imposes this punishment upon low-value general directories that sell links. Part of the difficulty in diagnosing and repairing a PageRank penalty is that the PageRank Google shows to users is historical; sometimes six months pass between PageRank updates.

How to Diagnose a PageRank Penalty

To diagnose a Google PageRank penalty, run the following tests and exercises:

  • Check your inbound links. Whenever your PageRank drops, the most likely reason is that you've lost valuable links. Check your link profile in Yahoo Site Explorer. Have you lost any premium, high-PR links you formerly had? Use the reliability of the PageRank algorithm to help diagnose: if you have a PR4 link pointing into one of your pages, and that PR4 page has only one outbound link, that one link alone will be strong enough to make the destination page a PR1 or a PR2. If, despite such a link, your page remains a PR0, that raises the likelihood of a PageRank penalty.
  • Check all pages. Be sure to check every page on your site; your PageRank might simply be shifting around within your site. Generally, though, your home page will have the highest PageRank value of any page on your site. So, if you've got a PR0 on all pages, including the homepage, a PageRank penalty should be suspected.
  • Check canonicalization. Recall the "www" and "non-www" distinction and that search engines see these as separate domains in some cases. WordPress handles this automatically, but some online tools don't check it for you, so be sure you are checking both the www and non-www versions of your domain.
  • Compare PageRank. Compare Google's reported PageRank score for your pages with SEOmoz's mozRank. Typically, these two scores correlate loosely (within about 10%). If the Google score is much lower than the mozRank score, it's likely that Google is trimming some PageRank. You can see the mozRank score with the free SEO Site Tools plugin or by visiting http://www.opensiteexplorer.org/.
Page Rank Penalty

Visible evidence of a Google ranking penalty in the SEO Site Tools plugin; all the elements of a ranking penalty are present. The inbound link count is healthy with over 3,500 links pointing to this domain. SEOmoz’ mozRank (erroneously called “Page Rank” in the screenshot) is a healthy 4.41. Nevertheless, Google’s PageRank is a zero. This is clear evidence of a Google PageRank penalty.

  • Check internal links. In Google Webmaster Tools, Google reveals its profile of internal links on your site. See the figures below for examples of an unhealthy internal link profile, and a healthy link profile. If your site has 100 indexed pages, but Webmaster Tools references only a handful of links, it means that Google is not properly processing your internal links. We need to be careful here because a range of conditions can cause this. It can potentially arise from a PageRank penalty but also from poor internal navigation structure.
Unhealthy Link Profile

This Google Webmaster Tools screenshot shows an unhealthy internal link profile, and is the same site shown in the screenshot just above. This site is a low-value link directory, a likely candidate for a Google PageRank penalty.

Healthy Link Profile

This Google Webmaster Tools screenshot shows a healthy link profile. All or nearly all pages on the website are represented in the internal link profile, and the number of links to each page is relatively consistent.

The -950 Ranking Penalty

Google sometimes applies a -950 ranking penalty to individual pages (but not to entire sites) for particular search queries. The -950 penalty means that for a particular search, your page has 950 positions added to its rank. So, for a term where you formerly ranked on page one of Google's search results in position three, you'd now rank at position 953, on page ninety-six of the results. Sound harsh? It is, and Google has made faint references to it as a penalty for over-optimization. Some SEO professionals contend that they have seen the penalty imposed for shady link building practices.

How to Diagnose a -950 Ranking Penalty

Diagnosing a -950 ranking penalty is easy: try search terms for which you formerly ranked (hopefully you noted their exact former position) and follow the search results out to page 95 or 96. Remember that you can always set Google to display 100 results instead of ten by using the advanced search option at Google.com, which is convenient for checking ranking position in the 100s and above.

The -30/-40 Ranking Penalty

Google often serves up another variety of penalty: the -30 or -40 position penalty. This is an often-imposed penalty, applied by Google to entire sites, not just to particular pages or particular search queries. This penalty is common enough to trip up legitimate webmasters for very minor oversights or offenses. Most signs point to the -30 penalty being applied algorithmically and being "forgivable," so changing the condition that led to the penalty automatically reverses it. This penalty has historically been imposed upon sites for serving up poor-quality content. For example, the penalty has been imposed upon sites that display thin content: content that is partially generic, as with an affiliate site repeating common descriptions of products it sells. Low-value directories have also been served this penalty.

How to Diagnose a -30/-40 Penalty

If you suspect that your site has been hit with a -30/-40 penalty, there is one sure-fire test to determine whether you tripped the penalty. Perform a Google search for your domain name, without the "www" and without the ".com" or ".net" part of the domain. In normal circumstances, this search should return your site at or near the first position (depending a bit on the competition for that term). If the test shows your site dropped to the 40s or 50s, it is almost certainly a -30/-40 penalty.

False Positives That Aren’t Penalties

Don't assume you've been penalized by Google just because your rankings drop or because your rankings remain poor for a new site. Ranking positions can jump around naturally, especially just before algorithm updates, when Google updates its search engine rules. You may also have lost one or more valuable inbound links, which can lead to a drop in rankings. You may also be alternating between Google's personalized search modes. Personalized search is a Google feature that returns results based on your personal browsing habits. So, if you've visited your own website in the past few days, Google will return your website near the top of the results, figuring that it's one of your personal favorites. Personalized search is a convenience tool, but it doesn't return true rankings. To see actual ranking results, you need to make sure personalized search is off. To do this, look on any Google search results page in the upper left-hand corner for "Personalize Search On," and click the link just under it that reads "Turn it off."

Google penalties are almost never imposed for no reason at all. Yes, Google sometimes penalizes light offenders while more egregious violations go unpunished. While that might not seem fair, it doesn't change the fact that if you have perfectly complied with Google's Webmaster Guidelines, you are extremely unlikely to be penalized. If you've been penalized, there's a reason.

From the WordPress SEO book

SEO Master Class: The Mathematics and Operation of Google PageRank

The following is an excerpt (with some recent modifications and editorial comments) from our book, WordPress Search Engine Optimization. You can buy the book at Amazon.

The Mathematics and Operation of Google PageRank

Google’s PageRank is part of its search algorithm; the other search engines’ ranking algorithms work similarly. Yahoo and Bing, while they obviously measure inbound link counts as a ranking factor, do not disclose to web users any measure of page value equivalent to PageRank. PageRank works through complex mathematics. Understanding the mathematical intricacies is not vital, but can help illuminate how PageRank impacts your link building efforts. PageRank works the same on all platforms, WordPress or otherwise.

The PageRank Calculation

The PageRank calculation works as follows: Google assigns a numerical value to each indexed page on the Web. When an indexed page hyperlinks to another page on the Web, a portion of that numerical value is passed from the linking page to the destination page, thereby increasing the destination page's PageRank. Inbound links increase the PageRank of your web pages, and outbound links decrease it. PageRank, often abbreviated as "PR," is expressed as a number from 0 to 10. Google.com and Facebook.com, both of which benefit from millions of inbound links, enjoy a PageRank of 10. In common parlance, a PageRank 10 site is referred to as a "PR10 site." Remember, though, that PageRank refers to individual pages on the web, not just sites; a "PR5 site" simply means that the site's front page is a PR5.

So how is PageRank specifically calculated? Every indexed page on the web enjoys a small amount of PageRank on its own, a PageRank score of 1. This inherent PageRank is the original source of all PageRank on the web; it is only through linking between pages and sites that some pages accumulate higher PageRank than others. However, a page can never send all of its PageRank to other pages—this is where the damping factor comes into play. The damping factor is simply a number between 0 and 1 (but think of it as zero to 100 on a percentage scale); it represents the amount of PageRank that can be sent away from a page when that page links out to other pages.

If a search algorithm's damping factor were set to zero, no page would ever send PageRank away, and the entire PageRank calculation would become pointless. On the other hand, if the damping factor were set to 1, then 100% of a page's PageRank would be sent away through outbound linking, and any page with outbound links would retain no PageRank. In this case, the algorithm also fails: the internet would be populated entirely by sites of either PR0 or PR10, with nothing in between. As it happens, the damping factor employed by Google is widely believed to be .85. This means that 85% of a page's PageRank is available to be passed to other pages through linking, while 15% of a page's PageRank will always be retained. It is believed that Google can alter the damping factor for particular sites.
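For readers who want the underlying arithmetic, the formula published in the original PageRank paper (which the damping-factor discussion above paraphrases) is usually written as:

PR(A) = (1 - d) + d * ( PR(T1)/C(T1) + PR(T2)/C(T2) + ... + PR(Tn)/C(Tn) )

Here d is the damping factor (believed to be 0.85 in Google's case), T1 through Tn are the pages linking to page A, and C(T) is the number of outbound links on page T. Whether Google still uses this exact formula is anyone's guess, but it captures the mechanics described above: each linking page contributes a share of its own PageRank, divided by its number of outbound links and scaled by the damping factor.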

Consider for a moment that Google manages PageRank calculations for billions of web pages. If that wasn’t daunting enough, consider that Google undertakes the even more staggering task of managing the mathematical calculations of immeasurable numbers of links between those billions of sites.

PageRank, Diagrammatically

This graphical illustration of PageRank calculations for a hypothetical group of web pages shows that PageRank accumulates in site "B" because it enjoys a high number of inbound links. The sites represented by the small circles at the bottom of the illustration retain only 1.6% of the PageRank distribution because they link outward and have no inbound links. Note also that site "C" enjoys a healthy amount of PageRank simply because it receives a single link from site "B."

You Have to Share Your PageRank

Also bear in mind that the amount of PageRank available to be passed by a page is divided equally among all the outbound links on that page. So, if a webpage has a total of six links (three internal links and three external links to outside websites), the PageRank passed away by that page will be shared equally among those six links.

What does that mean for the link builder? Well, it means that if you have secured a link on a great PR4 page, but that page has 200 outbound links, then you’ll be sharing the available PageRank with 199 other sites. That’s why you want to seek out pages with low numbers of outbound links. When there are fewer outbound links, your link will enjoy a much greater percentage of the available PageRank.
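To put rough numbers on that, using the formula above with d = 0.85: a page with PageRank value P and 200 outbound links passes about 0.85 × P / 200 through each link, while the same page with only 10 outbound links would pass 0.85 × P / 10 per link, twenty times as much. This is back-of-the-envelope arithmetic to illustrate the dilution; the real calculation is iterative and far more involved.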

The Logarithmic PageRank Scale

If the mathematics underlying PageRank weren't complicated enough, there is another facet you must consider: the PageRank scale of PR1 to PR10 isn't linear, it is logarithmic. It takes roughly ten times as much linking power to rise from a PR2 to a PR3; expressed another way, a PR4 page has about 100 times the linking power of a PR2 page. As each level of PageRank is reached, it becomes harder and harder to reach the next level. There are only about 120 to 150 PR10 pages at any given time, and generally this elite class of pages and sites includes Google.com, Microsoft.com, WhiteHouse.gov, and other sites of equivalent popularity and character.

PageRank Is Historical

PageRank is historical and is only updated every three months or so (although sometimes much longer periods pass between updates; it's really up to the whim of Google). When you check the PageRank of a page, you aren't seeing the current PageRank; you are seeing the PageRank reported as of the last update.

Buy the Book Today at Amazon

From the WordPress SEO book

SEO Master Class: Choosing a Keyword-Rich Domain Name

The following is an excerpt (with some recent modifications and editorial comments) from our book, WordPress Search Engine Optimization. You can buy the book at Amazon.

SEO Master Class: Choosing a Keyword-Rich Domain Name

Almost all websites will rely on primary keywords on core pages like the front page. If your keyword research teaches you that one phrase, or a very small group of related phrases, represents your high-volume, high-relevance primary keywords, then you'll want to consider using those keyphrases in a keyword-rich domain name. For some, this won't be possible or desirable: perhaps the domain name has already been chosen, or the business's marketing strategy revolves principally around a customized brand name. But if you have the opportunity to choose a keyword-rich domain name, you'll benefit from a little extra power in your ranking efforts down the road. You may have noticed that a competitive search market is often populated with websites that have keywords in their domain names. This is no accident: keywords in the domain name are a ranking factor, and experienced webmasters know it.

Whatever you do, choose wisely; if you ever need to change your domain name, it'll take a lot of work and you'll lose both incoming links and existing customers.

Tip:

SEO professionals know that you don’t always have—and won’t always need—every SEO element (domain age, keyword-rich domain name, expert title tags, thousands of inbound links, etc.) to rank well. When you consider all the elements together that make a site rank well, you want to make sure you have 80% of the elements present—but don’t fret if a few elements are out of your control.

Domain names are certainly an element that search engines consider as a ranking factor. Remember a search engine’s core purpose: to deliver relevant search results to a user entering a query. Certainly a domain name that includes a few of the searcher’s query terms would tend to be relevant for that query. The weight afforded by search engines to keywords in the domain names is moderate. In competitive markets, a keyword-rich domain name can provide some extra push to pass tough competitors. This can be frustrating in a market where every conceivable variant of a domain name has been snatched up.

Also keep in mind that keyword prominence applies to keywords in domain names. This means that the first words in a domain name are afforded greater weight by the search engines than the last words in a domain name. You will also want to mirror the word order of popular search phrases whenever possible and keep your important terms first in the domain name.

To craft a domain name, begin with your primary keywords. We’ll use some real keyword data and search volume surrounding the keyphrase “Denver homes” as an example.

Keyword                   Monthly Search Volume
Denver homes for sale     1,000
Denver homes              1,000
Denver homes for rent     280
new homes Denver          280

The preceding table demonstrates a few important points:

  • “Denver” is the first word in both of the highest volume key phrases.
  • “Denver” appears in all four of the keyword variations.
  • “Homes” appears in all four of the keyword variations.

In this example, the terms “new” and “for rent” aren’t the valuable terms—unless of course your website is concerned with rental homes and apartments in Denver, in which case the “Denver homes for rent” keyphrase is the only relevant one on which to base your domain name. With “Denver” in the first position for the majority of searches, you will want to maintain that word order.

You should also consider keyword overlap in crafting domain names. Keyword overlap exists when one key phrase or keyword is incorporated either partially or fully within another—and you can use it to your benefit. In our example, “Denver homes” has full overlap with “Denver homes for sale.” When you see overlap like that with robust search volume for both phrases, the longer key phrase becomes even more attractive as a primary keyword for your domain name. “New homes Denver” has only a partial overlap, and even that’s a stretch because the word order is reversed.

And so, in our example, the path is clear: "Denver homes for sale" is a highly desirable, high-volume phrase to use as the basis for a domain name. But what do you do if "denverhomesforsale.com" is already taken? You have a few options: buy an existing or dropped domain, play with hyphens, or create a clever variation with extra words.

Buying/Acquiring Domain Names

You can always buy a domain name from its owner or wait for an existing domain to expire (a so-called "dropped" domain). For dropped domains, there are a host of online services that, for a fee, will help you navigate the increasingly complex world of expired domains. This approach will yield some inevitable frustrations: the system is dominated by experts who have mastered its subtleties, and as a newcomer you'll likely have to endure a learning curve. Also, the owner of an expired domain is entitled to a redemption period during which you'll have to wait if you want to snatch up a choice domain. For most SEO pros, the extra time and risk isn't worth it, especially when you can overcome a less-than-perfect domain name with sound on-page optimization and some extra linking power.

You can also buy a domain in the aftermarket from an existing domain owner. The dangers to watch out for with this approach are that some domain owners make themselves impossible to find, and when you do find them, they often have a completely deluded sense of the domain's value. Services like sedo.com and domainbrokers.com maintain ostensibly active listings of domains for sale. Domain registrars like godaddy.com offer domain "buying services" where you select a desired domain name and they attempt to secure it for you.

In the domain resale market, asking prices for domains are typically astronomical. Overall, the domain resale market is riddled with complexities, dead ends, and punitive pricing. If you do undertake to purchase a domain, either by resale or following expiration, be prepared for a hunt. Smart SEO professionals don’t overpay for domains, and they certainly don’t endure unreasonable delays to launch their next project.

Hyphens and Extra Characters in Domain Names

It’s true: all the easy domain names are taken. But you still have an opportunity to fashion a keyword-rich domain name with a little creativity. All domain names must follow these technical rules:

  • Domains can include letters (x, y, z).
  • Domains can include numbers (1, 2, 3).
  • Domains can include dashes/hyphens, which can be repeated in sequence (-, --, ---).
  • Domains cannot include spaces.
  • Capital letters are ignored.
  • Domains can’t begin or end with a dash.

Hyphens present a good opportunity. In our example, we might consider checking for the availability of denver-homes-for-sale.com. This domain keeps the keywords in order, maintains keyword prominence, and the hyphens offer two benefits: they make the domain easier for humans to read, and they can help search engines distinguish the words (e.g., "kitchens pot" vs. "kitchen spot"). The drawback of hyphens, and it is worth considering, is that hyphenated domains are awkward and unmemorable and can appear trashy. Visitors are unlikely to remember your specific combination of words and hyphens. It can also be inconvenient to express your email address repeatedly as "Peter at Denver homes for sale, dot com, with hyphens between all four words." That said, in a pure search environment, where you are going solely for keyword-based traffic, you can worry less about memorability: you'll be getting your visitors from search rather than relying on repeat visits.

Hyphenated domains have a fairly deserved reputation for being a bit trashy; many link farms and thin-content sites employ hyphens in their domain names.

A helpful variant of this technique is to simply apply a suffix to the domain, such as denverhomesforsalenow.com or denverhomesforsale303.com (303 is an area code in Denver). Get creative: think of a term that adds to your domain. The terms "express" and "pros" have positive connotations: "express" suggests speedy, high-value service, while "pros" suggests licensed, experienced professionals. Find an appropriate suffix for your domain and you will have a keyword-rich domain without the hassle and expense of purchasing in the domain aftermarket.

As a final word on domains, make sure you use a reputable domain registrar. Some disreputable registrars may make it difficult for you to transfer your domain away later.

Tip:

Don't park your domains; put up content! Domain registrars like GoDaddy offer a domain parking "service." This isn't a service at all: it's a way for GoDaddy to squeeze a few pennies in pay-per-click ads out of your domain. The better approach is to put up even a few paragraphs of content on your domain, just to get the search engines indexing the page and building up some site age. Parked domains don't earn site age.

Buy the Book Today at Amazon

From the WordPress SEO book

Book Excerpt: How to Use Demographic Data to Find Wealth Centers

The following is an excerpt (with some recent modifications and editorial comments) from our book, WordPress Search Engine Optimization. You can buy the book at Amazon.

Follow the People, Follow the Money

When building your keyword list, you'll always want to return to the question "Who is my customer?" If you are a deck builder, pool builder, or plastic surgeon, your customer is a homeowner (in the case of home services) and a person of financial means (in the case of home services or plastic surgery). It's obviously helpful to know where the people with the money live. If the residents of a town or neighborhood can't afford your product, you won't want to market there. Similarly, you'll prefer to put your efforts into high-population areas over low-population areas. The same approach can apply to other demographics that might impact your bottom line: Where are the families with children? Where do the senior citizens live? These are basic demographic questions that you can use to focus your keyword strategy.

Most of you will already have a sense of your own community: where the population centers are and where the wealthier people with disposable income live. There may be other variations: areas with new home construction underway are a gold mine for home services like window blinds, alarm companies, and pool builders.

If you don’t have a true encyclopedic understanding of the demographics of your region, or you simply want to deepen your understanding of the local marketplace, there is a great web-based tool that can help you “follow the money.” The tool is Webfoot Maps and can be found at http://maps.webfoot.com/. Webfoot has created a collection of demographic-based Google Maps mashups that visually represent demographic data like population density and household income as an overlay over a standard Google Map. With this tool, you can zoom into your town and see where the population centers are and where the high-income folks are living.

The site offers a tremendous amount of data and it can be very helpful in crafting a keyword strategy. The census data upon which the site relies is from 2000, but will likely be updated soon when the new 2010 census data becomes available. To use the tool, browse to http://maps.webfoot.com/ and follow the link for “US 2000 Census.” From there, you can select any of the following demographic criteria:

  • Median Household Income
  • Population density
  • Median Owner-occupied home value
  • Median age
  • Median home value/median income
  • Percent White
  • Percent Black
  • Percent Hispanic
  • Percent Asian
  • Percent Native
  • Percent Female
  • Percent Male
  • Percent of owner-occupied housing units
  • Percent of renter-occupied housing units
  • Percent of vacant housing units
  • Average household size
  • Average family size
  • Percent with college degree
  • 2008 Unemployment Rate (county)
  • 2007 Unemployment Rate (county)
  • Unemployment Rate Change, 2007 to 2008

Webfoot presents sensible graphical data for each default selection, but you can adjust the “Value” parameter to display, for example, only areas with incomes above $100,000 per year.

Webfoot’s demographic Google Maps mashup at work displaying household income in the geo-markets including and surrounding Kansas City. Darker areas indicate higher income levels. Areas with higher incomes can present excellent web marketing opportunities for some businesses.

Buy the Book Today at Amazon

Google Hides “Dead” Girl on Google Maps: “This Image Is No Longer Available”

Google Hides “Dead” Girl on Google Maps: “This Image Is No Longer Available”

Last week, we reported on the apparently dead girl on Google Maps. As it turned out, the girl was "playing" dead in the street just as Google's imaging van drove by. The widespread media frenzy over the photo ultimately prompted Google to remove the image from Google Places, replacing it with a blank image and the message "This image is no longer available." You can see the removed image here.

No longer available

The Daily Mail of the UK reports that Google has been forced to remove some images from its “Street View” feature throughout the UK based on privacy protests.

Here’s a screenshot of the original Maps entry showing the young girl in the street: