The World Wide Web is like a marathon in which websites race to reach the finish line first. In this case, the finish line is a higher ranking. And in this race for supremacy, it is important to avoid duplicate content and the penalties it attracts.

To keep their directories working efficiently, search engines are armed with content filters that remove or filter out duplicate content from the pages they index. The most hurtful penalty is lower rankings.

Unfortunately, these filters catch not only rogue pages but genuine ones too. Webmasters need to understand how the filters function and what action to take to avoid being filtered out.

When a search engine sends out its spiders, the filters sieve out:

• Websites that feature identical content, including sites where the webmaster publishes many copies or versions of the same pages to cheat the search engines. Filters are also extremely sensitive to “doorway” pages.

• Content masked by different packaging. Known as “scraped content,” this duplication of pages with few or no meaningful changes falls prey to filters; the sketch after this list illustrates one way such near-duplicates can be detected.

• Product descriptions featured by e-commerce sites. Most e-commerce sites publish the manufacturer’s description alongside a product, and the same text then appears on countless e-commerce sites, falling victim to filters.

• Articles distributed widely over the net. While some engines are programmed to trace an article back to its origin, others may not be able to identify the original source.
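
How might a filter spot a near-duplicate in practice? One common approach in duplicate detection, shown here purely as an illustration rather than as how any particular search engine actually works, is to break each page into overlapping word “shingles” and measure how much the resulting sets overlap. In the Python sketch below, the sample texts, the shingle size, and the 0.5 threshold are all assumptions chosen for the example.

    # Illustrative sketch only: real search engines use far more sophisticated,
    # web-scale fingerprinting. This shows the basic idea behind near-duplicate
    # detection: compare overlapping word n-grams ("shingles") between pages.

    def shingles(text: str, size: int = 3) -> set[tuple[str, ...]]:
        """Return the set of overlapping word n-grams (shingles) in `text`."""
        words = text.lower().split()
        return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

    def jaccard_similarity(a: set, b: set) -> float:
        """Jaccard similarity: size of the intersection over size of the union."""
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    # Hypothetical example: a "scraped" page that repackages the original
    # with only trivial edits.
    original = "Our widget is the best widget on the market, built to last for years."
    scraped = "Our widget is the best widget on the market, and built to last for years!"

    score = jaccard_similarity(shingles(original), shingles(scraped))
    print(f"similarity: {score:.2f}")
    if score > 0.5:  # threshold is arbitrary, chosen for this illustration
        print("pages would likely be flagged as duplicates")

The higher the similarity score, the more likely two pages are repackaged versions of the same content, which is exactly the pattern a duplicate-content filter is built to catch.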