It is important for web developers to design and build a site so that it is search engine friendly. A search-engine-friendly site tends to be user friendly as well, because the major engines are adopting artificial intelligence so that they can judge a website much as a human would. In this article, I will explain the basic web development concepts that make a site friendly to both users and search engines such as Google.
Search engines understand HTML very easily. A website should be built around text, not only images. Images in TIFF, JPEG, or GIF format can be very important for a site, because users often prefer looking at pictures to reading, but the engines cannot interpret the content of an image, so all of that content can be ignored by the crawlers. To resolve this issue, developers can use the alt attribute to provide an HTML text description for each image.
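As a small sketch, an alt attribute on an image tag might look like this (the file name and description are illustrative):

```html
<!-- The alt text gives crawlers (and screen readers) a textual
     description of an image they cannot otherwise interpret. -->
<img src="mountain-sunrise.jpg"
     alt="Sunrise over a snow-covered mountain range">
```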
The way a user sees a website is quite different from the way a search engine sees the same site. You can check your website with SEO-Browser or the MozBar extension for Mozilla Firefox to see how it has been indexed by search engines such as Google. These can be the first tools you use for a successful SEO campaign.
Crawlable links: A website may contain thousands of pieces of content, including different pages, images, audio, and video files, but all of that content should be properly linked together. A search engine starts crawling a site from one page and then crawls whatever content is linked from it. Imagine that the engine has started indexing the first page of your site, but that page does not link to any other page of the website; the engine may then conclude that the site has only one page. However much quality content sits on your web server, all of it will be ignored if it is not linked up properly. Every developer should also be very careful about broken links.
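A minimal way to make every page reachable is a plain HTML link structure that appears on each page; the page names here are hypothetical:

```html
<!-- Crawlers follow ordinary anchor links, so every page should be
     reachable through at least one such link from an indexed page. -->
<nav>
  <a href="/index.html">Home</a>
  <a href="/articles.html">Articles</a>
  <a href="/gallery.html">Gallery</a>
  <a href="/contact.html">Contact</a>
</nav>
```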
Some documents are accessible only after filling in a form. Search engines do not fill in forms, and because of this, a lot of quality content can be missed by them entirely.
Robots.txt is the file in which a webmaster can indicate which pages or content they do not want search engines to index. Automated robots check the robots.txt file before they start indexing the content.
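A minimal robots.txt placed at the root of the site might look like this (the directory names are illustrative):

```
# Applies to all crawlers
User-agent: *
# Keep these directories out of the index
Disallow: /admin/
Disallow: /private/
```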
Search-Engine-Friendly Content Design for the Website (Keywords)
Search engines such as Google crawl millions of web pages and index their content. Instead of storing millions of websites in a single database, an engine maintains millions of indexes organised by keyword. When a robot crawls a website and finds content related to a particular keyword, it indexes that site against that keyword. When a user searches for the keyword, sites are delivered to the search engine results page (SERP) according to their page rank within that keyword index.
Keywords are the building blocks of search engine optimisation (SEO). Engines keep adopting new and updated algorithms that can think about and understand content somewhat as a human does, but human readers still scan for keywords to get an idea of what a particular piece of content is about. The success of an optimisation campaign depends on keyword research and on building content around those keywords.
There was a time when the engines indexed the content of a site purely by its keywords, and many people abused this system: they inserted keywords into every page in the same colour as the page background, so that a normal user could not see them but the search engines would still index the page. Keyword spamming of this kind is now penalised by every major search engine.
Keywords should be analysed carefully and agreed by the management, and the resulting list should then be used prominently in the title tag, meta keywords tag, meta description, and headings.
On-page optimisation: Once a keyword has been selected, it should be used in the title tag of the relevant article. The article should have headings (H1, H2, H3, etc.) that can carry the keywords, and it helps if some of the relevant keywords appear near the beginning of the article. Developers should keep in mind, however, that a particular keyword should not make up more than about 2.5% of the words in an article. Alt attributes on images can also carry keywords, and the URL can be named with closely related keywords. The meta keywords tag lists all the targeted keywords for a particular page, while the meta description helps the engine's robots understand that page's content. Developers should put more emphasis on the meta description than on the meta keywords tag.
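Putting these on-page elements together, the head of a hypothetical article targeting the keyword "organic gardening" might be sketched as follows (the title, description, and keyword list are invented for illustration):

```html
<head>
  <!-- Title tag carrying the target keyword -->
  <title>Organic Gardening for Beginners</title>
  <!-- Meta description: a human-readable summary the engines can show on the SERP -->
  <meta name="description"
        content="A beginner's guide to organic gardening, from soil preparation to pest control.">
  <!-- Meta keywords tag (given far less weight by engines than the description) -->
  <meta name="keywords" content="organic gardening, composting, soil preparation">
</head>
```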
Keyword density: Developers commonly ask how many times a keyword can be used in a single article. The best approach is to use a particular word two or three times and otherwise use different synonyms of that word. All the engines are trying to read like humans, so the best practice for keyword density is simple: do not concentrate on the keywords; concentrate on the quality of the article and the relevance of the content, and the keywords will be picked up by the search engines automatically.
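As a rough illustration of the 2.5% guideline mentioned above, keyword density can be computed by counting occurrences of a word against the total word count. This is only a simple sketch for checking your own drafts, not how any particular engine measures density, and the sample article text is invented:

```python
import re

def keyword_density(text, keyword):
    """Return the percentage of words in `text` that equal `keyword`."""
    # Split on anything that is not a letter or apostrophe
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = words.count(keyword.lower())
    return 100.0 * hits / len(words)

article = ("Organic gardening starts with healthy soil. "
           "Good soil feeds the plants, and compost feeds the soil.")
# "soil" appears 3 times in 16 words, well above the 2.5% guideline
print(round(keyword_density(article, "soil"), 2))
```

A real article would of course be much longer, which is what makes a low density achievable while still using the keyword several times.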