How Search Engines Work as Web Crawlers
Search engines are what ultimately bring your website to the attention of potential customers. It is therefore worth knowing how these search engines actually work and how they present information to the customer who initiates a search.
There are basically two types of search engines. The first type is powered by robots called crawlers or spiders.
Search engines use spiders to index websites. When you submit your web pages to a search engine by completing its required submission page, the search engine spider will index your site. A "spider" is an automated program run by the search engine system. The spider visits a website, reads the content on the page itself, reads the site's meta tags, and follows the links that the site connects to. The spider then returns all of that information to a central repository, where the data is indexed. It will visit each link you have on your website and index those sites as well. Some spiders will only index a certain number of pages on your site, so do not create a site with 500 pages!
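To make this concrete, here is a minimal sketch of how such a spider might work, using only Python's standard library. Everything here (the crawl function, the PageParser class, the MAX_PAGES cap, and the seed URL) is illustrative, not any real engine's implementation: it fetches a page, reads its meta tags, follows its links, and stores what it finds in a "central repository".

```python
# A minimal sketch of a spider: fetch a page, read its content and
# meta tags, collect its links, and repeat breadth-first. Names and
# limits here are hypothetical, for illustration only.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

MAX_PAGES = 50  # many spiders cap how many pages per site they index


class PageParser(HTMLParser):
    """Collects the links and meta tags found on a single page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")


def crawl(seed_url):
    """Breadth-first crawl starting from seed_url, up to MAX_PAGES pages."""
    queue, seen, repository = [seed_url], set(), {}
    while queue and len(repository) < MAX_PAGES:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue  # unreachable or non-HTTP links are simply skipped
        parser = PageParser()
        parser.feed(html)
        # Store what the spider found in the central repository.
        repository[url] = {"meta": parser.meta, "html": html}
        # Follow the links this page connects to.
        queue.extend(urljoin(url, link) for link in parser.links)
    return repository


if __name__ == "__main__":
    pages = crawl("https://example.com/")
    print(f"Indexed {len(pages)} page(s).")
```

Note how the page cap is enforced before fetching: this is one simple way a spider could stop after "a certain number of pages", as described above.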
The spider periodically revisits sites to check for information that has changed. The frequency with which this occurs is determined by the moderators of the search engine.
A spider is almost like a book that contains the table of contents, the actual content, and the links and references to all the websites it finds during its search, and it can index up to a million pages a day.
Examples: Excite, Lycos, AltaVista, and Google.
When you ask a search engine to locate information, it actually searches through the index it has built rather than searching the web itself. Different search engines produce different rankings because not all search engines use the same algorithm to search through their indices.
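A rough illustration of why queries run against an index instead of the live web is the inverted index: each word maps to the pages that contain it, so answering a query is a fast lookup. The sketch below assumes a simple repository of page text like the one the hypothetical crawler above might produce; the structure and names are made up for illustration.

```python
# A minimal sketch of an inverted index: each word maps to the set of
# URLs whose text contains it. Queries are answered from this index,
# not by re-fetching the web. (Hypothetical structure, for illustration.)
from collections import defaultdict


def build_index(repository):
    """repository maps URL -> page text, as a crawler might produce."""
    index = defaultdict(set)
    for url, text in repository.items():
        for word in text.lower().split():
            index[word].add(url)
    return index


def search(index, query):
    """Return the URLs containing every word of the query."""
    results = None
    for word in query.lower().split():
        matches = index.get(word, set())
        results = matches if results is None else results & matches
    return results or set()


repository = {
    "https://example.com/a": "search engines index the web",
    "https://example.com/b": "spiders crawl the web and follow links",
}
index = build_index(repository)
print(search(index, "web links"))  # {'https://example.com/b'}
```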
One of the things a search engine algorithm scans for is the frequency and location of keywords on a website, but it can also detect artificial keyword stuffing, or spamdexing. The algorithms also analyze the way pages link to other web pages. By checking how pages link together, an engine can determine both how important a page is and whether the keywords of the linked pages are similar to the keywords on the original page.
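The best-known form of this link analysis is PageRank-style scoring: a page linked to by many important pages scores higher. The toy sketch below shows the general idea only; the link graph is invented for illustration, and real engines combine link scores with keyword relevance and many other signals.

```python
# A toy sketch of link-based scoring in the spirit of PageRank.
# The graph below is made up for illustration; dangling pages
# (no outgoing links) are simply skipped to keep the sketch short.
def pagerank(links, damping=0.85, iterations=50):
    """links maps page -> list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                if target in new_rank:
                    new_rank[target] += share
        rank = new_rank
    return rank


links = {
    "home": ["about", "products"],
    "about": ["home"],
    "products": ["home", "about"],
}
for page, score in sorted(pagerank(links).items(), key=lambda x: -x[1]):
    print(f"{page}: {score:.3f}")
```

Running this prints "home" with the highest score, since every other page links to it, which matches the intuition that heavily linked pages are treated as more important.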