The Deep Web (also called the Deepnet, the Invisible Web, the Undernet or the hidden Web) is World Wide Web content that is not part of the Surface Web, which is indexed by standard search engines.
It should not be confused with the dark Internet, the computers that can no longer be reached via the Internet, or with Darknet, a distributed filesharing network that could be classified as a smaller part of the Deep Web.
Mike Bergman, founder of BrightPlanet and credited with coining the phrase,[1] said that searching on the Internet today can be compared to dragging a net across the surface of the ocean: a great deal may be caught in the net, but there is a wealth of information that is deep and therefore missed.[2] Most of the Web's information is buried far down on dynamically generated sites, and standard search engines do not find it. Traditional search engines cannot "see" or retrieve content in the deep Web: those pages do not exist until they are created dynamically as the result of a specific search. The deep Web is several orders of magnitude larger than the surface Web.[3]
Size
Estimates based on extrapolations from a study done at the University of California, Berkeley in 2001[3] speculate that the deep Web consists of about 7,500 terabytes. More precise estimates are available for the number of resources in the deep Web: a survey by He et al. detected around 300,000 deep web sites in the entire Web in 2004,[4] and, according to Shestakov, around 14,000 deep web sites existed in the Russian part of the Web in 2006.[5]
Naming
Bergman, in a seminal paper on the deep Web published in the Journal of Electronic Publishing, mentioned that Jill Ellsworth used the term invisible Web in 1994 to refer to websites that were not registered with any search engine.[3] Bergman cited a January 1996 article by Frank Garcia:[6]
"It would be a site that's possibly reasonably designed, but they didn't bother to register it with any of the search engines. So, no one can find them! You're hidden. I call that the invisible Web."
Another early use of the term Invisible Web was by Bruce Mount and Matthew B. Koll of Personal Library Software, in a description of the @1 deep Web tool found in a December 1996 press release.[7]
The first use of the specific term deep Web, now generally accepted, occurred in the aforementioned 2001 Bergman study.[3]
Deep resources
Deep Web resources may be classified into one or more of the following categories:
Dynamic content: dynamic pages which are returned in response to a submitted query or accessed only through a form, especially if open-domain input elements (such as text fields) are used; such fields are hard to navigate without domain knowledge (see the sketch after this list).
Unlinked content: pages which are not linked to by other pages, which may prevent Web crawling programs from accessing the content. This content is referred to as pages without backlinks (or inlinks).
Private Web: sites that require registration and login (password-protected resources).
Contextual Web: pages with content varying for different access contexts (e.g., ranges of client IP addresses or previous navigation sequence).
Limited access content: sites that limit access to their pages in a technical way (e.g., using the Robots Exclusion Standard, CAPTCHAs, or no-cache Pragma HTTP headers which prohibit search engines from browsing them and creating cached copies[8]).
Scripted content: pages that are only accessible through links produced by JavaScript as well as content dynamically downloaded from Web servers via Flash or Ajax solutions.
Non-HTML/text content: textual content encoded in multimedia (image or video) files or specific file formats not handled by search engines.
Text content using the Gopher protocol and files hosted on FTP that are not indexed by most search engines. Engines such as Google do not index pages outside of HTTP or HTTPS.[9]
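To make the first category concrete, the minimal sketch below submits one query to a search form using only Python's standard library; the endpoint and the "q" field name are hypothetical placeholders, not a real service. The result page exists only as a response to this request, so a crawler that merely follows hyperlinks never causes it to be generated and can never index it.

import urllib.parse
import urllib.request

# Hypothetical search form; the URL and the "q" field are assumptions
# made purely for illustration.
FORM_URL = "http://example.org/search"

def query_form(term):
    """Submit one form query and return the dynamically generated page."""
    data = urllib.parse.urlencode({"q": term}).encode("ascii")
    # The response is built on demand from a database; no static URL
    # for this page exists for a link-following crawler to discover.
    with urllib.request.urlopen(FORM_URL, data=data) as response:
        return response.read().decode("utf-8", errors="replace")

print(query_form("marine biology")[:200])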
Accessing
To discover content on the Web, search engines use web crawlers that follow hyperlinks through known protocol virtual port numbers. This technique is ideal for discovering resources on the surface Web but is often ineffective at finding deep Web resources. For example, these crawlers do not attempt to find dynamic pages that are the result of database queries due to the infinite number of queries that are possible.[1] It has been noted that this can be (partially) overcome by providing links to query results, but this could unintentionally inflate the popularity for a member of the deep Web.
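As a rough illustration of link-following discovery, the sketch below (under stated assumptions: standard library only, no robots.txt handling, and a placeholder seed URL) fetches a page, extracts the href targets of its anchor tags, and queues them breadth-first. Because it only ever requests URLs it has already seen in a link, anything reachable solely through a form submission stays invisible to it.

import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect the href targets of anchor tags on one page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, limit=50):
    """Breadth-first crawl: fetch each page, queue every hyperlink found."""
    seen, queue = {seed}, deque([seed])
    while queue and len(seen) < limit:
        url = queue.popleft()
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue  # unreachable or non-HTTP resources are skipped
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return seen

# Placeholder seed; a production crawler would also honor robots.txt.
print(len(crawl("http://example.org/")))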
In 2005, Yahoo! made a small part of the deep Web searchable by releasing Yahoo! Subscriptions. This search engine searches through a few subscription-only Web sites. Some subscription websites display their full content to search engine robots so they will show up in user searches, but then show users a login or subscription page when they click a link from the search engine results page.
DeepPeep, Intute, Deep Web Technologies, and Scirus are a few search engines that have accessed the deep Web. Intute ran out of funding and, as of July 2011, is a temporary static archive.[10]
Crawling the deep Web
Researchers have been exploring how the deep Web can be crawled in an automatic fashion. In 2001, Sriram Raghavan and Hector Garcia-Molina[11][12] presented an architectural model for a hidden-Web crawler that used key terms provided by users or collected from the query interfaces to query a Web form and crawl the deep Web resources. Alexandros Ntoulas, Petros Zerfos, and Junghoo Cho of UCLA created a hidden-Web crawler that automatically generated meaningful queries to issue against search forms.[13] Several form query languages (e.g., DEQUEL[14]) have been proposed that, besides issuing a query, also allow structured data to be extracted from result pages. Another effort is DeepPeep, a project of the University of Utah sponsored by the National Science Foundation, which gathered hidden-Web sources (Web forms) in different domains based on novel focused-crawler techniques.[15][16]
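A hedged sketch of the shared idea behind these crawlers follows; it is not any of the authors' published algorithms, and the submit_query callable and all names are illustrative. The strategy: seed the form with a few keywords, then mine each result page for frequent terms to use as later queries, so successive submissions reach records the earlier ones missed.

import re
from collections import Counter

def extract_candidates(page_text):
    """Count the words on a result page as candidate future queries."""
    return Counter(w.lower() for w in re.findall(r"[A-Za-z]{4,}", page_text))

def crawl_form(submit_query, seeds, max_queries=20):
    """Iteratively query a search form, mining new terms from results.

    submit_query: caller-supplied function mapping a keyword to the text
    of the corresponding result page (e.g., an HTTP form submission).
    """
    issued = set()
    candidates = Counter()
    frontier = list(seeds)
    pages = []
    while frontier and len(issued) < max_queries:
        term = frontier.pop(0)  # take the most promising candidate first
        if term in issued:
            continue
        issued.add(term)
        page = submit_query(term)
        pages.append(page)
        candidates += extract_candidates(page)
        # Frequent terms not yet issued are likely to match many records
        # in the underlying database.
        frontier = [t for t, _ in candidates.most_common(50) if t not in issued]
    return pages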
Commercial search engines have begun exploring alternative methods to crawl the deep Web. The Sitemap Protocol (first developed by Google) and mod oai are mechanisms that allow search engines and other interested parties to discover deep Web resources on particular Web servers. Both mechanisms allow Web servers to advertise the URLs that are accessible on them, thereby allowing automatic discovery of resources that are not directly linked to the surface Web. Google's deep Web surfacing system pre-computes submissions for each HTML form and adds the resulting HTML pages into the Google search engine index. The surfaced results account for a thousand queries per second to deep Web content.[17] In this system, the pre-computation of submissions is done using three algorithms: (1) selecting input values for text search inputs that accept keywords, (2) identifying inputs which accept only values of a specific type (e.g., date), and (3) selecting a small number of input combinations that generate URLs suitable for inclusion into the Web search index.
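The sketch below captures the flavor of such a surfacing system under explicit assumptions (a form submitted via GET, caller-supplied candidate values, and a simple cap on input combinations); it is not Google's implementation. It pre-computes a bounded set of submission URLs that an ordinary indexer could then fetch like static pages.

from itertools import islice, product
from urllib.parse import urlencode

def surface_urls(action_url, text_inputs, typed_inputs, max_urls=100):
    """Pre-compute form-submission URLs suitable for a search index.

    action_url:   the form's GET action (a placeholder here).
    text_inputs:  {field: [candidate keywords]} for free-text fields.
    typed_inputs: {field: [valid typed values]}, e.g. dates or years.
    """
    fields = {**text_inputs, **typed_inputs}
    names = sorted(fields)
    # Enumerate only a small number of input combinations (cf. the third
    # algorithm above); a real system would score combinations by how
    # useful their result pages are.
    combos = product(*(fields[n] for n in names))
    return [f"{action_url}?{urlencode(dict(zip(names, combo)))}"
            for combo in islice(combos, max_urls)]

# Hypothetical form over a used-car database.
print(surface_urls("http://example.org/cars",
                   text_inputs={"make": ["honda", "toyota"]},
                   typed_inputs={"year": ["2009", "2010"]}))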
Classifying resources
Automatically determining whether a Web resource is a member of the surface Web or the deep Web is difficult. If a resource is indexed by a search engine, it is not necessarily a member of the surface Web, because the resource could have been found using another method (e.g., the Sitemap Protocol, mod oai, or OAIster) instead of traditional crawling. If a search engine provides a backlink for a resource, one may assume that the resource is in the surface Web. Unfortunately, search engines do not always provide all backlinks to resources. Even if a backlink does exist, there is no way to determine whether the resource providing the link is itself in the surface Web without crawling all of the Web. Furthermore, a resource may reside in the surface Web even though a search engine has not yet found it. Therefore, given an arbitrary resource, we cannot know for certain whether it resides in the surface Web or the deep Web without a complete crawl of the Web.
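The argument above amounts to a one-sided decision rule, sketched below under the assumption that a search engine exposes two facts about a URL, whether it is indexed and whether any backlink to it is reported (both flags are illustrative): a reported backlink is taken as evidence of surface-Web membership, while its absence decides nothing.

from enum import Enum

class Membership(Enum):
    SURFACE = "surface Web"
    UNKNOWN = "undetermined (possibly deep Web)"

def classify(indexed, backlink_reported):
    """One-sided classification following the reasoning in the text."""
    if backlink_reported:
        # Per the text, a reported backlink lets one assume the resource
        # is reachable by traditional crawling: surface Web.
        return Membership.SURFACE
    # Being indexed alone is inconclusive (the URL may have arrived via
    # Sitemaps, mod oai, or OAIster), and a missing backlink report
    # proves nothing, since engines do not report all backlinks.
    return Membership.UNKNOWN

print(classify(indexed=True, backlink_reported=False))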