
Lecture 21: Web Search Issues and Algorithms


Page 1: Lecture 21: Web Search Issues and Algorithms

Lecture 21: Web Search Issues and Algorithms

Prof. Ray Larson & Prof. Marc Davis

UC Berkeley SIMS

Tuesday and Thursday 10:30 am - 12:00 pm

Fall 2003
http://www.sims.berkeley.edu/academics/courses/is202/f03/

SIMS 202: Information Organization and Retrieval

Page 2: Lecture 21: Web Search Issues and Algorithms

Lecture Overview

• Review
  – Evaluation of IR systems
    • Precision vs. Recall
    • Cutoff Points
    • Test Collections/TREC
    • Blair & Maron Study

• Web Crawling

• Web Search Engines and Algorithms

Credit for some of the slides in this lecture goes to Marti Hearst

Page 3: Lecture 21: Web Search Issues and Algorithms

What to Evaluate? Effectiveness

• What can be measured that reflects users’ ability to use system? (Cleverdon 66)
  – Coverage of information
  – Form of presentation
  – Effort required/ease of use
  – Time and space efficiency
  – Recall
    • Proportion of relevant material actually retrieved
  – Precision
    • Proportion of retrieved material actually relevant

Page 4: Lecture 21: Web Search Issues and Algorithms

Precision vs. Recall

Recall = |RelRetrieved| / |Rel in Collection|

Precision = |RelRetrieved| / |Retrieved|

(Diagram: the retrieved set and the relevant set shown as overlapping regions within the set of all docs)
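These definitions are easy to check on a toy example (not from the slides; the document IDs below are made up):

```python
# Toy example: precision and recall for a single query.
retrieved = {"d1", "d2", "d3", "d4"}       # what the system returned
relevant = {"d1", "d3", "d5", "d6", "d7"}  # what the judges marked relevant

rel_retrieved = retrieved & relevant       # RelRetrieved = the intersection

precision = len(rel_retrieved) / len(retrieved)  # 2/4 = 0.50
recall = len(rel_retrieved) / len(relevant)      # 2/5 = 0.40
print(f"precision={precision:.2f}, recall={recall:.2f}")
```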

Page 5: Lecture 21: Web Search Issues and Algorithms

Retrieved vs. Relevant Documents

Very high precision, very low recall


Page 6: Lecture 21: Web Search Issues and Algorithms

Retrieved vs. Relevant Documents

Very low precision, very low recall (0 in fact)


Page 7: Lecture 21: Web Search Issues and Algorithms

Retrieved vs. Relevant Documents

High recall, but low precision


Page 8: Lecture 21: Web Search Issues and Algorithms

Retrieved vs. Relevant Documents

High precision, high recall (at last!)


Page 9: Lecture 21: Web Search Issues and Algorithms

Precision/Recall Curves

• Difficult to determine which of these two hypothetical results is better:

(Plot: precision on the y-axis against recall on the x-axis, with points for each of the two hypothetical result sets)

Page 10: Lecture 21: Web Search Issues and Algorithms

Test Collections

• Cranfield 2 – 1400 Documents, 221 Queries
  – 200 Documents, 42 Queries
• INSPEC – 542 Documents, 97 Queries
• UKCIS – >10000 Documents, multiple sets, 193 Queries
• ADI – 82 Documents, 35 Queries
• CACM – 3204 Documents, 50 Queries
• CISI – 1460 Documents, 35 Queries
• MEDLARS (Salton) – 273 Documents, 18 Queries

Page 11: Lecture 21: Web Search Issues and Algorithms

TREC

• Text REtrieval Conference/Competition
  – Run by NIST (National Institute of Standards & Technology)
  – 1999 was the 8th year – 9th TREC in early November
• Collection: >6 Gigabytes (5 CD-ROMs), >1.5 Million Docs
  – Newswire & full-text news (AP, WSJ, Ziff, FT)
  – Government documents (Federal Register, Congressional Record)
  – Radio transcripts (FBIS)
  – Web “subsets” (separate “Large Web” collection with 18.5 Million pages of Web data – 100 GBytes)
  – Patents

Page 12: Lecture 21: Web Search Issues and Algorithms

TREC (cont.)

• Queries + Relevance Judgments
  – Queries devised and judged by “Information Specialists”
  – Relevance judgments done only for those documents retrieved, not the entire collection!
• Competition
  – Various research and commercial groups compete (TREC 6 had 51, TREC 7 had 56, TREC 8 had 66)
  – Results judged on precision and recall, going up to a recall level of 1000 documents
• Following slides from TREC overviews by Ellen Voorhees of NIST

Page 13: Lecture 21: Web Search Issues and Algorithms

Sample TREC Query (Topic)

<num> Number: 168
<title> Topic: Financing AMTRAK

<desc> Description:
A document will address the role of the Federal Government in financing the operation of the National Railroad Transportation Corporation (AMTRAK)

<narr> Narrative:
A relevant document must provide information on the government’s responsibility to make AMTRAK an economically viable entity. It could also discuss the privatization of AMTRAK as an alternative to continuing government subsidies. Documents comparing government subsidies given to air and bus transportation with those provided to AMTRAK would also be relevant.

Pages 14–17: Lecture 21: Web Search Issues and Algorithms

[Slides 14–17 are image-only slides from the TREC overviews by Ellen Voorhees of NIST; no text survives in this transcript]

Lecture Overview

• Review
  – Evaluation of IR systems
    • Precision vs. Recall
    • Cutoff Points
    • Test Collections/TREC
    • Blair & Maron Study

• Web Crawling

• Web Search Engines and Algorithms

Credit for some of the slides in this lecture goes to Marti Hearst

Page 18: Lecture 21: Web Search Issues and Algorithms

Standard Web Search Engine Architecture

(Diagram: crawl the web → check for duplicates, store the documents → create an inverted index; search engine servers take the user query, consult the inverted index, and show results (DocIds) to the user)
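The diagram reads as a pipeline; here is a minimal sketch of the same flow, with a stub fetcher standing in for the crawler (the function names and sample pages are hypothetical, not from the slides):

```python
# Stub standing in for "crawl the web": a real system would issue HTTP GETs.
def fetch(url):
    sample_pages = {
        "u1": "web search engines crawl the web",
        "u2": "an inverted index maps words to documents",
        "u3": "web search engines crawl the web",   # duplicate of u1
    }
    return sample_pages.get(url, "")

def build(urls):
    seen_hashes, store = set(), {}
    for doc_id, url in enumerate(urls):
        text = fetch(url)
        h = hash(text)                      # "check for duplicates"
        if not text or h in seen_hashes:
            continue
        seen_hashes.add(h)
        store[doc_id] = text                # "store the documents"
    index = {}                              # "create an inverted index"
    for doc_id, text in store.items():
        for term in text.lower().split():
            index.setdefault(term, set()).add(doc_id)
    return index

def serve(index, query):
    # "Search engine servers": intersect posting lists, return DocIds.
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

index = build(["u1", "u2", "u3"])
print(serve(index, "inverted index"))   # {1}
```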

Page 19: Lecture 21: Web Search Issues and Algorithms

Web Crawling

• How do the web search engines get all of the items they index?

• Main idea:
  – Start with known sites
  – Record information for these sites
  – Follow the links from each site
  – Record information found at new sites
  – Repeat

Page 20: Lecture 21: Web Search Issues and Algorithms

Web Crawlers

• How do the web search engines get all of the items they index?

• More precisely:
  – Put a set of known sites on a queue
  – Repeat the following until the queue is empty:
    • Take the first page off of the queue
    • If this page has not yet been processed:
      – Record the information found on this page
        » Positions of words, links going out, etc.
      – Add each link on the current page to the queue
      – Record that this page has been processed

• In what order should the links be followed? (A runnable sketch of this loop follows.)
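The FIFO queue gives a breadth-first order; here is the loop above in runnable form, with a small in-memory link graph standing in for real pages (the graph is hypothetical):

```python
from collections import deque

# Hypothetical link structure: page -> pages it links to.
links = {"A": ["B", "C"], "B": ["C", "D"], "C": ["A"], "D": []}

def crawl(seeds):
    queue = deque(seeds)                   # put a set of known sites on a queue
    processed = set()
    while queue:                           # repeat until the queue is empty
        page = queue.popleft()             # take the first page off of the queue
        if page in processed:              # skip pages already processed
            continue
        print("recording", page)           # record words, outgoing links, etc.
        queue.extend(links.get(page, []))  # add each link on the page to the queue
        processed.add(page)                # record that this page has been processed

crawl(["A"])   # visits A, B, C, D (breadth-first, since the queue is FIFO)
```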

Page 21: Lecture 21: Web Search Issues and Algorithms

Page Visit Order

• Animated examples of breadth-first vs. depth-first search on trees:
  – http://www.rci.rutgers.edu/~cfs/472_html/AI_SEARCH/ExhaustiveSearch.html

Structure to be traversed

Page 22: Lecture 21: Web Search Issues and Algorithms

Page Visit Order

• Animated examples of breadth-first vs. depth-first search on trees:
  – http://www.rci.rutgers.edu/~cfs/472_html/AI_SEARCH/ExhaustiveSearch.html

Breadth-first search (must be in presentation mode to see this animation)

Page 23: Lecture 21: Web Search Issues and Algorithms

Page Visit Order

• Animated examples of breadth-first vs. depth-first search on trees:
  – http://www.rci.rutgers.edu/~cfs/472_html/AI_SEARCH/ExhaustiveSearch.html

Depth-first search (must be in presentation mode to see this animation)

Page 24: Lecture 21: Web Search Issues and Algorithms

Page Visit Order

• Animated examples of breadth-first vs. depth-first search on trees:
  – http://www.rci.rutgers.edu/~cfs/472_html/AI_SEARCH/ExhaustiveSearch.html
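Since the animations only run in presentation mode, here is a small substitute showing the two visit orders on the same hypothetical tree:

```python
from collections import deque

# A small tree: node -> children.
tree = {1: [2, 3], 2: [4, 5], 3: [6, 7], 4: [], 5: [], 6: [], 7: []}

def bfs(root):
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()                  # FIFO: finish each level first
        order.append(node)
        queue.extend(tree[node])
    return order

def dfs(root):
    order, stack = [], [root]
    while stack:
        node = stack.pop()                      # LIFO: dive down one branch first
        order.append(node)
        stack.extend(reversed(tree[node]))      # reversed so children pop left-to-right
    return order

print(bfs(1))  # [1, 2, 3, 4, 5, 6, 7] – level by level
print(dfs(1))  # [1, 2, 4, 5, 3, 6, 7] – branch by branch
```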

Page 25: Lecture 21: Web Search Issues and Algorithms

Sites Are Complex Graphs, Not Just Trees

(Diagram: six sites, each containing several pages; links run both among pages within a site and across sites, so the structure being traversed is a graph with cycles, not a tree)

Page 26: Lecture 21: Web Search Issues and Algorithms

Web Crawling Issues

• Keep out signs
  – A file called robots.txt tells the crawler which directories are off limits
• Freshness
  – Figure out which pages change often
  – Recrawl these often
• Duplicates, virtual hosts, etc.
  – Convert page contents with a hash function
  – Compare new pages to the hash table
• Lots of problems
  – Server unavailable
  – Incorrect HTML
  – Missing links
  – Infinite loops
• Web crawling is difficult to do robustly! (Two of these issues are sketched in code below.)
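Two of these issues have standard-library support in Python; a sketch, assuming www.example.com is reachable and "MyCrawler" is our (hypothetical) user-agent string:

```python
import hashlib
from urllib.robotparser import RobotFileParser

# "Keep out signs": consult robots.txt before fetching a URL.
rp = RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
rp.read()   # fetches and parses robots.txt (network access happens here)
allowed = rp.can_fetch("MyCrawler", "http://www.example.com/some/page.html")
print(allowed)

# "Duplicates": hash page contents and compare against pages already seen.
seen_digests = set()

def is_duplicate(page_bytes):
    digest = hashlib.sha1(page_bytes).hexdigest()
    if digest in seen_digests:
        return True
    seen_digests.add(digest)
    return False

print(is_duplicate(b"<html>hello</html>"))  # False – first time seen
print(is_duplicate(b"<html>hello</html>"))  # True  – exact duplicate
```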

Page 27: Lecture 21: Web Search Issues and Algorithms

Searching the Web

• Web Directories versus Search Engines

• Some statistics about Web searching

• Challenges for Web Searching

• Search Engines
  – Crawling
  – Indexing
  – Querying

Page 28: Lecture 21: Web Search Issues and Algorithms

Directories vs. Search Engines

• Directories
  – Hand-selected sites
  – Search over the contents of the descriptions of the pages
  – Organized in advance into categories

• Search Engines
  – All pages in all sites
  – Search over the contents of the pages themselves
  – Organized after the query by relevance rankings or other scores

Page 29: Lecture 21: Web Search Issues and Algorithms

Search Engines vs. Internal Engines

• Not long ago HotBot, GoTo, Yahoo and Microsoft were all powered by Inktomi

• Today Google is the search engine behind many other search services (such as Yahoo and AOL’s search service)

Page 30: Lecture 21: Web Search Issues and Algorithms

Statistics from Inktomi

• Statistics from Inktomi, August 2000, for one client, one week
  – Total # queries: 1315040
  – Number of repeated queries: 771085
  – Number of queries with repeated words: 12301
  – Average words/query: 2.39
  – Query type: All words: 0.3036; Any words: 0.6886; Some words: 0.0078
  – Boolean: 0.0015 (0.9777 AND / 0.0252 OR / 0.0054 NOT)
  – Phrase searches: 0.198
  – URL searches: 0.066
  – URL searches w/ http: 0.000
  – email searches: 0.001
  – Wildcards: 0.0011 (0.7042 '?'s)
    • frac '?' at end of query: 0.6753
    • interrogatives when '?' at end: 0.8456
    • composed of: who: 0.0783; what: 0.2835; when: 0.0139; why: 0.0052; how: 0.2174; where: 0.1826; where-MIS: 0.0000; can, etc.: 0.0139; do(es)/did: 0.0

Page 31: Lecture 21: Web Search Issues and Algorithms

What Do People Search for on the Web?

• Topics
  – Genealogy/Public Figure: 12%
  – Computer related: 12%
  – Business: 12%
  – Entertainment: 8%
  – Medical: 8%
  – Politics & Government: 7%
  – News: 7%
  – Hobbies: 6%
  – General info/surfing: 6%
  – Science: 6%
  – Travel: 5%
  – Arts/education/shopping/images: 14%
(from Spink et al. 98 study)

Page 32: Lecture 21: Web Search Issues and Algorithms

Visits (last year)

Page 33: Lecture 21: Web Search Issues and Algorithms

Visits (this year)

Page 34: Lecture 21: Web Search Issues and Algorithms

Directory Sizes (Last Year)

Page 35: Lecture 21: Web Search Issues and Algorithms

Directory Sizes

Pages 36–37: Lecture 21: Web Search Issues and Algorithms

[Slides 36–37 are image-only; no text survives in this transcript]

Page 38: Lecture 21: Web Search Issues and Algorithms

Searches Per Day (2000)

Page 39: Lecture 21: Web Search Issues and Algorithms

Searches Per Day (2001)

Page 40: Lecture 21: Web Search Issues and Algorithms

Searches per day (current)

• Don’t have exact numbers for Google, but they state in their “press” section that they handle 200 Million searches per day

• They index over 3 Billion web pages and 3 trillion words
  – http://www.google.com/press/funfacts.html

Page 41: Lecture 21: Web Search Issues and Algorithms

Challenges for Web Searching: Data

• Distributed data
• Volatile data/“freshness”: 40% of the web changes every month
• Exponential growth
• Unstructured and redundant data: 30% of web pages are near duplicates
• Unedited data
• Multiple formats
• Commercial biases
• Hidden data

Page 42: Lecture 21: Web Search Issues and Algorithms

Challenges for Web Searching: Users

• Users unfamiliar with search engine interfaces (e.g., Does the query “apples oranges” mean the same thing on all of the search engines?)

• Users unfamiliar with the logical view of the data (e.g., Is a search for “Oranges” the same thing as a search for “oranges”?)

• Many different kinds of users

Page 43: Lecture 21: Web Search Issues and Algorithms

Web Search Queries

• Web search queries are SHORT
  – ~2.4 words on average (Aug 2000)
  – Has increased; was 1.7 (~1997)
• User Expectations
  – Many say “the first item shown should be what I want to see”!
  – This works if the user has the most popular/common notion in mind

Page 44: Lecture 21: Web Search Issues and Algorithms

Search Engines

• Crawling

• Indexing

• Querying

Page 45: Lecture 21: Web Search Issues and Algorithms

Web Search Engine Layers

From description of the FAST search engine, by Knut Risvik
http://www.infonortics.com/searchengines/sh00/risvik_files/frame.htm

Page 46: Lecture 21: Web Search Issues and Algorithms

Standard Web Search Engine Architecture

(Diagram, repeated from slide 18: crawl the web → check for duplicates, store the documents → create an inverted index; search engine servers take the user query, consult the inverted index, and show results (DocIds) to the user)

Page 47: Lecture 21: Web Search Issues and Algorithms

More detailed architecture, from Brin & Page 98. Only covers the preprocessing in detail, not the query serving.

Page 48: Lecture 21: Web Search Issues and Algorithms

Indexes for Web Search Engines

• Inverted indexes are still used, even though the web is so huge
• Most current web search systems partition the indexes across different machines
  – Each machine handles different parts of the data (Google uses thousands of PC-class processors)
• Other systems duplicate the data across many machines
  – Queries are distributed among the machines
• Most do a combination of these (the partitioned case is sketched below)
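A toy sketch of the partitioned case: each “machine” holds an inverted index over its own slice of the collection, and a query is answered by merging the slices (the documents and the two-way split are made up):

```python
# Hypothetical collection, split across two "machines" by document id.
docs = {
    0: "web search engines",
    1: "search algorithms",
    2: "web crawling",
    3: "ranking algorithms",
}
partitions = [{0, 1}, {2, 3}]     # doc ids assigned to machine 0 and machine 1

def build_index(doc_ids):
    index = {}
    for doc_id in doc_ids:
        for term in docs[doc_id].split():
            index.setdefault(term, set()).add(doc_id)
    return index

machines = [build_index(p) for p in partitions]

def query(term):
    # Each machine answers from its own slice; a coordinator merges the hits.
    hits = set()
    for index in machines:
        hits |= index.get(term, set())
    return hits

print(query("web"))          # {0, 2} – one hit from each partition
print(query("algorithms"))   # {1, 3}
```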

Page 49: Lecture 21: Web Search Issues and Algorithms

Search Engine Querying

In this example, the data for the pages is partitioned across machines. Additionally, each partition is allocated multiple machines to handle the queries.

Each row can handle 120 queries per second. Each column can handle 7M pages. To handle more queries, add another row.

From description of the FAST search engine, by Knut Risvik
http://www.infonortics.com/searchengines/sh00/risvik_files/frame.htm

Page 50: Lecture 21: Web Search Issues and Algorithms

Querying: Cascading Allocation of CPUs

• A variation on this that produces a cost savings (sketched below):
  – Put high-quality/common pages on many machines
  – Put lower-quality/less common pages on fewer machines
  – Query goes to high-quality machines first
  – If no hits found there, go to other machines
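A sketch of that cascade with two hypothetical tiers of index servers:

```python
def cascaded_query(term, tiers):
    # tiers[0]: high-quality/common pages on many machines, queried first;
    # later tiers: lower-quality/less common pages on fewer machines.
    for tier in tiers:
        hits = tier.get(term, set())
        if hits:                      # stop at the first tier with any hits
            return hits
    return set()

tiers = [
    {"toyota": {"toyota.example"}},                 # high-quality tier
    {"toyota": {"fanclub.example"}, "obscureterm": {"rarepage.example"}},
]
print(cascaded_query("toyota", tiers))       # answered by the first tier
print(cascaded_query("obscureterm", tiers))  # falls through to the second tier
```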

Page 51: Lecture 21: Web Search Issues and Algorithms

Google

• Google maintains (currently) the world’s largest Linux cluster (over 15,000 servers)
• These are partitioned between index servers and page servers
  – Index servers resolve the queries (massively parallel processing)
  – Page servers deliver the results of the queries
• Over 3 Billion web pages are indexed and served by Google

Page 52: Lecture 21: Web Search Issues and Algorithms

Search Engine Indexes

• Starting Points for Users include:
  – Manually compiled lists
    • Directories
  – Page “popularity”
    • Frequently visited pages (in general)
    • Frequently visited pages as a result of a query
  – Link “co-citation”
    • Which sites are linked to by other sites?

Page 53: Lecture 21: Web Search Issues and Algorithms

Starting Points: What is Really Being Used?

• Today’s search engines combine these methods in various ways
  – Integration of Directories
    • Today most web search engines integrate categories into the results listings
    • Lycos, MSN, Google
  – Link analysis
    • Google uses it; others are also using it
    • Words on the links seem to be especially useful
  – Page popularity
    • Many use DirectHit’s popularity rankings

Page 54: Lecture 21: Web Search Issues and Algorithms

Web Page Ranking

• Varies by search engine
  – Pretty messy in many cases
  – Details usually proprietary and fluctuating
• Combining subsets of the following (a toy combination is sketched below):
  – Term frequencies
  – Term proximities
  – Term position (title, top of page, etc.)
  – Term characteristics (boldface, capitalized, etc.)
  – Link analysis information
  – Category information
  – Popularity information
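No engine publishes its formula, but a generic way to combine such signals is a weighted sum; the feature names and weights below are invented for illustration:

```python
# Invented weights – real engines keep these proprietary and tune them often.
WEIGHTS = {
    "term_freq": 1.0,    # term frequencies
    "proximity": 0.5,    # term proximities
    "in_title": 2.0,     # term position
    "link_score": 3.0,   # link analysis information
    "popularity": 1.5,   # popularity information
}

def score(features):
    """Combine per-document signals (each scaled to [0, 1]) into one number."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

doc = {"term_freq": 0.4, "proximity": 0.7, "in_title": 1.0,
       "link_score": 0.2, "popularity": 0.1}
print(score(doc))   # 3.5
```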

Page 55: Lecture 21: Web Search Issues and Algorithms

Ranking: Hearst ‘96

• Proximity search can help get high-precision results if >1 term
  – Combine Boolean and passage-level proximity
  – Shows significant improvements when retrieving top 5, 10, 20, 30 documents
  – Results reproduced by Mitra et al. 98
  – Google uses something similar

Page 56: Lecture 21: Web Search Issues and Algorithms

Ranking: Link Analysis

• Assumptions:
  – If the pages pointing to this page are good, then this is also a good page
  – The words on the links pointing to this page are useful indicators of what this page is about
  – References: Page et al. 98, Kleinberg 98

Page 57: Lecture 21: Web Search Issues and Algorithms

Ranking: Link Analysis

• Why does this work?
  – The official Toyota site will be linked to by lots of other official (or high-quality) sites
  – The best Toyota fan-club site probably also has many links pointing to it
  – Less high-quality sites do not have as many high-quality sites linking to them

Page 58: Lecture 21: Web Search Issues and Algorithms

Ranking: PageRank

• Google uses PageRank
• We assume page A has pages T1...Tn which point to it (i.e., are citations). The parameter d is a damping factor which can be set between 0 and 1; d is usually set to 0.85. C(A) is defined as the number of links going out of page A. The PageRank of a page A is given as follows:

  PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

• Note that the PageRanks form a probability distribution over web pages, so the sum of all web pages’ PageRanks will be one
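The recurrence can be computed by simple iteration to a fixed point; a sketch on a tiny hypothetical four-page graph:

```python
# links[p] = pages that p points to; this four-page graph is made up.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}

def pagerank(links, d=0.85, iterations=50):
    pr = {page: 1.0 for page in links}          # initial guess
    for _ in range(iterations):
        new_pr = {}
        for page in links:
            backlinks = [q for q in links if page in links[q]]   # T1...Tn
            new_pr[page] = (1 - d) + d * sum(pr[q] / len(links[q])
                                             for q in backlinks)
        pr = new_pr
    return pr

print(pagerank(links))
# A and C end up with the highest ranks: C collects links from A, B, and D,
# and A is C's only out-link.
```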

Page 59: Lecture 21: Web Search Issues and Algorithms

PageRank

(Diagram: pages T1–T8 point to page A, and A points out to pages X1 and X2. Illustrative values: Pr(T1) = .725, Pr(T2) through Pr(T7) = 1, Pr(T8) = 2.46625, Pr(A) = 4.2544375)

Note: these are not real PageRanks, since they include values >= 1

Page 60: Lecture 21: Web Search Issues and Algorithms

PageRank

• Similar to calculations used in scientific citation analysis (e.g., Garfield et al.) and social network analysis (e.g., Wasserman et al.)
• Similar to other work on ranking (e.g., the hubs and authorities of Kleinberg et al.)
• Computation is an iterative algorithm and converges to the principal eigenvector of the link matrix

Page 61: Lecture 21: Web Search Issues and Algorithms

Jesse Mendelsohn Questions

• Many companies today complain that Google hurts their business by giving their web sites low PageRanks. Other companies have emerged that claim to be able to manipulate Google's PageRank. Do you think PageRank is fair? Is a web site's usefulness and relevance really proportional to where and how often it is cited? Also, does the ability to manipulate PageRank degrade its reliability?

Page 62: Lecture 21: Web Search Issues and Algorithms

Jesse Mendelsohn Questions

• What are the moral ramifications of Google's system of web crawling? The paper mentions that people have gotten upset that their pages have been indexed by Google without permission, but brushes this criticism aside. Is this an invasion of privacy? If so, should Google's crawler be given enough "intelligence" to read warnings about indexing on web sites?

Page 63: Lecture 21: Web Search Issues and Algorithms

Jesse Mendelsohn Questions

• Google purports to be very good at returning relevant results for single-topic queries or for single words (such as "Bill Clinton" or "CD-ROMs") and at retrieving specific relevant information, but what if you are searching for a wide area of information on a vast topic? For example, suppose you are writing a paper on an extremely vague subject such as "South America." Is Google's search more or less efficient than, say, Yahoo!'s system of narrowing categories?

Page 64: Lecture 21: Web Search Issues and Algorithms

Yuri Takhteyev Questions

• ???

Page 65: Lecture 21: Web Search Issues and Algorithms

Next Time

• User Interfaces for Information Retrieval
• Readings/Discussion:
  – Modern Information Retrieval, Chapter 10.1–10.3 (Marti Hearst) – Alison
  – Vannevar Bush
    • As We May Think – Jeff (http://www.theatlantic.com/unbound/flashbks/computer/bushf.htm)
    • Memex II – denise
    • Memex Revisited – ryan
  – Don Norman, Why Interfaces Don’t Work – danah