AH 2006 – Recommender Systems 20 June 2006
© 2006 Joseph A. Konstan 1
UNIVERSITY OF MINNESOTA
Recommender Systems: Advanced Concepts in Research and Practice
Joseph A. Konstan, University of Minnesota
konstan@cs.umn.edu
http://www.grouplens.org
Konstan: Recommender Systems, AH 2006
A Bit of History
• Ants, Cavemen, and Early Recommender Systems
  The emergence of critics
• Information Filtering and User Modeling
• Manual Collaborative Filtering
• Automated Collaborative Filtering
• Social Navigation and other approaches
• The Commercial Era
Historical Challenges
• Collecting Opinion and Experience Data
• Finding the Relevant Data for a Purpose
• Presenting the Data in a Useful Way
Introductions
• Me
• You
• This tutorial
About Me
• Professor of Computer Science, University of Minnesota
• Background: Human-Computer Interaction
• Recommender Systems Experience
  Started on the GroupLens project in late 1994
  Co-founded Net Perceptions
  Still actively working on RS research
About You
• Name
• What you do
• Who you work for / where you study
• Briefly: your experience with recommender systems; one key thing you want to get out of this tutorial
About This Tutorial
• Background: I have taught an introductory tutorial on RS about a dozen times (often with John Riedl or Anthony Jameson)
• The idea: a venue to go beyond the introductory material and explore newer knowledge and current problems
  Inherently biased by what I find interesting (for better or for worse)
  Assumes basic understanding of RS (e.g., k-nearest-neighbor collaborative filtering; if not, let's fix that!)
Goals of this Tutorial
• To understand the state of research and practice in recommender systems: algorithms, interface design, evaluation
• To explore the future of user-centered recommender system design
• To have fun while doing so!
Where do Recommenders Fail?
• Your experiences: Where have recommenders really missed the mark? Where have they looked dumb? And why?
My Examples
• Recommending without enough data (weak confidence)
• Recommending items ignorant of context
• Ignoring overall interest or balance
• Just plain wrong!
Amazon Explanation [screenshot]

Amazon.com Tolkien [screenshot]
Amazon.com Tolkien 2 [screenshot]
Sauron Defeated, by J.R.R. Tolkien (Chris Tolkien, Editor)
The War of the Ring, by J.R.R. Tolkien (Chris Tolkien, Editor)
Treason of Isengard, by J.R.R. Tolkien (Chris Tolkien, Editor)
Shaping of Middle Earth, by J.R.R. Tolkien (Chris Tolkien, Editor)

Yahoo! redundancy [screenshot]
Screenshots [recommendation-list screenshots: items labeled "Recommender Systems", "Multimedia toolkits", "User Interface Design"]
Screenshots [recommendation-list screenshots: items labeled "Recommender Systems", "Multimedia toolkits", "Multimedia", "User-based CF"]

Screenshots [recommendation-list screenshots: items labeled "Operating Systems", "Program Behavior", "Power Analysis", "Data Mining", "Naive Bayes"]
More Generally …
• Recommenders fail when:
  they lack awareness of their own knowledge
  they don't consider the context of recommendation
  they don't consider the user's goals and needs
How do we Evaluate Recommenders -- today?
• Industry outcomes
  Add-on sales
  Click-through rates
Real-world Experience
• Large international catalog retailer
  17% hit rate, 23% acceptance rate in call center
• Medium European outbound call center
  17% hit rate, 6.7% acceptance rate from an outbound telemarketing call
  Average item sold for $350.00
  Items were in an over-stocked electronics category and sold out within 3 weeks
• Medium American online toy store (e-mail campaign)
  19% click-through rate vs. 10% industry average
  14.3% conversion to sale vs. 2.5% industry average
How do we Evaluate Recommenders -- today?
• Industry outcomes: add-on sales, click-through rates
• Research measures: user satisfaction
• Metrics: to anticipate the above beforehand (offline)
Evaluating Recommendations
• Prediction Accuracy
  MAE, MSE
• Decision-Support Accuracy
  Reversals, ROC
• Recommendation Quality
  Top-N measures (e.g., Breese score)
• Item-Set Coverage

J. Herlocker et al. Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems 22(1), Jan. 2004.
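As a concrete illustration of two of the metric families above, here is a minimal sketch computing MAE (prediction accuracy) and precision@N (top-N recommendation quality). All ratings and item names below are made up for illustration.

```python
# Sketch: MAE measures prediction accuracy; precision@N measures top-N
# recommendation quality. All data here is hypothetical.

def mae(predicted, actual):
    """Mean Absolute Error over paired (predicted, actual) ratings."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def precision_at_n(recommended, relevant, n):
    """Fraction of the top-n recommended items the user actually liked."""
    top_n = recommended[:n]
    return sum(1 for item in top_n if item in relevant) / n

predicted = [4.5, 3.0, 2.0, 5.0]
actual = [4.0, 3.5, 1.0, 4.5]
print(mae(predicted, actual))                # 0.625

recs = ["A", "B", "C", "D", "E"]
liked = {"A", "C", "F"}
print(precision_at_n(recs, liked, 5))        # 0.4
```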
What's Wrong with This Approach?
• What is the purpose of recommenders?
  to help people find things they don't already know about, and that they'll like/value/use
  to serve as a useful advisor
• What are we measuring, mostly?
  how well recommenders perform at finding things users already know
  performance on individual recommendations
There are Alternatives!
• The "easy" alternative
  test on real users in real situations
  have them consume and evaluate
• The "hard" alternative
  extend our knowledge and understanding of metrics
UNIVERSITY OF MINNESOTA
Extending our Knowledge …
From Items to Lists
• Do users really experience recommendations in isolation?

C. Ziegler et al. "Improving Recommendation Lists through Topic Diversification." Proc. WWW 2005.
Making Good Lists
• Individually good recommendations do not equal a good recommendation list
• Other factors are important
  Diversity
  Affirmation
  Appropriateness
• Called the "Portfolio Effect" [Ali and van Stam, 2004]
Topic Diversification
• Re-order results in a recommendation list
• Add the item with least similarity to all items already on the list
• Weight with a 'diversification factor'
• Ran experiments to test effects
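The greedy re-ordering idea can be sketched as follows. This is a simplified reading of Ziegler et al.'s approach: each remaining candidate's accuracy rank is merged with its dissimilarity rank under a diversification factor theta, and the best merged rank wins. The similarity function and item names are stand-ins, not from the paper.

```python
# Sketch of greedy topic diversification: repeatedly pick the candidate
# whose merged rank (accuracy rank vs. dissimilarity-to-list rank) is best.

def diversify(candidates, similarity, theta=0.5, n=10):
    """candidates: items ordered by predicted accuracy, best first.
    similarity(a, b) -> float; theta: diversification factor in [0, 1]."""
    result = [candidates[0]]                  # top item always kept
    pool = candidates[1:]
    while pool and len(result) < n:
        # rank remaining candidates by similarity to the list so far
        # (least similar first)
        by_dissim = sorted(pool,
                           key=lambda it: max(similarity(it, r) for r in result))

        def merged_rank(item):
            return ((1 - theta) * pool.index(item)
                    + theta * by_dissim.index(item))

        best = min(pool, key=merged_rank)
        result.append(best)
        pool = [it for it in pool if it != best]
    return result

# Toy similarity: items sharing a first letter are "same topic"
sim = lambda a, b: 1.0 if a[0] == b[0] else 0.0
print(diversify(["A1", "A2", "A3", "B1", "C1"], sim, theta=1.0, n=5))
# full diversification spreads topics: ['A1', 'B1', 'C1', 'A2', 'A3']
```

With theta=0 the original accuracy ordering is preserved; raising theta trades individual accuracy for topical spread across the list.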
Experimental Design
• Books from BookCrossing.com
• Algorithms
  Item-based CF
  User-based CF
• Experiments
  Online user surveys
  2,125 users each saw one list of 10 recommendations
Online Results [chart]
Diversity is Important
• User satisfaction is more complicated than accuracy alone
• List makeup is important to users
• A 30% change was enough to alter user opinion
• Change was not equal across algorithms
UNIVERSITY OF MINNESOTA
Next Steps …
WARNING: Work in Progress
Human-Recommender Interaction
• Three premises:
  Users perceive recommendation quality in context; users evaluate lists
  Users develop opinions of recommenders based on interactions over time
  Users have an information need and come to a recommender as part of their information-seeking behavior

S. McNee et al. "Making Recommendations Better: An Analytic Model for Human-Recommender Interaction." Ext. Abstracts CHI 2006.
HRI
• A language for communicating user expectations and system behavior
• A process model for customizing recommenders to user needs
• An analytic theory to help designers focus on user needs
HRI Pillars and Aspects [diagram]
Recommendation Dialog
• The individual recommender interaction
• Historical Aspects
  Correctness, Quantity, Spread
• New Aspects
  Transparency, Saliency, Serendipity, Usefulness, Usability
Recommendation Personality
• Experience over repeated interactions
• Nature of recommendations
  Personalization, Boldness, Freshness, Risk
• Progression over time
  Adaptability, Pigeonholing
• Relationship
  Affirmation, Trust
Information-Seeking Task
• One of the current limits of HRI
• Concreteness
• Compromise
• Appropriateness of Recommender
• Role of Recommender
• Expectation of Usefulness
HRI Process Model [diagram]
• Makes HRI constructive: links users/tasks to algorithms
• But needs new metrics
Developing New Metrics
• Identify candidate metrics
• Benchmark a variety of algorithms and datasets
  establish that the metric can distinguish algorithms
• Establish links to HRI aspects
  definitional links; user studies
• Detailed examples: Ratability, Boldness, Adaptability
Metric Experimental Design
• ACM DL Dataset (thanks to ACM!)
  24,000 papers
  Citations, titles, authors, and abstracts
  High quality
• Algorithms
  User-based CF
  Item-based CF
  Naïve Bayes classifier
  TF/IDF content-based
  Co-citation
  Local graph search
  Hybrid variants
Ratability
• Probability a user will rate a given item
  "Obviousness"
  Based on the current user model
  Independent of liking the item
• Many possible implementations
  e.g., Naïve Bayes classifier
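One minimal way to realize the naive-Bayes idea above is a sketch, not the authors' implementation: estimate, from other users' profiles, the smoothed log-probability that someone holding this user's rated items would also rate the candidate item. The profile data below is invented.

```python
import math

# Sketch: a crude ratability score. For a candidate item, accumulate the
# Laplace-smoothed log-probability that users who rated the candidate also
# rated each of this user's profile items, naively assuming independence.

def ratability(candidate, profile, all_profiles, alpha=1.0):
    """profile: set of items this user rated; all_profiles: list of sets."""
    raters = [p for p in all_profiles if candidate in p]
    score = 0.0
    for item in profile:
        co = sum(1 for p in raters if item in p)
        # P(profile item rated | candidate rated), Laplace-smoothed
        score += math.log((co + alpha) / (len(raters) + 2 * alpha))
    # prior: how commonly the candidate is rated at all
    score += math.log((len(raters) + alpha) / (len(all_profiles) + 2 * alpha))
    return score

profiles = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "d"}]
# "b" co-occurs with "a" far more often than "d" does, so it scores higher
print(ratability("b", {"a"}, profiles) > ratability("d", {"a"}, profiles))  # True
```

A higher score marks the item as more "obvious" for this user, regardless of whether they would like it.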
Ratability Results [chart: mean ratability (-120 to 0) for Local Graph, Bayes, Item-based (50 nbrs), TFIDF, and User-based (50 nbrs), at list lengths top-10 through top-40]
Boldness
• Measure of "extreme predictions"
  Only defined on an explicit rating scale
  Choose "extreme values"
  Count appearances of "extremes" and normalize
• For example, the MovieLens movie recommender
  0.5 to 5.0 star scale, half-star increments
  Choose 0.5 and 5.0 as "extreme"
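The counting idea above can be sketched as follows. Normalizing against the rate at which extremes occur in the underlying ratings is an assumption; the slides say only "normalize". The ratings below are invented.

```python
# Sketch: boldness as the ratio of extreme predictions to the base rate of
# extreme values in the actual ratings. The normalization is an assumption.

def boldness(predictions, ratings, extremes=(0.5, 5.0)):
    pred_rate = sum(p in extremes for p in predictions) / len(predictions)
    base_rate = sum(r in extremes for r in ratings) / len(ratings)
    return pred_rate / base_rate if base_rate else float("inf")

ratings = [0.5, 3.0, 4.0, 5.0, 2.5, 3.5, 4.5, 3.0, 5.0, 1.0]  # 3/10 extreme
preds = [5.0, 4.5, 5.0, 0.5, 3.0]                              # 3/5 extreme
print(boldness(preds, ratings))   # 2.0: twice as many extremes as expected
```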
Boldness Results [chart: ratio to expected boldness (0 to 4.5) for item-based (50 nbrs) and user-based (30 nbrs) CF, at top-10 through top-40 and all items]
Adaptability
• Measure of how an algorithm changes in response to changes in the user model
  How do users grow in the system?
• Perturb a user model with a model from another random user
  50% each
  See the quality of the new recommendation lists
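The perturbation test above can be sketched as follows. The recommender is any function from a profile to a ranked list, and scoring the change by top-N list overlap is an assumption; the slides do not specify how list quality is compared.

```python
import random

# Sketch: adaptability via profile perturbation. Mix the target user's
# profile 50/50 with another user's, re-run the recommender, and measure
# how much of the top-N list changed.

def adaptability(user, donor, recommend, n=10, seed=0):
    """user, donor: sets of items; recommend: profile -> ranked item list."""
    rng = random.Random(seed)
    original = recommend(user)[:n]
    mixed = set(rng.sample(sorted(user), len(user) // 2)
                + rng.sample(sorted(donor), len(donor) // 2))
    perturbed = recommend(mixed)[:n]
    return len(set(perturbed) - set(original)) / n   # fraction that adapted

# Toy recommender that just ranks the profile items themselves:
# half the perturbed list comes from the donor, so adaptability is 0.5
print(adaptability({"a", "b", "c", "d"}, {"w", "x", "y", "z"},
                   lambda p: sorted(p), n=4))   # 0.5
```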
Adaptability Results [chart: mean % adaptable (0 to 1), even-split, for Local Graph, Bayes, Item-based (50 nbrs), TFIDF, and User-based (50 nbrs), at top-10 through top-40]
Adaptability Results [chart: mean % adaptable (0 to 1), even-split, for item-based and user-based CF with 10 to 300 neighbors, at top-10 through top-40]
More Generally …

UNIVERSITY OF MINNESOTA
Recommender Algorithms
Collaborative Filtering [diagram: the target customer's unknown rating (?) is predicted as a weighted sum of neighbors' ratings, e.g., 3]
E-Commerce Scale
• Millions of Products
• Millions of Customers
• Thousands of Clicks per Second
• Scalability!
CF Matrix Picture [diagram: customers × products ratings matrix]
Sparsity
• Many customers have no relationship
• Many products have no relationship
• Synonymy
  Similar products treated differently
  Increases sparsity, loss of transitivity
  Results in poor quality
Collaborative Filtering Algorithms
• Non-Personalized
  Summary statistics
• K-Nearest Neighbor
  user-user
  item-item
• Dimensionality Reduction
  LSI, PLSI, Factor Analysis
• Content + Collaborative Filtering
  Burke's survey of hybrids
• Graph Techniques
  Horting
• Clustering
• Classifier Learning
  Naïve Bayes, Bayesian belief networks, rule induction
Zagat Guide Detail [screenshot]
Collaborative Filtering Algorithms [outline slide repeated]
Item-Item Collaborative Filtering [diagram: ratings matrix; an item's rating is predicted from the user's ratings of similar items]

B. Sarwar et al. Item-based collaborative filtering recommendation algorithms. Proc. WWW 2001.
Item-Item Collaborative Filtering [diagram animation frames repeated]
Item-Item Matrix Formulation [diagram: for target item i and user u, find the k (e.g., 5) closest item neighbors that u has rated; the prediction combines u's raw ratings of those neighbors, weighted by the item-item similarities s(i,j), either as a weighted sum or as an approximation based on linear regression]
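The weighted-sum variant in the diagram is, in the usual formulation (cf. the Sarwar et al. paper cited above), the similarity-weighted average of the user's ratings over the target item's rated neighbors. The ratings and similarity values below are invented.

```python
# Weighted-sum item-item prediction: combine the user's ratings of the
# target item's nearest neighbors, weighted by item-item similarity.

def predict(user_ratings, neighbors):
    """user_ratings: {item: rating} for user u.
    neighbors: [(item_j, sim_ij)] for the most similar items u has rated."""
    num = sum(sim * user_ratings[j] for j, sim in neighbors)
    den = sum(abs(sim) for _, sim in neighbors)
    return num / den if den else None

ratings = {"j1": 4.0, "j2": 2.0, "j3": 5.0}
nbrs = [("j1", 0.8), ("j2", 0.5), ("j3", 0.3)]
print(predict(ratings, nbrs))   # (3.2 + 1.0 + 1.5) / 1.6 = 3.5625
```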
Item Similarities [diagram: to compute s(i,j), use only the co-rated pairs, i.e., the users (rows) who rated both item i and item j]
Item-based algorithm: similarity measure [chart: MAE (0.65 to 0.85) for adjusted cosine, pure cosine, and correlation; adjusted cosine achieves the lowest MAE]
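Adjusted cosine, the best-performing measure in the comparison above, subtracts each user's mean rating before computing the cosine between the two items' co-rated vectors; a sketch with invented ratings:

```python
import math

# Adjusted cosine similarity between two items: subtract each co-rating
# user's mean rating first, so users with different rating scales
# contribute comparably. Inputs are {user: rating} dicts per item.

def adjusted_cosine(item_i, item_j, user_means):
    common = item_i.keys() & item_j.keys()
    num = sum((item_i[u] - user_means[u]) * (item_j[u] - user_means[u])
              for u in common)
    den_i = math.sqrt(sum((item_i[u] - user_means[u]) ** 2 for u in common))
    den_j = math.sqrt(sum((item_j[u] - user_means[u]) ** 2 for u in common))
    return num / (den_i * den_j) if den_i and den_j else 0.0

means = {"u1": 3.0, "u2": 4.0, "u3": 2.0}
i = {"u1": 4.0, "u2": 5.0, "u3": 1.0}
j = {"u1": 5.0, "u2": 5.0, "u3": 1.0}
print(adjusted_cosine(i, j, means))   # ≈ 0.943
```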
Neighborhood Size [diagram: ratings matrix with the target item's k nearest neighbors highlighted]
Incremental Item-Item Algorithm
• Model Building
  Compute similarity between items
  Record the p most similar items for each item (p is the model size)
• Prediction Generation
  p' is the subset of p rated by user u
  Use min(p', k) items for prediction (k is the neighborhood size)
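The two phases above can be sketched as follows. The similarity function is a stand-in for any item-item measure (e.g., adjusted cosine); the toy numeric similarity in the usage example is purely illustrative.

```python
# Sketch of the incremental item-item algorithm: precompute and truncate a
# similarity model offline, then predict online from the truncated model.

def build_model(items, sim, p=50):
    """items: iterable of item ids. Returns {item: [(neighbor, s)]},
    keeping only the p most similar neighbors per item (the model size)."""
    model = {}
    for i in items:
        scored = sorted(((j, sim(i, j)) for j in items if j != i),
                        key=lambda t: t[1], reverse=True)
        model[i] = scored[:p]
    return model

def predict_from_model(model, user_ratings, target, k=20):
    """Use at most min(p', k) of the target's model neighbors that the
    user has rated (p' = how many of the p neighbors u rated)."""
    rated = [(j, s) for j, s in model[target] if j in user_ratings][:k]
    den = sum(abs(s) for _, s in rated)
    return (sum(s * user_ratings[j] for j, s in rated) / den) if den else None

items = [1, 2, 3, 4]
sim = lambda i, j: 1.0 / abs(i - j)            # toy similarity
model = build_model(items, sim, p=2)
print(model[1])                                # [(2, 1.0), (3, 0.5)]
print(predict_from_model(model, {2: 4.0, 3: 2.0}, 1))
```

Truncating to p neighbors is what trades a little accuracy for the large throughput gains shown on the following slides.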
Model Size Sensitivity [chart: MAE (0.72 to 0.84) vs. model size (25 to 200, and full item-item), for training fractions x = 0.3, 0.5, 0.8; the truncated model reaches MAE 0.7620 vs. 0.7504 for the full model, a 1.54% degradation]
Model Size vs. Throughput [chart: throughput (recs/sec, 0 to 90,000) vs. model size, for x = 0.3, 0.5, 0.8; the truncated model delivers 30,826 recs/sec vs. 1,717 recs/sec for full item-item, a 17-fold improvement]
Item-Item Discussion
• Good quality, even in sparse situations
• Promising for incremental model building
  Small quality degradation
  Big performance gain
Collaborative Filtering Algorithms [outline slide repeated]
Dimensionality Reduction
• Latent Semantic Indexing
  Used by the IR community
  Worked well with the vector space model
  Uses Singular Value Decomposition (SVD)
• Main Idea
  Term-document matching in feature space
  Captures latent association
  Reduced space is less noisy

B. Sarwar et al. Incremental SVD-Based Algorithms for Highly Scaleable Recommender Systems. Proc. ICCIT 2002.
SVD: Mathematical Background

R (m × n) = U (m × r) · S (r × r) · V' (r × n)

Keeping only the k largest singular values gives U_k (m × k), S_k (k × k), V_k' (k × n).
The reconstructed matrix R_k = U_k · S_k · V_k' is the closest rank-k matrix to the original matrix R.
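The decomposition and rank-k truncation can be sketched with NumPy (assuming it is available; the ratings matrix and k are illustrative):

```python
import numpy as np

# Sketch: truncated SVD of a small ratings matrix. Keeping the k largest
# singular values yields the closest rank-k approximation to R.

R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0, 5.0],
              [1.0, 0.0, 0.0, 4.0]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)

k = 2
Rk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k reconstruction

print(np.linalg.matrix_rank(Rk))             # 2
print(np.linalg.norm(R - Rk))                # residual of the approximation
```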
SVD for Collaborative Filtering [diagram: 1. low-dimensional representation: the m × n ratings matrix becomes m × k and k × n factors, an O(m+n) storage requirement; 2. direct prediction from the factors]
Experimental Setup
• MovieLens Data (www.movielens.umn.edu)
  Size 943 × 1,682; 100,000 rating entries
  Ratings are from 1 to 5
  Used for prediction and neighborhood experiments
• E-Commerce Data
  Size 6,502 × 23,554; 97,045 purchase entries
  Purchase entries are dollar amounts
  Used for the neighborhood experiment
• Training and Test Portions
  Percentage of training data, x
  10x cross-validation
Experimental Setup
• Benchmark Systems
  CF-Predict
  CF-Recommend
• Evaluation Metrics
  Prediction: Mean Absolute Error (MAE)
  Top-N recommendation: recall and precision; combined score F1
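The combined F1 score here is the usual harmonic mean of precision and recall:

```python
# F1: harmonic mean of precision and recall for top-N recommendation.

def f1(precision, recall):
    total = precision + recall
    return 2 * precision * recall / total if total else 0.0

print(f1(0.4, 0.25))   # ≈ 0.308
```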
Dimension Sensitivity [chart: MAE (0.73 to 0.765) vs. number of dimensions k (5 to 100), for x = 0.5 and x = 0.8; an intermediate value of k (around 14) is ideal]
SVD Prediction Results [chart: SVD as prediction generator (k fixed at 14); MAE (0.715 to 0.855) vs. train/test ratio x (0.2 to 0.9), Pure CF vs. SVD]
Discussion
• Pros:
  SVD addresses sparsity
  Comparable quality with only 14 features
  Storage space: O(mn) vs. O(m+n)
• Cons:
  Problems with dynamic databases
  SVD computation is expensive
SVD Performance Issues

                           Traditional CF    SVD-based CF
Model building (offline)   Relatively fast   Slow
Prediction (online)        Slow              Very fast
SVD Folding-in [diagram: a simple projection technique; p new rows (p × n) are projected into the existing U, S, V factors, growing U from m × k to (m+p) × k without recomputing the SVD]
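Folding-in projects a new user's rating row into the existing factor space rather than recomputing the SVD; a NumPy sketch (assuming NumPy is available; the random data is illustrative):

```python
import numpy as np

# Sketch of folding-in: project a new ratings row r (1 x n) into the
# existing k-dimensional space via u_new = r @ V_k @ inv(S_k), then
# append it to U_k. No SVD recomputation is needed.

rng = np.random.default_rng(0)
R = rng.random((6, 5))                        # existing m x n ratings
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 3
Uk, Sk, Vkt = U[:, :k], np.diag(s[:k]), Vt[:k, :]

new_row = rng.random(5)                       # incoming user's ratings
u_new = new_row @ Vkt.T @ np.linalg.inv(Sk)   # fold in: 1 x k
Uk = np.vstack([Uk, u_new])                   # U grows to (m+1) x k

approx = u_new @ Sk @ Vkt                     # the user's rank-k reconstruction
print(Uk.shape)                               # (7, 3)
```

The projection is cheap (one matrix-vector product per new row), which is what makes the model incrementally updatable between full rebuilds.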
Singular Value Decomposition
• Reduce dimensionality of the problem
  Results in a small, fast model
  Richer neighbor network
• Incremental Update
  Folding in
  Model update
Collaborative Filtering Algorithms [outline slide repeated]
Algorithm Exercise
• Recommender Application (choose one)
  Personalized newspaper
  Music streaming application
  Dentist recommender
• The exercise
  Identify sources of data for recommendation (content and/or ratings)
  Identify 2 user situations
  Explore recommender algorithms for the application
UNIVERSITY OF MINNESOTA
Thinking About User Experience
Recommender Application Space
• Dimensions of Analysis
  Domain
  Purpose
  Whose Opinion
  Personalization Level
  Privacy and Trustworthiness
  Interfaces
  <Algorithms Inside>
Domains of Recommendation
• Content to Commerce
  News, information, "text"
  Products, vendors, bundles
Google: Content Example [screenshot]
Purposes of Recommendation
• The recommendations themselves
  Sales
  Information
• Education of the user/customer
• Building a community of users/customers around products or content
800.com "you might also like" [screenshot]

Tacit [screenshot]
Whose Opinion?
• "Experts"
• Ordinary "phoaks"
• People like you
Wine.com expert recommendations [screenshot]
Personalization Level
• Generic: everyone receives the same recommendations
• Demographic: matches a target group
• Ephemeral: matches current activity
• Persistent: matches long-term interests
Lands' End [screenshot]

Brooks Brothers [screenshot]

Cdnow album advisor [screenshot]
Privacy and Trustworthiness• Who knows what about me?
Personal information revealedIdentityDeniability of preferences
• Is the recommendation honest?Biases built-in by operator• “business rules”
Vulnerability to external manipulation
Interfaces
• Types of Output
  Predictions
  Recommendations
  Filtering
  Organic vs. explicit presentation
• Types of Input
  Explicit
  Implicit
Launching Organic Interfaces
• Launch.yahoo.com: a truly personal radio station
  Observes play limits
  Mixes different inputs, different recommenders
  Kill a song: once and forever
  Nice information on why a song is playing
Application Critiques
• Consider the following recommender-powered application
  What's good?
  What's bad?
  Why?
UNIVERSITY OF MINNESOTA
Summary and Discussion