Page 1: SNML-TNG: Linked Media Interfaces. Graphical User Interfaces for Search and Annotation

LINKED MEDIA INTERFACES.

Graphical User Interfaces for Search and Annotation

Marius Schebella, Thomas Kurz and Georg Güntner


The Austrian competence centre “Salzburg NewMediaLab – The Next Generation” (SNML-TNG) conducts research and development in the field of intelligent content management: It aims at personalising content, making it searchable and findable, interlinking enterprise content with internal and external information resources, and building a platform for sustainable information integration. For this purpose, information about content (Linked Content), structured data (Linked Data) and social interaction (Linked People) has to be connected in a lightweight and standardised way. Our approach for interlinking across these levels is what we call “Linked Media”.

SNML-TNG is a K-project within the COMET programme (Competence Centers for Excellent Technologies, www.ffg.at/comet). It is co-ordinated by Salzburg Research and co-financed by the Austrian Federal Ministry of Economy, Family and Youth (BMWFJ), the Austrian Federal Ministry for Transport, Innovation and Technology (BMVIT) and the Province of Salzburg. Homepage: www.newmedialab.at

© Salzburg NewMediaLab – The Next Generation, September 2011

ISBN 978-3-902448-29-3

Marius Schebella, Thomas Kurz and Georg Güntner:

Linked Media Interfaces. Graphical User Interfaces for Search and Annotation

Issue 2 of the series “Linked Media Lab Reports”, edited by Christoph Bauer, Georg Güntner and Sebastian Schaffert

Publisher: Salzburg Research, Salzburg
Cover: Daniela Gnad, Salzburg Research

Bibliographic information of the Deutsche Nationalbibliothek:

The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available online at http://dnb.d-nb.de.


Preface

Salzburg NewMediaLab – The Next Generation (SNML-TNG) is a competence centre in the Austrian COMET Programme. Today’s enterprises (and particularly the media enterprises) rely heavily upon accurate, consistent and timely access to various types of structured and unstructured data. However, today's knowledge worker is increasingly dependent on information that resides outside the company's firewall. To meet this challenge, a new approach to data integration is needed, one that harvests the value of both internal and external data sources.

There are various approaches to integration, e.g. integration on the presentation layer (as is done with portals), on the business layer (for instance via service-oriented architectures), or on the data and/or persistence layer (for example via data warehousing, federated databases, etc.).

SNML-TNG’s approach to integration is to focus on the data layer using Semantic Web technologies, with an emphasis on the content and media enterprises. Information about people, data and content is semantically linked: Our approach is based on the Linked Data concepts developed by the World Wide Web Consortium (W3C) and extended to include media assets (e.g. video objects). Hence we use the term “Linked Media” to denote our data integration approach for the enterprise information space. The principles behind our Linked Media approach are the result of socio-economic analysis, technological-conceptual work, and technological development, culminating in the release of an Open Source framework (the “Linked Media Framework”) that provides a lightweight approach to interlinking information available in content and media assets, structured (meta-)data sets and people's social networks.

The ideas and technology behind the Linked Media Principles will be validated by the company partners of SNML-TNG – the content partners (ORF, Red Bull Media House, Salzburg AG and Salzburger Nachrichten) and the technology partners (mediamid, Semantic Web Company, TECHNODAT) – in the form of specific applications built on top of the Linked Media Framework.

This is where the demand for appropriate graphical user interfaces supporting the interlinking approach arises: The team at SNML-TNG has looked at design patterns for applications in the Linked Media sector, driven also by the fact that the underlying W3C Linked Data principles are well established in the Semantic Web community but still lack concrete applications apart from research prototypes. With the present second issue of the “Linked Media Lab Reports” we provide a glossary of design patterns for graphical user interfaces, aimed at the interested audience, the developer community and user interaction designers.


We hope that you will enjoy this second issue of our “Linked Media Lab Reports”, which – after the report on the value of Linked Media in the enterprise outlined in the first issue – addresses the question of how the Linked Media Principles can be implemented on the user-facing side to realise accurate, consistent and timely access to various types of structured and unstructured data in a Linked Media Enterprise. We trust that our selective analysis provides a practical glossary of design patterns for graphical user interfaces for the interested audience, the developer community and user interaction designers.

Georg Güntner
Managing Director
www.newmedialab.at
September 2011


Content

Introduction and Background .......... 7
  Introduction .......... 7
  The vision of “Linked Media” .......... 7
  Scope and purpose .......... 10

Media Life Cycle and Design Patterns .......... 11
  Media Life Cycle .......... 11
  Mapping the Life Cycle to Design Patterns .......... 12

Types of Linked Entities .......... 17

General Aspects of Linked Media .......... 21
  Typed Links .......... 21
  Personalisation of Information and DRM .......... 21
  Quality of Linked Data and Trusted Sources .......... 22

Linked Media Interfaces – Design Patterns .......... 23

Patterns for Search (including Visualisation) .......... 25
  Formulating the Query .......... 25
  Fine-Tuning the Query .......... 29
  Search Modifiers .......... 32
  Sorting and Grouping Results .......... 34
  Display of Entities .......... 35
  Display of Results .......... 39
  Advanced Search .......... 41
  Trust Indicators .......... 45
  Content Summary .......... 47
  Reports .......... 51
  Enhancement .......... 52

Patterns for Annotation .......... 55
  General Annotation Based on Text Entry .......... 55
  Location Annotation .......... 57
  Annotation of Time .......... 58
  People, Event and Theme Annotation .......... 59
  Selection and Picking of Vocabulary .......... 60
  Patterns for Ontology Management .......... 64
  Crowd Sourced Annotation .......... 66
  Other Annotation Tools .......... 67

Bundled Packages .......... 71
  Pool Party .......... 71
  M@RS .......... 71
  More Video Annotation Tools .......... 71
  Video Content Annotation: Vizard Annotator .......... 72
  Video Semantic Search: Jinni .......... 72

Summary .......... 75

References .......... 77


Linked Media Interfaces

INTRODUCTION AND BACKGROUND

Introduction

The Austrian competence centre “Salzburg NewMediaLab – The Next Generation” (SNML-TNG) conducts research and development in the field of intelligent content management: It aims at personalising content, making it searchable and findable, interlinking enterprise content with internal and external information resources, and building a platform for sustainable information integration. For this purpose, information about content (Linked Content), structured data (Linked Data) and social interaction (Linked People) has to be connected in a lightweight and standardised way. Our approach for interlinking across these levels is what we call “Linked Media”.

When the concepts of Linked Media are brought to end-users, a demand for appropriate graphical user interfaces supporting the interlinking approach arises. The team at SNML-TNG has therefore looked at design patterns for applications in the Linked Media area. This is driven also by the fact that, although the underlying W3C Linked Data principles are well established in the Semantic Web community, there is still a lack of concrete applications apart from research prototypes. With this second issue of the “Linked Media Lab Reports” we provide a glossary of design patterns for graphical user interfaces, aimed at the interested audience, the developer community and user interaction designers.

This report addresses the question of how the “Linked Media Principles” can be implemented on the front-end to realise accurate, consistent and timely access to various types of structured and unstructured data in a Linked Media Enterprise. As this report builds on the vision of “Linked Media”, we start with a short introduction of the concept.

The vision of “Linked Media”

The Semantic Web is not a separate Web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation. (Tim Berners-Lee et al., 2001)

Unfortunately, we are still at a stage where the interaction between machine and user is not free of “issues”. Over the last decades we have gained a better understanding of human thinking on the one hand and of technological possibilities on the other. We have learned to live with the shortcomings of how machines store and process information, and at the same time technology has evolved and taken on more human-like tasks. Machines were built to collect, produce and process information without permanent human involvement. The user, in turn, has adapted to keyboard input, forms, controlled vocabulary, and so on.




The Semantic Web operates at the intersection between the human and the machine understanding of the world. It weaves human concepts into a web of machine-understandable, exchangeable and processable data. To this end, the web community and public information archives have contributed a significant amount of structured and trustworthy information that applications can automatically harvest and apply in daily production settings.

In the multimedia space such new possibilities have redefined today's media asset management systems. The amount, quality and interrelatedness of media archives are increasing. Information can be included from public repositories such as Wikipedia, or its “semantic sibling”, DBpedia. Search results can be improved by incorporating Semantic Web knowledge. Information that is generated within a company can be made public.

The basic concept of SNML-TNG that describes this relatedness of information, repositories and eventually people is called “Linked Media”. Linked Media combines the following three principles:

– Linked Content means hyperlinks between textual documents and other unstructured content on the Web. Such content is designed for human readers, and the hyperlinks are primarily meant for navigation: when a user clicks on a link, the browser displays the site linked to. However, some services, e.g. Google, also use the hyperlink structure to rate and rank the value of information; from the business perspective, the core of the added value lies in recommendations and annotations.

– Linked People means that people can connect and communicate over the Internet in ways they could not before, using social software systems. Most prominent among these systems are nowadays social networking platforms like LinkedIn, Facebook and Xing, but some of the more content-oriented collaborative platforms (e.g. blogs, wikis) can also be considered to (implicitly) link people, as can the users of enterprise information management systems (e.g. media asset management systems, document management systems, customer relationship management systems).

– Linked Data is a recent development emerging from the “Semantic Web” community and aims at providing a common standard for linking structured data that is primarily meant for machine processing rather than human consumption1. With Linked Data, it is possible to collect and further process data from many different sources (the so-called “Linked Data Cloud”). As of March 2011, the Linked Open Data (LOD) Cloud comprised more than 28.5 billion statements (“RDF triples”) in over 200 datasets2 and keeps growing rapidly. Among the best-known datasets are DBpedia3, the representation of Wikipedia in the Linked Data world, and a representation of the GeoNames geographical database in the LOD cloud4. Datasets compliant with the Linked Data principles can be combined in completely new ways that were not anticipated when the data was collected, e.g. in mash-ups or scientific applications.

1 Christian Bizer, Tom Heath and Tim Berners-Lee (in press). Linked Data – The Story So Far. International Journal on Semantic Web and Information Systems, Special Issue on Linked Data.
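The point that Linked Data sets can be combined after the fact can be made concrete with a minimal sketch. Plain Python tuples stand in for RDF triples here; the DBpedia URIs and the foaf:depicts property are real identifiers, but the two toy "datasets", the image file name and the pattern matcher are illustrative assumptions, not part of any Linked Data tooling:

```python
# Minimal sketch: RDF-style statements as (subject, predicate, object)
# tuples. Two small "datasets" are merged and queried with a wildcard
# pattern matcher, illustrating how Linked Data sources can be combined.

SALZBURG = "http://dbpedia.org/resource/Salzburg"

DATASET_A = [  # hypothetical excerpt of a geographic dataset
    (SALZBURG, "http://dbpedia.org/ontology/country",
     "http://dbpedia.org/resource/Austria"),
]

DATASET_B = [  # hypothetical excerpt of a media archive's annotations
    ("image42.jpg", "http://xmlns.com/foaf/0.1/depicts", SALZBURG),
]

def match(triples, s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Combining datasets is simply a union of statements; the shared URI
# for Salzburg is what links the two sources together.
merged = DATASET_A + DATASET_B
about_salzburg = match(merged, s=SALZBURG) + match(merged, o=SALZBURG)
```

The shared, globally unique URI is what makes the union meaningful: once both sources refer to the same resource identifier, queries span both datasets for free.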

The technology behind Linked Media is the Linked Media Framework. Its core idea is to make (extensive) use of typed interlinking technologies: metadata properties link either to an internal knowledge store or to entities of the Linked Open Data Cloud. All links are typed, meaning they describe the nature (predicate) of a link.

For example, instead of tagging an image with the literal “Salzburg”, a user may want to identify the content of this image as “the” particular city of Salzburg that is already specified in the Linked Open Data Cloud, for example in DBpedia.

Fig. 1. Instead of a simple tag (“Salzburg”), the content of this image can be uniquely identified as http://dbpedia.org/resource/Salzburg.

Source: http://en.wikipedia.org/wiki/File:SalzburgerAltstadt02.JPG [2011-09-20]
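The step from a plain tag to a typed link can be sketched in a few lines of plain Python. foaf:depicts is a real FOAF property suited to linking an image to what it shows; the file name and the helper function are illustrative assumptions:

```python
# A plain tag attaches an ambiguous literal string to the image:
plain_tag = ("image42.jpg", "tag", "Salzburg")

# A typed link instead points to a unique, dereferenceable entity in
# the Linked Open Data Cloud; the predicate (here foaf:depicts) makes
# the kind of relation between image and entity explicit:
typed_link = ("image42.jpg",
              "http://xmlns.com/foaf/0.1/depicts",
              "http://dbpedia.org/resource/Salzburg")

def is_linked_entity(value):
    """Heuristic sketch: linked entities are URIs, plain tags are literals."""
    return value.startswith(("http://", "https://"))
```

The literal "Salzburg" could mean the city, the province or the festival; the URI removes that ambiguity and opens the door to everything DBpedia already knows about the resource.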

By allowing metadata property fields to point to linked entities such as people, locations, historic events, etc., it is possible to support features such as the inclusion and combination of external resources, machine-driven reasoning, inferencing and many more. For the user this means an improvement in the search for resources, but also enhancements during the playback of media (presenting and accessing additional information) or the automation of processes such as creating electronic programme guides. On the other hand, it also means that annotations become partly dynamic, since the entities referred to can be edited in a separate process within a separate information source.

2 Christian Bizer, Anja Jentzsch, Richard Cyganiak. The State of the LOD Cloud. Version 0.2, 03/28/2011 (2011), http://www4.wiwiss.fu-berlin.de/lodcloud/state [2011-09-20]

3 http://www.dbpedia.org [2011-09-20]

4 http://www.geonames.org [2011-09-20]
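The machine-driven reasoning mentioned above can be illustrated with a single forward-chaining rule. This is a toy sketch under assumed, shortened predicate names (depicts, locatedIn, relatedTo), not a description of how the Linked Media Framework actually reasons:

```python
# Illustrative forward-chaining step: if an image depicts a place and
# that place lies in a country, infer that the image also relates to
# the country. Prefixed names abbreviate full URIs for readability.
triples = {
    ("image42.jpg", "depicts", "dbpedia:Salzburg"),
    ("dbpedia:Salzburg", "locatedIn", "dbpedia:Austria"),
}

def infer_related(triples):
    """Derive (asset, relatedTo, region) facts from depicts + locatedIn."""
    inferred = set()
    for (s1, p1, o1) in triples:
        if p1 != "depicts":
            continue
        for (s2, p2, o2) in triples:
            if s2 == o1 and p2 == "locatedIn":
                inferred.add((s1, "relatedTo", o2))
    return inferred
```

A query for media related to Austria would now find the image, even though no annotator ever typed "Austria": the fact follows from the typed link plus background knowledge about the entity.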

To respond to these new challenges from a user interaction viewpoint, we describe a set of design patterns for Graphical User Interfaces (GUIs) that allow users to interact with media resources in the realm of semantic annotation and retrieval of digital image and video files. These are the Linked Media Interfaces (LMI).

Scope and purpose

The Linked Media Interfaces (LMI) deal with situations and processes where people in the area of media asset management annotate, query, search, browse, research or interact with digital media objects and their metadata.

To cover a diverse field of applications, this report explores the following situations and approaches: It takes a look at the Media Life Cycle – the processes and workflows in which users interact with media information. It also provides a list of common entity types (such as people, places, events) as well as other aspects involved in the descriptive part of content annotation. The main body of the work is a description of design patterns for Graphical User Interfaces, divided into the two chapters “Search” and “Annotation”.


MEDIA LIFE CYCLE AND DESIGN PATTERNS

This chapter explores Linked Media Interfaces from a user perspective. It examines the processes and workflows typical of media assets and tries to connect life cycle management with the new possibilities provided by semantic technologies. The media assets handled in this context are documents, images and videos.

Media Life Cycle

“Multimedia resources typically have an extended life cycle covering an array of distinct processes and workflows” (Smith & Schirling, 2006, p. 84). Kosch et al. (2005) propose three phases of a life cycle: Creation, Management and Transaction. To complement the workflow, especially with regard to visualisation and data enhancement, we add a fourth phase that comes into play after media resources are delivered: the application and utilisation of resources.

In each phase (and sub-phase) of the multimedia life cycle users interact with metadata. However, this interaction is not always the main aspect of a process. For example, a user may be reading the news and then decide to add just a small part of that information to a media asset. Nor are semantic aspects always central, but they can always influence the interaction.

Some processes, however, are dedicated solely to metadata. These parts of the life cycle are described as a metadata life cycle in the INSEMTIVES Annotation Life Cycle Deliverable (INSEMTIVES D2.2.1; see Pipek et al., 2009). For our purposes this means that, in addition to the organisation, production and maintenance of media assets, these three phases are also most relevant for ontology creation, concept creation and ontology maturing. INSEMTIVES distinguishes between controlled and uncontrolled annotation. In that context, ontology maturing means “enriching the controlled vocabulary with new terms and concepts that have been used as uncontrolled annotations”.


Media Life Cycle – Interaction with metadata

– Plan: Search, browse, create background and project information
– Create: Annotate resources in real time
– Acquire: Extract metadata from resources
– Organize: Structure and collate metadata, organize ontologies, disclose vocabulary and interfaces, select datasets, (pre-)select RDF predicates
– Produce: Annotate assets. Create concepts from existing uncontrolled annotation or during new annotation of content
– Maintain: Maintain, add, edit media assets. Manage, author, validate, and assure quality of metadata and ontology
– Store: Store and index metadata
– Sell/Place at Disposal: Provide access and search mechanisms as well as an information feedback channel for metadata
– Distribute: Package and organize metadata (collection baskets)
– Deliver: Deliver metadata
– Consume: Display metadata
– Use: Draw new information/conclusions based on metadata

Table 1. Processes of the Media Life Cycle and forms of interaction

The different phases of the media life cycle are of relevance when we try to cover a representative set of tasks in which users deal with media assets. The phases are discussed in detail in the next section and mapped to particular patterns.

The table also shows that some interactions appear more than once under the same label, but within a different context, indicating different goals of interaction. In these cases different interface patterns are provided under the same label. For example, the term “search” can mean that a user wants to ask a question; they might seek a particular resource that they know in advance, or they may browse for unknown resources related to a certain topic. They may want an overview of search results and extract information by looking at the total number and quality of results, or they may just want to browse through the resources. Knowing a user’s purpose and intention will influence and shape the interfaces proposed to them.

Mapping the Life Cycle to Design Patterns

In the following section we explain the phases of the life cycle in more detail and provide a mapping of the phases to different design patterns.

Planning

The planning phase takes place before the actual media content is created or available. For example, an editor researches material for a show, browses old documentaries, schedules interviews, etc. Or a product manager puts together information for a photo shoot (time and location, people and products involved), adds notes for the photographer, and awards the contract.


During this phase the user needs many types of interfaces. Browsing is an important technique to scope the topic and then drill down to particular media assets. A lot of metadata is created in the process, which can be linked to media objects afterwards or provide a context for future search and annotation.

Featured patterns:

– All search patterns

– Create Context

– Rating Systems

– “I know more” Button

Creation

During the creation of a media asset (e.g. a video shot), machines and software automatically add metadata (location, creation date, technical data). But human annotators may be involved as well. They need annotation tools that support the domain of the editor or a certain context. A tool similar to a live chat, for instance, includes time annotation and supports a context-aware controlled vocabulary for a sports event or similar.

Featured patterns:

– Real Time Video Annotation

Acquisition

Often the acquisition of large amounts of media assets involves machine-supported metadata extraction or conversion. At the same time, editors are involved in the quality assurance of the process, having to oversee the mapping of new concepts to old ones – a process that involves comparing original and converted data, listing all newly created concepts, looking for duplicates, etc. Optionally, provenance information can be added at this stage.

Featured patterns:

– Conceptual Mapping

– Concept Adder

– Entity Index

– Completeness Feedback


Organisation

Organisation of assets and metadata takes place at several levels. On an overview level, all assets are given general categorisation tags; issues may be grouped into series, and not all types of media assets may be treated in the same way. On a more granular level, assets are given pre-defined classification tags; archivists tend to be consistent in labelling the same types with the exact same terms. Finally, to support a “per-item level”, a controlled vocabulary needs to be created, including thesauri and ontologies, putting metadata concepts in relation to each other, and indexing metadata.
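A controlled vocabulary with such relations between concepts can be sketched as a small hierarchy. The dict-based structure below is purely illustrative, loosely mirroring the SKOS notions of skos:prefLabel and skos:broader; the concrete concepts are assumptions for the example:

```python
# Illustrative thesaurus: each concept has a preferred label and an
# optional broader concept, in the spirit of skos:prefLabel / skos:broader.
THESAURUS = {
    "place":    {"prefLabel": "Place", "broader": None},
    "city":     {"prefLabel": "City", "broader": "place"},
    "salzburg": {"prefLabel": "Salzburg", "broader": "city"},
}

def broader_chain(concept):
    """Walk up the broader hierarchy, e.g. for query expansion: a search
    for "place" can then also match assets tagged with "salzburg"."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = THESAURUS[concept]["broader"]
    return chain
```

Putting concepts in relation to each other like this is exactly what makes per-item tags useful at the overview level: broader terms can be derived instead of annotated by hand.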

At this stage three phases of the metadata life cycle are relevant: ontology creation, concept creation and ontology maturing.

Featured patterns:

– Concept Mapping

– Category Adder

– Entity Index

Production

During the production phase archivists and document editors annotate media assets, often manually. They create and assign metadata, author it and assure its quality. But not only professionals and specialists produce metadata: we have seen masses of users labelling, tagging and annotating pictures, videos and documents. Patterns are provided to link entities to the Linked Open Data Cloud.

During the production phase information also has to be authorised. The authorisation level of metadata may differ from that of the media asset itself.

Featured Patterns:

– All annotation patterns

– Quick View

Maintenance

Maintenance is closely related to production in that it pursues the same goal – creating additional metadata and editing it. It is an ongoing process.

Featured Patterns:

– All annotation patterns

– Authoring List

– “I know more” Button


Storage

The storage of digital assets affects the features that are available to Linked Media Interfaces. Is it possible to instantly preview assets? Is it possible to access only parts of an asset? Which data formats are available?

With Linked Open Data, portions of the metadata are not stored within a system but link to public sources. This can be a problem in production-critical situations (e.g. a newsroom) and may call for offline storage of relevant information.

Since the user of a Linked Media Interface cannot affect the storage process directly, there are no patterns related to this phase of the life cycle.

Sale/Placing at Disposal

This is the phase in which users want to (re-)access media assets. It includes searching, browsing, previewing, choosing and ordering or buying items. Users may want to store their personal history or preferences, receive suggestions based on recommendations, keep their own collection of favourite assets, etc. Linked Media Interfaces provide access and search mechanisms as well as information feedback channels for metadata.

Featured patterns:

– All search and display patterns

– Storing Searches and Results

– Rating Systems

Package/Distribution/Delivery

Some metadata is distributed along with the media asset; other metadata is primarily used for retrieving assets. Where metadata contains information useful to the user, it has to be packaged and shaped into a format that the user can view. There might be a separate process involved that selects and designs metadata for consumption, for example when metadata includes a short description of an asset that is used for an electronic programme guide.

Featured pattern:

– Automated Content Extraction

Consumption

In many cases metadata is meant not only to support search and retrieval of media assets, but also to provide additional information for a user. This information might pop up during video playback or be visible when a user asks for more information on an item. In that case metadata is displayed within or next to the asset.

Featured patterns:

– Enhancements

– Trust Indicators

Usage

The last part of the life cycle deals with the use of metadata. It includes the utilisation of metadata for research and statistical processes. Users want to apply metadata in their own contexts and draw new conclusions.

Metadata is provided in an open way, and tools are provided for viewing it.

Featured pattern:

– Display of Results


TYPES OF LINKED ENTITIES

Linked entities (locations, people, events, etc.) play a key role in the annotation and retrieval of content. Many Linked Media Interfaces (LMI) focus on a particular entity (for example when annotating locations). Unfortunately, however, there is no general definition of what an entity is, or of which entities are considered basic or standard. Neither is there a rule stating how many types of entities there are or should be. The actual number of linked entities and their definitions always depend on the applied knowledge model (ontology) and the type of assets underlying an application.

Schema.org

A recent approach to agreeing on a set of entities is schema.org5. The three search engine providers Google, Bing and Yahoo! share this method and collection of concepts to mark up websites in ways recognised by automated web crawlers. Their main types are:

– Creative work (including MediaObject as the object that encodes this creative work)

– Event

– Organisation

– Person

– Place

– Product
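These types are embedded directly into a page's HTML so that crawlers can recognise them. A minimal, illustrative generator for schema.org microdata might look as follows (itemscope, itemtype and itemprop are part of the HTML microdata syntax; the concrete type, properties and values are assumptions for the example):

```python
def microdata_snippet(schema_type, props):
    """Render a minimal schema.org microdata block (illustrative sketch)."""
    lines = ['<div itemscope itemtype="http://schema.org/%s">' % schema_type]
    for name, value in props.items():
        lines.append('  <span itemprop="%s">%s</span>' % (name, value))
    lines.append("</div>")
    return "\n".join(lines)

# VideoObject is a schema.org subtype of MediaObject; name/description
# values here are purely illustrative.
html = microdata_snippet("VideoObject",
                         {"name": "Salzburg Old Town",
                          "description": "Aerial view of the old town"})
```

A crawler that understands microdata can extract the typed properties from such a block without any natural-language processing.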

There are two other schemas that are already available in structured form in the Linked Open Data Cloud: Facebook's Opengraph and OpenCalais.

Facebook Opengraph

Facebook's Opengraph defines the following list of types6:

– Activities: activity, sport

– Businesses: bar, company, cafe, hotel, restaurant

– Groups: cause, sports_league, sports_team

– Organisations: band, government, non_profit, school, university

5 http://www.schema.org [2011-09-20]

6 http://developers.facebook.com/docs/opengraph/#types [2011-09-20]


– People: actor, athlete, author, director, musician, politician, public_figure

– Places: city, country, landmark, state_province

– Products and Entertainment: album, book, drink, food, game, product, song, movie, tv_show

– Websites: blog, website, article

Please note that Opengraph has no notion of Time and Events. From a user's perspective some businesses could also be referenced as places (bars, etc.).

OpenCalais

OpenCalais lists the following entities7:

– Entities: Anniversary, City, Company, Continent, Country, Currency, EmailAddress, EntertainmentAwardEvent, Facility, FaxNumber, Holiday, IndustryTerm, MarketIndex, MedicalCondition, MedicalTreatment, Movie, MusicAlbum, MusicGroup, NaturalFeature, OperatingSystem, Organisation, Person, PhoneNumber, PoliticalEvent, Position, Product, ProgrammingLanguage, ProvinceOrState, PublishedMedium, RadioProgram, RadioStation, Region, SportsEvent, SportsGame, SportsLeague, Technology, TVShow, TVStation, URL

– Events and Facts: Acquisition, Alliance, AnalystEarningsEstimate, AnalystRecommendation, Arrest, Bankruptcy, BonusSharesIssuance, BusinessRelation, Buybacks, CompanyAccountingChange, CompanyAffiliates, CompanyCustomer, CompanyEarningsAnnouncement, CompanyEarningsGuidance, CompanyEmployeesNumber, CompanyExpansion, CompanyForceMajeure, CompanyFounded, CompanyInvestment, CompanyLaborIssues, CompanyLayoffs, CompanyLegalIssues, CompanyListingChange, CompanyLocation, CompanyMeeting, CompanyNameChange, CompanyProduct, CompanyReorganisation, CompanyRestatement, CompanyTechnology, CompanyTicker, CompanyUsingProduct, ConferenceCall, ContactDetails, Conviction, CreditRating, DebtFinancing, DelayedFiling, DiplomaticRelations, Dividend, EmploymentChange, EmploymentRelation, EnvironmentalIssue, EquityFinancing, Extinction, FamilyRelation, FDAPhase, IndicesChanges, Indictment, IPO, JointVenture, ManMadeDisaster, Merger, MovieRelease, MusicAlbumRelease, NaturalDisaster, PatentFiling, PatentIssuance, PersonAttributes, PersonCareer, PersonCommunication, PersonEducation, PersonEmailAddress, PersonRelation, PersonTravel, PoliticalEndorsement, PoliticalRelationship, PollsResult, ProductIssues, ProductRecall, ProductRelease, Quotation, SecondaryIssuance, StockSplit, Trial, VotingResult

7 A list with examples can be found at http://www.opencalais.com/documentation/linked-data-entities [2011-09-02]


OpenCalais distinguishes between “Entities” and “Facts and Events” on the top level. This is different from our approach of simply calling everything an entity, including facts and events. On a second level OpenCalais introduces a lot of specific types.

LMI Entities

Derived from these three examples, we arrived at a pragmatic approach based on the capability scenarios in SNML-TNG.

– Digital Asset/Media Object: Document, Video, Picture

– Location/Place: Geo-Coordinates, Region, City, Country, ZIP-Code, Street Address, Landmarks, Geographic Entities

– Time/Temporal Information: Date, Time, Cyclic Events, Relative Dates, Range

– People/Person: Person, Actor, Athlete, Author, Director, Musician, Artist, Public Figure

– Event: Sport Event, Public Event, Party, Meeting, Business Conference

– Other Types/Themes: Organisation, Business, Activity, Product, Entertainment, Project, etc.

This chapter will examine the nature and specific quality of these concepts and assign intuitive or meaningful interaction metaphors for each.

Digital Asset (Media Object)

A media object is a digital item for information exchange that is used to transport and store stories, messages and the like. It is a container of information; it tells a story or depicts an event. The content is not explicit to machines and has to be described by metadata (entities, actions, etc.) to be understood. It is the entity that all the metadata is attached to by the Linked Media Interfaces.

The Media Object is the central entity type in media asset management systems. In the case of LMI this can be a video, an image or a document. Other forms of media objects or temporal and spatial regions like single frames or audio layers (video fragments)8 are outside the scope of this report.9

8 “W3C Media Fragments Working Group”, n.d., http://www.w3.org/2008/WebVideo/Fragments/ [2011-09-20]

9 We are aware that there are other forms of multimedia objects that can be stored and annotated. The complexity and additional interaction patterns of these media assets cannot be addressed within the scope of the Linked Media Interfaces Report.


Location

Location is described by geo-coordinates (longitude/latitude) or by a semantic denomination (the name of a bar). A location can also be a region (Salzkammergut), a city or a country, and as such it can also be annotated, for example, by its ZIP code. Other examples of location annotations are street addresses, landmarks and geographic entities (e.g. the Alps).

Time

Usually a time is a date (year, month and day), but it can also be a time of day (hours, minutes and seconds), or a (cyclic) event (spring break, Easter, Christmas). It can also be a relative date (in two weeks, the second year of King George IV), or a range (15th century). Time can also be inherent in a complex event like World War II.

People

The People entity describes human beings (living and dead, real and fictional). They can be figures of public interest, private persons, as well as historic figures. People have properties, birthdays, a job, attend meetings, and there are relationships that can be shown between different people, for example using Open Graph. They can be grouped and categorized (for example by DBpedia categories). But people can also be a product of fiction, like a character in a theatre play or a movie. Finally, people can have roles as well as synonyms, for example player number 10 in a soccer game.

Event

Events are framed by four aspects: what, where, when and who. They contain a name or description, a place where they are located, a time period in which they take place, and a number of participants that are involved or take part in the event.

Themes/Other Objects

Theme is the collective term for the rest of the concepts that can be linked to media resources. This is basically any object that has a well-defined class structure (“feature set”) which can be disclosed to the user (for example to narrow down search results).


GENERAL ASPECTS OF LINKED MEDIA

Typed Links

The basic building block for Linked Media is a typed link. It attaches information to media assets, including information about the quality of the link. This associates descriptions of content (people, events, etc.) or other arbitrary information with a media asset. Rather than just holding the address of a target location, as a hypertext reference does (where the type is inherent), a typed link also contains information about the type of connection that is made between the two objects. For example, it makes a difference whether a scene was “filmed at” a location (e.g. Salzburg) or whether the same location is the “topic” of a story.

Typed links can also be seen as properties of an asset, where the target of a link corresponds to the attribute value and the type to the property.

That additional information can be used for search, but also has to be accounted for during annotation. Linked Media Interfaces usually handle descriptions of the content, which means a type like “IsContent” may be predefined, but link types can also handle authors, origins, technical information, etc.
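The idea of a typed link can be sketched as plain triples; the asset identifiers and link types below are invented for illustration and do not come from the report:

```python
# Illustrative sketch: typed links modelled as (asset, link_type, target)
# triples. All identifiers are made up for this example.
links = [
    ("video:scene-42", "filmed_at", "http://dbpedia.org/resource/Salzburg"),
    ("video:scene-42", "topic",     "http://dbpedia.org/resource/Mozart"),
    ("image:p-07",     "creator",   "http://example.org/person/jane-doe"),
]

def targets_of(asset, link_type, triples):
    """Return all link targets of a given type for one asset."""
    return [t for (a, lt, t) in triples if a == asset and lt == link_type]

# The link type disambiguates: Salzburg is where the scene was filmed,
# not what the scene is about.
print(targets_of("video:scene-42", "filmed_at", links))
print(targets_of("video:scene-42", "topic", links))
```

Because the type travels with the link, search can ask for assets “filmed at” Salzburg without also matching assets that merely mention it.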

This leads to the topic of metadata standards that define and specify properties of media assets. Unfortunately there are not many tools that allow users to work with typed links, and LMI does not fully support these standards at the moment. For further reading about metadata standards please refer to Smith (2006)10 for a general overview and the W3C Media Annotations group for the “Ontology for Media Resource 1.0”11.

Personalisation of Information and DRM

“You affect the world by what you browse” (Tim Berners-Lee)

Not only do media resources disclose information about content and properties, but so do people who access these resources. Every user has their own context (time, country, interests) and (browsing) history. This personal digital footprint can be tracked and used to filter and influence the search results. People also want to manage how results are displayed, store favourites or have their current location and time taken into account in the search.

Based on their role a user will own rights to view certain data. In other situations the system will recommend views. In some cases it is necessary to inform a user about internal processes and make personalisation decisions and recommendations transparent (this movie is recommended to you because you watched it before, because you liked it, because your friends liked it, etc.).

10 Smith and Schirling, “Metadata Standards Roundup.”

11 Werner Bailer et al., “Ontology for Media Resources 1.0”, March 8, 2011, http://www.w3.org/TR/mediaont-10/ [2011-09-20]

Quality of Linked Data and Trusted Sources

Jan Hannemann and Jürgen Kett (2010)12 point out the problems of trustworthiness in an article about the German National Library: “The main problem for the linked data web is dealing with reliability: Is the data correct and do processes exist that guarantee data quality? Who is responsible for it?”

The Linked Open Data Cloud is a mix of community-driven efforts and contributions by cultural heritage institutions like national archives or media organisations. The increase in available information also leads to a loss of control over it. Information may not always prove to be reliable; in the worst case it may be incorrect, incomplete or not available, especially when dealing with community-driven repositories like DBpedia. In fact, original authorship may be ambiguous or simply not traceable.

Halb and Hausenblas (2008)13 name two indicators of quality: provenance and trust. Knowing the provenance of a source can serve as a quality seal, as can knowing the person or organisation that provides the information. With linked media, the media asset and the metadata information can even stem from different sources and be of different quality.

Eventually, with typed links, not only the reference that is linked to but also the quality of the link has to be considered in assessing the overall quality. An image that was taken by a certain photographer is incorrectly annotated if the photographer is associated as the content and not as the creator. And the information remains incorrect, even if the URI of the photographer originated from a trustworthy repository.

For an enterprise media asset management system the trustworthiness of assets can fall into one of three categories. Content and information generated internally by a skilled expert within the organisation could be regarded as of highest quality. Respected cultural heritage repositories, such as the German National Library, with some certificate of reliability but nevertheless external, may fall into a second category, and information derived from user generated content might be classified as a third category of trustworthiness.

With the Linked Media Interfaces the user can set the level of trust and filter content based on these settings. The interfaces can also indicate the trust level, if necessary by colour or by flagging it in some way, and, finally, allow users to obtain source information about the provenance of an asset or its metadata.
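The three trust categories and the filter described above can be sketched as follows; the category constants and asset data are invented for illustration:

```python
# Hypothetical sketch of the three trust categories described in the text;
# a lower category number means a more trusted source.
TRUST_INTERNAL, TRUST_CURATED, TRUST_USER_GENERATED = 1, 2, 3

assets = [
    {"title": "Festival opening", "trust": TRUST_INTERNAL},
    {"title": "Mozart portrait",  "trust": TRUST_CURATED},
    {"title": "Fan snapshot",     "trust": TRUST_USER_GENERATED},
]

def filter_by_trust(items, max_category):
    """Keep only assets whose trust category meets the user's threshold."""
    return [a for a in items if a["trust"] <= max_category]

# A user who accepts internal and curated sources, but not user generated
# content, sets the threshold to TRUST_CURATED.
print([a["title"] for a in filter_by_trust(assets, TRUST_CURATED)])
```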

12 Jan Hannemann and Jürgen Kett, “Linked Data for Libraries”, 2010.

13 Wolfgang Halb and Michael Hausenblas, “select * where { :I :trust :you } How to Trust Interlinked Multimedia Data,” Proceedings of the International Workshop on Interacting with Multimedia Content (2008).


LINKED MEDIA INTERFACES – DESIGN PATTERNS

Design patterns describe solutions and interaction components for common problems and defined contexts. They are building blocks that function as small distinct mini-tools which, in connection with other blocks, form Linked Media Interfaces. For each pattern there is a small introduction of the problem and the context, followed by either existing or generic solutions. Where applicable, a description of best practices is provided, and future directions and ideas are added.

The design patterns are grouped into patterns for search and patterns for annotation. This corresponds to the two main aspects of information interaction (entering metadata and retrieving content).


PATTERNS FOR SEARCH (INCLUDING VISUALISATION)

In this chapter we introduce graphical user interface design patterns that deal with the search and visualisation of media assets. For each pattern we provide a brief description with examples.

Formulating the Query

Text Entry Field

Text entry is elementary for a lot of query interfaces. The characteristics of different implementations are a result not so much of the interaction process as of the features that are included in the search. We want to point out three essential technologies:

Auto-Completion

Auto-completion in Linked Media Interfaces has to meet an additional requirement. Not only should it literally complete a term, it should also arrive at the correct entity and provide the corresponding URI. In a semantic context it is not enough to complete P-A-R to Paris, because there are a lot of meanings/concepts for the term “paris”. The site rdf.freebase.com is an example of a service that delivers RDF identifiers for Linked Open Data concepts.
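The requirement can be sketched as follows: each suggestion carries both a label and a Linked Data URI, so a typed prefix resolves to concrete concepts rather than a bare string. The labels and URIs below are illustrative only:

```python
# Minimal sketch of URI-aware auto-completion. The entity list stands in for
# a service such as rdf.freebase.com; the entries are invented examples.
ENTITIES = [
    ("Paris (France)", "http://dbpedia.org/resource/Paris"),
    ("Paris (Texas)",  "http://dbpedia.org/resource/Paris,_Texas"),
    ("Paris Hilton",   "http://dbpedia.org/resource/Paris_Hilton"),
    ("Parma",          "http://dbpedia.org/resource/Parma"),
]

def autocomplete(prefix, entities):
    """Return (label, uri) pairs whose label starts with the typed prefix."""
    p = prefix.lower()
    return [(label, uri) for (label, uri) in entities
            if label.lower().startswith(p)]

# Completing "Paris" yields three distinct concepts, each with its own URI.
for label, uri in autocomplete("Paris", ENTITIES):
    print(label, "->", uri)
```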

Depending on the implemented auto-completion settings there are differences in the list of suggestions provided, and in whether and how the suggestions are sorted and grouped. Sorting principles can be derived based on the LATCH principle of Richard S. Wurman (2001)14.

Common sorting principles are:

– alphabetic

– most relevant (e.g. most viewed, highest rated, nearest)

– based on a category

– most recent

– order of appearance (in a text)

If the text field belongs to a certain type or theme then of course additional sorting and grouping principles may apply.

Even entities of the same type can be grouped differently. For example locations and places can be grouped by their type (city, mountain, point of interest) or by countries.

14 Richard S. Wurman, Information Anxiety 2, 1st ed. (Que, 2001).


Fig. 2. Grouping auto-completion suggestions based on category or country. Source: Amin et al. (2009), p. 523 (cropped).

Word Stemming

Word stemming is a technique that allows searching for concepts that are built from the same stem or root of a word. For example a search for “fisher” could also deliver results for “fishing” or “fished” or even “fish”. Google search adopted word stemming in 2003.

This technique, as well as error-tolerant search, is only related to Linked Data insofar as it can be based on linked data sources like WordNet15. But the technique is fundamental for good search results.
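The stemming idea can be sketched with a deliberately naive suffix stripper; real systems use proper algorithms such as Porter's stemmer, so this is only a toy illustration:

```python
# Deliberately naive suffix-stripping stemmer, just to illustrate the idea
# that related word forms collapse to a shared stem.
SUFFIXES = ("ing", "ed", "er", "s")

def naive_stem(word):
    """Strip one common English suffix, keeping at least a 3-letter stem."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

# "fisher", "fishing" and "fished" all collapse to the stem "fish".
print({naive_stem(w) for w in ["fisher", "fishing", "fished", "fish"]})  # {'fish'}
```

A search index built on stems then matches all of these word forms against a single query term.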

Error-Tolerant Search

Similar to word stemming, error-tolerant search is only partially related to Linked Open Data. It provides a mechanism that recognizes typing errors and treats them as such, hence delivering results related only to existing concepts of a thesaurus or ontology.
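One common way to implement this mechanism is edit distance: a misspelled query is mapped to the closest existing concept in a controlled vocabulary. The vocabulary below is invented for illustration:

```python
# Sketch of error-tolerant matching against a controlled vocabulary.
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

VOCABULARY = ["Salzburg", "Hamburg", "Strasbourg"]

def best_match(query, vocabulary):
    """Map a (possibly misspelled) query to the closest known concept."""
    return min(vocabulary,
               key=lambda term: edit_distance(query.lower(), term.lower()))

print(best_match("Salzbrug", VOCABULARY))  # Salzburg
```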

Semantic Search

Semantic search is based on the ability of a system to “understand” the meaning of data items. It allows search that is not only based on literally matching these textual data items but also on related meanings and concepts that are not explicitly mentioned in the metadata of an asset. It can also involve techniques of inferencing and reasoning.

The food website Yummly16 offers a combination of semantic search patterns. It includes typed fields (with, without), weighted search, and several categories. The system “understands” that certain ingredients taste salty or sweet and hence provides a Taste slider to set the flavour of a dish.

15 http://wordnet.princeton.edu [2011-09-20]

Fig. 3. Semantic search based on categories and inferred knowledge (taste, nutrition, etc.). Source: http://www.yummly.com [2011-09-20]
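The kind of inferred-property search that the Yummly example illustrates can be sketched as follows; the ingredient/taste data is entirely invented and does not reflect Yummly's actual model:

```python
# Sketch of inference over metadata: taste is not stored on the recipe itself
# but derived from properties of its ingredients. All data is invented.
INGREDIENT_TASTE = {"bacon": "salty", "honey": "sweet", "lemon": "sour"}

recipes = {
    "Bacon pancakes": ["bacon", "flour", "honey"],
    "Lemon tart":     ["lemon", "flour", "sugar"],
}

def recipes_tasting(taste, all_recipes):
    """Return recipes with at least one ingredient of the inferred taste."""
    return [name for name, ingredients in all_recipes.items()
            if any(INGREDIENT_TASTE.get(i) == taste for i in ingredients)]

# "salty" is never annotated on a recipe, yet the search still finds it.
print(recipes_tasting("salty", recipes))
```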

Specify which...

This is a special form of auto-completion, a kind of query precision pattern that asks the user for the precise search term. It can be used for example if several entities of the same type (e.g. city) are available.

16 http://www.yummly.com [2011-09-20]


Fig. 4. Specify which “paris”. Example again from: Amin et al. (2009), p. 523 (cropped)

Facet-Based Querying

One way to formulate queries is to use a special form of facets. Every facet provides an entry field, and in combination these fields formulate a complex query. For example17:

Fig. 5. DBpedia search for 19th century Austrian scientists. Source: http://dbpedia.neofonie.de/browse/rdf-type:Scientist/birthDate-year~:1800~1900/nationality:Austria/birthPlace:Vienna/ [2011-09-20]
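A faceted query of the path style used in the DBpedia browse example above can be composed mechanically from name/value pairs; the helper below is an illustrative sketch, not the actual neofonie implementation:

```python
# Sketch of composing a path-style faceted query from facet name/value pairs.
# The URL scheme mirrors the DBpedia browse example; the helper is invented.
def build_facet_path(base, facets):
    """Join facet (name, value) pairs into a path-style faceted query URL."""
    parts = ["%s:%s" % (name, value) for name, value in facets]
    return base + "/" + "/".join(parts) + "/"

facets = [
    ("rdf-type", "Scientist"),
    ("birthDate-year~", "1800~1900"),
    ("nationality", "Austria"),
    ("birthPlace", "Vienna"),
]
print(build_facet_path("http://dbpedia.neofonie.de/browse", facets))
```

Each additional facet narrows the result set; removing a pair from the list widens the query again.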

17 Find more use cases at http://wiki.dbpedia.org/UseCases.


Fine-Tuning the Query

Did you mean...

A “Did you mean” field with one or a few disambiguation terms is used if there is more than one prominent entity. For example, the Yahoo! image search lists pictures for Vienna (Austria) but also allows switching to Vienna, VA.

Fig. 6. Disambiguation suggestion with Yahoo! image search. Source: http://images.yahoo.com [2011-09-20]

Narrow Properties Sidebar

When search results are delivered, it is possible to narrow them down by applying certain filters. Microsoft Bing, for example, allows narrowing down by language and region.

Fig. 7. Side bar with filters. Source: http://www.bing.com/search?q=salzburg&go=&form=QBLH&filt=all [2011-09-20]

Similarly, a widget could allow the restriction of facet values by checking multiple facets.


Fig. 8. Category filter from “Tagit”. Source: http://tagit.salzburgresearch.at [2011-09-20]

Faceted Result Filtering

This tool allows users to filter results based on facets. An example is Yahoo's image search18: the facets in the left sidebar are created automatically and can be used to narrow down search results based on facets (categories).

Fig. 9. The results for a search for “Salzburg” at Yahoo. Source: http://images.search.yahoo.com [2011-09-20]

Suggested Search Terms (“See also...”)

This pattern adds suggestions of additional search terms to the list of results. They are semantically related vocabularies, for example higher category terms (“Classical music festivals” when you search for Salzburg Festival). The example below is from duckduckgo.com.

18 http://images.search.yahoo.com [2011-09-20]


Fig. 10. Below the quick explanation there are two suggested search terms (“Salzburg” and “Classical music festivals”)

Source: http://duckduckgo.com/?q=salzburg+festival [2011-09-20]

Another pattern suggests similar search terms for a new, related search: In the Iconclass Browser the search for “church” returns “see also: cathedral, chapel, sanctuary, temple”.

Fig. 11. Iconclass suggests search terms similar to the original one (“See also:...”). Source: http://www.iconclass.org/rkd/1/?q=church&q_s=1 [2011-09-20]

Specifying the Content Type

Specifying the content type is a special case of filtering categories. It allows the user to switch between (virtual) repositories of different types (videos, images or documents). It is also possible to use this pattern to search for projects, people or any other themes. The M@RS system by Mediamid19 adds the types “Vehicle model” (Pkw) and “Race Car” amongst the usual candidates (such as images, documents, videos, events, people).

Usually this feature is provided as menu items in a horizontal menu above the search results. In M@RS this is done dynamically.

19 http://www.mediamid.com/hp/mars_6.html [2011-09-20]


Fig. 12. The mediamid M@RS user interface (v6.4.2) shows a dynamically generated type selection menu. Source: http://www.mediamid.com/hp/mars_6.html [2011-09-20]

Search Modifiers

These are modules that modify search queries involving common entities such as location and time, or personal preferences and current context. They are similar to facet-based querying but can be set on a more global level, or work in the background. They can also be applied as filters or influence the sorting of results.

Time

Time can be included in queries in different ways. It can denote the creation of an asset or the last time it was accessed, or it can be associated with a concept (e.g. an event). Selecting the time for a search query is similar to the process of annotating time (see Calendar Picker). A time modifier can also account for the current time and date.


Location

Locations offer several properties for query modification (e.g. position, size, type of location); the most prominent one, the geographic location, is given as longitude and latitude. Queries that include a geographic position may also want to specify a radius within which results may be located.

The tools to specify locations are similar to the tools used for annotation (e.g. Location Picker). But it is also possible to include the current location into the query context.

Fig. 13. Setting a spatial query modifier via ZIP code and range. Source: http://www.mobile.de [2011-09-20]

People, Events and Themes

To make sure a query widget understands what a user wants, it has to interpret basically all entities (people, events, themes) as concepts. Sometimes the concept can be inferred from plain text entry or the context of the query, but sometimes this will involve the same technologies that are utilized in the annotation process (see People, Event and Theme Annotation p. 59ff).

Similar to time and location, it is also possible to modify queries by including particular entities into the query that have to occur in search results. The next example from the Red Bull Content Pool restricts search results to a specific theme.


Fig. 14. Picking a theme (sport) to modify search results. Source: http://www.redbullcontentpool.com [2011-09-20]

Sorting and Grouping Results

Sorting

Search results can be sorted by various properties. YouTube, for example, provides a drop-down menu to sort results by relevance, upload date, view count or rating.

Fig. 15. Sorting choices on YouTube. Source: http://youtube.com [2011-09-20]

Weighted Sorting

Instead of sorting and filtering results based on only one property, sorting can be based on two or more properties in one weighting setting. A search for websites that are similar to a particular one but at the same time popular is implemented by “Moreofit”.20
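The Moreofit-style weighted sort can be sketched with a single slider weight that blends two normalised scores; the site names and scores below are invented:

```python
# Sketch of weighted sorting: one weight blends two normalised scores
# (similarity and popularity). All data is invented for illustration.
sites = [
    {"name": "site-a", "similarity": 0.9, "popularity": 0.2},
    {"name": "site-b", "similarity": 0.5, "popularity": 0.9},
    {"name": "site-c", "similarity": 0.7, "popularity": 0.6},
]

def weighted_sort(items, weight):
    """weight=1.0 sorts purely by similarity, weight=0.0 purely by popularity."""
    score = lambda s: weight * s["similarity"] + (1 - weight) * s["popularity"]
    return sorted(items, key=score, reverse=True)

print([s["name"] for s in weighted_sort(sites, 0.8)])  # ['site-a', 'site-c', 'site-b']
print([s["name"] for s in weighted_sort(sites, 0.0)])  # ['site-b', 'site-c', 'site-a']
```

Moving the slider continuously re-ranks the list between the two extremes, which is exactly the interaction Fig. 16 shows.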

20 http://www.moreofit.com [2011-09-20]


Fig. 16. Moreofit lists websites. Sorting can be weighted continuously either by popularity of the sites or by similarity to the given one.

Source: http://www.moreofit.com [2011-09-20]

Grouping Widget

Grouping of search results works similarly to the grouping methods shown in the auto-completion pattern, but can be applied to any predefined category (e.g. seasons). It allows the user to spot a resource in a list of search results more quickly.
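Grouping by a predefined category can be sketched with the standard library; the result data and the "season" category are invented to match the mockup in Fig. 17:

```python
# Sketch of grouping search results by a predefined category (here: season).
from itertools import groupby

results = [
    {"title": "Ski jump",  "season": "winter"},
    {"title": "Ice rink",  "season": "winter"},
    {"title": "Beach bar", "season": "summer"},
]

def group_results(items, key):
    """Group results by a category attribute for display in separate sections."""
    ordered = sorted(items, key=lambda r: r[key])  # groupby needs sorted input
    return {k: [r["title"] for r in g]
            for k, g in groupby(ordered, key=lambda r: r[key])}

print(group_results(results, "season"))
```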

Fig. 17. Grouping results (mockup) based on seasonal differences (summer vs. winter).

Display of Entities

Ideally, entities include a self-explanation method to be queried by an API. Based on the description, tools can offer widgets for their visualisation. With Microformats21, for example, there is a “Cheat Sheet” that lists the most common entities and their properties22. For Linked Open Data entities there is no similar approach.

21 http://microformats.org [2011-09-20]

22 http://microformats.org/wiki/cheat-sheet [2011-09-20]


In principle, every entity type can have generic visualisation forms. People can always be displayed using an avatar image, the name, maybe their profession, or current status/location. Having predefined properties will allow designers to create “styles” for the display of the same entity type. These display forms can be implemented in views, in overview, or displayed with mouse-over, etc.

Quick View

Quick view displays the very basic information about an entity in a frame or pop-up window. It contains links to navigate to more detailed information. The example from duckduckgo provides information about Sebastian Vettel.

Fig. 18. Displaying information about Sebastian Vettel. Source: https://duckduckgo.com/?q=sebastian+vettel [2011-09-20]

Extended View (Mash-Up)

The extended entity mash-up tries to display all the available information of an entity in a structured way.

Fig. 19. A “sig.ma” is a mash-up of all available structured information. Source: http://sig.ma/search?q=sebastian+vettel [2011-09-20]


Specifying Display Details

The configuration settings determining which details to display can be manually adjusted by a type manager, as can be seen in the mediamid M@RS user interface.

Fig. 20. The mediamid M@RS user interface (v6.4.2) provides a display configuration tool. Source: http://www.mediamid.com/hp/mars_6.html [2011-09-20]

People Entity Visualisation

This tool integrates information about people into an interface or application. People will be displayed with their key features and information: for example, profession, origin, personal information and current location can be displayed. An interactive widget will behave in a familiar way regardless of the interface context and enable access to relevant information.


Event and Theme Entity Visualisation

These widgets display the properties of common entities in standardized ways. For example event visualisation will always include title, date, location and participants. With themes there are domain-specific ways to visualize these entities; for example a “Project” theme will look similar to the event visualisation (title, schedule, people, current tasks, etc.).

Fig. 21. Standard visualisation of a Project theme displaying information about a research project. It includes pre-defined properties (Title, Acronym, Duration, etc.)

A real-world example comes from the “Microformats for Google Chrome” plugin that detects microformat entities and displays them in a popup window.

Fig. 22. Visualising an Organisation (Collective Idea), including contact information and location. Source: http://michromeformats.harmonyapp.com [2011-09-20]


Display of Results

Timeline Widget

A timeline widget places information on a 2D map, illustrating events chronologically.

Fig. 23. The Simile Timeline Widget is an interactive display. In this example it displays the events related to the assassination of John F. Kennedy. Source: http://www.simile-widgets.org/timeline [2011-09-20]

Timeplot Widget

Similar to the timeline widget, a timeplot visualisation places dates and time ranges on a timeline. Timeplot is more suited to the presentation of numerical data.

Fig. 24. A Simile Timeplot illustrating the number of new permanent residents in the U.S. per year. Source: http://www.simile-widgets.org/timeplot [2011-09-20]


Location Widget

A location widget takes the geographic information of a location and places it on a map. Google Maps is the most prominent example of a location widget.

Fig. 25. Displaying bars in Salzburg on Google Maps. Source: http://maps.google.com [2011-09-20]

Display of Large Amounts of Results

Tools like mosaics and video walls display a large amount of images.

Fig. 26. Cooliris is a Firefox plugin that displays for example Google image results on an interactive 3D wall that can be scrolled.

Source: http://www.cooliris.com [2011-09-20]


Fig. 27. Medienfluss is an installation that arranges videos in a constantly moving data stream. Source: http://medienfluss.netzspannung.org [2011-09-20]

Advanced Search

The “Advanced Search” section lists search patterns that involve search concepts based on knowledge models that include reasoning and inferencing, as well as results that are derived by combining information from heterogeneous data sources.

Feature Search

This describes a search query based on properties (“features”) of entities. For example a book can be written by someone or about someone. It is not enough to simply link a certain person to the book; instead the person is linked to a feature of the book (“written by”).

If a user searches for books written by Bill Clinton, the results should bring up his memoir (as opposed to results that were written about him).
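The distinction can be sketched by searching on one feature instead of any link to the person; the small catalogue below is invented for illustration:

```python
# Sketch of feature search: a person is linked to a specific feature of the
# book ("written_by" vs "about"), not to the book as a whole. Data invented.
books = [
    {"title": "My Life",      "written_by": "Bill Clinton",   "about": "Bill Clinton"},
    {"title": "The Survivor", "written_by": "John F. Harris", "about": "Bill Clinton"},
]

def books_where(feature, person, catalogue):
    """Search on one named feature only, instead of on any link to the person."""
    return [b["title"] for b in catalogue if b.get(feature) == person]

print(books_where("written_by", "Bill Clinton", books))  # only the memoir
print(books_where("about", "Bill Clinton", books))       # both books
```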


Fig. 28. Distinguishing searches for documents about and written by Bill Clinton.

A working example of this pattern can be found at dbpedia.neofonie.de. Depending on the entity type (“item type”) the sidebar discloses features of the type in facets that can be used as additional query refinement. The item type “plant” produces a different set of properties than the type “organisation”. With this approach it is possible to search for organisations based on date of incorporation (formation date).


Fig. 29. Features generated by the entity type: “Plant” on the left side vs. “Organisation” on the right. Source: http://dbpedia.neofonie.de [2011-09-20]


Query Builder

Similar to the feature search, but more detailed, are tools that help create complex queries. Tools like the DBpedia Query Builder make use of the RDF “predicate” option, allowing queries for different link types. For example the search term “winner” will allow combining an event and an athlete and provide a logical switch between IS_winner and ISNOT_winner.

The Wikipedia example (DBpedia Query Builder) shows the combination of different aspects of cities.

Fig. 30. DBpedia Query Builder offering a complex search interface. Source: http://querybuilder.dbpedia.org [2011-09-20]

Another example is the KIWI Query Builder (visKWQL, see Hartl et al., 2010).

Fig. 31. A visKWQL query. Source: Hartl et al. (2010), p. 1255


Associative Search

Metadata can describe the content of an asset (e.g. a mother with two children in a refugee camp) or its statement (the plight of refugees). Statements are more difficult to annotate and search and may call for a special associative search field.

Such a field lets users enter high-level search terms or even emotions (for music); the result is a list of related and associated matches. It incorporates concepts of closeness beyond the most common “narrower”, “broader” and “related” relations, which are therefore not covered by standard search fields.

This search interface allows the entry of keywords and finds directly and indirectly related results. It could be useful for research activities.

Fig. 32. Using a tag cloud as a search input.

Trust Indicators

The goal of trust indicators is to tell users how reliable the given information is. This can either be achieved by displaying source information (as a literal or as an icon) or by indicating the level of trustworthiness.

Provenance

Provenance can be indicated by adding icons of origin to information or by adding a reference link. Sig.ma, for example, adds provenance information as footnotes and lists all sources in a separate table.
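The footnote style can be sketched by storing each statement together with its source and numbering the sources in order of first appearance. The statements and source URLs below are invented examples, not Sig.ma's actual data model.

```python
# Footnote-style provenance display: every statement carries its source;
# statements are rendered with a footnote number and the sources are
# collected into a separate, ordered table. Data is illustrative.

statements = [
    ("Sebastian Vettel", "team", "Red Bull Racing", "http://dbpedia.org"),
    ("Sebastian Vettel", "born", "1987-07-03",      "http://example.org/f1"),
]

sources = []   # ordered, de-duplicated source table
lines = []     # rendered statements with footnote markers
for s, p, o, src in statements:
    if src not in sources:
        sources.append(src)
    lines.append(f"{s} {p} {o} [{sources.index(src) + 1}]")
```

Rendering the `sources` list as a separate table then gives exactly the "footnotes plus source tab" layout described above.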


Fig. 33. Sig.ma indicates provenance information as footnotes and in a separate sources tab. Source: http://sig.ma/search?q=sebastian+vettel [2011-09-20]

Source Rating

If the source of the information is not known, or if it need not be shown in detail, trustworthiness can still be indicated by flagging the content or by applying a rating system.

User-driven trust ratings appear, for example, on shopping sites like amazon.com, where users write product evaluations and rate them, which in turn indicates the reliability level: “this number of users found the information useful”.

Within an enterprise context, a user may simply want to indicate whether information is derived from internal sources or from outside the company.

Fig. 34. Mockup of trust level indication with a flag.


Transparent Recommendation

Similar to the indication of the trust level, it may be of interest for the user to know why assets are recommended. YouTube started adding the line “because you watched” to its recommended video clips. At the same time, recommendations could be chosen for other reasons (because you watched, because you liked, because your friends like, because you subscribed to the channel, etc.).

Fig. 35. YouTube indicates the reason for recommendations (“Because you watched”). Source: www.youtube.com [2011-09-20]

Content Summary

These tools either summarize a single entity or a group of entities.

Video Summary

Video Overview

Yovisto23 features a nice interface that provides keyframes related to shot lengths together with a frame-synchronized tag overview. This example shows a video result for the term “Paris”. The upper footer bar shows tags; those that contain “Paris” are marked red. The lower footer bar shows the keyframes (stripes indicate the length of shots). Tags and keyframes can be viewed by pointing the mouse cursor at the particular spots.

23 http://www.yovisto.com [2011-09-20]


Fig. 36. Tag and keyframe overview from yovisto.com. Source: http://yovisto.com [2011-09-20]

Stripe Image

“A stripe image is an even more compressed representation of the original video than the keyframes are. It is created by adjoining the middle vertical column of pixels of every video frame.” (Rehatschek & Kienast, 2001)
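The construction quoted above can be sketched in a few lines: take the middle vertical pixel column of every frame and adjoin the columns side by side, so the x-axis of the result is the time dimension. The tiny grey-value frames below are invented for illustration.

```python
# Stripe image sketch: one output column per frame (time axis),
# taken from the middle vertical pixel column of that frame.

def stripe_image(frames):
    """frames: list of equally sized 2D pixel arrays (lists of rows)."""
    width = len(frames[0][0])
    mid = width // 2                 # index of the middle column
    height = len(frames[0])
    # row y of the stripe collects pixel (y, mid) of every frame
    return [[frame[y][mid] for frame in frames] for y in range(height)]

frames = [
    [[0, 10, 0], [0, 20, 0]],       # frame 1: middle column (10, 20)
    [[0, 30, 0], [0, 40, 0]],       # frame 2: middle column (30, 40)
]
stripe = stripe_image(frames)       # height x number-of-frames
```

For real video one would do the same with an image library over decoded frames; the principle is unchanged.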

Fig. 37. Stripe image; the x-axis represents the time dimension. Source: Rehatschek & Kienast (2001)

Keyframe Panel

Keyframes are still images of a video that represent a cut or scene. “The keyframe panel displays the 'storyboard' of the video. Keyframes are extracted by the basic video analyzer plug-in. These keyframes give a compressed overview about the content of the video.” (Rehatschek & Kienast, 2001)


An example of a keyframe panel is the VideoSurf Firefox add-on, which enhances the search results of common video search engines by adding a bar of keyframes.

Fig. 38. The Firefox plugin of VideoSurf adds a keyframe panel to the search results for Sebastian Vettel on YouTube.

Source: https://addons.mozilla.org/en-US/firefox/addon/videosurf-videos-at-a-glance/ [2011-06-27]

Entity Index

An entity index lists all entities of a certain type that appear in a webpage or media asset, or that are related to a group of assets. The index can be sorted alphabetically; it helps the user get a better overview, find entities, and follow links to more detailed descriptions.

Fig. 39. An index listing the event entities of a website. Source: http://michromeformats.harmonyapp.com [2011-09-20]

Information Overview on Large Data Sets

These are tools that allow users to draw information from the total results available. Users may find patterns in the results through the process of data visualisation and data interaction.

There are standard sets of methods for data visualisation. Flare24, for example, is an ActionScript library created by the UC Berkeley Visualization Lab. The algorithms are generic and can be used to display different types of results. Display methods include tree, force, indent, radial, circle, dendrogram, bubbles, circle pack, icicle, sunburst, treemap, timeline, scatter, bars and pie.

24 http://flare.prefuse.org [2011-09-20]


Fig. 40. Displaying the appearance of Formula 1 drivers in video results in a tree map

Storing Searches and Results

Sometimes it is useful to store complex searches. Working through a long list of results can take several days and may require re-accessing the same list at a later stage of research. It may also be useful to store the list of results in order to share it with collaborators.

This pattern is found for example in selection baskets and shopping carts.

Fig. 41. A “Collection Basket” is used to store and share search results. Source: Screenshot from mARCo, the research and production tool of the Austrian Broadcasting Organisation/ORF [2011-09-20]


Reports

Summary Report

Reports provide a structured overview of a list of items. They allow the user to display results in a particular representation of the user's choosing.

A typical report would be a collection of all media assets that were requested by certain users or divisions, or of assets that were sold to a certain region. Queries could also be used to compile a generic annual overview of all internal documents, deliverables and publications related to a certain theme. The results can be listed in chronological order and grouped by document type.

Fig. 42. Reporting the publications of a particular year related to a particular theme.

Automated Content Extraction

For services such as Electronic Program Guides (EPGs), information is automatically extracted in an exportable format. This information could be pulled from the Linked Open Data cloud as well.


Fig. 43. Electronic Program Guide mockup that pulls short info and a 3-star rating from a (hypothetical) online repository.

Enhancement

Entity Enhancement

This tool pulls data from the Linked Open Data cloud to add additional information to existing entities. Text can be enhanced with icons or images, but the approach also works, for example, with personal details of politicians, athletes, actors, etc., which are added as text boxes or similar.


Fig. 44. Example of enhancing information about Jonathan Stephens, the Permanent Secretary of the Department for Culture, Media and Sport. Data provided by data.gov.uk.

Media Enhancement

Metadata can be used to generate additional information during the display of a media resource. Names of people can be inserted, hyperlinks can be created in regions of a video, etc.

The triggers for enhancements can also be included in the media content itself. Mozilla Popcorn is an example of pulling Wikipedia information in real time during video playback.


Fig. 45. Mozilla Popcorn displays additional information in separate windows during video playback. Source: http://webmademovies.etherworks.ca/popcorndemo [2011-09-20]

Another example is SoundCloud, which shows user comments during playback.

Fig. 46. Showing user comments in SoundCloud during music playback. Source: http://soundcloud.com [2011-09-20]


PATTERNS FOR ANNOTATION

Editor tools for metadata annotation allow the specific description of media content. With Linked Media interfaces, descriptions point to entities of the Linked Data Cloud through typed links. Hausenblas et al. (2008) call the process of linking data items to entities of the Linked Open Data Cloud “interlinking”: the act of semantically enriching content based on a uniquely identifiable description method (e.g. RDF). This assumes that resources can easily be retrieved from the Linked Open Data Cloud. The interlinking of particular words and phrases to entities of the Linked Data Cloud happens mostly in the background.

General Annotation Based on Text Entry

The patterns introduced in this section show graphical examples in which text is entered via keyboard. The techniques applied in the background (semantic lifting, auto correction, and the possibility to use abbreviations and shortcuts) are part of the graphical user interfaces but are not elaborated in detail.

Auto Complete

This widget provides autocompletion during typing. Data is populated from an indexed controlled vocabulary. It features different methods of auto-completion, including drop-down boxes, etc.
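The core of such a widget, looking up prefix matches in an indexed controlled vocabulary, can be sketched with a sorted list and binary search. The vocabulary terms below are invented examples in the spirit of the clean-energy domain mentioned next.

```python
# Prefix auto-completion over a sorted controlled vocabulary.
# Binary search finds the first candidate; a linear scan collects
# all terms sharing the prefix, up to a suggestion limit.
import bisect

VOCABULARY = sorted(
    ["biogas", "biomass", "solar energy", "solar panel", "wind power"]
)

def complete(prefix, vocabulary=VOCABULARY, limit=5):
    start = bisect.bisect_left(vocabulary, prefix)
    out = []
    for term in vocabulary[start:]:
        if not term.startswith(prefix):
            break                      # sorted order: no more matches
        out.append(term)
        if len(out) == limit:
            break
    return out
```

The sorted-list index keeps each keystroke's lookup at O(log n) plus the handful of returned suggestions, which is what makes completion feel instant while typing.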

Fig. 47. Auto completion example from reegle.info; suggestions come from a controlled vocabulary related to the domain (clean energy) of the site. Source: http://www.reegle.info [2011-09-20]


Rich Text Editor

The rich text editor recognizes entities in text files. It allows users to link words in a text to Linked Open Data entities.

Fig. 48. Rich Text Editor mockup.

It scans text during entry or after entry and provides suggestions for links to resources and entities. It also enables the user to select text manually. The tool will link to existing concepts or create new ones. As soon as a link is confirmed, the user can benefit from features such as quick info. Colours or icons may indicate the entity type (e.g. person, event) or the source of the concept (see Fig. 48). A tool like this is provided by DBpedia Spotlight.
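The background scan can be sketched as a naive gazetteer lookup that wraps recognised phrases in links to Linked Data entities. A real annotator such as DBpedia Spotlight also scores and disambiguates candidates; the gazetteer and the plain string replacement here are deliberate simplifications.

```python
# Naive entity linking: wrap known phrases in anchors pointing to
# Linked Data URIs. The gazetteer is an invented stand-in for a real
# entity index; no disambiguation is attempted.

GAZETTEER = {
    "Bill Clinton": "http://dbpedia.org/resource/Bill_Clinton",
    "Paris": "http://dbpedia.org/resource/Paris",
}

def annotate(text, gazetteer=GAZETTEER):
    for phrase, uri in gazetteer.items():
        text = text.replace(phrase, f'<a href="{uri}">{phrase}</a>')
    return text

html = annotate("Bill Clinton visited Paris.")
```

Plain `str.replace` would misfire on overlapping or nested phrases; a production tool works on tokenised text with candidate scoring instead.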

Fig. 49. Clicking the “Annotate” button returns a text with linked DBpedia concepts. Source: http://www5.wiwiss.fu-berlin.de/SpotlightWebApp/index.xhtml [2011-09-20]

Fig. 50. The result links words to DBpedia entities. Source: http://www5.wiwiss.fu-berlin.de/SpotlightWebApp/index.xhtml [2011-09-20]


Location Annotation

Location Picker from Map

This tool lets the user select a place, city, etc. on a 2D map. Examples of other locations include places (such as pubs, theatres, sites) and geographical names (such as cities, mountains, bodies of water).

The ÖBB online service uses a button that lets the user open a map with all stations. Additional features like train stations, bus stops or parking areas can be displayed optionally.

Fig. 51. The ÖBB station picker optionally shows additional information. Source: http://fahrplan.oebb.at [2011-09-20]

Creating a New Placemark

To create a new placemark, location or point of interest, a user can use a map to locate the spot and name it. Alternatively, the user can enter geo-location data.


Fig. 52. Adding a new location by dragging the mouse cursor or entering geo-location data

Location Differentiation Based on Category

It is possible to label different types of locations, for example with icons for bus stations, subway stations and railway stations, as seen in the ÖBB example.

Fig. 53. Based on the type (subway station or a general landmark), two different entities are presented. Source: http://fahrplan.oebb.at [2011-09-20]

Annotation of Time

Annotation of time can be sophisticated. Besides fixed dates, time stamps may include ranges (World War 2, Medieval) or cyclic events (Christmas, Valentine's Day, Monday, etc.) as well as relative and hypothetical times (e.g. two months after the release, a year after his death).

Calendar Picker

The calendar picker is used to choose a fully defined date.

Fig. 54. Calendar picker tool from Yahoo. Source: http://travel.yahoo.com [2011-09-20]

People, Event and Theme Annotation

People Tagging

This process is supported by automated face detection or, in an advanced version, by face recognition. Face shapes are pre-selected and annotated by the user. On a more powerful system, a person's ID is already suggested by the computer. A feature to tag people is implemented in various applications and services like Facebook, iLife/iPhoto by Apple or face.com.


Fig. 55. Face tagging demo from Mozilla DemoStudio. Source: https://developer.mozilla.org/en-US/demos/detail/facial-recognition-and-analytics-with-html5s-video [2011-09-20]

Sorting of people can be done according to how “close” they are to the user, based on a social graph. Other possibilities for sorting are geographic closeness or grouping by categories (such as school, team, family).
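One plausible reading of social-graph “closeness” is breadth-first hop distance from the current user, which yields a natural sort order. The graph below is an invented example; real systems weight edges by interaction frequency and more.

```python
# Sort people by closeness in a social graph: a breadth-first search
# from the user visits direct friends first, then friends-of-friends,
# and so on. The graph is illustrative.
from collections import deque

GRAPH = {
    "me":    ["anna", "ben"],
    "anna":  ["me", "carla"],
    "ben":   ["me"],
    "carla": ["anna"],
}

def closeness_order(user, graph=GRAPH):
    """People reachable from `user`, nearest (fewest hops) first."""
    seen, order, queue = {user}, [], deque([user])
    while queue:
        current = queue.popleft()
        for friend in graph.get(current, []):
            if friend not in seen:
                seen.add(friend)
                order.append(friend)
                queue.append(friend)
    return order
```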

@-Sign

This pattern uses a notation known from Twitter and other social networking services. It is used to prefix a user name.

Hashtag (#-Sign)

Events and themes in general can always be picked from a set of choices or denoted by shortcuts or key combinations.

But similar to the @-sign for people, it is also possible to use the hash-sign (#) to prefix an event. The resulting tag is called a hashtag and can be regarded as a shortcut for the URI.

In some environments a colon (:) is used instead of the hash sign. Placed in front of a term, it again indicates a concept or stands for a URI (“:Sebastian_Vettel”).
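The three prefix notations amount to a small mapping from sigil to URI namespace. The base URIs below are invented for illustration; in a real deployment they would point to the system's user, event and concept namespaces.

```python
# Resolve @-, #- and :-prefixed tags in free text into URIs.
# The base URIs per prefix are illustrative assumptions.
import re

PREFIX_BASE = {
    "@": "http://example.org/user/",     # people
    "#": "http://example.org/event/",    # events / hashtags
    ":": "http://example.org/concept/",  # concepts
}

def resolve_tags(text):
    """Return (prefix, term, uri) for every tag found in the text."""
    tags = re.findall(r"([@#:])(\w+)", text)
    return [(p, term, PREFIX_BASE[p] + term) for p, term in tags]

found = resolve_tags("Watching #Monaco_GP with @alice (:Sebastian_Vettel)")
```

The tag is thus a human-typeable shortcut; the URI it expands to is what actually gets stored as the annotation.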

Selection and Picking of Vocabulary

Cascading List

The values in the different columns may affect each other. The pattern shows how the font properties are split up into separate lists (TextEdit application).


Fig. 56. Font dialog box from the TextEdit application of Mac OS X. Source: Screenshot of the TextEdit application of Mac OS X

Vocabulary Picker with Images

As additional help, search terms can be combined with images or stock icons; last.fm implemented such a widget.

Fig. 57. Last.fm search widget. Source: http://www.last.fm [2011-09-20]


Vocabulary Picker with Differentiation/Disambiguation

Especially for terms with different meanings, it may be useful to combine text entry with a method of differentiation, for example a vocabulary picker that lists the different possible meanings and gives a short semantic explanation that allows differentiation.

Fig. 58. Disambiguation by short explanation.

DuckDuckGo implemented both these ideas, including icons and grouping of results based on types (people, geography, botany, film and television, music, etc.).


Fig. 59. DuckDuckGo results. Source: http://www.duckduckgo.com [2011-09-20]


Grid Selection

Representations of controlled vocabulary can be placed on a 2-dimensional plane similar to a map, or on any other spatial object. This works especially well if concepts are represented by icons or images. They are placed as items on a grid, as limbs of a body, or as herbs in a garden. Similar to geographic maps, such selections are based on the location of a concept in 2- or 3-dimensional space.

An example of a grid selection is the symbol palette in a text editor. The symbols are always at the same position, like on a map.

Fig. 60. Searching for an icon on a 2D grid is an example of a location-based search method. Source: Screenshot from the TextEdit application of Mac OS X.

Patterns for Ontology Management

Conceptual Mapping

Converting existing metadata into a set of linked data can be a tedious task. Nevertheless, in many cases multimedia archives need to map large amounts of individually annotated resources to uniquely identifiable concepts. The W3C and others suggest mappings between the different metadata fields (e.g. creator, artist).

PoolParty includes a mapper that provides an immediate overview of a Linked Data source and lets the user map concepts of one particular set of controlled vocabulary to another, existing one in different ways (e.g. Exact Match).
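Such a mapping boils down to typed correspondences between fields of two vocabularies, in the spirit of SKOS's `exactMatch`/`closeMatch` relations. The field names and match types below are illustrative, not PoolParty's actual data model.

```python
# Typed mappings between metadata vocabularies: each (source, target)
# pair carries a match type, so an "exact" lookup can be separated
# from looser correspondences. Field names are invented examples.

MAPPING = {
    ("dc:creator", "ebucore:creator"): "exactMatch",
    ("dc:creator", "xmp:Artist"):      "closeMatch",
}

def mapped_fields(field, mapping=MAPPING, match_type="exactMatch"):
    """Target fields that `field` maps to with the given match type."""
    return [dst for (src, dst), m in mapping.items()
            if src == field and m == match_type]
```

Keeping the match type explicit lets a migration script convert `exactMatch` pairs automatically while flagging `closeMatch` pairs for human review.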

Fig. 61. PoolParty's Concept Mapper. Source: http://poolparty.punkt.at [2011-09-20]

Category Adder

This tool is useful when a new term is coined for a category, for example “organic food”. None of the existing entities are labelled that way, but to some extent the category can be inferred, or at least narrowed down, by filtering by other categories.

Another possibility is to add new categories when a user adds new assets. In TagIT the user can add subcategories when adding a new point of interest.


Fig. 62. By entering text into field (b), the user can add a new sub-category. This new category can also be applied to other parent categories. Source: http://tagit.salzburgresearch.at [2011-09-20]

Crowd-Sourced Annotation

These patterns of annotation are used to receive asset annotations from a large number of users (the crowd). These users are not skilled in the way that authorized information workers such as archivists are, but they still deliver valuable contributions. Aside from classical tagging, which often does not involve controlled vocabulary, there are also tools that do not require particular domain knowledge.

Create Context

Users can intentionally create a context for their work environment. This will influence keywords, suggested entities, search, etc.

A simple way to create a context would be to drag collections of digital information material (e.g. Word documents, PDF files, meeting minutes) into an “extract” container and create a context by parsing the documents. A list of keywords would be lifted, interlinked to Linked Data concepts, and then displayed in a sidebar.

Rating Systems

A rating system as implemented by Amazon collects user feedback on books, articles, etc.

Fig. 63. Amazon rating of products. Source: http://www.amazon.com [2011-09-20]


“I know more” Button

This idea was introduced by the riese project (Hausenblas et al., 2008). It allows users to add “User Contributed Interlinking” to metadata. Additionally, the context of the data item from which the button is launched can be taken into account to preselect states of the interlinking process. Since archivists and information editors are usually concerned with quality assurance, including the persistent usage of concepts, this may involve monitoring and approval by authorized personnel.

Other Annotation Tools

Real-Time Video Annotation

This tool works like a live chat. It creates time-stamped annotation snippets. The user types a short annotation and stores it by pressing “Enter”. Depending on the context, concepts are recognized automatically.

Fig. 64. Entering annotaton in real tme during video playback

The real-time annotation can be supported by additional tools. For a sports event, the current athletes may be shown in a sidebar, or different players of a soccer game may be associated with keyboard shortcuts. Events can be triggered manually (e.g. lap numbers in a Formula 1 race, athletes of a skiing race) or automatically (weather conditions, telemetric data).

Single concepts or groups of concepts may even be provided as pre-set buttons inside or next to the annotation window. This helps to speed up live annotation. Pre-sets may include certain people that appear in an event, certain actions, etc.
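The core mechanism of the pattern, time-stamped snippets logged against the playback position, can be sketched as follows. The field names and sample annotations are invented for illustration.

```python
# Real-time annotation log: each snippet is stored with the playback
# position at which it was entered, so annotations can later be
# replayed in sync with the video.

def make_annotator():
    log = []
    def annotate(position_sec, text):
        """Store one time-stamped annotation snippet."""
        log.append({"t": position_sec, "text": text})
        return log[-1]
    return annotate, log

annotate, log = make_annotator()
annotate(12.4, "Vettel takes the lead")
annotate(95.0, "lap 2")
```

Pre-set buttons would simply call `annotate` with a canned text (or concept URI), which is what makes them faster than typing.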

Tag Recommender

This tool suggests new tags based on text extraction or user entry, for example as soon as a first tag is entered. The recommended tags are related to the first one and allow a refinement of the topic.
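Co-occurrence over past annotations is one plausible way to realise such a recommender: after the first tag is entered, tags that frequently appeared together with it are suggested. The tagging history below is an invented example, not how any specific product computes its suggestions.

```python
# Co-occurrence tag recommender sketch: count how often other tags
# appeared together with the entered tag in earlier annotations and
# suggest the most frequent companions. History is illustrative.
from collections import Counter

HISTORY = [
    {"formula1", "vettel", "monaco"},
    {"formula1", "vettel", "redbull"},
    {"soccer", "salzburg"},
]

def recommend(tag, history=HISTORY, limit=3):
    counts = Counter()
    for tagset in history:
        if tag in tagset:
            counts.update(tagset - {tag})   # companions of `tag`
    return [t for t, _ in counts.most_common(limit)]
```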


PoolParty offers a semi-automated tag recommender to classify texts and to allocate them with concepts.

Fig. 65. PoolParty's Tag Recommender. Source: http://poolparty.punkt.at [2011-09-20]

Completeness Feedback

Similar to the trustworthiness of a document, a user may also be interested in the completeness and amount of available information related to a certain entity. Feedback can be given by percentage numbers or progress bars. In the example below, weather icons indicate the state of an article: whether it is new, improved, tagged, reviewed, etc.


Fig. 66. Weather icons indicate the state of an article (from clouds: missing information, to sunny: reviewed). Source: http://www.newmedialab.at/projekte/interedu [2011-09-20]

Embedded Annotation Entry

This is a way to annotate content within the workflow of some other process, for example allowing a user to enter the participants of an event during the annotation of a video clip:

Imagine the live coverage of a skiing event. During the event, a reporter types in the names of the athletes in real time. This information is not only stored as meta-information of the video, but also creates and edits the participants of the skiing event. That means the editor tool for the event is embedded in a real-time annotation tool, but feeds into the event entity.


BUNDLED PACKAGES

Several bundled packages are available. First of all, we want to introduce two technologies from the SNML-TNG's partners: PoolParty and M@RS.

PoolParty

PoolParty25 is a professional thesaurus management system and a SKOS editor for the Semantic Web, including text mining and linked data capabilities. The system helps enterprises to build and maintain multilingual thesauri, providing an easy-to-use interface. The PoolParty server provides semantic services to integrate semantic search or recommender systems into enterprise systems like CMSs, web shops, or wikis.

M@RS

M@RS26 is a Media Asset Management solution for the intelligent organisation and distribution of media assets. It offers extensive search functions and sophisticated user and access management that ensure clear structures and lucidity. It supports features such as multi-mandator capability, multilingualism, mass imports, versioning and workflow support.

An integrated thesaurus supports different notations (e.g. Photo/Picture), synonyms and abbreviations and offers a comfortable and efficient research tool.

More Video Annotation Tools

The LIVE “Report on Live Human Annotation”27 provides an overview of existing video annotation tools, including Anvil, ELAN (EUDICO Linguistic Annotator), M-OntoMat-Annotizer, Vannotea, ViPER-GT, VIDETO (Video Description Tool), Frameline 47 Video Notation, VideoLogger and the Efficient Video Annotation (EVA) System. All these tools provide graphical user interfaces to annotate and display video metadata.

The following two sections introduce two more examples: the first is an annotation tool, the second a semantic video search engine.

25 http://poolparty.punkt.at [2011-09-20]

26 http://www.mediamid.com/hp/mars_6.html [2011-09-20]

27 http://www.ist-live.org/intranet/iais029 [2011-09-20]


Video Content Annotation: Vizard Annotator

Costa et al. (2002) introduce the “Vizard Annotator”, a video publishing tool that includes a video annotation module. The module allows users to add information in one of three “annotation tracks”: “Transcript” (transcription of spoken language), “Script” (description of the storyline), and “In Shot” (content that appears in the shot). The interface includes a video player with common VCR (Video Cassette Recorder) controls.

Fig. 67. VAnnotator showing the movie player and three annotation tracks. Source: Costa et al. (2002), p. 285

Video Semantic Search: Jinni

Jinni28 is an online project “to describe video in more richer forms”. It is based on semantic technology and features searches for plot, mood, style and more.

28 htp://www.jinni.com/discovery.html [2011-09-20]


Fig. 68. A Jinni search for “car chase” makes use of semantic technology to find TV clips where the plot includes a car chase. Source: http://www.jinni.com/discovery.html [2011-09-20]


SUMMARY

The current list is a compilation of design patterns showing a variety of user interfaces in the Linked Media domain. As the field of user interaction evolves and new design patterns emerge, the current list can only serve as a starting point.

We hope, however, that the large set of use cases will both guide and stimulate developers and interface designers to provide more engaging and meaningful user interaction with Linked Media.

SNML-TNG is planning to implement many of the design patterns, provide code examples, and share implementations as generic widgets. We appreciate your feedback and contributions.


REFERENCES

• Amin, A., M. Hildebrand, J. Van Ossenbruggen, V. Evers, and L. Hardman. “Organizing suggestions in auto-completion interfaces.” Advances in Information Retrieval (2009): 521–529.

• Bailer, Werner, Tobias Bürger, Véronique Malaisé, Thierry Michel, Felix Sasaki, Joakim Söderberg, Florian Stegmaier, and John Strassner. “Ontology for Media Resources 1.0”, March 8, 2011. http://www.w3.org/TR/mediaont-10/.

• Berners-Lee, Tim, James Hendler, and Ora Lassila. “The Semantic Web.” Scientific American (May 2001).

• Bizer, Christian, Tom Heath, and Tim Berners-Lee. “Linked Data – The Story So Far.” International Journal on Semantic Web and Information Systems, Special Issue on Linked Data (in press).

• Bizer, Christian, Anja Jentzsch, and Richard Cyganiak. “The State of the LOD Cloud.” Version 0.2, March 28, 2011. http://www4.wiwiss.fu-berlin.de/lodcloud/state [2011-09-20].

• Costa, M., N. Correia, and N. Guimaraes. “Annotations as multiple perspectives of video content.” In Proceedings of the Tenth ACM International Conference on Multimedia, 283–286, 2002.

• Halb, Wolfgang, and Michael Hausenblas. “select * where { :I :trust :you } How to Trust Interlinked Multimedia Data.” Proceedings of the International Workshop on Interacting with Multimedia Content (2008).

• Hannemann, Jan, and Jürgen Kett. “Linked Data for Libraries”, 2010.

• Hartl, A., K. Weiand, and F. Bry. “visKWQL, a visual renderer for a semantic web query language.” In Proceedings of the 19th International Conference on World Wide Web, 1253–1256, 2010.

• Hausenblas, M., W. Halb, and Y. Raimond. “Scripting User Contributed Interlinking.” In 4th Workshop on Scripting for the Semantic Web (SFSW08), Tenerife, Spain, 2008.

• Kosch, H., L. Boszormenyi, M. Doller, M. Libsie, P. Schojer, and A. Kofler. “The life cycle of multimedia metadata.” IEEE Multimedia 12, no. 1 (March 2005): 80–86.

• Pipek, V., M. Rohde, R. Cuel, M. Herbrechter, M. Stein, O. Tokarchuk, T. Wiedenhöfer, F. Yetim, and M. Zamarian. “Requirements Report of the INSEMTIVES Seekda! Use Case (D2.2.1)” (2009).

• Rehatschek, H., and G. Kienast. “VIZARD – An Innovative Tool for Video Navigation, Retrieval and Editing.” In Proceedings of the 23rd Workshop of PVA “Multimedia and Middleware”. Vienna, 2001.

• Smith, John R., and Peter Schirling. “Metadata Standards Roundup.” IEEE Multimedia, 2006.

• “W3C Media Fragments Working Group”, n.d. http://www.w3.org/2008/WebVideo/Fragments/.

• Wurman, Richard S. Information Anxiety 2. 1st ed. Que, 2001.

• Zielke, Felix, Christian Eckes, Carsten Rosche, Matthias Aust, Sven Hoffmann, Stefan Grünvogel, Richard Wages. "D5.2 Report On Live Human Annotation" (2007). http://www.ist-live.org/intranet/iais029/live-05-d5-2-report_on_live_human_annotation.pdf [2011-09-20]


LINKED MEDIA LAB REPORTS – THE NEW SERIES OF THE SNML-TNG

This is the second issue of the series “Linked Media Lab Reports”, edited by the Salzburg NewMediaLab – The Next Generation (editors: Christoph Bauer, Georg Güntner and Sebastian Schaffert). Within this series, lab reports in English or German will be published. They are characterised as conceptual papers and/or how-tos. Additional issues are in preparation.

Band 1 (in German)

Linked Media. Ein White-Paper zu den Potentialen von Linked People, Linked Content und Linked Data in Unternehmen. (Salzburg NewMediaLab – The Next Generation)

ISBN 978-3-902448-27-9

Issue 2

Linked Media Interfaces. Graphical User Interfaces for Search and Annotation. (Marius Schebella, Thomas Kurz and Georg Güntner)

ISBN 978-3-902448-29-3

Issue 3

Media Objects in the Web of Linked Data. Publishing Multimedia as Linked Data. (Thomas Kurz)

ISBN 978-3-902448-30-9

Band 4 (in German)

Smarte Annotationen. Ein Beitrag zur Evaluation von Empfehlungen für Annotationen. (Sandra Schön und andere)

ISBN 978-3-902448-31-6

Band 5 (in German, scheduled for November 2011)

Qualitätssicherung bei Annotationen. Soziale und technologische Verfahren der Medienbranche


SOCIAL MEDIA – PUBLICATION SERIES OF THE SNML-TNG

Within the series “Social Media”, edited by Salzburg NewMediaLab (editors: Georg Güntner and Sebastian Schaffert), the following issues have been published (in German):

Band 1

Erfolgreicher Aufbau von Online-Communitys. Konzepte, Szenarien und Handlungsempfehlungen. (Sandra Schaffert und Diana Wieden-Bischof)

ISBN 978-3-902448-13-2

Band 2

(Meta-)Informationen von Communitys und Netzwerken. Entstehung und Nutzungsmöglichkeiten. (Sandra Schaffert, Julia Eder, Wolf Hilzensauer, Thomas Kurz, Mark Markus, Sebastian Schaffert, Rupert Westenthaler und Diana Wieden-Bischof)

ISBN 978-3-902448-15-6

Band 3

Empfehlungen im Web. Konzepte und Realisierungen. (Sandra Schaffert, Tobias Bürger, Cornelia Schneider und Diana Wieden-Bischof)

ISBN 978-3-902448-16-3

Band 4

Reputation und Feedback im Web. Einsatzgebiete und Beispiele. (Sandra Schaffert, Georg Güntner, Markus Lassnig und Diana Wieden-Bischof)

ISBN 978-3-902448-17-0

Band 5 – in Kooperation mit evolaris und Salzburg Research

Mobile Gemeinschaften. Erfolgreiche Beispiele aus den Bereichen Spielen, Lernen und Gesundheit.

(Sandra Schön, Diana Wieden-Bischof, Cornelia Schneider und Martin Schumann)

ISBN 978-3-902448-25-5
