

  • FOREWORD

    JEAN-YVES MARION 1 AND THOMAS SCHWENTICK 2

    1 Nancy University, LORIA

    E-mail address: [email protected]

    2 TU Dortmund

    E-mail address: [email protected]

    The Symposium on Theoretical Aspects of Computer Science (STACS) is held alternately in France and in Germany. The conference of March 4-6, 2010, held in Nancy, is the 27th in this series. Previous meetings took place in Paris (1984), Saarbrücken (1985), Orsay (1986), Passau (1987), Bordeaux (1988), Paderborn (1989), Rouen (1990), Hamburg (1991), Cachan (1992), Würzburg (1993), Caen (1994), München (1995), Grenoble (1996), Lübeck (1997), Paris (1998), Trier (1999), Lille (2000), Dresden (2001), Antibes (2002), Berlin (2003), Montpellier (2004), Stuttgart (2005), Marseille (2006), Aachen (2007), Bordeaux (2008), and Freiburg (2009). The interest in STACS has remained at a high level over the past years. The STACS 2010 call for papers led to over 238 submissions from 40 countries. Each paper was assigned to three program committee members. The committee selected 54 papers during a two-week electronic meeting held in November. As co-chairs of the program committee, we would like to sincerely thank its members and the many external referees for their valuable work. In particular, there were intense and interesting discussions. The overall very high quality of the submissions made the selection a difficult task. We would like to express our thanks to the three invited speakers, Mikołaj Bojańczyk, Rolf Niedermeier, and Jacques Stern. Special thanks go to Andrei Voronkov for his EasyChair software (www.easychair.org). Moreover, we would like to warmly thank Wadie Guizani for preparing the conference proceedings and for his continuous help throughout the conference organization. For the third time, this year's STACS proceedings are published in electronic form. A printed version was also available at the conference, with an ISBN. The electronic proceedings are available through several portals, in particular through HAL and the LIPIcs series.

    The proceedings of the Symposium, which are published electronically in the LIPIcs (Leibniz International Proceedings in Informatics) series, are available through Dagstuhl's website. The LIPIcs series provides an ISBN for the proceedings volume and manages the indexing issues. HAL is an electronic repository managed by several French research agencies. Both HAL and the LIPIcs series guarantee perennial, free and easy electronic access, while the authors retain the rights over their work. The rights on the articles in the proceedings are kept with the authors, and the papers are available freely under a Creative Commons license (see www.stacs-conf.org/faq.html for more details).

    © Jean-Yves Marion and Thomas Schwentick, Creative Commons Attribution-NoDerivs License


    STACS 2010 received funds from Nancy-University (UHP, Nancy 2 and INPL), from Région Lorraine, from CUGN, from GIS 3SG, from GDR IM and from Mairie de Nancy. We thank them for their support!

    February 2010 Jean-Yves Marion and Thomas Schwentick


    Conference Organisation

    STACS 2010 was organized by INRIA Nancy-Grand-Est at LORIA, Nancy University.

    Members of the program committee

    Markus Bläser, Saarland University
    Harry Buhrman, CWI, Amsterdam University
    Thomas Colcombet, CNRS, Paris 7 University
    Anuj Dawar, University of Cambridge
    Arnaud Durand, Paris 7 University
    Sándor Fekete, Braunschweig University of Technology
    Ralf Klasing, CNRS, Bordeaux University
    Christian Knauer, Freie Universität Berlin
    Piotr Krysta, University of Liverpool
    Sylvain Lombardy, Marne la Vallée University
    P. Madhusudan, University of Illinois
    Jean-Yves Marion, Nancy University (co-chair)
    Pierre McKenzie, University of Montréal
    Rasmus Pagh, IT University of Copenhagen
    Boaz Patt-Shamir, Tel Aviv University
    Christophe Paul, CNRS, Montpellier University
    Georg Schnitger, Frankfurt University
    Thomas Schwentick, TU Dortmund University (co-chair)
    Helmut Seidl, TU Munich
    Jiří Sgall, Charles University
    Sebastiano Vigna, Università degli Studi di Milano
    Paul Vitanyi, CWI, Amsterdam

    Members of the organizing committee

    Nicolas Alcaraz
    Anne-Lise Charbonnier
    Jean-Yves Marion
    Wadie Guizani

    External Reviewers

    Ittai Abraham, Eyal Ackerman, Manindra Agrawal, Stefano Aguzzoli, Cyril Allauzen,
    Eric Allender, Noga Alon, Alon Altman, Andris Ambainis

    Amihood Amir, Eric Angel, Esther Arkin, Diego Arroyuelo, Eugene Asarin,
    Albert Atserias, Nathalie Aubrun, Laszlo Babai, Patrick Baillot

    Joergen Bang-Jensen, Vince Barany, Jérémy Barbay, Georgios Barmpalias, Clark Barrett,
    David Mix Barrington, Luca Becchetti, Wolfgang Bein, Djamal Belazzougui


    Anne Benoit, Piotr Berman, Alberto Bertoni, Philippe Besnard, Stéphane Bessy,
    Laurent Bienvenu, Philip Bille, Davide Bilò, Henrik Björklund, Guillaume Blin,
    Hans Bodlaender, Hans-Joachim Boeckenhauer, Guillaume Bonfante, Vincenzo Bonifaci,
    Yacine Boufkhad, Laurent Boyer, Zvika Brakerski, Felix Brandt, Jop Briet,
    Kevin Buchin, Maike Buchin, Andrei Bulatov, Jaroslaw Byrka, Marie-Pierre Béal,
    Sergio Cabello, Michaël Cadilhac, Arnaud Carayol, Olivier Carton, Giovanni Cavallanti,
    Rohit Chadha, Amit Chakrabarti, Sourav Chakraborty, Jérémie Chalopin,
    Jean-Marc Champarnaud, Pierre Charbit, Krishnendu Chatterjee, Arkadev Chattopadhyay,
    Chandra Chekuri, Ho-Lin Chen, James Cheney, Victor Chepoi, Alessandra Cherubini,
    Flavio Chierichetti, Giorgos Christodoulou, Marek Chrobak, Richard Cleve,
    Éric Colin de Verdière

    Colin Cooper, Graham Cormode, Veronique Cortier, Bruno Courcelle, Nadia Creignou,
    Maxime Crochemore, Jurek Czyzowicz, Flavio D’Alessandro, Jean Daligault,
    Victor Dalmau, Shantanu Das, Samir Datta, Fabien de Montgolfier, Michel de Rougemont,
    Søren Debois, Holger Dell, Camil Demetrescu, Britta Denner-Broser, Bilel Derbel,
    Jonathan Derryberry, Josee Desharnais, Luc Devroye, Claudia Dieckmann, Scott Diehl,
    Martin Dietzfelbinger, Frank Drewes, Andy Drucker, Philippe Duchon,
    Adrian Dumitrescu, Jérôme Durand-Lose, David Duris, Stephane Durocher, Ivo Düntsch,
    Christian Eisentraut, Yuval Emek, Matthias Englert, David Eppstein, Leah Epstein,
    Thomas Erlebach, Omid Etesami, Kousha Etessami, Guy Even, Rolf Fagerberg,
    Michael Fellows, Stefan Felsner, Jiri Fiala, Amos Fiat

    Bernd Finkbeiner, Irene Finocchi, Felix Fischer, Jörg Flum, Fedor Fomin,
    Lance Fortnow, Hervé Fournier, Mahmoud Fouz, Pierre Fraigniaud, Gianni Franceschini,
    Stefan Funke, Nicola Galesi, Philippe Gambette, David Garcia Soriano,
    Leszek Gasieniec, Serge Gaspers, Bruno Gaujal, Cyril Gavoille, Wouter Gelade,
    Dirk H.P. Gerrits, Panos Giannopoulos, Richard Gibbens, Hugo Gimbert, Emeric Gioan,
    Christian Glasser, Leslie Ann Goldberg, Paul Goldberg, Rodolfo Gomez,
    Robert Grabowski, Fabrizio Grandoni, Frederic Green, Serge Grigorieff, Erich Grädel,
    Joachim Gudmundsson, Sylvain Guillemot, Pierre Guillon, Yuri Gurevich,
    Venkatesan Guruswami, Peter Habermehl, Gena Hahn, MohammadTaghi Hajiaghayi,
    Sean Hallgren, Michal Hanckowiak, Sariel Har-Peled, Moritz Hardt, Tero Harju


    Matthias Hein, Raymond Hemmecke, Miki Hermann, Danny Hermelin, John Hitchcock,
    Martin Hoefer, Christian Hoffmann, Frank Hoffmann, Thomas Holenstein, Markus Holzer,
    Peter Hoyer, Mathieu Hoyrup, Jing Huang, Paul Hunter, Thore Husfeldt, Marcus Hutter,
    Nicole Immorlica, Shunsuke Inenaga, Riko Jacob, Andreas Jakoby, Alain Jean-Marie,
    Mark Jerrum, Gwenaël Joret, Stasys Jukna, Valentine Kabanets, Lukasz Kaiser,
    Tom Kamphans, Mamadou Kanté, Mamadou Moustapha Kanté, Jarkko Kari, Veikko Keranen,
    Sanjeev Khanna, Stefan Kiefer, Alex Kipnis, Adam Klivans, Johannes Koebler,
    Natallia Kokash, Petr Kolman, Jochen Konemann, Miroslaw Korzeniowski,
    Adrian Kosowski, Michal Koucky, Matjaz Kovse, Máté Kovács, Jan Krajicek, Daniel Kral

    Jan Kratochvil, Dieter Kratsch, Stefan Kratsch, Robi Krauthgamer, Steve Kremer,
    Klaus Kriegel, Danny Krizanc, Alexander Kroeller, Andrei Krokhin, Gregory Kucherov,
    Denis Kuperberg, Tomi Kärki, Juha Kärkkäinen, Ekkehard Köhler, Salvatore La Torre,
    Arnaud Labourel, Gad Landau, Jérôme Lang, Sophie Laplante, Benoit Larose,
    Silvio Lattanzi, Lap Chi Lau, Soeren Laue, Thierry Lecroq, Troy Lee,
    Arnaud Lefebvre, Aurelien Lemay, François Lemieux, Benjamin Leveque, Asaf Levin,
    Mathieu Liedloff, Andrzej Lingas, Tadeusz Litak, Christof Loeding,
    Daniel Lokshtanov, Tzvi Lotker, Laurent Lyaudet, Florent Madelaine,
    Frederic Magniez, Meena Mahajan, Anil Maheshwari, Johann Makowsky, Guillaume Malod,
    Sebastian Maneth, Yishay Mansour, Roberto Mantaci

    Bodo Manthey, Martin Mares, Maurice Margenstern, Euripides Markou, Wim Martens,
    Barnaby Martin, Matthieu Kaczmarek, Frédéric Mazoit, Damiano Mazza, Carlo Mereghetti,
    Julian Mestre, Peter Bro Miltersen, Vahab Mirrokni, Joseph Mitchell, Tobias Moemke,
    Stefan Monnier, Ashley Montanaro, Thierry Monteil, Pat Morin, Hannes Moser,
    Larry Moss, Luca Motto Ros, Marie-Laure Mugnier, Wolfgang Mulzer, Andrzej Murawski,
    Filip Murlak, Viswanath Nagarajan, Rouven Naujoks, Jesper Nederlof, Yakov Nekrich,
    Ilan Newman, Cyril Nicaud, Shuxin Nie, Evdokia Nikolova, Aviv Nisgav, Jean Néraud,
    Marcel Ochel, Sergei Odintsov, Nicolas Ollinger, Alessio Orlandi, Friedrich Otto,
    Martin Otto, Sang-il Oum, Linda Pagli, Beatrice Palano, Ondrej Pangrac,
    Rina Panigrahy


    Gennaro Parlato, Arno Pauly, Anthony Perez, Martin Pergel, Sylvain Perifel,
    Rafael Peñaloza, Giovanni Pighizzini, Nir Piterman, David Podgorolec,
    Vladimir Podolskii, Natacha Portier, Sylvia Pott, Victor Poupet, Christophe Prieur,
    Ariel Procaccia, Guido Proietti, Pavel Pudlak, Arnaud Pêcher, Tomasz Radzik,
    Anup Rao, Dror Rawitz, Saurabh Ray, Christian Reitwießner, Eric Remila,
    Mark Reynolds, Ahmed Rezine, Eric Rivals, Romeo Rizzi, Julien Robert,
    Peter Rossmanith, Jacques Sakarovitch, Mohammad Salavatipour, Kai Salomaa,
    Louis Salvail, Marko Samer, Nicola Santoro, Srinivasa Rao Satti, Ignasi Sau,
    Thomas Sauerwald, Saket Saurabh, Rahul Savani, Petr Savicky, Gabriel Scalosub,
    Guido Schaefer, Marc Scherfenberg, Lena Schlipf, Stefan Schmid

    Christiane Schmidt, Jens M. Schmidt, Henning Schnoor, Warren Schudy, Nils Schweer,
    Pascal Schweitzer, Daria Schymura, Bernhard Seeger, Raimund Seidel, Pranab Sen,
    Siddhartha Sen, Olivier Serre, Rocco Servedio, Anil Seth, Alexander Sherstov,
    Amir Shpilka, Rene Sitters, Alexander Skopalik, Nataliya Skrypnyuk, Michiel Smid,
    Jack Snoeyink, Christian Sohler, Jeremy Sproston, Fabian Stehn, Clifford Stein,
    Sebastian Stiller, Yann Strozecki, Subhash Suri, Chaitanya Swamy, Till Tantau,
    Alain Tapp, Anusch Taraz, Nina Sofia Taslaman, Monique Teillaud, Pascal Tesson,
    Guillaume Theyssier, Dimitrios Thilikos, Wolfgang Thomas, Mikkel Thorup,
    Christopher Thraves, Ramki Thurimella, Alwen Tiu, Hans Raj Tiwary,
    Sebastien Tixeuil, Ioan Todinca, Craig Tovey

    A.N. Trahtman, Luca Trevisan, Nicolas Trotignon, Falk Unger, Walter Unger,
    Sarvagya Upadhyay, Wim van Dam, Peter van Emde Boas, Dieter van Melkebeek,
    Rob van Stee, Anke van Zuylen, Yann Vaxès, Rossano Venturini, Kolia Vereshchagin,
    Stéphane Vialette, Ivan Visconti, Smitha Vishveshwara, Mahesh Viswanathan,
    Heribert Vollmer, Uli Wagner, Igor Walukiewicz, Rolf Wanka, Egon Wanke,
    Mark Daniel Ward, Osamu Watanabe, John Watrous, Roger Wattenhofer, Tzu-chieh Wei,
    Daniel Werner, Ryan Williams, Erik Winfree, Gerhard Woeginger, Philipp Woelfel,
    Dominik Wojtczak, Paul Wollan, James Worrell, Sai Wu, Andrew C.-C. Yao,
    Sergey Yekhanin, Ke Yi, Jean-Baptiste Yunès, Raphael Yuster, Konrad Zdanowski,
    Mariano Zelke, Akka Zemmari, Uri Zwick.

  • TABLE OF CONTENTS

    Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

    J.-Y. Marion and T. Schwentick

    Conference Organisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

    Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

    Invited Talks

    Beyond ω-Regular Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

    M. Bojańczyk

    Reflections on Multivariate Algorithmics and Problem Parameterization . . . . . . . . . . . 17

    R. Niedermeier

    Mathematics, Cryptology, Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

    J. Stern

    Contributed Papers

    Large-girth roots of graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

    A. Adamaszek and M. Adamaszek

    The tropical double description method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

    X. Allamigeon, S. Gaubert and E. Goubault

    The Remote Point Problem, Small Bias Spaces, and Expanding Generator Sets . . . . 59

    V. Arvind and S. Srinivasan

    Evasiveness and the Distribution of Prime Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

    L. Babai, A. Banerjee, R. Kulkarni and V. Naik

    Dynamic sharing of a multiple access channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

    M. Bienkowski, M. Klonowski, M. Korzeniowski and D. R. Kowalski

    Exact Covers via Determinants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

    A. Björklund

    On Iterated Dominance, Matrix Elimination, and Matched Paths . . . . . . . . . . . . . . . . . . 107

    F. Brandt, F. Fischer and M. Holzer

    AMS Without 4-Wise Independence on Product Domains . . . . . . . . . . . . . . . . . . . . . . . . . 119

    V. Braverman, K. Chung, Z. Liu, M. Mitzenmacher and R. Ostrovsky

    Quantum algorithms for testing properties of distributions . . . . . . . . . . . . . . . . . . . . . . . . 131

    S. Bravyi, A.W. Harrow and A. Hassidim

    Optimal Query Complexity for Reconstructing Hypergraphs . . . . . . . . . . . . . . . . . . . . . . . 143

    N.H. Bshouty and H. Mazzawi


    Ultimate Traces of Cellular Automata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

    J. Cervelle, E. Formenti and P. Guillon

    Two-phase algorithms for the parametric shortest path problem . . . . . . . . . . . . . . . . . . . 167

    S. Chakraborty, E. Fischer, O. Lachish and R. Yuster

    Continuous Monitoring of Distributed Data Streams over a Time-based Sliding Window . . . . . . . . . . . . 179

    H.L. Chan, T.W. Lam, L.K. Lee and H.F. Ting

    Robust Fault Tolerant uncapacitated facility location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

    S. Chechik and D. Peleg

    Efficient and Error-Correcting Data Structures for Membership and Polynomial Evaluation . . . . . . . . . . . . 203

    V. Chen, E. Grigorescu and R. de Wolf

    Log-space Algorithms for Paths and Matchings in k-trees . . . . . . . . . . . . . . . . . . . . . . . . . . 215

    B. Das, S. Datta and P. Nimbhorkar

    Restricted Space Algorithms for Isomorphism on Bounded Treewidth Graphs . . . . . . 227

    B. Das, J. Torán and F. Wagner

    The Traveling Salesman Problem, Under Squared Euclidean Distances . . . . . . . . . . . . . 239

    M. de Berg, F. van Nijnatten, R. Sitters, G. J. Woeginger and A. Wolff

    Beyond Bidimensionality: Parameterized Subexponential Algorithms on Directed Graphs . . . . . . . . . . . . 251

    F. Dorn, F.V. Fomin, D. Lokshtanov, V. Raman and S. Saurabh

    Planar Subgraph Isomorphism Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263

    F. Dorn

    Intrinsic Universality in Self-Assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275

    D. Doty, J.H. Lutz, M.J. Patitz, S.M. Summers and D. Woods

    Sponsored Search, Market Equilibria, and the Hungarian Method . . . . . . . . . . . . . . . . . 287

    P. Dütting, M. Henzinger and I. Weber

    Dispersion in unit disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299

    A. Dumitrescu and M. Jiang

    Long non-crossing configurations in the plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311

    A. Dumitrescu and C. D. Tóth

    The Complexity of Approximating Bounded-Degree Boolean #CSP . . . . . . . . . . . . . . . . 323

    M. Dyer, L.A. Goldberg, M. Jalsenius and D.M. Richerby

    The complexity of the list homomorphism problem for graphs . . . . . . . . . . . . . . . . . . . . . . 335

    L. Egri, A. Krokhin, B. Larose and P. Tesson

    Improved Approximation Guarantees for Weighted Matching in the Semi-Streaming Model . . . . . . . . . . . . 347

    L. Epstein, A. Levin, J. Mestre and D. Segev

    Computing Least Fixed Points of Probabilistic Systems of Polynomials . . . . . . . . . . . . . 359

    J. Esparza, A. Gaiser and S. Kiefer


    The k-in-a-path problem for claw-free graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371

    J. Fiala, M. Kamiński, B. Lidický and D. Paulusma

    Finding Induced Subgraphs via Minimal Triangulations . . . . . . . . . . . . . . . . . . . . . . . . . . . 383

    F.V. Fomin and Y. Villanger

    Inseparability and Strong Hypotheses for Disjoint NP Pairs . . . . . . . . . . . . . . . . . . . . . . . 395

    L. Fortnow, J.H. Lutz and E. Mayordomo

    Branching-time model checking of one-counter processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 405

    S. Göller and M. Lohrey

    Evolving MultiAlgebras unify all usual sequential computation models . . . . . . . . . . . 417

    S. Grigorieff and P. Valarcher

    Collapsing and Separating Completeness Notions under Average-Case and Worst-Case Hypotheses . . . . . . . . . . . . 429

    X. Gu, J.M. Hitchcock and A. Pavan

    Revisiting the Rice Theorem of Cellular Automata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441

    P. Guillon and G. Richard

    On optimal heuristic randomized semidecision procedures, with application to proof complexity . . . . . . . . . . . . 453

    E.A. Hirsch and D. Itsykson

    Weakening Assumptions for Deterministic Subexponential Time Non-Singular Matrix Completion . . . . . . . . . . . . 465

    M. Jansen

    On equations over sets of integers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477

    A. Jeż and A. Okhotin

    A 4/3-competitive randomized algorithm for online scheduling of packets with agreeable deadlines . . . . . . . . . . . . 489

    L. Jeż

    Collapsible Pushdown Graphs of Level 2 are Tree-Automatic . . . . . . . . . . . . . . . . . . . . . . . 501

    A. Kartzow

    Approximate shortest paths avoiding a failed vertex: optimal size data structures for unweighted graphs . . . . . . . . . . . . 513

    N. Khanna and S. Baswana

    Holant Problems for Regular Graphs with Complex Edge Functions . . . . . . . . . . . . . . . 525

    M. Kowalczyk and J.-Y. Cai

    Is Ramsey’s theorem ω-automatic? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537

    D. Kuske

    An Efficient Quantum Algorithm for some Instances of the Group Isomorphism Problem . . . . . . . . . . . . 549

    F. Le Gall

    Treewidth reduction for constrained separation and bipartization problems . . . . . . . . 561

    D. Marx, B. O’Sullivan and I. Razgon


    Online Correlation Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573

    C. Mathieu, O. Sankur and W. Schudy

    The Recognition of Tolerance and Bounded Tolerance Graphs . . . . . . . . . . . . . . . . . . . . . . 585

    G.B. Mertzios, I. Sau and S. Zaks

    Decidability of the interval temporal logic ABB̄ over the natural numbers . . . . . . . . . . 597

    A. Montanari, G. Puppis, P. Sala and G. Sciavicco

    Relaxed spanners for directed disk graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 609

    D. Peleg and L. Roditty

    Unsatisfiable Linear CNF Formulas Are Large and Complex . . . . . . . . . . . . . . . . . . . . . . . 621

    D. Scheder

    Construction Sequences and Certifying 3-Connectedness . . . . . . . . . . . . . . . . . . . . . . . . . . . 633

    J.M. Schmidt

    Named Models in Coalgebraic Hybrid Logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645

    L. Schröder and D. Pattinson

    A dichotomy theorem for the general minimum cost homomorphism problem . . . . . . . 657

    R. Takhanov

    Alternation-Trading Proofs, Linear Programming, and Lower Bounds . . . . . . . . . . . . . . 669

    R.R. Williams

  • Symposium on Theoretical Aspects of Computer Science 2010 (Nancy, France), pp. 11-16
    www.stacs-conf.org

    BEYOND ω-REGULAR LANGUAGES

    MIKOŁAJ BOJAŃCZYK

    University of Warsaw
    E-mail address: [email protected]
    URL: www.mimuw.edu.pl/~bojan

    Abstract. The paper presents some automata and logics on ω-words, which capture all ω-regular languages, and yet still have good closure and decidability properties.

    The notion of ω-regular language is well established in the theory of automata. The class of ω-regular languages carries over to ω-words many of the good properties of regular languages of finite words. It can be described using automata, namely by nondeterministic Büchi automata, or the equivalent deterministic Muller automata. It can be described using a form of regular expressions, namely by ω-regular expressions. It can be described using logic, namely by monadic second-order logic, or the equivalent weak monadic second-order logic.

    This paper is about some recent work [1, 3, 2, 4], which argues that there are other robust classes of languages for ω-words. The following languages serve as guiding examples.

    LB = { a^{n1} b a^{n2} b ··· : lim sup ni < ∞ }        LS = { a^{n1} b a^{n2} b ··· : lim inf ni = ∞ }

    Neither of these languages is ω-regular in the accepted sense. One explanation is that LS contains no ultimately periodic word, and neither does the complement of LB. Another explanation is that an automaton recognizing either of these languages would need an infinite amount of memory, to compare the numbers n1, n2, . . .
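    The observation about ultimately periodic words can be made concrete. The following Python sketch (our own illustration, not from the paper; the helper names in_LB and in_LS are assumptions) decides membership for words whose block-size sequence n1, n2, . . . is eventually periodic, i.e. a finite prefix followed by a nonempty period repeated forever:

```python
# Decide membership of a^{n1} b a^{n2} b ... in L_B and L_S when the
# block-size sequence (n_i) is eventually periodic.  For such a
# sequence, lim sup n_i = max(period) and lim inf n_i = min(period),
# so both defining conditions are easy to check.

def in_LB(prefix, period):
    """L_B: lim sup n_i < infinity.  Always true here, since an
    eventually periodic sequence is bounded."""
    return max(period) < float('inf')  # trivially True

def in_LS(prefix, period):
    """L_S: lim inf n_i = infinity.  Never true here, since
    min(period) is a finite value that recurs forever."""
    return min(period) == float('inf')  # trivially False

# Block sizes 3, then 1, 2, 1, 2, ...
print(in_LB([3], [1, 2]))  # True  -- the block sizes stay bounded
print(in_LS([3], [1, 2]))  # False -- lim inf n_i is 1, not infinite
```

    That in_LS rejects every such input is exactly the statement that LS contains no ultimately periodic word.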

    Both of these explanations can be disputed.

    Concerning the first explanation: why should ultimately periodic words be so important? Clearly there are other finite ways of representing infinite words. A nonempty Büchi automaton necessarily accepts an ultimately periodic word; hence the importance of such words in the theory of ω-regular languages. But is this notion canonical? Or is it just an artefact of the syntax we use?

    Key words and phrases: automata, monadic second-order logic.
    Author supported by ERC Starting Grant “Sosna”.
    © M. Bojańczyk, Creative Commons Attribution-NoDerivs License

    Concerning the second explanation: what does “infinite memory” mean? After all, one could also argue that the ω-regular language (a∗b)ω needs infinite memory, to count the b’s that need to appear infinitely often. In at least one formalization of “memory”, the languages LB and LS do not need infinite memory. The formalization uses a Myhill-Nerode style equivalence. For a language L ⊆ Aω, call two finite words L-equivalent if they can be swapped a finite or infinite number of times without L noticing. Formally, words w, v ∈ A∗ are called L-equivalent if both conditions below hold.

    u1wu2 ∈ L ⇐⇒ u1vu2 ∈ L        for u1 ∈ A∗, u2 ∈ Aω
    u1wu2wu3w ··· ∈ L ⇐⇒ u1vu2vu3v ··· ∈ L        for u1, u2, . . . ∈ A∗.

    One can show that LB-equivalence has three equivalence classes, and LS-equivalence has four equivalence classes. Therefore, at least in this Myhill-Nerode sense, the languages LB and LS do not need infinite memory.
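    The two equivalence conditions can be probed concretely on ultimately periodic words. In the sketch below (our own illustration; the prefix/loop representation and the helper name in_LB are assumptions, not from the paper), a word is written as prefix · loop^ω over the alphabet {a, b}; such a word lies in LB exactly when its loop contains a b:

```python
# Probe the two L-equivalence conditions for L = L_B on ultimately
# periodic words, represented as prefix . loop^omega over {a, b}.

def in_LB(prefix, loop):
    # prefix . loop^omega has infinitely many b's iff the loop has one;
    # in that case its block sizes are eventually periodic, hence
    # bounded, so the word belongs to L_B.
    return 'b' in loop

# First condition: swapping w for v finitely often.  Membership only
# depends on the loop, so any two finite words pass this test.
for u1 in ["", "a", "ab"]:
    assert in_LB(u1 + "a", "ab") == in_LB(u1 + "b", "ab")

# Second condition: swapping infinitely often, u1 w u2 w u3 w ...
# with every u_i = "a".  This separates w = "a" from v = "b":
print(in_LB("", "a" + "a"))  # False: (aa)^omega = a^omega has no b's
print(in_LB("", "a" + "b"))  # True:  (ab)^omega is in L_B
```

    The first test passes for any pair of finite words, while the second separates a from b: repeating a forever between copies of w gives a^ω for w = a but (ab)^ω for w = b.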

    The rest of this paper presents some language classes which capture LB and LS, and which have at least some of the robustness properties one would expect from regular languages. We begin with a logic.

    MSO with the unbounding quantifier. Monadic second-order logic (MSO) captures exactly the ω-regular languages. To define the languages LB and LS, some new feature is needed. Consider a new quantifier UX ϕ(X), introduced in [1], which says that formula ϕ(X) is satisfied by arbitrarily large finite sets X, i.e.

    UX ϕ(X) = ∧_{n∈N} ∃X ( ϕ(X) ∧ n ≤ |X| < ∞ ).

    As usual with quantifiers, the formula ϕ(X) might have other free variables than X. We write MSO+U for the extension of MSO where this quantifier is allowed. It is difficult to say if U is an existential or universal quantifier, since its definition involves an infinite conjunction of existential formulas.

    Let us see some examples of formulas of MSO+U. Consider a formula block(X) which says that X contains all positions between two consecutive b’s. To define the language LB in the logic MSO+U, we need to say that: i) there are infinitely many b’s and ii) the size of blocks is not unbounded. This is done by the following formula.

    ∀x∃y(x ≤ y ∧ b(y)) ∧ ¬UX block(X).

    For the language LS, we need a more sophisticated formula. It is easier to write a formula for the complement of LS. The formula says that there exists a set Z, which contains infinitely many blocks, as stated by the formula

    ∀y ∃X ( block(X) ∧ X ⊆ Z ∧ ∀x (x ∈ X → y < x) ),

    but the size of the blocks in X is bounded, as stated by the formula

    ¬UX (block(X) ∧ X ⊆ Z).

    Note that the set Z is infinite. This will play a role later on, when we talk about weak logics, which can only quantify over finite sets.

    The class of languages of ω-words that can be defined in MSO+U is our first candidate for a new definition of “regular languages”. It is also the largest class considered in this paper – it contains all the other classes that will be described below. By its very definition, the class is closed under union, complementation, projection, etc. The big problem is that we do not know if satisfiability is decidable for formulas of MSO+U over ω-words, although we conjecture it is.

    Of course, decidable emptiness/satisfiability is very important if we want to talk about “regular languages”. We try to attack this question by introducing automata models, some of which are described below. There will be the usual tradeoffs: nondeterministic automata are closed under projections (existential set quantifiers), while deterministic automata are closed under boolean operations.

    We begin with the strongest automaton model, namely nondeterministic BS-automata, which were introduced in [3]¹.

    Nondeterministic BS-automata. A nondeterministic BS-automaton is defined like an NFA. The differences are: it does not have a set of accepting states, and it is equipped with a finite set C of counters, a counter update function and an acceptance condition, as described below. The counter update function maps each transition to a finite, possibly empty, sequence of operations of the form

    c := c + 1        c := 0        c := d        for c, d ∈ C.

    Let ρ be a run of the automaton over an input ω-word, as defined for nondeterministic automata on infinite words. The set of runs for a given input word is independent of the counters, counter update function and acceptance condition.

    What are the counters used for? They are used to say when a run ρ is accepting. For a counter c ∈ C and a word position i ∈ N, we consider the number val(ρ, c, i), which is the value of counter c after doing the first i transitions. (All counters start with zero.) These numbers are then examined by the acceptance condition, which talks about their asymptotic behavior. (This explains why nondeterministic BS-automata cannot describe patterns usually associated with counter automata, such as a^n b^n.) Specifically, the acceptance condition is a positive boolean combination of conditions of the three kinds below.

lim sup_i val(ρ, c, i) < ∞        lim inf_i val(ρ, c, i) = ∞        “state q appears infinitely often”

    The first kind of condition is called a B-condition (because it requires counter c to be

    bounded), the second kind of condition is called an S-condition (in [3], a number sequence

converging to ∞ was called “strongly unbounded”), and the last kind of condition is called a Büchi condition.
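To make the counter semantics concrete, here is a small Python sketch (the function and operation names are ours, not from [3]) that computes the valuations val(ρ, c, i) along a finite prefix of a run; the B-, S-, and Büchi conditions themselves concern the asymptotic behavior of these values over an infinite run.

```python
# Hypothetical simulator for the counter valuations val(rho, c, i) described
# above. Each transition carries a finite sequence of operations:
#   ("inc", c)      for c := c + 1
#   ("reset", c)    for c := 0
#   ("copy", c, d)  for c := d
def run_counters(operations_per_transition, counters):
    """Yield the counter valuation after each transition (all counters start at 0)."""
    val = {c: 0 for c in counters}
    for ops in operations_per_transition:
        for op in ops:
            if op[0] == "inc":
                val[op[1]] += 1
            elif op[0] == "reset":
                val[op[1]] = 0
            elif op[0] == "copy":
                val[op[1]] = val[op[2]]
        yield dict(val)

# Example: counter 'c' tracks the length of the current block of a's and is
# reset on every b, as in a word a b aa b aaa ...
word = "abaabaaab"
ops = [[("inc", "c")] if ch == "a" else [("reset", "c")] for ch in word]
values = [v["c"] for v in run_counters(ops, {"c"})]
# values -> [1, 0, 1, 2, 0, 1, 2, 3, 0]
```

On an infinite word whose blocks of a's grow without bound, this counter would violate the B-condition lim sup_i val(ρ, c, i) < ∞.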

    Emptiness for nondeterministic BS-automata is decidable [3]. The emptiness procedure

    searches for something like the “lasso” that witnesses nonemptiness of a Büchi automaton.

    The notion of lasso for nondeterministic BS-automata is more complicated, and leads to

    a certain class of finitely representable infinite words, a class which extends the class of

    ultimately periodic words.

    Consider the languages recognized by nondeterministic BS-automata. These languages

    are closed under union and intersection, thanks to the usual product construction. These

    languages are closed under projection (or existential set quantification), thanks to nondeter-

    minism. These languages are also closed under a suitable definition of the quantifier U for

    languages, see [3]. If these languages were also closed under complement, then nondetermin-

    istic BS-automata would recognize all languages definable in MSO+U (and nothing more,

    since existence of an accepting run of a nondeterministic BS-automaton can be described

    in the logic).

    Unfortunately, complementation fails. There is, however, a partial complementation

    result, which concerns two subclasses of nondeterministic BS-automata. An automaton

¹For consistency of presentation, the definition given here is slightly modified from the one in [3]: the automata can move values between counters, and they can use Büchi acceptance conditions. These changes do not affect the expressive power.

  • 14 M. BOJAŃCZYK

    that does not use S-conditions is called a B-automaton; an automaton that does not use

    B-conditions is called an S-automaton.

    Theorem 1 ([3]). The complement of a language recognized by a nondeterministic B-

    automaton is recognized by a nondeterministic S-automaton, and vice versa.

    The correspondence is effective: from a B-automaton we can compute an S-automaton

    for the complement, and vice versa. The proof of Theorem 1 is difficult, because it has to

    deal with nondeterministic automata. (Somewhat like complementation of nondeterministic

    automata on infinite trees in the proof of Rabin’s theorem.) The technical aspects are

    similar to, but more general than, Kirsten’s decidability proof [8] of the star height problem

    in formal language theory. In particular, it is not difficult to prove, using Theorem 1, that

    the star height problem is decidable.

    Deterministic max-automata. As mentioned above, nondeterministic BS-automata are

    not closed under complement. A typical approach to the complementation problem is to

    consider deterministic automata; this is the approach described below, following [2].

    A deterministic max-automaton is defined like a BS-automaton, with the following dif-

    ferences: a) it is deterministic; b) it has an additional counter operation c := max(d, e);

    and c) its acceptance condition is a boolean (not necessarily positive) combination of B-

    conditions. The max operation looks dangerous, since it seems to involve arithmetic. How-

    ever, the counters are only tested for the limits, and this severely restricts the way max

    can be used. One can show that nondeterminism renders the max operation redundant, as

    stated by Theorem 2 below. (For deterministic automata, max is not redundant.)

    Theorem 2 ([2]). Every language recognized by a deterministic max-automaton is a boolean

    combination of languages recognized by nondeterministic B-automata.

    By Theorem 1, every boolean combination of languages recognized by nondeterministic

    B-automata is equivalent to a positive boolean combination of languages recognized by

    nondeterministic B-automata, and nondeterministic S-automata. Such a positive boolean

    combination is, in turn, recognized by a single nondeterministic BS-automaton, since these

    are closed under union and intersection. It follows that every deterministic max-automaton

    is equivalent to a nondeterministic BS-automaton. Since the equivalence is effective, we get

    an algorithm for deciding emptiness of deterministic max-automata. (A direct approach to

    deciding emptiness of deterministic max-automata is complicated by the max operation.)

    So what is the point of deterministic max-automata?

    The point is that they have good closure properties. (This also explains why the max

    operation is used. The version without max does not have the closure properties described

    below.) Since the automata are deterministic, and the acceptance condition is closed under

    boolean combinations, it follows that languages recognized by deterministic max-automata

    are closed under boolean combinations. What about the existential set quantifier? If we

    talk about set quantification like in MSO, where infinite sets are quantified, then the answer

    is no [2]; closure under existential set quantifiers is essentially equivalent to nondeterminism.

    However, it turns out that quantification over finite sets can be implemented by determin-

    istic max-automata, which is stated by Theorem 3 below. The theorem refers to weak

MSO+U, which is the fragment of MSO+U where the set quantifiers ∃ and ∀ are restricted to finite sets.

    Theorem 3 ([2]). Deterministic max-automata recognize exactly the languages that can be

    defined in weak MSO+U.


    Other deterministic automata. There is a natural dual automaton to a determinis-

    tic max-automaton, namely a deterministic min-automaton, see [4]. Instead of max this

    automaton uses min; instead of boolean combinations of B-conditions, it uses boolean com-

    binations of S-conditions. While the duality is fairly clear on the automaton side, it is less

    clear on the logic side: we have defined only one new quantifier U, and this quantifier is

    already taken by max-automata, which capture exactly weak MSO+U.

    The answer is to add a new quantifier R, which we call the recurrence quantifier. If

    quantification over infinite sets is allowed, the quantifier R can be defined in terms of U and

    vice versa; so we do not need to talk about the logic MSO+U+R. For weak MSO, the new

    quantifier is independent. So what does this new quantifier say? It says that the family of

    sets X satisfying ϕ(X) contains infinitely many sets of the same finite size:

RX ϕ(X)  =  ⋁_{n∈N} ∃^∞ X (ϕ(X) ∧ |X| = n).

Just as the quantifier U corresponds to the complement of the language LB (it can say that there are arbitrarily large blocks), the new quantifier R corresponds to the complement of the language LS (it can say that some block size appears infinitely often).

    Theorem 4 ([4]). Deterministic min-automata recognize exactly the languages that can be

    defined in weak MSO+R.

    The proof shares many similarities with the proof of Theorem 3. Actually, some of

    these similarities can be abstracted into a general framework on deterministic automata,

    which is the main topic of [4]. One result obtained from this framework, Theorem 5 below,

    gives an automaton model for weak MSO with both quantifiers U and R.

    Theorem 5 ([4]). Boolean combinations of deterministic min-automata and deterministic

    max-automata recognize exactly the languages that can be defined in weak MSO+U+R.

The framework also works for different quantifiers, such as a periodicity quantifier (which binds a first-order variable x instead of a set variable X), defined as follows:

    Px ϕ(x) = the positions x that satisfy ϕ(x) are ultimately periodic.

Closing remarks. Above, we have described several classes of languages of ω-words, defined by logics with new quantifiers and by automata with counters. Each of the classes

    captures all the ω-regular languages, and more. Some of the models are more powerful,

    others have better closure properties; all describe languages that can reasonably be called

    “regular”.

There is a lot of work to do on this topic. The case of trees is a natural candidate; some results on trees can be found in [6, 7]. Another question concerns the algebraic theory of

    the new languages; similar questions but in the context of finite words were explored in [5].

    References

[1] M. Bojańczyk. A Bounding Quantifier. In Computer Science Logic, pages 41–55, 2004.

[2] M. Bojańczyk. Weak MSO with the Unbounding Quantifier. In Symposium on Theoretical Aspects of Computer Science, pages 233–245, 2009.

[3] M. Bojańczyk and T. Colcombet. ω-Regular Expressions with Bounds. In Logic in Computer Science, pages 285–296, 2006.

[4] M. Bojańczyk and S. Toruńczyk. Deterministic Automata and Extensions of Weak MSO. In Foundations of Software Technology and Theoretical Computer Science, 2009.

[5] T. Colcombet. The Theory of Stabilisation Monoids and Regular Cost Functions. In International Colloquium on Automata, Languages and Programming, 2009.

[6] T. Colcombet and C. Löding. The Nondeterministic Mostowski Hierarchy and Distance-Parity Automata. In International Colloquium on Automata, Languages and Programming, pages 398–409, 2008.

[7] T. Colcombet and C. Löding. The Nesting-Depth of Disjunctive mu-calculus for Tree Languages and the Limitedness Problem. In Computer Science Logic, pages 416–430, 2008.

[8] D. Kirsten. Distance desert automata and the star height problem. Theoretical Informatics and Applications, 39(3):455–511, 2005.

This work is licensed under the Creative Commons Attribution-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/3.0/.

  • Symposium on Theoretical Aspects of Computer Science 2010 (Nancy, France), pp. 17–32. www.stacs-conf.org

    REFLECTIONS ON MULTIVARIATE ALGORITHMICS AND

    PROBLEM PARAMETERIZATION

    ROLF NIEDERMEIER

Institut für Informatik, Friedrich-Schiller-Universität Jena, Ernst-Abbe-Platz 2, D-07743 Jena, Germany
E-mail address: [email protected]

Abstract. Research on parameterized algorithmics for NP-hard problems has steadily grown over the last years. We survey and discuss how parameterized complexity analysis naturally develops into the field of multivariate algorithmics. Correspondingly, we describe how to perform a systematic investigation and exploitation of the “parameter space” of computationally hard problems.

    Algorithms and Complexity; Parameterized Algorithmics; Coping with Computational

    Intractability; Fixed-Parameter Tractability

    1. Introduction

    NP-hardness is an every-day obstacle for practical computing. Since there is no hope for

    polynomial-time algorithms for NP-hard problems, it is pragmatic to accept exponential-

    time behavior of solving algorithms. Clearly, an exponential growth of the running time

    is bad, but maybe affordable, if the combinatorial explosion is modest and/or can be con-

    fined to certain problem parameters. This line of research has been pioneered by Downey

    and Fellows’ monograph “Parameterized Complexity” [24] (see [32, 57] for two more recent

    monographs). The number of investigations in this direction has steadily grown over the

    recent years. A core question herein is what actually “a” or “the” parameter of a compu-

    tational problem is. The simple answer is that there are many reasonable possibilities to

    “parameterize a problem”. In this survey, we review some aspects of this “art” of problem

    parameterization.1 Moreover, we discuss corresponding research on multivariate algorith-

    mics, the natural sequel of parameterized algorithmics when expanding to multidimensional

    parameter spaces.

We start with an example. The NP-complete problem Possible Winner for k-Approval is a standard problem in the context of voting systems. In the k-approval protocol, for a given set of candidates, each voter can assign a score of 1 to k of these candidates and the rest of the candidates receive score 0. In other words, each voter may linearly order the candidates; the “first” k candidates in this order score 1 and the remaining ones score 0. A winner of an election (where the input is a collection of votes) is a candidate

    who achieves the maximum total score. By simple counting this voting protocol can be

¹In previous work [56, 57], we discussed the “art” of parameterizing problems in a less systematic way.

© R. Niedermeier
CC Creative Commons Attribution-NoDerivs License


    evaluated in linear time. In real-world applications, however, a voter may only provide

a partial order of the candidates: The input of Possible Winner for k-Approval is a set of partial orders on a set of candidates and a distinguished candidate d, and the question is whether there exists an extension for each partial order into a linear one such that d wins under the k-approval protocol. Possible Winner for k-Approval is NP-complete already in the case of only two input votes when k is part of the input [10]. Moreover, for an unbounded number of votes Possible Winner for 2-Approval is NP-complete [7].
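The linear-time evaluation of complete votes mentioned above can be sketched as follows (a minimal illustration; the function name and the toy election are ours):

```python
# k-approval with complete votes: each vote is a linear order (best candidate
# first), and the first k candidates of each vote score 1.
from collections import Counter

def k_approval_winners(votes, k):
    """Return the set of candidates achieving the maximum total score."""
    score = Counter()
    for vote in votes:
        for candidate in vote[:k]:   # the "first" k candidates each score 1
            score[candidate] += 1
    best = max(score.values())
    return {c for c, s in score.items() if s == best}

votes = [["a", "b", "c", "d"],
         ["b", "a", "d", "c"],
         ["a", "c", "b", "d"]]
# With k = 2: a scores 3, b scores 2, c scores 1, d scores 0.
# k_approval_winners(votes, 2) -> {"a"}
```

The Possible Winner question is harder precisely because the votes are partial orders rather than the complete lists this evaluation assumes.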

Hence, Possible Winner for k-Approval parameterized by the number v of votes, as well as parameterized by k, remains intractable. In contrast, the problem turns out to be fixed-parameter tractable when parameterized by the combined parameter (v, k) [6], that is, it can be solved in f(v, k) · poly time for some computable function f only depending on v and k (see Section 2 for more on underlying notions). In summary, this implies that to better

    understand and cope with the computational complexity of Possible Winner for k-Approval, we should investigate its parameterized (in)tractability with respect to various

    parameters and combinations thereof. Parameter combinations—this is what multivariate

    complexity analysis refers to—may be unavoidable to get fast algorithms for relevant special

cases. In the case of Possible Winner for k-Approval such an important special case is a small number of votes² together with a small value of k. Various problem parameters often come up very naturally. For instance, besides v and k, a further parameter here is the number c of candidates. Using integer linear programming, one can show that Possible Winner for k-Approval is fixed-parameter tractable with respect to the parameter c [10].

    Idealistically speaking, multivariate algorithmics aims at a holistic approach to deter-

    mine the “computational nature” of each NP-hard problem. To this end, one wants to

    find out which problem-specific parameters influence the problem’s complexity in which

    quantitative way. Clearly, also combinations of several single parameters should be inves-

    tigated. Some parameterizations may yield hardness even in case of constant values, some

    may yield polynomial-time solvability in case of constant values, and in the best case some

may allow for fixed-parameter tractability results.³ Hence, the identification of “reasonable”

    problem parameters is an important issue in multivariate algorithmics. In what follows, we

    describe and survey systematic ways to find interesting problem parameters to be exploited

    in algorithm design. This is part of the general effort to better understand and cope with

    computational intractability, culminating in the multivariate approach to computational

    complexity analysis.

    2. A Primer on Parameterized and Multivariate Algorithmics

    Consider the following two NP-hard problems from algorithmic graph theory. Given

    an undirected graph, compute a minimum-cardinality set of vertices that either cover all

    graph edges (this is Vertex Cover) or dominate all graph vertices (this is Dominating

Set). Herein, an edge e is covered by a vertex v if v is one of the two endpoints of e, and a vertex v is dominated by a vertex u if u and v are connected by an edge. By definition, every vertex dominates itself. The NP-hardness of both problems makes the search for

²There are realistic voting scenarios where the number of candidates is large and the number of voters is small. For instance, this is the case when a small committee decides about many applicants.

³For input size n and parameter value k, a running time of O(n^k) would mean polynomial-time solvability for constant values of k, whereas a running time of, say, O(2^k · n) would mean fixed-parameter tractability with respect to the parameter k; see Section 2 for more on this.


polynomial-time solving algorithms hopeless. How fast can we solve these two minimization problems in an exact way? Trying all possibilities for an n-vertex graph, in the case of both problems we end up with an algorithm running in basically 2^n steps (times a polynomial), which is infeasible already for small values of n. However, what happens if we only search for a size-at-most-k solution set? Trying all size-k subsets of the n-vertex set as solution candidates gives a straightforward algorithm running in O(n^(k+2)) steps. This is superior to the 2^n-step algorithm for sufficiently small values of k, but again turns infeasible already for moderate k-values. Can we still do better? Yes, we can, but seemingly only for Vertex Cover. Whereas we do not know any notably more efficient way to solve Dominating Set [24, 20], in the case of Vertex Cover a simple observation suffices to obtain a 2^k-step (times a polynomial) algorithm: just pick any edge and branch the search for a size-k solution into the two possibilities of taking one of the two endpoints of this edge. One of them has to be in an optimal solution! Recurse (branching into two subcases) to find size-(k − 1) solutions for the remaining graphs where the already chosen vertex is deleted. In this way, one can achieve a search tree of size 2^k, leading to the stated running time.
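The branching algorithm just described can be sketched in a few lines (a minimal recursive implementation; the function and variable names are ours):

```python
# 2^k search tree for Vertex Cover: pick any uncovered edge (u, v) and branch
# on putting u or v into the cover, as described above.
def vertex_cover(edges, k):
    """Return True iff the graph given by 'edges' has a vertex cover of size <= k."""
    if not edges:
        return True          # no edges left: nothing remains to be covered
    if k == 0:
        return False         # edges remain but the budget is exhausted
    u, v = edges[0]          # an arbitrary uncovered edge
    # One of u, v must be in any size-<=k cover; try both, deleting the chosen
    # vertex (i.e., all edges it covers) and searching for a size-(k-1) solution.
    return (vertex_cover([e for e in edges if u not in e], k - 1)
            or vertex_cover([e for e in edges if v not in e], k - 1))

# A 4-cycle needs two vertices to cover all edges:
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
# vertex_cover(cycle, 2) -> True; vertex_cover(cycle, 1) -> False
```

The recursion depth is at most k with two branches per level, which is exactly the 2^k search tree from the text.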

In summary, there is a simple 2^k-algorithm for Vertex Cover whereas there is only an n^O(k)-algorithm for Dominating Set. Clearly, this makes a huge difference in practical computing, although both algorithms can be put into the coarse category of “polynomial time for constant values of k”. This categorization ignores that in the one case k influences the degree of the polynomial and in the other it does not; the categorization is too coarse-grained, and a richer modelling is needed. This is the key contribution parameterized complexity analysis makes.

Better understanding the different behavior of Vertex Cover and Dominating Set concerning their solvability in dependence on the parameter k (solution size) was historically one of the starting points of parameterized complexity analysis [24, 32, 57]. Roughly speaking, it deals with a “function battle”, namely the typical question of whether an n^O(k)-algorithm can be replaced by a significantly more efficient f(k)-algorithm where f is a computable function exclusively depending on k; in more general terms, this is the question of the fixed-parameter tractability (fpt) of a computationally hard problem. Vertex Cover is fpt; Dominating Set, classified as W[1]-hard (more precisely, W[2]-complete) by parameterized complexity theory, is very unlikely to be fpt. Intuitively speaking, a parameterized problem being classified as W[1]-hard with respect to parameter k means that it is at least as hard as computing a k-vertex clique in a graph. There seems to be no hope for doing this in f(k) · n^O(1) time for a computable function f.
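The gap between the two running-time shapes is drastic even for small parameter values. The step counts below are purely illustrative (f(k) = 2^k is just one example of a computable function f):

```python
# Comparing f(k) * poly(n) against n^k for n = 1000: in the first shape the
# combinatorial explosion is confined to k, in the second k sits in the
# exponent of n.
n = 1000
for k in [2, 5, 10]:
    fpt_steps = (2 ** k) * n   # fixed-parameter tractable shape
    xp_steps = n ** k          # "polynomial for constant k" shape
    print(k, fpt_steps, xp_steps)
# For k = 10: about 10^6 steps versus 10^30 steps.
```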

More formally, parameterized complexity is a two-dimensional framework for studying the computational complexity of problems [24, 32, 57]. One dimension is the input size n (as in classical complexity theory), and the other one is the parameter k (usually a positive integer). A problem is called fixed-parameter tractable (fpt) if it can be solved in f(k) · n^O(1) time, where f is a computable function only depending on k. This means that when solving a problem that is fpt, the combinatorial explosion can be confined to the parameter. There

    are numerous algorithmic techniques for the design of fixed-parameter algorithms, including

    data reduction and kernelization [11, 41], color-coding [3] and chromatic coding [2], itera-

    tive compression [58, 40], depth-bounded search trees, dynamic programming, and several

    more [44, 60]. Downey and Fellows [24] developed a parameterized theory of computational

    complexity to show fixed-parameter intractability. The basic complexity class for fixed-

    parameter intractability is called W[1] and there is good reason to believe that W[1]-hard

problems are not fpt [24, 32, 57]. Indeed, there is a whole complexity hierarchy FPT ⊆ W[1] ⊆ W[2] ⊆ . . . ⊆ XP, where XP denotes the class of parameterized problems that can be solved in polynomial time in case of constant parameter values. See Chen and Meng [22]

    for a recent survey on parameterized hardness and completeness. Indeed, the typical ex-

    pectation for a parameterized problem is that it either is in FPT or is W[1]-hard but in XP

    or already is NP-hard for some constant parameter value.

In retrospect, the one-dimensional NP-hardness theory [34] and its limited ability to

    offer a more fine-grained description of the complexity of exactly solving NP-hard problems

    led to the two-dimensional framework of parameterized complexity analysis. Developing

    further into multivariate algorithmics, the number of corresponding research challenges

    grows, on the one hand, by identifying meaningful different parameterizations of a single

    problem, and, on the other hand, by studying the combinations of single parameters and

    their impact on problem complexity. Indeed, multivariation is the continuing revolution of

    parameterized algorithmics, lifting the two-dimensional framework to a multidimensional

    one [27].

    3. Ways to Parameter Identification

    From the very beginning of parameterized complexity analysis the “standard parame-

    terization” of a problem referred to the cost of the solution (such as the size of a vertex set

    covering all edges of a graph, see Vertex Cover). For graph-modelled problems, “struc-

    tural” parameters such as treewidth (measuring the treelikeness of graphs) also have played

    a prominent role for a long time. As we try to make clear in the following, structural prob-

    lem parameterization is an enormously rich field. It provides a key to better understand

    the “nature” of computational intractability. The ultimate goal is to quantitatively classify

    how parameters influence problem complexity. The more we know about these interactions,

    the more likely it becomes to master computational intractability.

    Structural parameterization, in a very broad sense, is the major issue of this section.

    However, there is also more to say about parameterization by “solution quality” (solution

    cost herein being one aspect), which is discussed in the first subsection. This is followed

    by several subsections which can be interpreted as various aspects of structural parameter-

    ization. It is important to realize that it may often happen that different parameterization

    strategies eventually lead to the same parameter. Indeed, also the proposed strategies may

    overlap in various ways. Still, however, each of the subsequent subsections shall provide a

    fresh view on parameter identification.

    3.1. Parameterizations Related to Solution Quality

    The Idea. The classical and most often used problem parameter is the cost of the solution

    sought after. If the solution cost is large, then it makes sense to study the dual parameter

    (the cost of the elements not in the solution set) or above guarantee parameterization (the

    guarantee is the minimum cost every solution must have and the parameter measures the

    distance from this lower bound). Solution quality, however, also may refer to quality of

    approximation as parameter, or the “radius” of the search area in local search (a standard

method to design heuristic algorithms where the parameter k determines the size of a k-local neighborhood searched).


Examples. Finding a size-k vertex cover in an n-vertex graph is solvable in O(1.28^k + kn) time [21], that is, Vertex Cover is fixed-parameter tractable. In contrast, finding a size-k dominating set is W[1]-hard. In the case of Vertex Cover, the dual parameterization leads to searching for a size-(n − k′) vertex cover, where k′ is the number of vertices not contained in the vertex cover. This problem is W[1]-hard with respect to the parameter k′ [24]. Indeed, this problem is equivalent to finding a size-k′ independent set of vertices in a graph. This means that the corresponding problems Vertex Cover and Independent Set are dual to each other.
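The duality is easy to check directly: a vertex set covers every edge exactly when its complement contains no edge. A small sketch (function names and the toy graph are ours):

```python
# A set S is a vertex cover iff every edge has an endpoint in S; its
# complement is then an independent set (no edge has both endpoints in it).
def is_vertex_cover(edges, s):
    return all(u in s or v in s for u, v in edges)

def is_independent_set(edges, s):
    return all(not (u in s and v in s) for u, v in edges)

vertices = {1, 2, 3, 4}
edges = [(1, 2), (2, 3), (3, 4)]
cover = {2, 3}                                        # a size-2 vertex cover
assert is_vertex_cover(edges, cover)
assert is_independent_set(edges, vertices - cover)    # {1, 4} is independent
```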

    Above guarantee parameterization was pioneered by Mahajan and Raman [49] studying

    the Maximum Satisfiability problem, noting that in every boolean formula in conjunctive

    normal form one can satisfy at least half of all clauses. Hence, an obvious parameterization

(leading to fixed-parameter tractability) is whether one can satisfy at least ⌈m/2⌉ + k clauses of a formula in conjunctive normal form. Herein, m denotes the total number of clauses and the parameter is k, measuring the distance to the guaranteed threshold ⌈m/2⌉. There is recent progress on new techniques and results in this direction [50, 1]. A long-standing open problem is to determine the parameterized complexity of finding a size-(⌈n/4⌉ + k) independent set in an n-vertex planar graph, parameterized by k.
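The ⌈m/2⌉ guarantee itself has a one-line proof: every nonempty clause is satisfied by a given assignment or by its bitwise complement, so the better of the two satisfies at least half of the clauses. A sketch of this argument (the encoding and names are ours):

```python
# Clauses are lists of nonzero integers: literal i means variable i is true,
# -i means variable i is false (the usual DIMACS-style convention).
def satisfied(clauses, assignment):
    """Count the clauses satisfied under 'assignment' (dict: variable -> bool)."""
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def half_guarantee(clauses, variables):
    """Return the better of an all-true assignment and its complement."""
    all_true = {v: True for v in variables}
    all_false = {v: False for v in variables}
    return max(satisfied(clauses, all_true), satisfied(clauses, all_false))

# (x1 or x2) and (not x1) and (not x2 or x3) and (not x3): m = 4 clauses.
clauses = [[1, 2], [-1], [-2, 3], [-3]]
assert half_guarantee(clauses, {1, 2, 3}) >= (len(clauses) + 1) // 2
```

Every clause contains some literal, and that literal is true under one of the two complementary assignments, which is why the bound holds for any formula.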

Marx [53] surveyed many facets of the relationship between approximation and parameterized complexity. For instance, he discussed the issue of ratio-(1 + ε) approximation (that is, polynomial-time approximation schemes (PTASs)) parameterized by the quality-of-approximation measure 1/ε. The central question here is whether or not the degree of the polynomial of the running time depends on the parameter 1/ε.

Khuller et al. [45] presented a fixed-parameter tractability result for k-local search (parameterized by k) for the Minimum Vertex Feedback Edge Set problem. In contrast, Marx [54] provided W[1]-hardness results for k-local search for the Traveling Salesman problem. Very recently, fixed-parameter tractability results for k-local search for planar graph problems have been reported [31].

    Discussion. Parameterization by solution quality becomes a colorful research topic when

    going beyond the simple parameter “solution size.” Above guarantee parameterization

and k-local search parameterization still seem to be at early development stages. The connections of parameterization to polynomial-time approximation and beyond still lack a

    deep and thorough investigation [53].

    3.2. Parameterization by Distance from Triviality

    The Idea. Identify polynomial-time solvable special cases of the NP-hard problem under

    study. A “distance from triviality”-parameter then shall measure how far the given instance

    is away from the trivial (that is, polynomial-time solvable) case.

    Examples. A classical example for “distance from triviality”-parameterization are width

    concepts measuring the similarity of a graph compared to a tree. The point is that many

    graph problems that are NP-hard on general graphs become easily solvable when restricted

    to trees. The larger the respective width parameter is, the less treelike the considered graph

    is. For instance, Vertex Cover and Dominating Set both become fixed-parameter

    tractable with respect to the treewidth parameter; see Bodlaender and Koster [12] for a


    survey. There are many more width parameters measuring the treelikeness of graphs, see

    Hliněný et al. [42] for a survey.
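The point that tree-likeness helps can already be seen on trees themselves, where Vertex Cover is solvable by a simple bottom-up dynamic program; treewidth generalizes this idea to tree decompositions. A minimal sketch for the tree case (names and the example are ours):

```python
# Minimum vertex cover on a rooted tree: for each node compute the cheapest
# cover of its subtree with the node excluded resp. included.
def tree_vertex_cover(children, root):
    """children: dict node -> list of children. Returns (cost excluded, cost included)."""
    exclude = 0   # root not in the cover: each child edge forces the child in
    include = 1   # root in the cover: each child may be in or out
    for c in children.get(root, []):
        c_ex, c_in = tree_vertex_cover(children, c)
        exclude += c_in
        include += min(c_ex, c_in)
    return exclude, include

# A star with center 0 and leaves 1, 2, 3: the center alone covers all edges.
children = {0: [1, 2, 3]}
# min(tree_vertex_cover(children, 0)) -> 1
```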

    Besides measuring treewidth, alternatively one may also study the feedback vertex set

    number to measure the distance from a tree. Indeed, the feedback vertex set number of

    a graph is at least as big as its treewidth. Kratsch and Schweitzer [47] showed that the

    Graph Isomorphism problem is fixed-parameter tractable when parameterized by the

    feedback vertex set size; in contrast, this is open with respect to the parameter treewidth.

    A similar situation occurs when parameterizing the Bandwidth problem by the vertex

    cover number of the underlying graph [30].

    Further examples for the “distance from triviality”-approach appear in the context of

    vertex-coloring of graphs [18, 51]. Here, for instance, coloring chordal graphs is polynomial-

    time solvable and the studied parameter measures how many edges to delete from a graph

to make it chordal; this turned out to be fixed-parameter tractable [51]. Deineko et al. [23] and Hoffmann and Okamoto [43] described geometric “distance from triviality”-parameters

    by measuring the number of points inside the convex hull of a point set. A general view on

    “distance from triviality”-parameterization appears in Guo et al. [39].

    Discussion. Measuring distance from triviality is a very broad and flexible way to generate

    useful parameterizations of intractable problems. It helps to better analyze the transition

    from polynomial- to exponential-time solvability.

    3.3. Parameterization Based on Data Analysis

    The Idea. With the advent of algorithm engineering, it has become clear that algorithm

    design and analysis for practically relevant problems should be part of a development cy-

    cle. Implementation and experiments with a base algorithm combined with standard data

    analysis methods provide insights into the structure of the considered real-world data which

    may be quantified by parameters. Knowing these parameters and their typical values then

    can inspire new solving strategies based on multivariate complexity analysis.

    Examples. A very simple data analysis in graph problems would be to check the maximum

    vertex degree of the input graph. Many graph problems can be solved faster when the

    maximum degree is bounded. For instance, Independent Set is fixed-parameter tractable

on bounded-degree graphs (a straightforward depth-bounded search tree does the job) whereas it

    is W[1]-hard on general graphs.
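The search tree argument for bounded degree goes as follows: for any vertex v, some vertex of the closed neighborhood N[v] belongs to a maximum independent set, so one can branch over the at most d + 1 members of N[v] and recurse with parameter k − 1, giving a tree of size (d + 1)^k. A small sketch (names ours):

```python
# Depth-bounded search tree for Independent Set on bounded-degree graphs.
def has_independent_set(adj, k):
    """adj: dict vertex -> set of neighbors. True iff some independent set has size >= k."""
    if k == 0:
        return True
    if not adj:
        return False
    v = next(iter(adj))            # any vertex; its closed neighborhood is small
    for u in {v} | adj[v]:         # branch: put u into the independent set
        removed = {u} | adj[u]     # u's neighbors are now excluded
        rest = {w: adj[w] - removed for w in adj if w not in removed}
        if has_independent_set(rest, k - 1):
            return True
    return False

# The path 1-2-3 has the independent set {1, 3} but none of size 3.
adj = {1: {2}, 2: {1, 3}, 3: {2}}
# has_independent_set(adj, 2) -> True; has_independent_set(adj, 3) -> False
```

The branching is sound because if a maximum independent set avoided all of N[v], then v could be added to it, a contradiction.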

    Song et al. [61] described an approach for the alignment of a biopolymer sequence (such

    as an RNA or a protein) to a structure by representing both the sequence and the structure

    as graphs and solving some subgraph problem. Observing the fact that for real-world

    instances the structure graph has small treewidth, they designed practical fixed-parameter

    algorithms based on the parameter treewidth. Refer to Cai et al. [19] for a survey on

    parameterized complexity and biopolymer sequence comparison.

    A second example deals with finding dense subgraphs (more precisely, some form of

    clique relaxations) in social networks [55]. Here, it was essential for speeding up the algo-

    rithm and making it practically competitive that there were only relatively few hubs (that

    is, high-degree vertices) in the real-world graph. The corresponding algorithm engineering

    exploited this low parameter value.

  • MULTIVARIATE ALGORITHMICS AND PROBLEM PARAMETERIZATION 23

    Discussion. Parameterization by data analysis goes hand in hand with algorithm engi-

    neering and a data-driven algorithm design process. It combines empirical findings (that is,

    small parameter values measured in the input data) with rigorous theory building (provable

fixed-parameter tractability results). This line of investigation is still underdeveloped in
parameterized and multivariate algorithmics, but it is a litmus test for the field's practical
relevance and its impact on applied computing.

    3.4. Parameterizations Generated by Deconstructing Hardness Proofs

    The Idea. Look at the (many-one) reductions used to show a problem’s NP-hardness.

    Check whether certain quantities (that is, parameters) are assumed to be unbounded in

    order to make the reduction work. Parameterize by these quantities. It is important to

    note that this approach naturally extends to deconstructing W[1]-hardness proofs; here the

    goal is to find additional parameters to achieve fixed-parameter tractability results.

Examples. Recall our introductory example with Possible Winner for k-Approval. From the corresponding NP-hardness proofs it follows that this problem is NP-hard when

either the number of votes v is a constant (but k is unbounded) or k is a constant (but v is unbounded) [7, 10], whereas it becomes fixed-parameter tractable when parameterized by

both k and v [6].

A second example, where the deconstruction approach is also systematically explained,

    refers to the NP-hard Interval Constrained Coloring problem [46]. Looking at a

known NP-hardness proof [4], one may identify several quantities that are unbounded in

    the NP-hardness reduction; this was used to derive several fixed-parameter tractability re-

sults [46]. In contrast, a recent result showed that the quantity “number k of colors” alone is not useful as a parameter in the sense that the problem remains NP-hard when restricted

to instances with only three colors [15]. Indeed, Interval Constrained Coloring offers

    a multitude of challenges for multivariate algorithmics, also see Subsection 4.3.

    Discussion. Deconstructing intractability relies on the close study of the available hardness

proofs for an intractable problem. This means striving for a full understanding of the

    current state of knowledge about a problem’s computational complexity. Having identified

    quantities whose unboundedness is essential for the hardness proofs then can trigger the

    search for either stronger hardness or fixed-parameter tractability results.

    3.5. Parameterization by Dimension

    The Idea. The dimensionality of a problem plays an important role in computational ge-

    ometry and also in fields such as databases and query optimization (where the dimension

    number can be the number of attributes of a stored object). Hence, the dimension number

    and also the “range of values of each dimension” are important for assessing the computa-

    tional complexity of multidimensional problems.

  • 24 R. NIEDERMEIER

Examples. Cabello et al. [16] studied the problem to decide whether two n-point sets in d-dimensional space are congruent, a fundamental problem in geometric pattern matching. Brass and Knauer [13] conjectured that this problem is fixed-parameter tractable with

respect to the parameter d. However, deciding whether a set is congruent to a subset of another set is shown to be W[1]-hard with respect to d [16]. Another example appears in the context of geometric clustering. Cabello et al. [17] showed that the Rectilinear

3-Center problem is fixed-parameter tractable with respect to the dimension of the input

point set whereas Rectilinear k-Center for k ≥ 4 and Euclidean k-Center for k ≥ 2 are W[1]-hard with respect to the dimension parameter. See Giannopoulos et al. [35, 36]

    for more on the parameterized complexity of geometric problems.

The Closest String problem is of a different “dimension nature”. Here, one is given a

set of k strings of the same length and the task is to find a string which minimizes the maximum Hamming distance to the input strings. The two dimensions of this problem are the string length

(typically large) and the number k of strings (typically small). It was shown that Closest String is fixed-parameter tractable with respect to the “dimension parameter” k [38], whereas fixed-parameter tractability with respect to the string length is straightforward in

the case of constant-size input alphabets; also see Subsection 4.1.

    Discussion. Incorporating dimension parameters into investigations is natural and the pa-

    rameter values and ranges usually can easily be derived from the applications. The dimen-

    sion alone, however, usually seems to be a “hard parameter” in terms of fixed-parameter

tractability, so combining it with further parameters often seems unavoidable.

    3.6. Parameterization by Averaging Out

    The Idea. Assume that one is given a number of objects and a distance measure between

    them. In median or consensus problems, the goal is to find an object that minimizes the

    sum of distances to the given objects. Parameterize by the average distance to the goal

    object or the average distance between the input objects. In graph problems, the average

    vertex degree could for instance be an interesting parameter.

Examples. In the Consensus Patterns problem, for given strings s1, . . . , sk one wants to find a string s of some specified length such that each si, 1 ≤ i ≤ k, contains a substring matching s and the average of the distances of s to these k substrings is minimized. Marx [52] showed that Consensus Patterns is fixed-parameter tractable with respect to this average

distance parameter.
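To make the parameter concrete, the following hypothetical Python helper evaluates the Consensus Patterns objective for a candidate string s; the optimization problem itself is hard, and this only computes the cost being minimized.

```python
def hamming(a, b):
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def consensus_patterns_cost(strings, s):
    """Sum, over all input strings, of the Hamming distance from s to
    its best-matching substring; dividing by len(strings) gives the
    average-distance parameter considered in the text."""
    L = len(s)
    return sum(min(hamming(s, t[i:i + L]) for i in range(len(t) - L + 1))
               for t in strings)
```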

In the Consensus Clustering problem, one is given a set of n partitions C1, . . . , Cn of a base set S. In other words, every partition of the base set is a clustering of S. The goal is to find a partition C of S that minimizes the sum ∑_{i=1}^{n} d(C, Ci), where the distance function d measures how similar two clusterings are by counting the “differently placed” elements of S. In contrast to Consensus Patterns, here the parameter “average distance between two

input partitions” has been considered and led to fixed-parameter tractability [9]. Thus, the

    higher the degree of average similarity between input objects is, the faster one finds the

    desired median object.
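As an illustration, the following Python sketch computes one common formalization of the distance d (counting element pairs that are co-clustered in one partition but separated in the other) and the resulting average-distance parameter; the names are ad hoc.

```python
from itertools import combinations

def partition_distance(p1, p2):
    """Number of element pairs co-clustered in one partition but
    separated in the other (one common formalisation of d)."""
    def cluster_of(partition):
        return {x: i for i, block in enumerate(partition) for x in block}
    c1, c2 = cluster_of(p1), cluster_of(p2)
    elements = sorted(c1)
    return sum((c1[a] == c1[b]) != (c2[a] == c2[b])
               for a, b in combinations(elements, 2))

def average_pairwise_distance(partitions):
    """The structural parameter: average distance between two input
    partitions, computable quickly in advance."""
    pairs = list(combinations(partitions, 2))
    return sum(partition_distance(p, q) for p, q in pairs) / len(pairs)
```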


    Discussion. The average parameterization for Consensus Patterns directly relates to

    the solution quality whereas the one for Consensus Clustering relates to the structure of

    the input. In the latter case, the described example showed that one can deal with “outliers”

    having high distance to the other objects. Measuring the average distance between the input

    objects means to determine their degree of average similarity. This structural parameter

    value may be quickly computed in advance, making it easy to forecast the performance of

    the corresponding fixed-parameter algorithm.

    4. Three Case Studies

In the preceding section, we focused on various ways to single out interesting

    problem parameterizations. In what follows, we put emphasis on the multivariate aspects

    of complexity analysis related to (combining) different parameterizations of one and the

    same problem. To this end, we study three NP-hard problems that nicely exhibit various

    relevant features of multivariate algorithmics.

    4.1. Closest String

The NP-hard Closest String problem is to find a length-L string that minimizes the maximum Hamming distance to a given set of k length-L strings. The problem arises in computational biology (motif search in strings) and coding theory (minimum radius

problem).

Known Results. What are natural parameterizations here? First, consider the number k of input strings. Using integer linear programming results, fixed-parameter tractability with

respect to k can be derived [38]. This result is of theoretical interest only, due to a huge combinatorial explosion. Second, concerning the parameter string length L, for strings over alphabet Σ we obviously only need to check all |Σ|^L candidates for the closest string and choose a best one; hence fixed-parameter tractability with respect to L follows for constant-size alphabets. More precisely, Closest String is fixed-parameter tractable with respect

to the combined parameter (|Σ|, L). Finally, recall that the goal is to minimize the maximum distance d; thus, d is a natural parameter as well, being small (say, values below 10) in biological applications. Closest String is also shown to be fixed-parameter tractable

with respect to d by designing a search tree of size (d + 1)^d [38]. A further fixed-parameter algorithm with respect to the combined parameter (|Σ|, d) has a combinatorial explosion of the form (|Σ| − 1)^d · 2^{4d} [48], which has recently been improved to (|Σ| − 1)^d · 2^{3.25d} [62]. For small alphabet sizes these results improve on the (d + 1)^d search tree algorithm. There are also several parameterized complexity results on the more general Closest Substring

and further related problems [29, 37, 52, 48, 62].
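The candidate enumeration over all strings of length L just described can be sketched as follows; this is a hypothetical Python helper, practical only for tiny L and alphabet sizes.

```python
from itertools import product

def closest_string(strings, alphabet):
    """Brute force over all |alphabet|^L candidate strings, returning one
    that minimizes the maximum Hamming distance to the inputs.
    Fixed-parameter tractable in the combined parameter (|alphabet|, L)."""
    L = len(strings[0])

    def radius(cand):
        # Maximum Hamming distance from the candidate to any input string.
        return max(sum(a != b for a, b in zip(cand, s)) for s in strings)

    return min((''.join(c) for c in product(alphabet, repeat=L)), key=radius)
```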

Discussion. Closest String carries four obvious parameters, namely the number k of input strings, the string length L, the alphabet size |Σ|, and the solution distance d. A corresponding multivariate complexity analysis still faces several open questions with re-

spect to making solving algorithms more practical. For instance, it would be interesting

to see whether the (impractical) fixed-parameter tractability result for parameter k can be improved when adding further parameters. Moreover, it would be interesting to identify


    further structural string parameters that help to gain faster algorithms, perhaps in combi-

    nation with known parameterizations. This is of particular importance for the more general

    and harder Closest Substring problem.

Data analysis has indicated small d- and k-values in biological applications. Interesting polynomial-time solvable instances would help to find “distance from triviality” parameters.

Closest String remains NP-hard for binary alphabets [33]; a systematic intractability

deconstruction appears desirable. Closest String has two obvious dimensions, k and L, where k is typically much smaller than L. Parameterization by “averaging out” is hopeless for Closest String since one can easily many-one reduce an arbitrary input

    instance to one with constant average Hamming distance between input strings: just add

    a sufficiently large number of identical strings. Altogether, the multivariate complexity

    nature of Closest String is in many aspects unexplored.

    4.2. Kemeny Score

    The Kemeny Score problem is to find a consensus ranking of a given set of votes (that

    is, permutations) over a given set of candidates. A consensus ranking is a permutation of the

    candidates that minimizes the sum of “inversions” between this ranking and the given votes.

    Kemeny Score plays an important role in rank aggregation and multi-agent systems; due

    to its many nice properties, it is considered to be one of the most important preference-based

    voting systems.

    Known Results. Kemeny Score is NP-hard already for four votes [25, 26], excluding

    hope for fixed-parameter tractability with respect to the parameter “number of votes”.

In contrast, the parameter “number of candidates” c trivially leads to fixed-parameter tractability by simply checking all possible c! permutations that may constitute the consensus ranking. Using a more clever dynamic programming approach, the combinatorial

explosion can be lowered to 2^c [8]. A different natural parameterization is to study what

    happens if the votes have high pairwise average similarity. More specifically, this means

    counting the number of inversions between each pair of votes and then taking the average

    over all pairs. Indeed, the problem is also fixed-parameter tractable with respect to this

similarity value s, the best known algorithm currently incurring a combinatorial explosion of 4.83^s [59]. Further natural parameters are the sum of distances of the consensus ranking to input votes (that is, the Kemeny score) or the range of positions a candidate takes within

a vote [8]. Unlike for the pairwise distance parameter, where both the maximum and

the average version lead to fixed-parameter tractability [8, 59], for the range parameter only

the maximum version does, whereas the problem becomes NP-hard already for an average

range value of 2 [8]. Simjour [59] also studied the interesting parameter “Kemeny score

    divided by the number of candidates” and also showed fixed-parameter tractability in this

    case. There are more general problem versions that allow ties within the votes. Some

    fixed-parameter tractability results also have been achieved here [8, 9].
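The subset dynamic program behind the 2^c bound can be sketched in Python as follows; this is an illustrative reimplementation of the standard idea, not the algorithm of [8] itself.

```python
def kemeny_score(votes):
    """Kemeny score via dynamic programming over candidate subsets,
    with combinatorial explosion 2^c (c = number of candidates)."""
    candidates = sorted(set(votes[0]))
    c = len(candidates)
    # pref[i][j] = number of votes ranking candidate i before candidate j
    pref = [[0] * c for _ in range(c)]
    for vote in votes:
        pos = {cand: p for p, cand in enumerate(vote)}
        for i in range(c):
            for j in range(c):
                if i != j and pos[candidates[i]] < pos[candidates[j]]:
                    pref[i][j] += 1
    INF = float('inf')
    dp = [INF] * (1 << c)  # dp[mask]: best cost of ranking `mask` as a prefix
    dp[0] = 0
    for mask in range(1 << c):
        for nxt in range(c):
            if mask & (1 << nxt):
                continue
            # Appending `nxt` after the prefix: every vote preferring
            # nxt over an already-placed candidate d costs one inversion.
            cost = sum(pref[nxt][d] for d in range(c) if mask & (1 << d))
            new = mask | (1 << nxt)
            dp[new] = min(dp[new], dp[mask] + cost)
    return dp[(1 << c) - 1]
```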

Discussion. Kemeny Score is another example of a problem carrying numerous “obvious” parameters. Most known results, however, are with respect to two-dimensional

    complexity analysis (that is, parameterization by a single parameter), lacking the extension

    to a multivariate view.

    First data analysis studies on ranking data [14] indicate the practical relevance of some

of the above parameterizations. Average pairwise distance may also be considered a


    straightforward “distance from triviality”-measure since average distance 0 means that all

input votes are equal. The same holds true for the range parameter. So far, intractability

deconstruction for Kemeny Score refers only to the NP-hardness

result of Dwork et al. [25, 26], implying hardness already for a constant number of votes. A

    more fine-grained intractability deconstruction is missing. Kemeny Score can be seen as a

two-dimensional problem. One dimension is the number of votes and the other is the

number of candidates; however, only the latter leads to fixed-parameter tractability. In this

    context, the novel concept of “partial kernelization” has been introduced [9]. To the best

of our knowledge, Kemeny Score was the first example of a systematic approach to

average parameterization [8, 9]. As for Closest String, a multidimensional analysis of

the computational complexity of Kemeny Score remains largely open.

    4.3. Interval Constrained Coloring

In the NP-hard Interval Constrained Coloring problem [4, 5] (arising in automated mass spectrometry in biochemistry) one is given a set of m integer intervals in the range 1 to r and a set of m associated multisets of colors (specifying for each interval the colors to be used for its elements), and one asks whether there is a “consistent” coloring for

all integer points from {1, . . . , r} that complies with the constraints specified by the color multisets.
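Verifying a candidate coloring against the constraints is straightforward; the following hypothetical Python checker makes them explicit (intervals given as 1-based inclusive endpoint pairs).

```python
from collections import Counter

def is_consistent(coloring, intervals, multisets):
    """Check whether `coloring` (a list giving the colour of each point
    1..r, stored 0-indexed) satisfies every interval's colour-multiset
    constraint."""
    for (lo, hi), required in zip(intervals, multisets):
        # The multiset of colours used on [lo, hi] must match exactly.
        if Counter(coloring[lo - 1:hi]) != Counter(required):
            return False
    return True
```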

    Known Results. Interval Constrained Coloring remains NP-hard even in case of

    only three colors [15]. Deconstructing the original NP-hardness proof due to Althaus et

    al. [4] and taking into account the refined NP-hardness proof of Byrka et al. [15], the

    following interesting parameters have been identified [46]:

• interval range,
• number of intervals,
• maximum interval length,
• maximum cutwidth with respect to overlapping intervals,
• maximum pairwise interval overlap, and
• maximum number of different colors in the color multisets.

    All these quantities are assumed to be unbounded in the NP-hardness reduction due to

    Althaus et al. [4]; this immediately calls for a parameterized investigation. Several fixed-

    parameter tractability results have been achieved for single parameters and parameter pairs,

    leaving numerous open questions [46]. For instance, the parameterized complexity with re-

    spect to the parameter “number of intervals” is open, whereas Interval Constrained

    Coloring is fixed-parameter tractable with respect to the parameter “interval length”.

Combining the parameters “number of colors” and “number of intervals”, though, one

    achieves fixed-parameter tractability. In summary, many multidimensional parameteriza-

    tions remain unstudied.

Discussion. The case of Interval Constrained Coloring is a prime example of the

deconstruction of intractability and of the existence of numerous relevant parameterizations.

    There are a few known fixed-parameter tractability results, several of them calling for

    improved algorithms. Checking “all” reasonable parameter combinations and constellations

    could easily make an interesting PhD thesis.


    The biological data often contain only three colors; the corresponding NP-hardness

    result [15] shows that this alone is not a fruitful parameter—combination with other pa-

    rameters is needed (such as the interval range [46]). Moreover, observations on biological

    data indicate a small number of lengthy intervals, motivating a further parameterization

    possibility. Instances with only two colors or cutwidth two are “trivial” in the sense that

    (nontrivial) polynomial-time algorithms have been developed to solve these instances [4, 46].

    Unfortunately, in both cases a parameter value of three already yields NP-hardness. The

    two natural dimensions of the problem are given by the interval range and the number of

    intervals, both important parameters. Average parameterization has not been considered

    yet. In summary, Interval Constrained Coloring might serve as a “model problem”

    for studying many aspects of multivariate algorithmics.

    5. Conclusion with Six Theses on Multivariate Algorithmics

    We described a number of possibilities to derive meaningful “single” parameterizations.

Typically, not every such parameter will allow for fixed-parameter tractability results.

Assume that a problem is W[1]-hard with respect to a parameter k (or even NP-hard for constant values of k). Then this calls for studying whether the problem becomes tractable when adding a further parameter k′, that is, asking whether the problem is fixed-parameter tractable with respect to the combined parameter (k, k′). Moreover, even if a problem is classified as fixed-parameter tractable with respect to a parameter k, this still can be practically useless. Hence, introducing a second parameter may open the route

    to practical fixed-parameter algorithms. Altogether, in its full generality such a “problem

    processing” forms the heart of multivariate algorithmics.

Fellows et al. [28] proposed to study the “complexity ecology of parameters”. Restricting

the discussion to graph problems for ease of presentation, one may build “complex-

ity matrices” where both rows and columns represent certain parameters such as treewidth,

bandwidth, vertex cover number, domination number, and so on. The corresponding val-

ues deliver structural information about the input graph. Then, a matrix entry in row x and column y represents a question of the form “how hard is it to compute the quantity represented by column y when parameterized by the quantity represented by x?”. For example, it is easy to see that the domination number can be computed by a fixed-parameter

algorithm using the parameter vertex cover number. Obviously, there is no need to restrict

    such considerations to two-dimensional matrices, thus leading to a full-flavored multivariate

    algorithmics approach.

After all, a multivariate approach may open Pandora’s box by generating a great number

of questions regarding the influence of, and the interrelationships between, parameters in

    terms of computational complexity. With the tools provided by parameterized and multi-

    variate algorithmics, the arising questions yield worthwhile research challenges. Indeed, to

    better understand important phenomena of computational complexity, there seems to be

    no way to circumvent such a “massive analytical attack” on problem complexity. Opening

    Pandora’s box, however, is not hopeless because multivariate algorithmics can already rely

    on numerous tools available from parameterized complexity analysis.

    There is little point in finishing this paper with a list of open questions—basically every

    NP-hard problem still harbors numerous challenges in terms of multivariate algorithmics.

    Indeed, multivariation is a horn of plenty concerning practically relevant and theoretically


    appealing opportunities for research. Instead, we conclude with six claims and conjectures

    concerning the future of (multivariate) algorithmics.

    Thesis 1: Problem parameterization is a pervasive and ubiquitous tool in attacking

    intractable problems. A theory of computational complexity neglecting parameter-

    ized and multivariate analysis is incomplete.

    Thesis 2: Multivariate algorithmics helps in gaining a more fine-grained view on

    polynomial-time solvable problems, also getting in close touch with adaptive al-

    gorithms.4

    Thesis 3: Multivariate algorithmics can naturally incorporate approximation algo-

    rithms, relaxing the goal of exact to approximate solvability.

    Thesis 4: Multivariate algorithmics is a “systems approach” to explore the nature

    of computational complexity. In particular, it promotes the development of meta-

    algorithms that first estimate various parameter values and then choose the appro-

    priate algorithm to apply.

    Thesis 5: Multivariate algorithmics helps to significantly increase the impact of The-

    oretical Computer Science on practical computing by providing more expressive

    statements about worst-case complexity.

    Thesis 6: Multivariate algorithmics is an ideal theoretical match for algorithm engi-

    neering, both areas mutually benefiting from and complementing each other.

    Acknowledgments. I am grateful to Nadja Betzler, Michael R. Fellows, Jiong Guo, Christian

    Komusiewicz, Dániel Marx, Hannes Moser, Johannes Uhlmann, and Mathias Weller for

    constructive and insightful feedback on earlier versions of this paper.

    References

[1] N. Alon, G. Gutin, E. J. Kim, S. Szeider, and A. Yeo. Solving MAX-r-SAT above a tight lower bound. In Proc. 21st SODA. ACM/SIAM, 2010.

[2] N. Alon, D. Lokshtanov, and S. Saurabh. Fast FAST. In Proc. 36th ICALP, volume 5555 of LNCS, pages 49–58. Springer, 2009.

[3] N. Alon, R. Yuster, and U. Zwick. Color-coding. J. ACM, 42(4):844–856, 1995.

[4] E. Althaus, S. Canzar, K. Elbassioni, A. Karrenbauer, and J. Mestre. Approximating the interval constrained coloring problem. In Proc. 11th SWAT, volume 5124 of LNCS, pages 210–221. Springer, 2008.

[5] E. Althaus, S. Canzar, M. R. Emmett, A. Karrenbauer, A. G. Marshall, A. Meyer-Baese, and H. Zhang. Computing H/D-exchange speeds of single residues from data of peptic fragments. In Proc. 23rd SAC ’08, pages 1273–1277. ACM, 2008.

[6] N. Betzler. On problem kernels for possible winner determination under the k-approval protocol. 2009.

[7] N. Betzler and B. Dorn. Towards a dichotomy of finding possible winners in elections based on scoring rules. In Proc. 34th MFCS, volume 5734 of LNCS, pages 124–136. Springer, 2009.

[8] N. Betzler, M. R. Fellows, J. Guo, R. Niedermeier, and F. A. Rosamond. Fixed-parameter algorithms for Kemeny scores. Theor. Comput. Sci., 410(45):4454–4570, 2009.

[9] N. Betzler, J. Guo, C. Komusiewicz, and R. Niedermeier. Average parameterization and partial kernelization for computing medians. In Proc. 9th LATIN, LNCS. Springer, 2010.

[10] N. Betzler, S. Hemmann, and R. Niedermeier. A multivariate complexity analysis of determining possible winners given incomplete votes. In Proc. 21st IJCAI, pages 53–58, 2009.

[11] H. L. Bodlaender. Kernelization: New upper and lower bound techniques. In Proc. 4th IWPEC, volume 5917 of LNCS, pages 17–37. Springer, 2009.

4 For instance, an adaptive sorting algorithm takes advantage of existing order in the input, with its running time being a function of the disorder in the input.


[12] H. L. Bodlaender and A. M. C. A. Koster. Combinatorial optimization on graphs of bounded treewidth. Comp. J., 51(3):255–269, 2008.

[13] P. Brass and C. Knauer. Testing the congruence of d-dimensional point sets. Int. J. Comput. Geometry Appl., 12(1–2):115–124, 2002.

[14] R. Bredereck. Fixed-parameter algorithms for computing Kemeny scores—theory and practice. Studienarbeit, Institut für Informatik, Friedrich-Schiller-Universität Jena, Germany, 2009.

[15] J. Byrka, A. Karrenbauer, and L. Sanità. The interval constrained 3-coloring problem. In Proc. 9th LATIN, LNCS. Springer, 2010.

[16] S. Cabello, P. Giannopoulos, and C. Knauer. On the parameterized complexity of d-dimensional point set pattern matching. Inf. Process. Lett., 105(2):73–77, 2008.

[17] S. Cabello, P. Giannopoulos, C. Knauer, D. Marx, and G. Rote. Geometric clustering: fixed-parameter tractability and lower bounds with respect to the dimension. ACM Transactions on Algorithms, 2009. To appear. Preliminary version at SODA 2008.

[18] L. Cai. Parameterized complexity of vertex colouring. Discrete Appl. Math., 127(1):415–429, 2003.

[19] L. Cai, X. Huang, C. Liu, F. A. Rosamond, and Y. Song. Parameterized complexity and biopolymer sequence comparison. Comp. J., 51(3):270–291, 2008.

[20] J. Chen, B. Chor, M. Fellows, X. Huang, D. W. Juedes, I. A. Kanj, and G. Xia. Tight lower bounds for certain parameterized NP-hard problems. Inform. and Comput., 201(2):216–231, 2005.

[21] J. Chen, I. A. Kanj, and G. Xia. Improved parameterized upper bounds for Vertex Cover. In Proc. 31st MFCS, volume 4162 of LNCS, pages 238–249. Springer, 2006.

[22] J. Chen and J. Meng. On parameterized intractability: Hardness and completeness. Comp. J., 51(1):39–59, 2008.

[23] V. G. Deiněko, M. Hoffmann, Y. Okamoto, and G. J. Woeginger. The traveling salesman problem with few inner points. Oper. Res. Lett., 34(1):106–110, 2006.

[24] R.