
Page 1

Code and data structure implications when patching without local storage… and resource management overall.

Ken Demarest
[email protected] / [email protected]

Page 2

Background on this talk

This is a thought experiment – it all started with Jon asking me what hard problems I had heard about recently…

- Patching consoles… Turns out I never did it, so…
- Focus on high concepts; ignores numerous practical issues
- Makes assumptions about a future that may not occur
- Goal is to drive discussion, out-of-the-box thought
- Gather some current best practices relative to future improvements

And maybe… perspective for console guys who might not have the depth of online experience PC developers have, and PC guys who are unused to storage limitations

Page 3

Initial Poll

Who has ever delivered a patch to their game?

Who uses late linking of data?

Who has edit-and-continue in their data path? What areas cause level resets? What areas invalidate edit-and-continue?

Whose products support mods / plugins?

Who is console only (vs PC or console+PC)?

Page 4

Situation

Next gen consoles (PS3, XB2, PSP, DS) will provide connectivity

Sending data to console games post-sale will enable new revenue opportunities

- Patching (should never happen, of course)
- Content unlock (fill every DVD)
- Content extension (Lode Runner maps)
- Community mods (console mods built on PC)
- Marketing intrusion… I mean surveys, advertising, dynamic product placement, up-sell, cross-sell

Storage may be limited
- Never as massive as PC storage – MMO ‘Patch Kings’
- Current PS2: no HD; Microsoft’s HD ‘nut’; trend to mem cards

Page 5

Data Transfer Options

Require storage
- But even then you won’t be the only app to utilize it

Send data every time
- Plausible, but slow
- Think of the online connection as a ‘remote hard drive’
- You’ve got ‘more bandwidth than storage’ (ex: 60% broadband, 40% modem on PS2 today)

Conclusion: any data sent should be as small as possible… but how?

Page 6

Reality Check

- North American PS2 install base is ~30M; 3.2M connected users
- Appears unlikely to break 30% connectivity in near- to mid-term
- Europe still hamstrung
- Japan uptake less certain – <1M today; Korea likely uptake
- But all new PS2s have the network adapter built in

Still, 30% of 30M is 9M users – the size of the largest PC game penetration, and about 18x the size of the MMO audience

Assuming XB2 leverages MSN/Zone/Passport user base, we could see big numbers here as well

So still a viable profit center

Page 7

Considered as a patch problem, how do we keep data transfer small?

Today’s code & data layout often resists efficient patch sizes

- Code fixups
- Data aggregation, indexes, etc.

The good news:
- Simple techniques can ease the problems
- They (might) ripple into greater build efficiency

The bad news:
- Many techniques imply slower loads
- Might have to change your fundamental resource management architecture
- But hey, you’ve got a while to make these systemic changes

Page 8

Code Options

Stop fixup problems
- DLLs (or equivalent – MS TDR)
- Ship the obj files – link at run time
- Fixed, predictable link order
  - Imagine an order-retaining linker
  - Post ship, name everything zz1, zz2, etc.

Code hooks (like a plug-in architecture… or a mod; sketched below)
- In main loop, around mem allocations, root ancestor constructor/destructor, DLL detection, kernel process insertion, etc.
- Code intercept installation on load
- All facilitated with a compact game state

Manual aspect-orientation
- Crazy VMT redirect [applyDamage()]

Warning: Absurd ideas herein for the sake of illustration
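A minimal sketch of the code-hook idea, in C++. HookTable, gHooks, and the hook-point names are hypothetical illustrations (not from the talk); the point is that downloaded patch or mod code registers callbacks at well-known intercept points, so the shipped executable never needs binary fixups:

    // Hypothetical hook table: patch/mod code installed at load time registers
    // callbacks at well-known points (main loop, applyDamage, allocations, etc.)
    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    struct HookTable {
        std::map<std::string, std::vector<std::function<void(void*)>>> hooks;

        void add(const std::string& point, std::function<void(void*)> fn) {
            hooks[point].push_back(std::move(fn));
        }
        void run(const std::string& point, void* ctx = nullptr) {
            auto it = hooks.find(point);
            if (it == hooks.end()) return;
            for (auto& fn : it->second) fn(ctx);     // call every installed intercept
        }
    };

    HookTable gHooks;                                // installed once, at game start

    struct Entity { int health = 100; };

    void applyDamage(Entity& e, int amount) {
        gHooks.run("applyDamage.pre", &e);           // a patch can rebalance damage here
        e.health -= amount;
    }

    void mainLoop(Entity& player) {
        gHooks.run("frame.begin", &player);          // plug-in / mod / patch intercepts
        applyDamage(player, 10);
        gHooks.run("frame.end", &player);
    }

A downloaded patch would then call something like gHooks.add("applyDamage.pre", ...) during its load step; the same mechanism serves mods and plug-ins.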

Page 9

Sample data path:

Hot spots
- Getting data game-friendly seldom has order guarantees
- Some source data has a Butterfly Effect on derived data
- Derived data takes a long time to generate; often large (otherwise you wouldn’t need to pre-generate it)
- Compression may cross assets (text?)
- Internal linkage lacks persistent references
- Concat indices often resemble code fixups (see the index sketch below)
- Embedded concats only make it worse

Data has troubles too…

[Data path diagram: Source → Game-Friendly → Derived Data → Link → Compress → Concat → Embed]

Illustrative – not necessarily in order; each step may also happen multiple times
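A tiny C++ illustration (PackedEntry and buildIndex are hypothetical names) of why a concat index resembles a code fixup: every offset depends on the sizes of all assets packed before it, so growing one early asset changes the whole index and most of the file layout, making a naive binary diff nearly as large as the file itself:

    #include <cstdint>
    #include <string>
    #include <utility>
    #include <vector>

    struct PackedEntry { std::string name; uint32_t offset; uint32_t size; };

    // Build the index for a naively concatenated pack file.
    std::vector<PackedEntry> buildIndex(
            const std::vector<std::pair<std::string, uint32_t>>& assets) {
        std::vector<PackedEntry> index;
        uint32_t cursor = 0;
        for (const auto& [name, size] : assets) {
            index.push_back({name, cursor, size});   // offset depends on all prior sizes
            cursor += size;                          // one size change ripples downstream
        }
        return index;
    }

The relocatable, sector-based concat on the following Data Options page avoids this by keeping untouched assets byte-identical in place.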

Page 10

Data Options

The biggies
- Model the data build on code build principles
- Dynamic linking of data at (re)load time (sketched after this list)
- Concat using a relocatable, sector-based approach (the irony of FAT32; sector sorting; tell FAT to order files?)

The others
- Modularity and predictable ordering in all data, to facilitate a diff
- Compression optional (or diff-friendly via window borders)
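A minimal sketch, in C++, of late data linking: assets keep persistent symbolic references (names here; GUIDs or hashes work too) and a registry resolves them to pointers at (re)load time, so a patched asset can replace the original without invalidating anything that refers to it. Asset, AssetRegistry, and linkAll are hypothetical names, not the talk’s system:

    #include <memory>
    #include <string>
    #include <unordered_map>
    #include <vector>

    struct Asset {
        std::string name;
        std::vector<std::string> refs;       // symbolic links; survive patches
        std::vector<Asset*> resolved;        // rebuilt by the link pass below
    };

    struct AssetRegistry {
        std::unordered_map<std::string, std::unique_ptr<Asset>> assets;

        void add(std::unique_ptr<Asset> a) {         // a later add overwrites = a patch
            std::string key = a->name;
            assets[key] = std::move(a);
        }

        void linkAll() {                             // the (re)load-time link step
            for (auto& [name, a] : assets) {
                a->resolved.clear();
                for (const auto& ref : a->refs)
                    if (auto it = assets.find(ref); it != assets.end())
                        a->resolved.push_back(it->second.get());
            }
        }
    };

This mirrors how DLL imports are resolved by name at run time, which is the code-side analogy from the earlier Code Options page.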

Page 11

When Sending Data or Code…

Date-stamped resources make atomic changes clear, QA-able

Diff then compress before send

Merge at load time (no local storage, remember?)

Only send when game-relevant
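A toy end-to-end sketch of the diff-then-compress, merge-at-load flow in C++. The DeltaOp format and applyDelta are hypothetical and deliberately simplistic (a real system would use a proper binary-diff tool plus compression for the wire): the console loads the original asset from disc, receives only COPY/ADD ops over the network, and reconstructs the patched asset in RAM, never touching local storage:

    #include <cstdint>
    #include <vector>

    struct DeltaOp {
        enum Kind : uint8_t { Copy, Add } kind;
        uint32_t srcOffset = 0;              // Copy: where to read in the disc asset
        uint32_t length    = 0;              // Copy: how many bytes to reuse
        std::vector<uint8_t> bytes;          // Add: new bytes carried by the patch
    };

    std::vector<uint8_t> applyDelta(const std::vector<uint8_t>& discAsset,
                                    const std::vector<DeltaOp>& ops) {
        std::vector<uint8_t> out;
        for (const DeltaOp& op : ops) {
            if (op.kind == DeltaOp::Copy)
                out.insert(out.end(),
                           discAsset.begin() + op.srcOffset,
                           discAsset.begin() + op.srcOffset + op.length);
            else                             // DeltaOp::Add
                out.insert(out.end(), op.bytes.begin(), op.bytes.end());
        }
        return out;                          // merged asset lives only in RAM
    }

Since most ops are small COPY records against data the player already owns on disc, the transfer stays tiny even for large assets.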

Page 12

Veering to a related thought experiment…

Let’s suppose that you never had to QA anything, or worry about load times

Your end-users could theoretically get build results just like the team does (suspend disbelief, please… and Yes, MMOs appear to do this already…)

Page 13

Hey, quality upload == quality build?

Conceptually, delivery of patches/upgrades/etc to customers is no different than delivery of build updates to your team (except for QA, load times, delivery infrastructure, blah blah)

HL2, Halo2, Oddworld all have facets of the data delivery techniques discussed earlier

Trim resource management tends to yield faster builds
- Tends to prepare you for efficient online delivery too

Mod community has had many of these techniques for years – yet projects still let their build environment stray from the faith, often for very valid reasons

How might these techniques point us towards ‘data delivery nirvana’ conjoined with ‘build environment nirvana’? Can TCP/IP distribution of code and data be unified across target audiences (team/players)?

Page 14

A few pathological cases…

Artist adds wrinkles to a character’s forehead, and all players see it immediately

Dev team and the community make simo-changes to levels, and everybody sees it at once

Content rights (ownership) is a characteristic of both the dev team and players

i.e. what content you can see/play, as well as your rights to make changes

We’re heading over the edge, so why stop now…

Page 15

Straw Man (1)

All tools are also servers of their data and services via TCP/IP
- Maya, Max, Photoshop, your crazy facial animation tool
- Via a uniform format (SOAP? XMLRPC?)
- Capable of evincing change deltas, not just entire data units
- Via a perfectly uniform resource system with perfect pointer management
- Game displays insta-results and multi-user simo editing on live levels

Late linking and persistent symbolic links allow data-based ‘edit and continue’

All code and data compilation is always distributed and cached
Data validation is so good it isn’t possible for the team to get stopped
Oh, and automated code unit testing makes this true for code as well

Derived data is never required – only tasked out then later loaded when ready

Just-in-time auto-concatenation (by the game?) groups data for most efficient load times

Warning: Blatant impossibilities ahead. No throwing of rotten tomatoes, please.

Page 16

Straw Man (2)

Validation extends to code
- A bot plays through the levels before you see them, finds bugs before QA
- Bugs are always fully repeatable, often across builds

Code and scripting both always link late, fast, dynamic re-link

The game never halts – the code is so robust it even saves game state across major recompiles

Publishing to team members and publishing to players vary only in the amount of QA

In fact, player distribution is just a version control sync

OK, maybe one small tomato

Page 17

Discussion

What in the straw man, ignoring constraints, is undesirable?

What are key obstacles to achieving the straw man?

What other high-concept, perfect-world systems would you like to see in a next-generation resource management / data delivery system? (or open discussion to asset chain overall)

Page 18

Cost Analysis Poll

What % of tech time in your org is spent creating systems code (vs. leaf code): 20%, 40%, 60%, 80%?

Data full recompile: <10 min, <30 min, <1 hr, <2 hr, more?

Code full recompile: <5 min, <10 min, <30 min, <1 hr, more?

Who runs metrics on code effort vs. lost team time?

Page 19

Next Gen Tools Poll

Who journals, or uses techniques to improve bug reproduction?

Who uses unit testing? Expects to use it?

Focusing on techniques to debug multithreaded pain?

Fully generic layer for tweaking data values (with a GUI layer where needed)?

Who sees integration of 3rd party tools as a major part of their next product?

Other techniques?

Page 20

Next Gen Resource Chain Stealth Talk

Ken Demarest

[email protected]