Rqmts & OO
(Textual) Requirements Assistant*
Hayes: works on repositories of textual documents (requirements, designs, failure reports, …)
Scale: readily works with CM1 (in MDP)
Nikora: works on repositories of textual requirements. Scale: studies of 1300+ requirements
WHAT: Automate onerous yet important tasks related to requirements tracing
HOW:
Hayes: well-chosen, modest linguistic processing; trainable from user inputs; thorough, thoughtful metrics
Nikora: scripts to automate search (pattern-based), closure (of the tracing graph), and comparison (of result sets); identifies the frequency of various expressive styles
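Candidate-link generation of the kind Hayes's tool performs can be sketched with plain term-frequency cosine similarity (a toy illustration; the actual tool's linguistic processing, training, and metrics are far richer, and the function and variable names below are invented):

```python
import re
from math import sqrt

def tokenize(text):
    """Lowercase and split on non-alphanumerics; a crude stand-in
    for real linguistic processing (stemming, thesauri, etc.)."""
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

def cosine(a, b):
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    va, vb = {}, {}
    for t in a:
        va[t] = va.get(t, 0) + 1
    for t in b:
        vb[t] = vb.get(t, 0) + 1
    dot = sum(va[t] * vb.get(t, 0) for t in va)
    na = sqrt(sum(v * v for v in va.values()))
    nb = sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def trace(requirements, designs, threshold=0.2):
    """Return candidate (req_id, design_id, score) links above the
    threshold, highest-scoring first; an analyst then vets the list."""
    links = []
    for rid, rtext in requirements.items():
        for did, dtext in designs.items():
            s = cosine(tokenize(rtext), tokenize(dtext))
            if s >= threshold:
                links.append((rid, did, round(s, 2)))
    return sorted(links, key=lambda x: -x[2])
```

For example, tracing "log all telemetry packets" against a telemetry-logging design element and an unrelated authentication element keeps only the first link; the threshold trades recall against the number of false candidates the analyst must reject.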
WHY:
RETRO tool – coming soon to the SARP site! Are the days of textual requirements numbered? (e.g., by “model based”)
* akin to the “Apprentice” name used by Rich, Waters & Reubenstein
Analyst’s Assistant
Menzies: where models exist, subsumes “heuristic” activities (debugging, diagnosis, optimization, tuning…)
Scale: no fear! Adoptability: taught to students
Powell: property validation of designs expressed in a formal model, including C code. Scale: MER arbiter; Adoptability: 3rd-party experimenter
WHAT: Analysis results for the masses – timely, scalable, meaningful
HOW:
Menzies: random search; simple yet expressive language; underlying language mechanism
Powell: utilizing LURCH (David Owen), in turn based on Menzies’ ideas; bugs’ connection to code obvious
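The random-search idea behind LURCH – probing a model’s state space with many cheap random walks rather than enumerating it exhaustively – can be sketched roughly (an illustrative toy, not LURCH’s actual algorithm or interface; the counter model and all names are invented):

```python
import random

def random_search(initial, successors, violates, walks=200, depth=50, seed=1):
    """Run many short random walks through the state space; return the
    first path reaching a property-violating state (a counterexample
    trace), or None if no violation was found."""
    rng = random.Random(seed)
    for _ in range(walks):
        state, path = initial, [initial]
        for _ in range(depth):
            nxt = successors(state)
            if not nxt:
                break  # dead end; start a fresh walk
            state = rng.choice(nxt)
            path.append(state)
            if violates(state):
                return path
    return None

# Toy model: a counter stepping +/-1 that must never exceed 3 (the "property").
succ = lambda n: [n + 1, max(n - 1, 0)]
bad = lambda n: n > 3
trace = random_search(0, succ, bad)  # a path 0, ..., 4 exposing the violation
```

Unlike exhaustive model checking, this gives no guarantee when it reports nothing, but it scales: cost is walks × depth, independent of state-space size.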
WHY:
But am *I* too old to learn?
Are models really going to be as prevalent as we think?
Designer’s Assistant
Whittle: assists in the progression of requirements, use cases, scenarios, and design. Applicability: event-driven systems, UML
Scale: 10 dense pages of requirements, 40 scenarios
Beimes: Model checking of interfaces (proper use of: OS calls, sequencing, synchronization, commanding)
Applicability: interfaces, including those of COTS. Scale: CM1 study; VxWorks study results imminent
WHAT: Help do/check design
HOW:
Whittle: prioritize among use cases; elicit off-nominal scenarios; form relationships among scenarios; generalize scenarios to state machines
Beimes: model checking, static analysis platform; control-flow graph; properties easily expressed (C-like language, plus standard templates)
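Interface-usage checks of the sort described above (proper sequencing of calls) reduce to running call traces through a small state machine; a minimal sketch, with a hypothetical open/read/close protocol standing in for real OS-call rules:

```python
# Allowed protocol: open -> (read)* -> close; anything else is a violation.
# Monitor DFA transition table; states are "closed" and "opened".
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "read"): "opened",
    ("opened", "close"): "closed",
}

def check_trace(calls):
    """Walk a call trace through the DFA; return the index of the first
    illegal call, or None if the trace conforms to the protocol."""
    state = "closed"
    for i, call in enumerate(calls):
        nxt = TRANSITIONS.get((state, call))
        if nxt is None:
            return i  # e.g., read before open, or double close
        state = nxt
    # Trace ended mid-protocol: a handle was left open.
    return None if state == "closed" else len(calls)
```

So `check_trace(["open", "read", "close"])` passes, `["read"]` fails at index 0, and `["open"]` fails at the end (unclosed handle). A model checker does the same walk over every path in the control-flow graph instead of over one concrete trace.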
WHY:
Systems, Interfaces, COTS
OO Metrician’s Assistant
Lateef: Critiquing of use cases and beyond. Application: attitude control system; OO models
Etzkorn: semantic metrics (ignore syntactic variability). Application: works on OO designs before code exists. Scale: (anyone contemplating working with 20 CD-ROMs’ worth of data does not fear scale…)
WHAT: Help measure/validate OO designs
HOW:
Lateef: OO-specific risks; prioritization of critical (essential) use cases, etc.; OO metrics; BBN model to combine multiple metrics
Etzkorn: leverage significant prior work on program understanding (conceptual graphs for knowledge representation); empirical and comparative evaluations
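Combining several OO metrics into one risk indicator, as the BBN model does, can be approximated by a much simpler weighted score (purely illustrative: the metric names and weights below are invented, and a real belief network combines evidence probabilistically rather than linearly):

```python
def risk_score(metrics, weights):
    """Weighted average of metrics already normalized to [0, 1]
    (1.0 = worst observed); a crude stand-in for a Bayesian belief
    network that propagates metric evidence to a risk estimate."""
    total = sum(weights.values())
    return sum(weights[m] * metrics[m] for m in weights) / total

# Hypothetical normalized metric readings for one class.
metrics = {"coupling": 0.8, "cohesion_lack": 0.4, "depth": 0.2}
weights = {"coupling": 3, "cohesion_lack": 2, "depth": 1}
score = risk_score(metrics, weights)  # lands in [0, 1]
```

The appeal of a BBN over a fixed weighted sum is that it handles missing metrics and expresses uncertainty in the result, not just a point estimate.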
WHY:
Early days of NASA mission use of OO; perception of increase – correct?
Obstacles
Information access: Foreign Nationals & ITAR, etc. “Sanitizer” helps, but sometimes discards information needed for study; MDP helps
Challenge: Getting enough case studies
SARP quarterly reporting more onerous than NSF’s (but not DARPA’s or DoD’s): can be discouraging/daunting to typical academics
However, NASA cares: university thrilled by a visit from NASA!
Ominous trend: level of documentation in average project is decreasing.
A forthcoming NPR may remedy this, but its effect is still several years in the future; also, beware of old and unaudited information
*SARP complaint line
Publishability a problem? Apparently not.
Credibility: Nicely documents past successes, key to gaining interest of future customers!
SARP values them: Called for in quarterly report; published impact ratings for venues (conferences, journals); AWARD!
Outreach: via SARP web site!
SARP: encourages empiricism* (application and evaluations), key to (e.g.) experience papers. Note: may need longer lead time for release approval. NASA: a source of real problems for researchers
QUESTION: Is there a NASA strategy for (e.g.) conference participation?
*Empirically based work: new territory for some. Recommended reading: Basili, Selby & Hutchens, TSE, 1986
Aunt Aardvark's Advice Column*:“Working with projects”
Initially: $: Pay for the time of project personnel. Gets you the data/insight/feedback. Gets you an inside advocate. However, you’re often their lowest-priority task
Having established credibility: Offer technique to project, in return for their supply of, and guidance on, their data
Interactions: Make few rather than many queries of project; be willing to wait; formulate query to be of interest (address significant issues)
Free: Offer free for experimentation / limited time use
Pitfalls: Don’t force your process on the user(s). Anticipate (& prepare for) how your tool will be used (in ways you could hardly imagine!!!)
*SARP quarterly newsletter
QUESTIONS
Is it better to solve a new problem than to improve upon an old solution?
“Lines in the sand” – is anyone going the same way?
What NEW tools would we wish for?
It’s like asking which requirements we’re missing…
What is the SARP product (or products)?
Should we have a common goal (or few goals)?
E.g., multiple SARP projects working towards a single tool/process/course/…
SARP is for reporting out – what about reporting in to SARP?