
Journal of Applied Operational Research (2018) Vol. 10, No. 1, 53–65
ISSN 1735-8523 (Print), ISSN 1927-0089 (Online)

www.orlabanalytics.ca

Improving admissions processes through value focused thinking

Joshua D. Deehr 1, Christopher M. Smith 1, and Raymond R. Hill 1

1 Air Force Institute of Technology, Wright-Patterson AFB, OH, USA

Received 14 August 2018; Accepted 28 October 2018

Abstract—Colleges and universities have the ability to be selective in who they choose to admit, but how they do this has recently come under scrutiny. The purpose of this research is to show how decision analysis can aid with these processes. Value focused thinking (VFT) places criteria in a hierarchical structure and quantifies them with criteria measurements, producing a decision model. This research compares two case studies of college admissions selection criteria. VFT is applied to develop a valid admissions decision model, and a Monte Carlo simulation is used to examine the accuracy of the models. Our research finds that the university with a well-defined decision model chose candidates with 86% accuracy, while the university without a well-defined decision model was only 55% accurate. This research shows how VFT can improve an organization's personnel selection process by accounting for what is important to the universities.

Published online 21 November 2018

Copyright © ORLab Analytics Inc. All rights reserved.

Keywords: Admissions processes; Colleges; Decision analysis; Monte Carlo simulation; Personnel selection; Value focused thinking

Introduction

Colleges and universities have the ability to be selective in which students they choose to admit into their ranks; however, the ways in which they do so have recently come under intense scrutiny (Camara & Kimmel, 2005). As admission into higher education is perceived as the path to social and economic success in America, the manner in which a university chooses its students is often a contested topic. Inevitably, any debate on the admissions process revolves around the systems used to delineate applicants, as these procedures are subjective and at times not explicit (Camara & Kimmel, 2005). Decision analysis can aid with these difficult processes by helping to remove some of the ambiguity.

Decision analysis is a philosophy and a socio-technical process for creating value for decision makers (DMs) facing difficult decisions involving multiple stakeholders, multiple (possibly conflicting) objectives, complex alternatives, important uncertainties, and significant consequences (Parnell et al., 2013). The multi-criteria nature of personnel selection makes multi-criteria decision making (MCDM) methods ideal for coping with these complexities: those involved in personnel selection consider many criteria at once, with various weights and thresholds, and MCDM methods can reflect, to a very satisfactory degree, the often vague preferences of the DMs (Kelemenis & Askounis, 2010). One form of MCDM is Multi-Objective Decision Analysis (MODA), which allows the selection of a best solution from a pool of available alternatives through value trade-offs and factor weighting. MODA makes disparate alternatives comparable by converting all criteria into common value scores. Too often, decisions are reached by starting with the problem's alternatives and then determining the objectives used to evaluate those alternatives. This is alternative focused thinking, and it is reactive rather than proactive (Keeney, 1994). Value focused thinking (VFT) defines values and objectives before identifying the alternatives. VFT helps uncover hidden objectives and leads to more productive information collection (Keeney, 1994). This type of thinking is what makes VFT so successful for personnel selection: it compels DMs to determine what characteristics they are looking for in applicants before the selection process actually starts.

Managing a large volume of applications, typically on a limited budget, forces many universities to find simple ways to sort applicants and decide on them. As a result, many public universities rely on basic metrics such as SAT scores or GPAs to filter applicants without reviewing entire application packets, potentially missing excellent candidates. Private universities extend the use of metrics to also dissect essays and conduct interviews, selecting students through processes that can be seen as extremely subjective (Camara & Kimmel, 2005).

Correspondence: Christopher M. Smith, Air Force Institute of Technology, Department of Operational Analysis, 2950 Hobson Way, Wright-Patterson AFB, OH 45433-7765 USA. E-mail: [email protected]


Rothstein (2004) conducted a study at the University of California that looked at over 22,000 applicants who applied, were admitted, and enrolled at the eight UC campuses to determine whether the SAT was a valid predictor of academic performance. The study found that the SAT should be assigned less importance than was implied by selection-biased models, and that background variables provided much of the information captured by SAT scores.

Some colleges have looked to switch from standard aptitude tests in favor of achievement-based ones, so as not to discriminate against students who have been subjected to lesser levels of education. Chile switched from aptitude tests to achievement tests in 2004. Subsequent studies found that achievement tests alone did little to change the outcome, because students from affluent families had access to more achievement opportunities. The Chilean experience showed the need to avoid simplistic approaches in the pursuit of higher education (Koljatic et al., 2013).

The Department of Education, Office for Civil Rights (OCR) outlined several key principles in The Use of Tests When Making High-Stakes Decisions for Students, arguing that looking at GPAs and standardized tests alone was not acceptable for college admissions criteria. The OCR believed that institutions need a clear admissions policy statement and that the criteria used in the selection process must be directly tied to that policy statement. The OCR also believed that institutions should empirically derive the weights assigned to these selection criteria (OCR, 2000).

There has also been research focused on improving the selection process. Niessen & Meijer (2017) demonstrated the applicability of broader criteria upon which to base student selection, while Mulvenon et al. (1999) and Biró (2015) explored different methods for strengthening the value of test scores in the application process. Pennebaker et al. (2014) explored how the words in an application essay might translate to higher grades later on, finding that the use of articles and prepositions, rather than auxiliary verbs, pronouns, and adverbs, was associated with better subsequent performance. Analysis like this might allow schools to better select quality candidates who will likely perform well in school.

Methodology

VFT is a strategic, quantitative approach to decision making that uses specified objectives, evaluation measures, and value hierarchies (Kirkwood, 1997). Multiple Criteria Decision Analysis (MCDA) is another popular approach to solving difficult decisions. Among these approaches, VFT has been used previously in a number of personnel selection areas. Malyemez (2011) developed a VFT model to help the Turkish Air Force better select officers to enter its education system. Dees et al. (2013) developed a VFT model looking at the "whole soldier" to give the Army a measure of the quality of junior enlisted soldiers, in order to highlight them for more focus and advanced leadership opportunities.

Because of these past successes, VFT is a prime method for problems that involve selection criteria or rank ordering of alternatives. Our research methodology utilized a ten-step VFT model (see Figure 1).

Step 1: Problem Identification

The first step in helping the DM is to identify the problem, referred to as framing the problem or developing the frame. The problem is framed to ensure the purpose, perspective, and scope are understood by all parties. Improper framing can cause one to not fully understand the problem, overlook key objectives, or fail to involve the right stakeholders, all of which can lead to a poor decision.

Step 2: Identify and Structure Objectives

Once the frame is clear, objectives are identified and structured into the value hierarchy. An objective is the specific goal being sought. Appropriate structuring of objectives is critical; otherwise, DMs will not have sufficient belief and confidence in the model. The objectives' structure is visualized through the value hierarchy, which places the overall, or strategic, objective at the top with all of the lower-tiered, or fundamental, objectives below. Objectives are decomposed until a single measurable objective remains.

Step 3: Measure the Achievement of Objectives

The value hierarchy is a qualitative model. To conduct quantitative analysis, attributes are assigned to the lowest level of objectives. Attributes, also called evaluation measures, determine the degree of attainment of an objective. Attribute ranges are determined by considering the possible values deemed within the acceptable region; these values create the endpoints. Evaluation measures should allow for an unambiguous rating: a person with infinite resources and instantaneous computational power could assign an accurate score.


Figure 1. 10 step VFT Model

Step 4: Single Attribute Value Functions

Single attribute value functions (SAVFs) are used to calculate individual criteria scores from the raw data. An SAVF can be exponential, linear, or categorical depending on the data.
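As an illustration, a normalized increasing exponential SAVF maps an attribute's range onto [0, 1], with the shape parameter rho chosen so that the mid-value point scores 0.5. The sketch below uses a hypothetical ACT-like range (10 to 36, mid-value 21); the bisection-for-rho helper is our own illustrative addition, not a procedure from the paper.

```python
import math

def exponential_savf(x, low, high, rho):
    """Increasing exponential SAVF: maps a raw score x in [low, high]
    onto a value in [0, 1], with curvature controlled by rho."""
    return (1 - math.exp(-(x - low) / rho)) / (1 - math.exp(-(high - low) / rho))

def solve_rho(low, high, mid):
    """Bisect for the rho that places a value of 0.5 at the mid-value point
    (valid when mid lies below the midpoint of [low, high], a concave curve)."""
    lo, hi = 1e-6, 1e6
    for _ in range(200):
        m = 0.5 * (lo + hi)
        # the value at mid decreases as rho grows (curve flattens toward linear)
        if exponential_savf(mid, low, high, m) > 0.5:
            lo = m
        else:
            hi = m
    return 0.5 * (lo + hi)

rho = solve_rho(10, 36, 21)             # hypothetical ACT-like range and mean
value = exponential_savf(30, 10, 36, rho)
```

By construction the SAVF scores 0 at the low endpoint, 1 at the high endpoint, and 0.5 at the mid-value, which is all a DM needs to specify to pin down the curve.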

Step 5: Multi Value Attribute Functions

The final step needed to calculate an alternative's score is to determine the multi-attribute value function (MAVF). For VFT this is done by multiplying each attribute's SAVF, vi(xi), by the weight wi that corresponds to that criterion xi, where i indexes the criteria. Weights are used to assign different preferences to the criteria and are determined by the DM. The general form of the MAVF is

V(x) = Σi wi vi(xi),

where the weights are normalized so that Σi wi = 1.
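Under those definitions, an alternative's overall value is a weighted sum of its single-attribute values. A minimal sketch, using two hypothetical linear SAVFs (a 0-5 GPA scale and a 10-36 ACT scale) rather than the paper's fitted exponential ones:

```python
def mavf(raw_scores, savfs, weights):
    """V(x) = sum_i w_i * v_i(x_i); the weights must sum to one."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must be normalized"
    return sum(w * v(x) for w, v, x in zip(weights, savfs, raw_scores))

# Hypothetical linear SAVFs over a 0-5 GPA scale and a 10-36 ACT scale
savfs = [lambda gpa: gpa / 5.0, lambda act: (act - 10) / 26.0]
weights = [0.6, 0.4]
score = mavf([4.0, 23], savfs, weights)   # 0.6*0.8 + 0.4*0.5 = 0.68
```

Because every attribute is first converted to a common 0-1 value scale, the weighted sum lets otherwise incommensurable criteria (GPA points versus ACT points) be traded off directly.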


Step 6: Alternative Generation and Screening

Alternative generation and screening is not needed for the college admissions personnel selection problem, as the alternatives are the applicants themselves.

Step 7: Alternative Scoring

The final alternative score is found by taking each alternative's raw scores and converting them to component value scores by applying the associated SAVFs. Then an overall score is found for each alternative by applying the weights developed in the MAVF to the component scores. Finally, the overall scores are rank ordered.

Step 8: Deterministic Sensitivity

After the alternatives are scored, analysis is conducted to ensure the alternative rankings are understandable and not misleading, and to identify any insights or improvements. This is done by examining the deterministic sensitivity of each alternative. The value breakout graph is one tool that can be used: it allows a quick and easy comparison of how each attribute affects the alternatives and how each alternative compares to the ideal, or Utopian, candidate.

Step 9: Sensitivity Analysis

Once the model is deemed acceptable, sensitivity analysis helps determine the impact on the rankings of alternatives of changes in the various assumptions of the model, specifically the weights. The weights represent the relative importance attached to each evaluation measure. By adjusting the weights, the model's resiliency to change can be tested.
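One common way to implement such a sweep, which we sketch here as an assumption about the mechanics rather than the paper's exact procedure, is to vary a single weight while rescaling the remaining weights proportionally so the vector still sums to one (the three starting weights are hypothetical):

```python
def adjust_weight(weights, i, new_wi):
    """Set weight i to new_wi, rescaling the other weights proportionally
    so the vector still sums to one."""
    rest = sum(w for j, w in enumerate(weights) if j != i)
    scale = (1.0 - new_wi) / rest
    return [new_wi if j == i else w * scale for j, w in enumerate(weights)]

# Sweep the first weight across several settings; alternatives would be
# re-scored and re-ranked at each setting to see where rankings flip.
base = [0.5, 0.3, 0.2]
swept = [adjust_weight(base, 0, w) for w in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

Proportional rescaling preserves the DM's relative preferences among the untouched criteria, so each rank flip observed during the sweep can be attributed to the one weight being varied.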

Step 10: Communicating Results

The final step is to effectively present the results to the DM.

Analysis

Overview

This research compares two case studies of college admissions selection criteria. The first university is non-competitive and the second is highly competitive. For this research, a non-competitive university is defined as one that admits greater than 90% of applicants, while a highly competitive university is defined as one that admits less than 10% of applicants. A selection pool of 8,000 "select" and 24,000 "non-select" applicants was generated based on the datasets provided by each university. VFT was applied to show how to create a valid admissions process model. The two universities' admissions documentation was used to gather information on the hierarchies, SAVFs, MAVFs, and weights. Value hierarchies were determined based on the colleges' admission requirements. SAVFs were created using the normalized exponential function with the mid-value point being the mean of selected applicants. MAVFs and weights were determined by the ratios of the SAVFs' correlation coefficients. Sample applicants were drawn from the generated pool and examined to see how accurately each model could select from the correct pool of applicants.

Data

Cleaning the original datasets was required to make them usable for VFT. All biographical information was removed, and attributes were reduced to ensure independence. It was assumed that gender, race, and declared major had no significant impact on admittance.

The competitive university's dataset consisted of approximately 13,000 observations and 26 possible attributes. The observations represented the students admitted to the university over the last 10 years, and the attributes were the recorded data of the applicants. The data were reduced to an independent set. For example, the high school class rank constructed score was kept in favor of the raw overall class rank and the applicant's high school class size, since the constructed score was based on the other two. Finally, all students who had only ACT scores were removed (in favor of those who had SAT scores) to eliminate the need for a scale relating ACT and SAT scores. This reduced the initial dataset from 26 attributes to seven. The final dataset contained approximately 7,800 observations.

The non-competitive university's dataset had approximately 3,400 observations and nine attributes. The observations represented the admitted students, and the attributes were the data the admissions team tracked. The data were reduced so the attributes were an independent set. Additionally, 400 observations were removed due to missing data, leaving the final dataset with approximately 3,000 observations.


Non-Competitive University

The first case study looked at the admissions data for a specific major from a large, non-competitive public university. This university has a lenient admissions policy, looking only at GPA and ACT scores. The university also tracks which high school each applicant attended to assess the quality of education the applicant potentially received. From this, a simple value hierarchy was created (see Figure 2). The selection criteria are broken into four fundamental objectives: an applicant's academic performance (GPA), academic potential (ACT composite score), analytic capability (ACT math score), and the quality of their education (high school rank). ACT scores are used for this university because it is a Midwestern school, and the ACT is the preferred standardized test that applicants submit.

Figure 2. Value Hierarchy for Non-Competitive University

All evaluation measures (see Table 1) were based on constructed scales, and for all measures a higher value was preferred. A constructed scale is created when no pre-existing natural scale exists, is available, or is appropriate. For example, the natural scale would be how many questions a person answered correctly on the ACT, while the constructed scale is the composite score. The grade point average (GPA) is the GPA at the time the application was submitted; its range was from zero to five to account for those high schools that used an extended scale. The ACT composite and math scores were the applicant's scores on the ACT test. The range of this attribute was from 10 to 36, which was deemed to be the feasible region. Finally, the high school rank score was the score the university determined for each high school an applicant attended, based on criteria such as student-to-teacher ratio, graduation rate, and standardized test scores.

SAVFs were calculated for each attribute. Admissions documentation showed the university had no predetermined weighting for these criteria. Given this, and without access to the DM, exponential SAVFs were used, with each attribute's mid-value set to the mean of the respective attribute from the original dataset (see Table 2).

Table 1. Evaluation Measures for Non-Competitive University

Table 2. Weights for Non-Competitive University


Figure 3. SAVF for Non-Competitive University

To determine the MAVF, correlation coefficients were used to assess how much each attribute contributed to the model, and weights were calculated from their ratios. The global weights were 40.9%, 15.5%, 27.8%, and 15.8% for the fundamental objectives (see Figure 3). The remaining steps of VFT are covered later in the research.
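The ratio-of-correlations weighting described above can be sketched as follows; the correlation values here are hypothetical placeholders, not the ones behind the paper's reported weights:

```python
def weights_from_correlations(corrs):
    """Weight each attribute by its correlation coefficient's share of the
    total (absolute values, so a negative correlation cannot flip a weight
    negative)."""
    total = sum(abs(c) for c in corrs)
    return [abs(c) / total for c in corrs]

# Hypothetical correlations between four attributes and the outcome
weights = weights_from_correlations([0.52, 0.20, 0.35, 0.21])
```

The resulting vector is normalized by construction, so it can drop straight into the MAVF weighted sum without a separate normalization step.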

Competitive University

For the second case study, admissions data from a small, highly competitive, liberal arts university were used. Knowing that the university bases the merits of an applicant on their scholastic, athletic, and leadership accolades, the attributes were arranged into an appropriate value hierarchy (see Figure 4). Scholastic ability was determined to measure an applicant's mental capacity through their standardized tests (SAT verbal and math) and their academic performance (class rank constructed score). As discussed in the data section, SAT scores were retained and ACT-only applicants removed for simplicity; this university is an east coast school where the preferred standardized test is the SAT, and fewer than four percent of applicants submitted ACT scores. The athletic fundamental objective was determined by the applicant's athletic capability (athletic constructed score) and their fitness level (university-administered physical fitness test). Finally, leadership was determined by their earned achievements (leadership constructed score) and the extracurricular activities the applicant participated in (extracurricular constructed score).


Figure 4. Value Hierarchy for Competitive University

All evaluation measures (see Table 3) are based on constructed scales, and for all measures a higher value is preferred. The SAT verbal and math scores were the scores the applicants received when they took the SAT; these range from a low of 400 to a high of 800. The remaining constructed scales were determined by the university and ran from a low of 200 to a high of 800. The class rank constructed score is a scale based on the applicant's high school class rank, their GPA, and the size of their graduating class. The athletic constructed score is based on how many sports an applicant participated in, for how many years, whether they were on the varsity team, and whether they placed at any regional, state, or national level events. The fitness test score is a constructed score based on how an applicant scored on six events (basketball throw, pull-up, shuttle run, sit-ups, push-ups, and one-mile run) during a university-administered test. The leadership score is a constructed score based on the applicant's leadership positions held in high school and community organizations, while the extracurricular constructed score is based on the number of extracurricular events the applicant participated in and the number of years they participated.

Table 3. Evaluation Measures for Competitive University

SAVFs were calculated for each of the seven attributes. Due to the limits of the provided administration documentation, and without access to the DM, exponential SAVFs were determined to be the best fit. All attributes have increasing SAVFs (see Figure 5), with the mid-values set to the mean value of each attribute from the original dataset.

When determining the MAVF, administration documents provided insight that the fundamental objective weights for scholastics, athletics, and leadership were 60%, 25%, and 15%, respectively. Correlation coefficients were used to determine how much each attribute contributed to the model, and local weights were based on their ratios (see Table 4).

Table 4. Weights for Competitive University
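Local attribute weights combine with the 60/25/15 fundamental-objective weights multiplicatively to give each attribute's global weight. A sketch of that roll-up follows; the local weights shown are hypothetical placeholders, not the paper's Table 4 values:

```python
def global_weights(fundamental, local):
    """Global weight of an attribute = its objective's weight times its
    local weight within that objective."""
    return {attr: fw * lw
            for obj, fw in fundamental.items()
            for attr, lw in local[obj].items()}

fundamental = {"scholastic": 0.60, "athletic": 0.25, "leadership": 0.15}
local = {  # hypothetical local weights; each objective's weights sum to 1
    "scholastic": {"sat_verbal": 0.5, "sat_math": 0.3, "class_rank": 0.2},
    "athletic":   {"athletic_score": 0.6, "fitness_test": 0.4},
    "leadership": {"leadership_score": 0.7, "extracurricular": 0.3},
}
gw = global_weights(fundamental, local)
```

Because each objective's local weights sum to one, the global weights automatically sum to one as well, keeping the MAVF normalized.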


Figure 5. SAVF for Competitive University

Monte Carlo Simulation

A known distribution was fit to each of the attributes, providing the ability to easily generate new data points. Using R, 8,000 random "select" applicants (shown by the solid blue line) were generated using normal distributions fit to the provided datasets (represented by the histograms). One issue with both datasets was that they contained only data from applicants already selected to attend the university. To address this, the means of the normal distributions used to generate the "select" applicants were shifted approximately 10-15% to the left to create "non-select" applicants (shown by the dashed red line) that were just slightly worse than their counterparts. With R, 24,000 "non-select" applicants were generated.


Figure 6. Non Competitive Attribute Distributions

Figure 7. Competitive Attribute Distributions


A simulation was developed in which the 8,000 "select" and 24,000 "non-select" applicants were combined, and 4,000 applicants were randomly selected as a sample year group (see Figure 6 and Figure 7). The appropriate VFT model was applied, the MAVF scores were calculated, and all of the applicants were rank ordered. The top 1,000 applicants were selected and compared to see how accurately the "select" applicants were chosen. If applicants were picked completely at random to fill the 1,000-seat class, the probability that the applicants would all be chosen from the "select" group was

P(t) = 0.0156.

Each model was run through 1,000 replications of a Monte Carlo simulation. The non-competitive university's model selection accuracy rate was 55.16% (see Figure 8), just a bit better than flipping a coin, but still better than choosing completely at random (1.56%). No additional analysis was conducted because it was determined that this decision model would not be able to yield any significant insights. The competitive university's model correctly picked applicants from the "select" pool 85.71% of the time (see Figure 8), far better than the random selection probability of 1.56%.
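The paper's simulation was built in R; a rough Python equivalent of one replication is sketched below. The score distributions (normal, with a hypothetical mean and spread, and a 12% downward mean shift for "non-selects", within the paper's stated 10-15% range) are assumptions for illustration, and far fewer replications are run here than the paper's 1,000:

```python
import random

def run_trial(rng, n_select=8000, n_nonselect=24000,
              n_sample=4000, n_seats=1000, shift=0.12):
    """One replication: draw overall scores for both pools, sample a
    4,000-person year group, admit the top 1,000 by score, and return the
    fraction of admitted applicants that came from the 'select' pool."""
    mu, sigma = 0.6, 0.1  # hypothetical MAVF score distribution
    pool = ([(rng.gauss(mu, sigma), True) for _ in range(n_select)] +
            [(rng.gauss(mu * (1 - shift), sigma), False)
             for _ in range(n_nonselect)])
    year_group = rng.sample(pool, n_sample)
    admitted = sorted(year_group, reverse=True)[:n_seats]
    return sum(1 for _, is_select in admitted if is_select) / n_seats

rng = random.Random(42)
accuracy = sum(run_trial(rng) for _ in range(20)) / 20
```

Because the two score distributions overlap, even a perfect ranking of MAVF scores admits some "non-selects", which is exactly the lower-bound effect the paper discusses next.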

Figure 8. Monte Carlo Simulation

Table 5. MAVF Scores of Small Sample


The 85.71% accuracy for the competitive university is likely a lower bound. This is because some of the "non-selects" that were chosen were better applicants than the "selects" that were not chosen: the normal distributions used to create the "non-selects" could sometimes generate values above those of the "selects" (see Figure 6 and Figure 7). To further illustrate this, a single instance of the model was run and five "non-selects" chosen to attend the university were compared to five "selects" that were not chosen. Table 5 shows the MAVF score for each of the ten potential applicants. The "non-select" applicants clearly scored better than their counterparts, and therefore were better choices.

This was further highlighted by using R to create a value breakout graph of the applicants' MAVF scores (see Figure 9). Figure 9 shows the deterministic sensitivity of each alternative, allowing a quick and easy comparison of how each attribute affects the alternatives. All of the "non-selects" scored considerably higher in the most heavily weighted attribute, the SAT verbal score, than all of the applicants from the "select" population.

Figure 9. Value Breakout Graph of Small Sample

Finally, sensitivity analysis was conducted to determine how sensitive the rankings of the alternatives were to changes in the various assumptions of the model, specifically the weights (see Figure 10). The weights represent the relative importance attached to each evaluation measure. Figure 10 shows that the "non-selects" (solid lines) were resilient to changes in the weighting across all seven attributes compared with the "selects" (dashed lines). Every attribute weight must be at least doubled (or halved, in the case of the verbal score) before any "select" becomes a better applicant than the "non-selects".
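The one-way weight sensitivity behind Figure 10 can be sketched as follows: sweep a single attribute's weight over [0, 1], renormalize the remaining weights proportionally so they still sum to one, and record where the applicants' scores cross. All weights and scores here are illustrative, not the study's elicited values.

```python
def mavf(values, weights):
    """Additive MAVF: weighted sum of normalized attribute scores."""
    return sum(weights[a] * values[a] for a in weights)

def sweep_weight(attr, base_weights, applicants, steps=101):
    """Vary attr's weight over [0, 1], renormalizing the other weights
    proportionally, and return (weight, scores-by-applicant) pairs."""
    others = [a for a in base_weights if a != attr]
    rest = sum(base_weights[a] for a in others)
    results = []
    for i in range(steps):
        w = i / (steps - 1)
        weights = {attr: w}
        for a in others:          # keep the remaining weights in proportion
            weights[a] = base_weights[a] * (1 - w) / rest
        results.append((w, {n: mavf(v, weights) for n, v in applicants.items()}))
    return results

base = {"sat_verbal": 0.40, "sat_math": 0.35, "gpa": 0.25}
applicants = {
    "non-select": {"sat_verbal": 0.90, "sat_math": 0.60, "gpa": 0.70},
    "select":     {"sat_verbal": 0.50, "sat_math": 0.80, "gpa": 0.85},
}

# Largest verbal weight at which the "select" still outranks the "non-select"
flip = max(w for w, s in sweep_weight("sat_verbal", base, applicants)
           if s["select"] > s["non-select"])
print(f"'select' outranks 'non-select' only for verbal weight <= {flip:.2f}")
```

Plotting each applicant's score against the swept weight, one line per applicant, produces the crossing-line chart of Figure 10; a ranking is robust when the lines do not cross near the baseline weight.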


Figure 10. Sensitivity Analysis of Small Sample

Comparison

Analysis of the two case studies showed how a well-defined decision model can improve an organization's personnel selection. The less defined decision model from the first case study placed the "select" applicants correctly just over half of the time, while the second case study's model did so over 85% of the time. The transition from the hierarchy in the first case study to the hierarchy in the second illustrates how an organization could update its objectives to improve its quality of selection.

Additionally, the competitive university's data supported further deterministic and sensitivity analysis if desired. This provided the DM with more detailed information about how an applicant scored and how resilient those scores were to changes in weighting, or value trade-offs. Such transparency benefits the selection process because it allows the choices to be examined and defended when necessary.


Conclusion

This research showed how VFT can improve an organization's personnel selection process. First and foremost, VFT allows organizations to codify their process so that it becomes repeatable. The key is ensuring that an accurate picture of the organization's goals is captured in the value hierarchy. A well-defined hierarchy ensures that all objectives important to the organization are accounted for and that each has associated attributes measuring how well alternatives meet the criteria. Our two case studies showed that a university with a well-defined decision model could correctly order alternatives at least 85% of the time, while the university that lacked a well-defined model was only 55% accurate. Second, VFT enables deterministic sensitivity and sensitivity analysis of the alternatives. This is particularly useful when working with a small group, when a smaller sample warrants additional consideration, or when a group of people is selecting students, because transparency leads to better discussion and coordination within the team. Next, VFT provides a score for each alternative rather than simply rank ordering them, giving the DM the ability to examine the difference in magnitude between alternatives or students. Finally, this research captured the process that the universities currently practice. As discussed in the literature review, universities need to take a hard look at how a "successful" student is defined when creating their model to ensure it accurately reflects their organizational goals.

Acknowledgments—This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The views expressed in this article are those of the authors and do not reflect the official policy or position of the Air Force Institute of Technology, United States Air Force, Department of Defense, or the US Government.

References

Biró, P., & Kiselgof, S. (2015). College admissions with stable score-limits. Central European Journal of Operations Research, 23(4), 727-741.

Camara, W., & Kimmel, E. (2005). Choosing students: Higher education admissions tools for the 21st century. Mahwah, NJ: Lawrence Erlbaum Associates.

Dees, R. A., Nestler, S. T., & Kewley, R. (2013). WholeSoldier performance appraisal to support mentoring and personnel decisions. Decision Analysis, 10(1), 82-97.

Keeney, R. (1994). Creativity in decision making with value-focused thinking. Sloan Management Review, 35(4), 33-41.

Kelemenis, A., & Askounis, D. (2010). A new TOPSIS-based multi-criteria approach to personnel selection. Expert Systems with Applications, 37, 4999-5008.

Kirkwood, C. (1997). Strategic decision making. Belmont, CA: Wadsworth Publishing Company.

Koljatic, M., Silva, M., & Cofre, R. (2013). Achievement vs aptitude in college admissions: A cautionary note based on evidence from Chile. International Journal of Educational Development, 33, 106-115.

Malyemez, C. (2011). Multi criteria decision support model for the Turkish Air Force personnel course/education planning system (No. AFIT-OR-MS-ENS-11-12). Master's thesis, Air Force Institute of Technology, Wright-Patterson AFB, OH.

Mulvenon, S. W., Stegman, C., Thorn, A., & Thomas, S. (1999). Selection for college admission: Refining traditional models. Journal of College Admission, 162, 20-27.

Niessen, A. S. M., & Meijer, R. R. (2017). On the use of broadened admission criteria in higher education. Perspectives on Psychological Science, 12(3), 436-448.

Parnell, G., Bresnick, T., Tani, S., & Johnson, E. (2013). Handbook of decision analysis. Hoboken, NJ: Wiley.

Pennebaker, J. W., Chung, C. K., Frazee, J., Lavergne, G. M., & Beaver, D. I. (2014). When small words foretell academic success: The case of college admissions essays. PLOS ONE, 9(12), e115844.

Rothstein, J. (2004). College performance predictions and the SAT. Journal of Econometrics, 121(1), 297-317.

U.S. Department of Education, Office of Civil Rights (2000). The use of tests as part of high-stakes decision making for students: A resource guide for educators and policymakers. Washington, DC.