DEGREE PROJECT IN INDUSTRIAL ENGINEERING AND MANAGEMENT, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2018
A Pricing Model for AIaaS
An analysis of a new AI personalization product within the edtech space
ZENJA JEFIMOVA
SOFIE NABSETH
KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF INDUSTRIAL ENGINEERING AND MANAGEMENT
A Pricing Model for AIaaS
An analysis of a new AI personalization product within the edtech space
by
Zenja Jefimova and Sofie Nabseth
Master of Science Thesis INDEK 2018:331
KTH Industrial Engineering and Management
Industrial Management
SE-100 44 STOCKHOLM
Master of Science Thesis INDEK 2018:331
A Pricing Model for AIaaS
An analysis of a new AI personalization
product within the edtech space
Zenja Jefimova
Sofie Nabseth
Approved
2018-06-01
Examiner
Gregg Vanourek
Supervisor
Terrence Brown
Commissioner
Sana Labs AB
Contact person
Joel Hellermark
Abstract
As pricing is vital to an organization's marketing strategy, it is a significant area to consider for companies offering new products that provide Artificial Intelligence as a Service (AIaaS). The purpose of this study was to investigate possible pricing models for an AIaaS product. The study was delimited to the edtech industry. The main research question investigated was "What pricing model should an AI-company have for its B2B personalization product to correspond to the value delivered by it?". The sub-research questions concerned how perceived value can be related to price, what factors the organization should consider for the pricing model, and the implications of implementation.
The exploratory research was carried out through a literature review, a survey applying Van Westendorp's Price Sensitivity Meter, and in-depth interviews to gather qualitative data. The quantitative results showed that price sensitivity depends on the number of monthly active users a platform has, with a negative relationship between the number of monthly active users and the price respondents are willing to pay per learner. The qualitative results showed that perceived value depends, among other factors, on which segment the buyer belongs to. The primary results were discussed against the findings from the literature review, which mainly concerned pricing models for Software as a Service (SaaS) products, resulting in a proposed pricing model for AIaaS providers.
The conclusion of the study is that pricing an entirely new product is complicated because the buyer does not know the value of the product. Moreover, there is no single value that can be quantified and translated into a price; the price must be adjusted to each segment's perceived value. The pricing model presented accounts for the adjustable variables an AIaaS provider needs to consider before determining a price.
Key-words: AI, AIaaS, edtech, education technology, personalization, price, pricing, pricing strategies, pricing models, pricing tools, pricing AIaaS, software pricing, pricing for new products, value, value adding
Abbreviations
AIaaS Artificial Intelligence as a Service
AIP Artificial Intelligence Provider
B2S Business to School
Edtech Educational Technology
MAU Monthly Active User
OCP Online Course Provider
Definitions
Edtech: “the study and ethical practice of facilitating learning and improving performance by
creating, using, and managing appropriate technological processes and resources” (Richey, et
al., 2008).
AI: “the study of how to make computers do things at which, at the moment, people do
better” (Rich & Knight, 1991). This is a definition that will always be relevant, not only in the
present but also in the future (Ertel, 2009).
Table of Contents
1. INTRODUCTION
1.1 BACKGROUND
1.2 COMMISSIONER
1.3 PROBLEMATIZATION
1.4 PURPOSE
1.5 RESEARCH QUESTIONS
1.6 EXPECTED CONTRIBUTION
1.7 DELIMITATIONS
1.8 DISPOSITION OF THESIS
2. LITERATURE REVIEW
2.1 VALUE AND ITS DEFINITION
2.1.1 Customer Lifetime Value
2.1.2 Network Effects
2.2 THE IMPACT OF PRICE
2.3 STRATEGY, MODEL AND TOOL AS A FUNNEL OF PRICING
2.4 PRICING STRATEGIES
2.4.1 Skimming
2.4.2 Penetration
2.4.3 Freemium
2.4.4 Price Leadership
2.5 PRICING MODELS
2.5.1 Software Pricing
2.5.2 Pricing for New Products
2.5.3 Performance Based Pricing
2.6 PRICING TOOLS
2.6.1 Van Westendorp Price Sensitivity Meter
2.6.2 Conjoint Analysis
3. METHOD
3.1 RESEARCH DESIGN
3.2 DATA COLLECTION
3.2.1 Quantitative Sampling through a Survey
3.2.2 Qualitative Sampling through In-Depth Interviews
3.3 APPLICATION OF LITERATURE AND THEORY
3.3.1 Theory Bits
3.3.2 Van Westendorp Price Sensitivity Meter
3.3.3 Value Determination Inspired by CBC
3.3.4 Regression Analysis
3.3.5 Thematic Analysis
3.4 QUALITY OF SCIENTIFIC RESEARCH
3.4.1 Reliability
3.4.2 Validity
3.4.3 Source Criticism
3.5 ETHICS
3.6 AI FOR EDUCATION
4. RESULTS AND ANALYSIS
4.1 SURVEY RESPONSES
4.2 VAN WESTENDORP PRICE SENSITIVITY METER
4.3 ANALYSIS OF QUANTITATIVE RESULTS
4.3.1 MAUs
4.3.2 Price Span
4.3.3 Cloud and IT Costs
4.3.4 In-house Development Costs
4.4 PRODUCT PREFERENCES
4.5 VALUE PERCEPTION
4.5.1 KPIs
4.5.2 Price Span
4.5.3 Additional Services
4.6 EMERGED THEMES
4.6.1 Price Based on Segment
4.6.2 Issues with AI in Education
5. DISCUSSION
5.1 PRODUCT PREFERENCES
5.2 VALUE PERCEPTION
5.2.1 KPIs
5.2.2 Price
5.2.3 Additional Services
5.3 DISCUSSION ON EMERGED THEMES
5.3.1 Segments
5.3.2 Issues with AI in Education
5.4 PROPOSAL OF A PRICING MODEL FOR AIAAS
5.5 SUSTAINABILITY
5.6 ETHICAL IMPLICATIONS
5.7 SUMMARY OF FINDINGS
6. CONCLUSION
6.1 MAIN FINDINGS
6.2 CONTRIBUTION
6.3 LIMITATIONS
6.4 FUTURE RESEARCH
REFERENCES
APPENDICES
APPENDIX I - VAN WESTENDORP PRICE SENSITIVITY METER
APPENDIX II - RESPONDENTS AND POSITION AT COMPANY
APPENDIX III - STRUCTURED INTERVIEW QUESTIONS AND SELECTIONS
List of Figures
Figure 1 The authors’ interpretation through a visualization of the different pricing
levels based on literature (Spencer, 2009; Brenner, 2016; Verbrugge, 2016;
Mintzberg, et al., 2003)
Figure 2 Differences in the product development process between cost based and value
based pricing (Harmon, et al., 2004)
Figure 3 Value delivery and value extraction (Simon, et al., 2003)
Figure 4 Pricing model parameters for software products (Lehmann & Buxmann, 2009)
Figure 5 Cumulants of Van Westendorp's PSM (Lipovetsky, 2006)
Figure 6 The acceptable price range determined by the two intersection points
(Esomar, 2015)
Figure 7 Proposed pricing model for AI personalization products
List of Tables
Table 1 Presentation of the questions asked in the web survey
Table 2 Presentation of the attributes and attribute levels used in the structured
interview questions
Table 3 Presentation of the semi-structured interview questions
Table 4 Explanations for why personalization does not apply to the respondent's
business
Table 5 The various price ranges in USD per student per month for each of the six
tier groups
Table 6 The number of times each attribute was available for choice as well as how
many times each attribute was chosen in absolute numbers and as a proportion
of the total times available
Table 7 The most apparent results from respondents’ product choices
List of Graphs
Graph 1 The number of survey respondents in each tier group (one respondent did not
answer this question)
Graph 2 Number of respondents who have considered personalizing their
education offering
Graph 3 Number of respondents who have considered personalizing their
education offering and have considered developing the offering in-house
Graph 4 The estimated fixed cost of developing corresponding personalization product
in-house
Graph 5 The monthly IT expenses of each respondent
Graph 6 The cumulative frequency of respondents up to USD 20,000 to show accepted
price range of USD 5,000 to USD 10,000
Graph 7 The cumulative frequency of the monthly price per MAU, calculated with the
lower bound of the MAU range given
Graph 8 The cumulative frequency of the monthly price per MAU, calculated with the
higher bound of the MAU range given
Graph 9 The cumulative frequency of the monthly price per MAU, calculated with the
median of the MAU range given
Graph 10 The scattered points of the natural logarithm of the number of MAUs against
the natural logarithm of the price willing to pay (expensive)
Graph 11 The scattered points for Tier 1-2 of the natural logarithm of the number of
MAUs against the natural logarithm of the price willing to pay (expensive)
Graph 12 The scattered points for Tier 3-6 of the natural logarithm of the number of
MAUs against the natural logarithm of the price willing to pay (expensive)
Graph 13 The median price willing to pay per tier group with trend lines for average of
cheap and too cheap as well as average of expensive and too expensive
Graph 14 A high and a low case of monthly fee for the different tier groups
Graph 15 The monthly IT expenses of each respondent in relation to the willingness to
pay
Foreword
This report was written during the first half of 2018 as the master's thesis of an MSc in
Industrial Engineering and Management at the Royal Institute of Technology (KTH) in
Stockholm, Sweden.
At the inception of the master's thesis, we sought to carry out the investigation in an
innovative area, which the commissioning company provided by being an AI startup within
the edtech space. Our initial excitement about startups and entrepreneurship quickly grew
into an equal enthusiasm for edtech and AI. We have truly learnt a lot, and we are very
thankful for this opportunity.
Acknowledgements
First of all, we would like to thank the commissioning company Sana Labs, for considering
our application and believing in us. More specifically, we would like to thank Joel Hellermark
who has been our supervisor during the thesis. Although time has been scarce, our discussions
have always been interesting and helpful for the research. Thank you for taking the time and
sharing your knowledge and input. Also, we would like to thank Anna Nordell for providing
data when needed, and motivation even more so.
Secondly, we would like to express our gratitude to our supervisor at KTH, Mr. Terrence
Brown. While time has been limited, we appreciate the time you spent with us discussing ideas
and guiding us through the process. Your insight has been valuable to our research and our
perception of it.
Lastly, we would like to thank all interviewees and survey respondents who took the time to
answer our questions. The three fields of AI, education and pricing have been turned inside
out in discussions between ourselves and the interviewees, without whom this research
could not have been performed.
Thank you for taking the time to make this research possible!
Zenja Jefimova & Sofie Nabseth
Stockholm, May 2018
1. Introduction
This chapter presents the background to the study, giving the reader insight into the area of study and its
importance. Some key concepts within the investigated industry are introduced, in addition to the commissioner
of the project. The identified gap is presented through the problematization, which is followed by the purpose of
the study and the research questions being investigated. The chapter concludes with the expected contribution,
the delimitations of the study, and the disposition of the thesis.
1.1 Background
No other marketing tool has a greater impact on sales than pricing. It is vital for a company to
decide what pricing strategy and model to adopt before the launch of a new product. Within B2B
and B2C pricing, there are generally three different models: cost based, value based, and
market based, where the latter is determined by competitors' prices. As a subset of
marketing, the goal of a pricing model is to set a price that reflects the monetary equivalent
of the value perceived by the customer, while meeting the company's return on investment and
profitability goals. Many pricing models have historically been cost based, which focuses on
short-term value for the user. Conversely, value based pricing focuses on creating long-term
value for the user, as it builds upon the customer's value perception. (Harmon, et al.,
2004; Kienzler & Kowalkowski, 2017)
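The three models can be made concrete with a minimal sketch. The functions and all numbers below are hypothetical illustrations of the logic described above, not formulas taken from the thesis or its sources.

```python
# Illustrative sketch of the three B2B/B2C pricing models described above.
# All figures are hypothetical and serve only to make the logic concrete.

def cost_based_price(unit_cost: float, markup: float) -> float:
    """Cost based: production cost plus a fixed markup."""
    return unit_cost * (1 + markup)

def value_based_price(perceived_value: float, value_share: float) -> float:
    """Value based: capture a share of the value the customer perceives."""
    return perceived_value * value_share

def market_based_price(competitor_prices: list, discount: float) -> float:
    """Market based: undercut the cheapest competitor by a discount."""
    return min(competitor_prices) * (1 - discount)

print(cost_based_price(unit_cost=2.0, markup=0.5))                # 3.0
print(value_based_price(perceived_value=10.0, value_share=0.25))  # 2.5
print(market_based_price([4.0, 5.0], discount=0.25))              # 3.0
```

Note that market based pricing presupposes observable competitor prices, which is precisely what is missing for a new AIaaS product on an oligopoly market.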
When an organization launches a new product, it can choose to set the price based on one of
these three pricing models; a market based pricing model is commonly used by software
companies (Lehmann & Buxmann, 2009). When software organizations started shifting from
perpetual license fees to subscription-based pricing, they also changed their business models
towards a faster time to value (Pettey, 2015). With the rise of software as a service (SaaS)
offerings brought on by global digitalization comes personalization (Accenture, 2018).
Personalization, in this study, means that individuals receive recommended content based on
previous clicks and preferences, as seen at companies such as Netflix, Instagram, Facebook,
and Amazon (Netflix, 2018; Shapira, 2013; Fortune, 2012). New pricing models are emerging
with the rise of personalized content, as companies can reap large profits from scaling up
sales through content recommendations to their users (Pettey, 2015).
Such recommendation systems are built from algorithms that identify certain attributes of the
user's liking and apply them to suggest similar products or services. The more people who use
the recommendation algorithm, the more data the algorithm collects, which allows it to create
even more specialized and customized recommendations. This effect is known as the data
network effect and reflects the fact that a recommendation algorithm becomes more valuable the
more data it can access (Lehmann & Buxmann, 2009). When one or several algorithms receive
such vast amounts of data that the recommendation algorithm improves, or is
"trained", this is referred to as machine learning (ML) (Marr, 2016). A subset of ML is deep
learning, in which algorithms inspired by the human brain try to replicate its neural
network in so-called artificial neural networks (LeCun, et al., 2015). Deep learning,
ML, and recommendation systems are all parts of Artificial Intelligence (AI) (Ertel, 2009). The
ability of a machine to learn and be smart is what distinguishes AI from other software
(Accenture, 2018; LeCun, et al., 2015).
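The core idea of such a recommender, ranking unseen items by similarity to items the user already liked, can be sketched in a few lines. The item names and attribute vectors below are invented for illustration; real systems learn these representations from data, which is exactly where the data network effect arises.

```python
# Minimal item-based recommendation sketch: suggest unseen items that are
# most similar to the items a user already liked. The catalog vectors are
# hypothetical "attribute" embeddings, invented for this toy example.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(liked, catalog, top_n=2):
    """Rank unseen catalog items by their best similarity to any liked item."""
    scores = {
        item: max(cosine(vec, catalog[seen]) for seen in liked)
        for item, vec in catalog.items()
        if item not in liked
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

catalog = {
    "algebra_1": [1.0, 0.0, 0.2],
    "algebra_2": [0.9, 0.1, 0.3],
    "history_1": [0.0, 1.0, 0.1],
}
print(recommend(liked={"algebra_1"}, catalog=catalog))  # ['algebra_2', 'history_1']
```

More usage data refines the vectors and similarities, so each additional user makes the recommendations better for everyone, the data network effect described above.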
An even more recent application of AI is within educational technology (edtech). This form of
personalization through AI and deep learning is known as adaptive learning. Adaptive learning
is a growing trend within edtech, and it challenges conventional classroom teaching by
providing the opportunity to capture specific information about each student's learning path
and to personalize content based on the student's previous knowledge and progress (Bughin, et al.,
2017). The goal of adaptive learning is "to deliver the right content, at the right time, in the
best way for each student" (Sana, 2018). In contrast, a standardized
approach could present the same material redundantly and disengage fast learners, or not
provide enough appropriate content and leave struggling students behind. When an
adaptive learning program was used for students struggling with remedial math at Arizona State
University, dropout rates went down by 7% and pass rates improved from
66% to 75% (Bughin, et al., 2017). With deep learning and techniques for tracking digital
interaction and analyzing time spent on a question, there are greater possibilities to
personalize teaching, in real time, to each student's specific needs and progress. In addition, it
enhances teaching effectiveness and efficiency as well as the opportunities to provide
education for all (Bughin, et al., 2017; Sana, 2018).
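A toy sketch can illustrate the "right content, at the right time" idea: track an estimated mastery score per topic and serve the weakest topic whose prerequisites are already mastered. The threshold, topics, and scores below are hypothetical, and this is a deliberately simplified stand-in, not Sana's actual deep learning approach.

```python
# Toy adaptive-learning selector: pick the next topic to practice as the
# weakest topic whose prerequisites are mastered. The mastery threshold,
# topics, and scores are hypothetical illustrations only.

def next_topic(mastery, prerequisites, threshold=0.8):
    """Return the lowest-mastery topic whose prerequisites all meet the threshold."""
    candidates = [
        topic for topic, score in mastery.items()
        if score < threshold
        and all(mastery[p] >= threshold for p in prerequisites.get(topic, []))
    ]
    return min(candidates, key=mastery.get) if candidates else None

mastery = {"fractions": 0.9, "decimals": 0.4, "percentages": 0.2}
prerequisites = {"percentages": ["fractions", "decimals"]}

# "percentages" is weakest but blocked by "decimals", so "decimals" is served next.
print(next_topic(mastery, prerequisites))  # decimals
```

A standardized curriculum would serve every student the same sequence; here the sequence is recomputed per student as mastery estimates update, which is the mechanism behind the engagement and pass-rate effects cited above.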
Currently, no previous research exists on the pricing of AI products offered by an external
provider, i.e. as a service (AIaaS). However, research does to some extent exist for
SaaS pricing (Lehmann & Buxmann, 2009; Chao, 2013; Harmon, et al., 2004), which resembles
AIaaS not only in its newness, value creation, and product type but particularly in its
network effects (Rouse, 2017). In addition to the research gap concerning pricing models for AI
products, there is also limited research on pricing models for new B2B products on
oligopoly markets (Kienzler & Kowalkowski, 2017), where competitive pricing models cannot
be adopted. Therefore, the scope of the study includes software pricing, pricing for new
products, and the value related to them.
1.2 Commissioner
The commissioner of this project is Sana Labs AB, from here on referred to as Sana, which
has assigned the authors to investigate the company's pricing model. Sana is an ML startup
offering personalization algorithms for online education platforms to provide more
engaging learning processes for the end users: the learners. The company was founded in 2016
by Joel Hellermark, who developed a set of algorithms that can be applied to all types of
learning content to find each student's optimal learning path. The company currently employs
several data scientists and AI researchers to further develop its offering through ML and deep
learning.
1.3 Problematization
Setting a price for a new kind of product is challenging, and the approach varies greatly
depending on whether the product is delivered to consumers or to businesses. With a new and
complex B2B product for which competitive products do not yet exist, knowing what price to
set, and how to set it, is difficult. As standalone AI personalization offerings are such new
products on an emerging market, neither customers nor the provider knows which price range is
realistic. In addition, the customer does not know what value the product has to the user,
making it even more challenging to translate value into price, while the provider has neither
a market-proven nor an established pricing strategy.
1.4 Purpose
The purpose of this exploratory research is to investigate the possible pricing models for an
AIaaS product based on a case within the edtech industry. Thereafter, the authors seek to
create a suitable pricing model for AI companies offering a new personalization technology
based on AI and deep learning algorithms.
1.5 Research Questions
The answer to the following main research question aims to fulfill the purpose of the study.
“What pricing model should an AI-company have for its B2B personalization
product?”
To answer the main research question, the following sub-research questions were formulated.
● What is the perceived value delivered by an AIaaS and how can it be determined and
mirrored to price?
● What factors should an AIaaS providing organization consider when determining a
pricing model?
● What are the main implications of implementing AIaaS within the edtech industry?
1.6 Expected Contribution
Under the field of industrial management lie marketing and entrepreneurship, where pricing is one of the most effectively used marketing tools (Harmon, et al., 2004). This study expects to contribute within the field of pricing models for new AI products, such as AI recommendation or personalization offerings, where standardized pricing strategies are not yet established. There exist numerous market reports and guides for already established companies which wish to integrate AI into their pricing strategy, whereas pricing strategies for pure AI companies or AIaaS have been researched to a very limited extent. The study is expected to lie close to the field of pricing for software offerings, or SaaS, where research has been carried out to a greater extent than for AI products. This research also seeks to contribute further knowledge and expertise in the field of edtech and the personalization of learning.
1.7 Delimitations
The study will be delimited to the edtech industry, where results are gathered from online and digital learning companies in Europe and the US. The reason for this is to narrow the scope of potential AIaaS buyers and focus on the industry that is relevant for the commissioning company. The results can nevertheless be applicable to AI-companies providing a recommendation or personalization offering, which in this study collectively is called AIaaS, beyond the edtech industry. The study will also be delimited to pricing models applicable to B2B companies. A further delimitation is that literature on pricing models for new products or software services published before 2000 will not be reviewed. The reason for this is that SaaS is a highly current subject that is developing fast and in which new research is being produced continuously. The fact that the subject area and its research are continuously developing emphasizes the importance of applying up-to-date literature when conducting research within this dynamic field.
1.8 Disposition of Thesis
This section explains the layout of the thesis, and what is to be presented and where. The
report is from here on structured into five chapters where the following chapter introduces
previous and relevant literature to, for instance, the concept of value and pricing strategies for
software services as well as new products. The tools used in the method are also introduced to
give the reader the required knowledge to follow the remaining parts of the report. Chapter 3,
the method, presents and explains the manner in which the study was performed in order for
the reader to be able to repeat the study. This chapter also includes a clarification about the
commissioner's product and its features, which are explained at a deeper level. The quality of the research is also discussed in this chapter through the concepts of validity and reliability. In chapter
4, results and analysis of the study are presented. Primary results from the survey and in-depth
interviews are presented. This chapter is followed by the discussion in chapter 5, where the
authors reason and analyze the primary results as well as the results from the literature review
to find a solution. In chapter 6, the conclusions of the study are presented together with the
research’s contribution and limitations. This is followed by further recommendations and
future work for the subject studied.
2. Literature Review
The literature review will cover previous research within the area of pricing models in relation to price
determination of a new product, or software product, as previous studies of pricing models specific for AI
products do not exist. The section will also clarify the concept of value and how one can quantify it to relate it to
price. The difference and relation between pricing strategy, model and tool will also be covered to further clarify
the scope of this paper.
2.1 Value and its Definition
The concept of value is often conflated with the notions of quality, benefits and price
(Dodds, et al., 1991). Within marketing, there exist four types of values: functional, monetary,
social and psychological value (Doyle, 2000; Sánchez-Fernández & Iniesta-Bonillo, 2007). One
type of value-based approach can relate to shareholder value; the economic value added
approach which is based on the net profit of an organization and its cost of employed capital
(Doyle, 2000). Doyle (2000) also argues that marketing value strategies are related to the net
present value of the future cash flow. However, with this approach come many uncertainties
which in reality are very hard to accurately predict (Doyle, 2000). Perceived value is however a
factor more easily determined as it is done so by the customer and can be related to perceived
sacrifice, willingness to buy and price, where the perception of value is directly related to the
willingness to pay (Dodds, et al., 1991). However, value does not always have to be monetary, as is the case for non-profit organizations, for instance (Doyle, 2000). One non-monetary value is emotional value: the level of attachment the customer has to the product, which organizations seek to translate into economic value (Doyle, 2000). For digital products such as software, the value perceived by the customer can only be determined after the purchase of the product, an aspect to consider for AI products as well (Lehmann & Buxmann, 2009).
2.1.1 Customer Lifetime Value
Customer Lifetime Value (CLV) is a method of measuring a customer’s monetary value of a
product, based on the predicted future cash flows generated by a specific customer to an
organization. CLV is an approach of measuring a long term value compared to a quarterly
impact of cash flow by a specific customer and determines the payback period of a customer
for the marketing initially spent on the customer by the organization. One of the advantages
of CLV is that it assesses the future potential of a customer rather than calculating the present
customer profitability, which quantifies the current value of a customer to an organization’s
profitability based on the revenues and costs generated by the customer. On the other hand,
CLV can be difficult to quantify as it involves the forecasting of future cash flows. The CLV
can be calculated by multiplying the Average Monthly Revenue and the Gross Margin Per Customer
and dividing by the Monthly Churn Rate. (Farris, et al., 2010)
2.1.2 Network Effects
A common term used in connection to value is economies of scale, which refers to the characteristics that make a company's average cost per unit drop as output rises (O'Sullivan & Sheffrin, 2003). In the industrial era of the twentieth century, supply economies of scale was a term strongly connected to giant monopolies, as it implied efficiencies in production and led to an increase in the quantities produced, which in turn led to a lower unit cost of producing a product or service. By having these supply economies of scale, a company had significant cost advantages which made it hard for other actors to enter the market, creating high barriers to entry. In the internet era of the twenty-first century, the term demand economies of scale grew in popularity and was used to describe the creation of comparable monopolies. Significant for a company with demand economies of scale is that it benefits from technological innovations on the demand side, which can give the company an advantage, connected to the network effect, that is difficult for other actors within the same market to overcome. It is argued that demand economies of scale are the main driver of value in economic terms today, and additionally that demand economies of scale through network effects are the most significant differentiating factor (Parker, et al., 2016).
The term network effect emerged as a result of technological innovation (Parker, et al., 2016).
The network effect is a phenomenon in which a product or service becomes more valuable as
more people are using it (Shapiro, et al., 1999) and is therefore connected to the concept of
value in which network effects can be present. Examples of such phenomena are when big
networks become more valuable to the users, as in app development and social networks such
as Facebook that provides a more interesting experience as the number of members increases
(Parker, et al., 2016). This is due to the fact that Facebook can customize the user experience
further based on other users' likings and usage behavior (Zimmermann, 2017). Data network effects are a phenomenon, commonly arising through ML, in which the product becomes smarter with greater access to data from the users of the product. As users use the product they contribute more data, which in turn makes the product “smarter” and able to serve users even better. This creates a cycle in which the users are more likely to return and contribute more data, enabling the business to excel at serving users and thereby remain highly competitive, which is characteristic of network effects. Organizations that benefit from network effects seek to grow their number of users to make their offerings even more valuable to their customers (Lehmann & Buxmann, 2009).
There are direct and indirect network effects, where direct network effects arise from direct communication between the users, while indirect network effects are created purely by the consumption of the product: the more it is used, the more valuable it becomes. Furthermore, network effects create lock-in effects for the users, which in turn creates high barriers to entry for new actors, as the product provider needs many users to create value. This is also closely related to “winner-takes-all” markets, common where network effects are present; with weaker network effects, the market is usually an oligopoly (Lehmann & Buxmann, 2009).
2.2 The Impact of Price
The price-elasticity of demand reveals the demand’s sensitivity to alterations in price and can be
defined as “the percentage change in the quantity demanded resulting from a 1 percent change in the product’s
own price” (Maital, 1994, p. 185). This information can be used to explore whether price
increases can be made without declines in sales volume and if the price rise can cause the
buyers to choose cheaper products from competitors (Maital, 1994). Elastic products and
services are the ones that suffer from a large change in demand when exposed to a small price
change. What characterizes these kinds of products are the absence of brand, product
differentiation and customer attachment to the product where beef is an example of an elastic
product which easily can be substituted with for example chicken when the price increases
(Gallo, 2015). Markets with a large number of product substitutes and alternatives are known
to have a greater price sensitivity (Maital, 1994). In contrast, there also exist inelastic products and services, which only suffer a minor change in demand when exposed to a large change in price. An example of an inelastic product is gasoline, since it is a product that many people are dependent on and thereby purchase even when the price increases. In addition to making buyers dependent on the product, one can build a stronger brand to make the offering more inelastic. At the extreme of inelastic offerings are products or services that cannot be obtained from any other provider and are absolutely needed by the consumers, which appear when a company has a demand monopoly (Gallo, 2015). One application in which price sensitivity and insensitivity should be considered is price cuts. Cuts in prices can be used to gain higher volume at the expense of lower prices on units sold, which makes it a tradeoff. Prior to deciding on a price cut, one can determine whether demand is price sensitive or insensitive, which indicates how well the company would fare in this kind of tradeoff (Maital, 1994).
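Maital's elasticity definition above can be illustrated numerically; the figures below are invented for illustration, not taken from the sources:

```python
def price_elasticity(pct_change_quantity, pct_change_price):
    """Own-price elasticity of demand: the percentage change in quantity
    demanded divided by the percentage change in the product's own price."""
    return pct_change_quantity / pct_change_price

# Elastic good (e.g. beef): a 1% price rise cuts demand by 2.5%
beef = price_elasticity(-2.5, 1.0)
# Inelastic good (e.g. gasoline): a 10% price rise cuts demand by only 2%
gasoline = price_elasticity(-2.0, 10.0)
print(abs(beef) > 1, abs(gasoline) < 1)  # True True
```

An absolute elasticity above 1 marks a price-sensitive (elastic) offering, for which a price cut may pay off in volume, while a value below 1 marks an insensitive (inelastic) one.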
2.3 Strategy, Model and Tool as a Funnel of Pricing
The commonly used concepts tool, model and strategy can lead to confusion when used in
various manners. To be clear about these concepts an explanation of each will follow in this
section which reasons why one or the other will be used in which circumstance further on in
the report. The simplest one out of the concepts is the tool; a tool is a device or implement
which is used in the method to achieve a goal. Tools can be both physical objects as well as
software programs (Brenner, 2016).
A model however, is a simplified representation of reality in existing or future state used in an
explanatory manner (Spencer, 2009; Brenner, 2016; Verbrugge, 2016). The model can be a
representation of a system, device or idea. A model focuses on the most vital aspects of the
object it represents, where unimportant details may be left out. Within business, models are
used to schematically represent the decision making within a business, the business itself or
the processes within the business to make decisions (Brenner, 2016). Commonly used
modeling techniques are process model, workflow model and life cycle model (Verbrugge,
2016). Brenner (2016) argues that the most critical step when designing a model, is what parts
to include or neglect. The reason for this is that most models are decision support tools.
Therefore, a model is one or several tools, but a tool is not a model. Among models, there are static and dynamic models, where the former gives a specific outcome of the represented reality without considering altering external factors, while the latter represents a system behavior, taking into account attributes which evolve over time (Brenner, 2016).
Last but not least is the concept of strategy. Strategy is often contrasted with tactics, which concerns the details within a strategy, while the strategy itself refers to a game plan, the use of engagements to reach an objective, and usually sits at a higher level than both tactics and models (Mintzberg, et al., 2003). Pricing strategies involve adjusting the price, where different strategies include skimming, segmentation, discount and revenue management (Dolgui & Proth, 2010).
This research will focus on dynamic models within business where the pricing process is the
major focus area. In order to understand the concept of pricing model, different pricing
strategies as well as pricing tools will be presented in addition to pricing models. To visually
present the various pricing levels Figure 1 demonstrates the scope of each level, which are
presented from top to bottom in the following chapters.
Figure 1. The authors’ interpretation through a visualization of the different pricing levels based on literature
(Spencer, 2009; Brenner, 2016; Verbrugge, 2016; Mintzberg, et al., 2003)
2.4 Pricing Strategies
A pricing process can be used to help a company to decide on and implement prices. This
process can consist of a set of procedures and rules involving models, methodologies,
information, responsibilities and incentives. In addition, experience and estimations, as well as competitor and market data, can be involved in this process. These processes tend to be industry or company specific and are highly secret. (Simon, et al., 2003)
A pricing strategy is based on a set of various prices determined by an organization to meet its
objectives in a given period of time. Prior to setting the strategy, the organization must
evaluate and analyze its industry, including products, consumers, competitors, suppliers and
the structure of the market (Dixit, et al., 2008). The pricing is generally very closely related to
the strategy of the organization as it makes up the core of how to reach the revenue goals
(Lehmann & Buxmann, 2009).
This section introduces the reader to pricing strategies relevant to software products, including AI products. Pricing strategies not covered in this study are not considered relevant for software services or AIaaS products.
2.4.1 Skimming
Skimming means setting a high price at first and then lowering it to expand market share. The objective of this strategy is to first reach the customers who have a high willingness to pay, and then “to skim customers with lower reservation prices by a lower price” (Lehmann & Buxmann, 2009). These are usually customers that are insensitive to the price because they value the offering so highly, typically early adopters (Baker, 2011). Among software companies, however, this strategy is infrequently used (Lehmann & Buxmann, 2009).
2.4.2 Penetration
The penetration pricing strategy aims to set an initially low price, usually below the service’s
value to the customer, to be able to maximize market penetration (Baker, 2011; Lehmann &
Buxmann, 2009). The penetration strategy is particularly useful for organizations where
competitors already are established on the market with a large customer base. After the
organization has reached a critical mass of customers it increases the prices. This strategy is
sensibly used when network effects or economies of scale have a significant presence. In
addition to this, it has been shown for software vendors that this strategy creates a lock-in
effect where customers become dependent on the product at a cheap price, and later on are willing to pay much more for upgraded versions due to their dependence on the software product.
Due to existing network effects in the software industry, the penetration strategy is commonly
used by offering large discounts for the first customers (Lehmann & Buxmann, 2009).
2.4.3 Freemium
Follow-the-free, or freemium, strategy is based on the concept that customers receive a small
quantity of a product free of charge to create a hook or lock-in effect for the customer, with
the aim of creating a demand so strong that the customer buys complementary products or
premium versions (Anderson, 2009; Lehmann & Buxmann, 2009). An example of this is a free software version that offers more features when the user pays a fee, meaning that the content can vary from free to expensive. For digital products, a typical web site follows the 5% rule, where only 5% of the users pay for the premium version. This is however enough, as the cost of serving the remaining 95% is close to zero and hence covered (Anderson, 2009). Another form of freemium is versioning, where more mature companies segment their customers into different tiers, while startup companies begin by giving the entire product away for free until they know which parts of the product will be revenue generating (Anderson, 2009). In addition to proprietary solutions, free and open source software (FOSS) also exists, where the open source software usually is free and income is generated through augmenting services such as additional features, consulting and maintenance. This type of pricing is, however, unrelated to the software itself (Lehmann & Buxmann, 2009).
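As a back-of-the-envelope sketch of the 5% rule described above (all figures are illustrative assumptions, not data from Anderson):

```python
def freemium_monthly_revenue(users, paying_share, premium_price, cost_per_free_user=0.0):
    """Net monthly revenue under freemium: only a share of users pay,
    while serving the free majority carries a (near-zero) cost."""
    paying_users = users * paying_share
    free_users = users - paying_users
    return paying_users * premium_price - free_users * cost_per_free_user

# 100,000 users, 5% paying $10/month, near-zero cost of serving free users
print(freemium_monthly_revenue(100_000, 0.05, 10.0))  # 50000.0
```

Even a small per-user serving cost barely dents the result here, which is the economic logic behind giving 95% of users the product for free.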
2.4.4 Price Leadership
There are also several variations of price leadership; the barometric model, the collusive model as
well as the dominant firm model. The latter occurs when there is one large producer (or provider) and several small ones, where the smaller providers do not produce enough output or hold enough market share to be able to affect the price. Hence, all smaller firms have to follow the price set by the dominating leader of the market (Deneckere & Kovenock, 1992). The collusive
model was first identified by Rotemberg and Saloner in 1990, defining it as a situation where
“one of the firms announces a price change in advance of the date at which the new price will
take effect and the new price and date are swiftly matched by the other firms in the industry”
(Ishibashi, 2008, p.704). The barometric model is based on a firm which is more adept at setting the price and is first to do so, making other firms follow the price set, although the price maker may not be in a dominant market position (Deneckere & Kovenock, 1992).
2.5 Pricing Models
In general, pricing models can be divided into the following three groups: cost, which is based on cost accounting; competition, which is based on observed or anticipated price levels of competitors; and value, which sets prices based on a product's or service's value delivery to a specific customer segment (Hinterhuber, 2008). Value based pricing centers around
customers’ value perception of the product and focuses on developing a long-term customer
value. Conversely, cost based pricing centers around product costs and focuses on short-term
vendor value and competition based pricing focuses on market price. Figure 2 illustrates the
conflict between cost based and value based pricing models (Harmon, et al., 2004).
Figure 2. Differences in the product development process between cost based and value based pricing (Harmon,
et al., 2004)
For digital goods, cost based pricing is of little use due to their special cost structure, although it can be argued that it is more suitable for SaaS products. Competition based pricing may be suitable for some companies offering digital products, depending on the market landscape. In addition, there are also auctions, which are another form of pricing commonly used by internet advertising companies. This is dependent on the degree of interaction by the customer, which is high in an auction based pricing model. Auctions usually make little sense for digital goods, and therefore for software products. (Lehmann & Buxmann, 2009)
Value based pricing has increasingly gained acknowledgement in the literature and among
practitioners due to the recognition that sustained profitability relies on the understanding of
value creation sources for customers, designing offerings that meet customers’ demands as
well as setting value-based prices (Hinterhuber, 2008). In general, this pricing strategy reveals
customers’ perception of a company’s product value (Ingenbleek, et al., 2003).
The term value generally refers to the satisfaction customers receive from utilizing a service or product offering. Nagle et al. (2013) write about two of the four earlier mentioned forms
of value that require different estimation approaches: monetary and psychological. The former
represents a customer’s total income enhancements or cost savings from purchasing a
product. Generally, monetary value is considered as the most important element for many
B2B purchases since a supplier’s service or product offering can be translated into tangible
cost savings for the customer which gives the offering a high monetary value. On the other
hand, there exist products without tangible monetary benefits for the customer but rather
create inherent satisfaction and pleasure, which is typical for psychological value. In some
cases, both types of value can be created, which makes it difficult to determine which of the two is most essential to the purchase decision (Nagle, et al., 2013). Setting the right price when the value perception of an offering is complex can be seen as a process of finding a proper balance between clients' perception of what is obtained and what is sacrificed to use the offering (Iveroth, et al., 2013).
The disadvantage of the cost and competitor based pricing strategies is their lack of sufficient
attention to the needs and requirements of the customer while an advantage is the availability
of data to base the strategies on. Conversely, a disadvantage with customer value-based
methods is the difficulty to obtain and understand relevant data while an advantage is that the
customer perspectives are taken into account (Hinterhuber, 2008). If there is a high relative
product advantage, competition-based and value-based pricing are two strategies that provide
better price ceiling understanding. In these situations, it is difficult to compare products to
other offerings which make a customer’s perception the most significant information source in
the process of understanding a product’s worth to the customer (Ingenbleek, et al., 2003).
It is contended by marketing researchers that cost-based pricing results in lower-than-average profitability (Simon, et al., 2003). An empirical study made by Ingenbleek et al. (2003)
showed that value-based pricing strategies are positively associated with the success of new
products while no such corresponding association could be identified between the adoption of
competition-based and cost-based pricing and new product success. Hence, the most suitable
strategy to use when deciding on new product pricing is the customer value-based approach
(Ingenbleek, et al., 2003). A representation of the strategic challenge is shown in Figure 3.
Figure 3. Value delivery and value extraction (Simon, et al., 2003)
2.5.1 Software Pricing
The software industry is comparable to the industry emerging around AI, particularly as-a-Service (aaS). The term aaS implies that the user buys the software product over the internet, rather than installing it locally through license fees, which historically has been the most common pricing model within software (Lehmann & Buxmann, 2009). Software aaS (SaaS) is fundamentally
different from other industries due to for instance network effects, which enhances the value
of a product (Lehmann & Buxmann, 2009). This is a similarity one could draw to AIaaS,
making previous literature on SaaS of high interest for this study. AIaaS is a third party providing AI outsourcing, which allows the buyer to experiment with AI at lower risk and with a lower initial investment. Examples of this include setups where experimentation occurs
on a cloud where the machine learning algorithms can be tested. Examples of organizations already doing this are Amazon Machine Learning and Google Cloud Machine Learning, which seek to assist organizations in analyzing their data (Rouse, 2017).
In general, there is no standard pricing for SaaS companies. In recent history, software pricing
was often based on licensing models, based on computing power or user-oriented (Lehmann
& Buxmann, 2009). Today however, most are constructed from a usage-based model in six different manners: (i) subscription fees paid monthly or annually, also categorized under recurring revenue, (ii) revenue based on advertising, (iii) transaction based revenue, based on the number of transactions the customer performs, (iv) premium based revenue, built on premium versions besides the first free version, (v) implementation and maintenance revenue, as well as (vi) software licensing (Laatikainen & Ojala, 2014). In a survey conducted by SIIA
(2006) software providers believe that subscription models will be more common than single
license payments in the future. In addition to this, it is thought to be increasingly common to
have a combination of a single license and a subscription model. Such hybrid models are often
based on a percentage fee of the license as fixed annual payments, seeking to cover the
maintenance fee for the provider, usually 20% of the pricing model (Lehmann & Buxmann,
2009).
In the study by Lehmann and Buxmann (2009) it is concluded that lock-in effects, switching
costs and network effects are important factors to consider when determining a pricing
strategy. They present a framework in which six parameters are included and need to be
accounted for when designing a pricing model. The parameters are: Formation of price,
Structure of payment flow, Assessment base, Price discrimination, Price bundling and
Dynamic pricing strategies, presented in Figure 4. For the formation of price, value based is
often more suitable for digital products as the cost base does not reflect a reasonable price to
the customer. The structure of the payment flow can be divided into single or recurring
payments, or a hybrid model of the two which is thought to become increasingly common.
The assessment base is probably the most important parameter when designing the pricing
model as this determines whether the product should be priced per user, time, transaction or
other suitable variables for the software and is commonly considered by the customer to be an
indication of whether the price is fair or not. The price discrimination considers aspects such
as versioning, geographic regions or if the customer is a private or corporate individual
etcetera. The two final parameters include product bundling and how the offering is assembled
as well as the strategy selected to reach the desired market position. When selecting the
dynamic pricing strategy one must consider the impact of network effects, switching costs and
economies of scale. Amongst software vendors it is common to follow the penetration strategy, where a low initial price is set to create a lock-in effect, followed by leveraging network effects and switching costs. Skimming, where one sets a high initial price to find
customers’ willingness to pay, is less commonly used amongst software companies. (Lehmann
& Buxmann, 2009)
Figure 4. Pricing model parameters for software products (Lehmann & Buxmann, 2009)
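The six parameters of Lehmann and Buxmann's framework can be captured as a simple data structure; the example values below are illustrative assumptions for an AIaaS offering, not prescriptions from the source:

```python
from dataclasses import dataclass

@dataclass
class PricingModel:
    """The six design parameters in Lehmann and Buxmann's (2009) framework."""
    price_formation: str       # value based, cost based or competition based
    payment_flow: str          # single, recurring, or a hybrid of the two
    assessment_base: str       # what the price is charged per: user, time, transaction...
    price_discrimination: str  # versioning, geographic region, customer type...
    price_bundling: str        # how the offering is assembled
    dynamic_strategy: str      # e.g. penetration or skimming

# One hypothetical configuration for a B2B AIaaS personalization product
model = PricingModel(
    price_formation="value based",
    payment_flow="recurring",
    assessment_base="per monthly active user",
    price_discrimination="tiered by customer segment",
    price_bundling="core personalization engine plus analytics add-on",
    dynamic_strategy="penetration",
)
print(model.assessment_base)  # per monthly active user
```

Filling in all six fields forces each parameter decision to be made explicitly, which mirrors how the framework is meant to be used when designing a pricing model.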
Another study on SaaS pricing (Chao, 2013) categorized software pricing into three different categories: linear pricing (LP), 2 Part Tariff (2PT) and 3 Part Tariff (3PT). A linear pricing model sets the price per event, for example per monthly active user (MAU) for an online education provider. The 2PT is based on a fixed base price, where a price for each event is added on top of the initial base price. The 3PT is also based on an initial base fee, which is higher than for the 2PT. The first x events are then included for free, followed by a higher linear price than in the previous models for the events surpassing amount x. Many early software companies used linear pricing models; 2PT has hitherto been uncommon, while 3PT models currently are the most common models for B2B schemes as well as consumer products. Although the 3PT has been discussed in antitrust cases, the method has been shown through game theory to increase a leading firm's profits in an oligopoly market. The 3PT model outperforms the 2PT and LP models in a competitive oligopoly market where substitute products are present. (Chao, 2013)
Harmon et al. (2004) argue that cost based pricing for software products has been given
support by researchers within strategic cost management. Practitioners of activity based
costing (ABC) argue that traditional cost based methods are better suited for high volume production, where costs per unit can be calculated, than for complex software products. (Harmon, et al., 2004)
2.5.2 Pricing for New Products
Iveroth et al. (2013) created a five-dimensional pricing model named SBIFT based on a case
study of the telecom company Ericsson. The SBIFT model is highly applicable to rapidly
changing industries, such as the telecom industry where pricing is complex due to the variety
of products as well as the rate at which the industry is developing. Primarily, the SBIFT model
was invented for organizations to differentiate their offering by price. The five dimensions;
Scope, Base, Influence, Formula and Temporal Rights, establish an underlying model for price
models, which can be used to explain and characterize a company’s price model. However,
besides the five dimensions, there are additional agreements that could be useful to consider
between a seller and a buyer. Further, the model would need to be complemented with
scenario analysis to follow and foresee the changes in customer demands. Lastly, within a
business ecosystem, there are different actors who influence each other. Hence, a certain price
model should be compared to other price models being used by actors in the specific
ecosystem to envision possible effects of certain price model alterations. (Iveroth, et al., 2013)
2.5.3 Performance Based Pricing
Outcome based pricing, or performance based pricing (PBP), is based on the idea that the seller gets paid after the customer can determine the actual outcome of the product or service, an approach which is becoming increasingly common. Many industries have previously based their prices on the costs of the service, but are now moving towards PBP, where the deliverer has to work towards a set goal. This pricing method is applicable to industries such as marketing, construction, consulting and heavy industry. (Shapiro, 2002)
One of the many benefits of PBP is the alignment of the buyer's and the seller's goals,
which ultimately become unified. In addition, PBP provides assurance for the buyer, as a
seller who under-delivers will not be compensated. This resembles an insurance policy: the
buyer can pay more, receive higher deliverables, and minimize the risk of overpaying for an
unwanted outcome. Lastly, the third benefit of PBP is the level of engagement it forces upon
the two parties. PBP is usually used in complex situations with intricate and contradicting
objectives. Unlike traditional contracts, PBP engages both parties in a discussion where the
other's objective must be understood in order to set a corresponding, fair price. The buyer
and the seller can then jointly focus on mutual objectives, which leads to better agreements
and lower unexpected costs for the seller. (Shapiro, 2002)
As with any other pricing model, PBP has disadvantages as well. Similarly to usage-based
pricing, the actual payment can only be determined after delivery. Unlike usage-based
pricing, however, PBP charges for the quality rather than the quantity of usage. Another
disadvantage is the time to payment, i.e. the cash flow for the vendor. PBP is usually not
suitable for organizations in great need of short-term cash flow. Shapiro provides the
example of a software startup, which needs a shorter time to cash flow than larger, stable
organizations. If such a startup were to, for instance, price on the savings it generates for its
customer, it may take the customer months to determine those savings, prolonging the
payment to the startup. Shapiro, however, proposes a solution of combining early fixed
payments with performance-based payments made after the quality of the output has been
determined. Nevertheless, revenue recognition and determining what the cost savings derive
from remain very intricate. (Shapiro, 2002)
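Shapiro's proposed combination of fixed and performance-based payments can be sketched as a simple calculation. The proportional attainment formula and all figures below are assumptions made for illustration, not terms taken from Shapiro (2002).

```python
def hybrid_payment(fixed_fee, performance_fee, target, achieved):
    """Early fixed payment plus a performance-based component paid in
    proportion to how much of the agreed target was reached (capped at 100%).
    The proportional formula is an illustrative assumption."""
    attainment = min(achieved / target, 1.0)
    return fixed_fee + performance_fee * attainment

# Hypothetical deal: USD 5,000 upfront, up to USD 10,000 tied to a 20%
# churn-reduction target, of which 10 percentage points were achieved.
print(hybrid_payment(5_000, 10_000, 0.20, 0.10))  # → 10000.0
```

The fixed component addresses the vendor's short-term cash-flow need, while the capped performance component preserves the outcome-based incentive once the customer has been able to measure the result.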
2.6 Pricing Tools
This section presents the tools used later on in the study for determining price and value of a
product. The tools are independent of one another but are both commonly used within
marketing research.
2.6.1 Van Westendorp Price Sensitivity Meter
The Van Westendorp Price Sensitivity Meter (PSM) was introduced by the Dutch economist
Peter van Westendorp in 1976. The concept consists of asking four questions regarding the
price of a certain product: (i) at what price do you consider the product to be too expensive to be
bought? (ii) at what price do you consider the product to be too cheap to have the wanted performance?
(iii) at what price do you consider the product to be expensive, but you may still consider buying it? and
(iv) at what price do you consider the product to be a bargain, i.e. good value for money? (Van
Westendorp, 1976). The answers are then plotted as a series of cumulative distributions, one
line for each question, as shown in Figure 5. The horizontal axis shows the price respondents
are willing to pay, while the vertical axis shows the cumulative frequency: either the number
of respondents or a percentage of the population.
Figure 5. Cumulants of Van Westendorp's PSM (Lipovetsky, 2006)
The cumulative distributions for "too cheap" and "cheap" can also be plotted inversely.
Where "too cheap" and "expensive" meet, one can define the lower bound of the acceptable
price range, while the intersection of "cheap" and "too expensive" can be used as its upper
limit (Esomar, 2015). The point where "too expensive" and "too cheap" meet is known as
the optimal price point (OPP). This graph is displayed in Figure 6. According to Van
Westendorp (1976); "Optimal price is the price associated with this point [that] represents
a price at which resistance against the price of a particular product (too expensive or too
cheap) is very low while the percentages iron each other".
Figure 6. Showing the acceptable price range determined by the two intersecting points (Esomar, 2015)
In the book Data Driven Sales (2018) it is discussed how organizations should price their SaaS
products. The Van Westendorp PSM is presented there as a tool particularly suitable for new
products (Poyar, 2018). Poyar also describes that pricing at the lower end of the acceptable
price range enables a penetration strategy, while pricing at the upper end enables a skimming
or profit-maximization strategy (Poyar, 2018).
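The mechanics of the PSM can be illustrated with a short sketch. The respondent answers, the price grid and the simple step-wise crossing search below are all hypothetical choices made for illustration; the sketch only shows how the four cumulative curves and their intersections yield the acceptable price range and the OPP.

```python
import numpy as np

def cumulative_share(answers, grid, descending=False):
    """Share of respondents for whom price p falls in the stated region:
    ascending curves count answers <= p, descending curves answers >= p."""
    answers = np.asarray(answers, dtype=float)
    if descending:
        return np.array([(answers >= p).mean() for p in grid])
    return np.array([(answers <= p).mean() for p in grid])

def first_crossing(grid, rising, falling):
    """First grid price at which the rising curve reaches the falling one."""
    return grid[np.argmax(rising - falling >= 0)]

# Hypothetical answers (USD per month) to the four PSM questions.
too_cheap = [2000, 3000, 4000, 5000, 6000, 7000]
cheap     = [2500, 3500, 4500, 5500, 6500, 7500]
expensive = [3000, 4000, 5000, 6000, 7000, 8000]
too_exp   = [4000, 5000, 6000, 7000, 8000, 9000]

grid = np.linspace(1000, 15000, 281)                     # USD 50 steps
tc = cumulative_share(too_cheap, grid, descending=True)  # plotted inversely
ch = cumulative_share(cheap, grid, descending=True)      # plotted inversely
ex = cumulative_share(expensive, grid)
te = cumulative_share(too_exp, grid)

lower = first_crossing(grid, ex, tc)  # "expensive" meets "too cheap"
upper = first_crossing(grid, te, ch)  # "too expensive" meets "cheap"
opp   = first_crossing(grid, te, tc)  # optimal price point
print(lower, opp, upper)
```

With real survey data, the same curves would be drawn from the collected answers, and the two intersections delimit the acceptable price range discussed above.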
2.6.2 Conjoint Analysis
Green and Srinivasan concluded in 1978 that conjoint analysis is an effective tool for
explaining the structure of a customer's preferences and, to some extent, for predicting
customers' behavior towards new products or features (Green & Srinivasan, 1978). The tool
is based on a set of techniques aiming to measure buyers' tradeoffs amongst combinations
of attributes in a service or product design. Prior to releasing a product or service, conjoint
analysis can be used to discover what price customers are willing to pay for the preferred
product attributes (Green & Srinivasan, 1990).
Simplified, the conjoint analysis can be divided into five stages: (i) establishing the attributes,
(ii) assigning levels to the attributes, (iii) defining scenarios, products or solutions, (iv)
establishing preferences through data collection and (v) analyzing and interpreting the data
by calculating utilities and importance scores (Silverstein, et al., 2008; Ryan, 1996). This
information can be used to determine customers' price sensitivity, since it can vary with a
product's characteristics, which can also be described as product attributes (Orme, 2010).
For the first stage, there are several methods for establishing the attributes: this can be done
through literature reviews, group discussions, interviews or questioning of individual
subjects. The attributes can also be defined by the researcher through an already defined
research question. The second stage involves determining the levels of the attributes; the
attributes must be quantifiable and capable of being traded off against each other.
Following this, the third stage involves combining the different levels and attributes into
hypothetical products, solutions or scenarios. As the number of plausible scenarios increases
with the number of attributes and attribute levels, one must ensure that unrealistic options
are not presented, such as an extremely costly feature at an incredibly cheap price that in
practice would not be possible for the vendor to produce. The fourth stage of the conjoint
analysis involves establishing the preferences. (Ryan, 1996)
It is also here that different versions of the conjoint analysis are introduced, which can be
divided into three groups: ranking exercises, rating exercises and discrete choices. A more
specific version is the choice-based conjoint analysis (CBC), only one of many alterations of
the basic method (Sawtooth, 2017). The CBC belongs to the group of discrete choices and
differs from other versions of the conjoint analysis in that it is based on product choices
rather than rankings, which is a more commonly used approach (Sawtooth, 2017). The fifth
stage of the conjoint analysis is the analysis of data, where rankings and ratings usually are
analyzed through graphical methods and relative importance scores to find out which
attributes are of higher value to the customers. In addition, regression techniques can be
used for all three versions of the conjoint analysis (Ryan, 1996).
Known limitations of the conjoint analysis include that it has been argued to be tedious
for respondents, costly for researchers and quite difficult for the applied industry to
understand and implement (Rao, 2014). In addition, the respondent must visually see
the available options, meaning a phone interview without screen sharing cannot be pursued
(Rao, 2014). Ryan (1996) also argues that respondents may not be well informed about the
product and, even if they are, the conjoint analysis does not allow respondents to indicate
the strength of their preferences.
3. Method
This section will cover the research process and how it is performed. The method will be presented and
motivated, and can be divided into qualitative and quantitative approaches, where this research primarily
is based on a qualitative approach. The research design explains the strategy chosen to make the researched
phenomenon, the explanandum, researchable. This section also explains the explanans chosen to understand the
explanandum, i.e. the purpose of the study, as well as the methods for empirical data gathering. Furthermore, a
discussion of the quality and the ethics of the research is included in this section. Lastly, there is a
presentation of the product whose price and value is investigated in this research.
3.1 Research Design
This exploratory research is performed through a triangulation of qualitative and
quantitative data gathering, where qualitative data gathering is associated with interviews
with open-ended questions while quantitative data gathering is associated with numbers and
facts one can quantify (Collis & Hussey, 2013). The research is exploratory as it investigates
an area with a very limited amount of previous research. Exploratory research seeks to study
patterns, concepts or a hypothesis, where the result can provide a base for future research
(Collis & Hussey, 2013). The research was initiated with a literature review and field
interviews, through which knowledge was gained in the shape of primary and secondary data
in order to understand the three main subject areas: AI, value and pricing. The secondary
data consists of a literature review, while the primary data consists of responses from a web
survey as well as in-depth interviews.
As parts of the research lie closer to the positivist paradigm, a descriptive survey
appeared to be the most suitable method for gathering quantitative data (Collis & Hussey,
2013). The survey was conducted through a questionnaire sent out to targeted respondents,
with predetermined as well as open-ended questions (Blomkvist & Hallin, 2014). The entire
research lies in between the positivist and the interpretivist paradigm, as it uses a
combination of qualitative and quantitative methods in seeking to answer the research
questions. Nevertheless, one can argue that it is closer to the interpretivist paradigm, as
opinions and subjective thoughts are collected (Collis & Hussey, 2013). One can relate this
to the interpretation of social science, where an interpretivist researcher must see the
phenomenon "through the eyes of the people being studied", which provides multiple
aspects of reality rather than a single and narrow reality (Greener, 2008). The research
includes a qualitative method based on structured interviews combined with another
qualitative method consisting of semi-structured interviews (Blomkvist & Hallin, 2014). This
research starts by examining a business problem and generates theory from the research,
rather than beginning with theory and then testing it. For that reason, the reasoning carried
out in the study follows an inductive logic, as the research intends to find a new pricing
model rather than test an existing one (Greener, 2008). A limitation of the chosen research
design is that the risk of bias is greater with the interpretivist paradigm than with the
positivist paradigm. With a positivist paradigm, however, the researcher may exclude
parameters which cannot be considered through a positivist approach.
3.2 Data Collection
The data collection can be divided into a qualitative and a quantitative part. The quantitative
part was carried out through a survey in which the Van Westendorp method was applied and
several OCPs were asked about their willingness to pay for the investigated hypothetical
product. The qualitative part consists of semi-structured and structured in-depth interviews,
through which the OCPs' perception of the product's value and worth was to be
determined. The answers gathered through the Van Westendorp method in the survey were
to be used for the pricing points needed in the structured part, which was inspired by the
first four stages of the CBC. Prior to this, the authors also carried out preparatory interviews
with three independent and relevant actors with knowledge of pricing, AI solutions and the
edtech industry.
3.2.1 Quantitative Sampling through a Survey
The chosen population to investigate was supposed to reflect the organizations which have (i)
thought about developing personalization algorithms in-house or (ii) whose online education
platform would benefit by personalization through AI. The population was selected based on
a list of prospects provided by the commissioner, a population which consisted of 1211
companies. The companies in the population are present in Europe and the US and provide
online learning content with a varied amount and type of end learners that commonly are
students or employees. Questions were sent out in a web survey which was published on the
1st of March 2018 and was operative for eight weeks. The survey was sent to the accessible
population, which consisted of 146 OCPs. The difference between the total population and
the accessible population is that the latter is the one the authors had the possibility to reach
through email, while proper contact information for the rest of the total population was not
available. The accessible population is a part of the entire target population and aims to
reflect the behavior of the entire target population, as it was randomly selected. This is
reasonable as the accessible population reflects the same geographies, MAUs and segments
that are present in the entire target population. Out of the 146 companies in the accessible
population, 41 provided useful answers to the survey, which gives a response rate of 27%.
Previous research has stated that a response rate above 20% is not unusual for external,
online surveys (Nulty, 2008). Out of the 146, 32 respondents answered the Van Westendorp
questions correctly, giving a response rate of 22%. Reminders were also sent out through
email every two weeks, on three occasions, in order to increase the response rate and reach
a higher accuracy (Fink & Kosecoff, 1998). The data collected through the Van Westendorp
method is interval variable data, which means that it is quantifiable (Collis & Hussey, 2013).
The questions asked in the survey were based on the Van Westendorp method as well as
information needed to categorize the potential customer into its tier, based on the number of
MAUs. Also, a question on the organization’s current IT expenses was formulated in order to
find a possible relationship between current expenses, willingness to pay and the size of the
organization based on the number of MAUs. The questions asked in the survey are presented
in Table 1.
Table 1. Presentation of the questions asked in the web survey

1. What position do you hold in your organization?
2. How many monthly users do you currently estimate your business to have?
3. Has your organization considered personalizing the education offering?
4. If yes, have you considered developing a personalization algorithm in-house?
5. If yes, approximately to what value have you estimated the costs to be for developing a personalization algorithm in-house?
6. If no, please explain the reasons why personalization of education does not apply to your business.
7. Please estimate your current expenses in USD per month for IT services such as data storage and other cloud services.
8. With a product that brings 100% increase in efficacy, 12% increase in daily engagement and 4.5 times more problems solved by users; at what price would you find...
   8.1 ... the product too cheap to doubt its performance?
   8.2 ... the product to be cheap, a bargain?
   8.3 ... the product to be expensive, but you would still consider buying it?
   8.4 ... the product to be too expensive to be bought?
9. Additional comments
3.2.2 Qualitative Sampling through In-Depth Interviews
The qualitative data collection was performed through in-depth interviews with a structured
as well as a semi-structured part, aiming to find out which attributes of a product an OCP
values. The structured part of the interview was performed in the same manner as the first
four stages of the conjoint analysis, through a choice-based method where four options were
presented in each of the ten questions given. The respondents were then asked to choose
one of the four options (which were either a hypothetical product or no product at all) for
each of the ten questions. This part received 16 responses out of 146 and, using the formula
presented by Greener (2008) for calculating absolute sample size, the response rate
amounted to 11%. This part was based on the attributes and attribute levels presented in
Table 2, and these attributes were combined into different product concepts, which are
presented in Appendix III.
Table 2. Presentation of the attributes and attribute levels used in the structured interview questions

KPI: (1) 2x Learning Efficacy, (2) 4.5x more Problems Solved, (3) 12% improved Churn Rate
Price: (1) USD 5,000, (2) USD 7,500, (3) USD 10,000
Additional Service: (1) Dedicated Solutions Engineer, (2) Weekly Report, (3) Executive Briefings, (4) None
The semi-structured part of the interview consisted of questions based on the attributes and
attribute levels. The questions asked in the semi-structured part are presented in Table 3 and
received 24 responses. More respondents were able to answer the semi-structured part than
the structured part because some believed they were not in a position to make a realistic
evaluation of the products presented: their organizations had not considered personalization
through AI, and they therefore had no realistic value or price perception for the AIaaS. Such
respondents were, however, able to discuss the attribute levels per se.
Table 3. Presentation of the semi-structured interview questions

Is there a KPI that is of higher value to your organization than the others? If so, why?
What are your comments on the price range?
How did the pricing affect your choice?
Is there an additional service that is of higher value to your organization than the others? If so, why?
3.3 Application of Literature and Theory
The report's literature review takes a thematic approach to previous research on price
strategies and presents established price models to introduce the reader to the researched
area. It draws on secondary sources: published books, conference proceedings and
peer-reviewed journal articles. The major difference between primary and secondary sources
is the delay between when the data behind secondary sources is collected and when it is
published; in primary sources, the data and information collected is very current (Greener,
2008). In addition, the literature study guides the research by introducing relevant terms
within the dynamic field of AI, edtech, pricing models and the concept of value in relation
to price.
The chosen method for gathering empirical material to determine a pricing model consists
of collecting answers from a survey and from in-depth interviews. The qualitative in-depth
interviews, inspired by the CBC, sought to determine the perceived value. The analysis of
the collected answers is intended to assist in explaining how the OCPs' perceived value of an
AI recommendation product relates to price, and how the price affects the perceived value.
The results from the survey and the in-depth interviews, together with the insights from the
literature review, serve as the foundation for formulating a pricing model for AI-companies
in general, and for the commissioner in particular.
3.3.1 Theory Bits
The method for determining a price model is to combine the review of written material,
based on secondary sources, with the answers from the preparatory interviews and the
qualitative and quantitative data from the survey and the in-depth interviews, the primary
sources. Theory bits refer to pieces of theory, such as concepts or hypotheses from
grounded theories, where one must be cautious to keep the theory bits relevant when they
are used outside their grounded-theory context (Holton, et al., 2010). The proposed model is
therefore the result of combining bits of the different methods, in addition to using a theory
in a new setting, to propose one final model.
3.3.2 Van Westendorp Price Sensitivity Meter
In the book Data Driven Sales (2018), it is argued that the Van Westendorp PSM is a
useful tool for determining the price of a new SaaS product, and it is therefore believed to
be suitable for a new AIaaS product as well (Poyar, 2018). Furthermore, Poyar states that the
Van Westendorp PSM is particularly useful compared to more complex price sensitivity
tests, as it can be applied to small sample sizes, in "one-on-one conversations" or through
collection via web services (Poyar, 2018). The empirical material collected through the Van
Westendorp analysis aims to reveal customers' willingness to pay for the product. The four
questions needed to determine the accepted price range were asked: (i) at what price do you
consider the product to be too expensive to be bought? (ii) at what price do you consider the product to be
too cheap to have the wanted performance? (iii) at what price do you consider the product to be expensive,
but you may still consider buying it? and (iv) at what price do you consider the product to be a bargain,
that is, good value for money? (Van Westendorp, 1976). This information was plotted as
cumulative frequencies in a graph to display the price range that Sana Labs should target
based solely on the answers from the Van Westendorp analysis. The respondents were
OCPs who could be potential customers of Sana and are thereby in a good position to
estimate what price would be too cheap, too expensive, a bargain, or expensive but still
worth considering. To analyze the results, the data was plotted in several graphs to show the
optimal price point as well as the acceptable price range.
3.3.3 Value Determination Inspired by CBC
As the conjoint analysis is a statistical method, the authors did not want to complement the
quantitative Van Westendorp method with another quantitative method, but rather with a
qualitative one. However, qualitative methods that seek to quantify value have proven
difficult to find, and the conjoint analysis is the most commonly used method to quantify
perceived customer value. Furthermore, the conjoint analysis is a quantitative method that
requires a large number of respondents to be statistically significant and valid as a
data-gathering method. As an example, a total population of 100 would need 92
respondents to avoid a margin of error beyond ±3% (Orme, 2010). Being limited by time
and resources to gather a sufficient number of responses, the authors instead let the
structured part of the in-depth interviews be influenced by the CBC. By doing this, the data
gathering no longer serves as a tool to predict the value perception of the whole population.
Therefore, the structured part of the in-depth interviews was performed up to the fourth
stage of the CBC to act (i) as a preparatory exercise for the commissioner to carry it out in
full and (ii) as a base for discussion with the respondents, where qualitative responses could
be collected through semi-structured interviews, as well as to test the appropriateness of a
CBC.

The structured interview part seeks to act as a building block for value-based pricing. The
value-based pricing will not seek to determine the customers' willingness to pay for each
product feature, but rather for the product as a whole, also known as product bundling
(Lehmann & Buxmann, 2009). According to Dholakia (2016), segmentation is vital when
setting a value-based price, and for B2B products a segment can represent one single
customer (Orme, 2010).
The empirical material collected through the structured interviews follows the first four
stages of the CBC analysis. It is discussed in terms of which product features the
respondents consider the most and the least valuable, which in turn is significant for a
company's value capturing and price strategy. Following the first of the five stages of the
conjoint analysis (Green & Srinivasan, 1978; Ryan, 1996), the authors held interviews with
the commissioner, who provided suggestions for the attributes. The first stage of selecting
attributes also involved a pre-defined concept, as the hypothetical product to be investigated
had some attributes already set whilst others were yet to be determined. Preparatory
interviews were also held with external actors, who provided their insights on the considered
attributes. The attributes to be investigated were KPIs, Price and Additional Services.
Following this, the second stage of the conjoint analysis was carried out, where the authors
determined the levels of the attributes. The authors needed to make sure the attributes were
quantifiable and could be traded off. The levels for each of the attributes were developed
based on interviews with the commissioner: for KPIs, 2x Learning Efficacy, 12% Improved
Churn Rate and 4.5x Problems Solved; for Price, USD 5,000, USD 7,500 and USD 10,000; and
for Additional Services, Dedicated Solutions Engineer, Executive Briefings and Weekly Performance
Reports. In the third stage of the CBC, hypothetical but plausible and realistic product
concepts were put together in advance of the structured interviews. The two attributes with
three attribute levels and the third attribute with four attribute levels create 36 unique
product combinations. By formulating ten questions that each present 3-4 product
concepts, it is possible to cover all of the unique combinations. However, the authors
excluded some of the attribute-level combinations because they were unrealistic. For
example, it is not possible for the AIP to offer additional services at the lowest price. For
that reason, all such combinations were excluded, and some other combinations appeared
twice instead, in order to provide product combinations that are as realistic as possible.
After adjusting for the unrealistic combinations, the total number of product combinations
amounted to 33. The structured interviews were then held according to the fourth stage of
the CBC, where respondents chose the product that seemed most attractive to them or their
organization.
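The combinatorics of the third stage can be sketched as follows. The exclusion rule in the sketch (no additional service at the lowest price tier) is an assumption chosen for illustration; the study applied its own adjustments and arrived at 33 question slots, so the counts produced here differ from the study's.

```python
from itertools import product

# Attribute levels from Table 2.
kpis = ["2x Learning Efficacy", "4.5x more Problems Solved",
        "12% improved Churn Rate"]
prices = [5_000, 7_500, 10_000]
services = ["Dedicated Solutions Engineer", "Weekly Report",
            "Executive Briefings", None]

# Full factorial design: every KPI x price x service combination.
all_combos = list(product(kpis, prices, services))  # 3 * 3 * 4 = 36

# Hypothetical exclusion rule: no additional service at the lowest price tier.
realistic = [(k, p, s) for k, p, s in all_combos
             if not (p == 5_000 and s is not None)]
print(len(all_combos), len(realistic))
```

Enumerating the full factorial first and then filtering makes the exclusion rule explicit and auditable, rather than hand-picking the concepts shown to respondents.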
3.3.4 Regression Analysis
The quantitative data for the Van Westendorp PSM gathered through the survey was also
analyzed through a regression analysis, aiming to find a correlation between the number of
MAUs and the price respondents are willing to pay. The data was plotted in a scatter graph,
where regressions can be fitted to find linear, polynomial, exponential or logarithmic
relationships depending on how the data is transformed (e.g. taking the logarithm of the
dependent variable to fit an exponential relationship). The accuracy of the linear model can
then be determined by R2, a statistical measure of how close the data is to the regression
line. When R2 equals 100%, the linear model fully explains the variation in the dependent
variable, while a value of 90% means the model explains 90% of that variation. (Rachev, et
al., 2011)
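As a sketch of this procedure, with hypothetical MAU and willingness-to-pay figures (not the survey's actual observations), a logarithmic relationship can be fitted and scored as follows.

```python
import numpy as np

# Hypothetical data: monthly active users vs. price per learner (USD).
mau = np.array([1_000, 5_000, 10_000, 50_000, 100_000, 500_000], dtype=float)
price = np.array([2.0, 1.5, 1.2, 0.6, 0.45, 0.2])

# Transforming a variable (here, the logarithm of MAU) reduces a non-linear
# relationship to one that an ordinary linear regression can fit.
x = np.log(mau)
b, a = np.polyfit(x, price, 1)  # price ≈ a + b * log(MAU)
pred = a + b * x

# R^2: the share of the variance in price explained by the fitted line.
ss_res = np.sum((price - pred) ** 2)
ss_tot = np.sum((price - price.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(b < 0, round(r2, 3))
```

A negative slope `b` corresponds to the negative relationship between MAUs and price per learner reported in the abstract, and `r2` quantifies how well the log-linear model fits.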
3.3.5 Thematic Analysis
To process the results gathered from the semi-structured part of the in-depth interviews, the
empirics are reflected upon in a critical manner through a thematic approach (Blomkvist &
Hallin, 2014). This is done by identifying both explicit and implicit ideas within the gathered
results, i.e. themes, where particular words or opinions are analyzed by investigating their
frequency of appearance or co-occurrence (Guest, et al., 2012). A thematic approach, or
analysis, is suitable for processing qualitative empirics (Guest, et al., 2012) and is therefore
applicable to the semi-structured interviews held in this study. Empirics from the
open-ended questions in the survey, as well as answers from the structured part of the
interviews, are also included in the thematic analysis.
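The frequency and co-occurrence counting that underpins such a thematic analysis can be sketched as follows. The theme labels and coded answers below are hypothetical stand-ins for the actual interview material.

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded answers: each interview answer tagged with themes.
coded_answers = [
    {"price sensitivity", "churn"},
    {"learning efficacy", "price sensitivity"},
    {"churn", "price sensitivity"},
    {"learning efficacy"},
]

# Frequency of appearance: how often each theme occurs across answers.
frequency = Counter(theme for answer in coded_answers for theme in answer)

# Co-occurrence: how often two themes appear in the same answer.
co_occurrence = Counter(
    pair for answer in coded_answers
    for pair in combinations(sorted(answer), 2)
)
print(frequency.most_common(1))
print(co_occurrence.most_common(1))
```

The manual step of the thematic analysis is the coding itself; once answers are coded, counts like these make the prevalence of explicit and implicit ideas directly comparable.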
3.4 Quality of Scientific Research
To analyze the quality of a scientific report, the concepts of validity and reliability can be
used (Blomkvist & Hallin, 2014). Both concepts are introduced and analyzed in this section,
together with a description of the actions taken to improve them and hence increase the
credibility of the report (Collis & Hussey, 2014). In addition, source criticism is evaluated
and analyzed.
3.4.1 Reliability
Reliability is a term used to evaluate the precision and accuracy of measurements and data,
as well as the similarity of results if the study were to be repeated (Collis & Hussey, 2014).
Sections 3.1 to 3.3 explain how the methodology is built up to ensure a replicable study,
which increases the reliability. Reliability is reduced if participants provide the answers they
believe are wanted during the study and thereby give a faulty impression (Collis & Hussey,
2014). However, as the respondents were to estimate the price and value of a new product,
there is a lower risk that they would provide the answers they believe are wanted, since no
standards are yet set for the product. Besides, since this research has a commissioner, there
is a risk of bias in the report, as another outcome could possibly have been generated with a
different commissioner. Another risk of bias is associated with the inclusion of different
stakeholders in the value chain that might be affiliated with one another (Collis & Hussey,
2014). To minimize bias, empirical material is gathered from various actors.
When using a survey as a method, the reliability can be threatened if the respondents do not
represent organizations in the correct value-chain position, i.e. an OCP in this case. Further,
there is a threat of inaccessible information due to secrecy. A literature review and in-depth
interviews are used to increase the study's reliability and to complement the limited insights
provided by the survey. In addition, due to the limited research on pricing for AIaaS, the
theories and methods used in this study could possibly be better suited for products with a
clear and established value for the customers and less suited for new and complex product
offerings, which lowers the reliability of the study. Reliability is improved by using a survey
as the method for collecting primary quantitative data, since such data is precise and can be
captured at different points and in different contexts. In addition, quantitative data is
associated with a positivist approach, which in turn is associated with highly reliable
findings. The qualitative data collected through semi-structured interviews is, on the other
hand, associated with the interpretivist approach and usually has a high degree of validity
but a lower reliability. (Collis & Hussey, 2014)
With a limited number of interviewees, and a poor spread among them, the insights into the
edtech industry and the issue of pricing for a new AI product can be insufficient. An
approach used to increase reliability and validity is triangulation: a mixing of methods and
theory bits to verify the findings by allowing diverse viewpoints, in which a number of data
sources, observers, methodologies and theoretical perspectives are combined. Reliability and
validity are also increased as the structured part of the in-depth interviews was performed
interactively between interviewers and interviewees, in a way that allows the respondents to
motivate their selections. The lack of possibilities for respondents to express the strength of
their preferences has been expressed as a shortcoming of proper conjoint analysis. To
further strengthen the report and better interpret the results, the empirical findings from the
survey as well as the in-depth interviews are evaluated together with the literature review
and preparatory interviews. This approach composes a profound form of triangulation and
a better understanding, since it combines different perspectives, and qualitative as well as
quantitative research methods, on the same research area. (Planing, 2014)
3.4.2 Validity
The other aspect of a report’s credibility is validity (Collis & Hussey, 2014). In research studies, face validity, construct validity, internal validity and external validity are ways of characterizing the term (Greener, 2008). Face validity aims to reveal how well a test measures what it is intended to measure by the researchers, as well as how well the phenomena studied are reflected in the results (Collis & Hussey, 2014). Face validity is particularly important in this report to encourage participation in interviews and surveys. By providing a broad picture of how the chosen method relates to the research questions, there is a greater possibility that interviewees and survey respondents are interested in participating (Greener, 2008). Although the authors aimed to provide such a picture through the survey, the response rate only reached 28%. With a lower response rate, the likelihood of nonresponse bias increases; such bias occurs when those who did not respond differ from those who did, which could have an impact on the results (Hager, et al., 2003). As reminders were sent out to the survey’s population group, it can be argued that this led to higher face validity, since it was a way of encouraging participation in the survey (Greener, 2008).
Construct validity means that the method used measures what the authors intend to measure. One way to improve construct validity is to check that the questions used in the survey are in fact asking what the authors think they are asking, for instance through item response theory and factor analysis; this is particularly important in online surveys that are not conducted face-to-face, since there is limited possibility for clarification and discussion of the questions’ meaning. In this research, construct validity was checked by using a test group for the survey consisting of nine respondents. After analyzing how the test group interpreted the Van Westendorp question, the authors modified it to make it clearer and more understandable for the target group. (Greener, 2008)
In business research, the concept of construct validity is also important since it relates to
phenomena that are non-observable, such as hypothetical constructs in the form of ambition, anxiety, motivation and satisfaction. Factors like these can be used to explain and manifest phenomena through in-depth interviews. However, such observations of constructs can also lead to false conclusions. In this report, these kinds of misinterpretations are limited through the use of the survey in addition to the in-depth interviews (Collis & Hussey, 2014). To increase the validity of the in-depth interviews, the authors centered the discussions on three predetermined areas: KPIs, Price and Additional Services. This was done to ensure that the discussion would correspond to the purpose of the study (Blomkvist & Hallin, 2014), while not excluding other areas that might emerge, which the respondents had the opportunity to bring up at the end of the interview.
The third form of validity is referred to as internal validity and relates to causality, which can be examined by asking whether a factor (the independent variable) causes an effect (the dependent variable); it should not be confused with mere association between the two factors (Greener, 2008). In addition to the mentioned kinds of validity, there is also external
validity which is commonly referred to as generalizability, which answers whether the study’s
results can be generalized to other situations and contexts (Greener, 2008). The rationale
behind collecting empirical results from actors in different segments within the edtech space
was to gather results that could be generalized to the whole edtech industry. This was fulfilled by contacting all of the commissioner's potentially accessible customers, regardless of their size or segment. Furthermore, this research aims, through the analysis of the price and value perception of a particular AIaaS for personalized learning within the education industry, to propose a pricing model for all AIaaS offerings. This is attempted by reviewing literature on price and value determination, which is independent of industry, and combining it with the empirical data gathering. By doing this, the authors attempt to reach generalizability by formulating the proposed model in general yet relevant parameters and variables that can be applied to other industries that would benefit from AIaaS.
The interpretivist approach is beneficial for achieving high validity, while research errors affect validity negatively. Examples of such errors to be aware of are inappropriate and misleading data-gathering methods as well as poor sample sizes (Collis & Hussey, 2014). Since it is impossible to reach all relevant company representatives within the targeted segment, due to their large number and limited contact possibilities, only the section of the target group judged most likely to respond was contacted (Greener, 2008).
Relating validity to the survey used in this report, there is a risk that the answers to “Please estimate your current expenses in USD per month for IT services such as data storage and other cloud services” are interpreted differently by different actors, since respondents could include different types of IT services in their estimations. However, since it is only an estimation, the authors use these figures merely as an indication of the respondents’ current IT expenses. Nine respondents were used as a test group for the Van Westendorp question, whose formulation was re-assessed based on the interpretations of these nine test respondents, which improves the construct validity (Greener, 2008). After analyzing the collected responses from the test round, it was noticed that some respondents did not fill in the Van Westendorp questions in the way intended, and that some of the earlier respondents did not understand the product. After realizing this, the survey was altered and a more detailed survey explanation was provided to ensure a higher rate of relevant answers. After the test round, more emphasis was placed on relating the questions directly to the information needed and on limiting ambiguity in the questions’ purpose to the extent possible. To improve validity, the nine answers provided by the test respondents, which did not correspond to the survey’s purpose, were excluded from the final analysis to ensure a fair and systematic data-gathering approach (Greener, 2008). Immediately after each interview, to improve the findings’ validity, time was spent clarifying the notes taken during the interview, to make sure that all relevant information was accurately documented (Collis & Hussey, 2014).
3.4.3 Source Criticism
The primary and secondary literature sources used in this report’s literature review and methodology come from published peer-reviewed journals as well as published books, which have been discussed critically by the authors throughout the report. However, due to the limited number of relevant publications available in the area of pricing AI products and AI for personalization in the edtech industry, industry reports have also been used and triangulated with theory and empirics to compensate for the fact that these reports have not been peer-reviewed. To improve the quality of sources in the primary research, the target group was organizations within the edtech industry. Within this industry, OCPs were targeted as the relevant respondents for this research’s survey and interviews. To the extent possible, CEOs and founders of the respective companies were contacted for interviews and asked to fill out the survey, since they have a holistic view of the company, which is beneficial for understanding the questions. (Greener, 2008)
3.5 Ethics
There are four basic ethical requirements that research within social science has to meet: (i) the information requirement, (ii) the consent requirement, (iii) the confidentiality requirement and (iv) the good use requirement (Blomkvist & Hallin, 2014). The first requirement entails that the interviewees, and other people included in the study, are informed about the purpose of the study. This was explained on the survey’s cover page as well as in the email sent out to the sample population. For the interviewees, it was also explained briefly in an email, followed by a more detailed explanation at the beginning of each interview. The second requirement means that the people being studied have agreed to participate. The people included in this study have participated of their own free will and have thereby given their consent. The third requirement, confidentiality of all collected data, has been met as each responding organization is treated confidentially, with neither position nor organization name displayed in the research. The collected data is handled with care so that only individuals with the correct access (the authors, the supervisor and the commissioner) can access the data if needed. The fourth and last requirement is met as the individuals studied were informed about the purpose of the master thesis and that it will be shared on KTH DiVA, the publication portal of the university at which the thesis was carried out, KTH Royal Institute of Technology in Stockholm, Sweden. (Blomkvist & Hallin, 2014)
3.6 AI for Education
The product presented in the survey is an AI technology that finds every student’s optimal learning path by recommending the right question or explanation at the moment the student finds it most engaging, thereby increasing learning efficacy. The API developed by Sana is
integrated into educational platforms to scalably personalize their users’ learning path. Sana
has identified several KPIs for its potential customers that will be improved through the product: time spent, number of problems solved, churn rate, engagement and daily activity, retention and learning efficacy. Additional services identified by the commissioner are: dedicated solutions engineer, dedicated infrastructure for unparalleled performance, executive briefings, enterprise-class SLA, custom API endpoints, unlimited recommendations and weekly performance report. When choosing which KPIs to investigate
in this study, the authors together with the commissioner decided that the best suited would
be the ones that are as tangible as possible in order to make it easier for the respondents to
understand the implications of the KPIs. For that reason, the chosen KPIs were the ones that
were quantitatively expressed based on Sana’s data. The three KPIs chosen as attributes for the fictive product used in the structured part of the in-depth interviews were: 4.5x more problems solved by users, 12% improved churn rate and 2x learning efficacy. Three additional services were also chosen: dedicated solutions engineer, weekly performance report and executive briefings, as these were considered by the commissioner to be the most developed among the previously mentioned additional services. These selections were made upon recommendations from the commissioner, as the chosen attributes were thought to be the most significant and to reflect the widest scope of value perception by an online education provider. Further, segments have been identified that relate to the type of end user and affect the willingness to pay, as different business models are used across the segments. The hitherto identified segments are online educators that provide content for: K12 (lower, middle and high school), Higher Education (universities and colleges), Testprep, Enterprise and Learning Management Systems (LMS).
4. Results and Analysis
This section presents the findings of the study where data was collected through a survey and in-depth interviews.
The survey responses are presented and visualized through graphs and a table. Part of the survey applied the Van Westendorp method, which was used to find potential customers’ willingness to pay for an AI product aimed at the edtech industry. These findings are analyzed through various graphs that show
different aspects of the results. In addition, the structured part of the in-depth interviews is presented in tables
that display respondents’ product choices with respect to the specific product attributes. Lastly, the semi-
structured part of the in-depth interviews is presented in a body text that is divided into the already determined
themes as well as the new themes that emerged during the process. The respondents’ organizational positions are
presented in Appendix II.
4.1 Survey Responses
This section presents, in the form of graphs and body text, facts about the target group as provided by the survey respondents. Graph 1 displays the number of respondents in each tier
group sorted on the number of MAUs. The tier with 100,000 - 1,000,000 MAUs is the largest
response group with 11 respondents. The two other highly represented response groups had
up to 100,000 MAUs. The smallest represented response group is the one with over
10,000,000 users. The total number of survey respondents amounted to 41.
Graph 1. The number of survey respondents in each tier group (two respondents did not answer this question)
Graph 2 displays the number of surveyed organizations that have considered personalizing their education offering.

Graph 2. Number of respondents that have considered personalizing their education offering

Of those who have considered personalizing their education offering, Graph 3 displays which ones have also considered developing it in-house with internal resources.

Graph 3. Number of respondents that have considered personalizing their education offering and developing it in-house
Of the 30 respondents who had considered developing the personalized education offering in-house, 14 could not provide an estimation of the related development costs, whilst 16 had a defined estimation. The results varied from fixed amounts of USD 12,000 to USD 10,000,000 and yearly costs of USD 240,000 to USD 500,000 per year. The costs presented in Graph 4 below have been converted to fixed fees, where yearly costs were multiplied by an estimated period of 5 years, based on the estimate of the CEO of Sana Labs (2018). Tier 3, with 100,000 to 1,000,000 MAUs, was over-represented, with six respondents answering this question.
Graph 4. The estimated fixed cost of developing corresponding personalization product in-house
Table 4 presents the explanations of the respondents who did not believe that personalization of education applies to their business. In some cases, the respondent believed personalization of education to apply, but not through the use of AI.
Table 4. Presents the explanations for why personalization does not apply to the respondent’s business.
Respondent (Tier Group) — “Please explain the reasons for why personalization of education does not apply to your business.”

R8 (Tier 2): The reasons for not having put together a costed plan is due to the unproven commercial viability of AI offers in the marketplace. Schools and colleges are unsure of the value and it is not only the cost of the technology but the overhead of generating metadata and content for the algorithms that need to be considered.

R2 (Tier 3): Our company provides online educational games to millions of students each month. Since we adhere so strictly to all COPPA standards, we do not collect any personally identifiable information. We maintain that what we do best is provide a tool that teachers can use to reinforce their lessons, and which they know to be completely safe for use by children. If we were to personalize our lessons, we would have to reconsider our business model and our COPPA compliance strategies.

R33 (Tier 3): We do allow for adaptivity (real-time feedback and adaptive pathways) and personalization (randomized question data for example, ability to drawn on answers and student data to make more personal) using "designed adaptivity" which means that the adaptivity is authored with our platform in a simple "If student does this, then provide that feedback or take these actions."

R20 (Tier 1): Our company is geared toward teachers' professional development. Our company is completely free & we are not at that point yet because of lack of funding.

R9 (Tier 1): It does, but for us personalization of education means breaking down the hierarchies, barriers and perceptions of traditional education and empowering learners. Learning is personalized by learners being in a position to set their own goals, provide feedback to peers and get feedback from peers, etc.

R5 (Tier 6): We develop software for schools, a management information system. We personalise the system to each school. Our costs are great as we build everything in-house and have a team of developers constantly working, so the costs are high. Though, I do not think we are developing personal algorithms.

R23 (Tier 1): It does to our specific solution.

R16 (Tier 1): It does apply, it is just not something we have been able to focus on quite yet. However, it is something we could see ourselves doing more of in the future, depending on how our platform evolves.

R11 (Tier 1): We would like to have it but not develop it in-house. Chances are we will partner with companies like [an AIP within the edtech space] to get this done.

R17 (Tier 2): Our company focuses on student-agency and developing strong bonds between students and faculty so there is no focus on AI and ML to fulfill the organization's goal.
The respondents were then asked about their current monthly IT expenses such as data
storage and cloud services. The answers are presented in Graph 5.
Graph 5. The monthly IT expenses of each respondent
4.2 Van Westendorp Price Sensitivity Meter
The collected responses from the Van Westendorp method are presented as a series of cumulative distribution frequencies, which indicate a price range for an AI product for personalization in the edtech industry. The results collected through the survey and the semi-structured interviews show an organization’s MAUs and the corresponding price sensitivity, presented below through the Van Westendorp PSM in Graphs 6-9. The number of respondents for the Van Westendorp PSM amounts to 32. Graph 6 shows the range up to a fixed price of USD 20,000 per month as cumulative frequencies, where the cumulative frequencies too cheap and cheap have been inverted to more clearly identify the intersection points between too cheap and expensive as well as between cheap and too expensive. The range between these intersection points is known as the acceptable price span (Van Westendorp, 1976). The price span for a fixed monthly price, independent of the number of MAUs, lies between USD 5,000 and USD 10,000. The OPP lies at USD 5,000, where two thirds of the respondents find this price neither too cheap nor too expensive.
Graph 6. The cumulative frequency of respondents up to USD 20,000 to show accepted price range of USD
5,000 to USD 10,000
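To make the intersection logic of the PSM concrete, it can be sketched as follows. This is an illustrative reconstruction with hypothetical respondent answers and an assumed price grid, not the study's survey data or exact procedure.

```python
# Illustrative Van Westendorp PSM sketch with hypothetical answers
# (not the study's survey data). Each respondent states four monthly
# prices in USD: too cheap, cheap, expensive and too expensive.
import numpy as np

too_cheap     = np.array([3000, 4000, 5000, 6000, 8000])
cheap         = np.array([5000, 6000, 8000, 9000, 10000])
expensive     = np.array([8000, 9000, 10000, 12000, 15000])
too_expensive = np.array([9000, 12000, 15000, 20000, 25000])

prices = np.arange(0, 26000, 500)  # assumed evaluation grid

def crossing(desc_answers, asc_answers):
    """First price where the ascending curve (share finding p at least
    this expensive) meets the inverted, descending curve (share finding
    p at most this cheap), skipping the region where both are zero."""
    for p in prices:
        asc = np.mean(asc_answers <= p)    # e.g. share saying "expensive"
        desc = np.mean(desc_answers >= p)  # e.g. share saying "too cheap"
        if asc >= desc and asc > 0:
            return p

# Lower bound of the acceptable span: "too cheap" crosses "expensive".
pmc = crossing(too_cheap, expensive)
# Upper bound of the span: "cheap" crosses "too expensive".
pme = crossing(cheap, too_expensive)
print(pmc, pme)  # the acceptable price span in USD per month
```

With these hypothetical answers the sketch yields an acceptable span of USD 8,000 to USD 9,500 per month; the study's actual span of USD 5,000 to USD 10,000 comes from the real survey data in Graph 6.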
Graph 7 is based on the same data as above, but with the price displayed per user (learner). As the respondents provided a range of MAUs, a lower, an upper and a median value of the MAU span can be used to calculate the price per user. The accepted price range lies between USD 0.1 and USD 1 per learner for the low end of the MAU range. The OPP lies at USD 0.011 per learner per month based on the same end of the range.
Graph 7. The cumulative frequency of the monthly price per MAU, calculated with the lower bound of the
MAU range given
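As a small numeric illustration of the calculation behind Graphs 7-9, the fixed monthly willingness to pay can be divided by the lower, upper and median values of a respondent's MAU range. The figures below are hypothetical, not taken from the survey.

```python
# Hypothetical respondent: a fixed monthly willingness to pay and a
# stated MAU range (illustrative numbers, not the study's data).
monthly_price = 5000.0               # USD per month ("expensive" answer)
mau_low, mau_high = 10_000, 100_000  # lower and upper bound of MAU range

# Dividing by the lower MAU bound gives the higher per-learner price,
# which is why the lower-bound graph shows a higher span than the
# upper-bound graph.
price_low_bound  = monthly_price / mau_low             # 0.50 USD/learner
price_high_bound = monthly_price / mau_high            # 0.05 USD/learner
price_median     = monthly_price / ((mau_low + mau_high) / 2)
```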
Graph 8 is based on the higher bound of the MAU range, yielding a lower price span than Graph 7, which is based on the lower bound and thus gives a higher price per user. The accepted price range lies between USD 0.01 and USD 0.046 per learner. The OPP lies between USD 0.010 and USD 0.011 per learner per month.
Graph 8. The cumulative frequency of the monthly price per MAU, calculated with the higher bound of the
MAU range given
Graph 9 is based on the median of the MAU range given by the respondents. The accepted price range lies between USD 0.02 and USD 0.06 per learner. The OPP lies at USD 0.020 per learner per month, where two thirds of the respondents find it neither too cheap nor too expensive.
Graph 9. The cumulative frequency of the monthly price per MAU, calculated with the median of the MAU
range given
To show the relationship between the number of MAUs and the corresponding price per student, the collected answers have been plotted in a scatter diagram, with a regression line showing the trend of price against MAUs, as shown in Graph 10. When taking the full sample into account, the value of R2 is 0.567, meaning that the trend line explains 56.7% of the variance in the price between respondents.
Graph 10. The scattered points of the natural logarithm of the number of MAUs against the natural
logarithm of the price willing to pay (expensive)
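The log-log regression behind Graphs 10-12 can be sketched on synthetic data as below. The MAU values, the assumed inverse price relationship and the noise level are our own assumptions, used only to show how the slope and R2 are obtained.

```python
# Log-log regression sketch on synthetic data (not the survey sample).
import numpy as np

rng = np.random.default_rng(0)
mau = np.array([1e3, 1e4, 5e4, 1e5, 5e5, 1e6, 5e6, 1e7])
# Assumed inverse relationship between MAUs and per-learner price,
# with multiplicative noise:
price_per_user = 50.0 / mau**0.5 * np.exp(rng.normal(0.0, 0.2, mau.size))

x, y = np.log(mau), np.log(price_per_user)
slope, intercept = np.polyfit(x, y, 1)   # trend line in log-log space
y_hat = slope * x + intercept
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
# A negative slope means the price willing to pay per learner falls as
# the number of MAUs grows; r2 is the share of variance in ln(price)
# explained by the trend line.
```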
Graph 11, which displays a regression analysis for Tiers 1 and 2, shows a trend line for the relationship between the price (expensive) willing to pay per user and the number of MAUs. One can see that the price per user decreases with the number of MAUs; R2 is 90.6% for this relationship. Here, the cases of 1-10,000 MAUs have been included in the graph. The cases of 1 MAU were not included in Graph 10, as they dramatically skew the data; they are included in the lower-tier graph to make it possible to show a trend at the different ends of the spectrum.
Graph 11. The scattered points for Tier 1-2 of the natural logarithm of the number of MAUs against the
natural logarithm of the price willing to pay (expensive)
Graph 12 also shows a regression analysis of the relationship between the price (expensive) per user and the number of MAUs, but for the tier groups with a larger number of MAUs. The relationship is similar to that for Tiers 1-2, where the price (expensive) willing to pay per user decreases with the number of MAUs on the platform. R2 is 90.1% for Tiers 3-6.
Graph 12. The scattered points for Tier 3-6 of the natural logarithm of the number of MAUs against the
natural logarithm of the price willing to pay (expensive)
Graph 13 is based on the average and median price (expensive) willing to pay as well as the
median and average MAUs for each tier group, to more clearly show the trend between the
different tier groups.
Graph 13. The median price willing to pay per tier group with trend lines for average of cheap and too cheap as
well as average of expensive and too expensive
4.3 Analysis of Quantitative Results
Here, an analysis of the results gathered from the survey is presented. The section is divided into four key areas: price related to the number of MAUs, the price span, IT costs, and the cost of developing a corresponding AI-based personalization product in-house.
4.3.1 MAUs
The customers’ willingness to pay depends on several factors. One such factor, supported by the survey, is the number of MAUs, a variable that shows a clear relationship to the willingness to pay. From Graphs 10-13, one can see that the trend line between price per user and number of MAUs is negative, meaning that the more MAUs an online learning platform has, the less it is willing to pay per user. In contrast, a smaller platform with fewer MAUs is willing to pay more per user but less as a total sum (possibly up to a certain limit), because of the limited number of users at smaller platforms as opposed to the higher number at larger platforms.
To analyze this further, one can look at Graph 10, which shows the negative exponential relationship between the price (expensive) willing to pay and the number of MAUs: the higher the number of MAUs, the less the online platform is willing to pay per student. In this graph, one can see clusters forming lines that follow the same gradient as the trend line. Why these clusters exist and what the data points within them have in common cannot be determined from the graph, but one hypothesis is that it relates to which segment the OCP belongs to. One can also see that the variance in willingness to pay is much higher for organizations with 100,000 users (amounting to ln(MAU)<11) or fewer, as shown by data points with the same number of users but differing price perceptions. One reason for this, however, is that the smaller segment contains far fewer data points than the segment above 100,000 users. The data points for the tier groups with MAUs above 100,000 clearly follow the trend line. An aspect that must be taken into consideration is the number of respondents within the tiers above 100,000 users, which in total was much larger than in the smaller tier groups, as well as the variance in the number of MAUs.
From Graph 13 it can also be seen that the difference in price perception between the tier groups varies much more for what is perceived as too expensive, as these curves lie much higher than those for what is considered too cheap. In this graph, one would expect a clear trend between the tier groups, where Tier 6 prefers the lowest price per user and Tier 1 the highest. However, both Tier 2 and Tier 3 are exceptions that do not follow the expected trend line. This could potentially be the tipping point where price perception varies the most, which is also indicated by Graph 14. Following the trend lines drawn for the upper and lower ranges of the graph, one can read the price ranges in USD per student per month for each tier group in Table 5.
Table 5. Showing the various price ranges in USD per student per month for each of the six tier groups.
Tier 1 0.023 - 0.00
Tier 2 0.020 - 0.53
Tier 3 0.016 - 0.38
Tier 4 0.012 - 0.24
Tier 5 0.008 - 0.08
Tier 6 0.001 - 0.004
Based on these results and the ends of the ranges of each tier group, Graph 14 presents the
possible scenarios (a high case and a low case) of the total monthly fixed fee for each tier
group based on the answers in the survey.
Graph 14. A high and a low case of monthly fee for the different tier groups
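The computation behind the high and low cases can be illustrated as below: the total monthly fee is the per-learner price (the ends of a tier's range, as in Table 5) multiplied by a representative number of MAUs. The representative MAU counts per tier are our own assumptions, not the study's tier definitions.

```python
# Hypothetical reconstruction of the Graph 14 logic: total monthly fee
# = per-learner price (range ends as in Table 5) x representative MAUs.
# The representative MAU counts below are assumptions, not the study's
# actual tier boundaries.
tiers = {
    # tier: (representative MAUs, low USD/learner, high USD/learner)
    2: (50_000,    0.020, 0.53),
    3: (500_000,   0.016, 0.38),
    4: (5_000_000, 0.012, 0.24),
}

fees = {t: (maus * low, maus * high) for t, (maus, low, high) in tiers.items()}
for t, (low_fee, high_fee) in fees.items():
    print(f"Tier {t}: USD {low_fee:,.0f} - {high_fee:,.0f} per month")
```

Under these assumed MAU counts, each larger tier's total fee exceeds that of the smaller tier even though its per-learner price is lower, which is the pattern the trend line in Graph 14 suggests.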
The results from Graph 14 may not be useful or realistic for all tier groups, as Tier 6 ends up paying less than Tier 2. However, looking at the trend line, one could assess a pattern in which the tiers with a higher number of MAUs pay more per month than the tiers with fewer MAUs to be more realistic. The answers received in the survey show a breaking point from Tier 4 and upwards. This can be due either to a higher willingness to pay in Tier 4 than the general trend would suggest, or to a lower willingness to pay among the respondents from Tiers 5 and 6. Nevertheless, a pricing model would rely on a larger total monthly fee for the larger tier groups. They may pay a smaller fee per user, but the total amount should exceed that of the tier group below.
Also to be noted is the variance within each tier group: for Tier 6, the prices considered too cheap and too expensive lie very close together (a difference of only USD 0.003 per student), while for Tier 2 this difference is USD 0.999 per student and for Tier 1 USD 1.48 per student, showing a trend where the variance decreases as the number of MAUs increases. Once again, the number of respondents within each tier group has to be taken into account, although in general this seems to hold for the tier groups with a lower number of MAUs.
4.3.2 Price Span
The results from the PSM showed that an edtech actor’s willingness to pay for an AIaaS
product is a monthly fixed fee of USD 5,000 - USD 10,000. From the Van Westendorp
analysis, it can be seen that the price span per user depends on whether the calculation is based on the lower or the upper end of the provided MAU span. When considering all the tiers and analyzing the results as if the number of MAUs were the upper end of the range, the price span lies between USD 0.01 and USD 0.1 per user, a range of USD 0.09. Considering the lower end of the MAU span gives an accepted price range of USD 0.1 to USD 1.0 per user, a range ten times larger than that for the higher number of MAUs. It is therefore very difficult to interpret a tier's willingness to pay, particularly for the smaller tiers, where the price span can vary hugely depending on whether the platform has 10 or 10,000 MAUs, and the effect on the price per student becomes just as large. From Graph 9, where the median of the respondents' user range is used, the price span becomes much smaller: a span of only USD 0.04.
4.3.3 Cloud and IT costs
One possible factor related to a pricing model is an organization's current expenses within
other services such as cloud and IT. The survey respondents were initially asked this question
in order for the authors to find a relation between the OCPs’ willingness to pay and their current IT expenses. If current IT expenses were to mirror the willingness to pay for the personalization product, the AIaaS provider could benchmark against cloud providers and make use of a competition-based pricing model. Not all respondents who provided a figure for their organization's current IT expenses also answered question 8, the Van Westendorp question on willingness to pay (cheap and expensive). For the ones who did, Graph 15 shows the willingness to pay (cheap and expensive) together with the current monthly IT expenses, in order to investigate whether a relationship exists between the two.
Graph 15. The monthly IT expenses of each respondent in relation to the willingness to pay
Although the willingness to pay may seem to increase slightly with current IT expenses, there are several anomalies, with peaks in willingness to pay from respondents R19, R11, R3, R26, R36 and R7. It would therefore be risky to conclude that increased IT expenses relate to an increased willingness to pay, and no clear relationship can be established between the IT expenses provided and the willingness to pay for the personalization product. In addition, there are uncertainties in the figures provided for current IT expenses, as these may include varying products: one respondent may only provide its storage costs, whilst others may include several services in the figure provided. This is an uncertainty that must be taken into account.
4.3.4 In-house Development Costs
An additional economic aspect to be discussed from the survey responses is the estimation of
the cost of developing a corresponding personalization product in-house. However, only 16 of
the respondents were able to provide an estimation of this. With an estimation provided by
the commissioner, the development of a corresponding personalization product in-house
would cost the OCP four to eight times more than purchasing it from a third party provider
(based on five ML developers working full time over five years, which in total costs
approximately USD 5,000,000). From Graph 4, one cannot draw a conclusion of the
relationship between the estimated cost of developing the personalization product in-house in
comparison to what the respondents are willing to pay. Furthermore, only one respondent
estimated the development cost to be above the commissioner’s estimation of USD 5,000,000
whilst the remaining 15 respondents estimated the development costs to be below the
estimation by the commissioner. The reason for this could be twofold: first, the respondents
may not have based their estimations on the same variables, such as time frame and number
of developers, as the commissioner, and therefore reached a different figure. Second, the
respondents may not have estimated an exactly corresponding product with the same features
as the one developed by the commissioner, hence not recognizing the extent of the product’s
impact. The variety and uncertainty that come from these differing assumptions affect the
reliability and validity of the results. Some respondents estimated the development costs to
be as low as USD 12,000 - USD 14,000. Considering the salary of a data scientist and the time
required to develop such a product, it is unrealistic that these OCPs estimated the
development costs for the same type of product as the commissioner’s. That an OCP would
spend four to eight times more by developing a personalization algorithm in-house therefore
remains plausible, as the OCPs are likely to have assumed different values for the variables.
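The commissioner's estimate can be sanity-checked with simple arithmetic. The per-developer-year cost below is an assumption chosen to match the stated USD 5,000,000 total, not a figure from the study:

```python
# Sanity check: 5 ML developers working full time over 5 years totalling
# approximately USD 5,000,000 implies a fully loaded annual cost per
# developer of about USD 200,000 (an assumed figure).

developers = 5
years = 5
cost_per_dev_year = 200_000  # assumption, chosen to match the stated total

in_house_total = developers * years * cost_per_dev_year
print(in_house_total)  # 5000000

# "Four to eight times more" than buying from a third party implies an
# equivalent purchase cost over the same period of roughly:
purchase_low = in_house_total / 8
purchase_high = in_house_total / 4
print(purchase_low, purchase_high)  # 625000.0 1250000.0
```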
4.4 Product Preferences
The results from the structured part of the interviews, which was included in 16 of the total 24
in-depth interviews, are presented in this section. Table 6 presents the number of times each
attribute was available for choice as well as how many times each attribute was chosen. The
attribute levels assigned numbers in Table 6 are presented in chapter 3 under section 3.2.2.
The different products presented to the respondents, as well as all of the product choices
made by the 16 respondents, are presented in Appendix I.
Table 6. The number of times each attribute was available for choice as well as how many times each attribute
was chosen in absolute numbers and as a proportion of the total times available
                   KPI               Price             Additional Service        None
                   1     2     3     1     2     3     1     2     3     4
Available:         160   192   176   144   176   208   112   128   128   160   112
Chosen attribute:  61    30    45    48    43    45    30    31    24    51    24
Proportion:        38%   16%   26%   33%   24%   22%   27%   24%   19%   32%   21%
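The proportion row of Table 6 can be reproduced from the two rows above it. Note that Price level 1's availability is taken as 144, which is consistent with 48 choices at 33%:

```python
# Recomputing the "Proportion" row of Table 6: times a level was chosen
# divided by times it was available for choice, rounded to whole percent.
# Column order: KPI 1-3, Price 1-3, Additional Service 1-4, None.

available = [160, 192, 176, 144, 176, 208, 112, 128, 128, 160, 112]
chosen    = [61,  30,  45,  48,  43,  45,  30,  31,  24,  51,  24]

proportions = [round(100 * c / a) for c, a in zip(chosen, available)]
print(proportions)  # [38, 16, 26, 33, 24, 22, 27, 24, 19, 32, 21]
```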
As the product choices per se are not of primary importance, they are presented here and
discussed in section 5.1 with a focus on the attributes that the selected products consist of.
The most apparent and significant results from the ten product choice questions are
presented in Table 7.
Table 7. The most apparent results from respondents’ product choices
Question # Findings
1 When the price remained constant at USD 10,000, six of the 16 respondents would not choose any of the three product bundles presented. Five of 16 preferred the option with 2x learning efficacy and dedicated solutions engineer for USD 10,000
2 No conclusion regarding respondents’ preferences can be drawn, as all four alternatives were chosen an equal number of times
3 Seven of 16 respondents chose the alternative with 2x learning efficacy, USD 7,500 and weekly performance report
4 The equally preferred choices, with six votes each, were the combination 12% improved churn rate, USD 7,500 and dedicated solutions engineer and the combination 12% improved churn rate, USD 5,000 and no additional services
5 Eight of 16 respondents chose 2x learning efficacy, USD 10,000 and weekly performance report. Five respondents would not choose any of the products presented
6 Ten of 16 respondents chose 2x learning efficacy, USD 5,000 and no additional services
7 The service dedicated solutions engineer appears to be more appealing than weekly performance report
8 Eight of 16 respondents chose 2x learning efficacy, USD 5,000 and no additional services
9 Nine of 16 respondents chose 2x learning efficacy, USD 5,000 and no additional services
10 Eight of 16 respondents chose 12% improved churn rate, USD 7,500 and executive briefings
4.5 Value Perception
This section presents the most common and significant themes highlighted during the semi-
structured part of the in-depth interviews. These results are related to value perception and
price for a B2B personalization product within edtech.
4.5.1 KPIs
The 12% improved churn rate KPI is important or the most important KPI for some
organizations (R27; R30; R34; R38; R42; R43). One organization expressed that this KPI
accomplishes the other KPIs (R27), and it provides a holistic view of an organization’s
performance since it can be translated into an increase in revenue for the organization (R30;
R42), which makes it easier to estimate the product’s actual worth for the organization (R42).
One organization did not consider this KPI important (R36). An organization that provides
support and courseware for educational institutions expressed that it is not concerned with
churn rate, since that is the schools’ concern (R33). It was also expressed that this KPI is
more applicable to higher education than to K12 (R40). An issue with measuring churn rate
within the K12 segment is that students are usually assigned to use the platform as part of
the coursework and may not have a choice in whether to use it, which makes the KPI less
useful. For that reason, it was suggested that a better KPI would be engagement, expressed in
daily, weekly or monthly terms (R24).
The KPI 2x learning efficacy is important or the most important KPI for several organizations
(R24; R28; R34; R35; R36; R43; R44), one of which claimed that when this KPI is reached, the
other two KPIs will automatically be reached as well (R28). One organization expressed that
they would be willing to pay millions to have a tool that guarantees a 100% increase in
efficacy (R24), while another organization is willing to pay anything for a product that brings
this increase in efficacy (R44).
One organization expressed that this KPI is more important than 4.5x more problems solved
(R30). Another organization, which prices its K12 product as a subscription per student per
school, expressed that for its business model it does not matter how long the learning takes
(R42). Another organization emphasized the ambiguity in promising 12% improved churn rate
while at the same time delivering 2x learning efficacy, since it could be read as implying that a
rather “low” effort (12% improved churn rate) suffices for doubling the learning efficacy (R40).
Further, one organization expressed that learning efficacy is the goal of the organization,
which is reached without this kind of AI personalization product (R33). Another respondent
highlighted that learning efficacy is a highly complex KPI that takes a long time to measure
(R35) and can be reached by asking the right questions, which is enabled through ML (R39).
For that reason, a respondent argued that using this kind of KPI would amount to
oversimplifying something highly complex (R35).
A B2B organization within the edtech space expressed that, specifically for them, it is not
about increasing the end learners’ efficacy but rather about equipping teachers to be as
efficient as possible when teaching. For that reason, it is more important for them to increase
the teachers’ efficacy by providing solutions that counter the problem of not having enough
time to teach. The same respondent added that it does not matter if the learners use their
product more efficiently, since the K12 segment dedicates a certain number of teaching hours
regardless of how efficient the learning is. For that reason, since a teacher decides on the
teaching content and supports the learning, it is beneficial to have all students at the same
level. What the organization does to personalize its offering is to provide tips and
recommendations on what to re-read. It is therefore claimed that efficacy for students is not
important, as the learning can take more or less time and the schools are not striving to make
it more efficient (R42). An e-learning organization within the enterprise segment questions
the concept of improved learning efficacy, as “faster training” is not their intention. It is
important that the learners do not speed through the courses, as the content is important for
their work (R34).
The 4.5x more problems solved KPI is the most important KPI for some organizations (R29; R43)
and less important for others (R34; R36). For a software vendor within higher education, this
KPI is considered very important (R43). It was also argued that this KPI is not useful, since it
focuses on quantity and does not reveal anything about the actual quality for the end learners
(R35; R38). Another organization suggested that rather than measuring the number of
problems solved, one could quantify the level of engagement connected to the learning
outcomes (R39). An issue with this KPI is whether it actually measures how well a learner is
learning: the issue is not how many problems a learner faces but rather how appropriate they
are (R24). A KPI that pays regard to the organization’s return on investment (ROI) is desired
(R40; R46), thereby basing it on the value for the customer (R46).
In addition to the KPI-specific comments above, it was expressed that it must be proven that
the KPIs work for specific customers and are not limited to a certain customer segment
(R35). One respondent stated that 100% increase in efficacy or 4.5x more problems solved is
unrealistic, while 12% improved churn rate is more realistic; the organization would therefore
want independent institutions to guarantee these KPIs for that specific organization (R44).
One respondent perceived the KPIs used as reasonable (R36). However, it was expressed
that a student perspective is important in order to deliver value to actors within the education
industry, and to think in terms of “what delivers value to the learners?”. It is therefore
important to be able to measure the amount of knowledge and the intended learning
outcomes the learners achieve and fulfill (R36). Lastly, another organization expressed the
difficulty in quantifying these values into KPIs without considering the volume and profile of
users (R39). One interviewee expressed that different learning segments care about different
KPIs (R43) and that the KPIs must be proven within each specific segment, as it is not
enough to prove KPIs from other case studies (R35).
4.5.2 Price Span
As already mentioned, two organizations expressed that they would be willing to pay millions,
or even anything, for this kind of product (R24; R44). Three other organizations expressed
that a price of USD 10,000 per month is “incredibly cheap” (R26; R28; R29), especially if it is
not linked to the number of users connected to the platform (R26). Other respondents
support that the suggested prices are low (R40; R46) and that the lowest amount could be
raised to USD 8,000 per month (R40). Another organization expressed that USD 10,000 per
month seemed reasonable (R35). Other organizations expressed that USD 10,000 per month
is a pricey investment relative to the organizations’ current goals (R33; R39), since it would
correspond to the cost of 1.5 of their team’s computer engineers (R33).
It was stated that the prices used in the structured interview part only served the upper end
of the market (R34). To make this investment appealing, the product must increase the
customers’ revenue: a price of USD 10,000 is not too high if the product would justify the
customer charging a higher price per learner. This could be done by delivering desirable KPIs
for the curriculum partners and for the market, for example a significantly higher level of
learner engagement (R39). It was further stated that larger organizations would be able to pay
USD 10,000 per month but schools would find it expensive; however, if the technology were
proven, schools might be willing to pay this amount as well. The willingness to pay for this
product thus depends on the size of the organization (R43). Several organizations prefer a
price setting based on the organization’s volume of MAUs, since a price’s impact differs with
the size of the organization that is buying the product (R34; R35; R36). On the other hand, it
was also mentioned that a fixed price is more appealing than a volume-based fee (R26), since
costs that depend on variables tend to become much more expensive and uncertain for the
organization than fixed prices (R39). A B2B actor expressed that in their position, it is
difficult to set a price based on user volume; the respondent would rather see it formulated in
relation to the size of the solution or API (R29). Another desired way to price this kind of
offering is through royalty shares, in which both risks and rewards are shared between the
two organizations (R39). Other respondents highlight that the price should be set per student
per year rather than as a monthly fee (R36; R40; R41; R43; R44). If the price is set per
student, one would possibly need to apply price discrimination by geography since, for
example, schools in China have far more students than schools in Finland, which would
make a per-student fee too expensive in China (R40).
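As an illustration only, the volume-based, geographically differentiated pricing the respondents describe could take a form such as the sketch below. None of the tiers, fees or factors come from the study; they are invented to make the structure concrete.

```python
# Hypothetical per-student annual pricing: the fee per student declines
# with MAU volume (the economies-of-scale preference voiced by R34/R35/R36),
# and a country factor allows geographic discrimination (as R40 suggests).
# All numbers are invented for illustration.

def annual_price(maus, country_factor=1.0):
    """Total annual fee for a platform with `maus` monthly active users."""
    if maus < 10_000:
        per_student = 4.0   # small platforms pay more per learner
    elif maus < 100_000:
        per_student = 2.0
    else:
        per_student = 1.0   # volume discount for large platforms
    return maus * per_student * country_factor

print(annual_price(5_000))                          # 20000.0
print(annual_price(500_000, country_factor=0.5))    # 250000.0
```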
A respondent from a company that is scaling rapidly expressed that, for companies in this
stage, a fixed monthly price is preferred. For earlier-stage start-ups, on the other hand, the
optimal pricing model would be a per-student price that scales with the start-up before
leveling off to a more beneficial fixed monthly price (R33). It was stated that a subscription
fee is a decent method for pricing the product, but another preferred price setting would be a
one-time fee, which would allow the organization to own the technology and evolve it as
they wish (R35).
The costs associated with content creation should be an additional variable to consider when
evaluating the price for this product (R21). Moreover, the price should not be fixed but rather
related to the price of a textbook, which varies greatly between countries. For that reason,
the price should not be expressed in absolute numbers but rather compared to prices in the
geographic market (R44). Specifically for start-ups, a respondent highlighted the importance
of reducing the entrance price point as much as possible. In this stage, it is critical to show
value very fast, for example by showing student engagement and performance before and
after the product. However, some of the factors affecting sales are intangible and can be hard
to trace back to the product provider. A higher level of engagement can, however, be
quantified as improvement in the customer’s business in the long term. Once the customer
has become dependent on the product, they will be willing to pay for it, since the
organization’s offering will not be as good without it (R45).
4.5.3 Additional Services
For an actor within the enterprise segment, the most valuable additional services are
considered to be executive briefings and weekly performance reports (R34). Other organizations
mentioned weekly performance reports as the most important additional service (R35; R36; R42;
R43) and added that they would even need daily performance reports (R35; R42), since
“without reporting, it means nothing” (R42). On the other hand, another respondent
mentioned executive briefings as the single most important additional service and did not
consider the other two valuable (R38). It was expressed that what is considered the most
valuable service might change over time (R39), as a dedicated solutions engineer might be
needed during the implementation and initial phase, or occasionally, rather than on an
ongoing basis like weekly performance reports (R34; R39). Moreover, executive briefings may
not be needed more than once a quarter (R43). For one organization within the higher
education segment, a dedicated solutions engineer was not considered an important
additional service; they would rather exclude this service and pay less for the product, as they
can use their own internal solutions engineer instead (R40). For the K12 segment, however, a
dedicated solutions engineer might be an important service, as this is too expensive for the
actors to have internally (R43). For another organization, both executive briefings and
dedicated solutions engineer were considered unimportant (R36). It was also expressed that
additional services are secondary to the choice, as the KPIs are the most important factor to
consider (R44). For an actor within the enterprise segment, it is important that the additional
services are adapted to the specific customer’s needs and type of organization (R34).
4.6 Emerged Themes
Besides the areas that affect the value perception of an AI personalization product, the survey
as well as the in-depth interviews revealed additional factors to consider when formulating a
price for this kind of product. The main category for these factors is segment-based pricing.
Aside from this, the respondents highlighted some issues related to AI as a service in
education, which could affect the product’s value as well as price perception.
4.6.1 Price Based on Segment
An actor within K12 states that “If you can teach the content to 40 students together it will be
cheaper than personalizing each one” (R47). In line with this, it was expressed that schools are
economically restricted (R38) due to price pressure in the market that schools must adapt to
(R33), which makes price an important factor to consider (R38). Another actor mentions that
they are constrained by their budget as well as by the fact that a product like this must be
refined continuously (R35). It was also expressed that the K12 segment cannot afford to pay
as much as, for example, the enterprise segment (R43). Furthermore, schools are said to be
very conservative institutions, which makes it hard to implement new ideas and technologies
(R44). One interviewee within the K12 segment explained that each pupil carries a certain
school capitation allowance, which covers areas such as school rent, teachers’ salaries, food
and teaching material. If a learning institution were to increase its costs for teaching material,
it would come at the expense of the allowance available for the other areas a K12 pupil
needs. However, the same respondent added that if a product can replace teachers or
education centers, it is possible to pay up to 50% more for that product (R36). Moreover, it
was mentioned that the price should be related to the monetary amount the different school
districts receive per student annually (R40). Another organization within this segment would
rather express the monthly price per student and per course. Additionally, the same
respondent expressed that the price should be related to the per-student amount received
from the state, where a participatory share would be better suited than a fixed sum (R36). A
non-profit organization focused on professional development for teachers expressed that
they are not at the point of personalizing their offering due to lack of funding (R9).
A B2B organization expressed that the Van Westendorp question was more relevant for
organizations that provide B2C products than B2B products. The same organization also
stated that their own pricing depends on which country they sell to, as they consider country-
specific market conditions. One example is the way software is acquired by schools, which
differs widely across countries; in the US, for example, every school can make these decisions
by itself. An actor in the edtech space that provides learning that is not yet part of the
standard K12 curriculum explains that having early adopters as customers prevents them
from setting a fixed price for their product. To target different schools in different countries,
the organization therefore always needs to negotiate its price within K12, as well as consider
competition and the customers’ size when setting the price (R42).
The issue of retaining learners is bigger in higher education than in K12. In the K12 segment,
where students are more or less obliged to be on a certain platform, actors rather want to
retain schools, not students, on the learning platform (R33; R35; R40; R42; R44). Other
organizations mention that they work on keeping the teachers and parents engaged to
improve customer retention (R35; R42). One organization within the K12 segment reveals
that it is up to the teachers to plan and organize the courses; the OCP is therefore not trying
to make the students spend more time on its platform (R36). A respondent tells that it is the
teachers who sign up the students to their learning platform and support them in challenges
as they progress (R42). One organization within K12 is interested in any product that would
assist teachers and lighten their workload in, for example, marking written assignments (R25).
Since the students, at the request of schools, buy textbooks from e-learning providers, these
providers retain customers by working with schools and instructors to adopt the courses;
after the end of a course, they do not work on retaining the students, since that is the schools’
task (R33). An organization mentions that they work on retention by providing content that
is attractive, fun and engaging for kids, since the odds of them staying on the platform
increase if it is fun to use (R27). Another organization expresses its interest in implementing
personalized learning and emphasizes that they would need an external provider to realize it
(R23).
A respondent within higher education reveals that there are high variations in student
volume, since their customers are colleges and other learning institutions with different focus
depending on whether it is the spring or fall season; some material is used continuously and
some in specific periods (R33). Another respondent highlighted the term disruptive
education, which refers to people starting to question traditional learning paths as alternative
learning rises. With the rise of disruptive education, universities will partner with new edtech
companies as more and more people realize that there are situations where people need to
learn faster to get a job faster (R45).
Similar to R33, an actor who provides online classes for self-paced learning suggested
considering peak as well as low seasons and pricing differently based on that (R26). Another
organization within this segment expressed that they do not work actively on retaining their
users (R42).
A provider of corporate training reveals that they have a varying client base, which makes it
hard to offer a fixed fee that would suit all the enterprises’ different needs and possibilities. A
per-user fee is therefore more appropriate. The number of users determines how much an
enterprise is willing to pay per user, and economies of scale are relevant here, since a
company with fewer users probably cannot pay the same price as a company with
significantly more users (R34). One organization expressed that they are not in charge of
retaining the end learners on their platform, but rather their customers, the enterprises (R29).
It was also expressed that personalization in learning is less common within the enterprise
segment. The reason is the wide range of learning content, which makes it hard to interpret
what the learners need and when they need it. Personalization through AI would therefore be
valuable within this segment (R34).
4.6.2 Issues with AI in Education
It is argued that it is difficult to estimate the product’s value and price due to its limited
tangibility (R29), and that the value proposition of this kind of product is difficult to quantify
into commercial benefit (R39). Worry is also expressed about the unproven commercial
viability of AI offerings in the marketplace (R34). One respondent expresses that these kinds
of products are far from market reality and points out that digitalization is a huge investment
for publishers, as they have to create more material than before (R44). Worries are therefore
connected to the cost of feeding enough content into the adaptive engine to support auto-
generated question types (R39). In line with this, an organization also expresses that they
would need to produce much more content to enable personalized feedback (R25). Another
organization mentions that they are skeptical towards these products because there are so
many other publishers of learning content and courses (R34). In the UK, it is common to
publish learning material with a number of different exam boards, which makes it harder to
reach a large enough volume for AI and ML engines (R33). For a publisher, introducing
different levels of adaptivity and difficulty would make the number of alternatives mount up
very quickly, implying significant overhead costs (R39). Schools and colleges are therefore
unsure of the value, and it is emphasized that not only the cost of the technology needs to be
considered, but also the overhead costs associated with generating metadata and content for
the algorithms (R2). One respondent commented that most of the work on personalization is
done in higher education, since computer-based learning is more established there than
among younger students who do not have laptops (R24).
A provider of corporate training is in contact with an AI organization but is skeptical, as it
seems extremely expensive (R43). Skepticism is also raised towards having an external
provider of this kind of personalization product; one organization would rather develop it
in-house, because the product is strongly connected to the core of the organization and they
want control over what problems and learning content the learners receive (R35). Moreover,
an issue with using AI for personalization in learning is that there are no international
standards for expressing these adaptive question types and no equivalent for how to interpret
the content (R39). It is stressed that grading is an issue when personalized learning paths are
used; the respondent asked “how will teachers set grades on students that took different
paths?” (R44). There is a lack of standards for adaptive assessments, which are key to
scalability (R39). It is also expressed that personalization works well for self-study but not for
schools and institutions (R44). Another organization expresses that its goals are fulfilled
without AI and ML and therefore sees no reason to focus on that (R33).
Another organization highlights its restrictions in implementing this kind of product, as it
strictly adheres to the Children's Online Privacy Protection Act (COPPA) standards, which
prohibit it from collecting any personally identifiable information. The respondent added
that “If we were to personalize our lessons, we would have to reconsider our business model
and our COPPA compliance strategies”. Furthermore, it is important that the tools they
provide teachers with to reinforce the lessons are completely safe for their learners, the
pupils, to use (R5).
Further issues connected to personalized learning through AI are that it takes away the social
aspects of learning in a classroom setting: conversations amongst learners might get lost,
jealousy can be created and the feeling of group cohesion could be threatened (R35). Another
issue highlighted by an organization is that, from the schools’ point of view, it is not always
desirable to create too individualized learning paths, as the pupils should be able to interact
not only with other pupils but also with the teacher in a classroom setting. One organization
expresses that they have not been able to focus on personalization through AI yet but could
see themselves doing so in the future, depending on how their platform evolves (R20).
5. Discussion
In this section, a discussion of the results is presented. It begins with a discussion of the product preferences
collected from the structured interviews. The semi-structured part of the in-depth interviews is then discussed
based on the three pre-set subject areas. Thereafter, the emerged themes, segments and issues with AI in
education, are discussed. These findings are then related to each other to eventually reach a proposal of a
pricing model for companies offering AIaaS. After a discussion on sustainability and ethics, the chapter is
concluded by summarizing the findings.
5.1 Product Preferences
Table 6 shows that among the attributes presented, 2x learning efficacy was chosen most often relative to how many times it was available for choice: it was selected in 38% of its appearances. In the semi-structured part of the in-depth interviews, 2x learning efficacy was considered an important or the most important KPI by seven respondents, which was the largest category of organizations expressing a KPI as important or the most important. Hence, since both the structured and the semi-structured parts of the in-depth interviews showed the same result, it is fair to say that 2x learning efficacy seems to be the most important KPI for the OCPs that were interviewed. The second most important KPI, based on results from both parts of the in-depth interviews, is 12% improved churn rate, which was chosen in 26% of its appearances and regarded by 6 of the 16 respondents as important or the most important KPI. The least important of the three was 4.5x more problems solved, which was chosen in 16% of its appearances and regarded by two out of 16 respondents as important, while no respondent considered it the most important. A possible reason why the attribute level none was chosen most often relative to its appearances is that it appeared, except for one occurrence in Q9, only together with the cheapest price of USD 5,000. Hence, it is reasonable to assume that this strongly affected the selection rate of the none additional-service attribute level.
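As a minimal sketch, the relative choice share used above (the number of times an attribute level was chosen divided by the number of times it was available) can be computed as follows. The counts are hypothetical, chosen only to reproduce the reported percentages, and are not the study's raw tallies.

```python
def choice_share(times_chosen: int, times_available: int) -> float:
    """Share of appearances in which an attribute level was selected."""
    if times_available == 0:
        raise ValueError("attribute level was never shown")
    return times_chosen / times_available

# Hypothetical counts (assuming each KPI appeared 50 times):
shares = {
    "2x learning efficacy": choice_share(19, 50),
    "12% improved churn rate": choice_share(13, 50),
    "4.5x more problems solved": choice_share(8, 50),
}
# Rank KPIs from most to least frequently chosen
ranking = sorted(shares, key=shares.get, reverse=True)
```

Normalizing by availability rather than using raw selection counts is what makes the shares comparable across attribute levels that appeared a different number of times.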
From the respondents' product choices, which are presented briefly in Table 7 and in full in Appendix III, five out of 16 chose none of the products in Q1, where the choice was between three product combinations that all had a price of USD 10,000. This implies that, although seven out of 24 respondents found the price to be too cheap, the price does seem to have an impact on the product choice. Another interesting result emerges when comparing Q3 and Q5: in the former question, seven respondents chose 2x learning efficacy, USD 7,500 and weekly performance report, while in the latter question, eight respondents chose the same KPI and service combination but at the price of USD 10,000. The choices are, however, dependent on the different bundlings of attribute level combinations, which limits the possibility to draw conclusions without ceteris paribus. Some conclusions are nevertheless more obvious and can be drawn. Q4 reveals that six respondents chose 12% improved churn rate, USD 7,500 and dedicated solutions engineer, which is the same number that chose 12% improved churn rate, USD 5,000 and no additional services. Based on this, it can be said that some organizations are interested in paying extra for a dedicated solutions engineer, while for other organizations it is more valuable to exclude the service and purchase AIaaS at a cheaper price instead. Further, as an additional argument for why 4.5x more problems solved has proved to be the least valuable KPI among the three presented, the combination 4.5x more problems solved, USD 5,000 and no additional services was not chosen by any respondent, while the same combination with the KPI 2x learning efficacy was chosen ten times in Q6. Even when 4.5x more problems solved, USD 5,000 and no additional services was presented among twice as expensive alternatives, it was not chosen a significant number of times. Contradicting this are the responses to Q7, which reveal that 4.5x more problems solved is as attractive as 12% improved churn rate. However, in Q10 it is clear that 12% improved churn rate is considered more attractive than 4.5x more problems solved: with the other two attributes held constant, 12% improved churn rate received nine selections while 4.5x more problems solved received one. This emphasizes that the context in which the products are presented may affect the product choices made by the respondent group.
Continuing with the results from Q6 and the dominating ten responses for 2x learning efficacy, USD 5,000 and no additional services, another four respondents chose 2x learning efficacy, USD 7,500 and executive briefings. This shows that even when cheaper alternatives exist, some respondents are willing to pay extra to add additional services to the AIaaS. When the attribute level combination 2x learning efficacy, USD 5,000 and no additional services appeared a second time, it once again received twice as many selections as the second most selected product in Q8. Further, in Q9, the same product combination as in Q8 received three times as many selections as the second most selected product, which emphasizes that, at the price of USD 5,000, an AIaaS with this KPI is highly desired. It is also revealed that as many respondents chose 12% improved churn rate, USD 7,500 and no additional services as chose 12% improved churn rate, USD 10,000 and weekly performance report. This emphasizes that additional services should be offered as optional alternatives that can be added to the main AIaaS.
5.2 Value Perception
This part of the discussion is divided into the three subject areas that were investigated through in-depth interviews: KPIs, Price and Additional Services.
5.2.1 KPIs
Six out of 16 respondents expressed 12% improved churn rate as important or the most
important KPI to consider. Four out of these six respondents are actors within K12. To make
this product an appealing investment, three out of these four actors explicitly expressed that
the most desirable KPI is the one that can be quantified into value expressed as long term
revenue increase for the OCP. Based on these results and the consideration that K12 is a
conservative and slow market, a conclusion for this segment is that it demands guarantees in
the form of quantifiable KPIs to trust new products. It is also expressed that a price of USD 10,000 is not too high if the product justifies the OCP charging a higher price per learner. This could be done by delivering desirable KPIs for the curriculum partners and for
the market, for instance a significantly higher level of learner engagement. In contrast, it was also revealed that, within the K12 segment, students are commonly assigned to use a certain platform as part of the curriculum, which makes the KPI misleading in K12 and better suited for higher education, where mandatory participation in platforms is not as common. An alternative to the KPI improved churn rate that could be more useful within K12 is to measure daily, weekly or monthly engagement. Yet, it can be argued that improved churn rate
is particularly important as the AI-product and its corresponding value proposition is new and
somewhat unclear. By analyzing responses from in-depth interviews, one can conclude that
this KPI may be more relevant for a B2C OCP than a B2B or B2S OCP. This is because churn rates are usually a concern for B2C actors, where the goal is to keep the individual learner on the platform. For a B2B or B2S OCP, retention would concern an entire school or
business, and not the individual learner. In addition to this, it is expressed that it could be
more appropriate to express the price based on user volume for a B2C actor rather than for a
B2B or B2S OCP. Factors to base the pricing on towards B2B or B2S actors can therefore be
expressed in relation to the size of the solution or the number of recommendations, which will
be considered in the proposed pricing model.
Seven out of 16 respondents expressed that 2x learning efficacy is important or the most
important KPI. Similar to what has been expressed about 12% improved churn rate, there is a
perception that it can be connected to overall improved revenue for the OCP. There is a
perception that if the KPI was to be proven, this would be invaluable for actors within
education. Hence, to increase the price, case studies that prove this KPI across segments
should be provided by the AIP itself or preferably by an independent institution. However,
criticism is also raised for this KPI as a respondent emphasizes that, specifically for their
organization, it does not matter how long the learning time is. In addition to this, the in-depth
interview results revealed that actors within K12, self-paced online learning and education
towards enterprise are not striving to make the learning faster but would rather want to
provide personalized recommendation tools to reinforce the learning. Another argument that
questions this KPI is that teachers within the K12 segment are dedicating a certain amount of
hours on a particular lesson or course regardless of how efficient the learning is. Hence, they
would rather have all students at the same level to be able to teach as many students as
possible during the same time. However, this assumes that all students are learning efficiently and reaching the assessment criteria. Therefore, learning efficiency may be more relevant for schools or teachers facing a challenge in reaching the learning criteria for all students. This argument is in line with the notion that the types of KPIs used should be adjusted depending on whether the customer is a B2B or B2C actor, while paying attention to the organization's specific interests. For instance, one of the respondents expressed that they work
with improving teachers' efficacy and therefore would rather see KPIs corresponding to teacher-specific issues. However, one can argue that an increase in student efficacy is highly related to the efficacy of teachers, as their job is related to the hours spent per student.
Therefore, the authors believe the KPI 2x learning efficacy still to be highly relevant for the
considered pricing model.
For the KPI 4.5x more problems solved, two organizations consider it important and two organizations do not. When analyzing the results, there is a perception that 4.5x more problems solved is the least appreciated KPI among the three presented, as none of the 16 interviewed organizations considered it the most important KPI. There is a higher appreciation for this KPI within the higher education segment than in the K12 or enterprise segments. The reason could be that K12 and enterprise learning is more standardized and these segments do not see the value in speeding through the learning process, as they value quality above quantity. However, the number of problems solved need not always be related solely to quantity rather than quality: for instance, if a student is not at an optimal level of difficulty, the student may spend unnecessary time on one specific question. The student then does not solve enough problems in a given time frame, compared to when the level of difficulty is optimized. Hence, the number of problems solved may still be an appropriate KPI.
In addition to the discussion about the three KPIs, it was revealed during the interviews that other possible KPIs are also important to discuss. Among these, a KPI that was mentioned twice by different respondents was one that concerns the OCP's return on investment (ROI). To measure this, one would need to be able to attribute increased revenue or profit to the specific effect of the product delivered by the AIP. Following discussions with the commissioner, this can be done through A/B testing, but it is not investigated in this study due to time constraints. Also, to measure how well a learner is actually learning, rather than measuring, for instance, the number of problems solved, the AIP can provide a KPI that involves engagement connected to the learning outcomes achieved as well as how appropriate the content is. However, measuring whether the student has reached the learning criteria or not is up to each OCP rather than the AIP. Within higher education, it is possibly more common and encouraged to have individual learning paths, which makes a KPI based on engagement more relevant for this segment.
5.2.2 Price
The results from the PSM showed that an edtech actor's willingness to pay for an AIaaS product is a monthly fixed fee of USD 5,000 to USD 10,000. Seven of the 24 in-depth interview respondents expressed this range to be very cheap or cheap, and that organizations "would be willing to pay anything if the product outcome is what it promises". Two of these organizations are, however, in the enterprise segment, where prices generally tend to be less restricted than for K12. Nevertheless, two other respondents are within the segments K12 and higher education. Additionally, a respondent within the K12 segment expressed this to be an acceptable price but too cheap if the platform has "for instance 5,000,000 users". Four other organizations have, however, expressed this to be "pricey", "above what the organization can afford" or to "only serve the upper end of the market". Often, this depends on which segment the organization belongs to: online platforms aimed at the K12 segment must adapt their prices to what schools can afford. In addition to this, geography and differences in how school financing works in different countries also affect the perception of the price span.
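The Van Westendorp PSM referred to above locates price points from the cumulative distributions of the survey's pricing questions. Below is a minimal sketch of one such point, the crossing of the "too cheap" and "too expensive" curves; the responses are hypothetical illustrations and do not reproduce the study's survey data.

```python
import numpy as np

# Hypothetical responses (USD): the price at which each respondent
# judged the product "too cheap" or "too expensive".
too_cheap = np.array([3000, 4000, 4000, 5000, 5000, 6000])
too_expensive = np.array([5000, 8000, 10000, 12000, 15000, 20000])

grid = np.arange(3000, 20001, 250)
# Share calling each grid price too cheap (falls as price rises)
# and too expensive (rises with price).
share_too_cheap = np.array([(too_cheap >= p).mean() for p in grid])
share_too_expensive = np.array([(too_expensive <= p).mean() for p in grid])

# The first grid price where the "too cheap" curve falls to or below the
# "too expensive" curve approximates the point where equally many
# respondents reject the price from either side.
crossing_price = grid[np.argmax(share_too_cheap <= share_too_expensive)]
```

With these illustrative responses the curves cross at USD 5,250, inside the USD 5,000 to USD 10,000 span reported above.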
One interview revealed a perception that larger organizations would be able to pay USD 10,000 per month for this product while schools would find it expensive, and that the willingness to pay for this product depends on the size of the organization. However, no clear correlation between the tier group and the perception of whether USD 10,000 is expensive or cheap could be found from analyzing the in-depth interviews alone. Four of the 16 respondents who answered the structured in-depth interview expressed that USD 10,000 is a cheap or reasonable price, while two respondents expressed that USD 10,000 is pricey compared to the organization's current goals. The former four respondents are within different tiers, and no conclusion can thereby be drawn from the in-depth interviews between tier group and whether USD 10,000 is considered cheap. The two respondents who expressed that USD 10,000 is expensive belong to the two middle tiers, Tier 3 and Tier 4, which does not reveal any particular connection between tier and whether USD 10,000 is considered expensive.
Regarding the way of pricing, two different alternatives have been mentioned by the interviewees and in the literature review: recurring payments or a one-time fee. It can be argued that the latter would make little sense, as the personalization product would likely need periodic maintenance and require certain expertise that may not be present internally, which is likely to be the case within the K12 segment where internal resources are strained. For several actors, this also served as an argument for why they would like an external AI provider to deliver this product.
The subscription-based pricing can further be fixed or variable, where the latter allows the price to depend on a certain factor. One of the 16 respondents preferred a fixed fee over a volume-based fee, while five out of 16 would rather see a volume-based annual price for the product. These respondents belong to different tier groups, and no correlation between volume-based fee preference and a specific tier group can thereby be identified, which makes the finding more generalizable. As stated in section 4.3.1, the number of MAUs, which also was revealed during the in-depth interviews, is the single most important factor that the variable fee should consider. Another base for the variable subscription fee, specifically towards K12 actors, would be to express the price as a proportion of the annual monetary amount per student received from the state. However, one respondent revealed a perception that volume-based prices tend to become more expensive than fixed fees. On the other hand, the interview results also showed that which price setting is considered most favorable, fixed or variable, depends on the current stage of the company. For that reason, for a pricing model to be scalable, it should consider the growth stage of the OCP. A company in an early stage with users in Tier 1 would benefit from paying per student rather than having a fixed fee, which is supported by the Van Westendorp analysis. It is further argued that the entrance price point should be as low as possible to be appealing for these companies. As the company grows and reaches the tipping point of a large number of users, the pricing would level off to a more beneficial fixed monthly price instead. Correspondingly, a company that is scaling rapidly would benefit from a fixed fee, which is more predictable and does not escalate significantly when the number of MAUs increases. For companies that grow beyond the highest tier group, a variable pricing based on price per student could be added to the fixed price. This variable rate should be lower than what the smaller OCPs in Tier 1 pay per learner, as Table 5 displays that tier groups ≥ 2 are generally not willing to pay as much per learner as actors in Tier 1. This is also related to the theory of Chao (2013), with different tariffs depending on the tier.
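The growth-stage reasoning above, per-learner pricing for small OCPs that levels off to a fixed monthly fee once the variable amount would grow too large, can be sketched as follows. All tier thresholds and rates are hypothetical assumptions for illustration, not figures from the study or from Chao (2013).

```python
# Hypothetical per-learner monthly rates (USD); higher tiers pay less
# per learner, mirroring the observed willingness to pay.
TIER_RATES = [
    (10_000, 0.50),     # Tier 1: up to 10,000 MAUs
    (100_000, 0.20),    # Tier 2: up to 100,000 MAUs
    (1_000_000, 0.08),  # Tier 3: up to 1,000,000 MAUs
]
FLAT_FEE = 10_000.0  # fixed monthly ceiling (assumed)

def monthly_fee(mau: int) -> float:
    """Variable per-learner fee within the tiers, capped at the flat fee."""
    for threshold, rate in TIER_RATES:
        if mau <= threshold:
            return min(mau * rate, FLAT_FEE)
    return FLAT_FEE
```

The `min` cap is what produces the "level off" behavior: an early-stage OCP with few learners pays a small variable amount, while a scaling OCP pays the predictable flat fee.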
In addition, the respondents are willing not only to base the price on an organization's MAUs but also to express it as a sum over all students connected to the platform, thereby considering a school's total number of students rather than just the MAUs. However, if the price is set per student, there are reasons to use price discrimination in order to attract actors in more economically developed countries as well as in less economically developed ones. It was suggested that the price of a textbook within a geographic market can serve as an index for the price discrimination, which is also a reason not to express the prices in absolute numbers. Content creation costs are an aspect that one interviewee regards as an issue, but they do not necessarily have to be positively related to personalized learning.
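The suggested textbook-price index can be sketched as a simple scaling rule; the baseline figures below are hypothetical assumptions for illustration only.

```python
# Reference-market assumptions (hypothetical):
BASELINE_PRICE_PER_LEARNER = 1.00  # USD per student in the reference market
BASELINE_TEXTBOOK_PRICE = 80.00    # USD for a textbook in the reference market

def localized_price_per_learner(local_textbook_price: float) -> float:
    """Scale the per-learner price by the local-to-reference textbook ratio."""
    index = local_textbook_price / BASELINE_TEXTBOOK_PRICE
    return round(BASELINE_PRICE_PER_LEARNER * index, 2)
```

A market where textbooks cost a quarter of the reference price would then be charged a quarter of the per-learner rate, which keeps the AIaaS price relative rather than absolute across geographies.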
A common theme from the interview responses is that the OCPs value having guarantees that this product actually works for their specific segment and company. The need for this kind of warranty can be traced to the newness of AIaaS within edtech and the limited research within the field. In addition, the in-depth interviews revealed that there are different issues related to AIaaS which make OCPs cautious; this is further discussed in 5.3.2. One way to counteract this product uncertainty is to provide pricing based on shared risks and rewards between the AI provider and the OCP, referred to by one respondent as royalty shares. Similar to PBP, shared risks and rewards can be a way to increase trust between the actors, which is important for these kinds of new offerings.
5.2.3 Additional Services
Opinions are dispersed on which additional services are the most important. However, weekly performance reports were considered by four of the 16 interviewees as the most important additional service, and two respondents added that they would even appreciate daily reports. It was expressed that additional services that change over time are desired, as the needs and requirements of the OCPs change. Moreover, depending on what the OCP desires, some of the services can be offered periodically, such as executive briefings and the dedicated solutions engineer, while other services might be offered continuously, such as performance reports. Furthermore, it was revealed that the need for additional services depends on which segment the OCP belongs to, as actors within higher education tend to have more resources and believe that they might provide some of the services themselves. In contrast, actors within the K12 segment do not have the same possibilities due to limited resources. This suggests that product bundling should be tailored towards specific actors depending on the segment and the OCP's level of internal resources. The possibility of adapting the additional services to the needs of the OCPs seems to be more important within the enterprise segment, since organizations have a wider range of needs and interests than schools or universities. It is also expressed that the KPI is the most critical factor for choosing the product and that additional services are secondary to it.
5.3 Discussion on Emerged Themes
This section discusses the most common focus areas connected to pricing that emerged in the in-depth interviews and the survey answers.
5.3.1 Segments
Which segment the respondent belongs to has proved important to consider when examining the price setting for an AI personalization product within education. It is revealed that actors from different segments have different needs, restrictions, interests and budgets, which are highly relevant for the price determination of a new recommendation product for education. Specifically for K12, the actors strive to use as few resources as possible to reach the largest possible volume of students, which makes the K12 segment a hard market to penetrate for providers of new innovations. This limited amount of resources derives from price pressures in the market, which makes it hard for organizations that have schools as their customers to invest in new kinds of products. However, there is a will within the segment to drive adoption through AIaaS, but it is hard for OCPs to drive this adoption through price increases for schools. Such economic restrictions within a segment should for that reason be considered and reflected in the pricing, unless it is proven that the product replaces learning material or teaching hours. In that scenario, when the effects of AIaaS can be directly linked to cost decreases, it is possible to free resources, which in turn enables B2S OCPs' customers, the schools, to pay for personalization through AI. In relation to the discussion of directly connecting KPIs to revenue increases, it is equally relevant to examine the possibility of connecting the product to a potential decrease in current costs for schools. This is, however, potentially more suitable for the K12 segment. Further, one respondent expressed that, for their organization, it was not a very complicated task to create their personalization tool. However, not all actors have enough data or MAUs to enable a personalization engine, and it is not possible for all actors within the edtech space to develop the product themselves, especially not for schools within the K12 segment, where both resources and capabilities are lacking. In line with this, several actors expressed that they would like an external AI provider to deliver such a product.
Similar to relating the AIaaS pricing to the price of a textbook within the specific geographical market, one could extend this to include other geographically specific conditions as well. For instance, in some geographical locations the decision to acquire software is made by the individual schools themselves, while in other geographies the decision is made at a higher level and applies to several schools, for instance per district. A deeper way of customizing the price towards OCPs is to consider how integrated their content is in the standard curriculum; for example, OCPs within programming struggle more to sell their offering to schools than OCPs in mathematics might. The former aim to target early adopters, and it is not unusual to sell such products to K12 through negotiation. For this reason, it is reasonable to assume that these actors also have different possibilities to pay for AIaaS. However, since this factor is so specific, it will not be considered in the proposed pricing model, as the authors aim to propose a generalized model.
In addition to considering the number of MAUs, one may also consider high and low seasons during the year and price differently accordingly. While MAUs in K12 and higher education peak during spring and fall, organizations that provide test prep or enterprise education may instead peak during the summer and winter holidays, as people take time off from regular work for education. For this reason, considering seasonal peaks might be a good complement to the number of MAUs used for pricing, since the number of users might differ slightly from the MAU average within a year or a semester. However, since the number of MAUs will be considered in the proposed pricing model, the decision is made not to include peak seasons in the model, since they are not believed to have a major impact on the MAU average.
Another area that differs between actors and segments is how much they work with retaining the end learners. While this is highly important for schools, it is less significant for OCPs towards K12 to measure how well students are retained, since participation is often mandatory and it is in the teachers' interest to retain students. For that reason, as it is not the OCP's mission to make the students spend more time on their platform, five respondents expressed that they instead focus on retaining the schools. An OCP within corporate training also expressed that they work with retaining the enterprises rather than the end learners, who are the enterprises' employees. This highlights that the KPIs presented towards B2S and B2B actors within the K12 and enterprise segments should capture how well their customers will be retained through the AIaaS. However, expressing the AIaaS's KPIs in terms of the OCPs' customers' interests is an indirect way of expressing the OCP's direct interests. An OCP for self-paced learning also expressed that they do not work actively with retaining their users, which is surprising as many of the actors within the segment are profit-based. For that reason, by providing learning content that is more interactive, it is possible that such organizations would experience a higher level of user retention and thereby an increase in revenue.
The enterprise segment differs from the K12 segment in that it is not as budget-restricted, since the OCPs' customers (the enterprises) might be willing to pay extra for the personalization offering towards their employees. Additionally, within the corporate training segment, it is expressed that personalization is difficult to develop internally, as the OCP has a wide range of content. For these reasons, one can justify a higher price within the enterprise segment, since it would be a valuable feature that enterprises might be willing to finance themselves.
5.3.2 Issues with AI in Education
From the in-depth interviews, some issues with purchasing AIaaS, both in general and for the edtech sector specifically, were raised. These are similar to the issues with SaaS, where the biggest concern is that the actual value is not visible until after the purchase is made and the product has been used. To counter this issue, as already mentioned, it is of great importance to express the value in terms of relevant long-term business KPIs. Also, there seems to be a division of opinions regarding how far from market reality AIaaS is. Some respondents expressed that this kind of personalization product is a costly and unnecessary investment which requires a lot of resources to produce a dataset large enough to analyze. In contrast, others pointed out that personalization tools are vital to stay relevant in today's digitalized society. Moreover, it is expressed that AIaaS for personalization is more applicable to higher education than to K12, since the latter segment is not as digitalized. However, ever younger pupils are using laptops, tablets and smartphones, which puts pressure on schools to adapt and invest in new technologies (Manning, 2017).
Some actors are worried about losing control of their content by having an external AIP. However, the buying party remains the owner of its content, and the content provided to the end users is the same as without the personalization product, with the exception that the learning path may differ between learners. For this reason, the concern regarding control should not have a major impact on the pricing. An issue that may be more significant, however, is the lack of standards for expressing and interpreting adaptive learning content. This is particularly true for learning institutions, where it is important that adaptive learning and assessments are graded as fairly as non-adaptive content. In addition, current standards such as COPPA might have to adjust their criteria in order to enable personalized education while not excluding involved actors from complying with the standards.
Lastly, there is the issue of diminishing the social aspects of classroom learning by introducing digital personalization tools in education. To minimize this impact, the personalization can be restricted to only provide content from a certain part of the curriculum. In this way, the students might be exposed to slightly different problems, but the subject area studied will remain the same for all learners to ensure group cohesion and encourage classroom discussions.
5.4 Proposal of a Pricing Model for AIaaS
The model presented in Figure 7 is a pricing model which can be applied to an AI organization delivering products based on AI and ML, where the AI is offered as a service. The model is a synthesis of theory, as it is based upon the model presented by Lehmann & Buxmann (2009), where a specific variable is chosen within each parameter to fit an AIaaS provider. In addition to this theory, the model also considers the SaaS pricing identified by Chao (2013). Hence, the model combines parts of existing pricing models found in the literature review as well as the primary results gathered through the survey and in-depth interviews to fulfill the needs of an AI organization. The model includes free implementation to create a lock-in effect for the customer, as the literature review showed this to be highly important for strengthening the network effects of a software provider. This is then combined with a monthly subscription fee based on the number of MAUs. The model is also related to the customer's perceived value through the price bundling parameter, where additional services can be added on top of the fixed fee for MAUs.

The model does not include strategy as one of its parameters, since the results of the literature study showed that a strategy can contain several models; it would therefore be inconsistent for the model itself to contain a strategy. Hence, the strategy is displayed as an umbrella unit at a level above the parameters of the pricing model.
Figure 7. Proposed pricing model for AI personalization products
The strategy an organization should decide upon is the first step in the proposed pricing model. There are two paths to choose between, whereas skimming was present as a third choice in the model presented by Lehmann & Buxmann (2009). Similar to a software offering, it is reasonable to assume that the initial price should not be set too high, as that makes it more difficult to win a dominant market share. With an initial low price, an organization can grow quickly as more customers adopt the product. With existing customers who become dependent on the software, the price can then be increased step-wise to maximize revenue and profits; this is also supported by respondents in the in-depth interviews. The skimming strategy has therefore been excluded from this model, as the interview results, based on the interviewees' experience, have shown it to be difficult to carry through and succeed with such a strategy within the edtech space. Furthermore, the literature suggests that freemium and penetration pricing are more suitable for software tools (Lehmann & Buxmann, 2009), as such strategies seek to create a lock-in effect, which is important for organizations benefiting from network effects. In order to create strong network effects, the provider must follow a strategy which allows it to lock in as many suitable users as possible, creating a CLV (Farris, et al., 2010) high enough for the organization to be able to provide free or discounted integration. This also supports the choice of excluding the skimming strategy.
For the parameters, specific variables have been chosen to suit companies delivering AIaaS products. The formation of price is value-based, as a cost-based formation makes little sense for AI companies when considering the CLV. The degree of interaction can initially be unilateral or interactive; the more unilateral and automated the price determination becomes, the fewer resources the provider needs to spend. Nevertheless, the degree of interaction may need to be more interactive at the initial growth stage of an organization, in line with the level of market acceptance of the product. When providing a new product, the degree of interaction might also be higher so that the vendor can have a dialogue with the buyer in order to agree upon a price, secure the customer and create a lock-in effect. As the product matures and the organization grows, the degree of interaction will decrease to eventually become entirely unilateral, which is also what an AIP should aim for.
The structure of payment flow is primarily based on recurring payments, either yearly,
quarterly or monthly depending on the buyer's preference. A yearly signup could come with a
small discount compared to monthly signups. Integration is free of charge, as the organization
seeks to create a lock-in effect: free integration incentivizes the buyer to connect the AI
product to its existing system and discover the need for it, thereby becoming locked in. This is
more important for startups than for more mature companies, as AIPs are highly dependent
on vast amounts of data to create strong network effects; startups should therefore provide
free integration in order to collect a sufficient amount of data. As an organization matures, the
integration may be charged for, since the organization already possesses the data needed to
train its AI product. The recurring fee can be fixed, variable or a mix of both. A variable price
depends on the exact number of MAUs, whilst a fixed price depends on a range of MAUs.
The rationale is that beyond a certain volume the variable price becomes too expensive and a
fixed price is more beneficial. Moreover, the two can be combined so that the OCP pays a
fixed price up to a certain number of MAUs and, if the OCP exceeds this tier, a variable price
for the MAUs exceeding those included in the fixed fee.
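The combined fixed-and-variable structure described above can be sketched as follows; the fee levels and tier limit are hypothetical placeholders rather than prices derived from the study's data.

```python
def monthly_fee(maus: int,
                fixed_fee: float = 5_000.0,    # hypothetical fixed fee, USD per month
                included_maus: int = 50_000,   # MAUs covered by the fixed fee
                rate: float = 0.0625) -> float:  # hypothetical USD per excess MAU
    """Fixed fee up to the included MAU tier, plus a variable fee
    for every MAU exceeding those included in the fixed fee."""
    excess = max(0, maus - included_maus)
    return fixed_fee + excess * rate

monthly_fee(30_000)  # within the tier: only the fixed fee, 5000.0
monthly_fee(80_000)  # 30,000 excess MAUs: 5000 + 30000 * 0.0625 = 6875.0
```

Keeping the variable component only above the tier limit caps the bill for small OCPs while letting revenue scale with usage for large ones.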
The assessment base, in the commissioner's case, is usage dependent, based on the number of
MAUs, whilst for other AI recommendation companies it can be based on the number of
recommendations provided by the algorithm. The in-depth interviews have shown that the
recommendation base could be more appealing for some OCPs, since it may be more relevant
for their business. If choosing the usage-dependent assessment base, the AIP must then select
a linear pricing model, a two-part tariff (2PT) or a three-part tariff (3PT) (Chao, 2013). The
survey and in-depth interviews have shown that the appropriate level of tariff depends on the
number of MAUs: OCPs with the lowest number of users should be offered a linear model,
whilst OCPs with the largest number of MAUs should be offered a 3PT. The assessment base
can also be usage independent, i.e. performance based, where specific KPIs are used to
measure the improved performance. The KPIs must vary depending on which segment the
customer belongs to and on whether it is a B2C or B2B actor. Such KPIs include ROI,
increased revenue, decreased cost, improved churn rate and increased efficacy, amongst
others. Although potential customers may want to perform A/B testing to verify the improved
performance, documented use cases will increase the assurance for future customers as the
product becomes proven and market accepted. For a new product, however, A/B testing may
be the most suitable way to demonstrate the increase in performance. An extension of this
logic would be to base the pricing on a percentage of the customer's revenue, or to benchmark
it against other comparable expenses. What this percentage should be has, however, not been
investigated in this research, as the respondents did not share information on their revenue.
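The three tariff forms (Chao, 2013) can be written as simple price functions of usage; the rates below are illustrative assumptions, chosen only to show why a linear model suits small OCPs while a 3PT suits large ones.

```python
def linear_tariff(maus: int, rate: float = 0.125) -> float:
    """Linear pricing: price grows proportionally with usage, p * q."""
    return rate * maus

def two_part_tariff(maus: int, fixed: float = 1_000.0, rate: float = 0.0625) -> float:
    """2PT: a fixed fee plus a per-unit rate, F + p * q."""
    return fixed + rate * maus

def three_part_tariff(maus: int, fixed: float = 4_000.0,
                      allowance: int = 50_000, rate: float = 0.03125) -> float:
    """3PT: a fixed fee covering an allowance of units, then a per-unit
    rate beyond it, F + p * max(0, q - A)."""
    return fixed + rate * max(0, maus - allowance)

# With these (assumed) rates, a small OCP pays least under the linear model
# and a large OCP pays least under the 3PT:
[f(10_000) for f in (linear_tariff, two_part_tariff, three_part_tariff)]
# [1250.0, 1625.0, 4000.0]
[f(200_000) for f in (linear_tariff, two_part_tariff, three_part_tariff)]
# [25000.0, 13500.0, 8687.5]
```

The crossover pattern is the point of the tier recommendation in the text: the fixed components of the 2PT and 3PT are only worth carrying once usage is large enough.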
Price discrimination may, in this case, be one of the most important factors, as it must be
present in order to reach as wide a customer base as possible. Price discrimination will be
present to a second and a third degree. The second degree is built on the principle of
self-selection for the product combination. It also involves quantity-based price
discrimination, where the number of MAUs can be used to place the buyers in different tier
groups. The number of MAUs relates to the buyer's revenue and profitability, and hence its
ability to pay; organizations with a larger number of MAUs will therefore be placed in a
higher-priced tier group. The second degree also includes time discrimination, where actors
pay different prices depending on the point in time at which the product is purchased. For
instance, early buyers may receive a lower price, as the AIP is then in greater need of data;
once the company is more mature and no longer has to attract data as actively, later buyers
will pay a higher price. The third degree of discrimination involves market discrimination,
such as by geography and segment. One way of determining geographical discrimination is a
textbook index, where the price of a textbook acts as a base for the price. This would,
however, mostly be relevant for the K12 segment and possibly higher education, as textbook
prices vary significantly depending on the country in which the book is bought.
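One simple way to operationalize the degrees of discrimination described above is as multiplicative adjustments on a base per-MAU rate; every factor value below is a hypothetical placeholder, including the use of a textbook index as the geographic factor.

```python
# Hypothetical discrimination factors (placeholders, not study results).
TIER_FACTOR = {"small": 1.0, "medium": 0.8, "large": 0.6}   # quantity-based (2nd degree)
TIME_FACTOR = {"early_buyer": 0.7, "late_buyer": 1.0}       # time-based (2nd degree)
GEO_FACTOR = {"high_textbook_index": 1.2, "low_textbook_index": 0.8}  # 3rd degree

def discriminated_rate(base_rate: float, tier: str, timing: str, geo: str) -> float:
    """Adjust a base per-MAU rate by tier size, purchase timing and geography."""
    return base_rate * TIER_FACTOR[tier] * TIME_FACTOR[timing] * GEO_FACTOR[geo]

# A large, early-buying platform in a low-textbook-price country receives the
# deepest discount on the (assumed) base rate of USD 0.10 per MAU.
rate = discriminated_rate(0.10, "large", "early_buyer", "low_textbook_index")
```

A multiplicative scheme keeps the dimensions independent, so a new discrimination axis (e.g. season) can be added as one more factor table without reworking the price logic.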
Price bundling is based on customized bundling, which is individual for each buyer. The
bundle consists of the software itself, integration, maintenance and additional services, which
together make up the product. The degree of integration is complementary, as the products
can be used independently of one another. The price level of the product bundling is then
subadditive: the buyer should benefit from combining the products, paying less for the bundle
than for the products purchased individually.
5.5 Sustainability
This study investigated pricing models and their relation to value for AIaaS companies within
the edtech space. For the proposed solution to be valid, one must consider its sustainability
effects and implications. The concept of sustainability was defined by the UN in 1987 as
consisting of three pillars: economic, social and environmental sustainability (UN, 1987).
Within the economic sustainability pillar there are several aspects to discuss; one must
consider the sustainability of the provider as well as of the buyer and the end user: the learner.
Beginning with the provider, which in this case is Sana Labs but may be any AIaaS provider,
the proposed solution has been designed to meet profitable revenue targets based on the
development costs of the product. The individuals involved in the organization must be
compensated for the time they put in, investors expect a return on their capital, and the
organization itself seeks to contribute to growing regional economic development, which is
measured through the contribution to GDP, a goal generally accepted by the public as a
means of maintaining economic sustainability and the most important policy goal of the past
five decades (Moldan, et al., 2012). Sustainable development also considers using
non-renewable resources in a manner that does not eliminate easy access to them for future
generations (Moldan, et al., 2012). Since a pricing model is only an idea rather than a physical
product, it is up to each organization to implement a policy alongside the pricing model that
meets the global economic sustainability goals. Sana's product, which seeks to make adaptive
learning available for everyone, is in itself economically sustainable, as education should foster
more innovation, job creation and economic growth (Bughin, et al., 2017). UNESCO states
that 24.4 million primary school teachers will need to be recruited and trained globally for the
world to reach universal access to primary education by 2030, and that another 44.4 million
teachers will need to be recruited for secondary schools (Bughin, et al., 2017). AI products
could contribute to meeting this demand for teachers: for instance, Sana's AI product
personalizes education, which saves teachers time and effort. On a wider scale, online
education in general may also help meet the rising demand for teachers, as students will be
able, to a certain extent, to learn through AI rather than a physical teacher and be assessed by
a computer rather than a teacher (Bughin, et al., 2017).
The social pillar of sustainability is most likely the one of highest significance for human
survival (Moldan, et al., 2012). From previous literature, it is not entirely clear what is meant
by social sustainability, whether it relates to health or to the long-term survival of humankind
(Moldan, et al., 2012). Within this study, social sustainability relates to the people who are
affected by the product. There are two sides to this aspect: the organizations that seek to
implement the pricing model for their AIaaS products, and the learners within the edtech
space who are the final consumers affected by the use of Sana's AI product. As the goal of the
product essentially is to make adaptive learning available for everyone, the purpose of the
product's existence meets the sustainability goal of maintaining good health, as good health
can be promoted through education. A report by the consulting firm McKinsey & Co (2017)
states that many developed countries suffer from mismatches between education and
employment, as education fails to meet the demands of employers and educated individuals
feel that the labor market fails to match their education with suitable employment.
At a more general level than the specific AI that Sana works with, McKinsey believes that AI
in education will help minimize these mismatches between academia and the labor market. AI
in education will also help individuals define their level of competency and improve learning
outcomes and the quality of education, which can then be used to match individuals with
applicable employers. AI in education also aims to reduce teachers' administrative tasks so
that teachers can focus on what they are meant to do: teach (Bughin, et al., 2017). This will
further contribute to social sustainability, as students and teachers achieve a better working
environment, a factor important for health and therefore for social sustainability.
The third pillar of sustainability, the environmental one, was formerly part of the social and
economic pillars, both of which sought to include environmental sustainability (Moldan, et al.,
2012). A definition of environmental sustainability introduced by Goodland (1995) is that it
"seeks to improve human welfare by protecting the sources of raw materials used for human
needs and ensuring that the sinks for human wastes are not exceeded, in order to prevent
harm to humans" (Moldan, et al., 2012, p.6). In relation to pricing models for AIaaS products,
the major resource used is the raw materials needed for the computers used by AI researchers
and developers. This is an aspect every organization using computers needs to consider today,
as materials such as metals and plastics are processed and used. One important action is to
recycle, or to compensate for the environmental footprint the organization causes, which can
be done by planting trees that absorb the amount of CO2 emitted by the organization's
processes.
Another aspect for an AI provider is the use of data storage. The storage of data, specifically
remote storage in clouds or on platforms, requires vast amounts of energy to be kept running
continuously. The report The Cloud Begins with Coal (Mills, 2013) explains that data traffic
used to consist mainly of the data flowing between the user and the data storage. Nowadays,
however, data traffic is to a larger extent associated with intra-data-center traffic, due to the
increased use of IT services such as remote data storage and real-time processing. In the same
report, Mills (2013) also shows that global data center traffic amounted to nearly seven
zettabytes (1 ZB = 10^21 bytes, or one billion terabytes) per year in 2016. Compared to the
hard drive in a laptop, where roughly one watt per user is needed to access a photo a couple
of times a day, cloud storage can consume ten times more energy than storing and accessing
the same file through a laptop; in cloud storage, moreover, the energy consumption increases
with the amount of data, which is not the case for a hard drive (Mills, 2013). Another
researcher states that data storage will be one of the largest energy consumers, accounting for
one fifth of global energy consumption by 2025 (Andrae, 2017). This is something that
organizations providing AI solutions need to take very seriously, as their products depend on
vast amounts of data, which in Sana's case is stored on a remote server through a cloud
service (Sana, 2018).
5.6 Ethical Implications
As with sustainability, the ethical implications in this study can be divided into and applied to
the area of AI and its relation to education, as well as to the subject of pricing. Below, a
discussion of ethics related to AI and education is followed by a discussion of ethics within
pricing.
Within AI as a subject, ethics is widely discussed. The first distinction that needs to be made
when discussing machine ethics is the difference between developing ethics for a machine and
developing ethics for the humans who use the machine. In the first case, the humans who
develop the machine, or more specifically the AI product, must ensure that the machine
learns ethical principles or has a process for discovering and resolving ethical dilemmas
through its own decision making. The second case involves ensuring that a machine is not
used by a human in an unethical manner; this, however, relates to the decision making of the
human rather than of the machine (Anderson & Leigh Anderson, 2011).
Another ethical implication related specifically to AI in education is who owns the data on
students: who has access to it, what it is used for, and by whom (Bughin, et al., 2017). In
Europe, where the commissioner operates, the General Data Protection Regulation (EU,
2016) is particularly important when collecting data. For organizations operating within the AI
segment, there exist guidelines on automated decision making (Jacobs & Ritzer, 2017). In
their article on how AI is affected by the GDPR, Jacobs and Ritzer (2017) emphasize that the
regulation addresses decision making "solely based on automated processing", meaning a
complete absence of human involvement in the decision making. These aspects must be
considered by the organizations sharing and collecting the data: Sana in particular, but also its
customers, who have primary access to the student data.
Ethical implications related to marketing and pricing are generally discussed within the field
of targeted marketing. Targeted marketing has almost become synonymous with competitive
strategies, where ethics is commonly discussed for harmful products such as alcohol and
cigarettes. The opposite side of this is price discrimination, where some potential buyers are
excluded from the marketing as certain groups are denied access to a specific price because
they belong to a specific customer segment (Cui & Choudhury, 2003). In this study, price
discrimination is one of the main features of the pricing model. The reason for not providing
the same price for all segments is that the vendor, Sana in this case, would otherwise not
receive profits proportional to the resources spent and to the improvement in profit and
efficiency estimated for the buyer. In addition, an AI company that benefits from network
effects needs the maximum number of users in order to reach maximum value creation
through the enhancement of network effects. As larger platforms have access to a larger
number of MAUs, the AIP would target larger platforms to maximize its data gathering and
therefore its network effects. Prices would accordingly be adjusted for larger platforms, which
are likely to have higher revenue and profit, making those prices above the range of what
smaller platforms are able to pay. By allowing for price discrimination, smaller platforms, such
as startups, are therefore not excluded by high prices, but can afford to integrate the AI
product into their platforms.
5.7 Summary of Findings
The results and analysis of the Van Westendorp PSM showed that respondents considered
USD 5,000 to USD 10,000 an acceptable price range for a fixed monthly fee for the AIaaS
personalization product. When expressing the willingness to pay per learner instead, the
accepted price range turned out to lie between USD 0.01 and USD 1 per learner per month,
with different spans within these numbers depending on the end of the range. When this fixed
monthly fee was used as a discussion point in the in-depth interviews, it was discovered that
seven of the 24 in-depth interview respondents considered this range to be cheap or very
cheap.
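The acceptable price range above comes from Van Westendorp's PSM, where the range endpoints are read off the intersections of cumulative response curves. A minimal sketch of that computation, run on made-up responses rather than the study's data:

```python
# Hypothetical answers (USD/month) to the four PSM questions, five respondents.
too_cheap     = [2000, 3000, 4000, 5000, 6000]
cheap         = [4000, 5000, 6000, 7000, 8000]
expensive     = [4000, 5000, 6000, 8000, 10000]
too_expensive = [8000, 10000, 12000, 14000, 16000]

PRICES = range(1000, 20001, 500)  # candidate price points to scan

def share_at_least(answers, p):   # descending curves ("too cheap", "cheap")
    return sum(a >= p for a in answers) / len(answers)

def share_at_most(answers, p):    # ascending curves ("expensive", "too expensive")
    return sum(a <= p for a in answers) / len(answers)

def crossing(descending, ascending):
    """First price where the ascending curve overtakes the descending one."""
    for p in PRICES:
        if share_at_most(ascending, p) >= share_at_least(descending, p):
            return p

lower = crossing(too_cheap, expensive)    # point of marginal cheapness
upper = crossing(cheap, too_expensive)    # point of marginal expensiveness
# (lower, upper) bounds the acceptable price range for these toy responses.
```

For the toy responses above the scan yields a range of USD 5,000 to USD 8,000; with real survey data the same procedure produces the reported acceptable range.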
Further, it was discovered that the willingness to pay depends on several factors, of which the
OCP's number of MAUs and segment have shown to be the most important. However, it is
difficult to relate all the tier groups to a certain monthly fee based entirely on the collected
results. Nevertheless, a general suggestion is to base the price on a combination of a fixed and
a variable fee for larger tier groups, while OCPs with fewer MAUs are suggested only a
variable fee based on a linear model. In addition, the MAUs that exceed a certain tier group
should be priced according to a variable fee on top of the fixed monthly fee, where the initial
fixed monthly fee is higher for tier groups with more MAUs. Moreover, additional factors to
consider were categorized under price discrimination, as it was revealed that the price could
depend on geography, a textbook index, and the maturity stage of the company. Other
findings to consider for a price model are high or low season, whether the OCP is a B2B or
B2C actor, and whether the product bundling should be tailored or not.
In the structured part of the interviews, the price range from Van Westendorp was combined
with two other attributes, KPI and Additional services, for each of which three to four attribute
levels were defined. The respondents' evaluation of these attribute levels showed that 2x
learning efficacy was considered the most important KPI, supported by both the structured and
semi-structured parts of the interviews, while 4.5x more problems solved was criticized for
measuring quantity rather than quality and was therefore considered the least valuable. Other
types of KPIs that emerged from the empirical data gathering are those connected to learners'
engagement, as well as ones that relate to the OCP's ROI, increase in revenue or decrease in
costs. In general, the most important finding connected to KPIs is that they should reflect the
OCP's long-term business improvement expressed in monetary value. Further, weekly
performance reports appeared to be the most appealing of the three presented additional
services. However, the results emphasize the importance of being able to choose whether or
not to include additional services.
Lastly, the discussion of secondary and primary sources resulted in the proposed pricing
model, which visualizes the main decisions to be made when determining a price model for an
AIaaS product.
6. Conclusion
This section presents the main findings from the research as well as the answers to the research questions. The
answers to the sub-research questions and the main research question hence fulfill the purpose of the study. The
conclusion also includes the contribution to the marketing field of pricing, and the limitations of the study are
presented. The chapter is finalized by recommendations for future research within pricing models for AI products.
6.1 Main findings
The purpose of this study was to investigate possible pricing models for an AIaaS product,
delimited to the edtech industry. It also sought to create a suitable pricing model for AI
companies offering a new personalization technology based on AI and deep learning
algorithms. To fulfill the purpose of the study, one main research question and three sub-
research questions were to be answered. The sub-research questions are presented below,
followed by their answers.
1. What is the perceived value delivered by AIaaS and how can it be determined and
mirrored to price?
2. What factors should an AIaaS providing organization consider when determining a
pricing model?
3. What are the main implications of implementing AIaaS within the edtech industry?
This research has shown that the perceived value depends on which segment the customer
belongs to. Specific to this study and the edtech industry, the majority of the respondents
belong to the K12 segment, where KPIs such as learning efficacy are highly valued as they
enable more efficient teaching. For the other segments, it is difficult to draw conclusions from
the results, as the respondents within these segments were few. In general, the study has
shown that the perceived value is what a potential customer interprets as increased
performance, which can be measured and determined through KPIs such as decreased costs,
increased revenue or improved churn rate; these should thereby act as the assessment base of
the price. In addition, the perceived value has shown to vary with the interpretation of the
new product, as different organizations are at various levels of maturity in their acceptance of
a new AIaaS product; some perceive very low value whilst others find it invaluable. The
perceived value is closely related to the willingness to pay, which affects the price. Therefore,
the mapping of perceived value to price must be adjusted depending on variables such as
segment, size, maturity and geography.
Factors that an AIaaS providing company should consider when determining a pricing model
are the formation of price, the structure of payment flow, the assessment base, price
discrimination and price bundling. Specific to this research, the number of MAUs, the size of
the buying organization and its stage of maturity have shown to be particularly important. An
AIP should pay attention to the assessment base, where a volume-based price rather than a
fixed price justifies the corresponding value and price for the buyer. This can depend on the
number of MAUs, the number of recommendations, or another variable that is suitable for
the AIP's customer. In relation to this, the AIP must consider the maturity of the purchasing
organization, as the variable price will vary between tier groups.
The main implication of implementing AIaaS is the general skepticism towards a new
product for which one pays without a guarantee of performance. Specifically for AI in
education, there is limited research on how personalized education will affect social
interaction in classrooms as well as the teacher's workload. Also, standard procedures for how
to deal with adaptive learning are not yet established. In addition, there exists a general
perception that more content needs to be created in order to achieve adaptive learning, which
means more work and higher content creation costs.
Together, the answers to the sub-research questions seek to answer the main research question
which is presented again below.
“What pricing model should an AI-company have for its B2B personalization
product?”
For a B2B personalization product, an AI company should have a pricing model which
corresponds to the value delivered by the product, based on the pricing model presented in
5.4. The pricing model should account for the formation of price, the structure of payment
flow, the assessment base, price discrimination and price bundling in order to be generalizable
for any AIP. To guide the AIP, variables are presented for each parameter, which makes the
pricing model adaptable for AIPs outside the edtech industry.
6.2 Contribution
Within industrial management, this study has contributed to marketing research on pricing
models for new AIaaS products. Based on data collected within the edtech space, the research
has contributed a proposal for a scalable and generalizable pricing model for AIaaS products.
This study has thereby contributed to research at the intersection of AI and pricing: pricing
models for AIaaS.
6.3 Limitations
The study was limited by the number of interviews, which depended on the email addresses
accessible to the authors; this in turn affected the response rate and possibly the results.
Further, the study was limited by the people who were interviewed and who responded to the
survey, as those who did respond may not have possessed all the knowledge required by the
survey or interview.
With limited previous research on pricing for AI products, the study was also restricted to the
conclusions that could be drawn from research in related areas such as software pricing,
although AI pricing can be argued to be closely related to SaaS pricing due to similarities such
as network effects.
This research was also limited to a time frame of 17 weeks, which likely affected the number
of responses, as more reminders could have been sent and more interviews held within a
larger time span; this may also have affected the results.
6.4 Future Research
To continue the development and application of AIaaS, pricing models must be further
researched. In addition, pricing models must be tested and validated in practice by
organizations in order to verify that the model works as intended.
To extend the proposed model and make it a model for determining the actual price (and not
only a model of how one determines the price), a more extensive investigation of linear, 2PT
and 3PT tariffs should be carried out. Future researchers should examine the threshold at
which a variable price should transition into a fixed price in order to reach a generalizable
model for AIaaS providers, which could be done through pricing optimization. Such results
could serve as input for adjusting the parameters and variables used in the proposed pricing
model. Although this cannot be proven until tested, such pricing optimization may be very
market specific and difficult to create for all AIaaS providers.
This research aimed to reveal customers' value perception through qualitative data gathering
conducted through structured as well as semi-structured interviews. However, to obtain a
statistically more robust picture of customers' value perception, in which preferred attribute
levels are quantified, a recommendation for future research that aims to quantify value is to
conduct a full conjoint analysis. With the results from an analysis that contains all five stages
of the conjoint analysis, one will be able to statistically determine what the population
perceives as the most attractive product attributes in relation to the presented price levels.
Results from such research do, however, become very market specific.
Further research within the field specifically for edtech might discover other or completely
new pricing models which potentially could be more suitable for pricing an AIaaS product.
Moreover, research similar to this study but with other KPIs, price levels and additional
services could be conducted to test whether the willingness to pay or the price model would
be formulated differently when other areas or attribute levels are considered.
Furthermore, this research was delimited to examining willingness to pay and value
perception among AIPs' customers within the edtech industry. Similar research can be done within
other industries that would benefit from AIaaS for personalization as well. An example of an
industry that is changing rapidly as a result of digitalization is medical technology (medtech),
which would be an interesting field to investigate for evaluation of the proposed pricing
model. In addition, an investigation of value perception and willingness to pay in other
industries would serve as a tool to test if the model has a broader field of application, hence
testing the generalizability of the proposed pricing model for AIaaS. As this kind of testing of
the model’s usefulness in other industries was not within the scope of this study, the identified
gap in the field of pricing models for new AI products is not completely closed. For that reason,
future research within other industries to test the pricing model would contribute to closing
the identified gap in the literature.
References
Accenture, 2018. Making it personal. Accenture Interactive.
Anderson, C., 2009. Free: The future of a radical price. 1st ed. London: Random House.
Anderson, M. & Anderson Leigh, S., 2011. Machine Ethics. New York: Cambridge University Press.
Andrae, A., 2017. Total Consumer Power Consumption Forecast. Huawei Technologies.
Baker, R., 2011. Implementing Value Pricing: A Radical Business Model for Professional Firms. Hoboken, New Jersey: John Wiley & Sons.
Blomkvist, P. & Hallin, A., 2014. Method for Technology Students: Degree Project Using the 4-Phase Model. First Edition ed. Lund: Studentlitteratur.
Brenner, R., 2016. Spreadsheet Models for Managers. Boston: Fajada Butte Press.
Bughin, J., Hazan, E., Ramaswamy, S., Chui, M., Allas, T., Dahlström, P., Henke, N. & Trench, M., 2017. Artificial intelligence the next digital frontier?. McKinsey Global Institute.
Chao, Y., 2013. Strategic Effects of Three-Part Tariffs Under Oligopoly. International Economic Review, 54(3), pp.977-1015.
Collis, J. & Hussey, R., 2013. Business research: A practical guide for undergraduate and postgraduate students. 4th ed. Basingstoke: Palgrave Macmillan.
Cui, G. & Choudhury, P., 2003. Consumer Interests and the Ethical Implications of Marketing: A Contingency Framework. The Journal of Consumer Affairs, 37(2), pp. 364-387.
Deneckere, R., & Kovenock, D., 1992. Price Leadership. The Review of Economic Studies, 59, pp. 143-162
Dholakia, M., 2016. A quick guide to value-based pricing. Harvard Business Review.
Dixit, et al., 2008. A taxonomy of information technology-enhanced pricing strategies. Journal of Business Research, 61(4), pp. 275-283
Dodds, W., Monroe, K. & Grewal, D., 1991. Effects of Price, Brand, and Store Information on Buyers' Product Evaluations. Journal of Marketing Research, 28(3), pp. 307-319.
Dolgui, A. & Proth, J.-M., 2010. Pricing strategies and models. Annual Reviews in Control, 34(1).
Doyle, P., 2000. Value Based Marketing: Marketing strategies for corporate growth and shareholder value. Wiley.
Ertel, W., 2009. Introduction to Artificial Intelligence. Wiesbaden: Springer.
Esomar, 2015. A New Approach to Study Consumer Perception of Price. RW Connect. [Online] Available at: https://rwconnect.esomar.org/a-new-approach-to-study-consumer-perception-of-price/ [Accessed 8 May 2018]
EU, 2016. REGULATION (EU) 2016/679 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union.
Farris, P., Bendle, N., Pfeifer, P. & Reibstein, D., 2010. Marketing Metrics: The Definitive Guide to Measuring Marketing Performance. New Jersey: Pearson Education
Fink, A. & Kosecoff, J., 1998. How to Conduct Surveys: A Step by Step Guide. Thousand Oaks, California: Sage Publications
Fortune, 2012. Amazon’s Recommendation Secret. [Online] Available at: http://fortune.com/2012/07/30/amazons-recommendation-secret/ [Accessed 15 May 2018]
Gallo, A., 2015. A Refresher on Price Elasticity. Harvard Business Review, 21 August.
Green, P. E. & Srinivasan, V., 1978. Conjoint Analysis in Consumer Research: Issues and Outlook. Journal of Consumer Research, 5(2), pp. 103-123.
Green, P. E. & Srinivasan, V., 1990. Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice. Journal of Marketing, 54(4), pp. 3-19.
Greener, S., 2008. Business research methods. 1st ed. London: BookBoon.
Guest, G., MacQueen, K. & Namey, E., 2012. Introduction to Applied Thematic Analysis. Thousand Oaks: SAGE Publications Inc.
Hager, MA., Wilson, S., Pollak, TH., Rooney, PR., 2003. Response rates for mail surveys of nonprofit organizations: a review and empirical test. Nonprofit Voluntary Sector Q, 32, pp. 252–267.
Harmon, R., Raffo, D. & Faulk, S., 2004. Value-Based Pricing For New Software Products: Strategy Insights for Developers. Portland International Conference Proceedings.
Hinterhuber, A., 2008. Customer value-based pricing strategies: why companies resist. Journal of Business Strategy, 29(4), pp. 41-50.
Holton, J. et al., 2010. The Grounded Theory Review: An International Journal, 9(2), pp. 1-59.
Ingenbleek, P., Debruyne, M., Frambach, R. T. & Verhallen, T. M. M., 2003. Successful New Product Pricing Practices: A Contingency Approach. Marketing Letters, 14(4), pp. 289-305.
Ishibashi, I., 2008. Collusive price leadership with capacity constraints. International Journal of Industrial Organization, 26(3), pp. 704-715
Iveroth, et al., 2013. How to differentiate by price: Proposal for a five dimensional model. European Management Journal, 31(2), pp. 109-123
Jacobs, S. & Ritzer, C., 2017. Data Privacy: AI and the GDPR. Norton Rose Fulbright, 2 November 2017.
Kienzler, M. & Kowalkowski, C., 2017. Pricing strategy: A review of 22 years of marketing research. Journal of Business Research, 78, pp. 101-110.
Laatikainen, G. & Ojala, A., 2014. SaaS architecture and pricing models. Jyväskylä: Department of Computer Science and Information Systems, University of Jyväskylä.
LeCun, Y., Bengio, Y. & Hinton, G., 2015. Deep learning. Nature, 521(7553), pp. 436-444.
Lehmann, S. & Buxmann, P., 2009. Pricing Strategies of Software Vendors. Darmstadt: Business & Information Systems Engineering, 1:452.
Lipovetsky, S., 2006. Van Westendorp price sensitivity in statistical modeling. International Journal of Operations and Quantitative Management, 12(2), p. 141.
Maital, S., 1994. Executive economics: ten tools for business decision makers. 1st ed. New York: Simon and Schuster.
Manning, E., 2017. Out with the old school? The rise of ed tech in the classroom. The Guardian, 1 August 2017.
Marr, B., 2016. What Is The Difference Between Artificial Intelligence And Machine Learning?. Forbes, 6 December 2016.
Mills, M., 2013. The Cloud Begins with Coal: Big Data, Big Networks, Big Infrastructure, and Big Power. Digital Power Group.
Mintzberg, H., Lampel, J., Quinn, J. & Ghoshal, S., 2003. The Strategy Process - Concepts, Context, Cases. Essex: Pearson.
Moldan, B., Janouskova, S. & Hak, T., 2012. How to understand and measure environmental sustainability: Indicators and targets. Ecological Indicators, 17, pp. 4-13.
Nagle, T. T., Hogan, J. & Zale, J., 2013. The Strategy and Tactics of Pricing: A Guide to Growing More Profitably. 5th ed. Harlow: Pearson Education Limited.
Nulty, D., 2008. The adequacy of response rates to online and paper surveys: what can be done? Assessment & Evaluation in Higher Education, 33(3), pp.301-314
Orme, B. K., 2010. Getting started with conjoint analysis: strategies for product design and pricing research. 2nd ed. Madison: Research Publishers LLC.
O'Sullivan, A. & Sheffrin, S. M., 2003. Economics: Principles in Action. 1st ed. Upper Saddle River: Pearson Prentice Hall.
Parker, G. G., Van Alstyne, M. W. & Choudary, S. P., 2016. Platform Revolution: How Networked Markets Are Transforming the Economy and How to Make Them Work for You. 1st ed. New York: WW Norton & Company.
Pettey, C., 2015. Moving to a software subscription model. Gartner. [Online] Available at: https://www.gartner.com/smarterwithgartner/moving-to-a-software-subscription-model/ [Accessed 8 May 2018]
Planing, P., 2014. Innovation acceptance: the case of advanced driver-assistance systems. 1st ed. Wiesbaden: Springer Gabler.
Poyar, K., 2018. The Key to SaaS Pricing. In: Data Driven Sales - How the best B2B sales leaders use data to grow faster. Berkeley, California: Clearbit, pp. 6-17.
Rachev, S., Höchstötter, M., Fabozzi, F. & Focardi, S., 2011. Introduction to Regression Analysis. John Wiley & Sons.
Rao, V.R., 2014. Applied Conjoint Analysis. Heidelberg: Springer.
Rich, E. & Knight, K., 1991. Artificial Intelligence. New York: McGraw-Hill.
Richey, R. C., Silber , K. H. & Ely, D. P., 2008. Reflections on the 2008 AECT Definitions of the Field. TechTrends, 52(1), pp. 24-25.
Rouse, M., 2017. Artificial Intelligence as a Service (AIaaS). TechTarget. [Online] Available at: https://searchenterpriseai.techtarget.com/definition/Artificial-Intelligence-as-a-Service-AIaaS [Accessed 8 May 2018]
Ryan, M., 1996. Using Consumer Preferences in Health Care Decision Making - The application of conjoint analysis. London: White Crescent Press.
Sana Labs AB, 2018. Interviews with CEO Joel Hellermark. Stockholm. (January - April 2018)
Sánchez-Fernández, R. & Iniesta-Bonillo, M., 2007. The concept of perceived value: a systematic review of the research. Marketing Theory, 7(4).
Sawtooth Software, 2017. The CBC System for Choice-Based Conjoint Analysis. Sawtooth Software Technical Paper Series, 9. Orem, Utah.
Silverstein, D., Samuel, P. & Decarlo, N., 2008. 51. Conjoint Analysis. in: The Innovator's ToolKit: 50 Techniques for Predictable and Sustainable Organic Growth. Hoboken: John Wiley & Sons, pp. 312-317.
Shapira, B., Rokach, L. & Freilikhman, S., 2013. Facebook single and cross domain data for recommendation systems. User Modeling and User-Adapted Interaction, 23(2), pp. 211-247.
Shapiro, B., 2002. Is Performance-Based Pricing the Right Price for You? Working Knowledge: Business Research for Business Leaders, Harvard Business School.
Simon, H., Butscher, S. A. & Sebastian, K.-H., 2003. Better pricing processes for higher profits. Business Strategy Review, 14(2), pp. 63-67.
Spencer, R. W., 2009. How Do Climate Models Work?. [Online] Available at: http://www.drroyspencer.com/2009/07/how-do-climate-models-work/ [Accessed 16 February 2018].
UN, 1987. General Assembly Resolution on Implementation of Agenda 21, the Programme for the Further Implementation of Agenda 21 and the outcomes of the World Summit on Sustainable Development. (A/RES/64/236)
Van Westendorp, P., 1976. NSS Price Sensitivity Meter (PSM) - A New Approach to Study Consumer Perception of Prices. Proceedings of the 29th ESOMAR Congress, pp. 139-167.
Verbrugge, B., 2016. Best Practice, Model, Framework, Method, Guidance, Standard: towards a consistent use of terminology – revised. [Online] Available at: https://www.vanharen.net/blog/van-haren-publishing/best-practice-model-framework-method-guidance-standard-towards-consistent-use-terminology/ [Accessed 16 February 2018]
Zimmermann, N., 2017. Network effects helped Facebook win. Deutsche Welle, 8 September 2017.
Appendices
Appendix I - Van Westendorp Price Sensitivity Meter
Graph showing the full cumulative frequency of respondents up to the highest price answered.
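As a companion to the graph, the cumulative curves can be computed directly from the raw answers. The sketch below is a minimal illustration rather than the thesis analysis: the price answers are invented placeholders, and only the "too cheap" and "too expensive" curves and their crossing (Van Westendorp's optimal price point) are shown.

```python
import numpy as np

# Illustrative placeholder answers in USD per learner, one per respondent.
# These are NOT the survey data; they only demonstrate the computation.
too_cheap     = np.sort(np.array([1.0, 2.0, 3.0, 4.0, 6.0]))
too_expensive = np.sort(np.array([3.0, 5.0, 7.0, 9.0, 11.0]))

grid = np.linspace(0.0, 12.0, 121)  # candidate prices to evaluate

# Cumulative share who find each grid price too expensive (rises with price).
share_too_expensive = (
    np.searchsorted(too_expensive, grid, side="right") / too_expensive.size
)

# Cumulative share who still find each grid price too cheap (falls with price).
share_too_cheap = (
    1.0 - np.searchsorted(too_cheap, grid, side="right") / too_cheap.size
)

# The crossing of the two curves approximates Van Westendorp's
# "optimal price point".
opp = grid[int(np.argmin(np.abs(share_too_expensive - share_too_cheap)))]
print(f"approximate optimal price point: ${opp:.2f} per learner")
```

The same cumulative construction extends to the "cheap" and "expensive" answers, whose crossings give the range of acceptable prices.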
Appendix II - Respondents and Position at Company
Respondent # Position
1 Unknown
2 Publishing Director
3 Chief Revenue Officer
4 President
5 Chief Executive Officer
6 Senior Vice President, Strategy & Analytics
7 Chief Product Officer
8 Chief Operating Officer
9 Co-Founder and Educational Technologist
10 Vice President
11 Development Lead
12 Chief Executive Officer
13 Head of Pedagogy
14 Managing Director
15 Director
16 International Sales
17 Sales Executive
18 Chief Executive Officer
19 Managing Director
20 Vice President, Education
21 Founder, Academic Team Leader
22 Chief Executive Officer
23 Founder and Chief Information Officer
24 Chief Executive Officer
25 Founder and Managing Director
26 Co-Founder and Chief Executive Officer
27 Founder and Chief Executive Officer
28 Co-Founder and Chief Executive Officer
29 Founder and Chief Executive Officer
30 Chief Executive Officer
33 Chief Product Officer
34 Vice President, Content
35 Director
36 Manager, Digital Learning and Business Technology
37 Chief Strategy Officer
38 Head Of Pedagogical Services
39 Digital Director
40 Chief Product Officer
41 Chief Product Officer
42 Chief Revenue Officer
43 Vice President, Marketing
44 Co-Founder and Chief Executive Officer
45 Growth Director
46 Senior Data Scientist
47 Chief Executive Officer
Appendix III – Structured Interview Questions and Selections
Value Conception Through Conjoint Analysis

Instructions
• 10 questions will be asked about which product you prefer
• Please select exactly one option at each question
Q1. Which product would you choose?
Product #1: 2x Learning Efficacy, $10k/month, Executive Briefings (chosen by 2 respondents)
Product #2: 2x Learning Efficacy, $10k/month, Dedicated Solutions Engineer (chosen by 5 respondents)
Product #3: 4.5x more Problems Solved, $10k/month, Weekly Performance Report (chosen by 3 respondents)
None: "I wouldn't choose any of these" (chosen by 6 respondents)
Q2. Which product would you choose?
Product #1: 2x Learning Efficacy, $10k/month, Dedicated Solutions Engineer (chosen by 4 respondents)
Product #2: 12% improved Churn Rate, $5k/month, no additional service (chosen by 4 respondents)
Product #3: 4.5x more Problems Solved, $7.5k/month, Dedicated Solutions Engineer (chosen by 4 respondents)
Product #4: 12% improved Churn Rate, $10k/month, Weekly Performance Report (chosen by 4 respondents)
Q3. Which product would you choose?
Product #1: 12% improved Churn Rate, $10k/month, Executive Briefings (chosen by 4 respondents)
Product #2: 4.5x more Problems Solved, $5k/month, no additional service (chosen by 3 respondents)
Product #3: 2x Learning Efficacy, $7.5k/month, Weekly Performance Report (chosen by 7 respondents)
None: "I wouldn't choose any of these" (chosen by 2 respondents)
Q4. Which product would you choose?
Product #1: 12% improved Churn Rate, $7.5k/month, Dedicated Solutions Engineer (chosen by 6 respondents)
Product #2: 12% improved Churn Rate, $5k/month, no additional service (chosen by 2 respondents)
Product #3: 12% improved Churn Rate, $7.5k/month, Weekly Performance Report (chosen by 6 respondents)
None: "I wouldn't choose any of these" (chosen by 2 respondents)
Q5. Which product would you choose?
Product #1: 4.5x more Problems Solved, $5k/month, no additional service (chosen by 3 respondents)
Product #2: 2x Learning Efficacy, $10k/month, Weekly Performance Report (chosen by 0 respondents)
Product #3: 4.5x more Problems Solved, $10k/month, Executive Briefings (chosen by 8 respondents)
None: "I wouldn't choose any of these" (chosen by 5 respondents)
Q6. Which product would you choose?
Product #1: 4.5x more Problems Solved, $5k/month, no additional service (chosen by 4 respondents)
Product #2: 2x Learning Efficacy, $7.5k/month, Executive Briefings (chosen by 2 respondents)
Product #3: 2x Learning Efficacy, $5k/month, no additional service (chosen by 10 respondents)
Product #4: 4.5x more Problems Solved, $7.5k/month, Weekly Performance Report (chosen by 0 respondents)
Q7. Which product would you choose?
Product #1: 4.5x more Problems Solved, $10k/month, Dedicated Solutions Engineer (chosen by 5 respondents)
Product #2: 12% improved Churn Rate, $10k/month, Dedicated Solutions Engineer (chosen by 2 respondents)
Product #3: 4.5x more Problems Solved, $10k/month, Weekly Performance Report (chosen by 5 respondents)
None: "I wouldn't choose any of these" (chosen by 4 respondents)
Q8. Which product would you choose?
Product #1: 2x Learning Efficacy, $5k/month, no additional service (chosen by 6 respondents)
Product #2: 2x Learning Efficacy, $7.5k/month, Executive Briefings (chosen by 4 respondents)
Product #3: 4.5x more Problems Solved, $7.5k/month, Executive Briefings (chosen by 2 respondents)
None: "I wouldn't choose any of these" (chosen by 2 respondents)
Q9. Which product would you choose?
Product #1: 12% improved Churn Rate, $10k/month, Dedicated Solutions Engineer (chosen by 1 respondent)
Product #2: 2x Learning Efficacy, $5k/month, no additional service (chosen by 9 respondents)
Product #3: 12% improved Churn Rate, $7.5k/month, no additional service (chosen by 3 respondents)
Product #4: 12% improved Churn Rate, $10k/month, Weekly Performance Report (chosen by 3 respondents)
Q10. Which product would you choose?
Product #1: 4.5x more Problems Solved, $5k/month, no additional service (chosen by 4 respondents)
Product #2: 12% improved Churn Rate, $7.5k/month, Executive Briefings (chosen by 1 respondent)
Product #3: 4.5x more Problems Solved, $7.5k/month, Executive Briefings (chosen by 8 respondents)
None: "I wouldn't choose any of these" (chosen by 3 respondents)
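The choice counts above permit a first, crude tally of which attribute levels appear most often in chosen profiles. The sketch below is illustrative rather than the thesis analysis: it encodes only two of the ten tasks (Q1 and Q10, whose layouts are unambiguous), omits the "None" choices, and a proper treatment would fit a choice model such as a multinomial logit over all ten tasks.

```python
from collections import Counter

# Each chosen option: (KPI, price per month in USD, additional service,
# number of respondents who chose it), encoding tasks Q1 and Q10 only.
choices = [
    # Q1
    ("2x Learning Efficacy", 10_000, "Executive Briefings", 2),
    ("2x Learning Efficacy", 10_000, "Dedicated Solutions Engineer", 5),
    ("4.5x more Problems Solved", 10_000, "Weekly Performance Report", 3),
    # Q10
    ("4.5x more Problems Solved", 5_000, "None", 4),
    ("12% improved Churn Rate", 7_500, "Executive Briefings", 1),
    ("4.5x more Problems Solved", 7_500, "Executive Briefings", 8),
]

# Tally choices by KPI level, weighted by respondent counts.
kpi_counts = Counter()
for kpi, price, service, n in choices:
    kpi_counts[kpi] += n

for kpi, n in kpi_counts.most_common():
    print(f"{kpi}: {n} choices")
```

On these two tasks alone, "4.5x more Problems Solved" dominates the tally; a full analysis would also separate out the price and service effects.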
THANK YOU! You have now contributed to the research of personalized education!
TRITA 2018:331
www.kth.se