Identifying Challenges in Cybersecurity
Data Visualization Dashboards
Patrick Shirazi
Information Security, master's level (120 credits)
2020
Luleå University of Technology
Department of Computer Science, Electrical and Space Engineering
Abstract
Nowadays, a massive amount of cybersecurity data objects, such as security events, logs, and messages, flows through different cybersecurity systems. With the rapid development of cloud environments, big data, IoT, and so on, these volumes of data are growing dramatically. One of the challenges for different security actors, such as security admins, cybersecurity analysts, and network technicians, is how to utilize this data in order to reach meaningful insights that can be used further for diagnosis, validation, forensic, and decision-making purposes.
In order to make use of this data and gain meaningful insights from it, we need efficient dashboards that simplify the data and provide a human-understandable presentation of it. Currently, there are plenty of SIEM and visualization dashboard tools that use a variety of report generator engines to generate charts and diagrams. Although there have been many advances in recent years due to utilizing AI and big data, security professionals still face challenges in using visualization dashboards.
In recent years, many research studies have been performed to discover and address these types of challenges. However, due to the rapid change in the way many companies work (e.g. digital transformation, agile ways of working, etc.), and given the adoption of cloud environments that provide almost everything as a service, it is necessary to discover which challenges remain: whether professionals still experience the same challenges or new ones have emerged. Following a qualitative method and utilizing the Delphi technique with two rounds of interviews, the results show that although technical and tool-specific concerns really matter, the most significant challenges stem from the business architecture and the way of working.
Acknowledgement
I wish to express my sincere appreciation to my supervisor, Professor Elragal, of the Department of Computer Science, Electrical and Space Engineering at Luleå University of Technology. His fabulous insights and great mentoring style helped me not only perform this particular research activity but also acquire an in-depth understanding of research methods in general. His willingness to give his time so generously has been very much appreciated.
Special thanks should be given to all the other professors and faculty members, particularly Dr.
Awad for his professional guidance and valuable support during the period of my studies.
I will forever be thankful to Dr. Shahram Khadivi for the unforgettable lessons I learned from him during my other master's studies, not just in the context of scientific matters, but also in humanity, altruism, and virtuously caring for others. His enthusiasm and love for teaching and supporting are contagious.
My special thanks are extended to all the experts who passionately participated in this study.
Finally, I must express wholehearted appreciation to my wife for providing me with unfailing
support and continuous encouragement throughout my years of study and the process of
researching and writing this thesis. This accomplishment would not have been possible without
her support.
Abbreviations
ATP Advanced Threat Protection
CSIRT Computer Security Incident Response Team
CTA Cognitive Task Analysis
DSRM Design Science Research Methodology
FW Firewall
IaaS Infrastructure as a Service
IDS Intrusion Detection System
IPS Intrusion Prevention System
PaaS Platform as a Service
SaaS Software as a Service
SIEM Security Information and Event Management
SME Subject Matter Expert
SOINN Self-Organizing Incremental Neural Network
UI User Interface
UX User Experience
List of Figures
Figure 1. Connection river (Chen, et al., 2014) .............................................................................. 2
Figure 2. Visualization of a security incident (Fan, et al., 2019) .................................................... 3
Figure 3. Abstract relationship between the SIEM and visualization dashboards .......................... 4
Figure 4. Enhanced SIEM architecture (Sarno, et al., 2016) .......................................................... 4
Figure 5. Various input for a SIEM system (Bryant & Saiedian, 2020) ......................................... 4
Figure 6. Sample SIEM Architecture (Lee, et al., 2017) ................................................................ 5
Figure 7. Qualitative research iteration (Streubert & Carpenter, 1999) ......................................... 6
Figure 8. Main steps of the research ............................................................................................... 9
Figure 9. Framework for literature review (Brocke, et al., 2009) ................................................... 9
Figure 10. Conceptualization of the topic ..................................................................................... 10
Figure 11. Process of literature review (Brocke, et al., 2009) ...................................................... 10
Figure 12. The process of selecting review sources ..................................................................... 12
Figure 13. A framework of event visualization (Li, et al., 2020) ................................................. 16
Figure 14. VisAct framework architecture (Wu, et al., 2020). ..................................................... 19
Figure 15. Evaluable components of a visualization practice (Staheli, et al., 2014) .................... 20
Figure 16. Instructional efficiency measurement (Gerven, et al., 2003) ...................................... 21
Figure 17. Components of SvEm Model (Garae, et al., 2018) ..................................................... 22
Figure 18. The traditional process of visualization (Paul, et al., 2015) ........................................ 22
Figure 19. Sample decision-tree for choosing the right graph (Marty, 2008) .............................. 23
Figure 20. Visualizing approx. 10,000 records of a firewall (Marty, 2008) ................................. 24
Figure 21. Generation of a treemap chart using Bubble Treemap method (Görtler, et al., 2018) 24
Figure 22. A Time table graph that shows behaviour patterns (Marty, 2008) .............................. 25
Figure 23. Technical root causes vs. Organizational .................................................................... 34
List of Tables
Table 1. Network Visualization Categories and data source coverage (Sharafaldin, et al., 2019) . 1
Table 2. An alternate quantitative research method........................................................................ 8
Table 3. An alternate constructive research method ....................................................................... 8
Table 4. Literature Search Materials ............................................................................................. 11
Table 5. Panel member selection criteria ...................................................................................... 12
Table 6. Panel members demography ........................................................................................... 13
Table 7. Comparison of famous SIEM tools (Sönmez & Günel, 2018) ....................................... 18
Table 8. Evaluation criteria conducted based on (Sharafaldin, et al., 2019) ................................ 20
Table 9. Visualization challenges according to (Wagner, et al., 2015) ........................................ 26
Table 10. Details of experts’ consensus on identified challenges ................................................ 33
Table of Contents
Abstract ............................................................................................................................................ I
Acknowledgement .......................................................................................................................... II
Abbreviations ................................................................................................................................ III
List of Figures ............................................................................................................................... IV
List of Tables ................................................................................................................................. V
Table of Contents .......................................................................................................................... VI
1. Introduction ................................................................................................................................. 1
1.1 Cybersecurity Data Visualization.......................................................................................... 1
1.2 Security Information and Event Management (SIEM) ......................................................... 3
2. Research Process ......................................................................................................................... 6
2.1 Research Scope ..................................................................................................................... 6
2.2 Delphi Technique .................................................................................................................. 7
2.2.1 Pros and Cons of the Delphi Technique ......................................................................... 7
2.2.2 Alternative Research Methods ........................................................................................ 8
2.2.3 Tailored Delphi Technique ............................................................................................. 8
2.3 Literature review ................................................................................................................... 9
2.4 Identify and Select Panel Members ..................................................................................... 12
2.5 Brainstorming ...................................................................................................................... 13
2.6 Review and Narrow Down Factors ..................................................................................... 14
2.7 Secondary Rounds of Feedback Gathering ......................................................................... 14
3. Related Works and Research Gaps ........................................................................................... 15
3.1 Inherent Data Visualization Challenges .............................................................................. 15
3.2 Human Factor Challenges ................................................................................................... 16
3.3 CTA Research ...................................................................................................................... 17
3.4 Tool Specific Behaviours .................................................................................................... 18
3.5 Evaluation Challenges ......................................................................................................... 19
3.6 Design Challenges ............................................................................................................... 22
3.7 Other Challenges ................................................................................................................. 25
3.8 Research Gaps ..................................................................................................................... 26
3.9 Research Questions ............................................................................................................. 26
4. Empirical Work ......................................................................................................................... 27
4.1 Results from the First Round of Interviews ........................................................................ 27
4.1.1 Business Context Correlation ....................................................................................... 27
4.1.2 Customization of general-purpose tools ....................................................................... 27
4.1.3 Dashboard Design......................................................................................................... 28
4.1.4 Data Quality .................................................................................................................. 28
4.1.5 Human Factor ............................................................................................................... 29
4.1.6 Information Overload ................................................................................................... 29
4.1.7 Integration and Interoperability .................................................................................... 30
4.1.8 KPI Definition .............................................................................................................. 30
4.1.9 Manual Work ................................................................................................................ 30
4.1.10 Setup and Maintenance ............................................................................................... 31
4.1.11 Tool Specific............................................................................................................... 31
4.1.12 Training and Skills...................................................................................................... 32
4.1.13 Trust and Reliability ................................................................................................... 32
4.2 Second Round of Interviews ............................................................................................... 32
4.3 Achieved Consensus ........................................................................................................... 33
4.3.1 Technical-level vs. Organizational-level challenges .................................................... 33
4.3.2 Other Findings .............................................................................................................. 34
5. Conclusion ................................................................................................................................ 35
5.1 Limitations .......................................................................................................................... 35
6. Future Work .............................................................................................................................. 36
Appendix 1 – Lessons learned ...................................................................................................... 37
Appendix 2 – Desirable features ................................................................................................... 39
References ..................................................................................................................................... 40
1. Introduction
Data visualization, in general, is an approach to demonstrating knowledge and insights regarding a specific concept. Another use of visualization is illustrating the correlation mapping between two or more entities, such as customer profiles, company products, and sales. Visualization can also display a correlation mapping that relates different segments of one entity (e.g. customers) to another concept such as sales (Olshannikova, et al., 2015).
A study by Raghav, et al. (2016) listed a number of benefits of data visualization for companies,
such as improvement in decision making, improvement in ROI, information sharing, and time-
saving.
1.1 Cybersecurity Data Visualization
As security professionals constantly need to monitor a variety of tools, devices, and environments on a regular basis, it is quite necessary to have efficient data visualization dashboards that help them aggregate different sources of input data objects (logs, events) and provide a proper understanding and awareness of the environment they are working with.
The usage of security data visualization can be categorized in different ways. One example is the categorization made by Vieane, et al. (2016), which identifies some major categories as consumers of cybersecurity data visualization:
• Network analysis
• Threat analysis
• Situational awareness
Recently, other researchers have tried to expand these usage categories and investigate what kind of security-related data can be visualized in each category (Sharafaldin, et al., 2019). Table 1 demonstrates different categories of cybersecurity data visualization, along with a minimum data source coverage for each.
Table 1. Network Visualization Categories and data source coverage (Sharafaldin, et al., 2019)
Visualization category Minimum data source coverage
Host/Server monitoring Network Trace, Security Events, Network Activity Context, Non-log Information, Application Logs
Internal/External monitoring Network Trace, Security Events, Network Events, Network
Activity Context, Non-log Information, Application Logs
Attack patterns Network Trace, Security Events, Network Events, Network
Activity Context, Non-log Information, Application Logs
Routing behaviour Network Events
IDS Monitoring Security Events
Configuration visualization Non-log Information
Steganography visualization Network Trace
Proxy server monitoring Network Activity Context
Encrypted traffic monitoring Network Activity Context
In any domain, visualization helps security professionals analyse the current status of different environment components, find anomalies, perform forensics, and maintain a security posture. That is why the concept of cybersecurity visualization becomes vital: it eases human understanding of raw data and makes the decision-making process faster.
For example, Chen, et al. (2014) developed a visualization system to address the complexity of analysis and provide more in-depth insights and understanding. As part of their results, Figure 1 displays an example of how security data visualization can be used to monitor a live stream of network data. As an illustration, it appears that most of the heavy traffic with source ports greater than 200 is heading to destination port 80.
Figure 1. Connection river (Chen, et al., 2014)
In another work, Fan, et al. (2019) implemented a network visualization tool for real-time monitoring. Utilizing machine learning, their tool can display anomalies; for example, Figure 2 visualizes a DDoS attack.
Each section of Figure 2 reveals one piece of information. For instance, Figure 2a displays the abnormal pattern node of the DDoS attack in SOINN. Figure 2c demonstrates the importance of port 80: multiple IP addresses with heavy traffic are connected to it, which might indicate an abnormal situation. If the operator is curious to see more details, he/she can track the traffic over the nodes in Figure 2b, or view them in Figure 2d to observe one of the nodes that is under heavy traffic. The thickness of the connection lines shows that there are simultaneous connections to this node from several other nodes, which indicates suspicious behaviour.
Figure 2. Visualization of a security incident (Fan, et al., 2019)
1.2 Security Information and Event Management (SIEM)
Before studying security data visualization, we need to have an understanding of SIEM tools, as data visualization is performed over data that has already been manipulated by the SIEM engine. Figure 3 displays an abstract demonstration of how the different components of a SIEM tool relate to each other. As shown in the diagram, visualization is normally based on the previous manipulation (in the SIEM engine) of the raw data (logs).
Figure 3. Abstract relationship between the SIEM and visualization dashboards
There can be various architectures and implementations of SIEM engines. Figure 4 displays an enhanced architecture proposed by Sarno, et al. (2016), in which a "source" means a system that needs to be monitored.
Figure 4. Enhanced SIEM architecture (Sarno, et al., 2016)
The source of the input data can vary. Figure 5 displays some examples of input data for a SIEM system.
Figure 5. Various input for a SIEM system (Bryant & Saiedian, 2020)
SIEM tools provide different functionalities. The common ones can be listed as follows (Bryant & Saiedian, 2020; Sönmez & Günel, 2018; Cinque, et al., 2018; Novikova, et al., 2017; Sarno, et al., 2016):
• Data collection, aggregation, and normalization (sketched in code after this list)
• Orchestration of alerts and alarms
• Providing monitoring capabilities
• Evaluation of different perimeters
• Threat hunting
• Forensic analysis
• Assessment of security policies
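As a minimal illustration of the first functionality, the Python sketch below shows one way a collector might normalize a heterogeneous raw log line into a common event schema. The schema fields and the log pattern are assumptions for illustration, not the normalization rules of any particular SIEM product.

```python
import re
from datetime import datetime, timezone

# Illustrative only: schema fields and log format are assumptions.
SSH_FAIL = re.compile(r"Failed password for (?P<user>\S+) from (?P<ip>\S+)")

def normalize(line: str) -> dict | None:
    """Map a raw sshd log line onto a common event schema."""
    m = SSH_FAIL.search(line)
    if m is None:
        return None   # a real collector would try other parsers here
    return {
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "source": "sshd",
        "event_type": "auth_failure",
        "user": m.group("user"),
        "src_ip": m.group("ip"),
    }

print(normalize("Failed password for root from 203.0.113.7 port 4711 ssh2"))
```

Once events share one schema, aggregation and correlation across sources become straightforward dictionary operations.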
There can be many different architectures for a SIEM, but some components are usually fundamental, such as the event collector, event processor, and storage. Figure 6 displays a sample architecture of a SIEM (Lee, et al., 2017). As demonstrated, the SIEM engine, storage, and user interface are three different components that function in separate layers while being integrated with each other. One important point to mention here is that, in the SIEM engine, this architecture uses a message queue both for handling the input data and for sending messages (alerts), although in more modern software architectures other approaches could be considered.
The visualization happens in the user interface layer. That is where the dashboards and visual elements appear. The important factor here is that, with proper segregation of layers, the SIEM tool can be integrated with different visualization products, as they only need to use the APIs.
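As a rough sketch of this layering, the Python fragment below mimics how a collector, a queue-backed engine, and a storage layer could be decoupled. It is a didactic assumption of the architecture discussed above, not the actual implementation by Lee, et al.

```python
from queue import Queue

# Didactic sketch; component names and rule are assumptions.
event_queue: Queue = Queue()      # decouples collection from processing
alert_queue: Queue = Queue()      # the same queue mechanism reused for alerts
storage: list[dict] = []          # stand-in for the storage layer

def collect(raw_event: dict) -> None:
    event_queue.put(raw_event)    # collector layer: only enqueues

def process_pending() -> None:
    while not event_queue.empty():
        event = event_queue.get()
        storage.append(event)                        # persist for forensics
        if event.get("severity", 0) >= 8:            # trivial correlation rule
            alert_queue.put({"alert": "high severity", "event": event})

collect({"source": "fw", "severity": 9})
process_pending()
print(storage, alert_queue.get())
```

The queue is what lets the collection layer absorb bursts of input while processing lags behind, which is the point of running the layers separately.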
Figure 6. Sample SIEM Architecture (Lee, et al., 2017)
2. Research Process
This research follows a qualitative research approach. This method was selected because we would like to discover, understand, and explore a particular phenomenon. The phenomenon in our case can be described as the challenges that cybersecurity professionals are facing today while utilizing cybersecurity data-visualization dashboards. One clue is that we are asking "what" the challenges are, which indicates we are looking for how a phenomenon behaves in reality.
By definition, “qualitative methods are often regarded as providing rich data about real-life people
and situations and being more able to make sense of behaviour and to understand behaviour within
its wider context. However, qualitative research is often criticized for lacking generalizability,
being too reliant on the subjective interpretations of researchers and being incapable of replication
by subsequent researchers.” (Polonsky & Waller, 2018)
Figure 7 shows a typical cycle of qualitative approach iterations.
Figure 7. Qualitative research iteration (Streubert & Carpenter, 1999)
2.1 Research Scope
The following items are in the scope of this research:
• Cybersecurity data objects (security events, logs, alerts, etc.) that are produced by
cybersecurity-related components such as SIEM, FW, IPS/IDS, etc.
• Visualization dashboards that cover the cybersecurity data objects
• Both cloud and on-premise environments
The following are out of scope:
• Visualization in different IT domains, such as financial data.
• Information Security in manufacturing or production lines
• Information Security in very old legacy systems, such as mainframes
2.2 Delphi Technique
The approach this study follows is a tailored version of the Delphi method. This method is conceptually based on consensus development techniques (Vernon, 2009; Avella, 2016).
In a systematic review, 100 Delphi studies from the Web of Science and Elsevier databases were reviewed (Diamond, et al., 2014). The results showed that approximately 98 percent of those Delphi studies were conducted in order to evaluate group consensus, although only 72 of them provided a definition of consensus. The review also showed that 25 studies used the percentage of common agreement as a metric to assess consensus, and the median of what they considered a group consensus was about 75%.
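To make this consensus metric concrete, here is a minimal Python sketch, with made-up panel ratings, of how percentage agreement could be checked against the ~75% median threshold reported by Diamond, et al. (2014):

```python
# Hypothetical ratings for one candidate challenge:
# True = the expert agrees it is a significant challenge.
ratings = [True, True, True, False, True, True,
           True, True, False, True, True, True]

agreement = sum(ratings) / len(ratings) * 100
CONSENSUS_THRESHOLD = 75  # median threshold across the reviewed Delphi studies

print(f"{agreement:.0f}% agreement ->",
      "consensus reached" if agreement >= CONSENSUS_THRESHOLD
      else "another round needed")
```

With 10 of 12 hypothetical panellists agreeing (83%), the 75% bar would be cleared and no further round would be needed for that item.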
The major design characteristics of Delphi can be listed as follows (Avella, 2016; Keeney, et al., 2001):
1. Being anonymous: This is really important, as having separate discussions with panel members prevents issues such as group pressure or being influenced by dominant participants (Avella, 2016). On the other hand, the researcher should be extremely cautious regarding individual-based communication with panel members. It is recommended that participants not know about each other's participation in the research, even if they know each other (Avella, 2016).
2. Capturing Feedback: utilizing SMEs’ knowledge and expertise in the relevant field
3. Iteration (rounds): having rounds and analysis
In a recent study, Tan, et al. (2020) provided one real-world example of the Delphi method, applied to the COVID-19 pandemic as a new challenge to societies. In their work, they mentioned that although a tremendous amount of information is being published daily, it is not easy to gather collective wisdom on the most relevant items while filtering out the others. Therefore, they utilized the Delphi method to absorb that wisdom and provide more appropriate care standards at the national level.
2.2.1 Pros and Cons of the Delphi Technique
The Delphi technique has various useful characteristics which make it a very good alternative for many research works. Some of them can be listed as follows (Avella, 2016; Keeney, et al., 2001; Fink-Hafner, et al., 2019):
1- Answering uncertainty
2- Flexibility
3- Cost-effectiveness
4- Freedom of expression
5- Geographically unlimited
There have been some discussions regarding various challenges in the Delphi technique. For
instance, the following major challenges are addressed by (Keeney, et al., 2001; Avella, 2016):
1- Reliability: If two different researchers perform the same research, is there any guarantee
that they will come to the same conclusion?
2- Validity: How can we validate the results?
3- Researcher bias and shortcomings: How can we make sure the researcher does not lead the research in a wrong direction by biasing the initial phase of Delphi (Keeney, et al., 2001; Avella, 2016)? This could, of course, happen while the results are being narrowed down and selected.
2.2.2 Alternative Research Methods
Although a qualitative research method appears to be a perfect fit, there might be other alternatives
as well. Table 2 and Table 3 summarize the pros and cons of some alternative methods.
Table 2. An alternate quantitative research method
Quantitative: for example, designing a survey with accurate questions and gathering the feedback from respondents.
Pros:
- Could be spread faster and more easily, in less time
- Statistical analysis will be easier
Cons:
- It does not match the research question. We are exploring a problem to find out what it is; to design surveys, we would need to know the problem very well
Table 3. An alternate constructive research method
Constructive: for example, performing qualitative research first to understand the issue, then designing surveys to capture more accurate and detailed information.
Pros:
- More value will be added to the research, so the community (cybersecurity community) will get more benefits
- The results could be generalized better
Cons:
- Generally, this will take a lot of time and effort, which exceeds the boundaries of the planned timeline
2.2.3 Tailored Delphi Technique
This research tailors the Delphi technique. As a pre-step, a decent literature review needs to be conducted to give the researcher an in-depth understanding of the problem domain and related work. This literature review combines academic works (such as visualization scientific foundations, former qualitative works, etc.) with whitepapers published by global IT market leaders. The main steps of this research are shown in Figure 8.
Figure 8. Main steps of the research
2.3 Literature review
A literature review needs to be conducted following a proper method. It needs to reveal the previous related works that have identified results related to the research motivation, for example, any recent research study that indicates specific challenges in a case study, or a CTA work.
There are different approaches to conducting a literature review. The selected approach is based on the one introduced by Brocke, et al. (2009), as it defines a clear process for conducting a literature review. Besides, this approach contains a very important step called "conceptualization of the topic", which is a great basis for further searches.
The purpose of the review is to:
- Get insight regarding the research questions
- Help us adjust the research questions
- Find the gaps in previous research studies
- Help in better conducting the panel interviews
Figure 9. Framework for literature review (Brocke, et al., 2009)
[Figure 8 steps: 1. Literature review; 2. Identify & select panel members; 3. Brainstorming (round one of the interviews); 4. Review & narrow down factors; 5. Secondary rounds of feedback gathering]
The first step, definition of the review scope, is fundamental, as we need to be clear about what needs to be done in order to conduct a review that helps achieve the research target. The scope of the review is the same as described earlier in this chapter. In the second step, a conceptualization of the topic is needed, as it will be the basis for finding the relevant search terms in the next step. The conceptual map of the research is demonstrated in Figure 10.
Figure 10. Conceptualization of the topic
For the next step, the literature search, the search keywords need to be prepared, as well as the list of journals and databases on which to perform the search. Figure 11 displays this concept.
Figure 11. Process of literature review (Brocke, et al., 2009)
[Figure 10 concept nodes: Visualization Challenges; Visualization Techniques (Dashboard, Reports, Charts, Gauges, etc.); Cybersecurity; Cloud; Visualization Tools (Azure Sentinel, Splunk, etc.)]
The materials for conducting the review are presented in Table 4. We started by identifying the journals related to the conceptualized topics.
Table 4. Literature Search Materials
Selected
Journals
Asian Spine Journal
Elsevier Journal of Network and Computer Applications
Elsevier: Computers & Security
IEEE Communications Surveys & Tutorials
IEEE Computer Graphics and Applications
IEEE Journal of computers and security
IEEE Transactions on Intelligent Transportation Systems
IEEE Transactions on Visualization and Computer Graphics
Information Systems Research
International Journal of Doctoral Studies
Journal of Clinical Epidemiology
Journal of Computer Graphics forum
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Springer Journal of Computing in Higher Education
Springer Journal of Digital Imaging
Springer Journal of Information Systems Frontiers
Springer Journal of medical systems
Springer Journal of Visualization
Databases
and Search
Engines
ACM Digital Library
Elsevier
Google Scholar
IEEExplore Digital Library
Microsoft Academic Search
Scopus
SpringerLink
The IEEE Symposium on Visualization for Cyber Security (VizSec)
Selected
Keywords
Cybersecurity Visualization Challenges
Design Security Dashboard
Security Analytics
SIEM Visualization
Visualization Challenge
Visualization Dashboard
Visualization Evaluation
The selection of journals was based on different criteria, such as relevance, accessibility to the researcher, year of publishing, number of citations, and so on. Figure 12 displays the process of selecting the sources for the literature review.
Figure 12. The process of selecting review sources
In the literature analysis and synthesis step, a deep study of the selected set of collected papers takes place. Backward and forward referencing has been a great help in many cases, specifically when analysing review papers. The outcome of this step is explained in detail in chapter 3.
Finally, the research agenda step structures the output of the previous step with regard to further research and extensions of the performed review.
2.4 Identify and Select Panel Members
In the Delphi method, we keep running the iterations until we feel we have achieved a group consensus (Keeney, et al., 2001). Since it is almost all about the group consensus/wisdom, it is vital to select proper SMEs as panel members who hold the right qualifications.
Table 5 demonstrates the mandatory and "nice to have" qualifications for selecting panel members. It is worth mentioning that in some sources, this is called the "knowledge resource nomination worksheet (KRNW)" (Okoli & Pawlowski, 2004).
Table 5. Panel member selection criteria
Qualification Rationale Type
Professional role in one of the following:
• Information Security domain or tightly
related, such as Cybersecurity Architect,
security analyst, Infrastructure Architect,
network technician, etc.
• Software Architect within Dashboard
Design projects
• Dashboard specialist / Visualization tool
specialist
• Relevance of experience helps capture relevant input
• It is directly related to the
research questions: Challenges
of users
Mandatory
At least 3 years of experience In such a period, SMEs have faced
some real-world challenges of the
problem
Mandatory
Knowledge and expertise in cloud environment projects
Be able to compare on-prem vs.
cloud
Mandatory
Analytics skills A better understanding of the data
and how it can be gathered and
processed
Nice to
have
Being a member of an Incident response
team
Have more in-depth knowledge
regarding the reporting process
Nice to
have
[Figure 12 steps: performing a keyword search in each journal and database; selecting the top 10 (approx.) results, screening abstracts, and removing old/irrelevant results; filtering out based on the research questions]
According to these qualifications, 12 experts were selected. All experts have been working in large enterprises (more than 1000 employees). Except for one expert who was located in the USA, the rest were in Sweden. Table 6 demonstrates the panel members' demography. The average experience of the participants is 11.9 years.
It is worth mentioning that a diverse range of experts was purposefully chosen to prevent issues such as getting only technical-oriented or management-oriented answers. The participants were all asked not to consider only the organization they are currently engaged with, but to elaborate on their past knowledge and insights as well. The insights and lessons learned gathered during the interviews are presented in the appendices of this document.
Table 6. Panel members demography
Nr. Current Role Years of
Experience
Related roles (former/present)
1 Senior Security Adviser 10 CISO, GDPR deployment expert
2 Cybersecurity Analyst 5
3 Senior Security Architect 10 Network Admin and Technician
4 Enterprise Security Architect 25 Network Security, Infra Admin
5 Cloud Architect 7 Security Architect
6 Infra Solution Architect 19
7 Cybersecurity specialist 20
8 IT Security Specialist 10
9 Security Analyst 8
10 Monitoring Architect 6 Managing a team of +10 people who
design monitoring dashboards
11 Dashboard engineer (Splunk
Specialist)
3
12 Senior Security Professional
(OT)/ Managing consultant
20
2.5 Brainstorming
This can be considered round one of the Delphi process. Usually, it starts with a pre-prepared questionnaire to initiate the process of brainstorming and idea generation (Keeney, et al., 2001). At this step, the researcher needs to make the research topic clear to the individual panellists, then address the issues and ask for feedback. Primarily, it starts with a set of open-ended questions (Keeney, et al., 2001).
To perform this round of interviews, a list of potential questions was prepared based on the literature review. The questions mostly targeted the experience and opinions of experts regarding different challenges in security visualization dashboards.
In the first round of interviews, we focused on open-ended questions designed based on the research questions, so the experts could express their thinking and ideas in a broader way. They were encouraged to elaborate as much as possible. During the interviews, the experts were assured that their statements would be anonymous and that no personal/organizational information needed to be provided. Each interview took between 45 and 75 minutes.
The questions included (but were not limited to) the following:
• Tell me about your experience of using visualization dashboards: your way of usage, common strong points you see in visualization dashboards, common weak points, etc.
• What are the major challenges in security visualization?
• Have you had a problem with understanding what a security visualization dashboard
displays? What were the root causes?
• Generally speaking, do you see any difference in cloud security visualization vs. On-prem?
Why?
• What features do you think are missing in the current security visualization tools?
• In your opinion, what features are we missing in nowadays visualization tools?
• Have you heard other people complaining about visualization dashboards? What do you
think are their concerns?
• If you would like to buy a visualization tool, which features are the most important to you
as a customer?
2.6 Review and Narrow Down Factors
The results from the previous phase need to be collected, cleansed, and clustered. There might be some irrelevant items addressed by panel members that can be omitted. On the other hand, the researcher will form an idea of the different coherent clusters of opinions. These can be used later on to design more specific questions.
The practical approach for this research is the following process:
1. Gathering all the statements expressed by the panel members in the brainstorming (round one interviews)
2. Removing duplicate or irrelevant statements
3. Categorizing the remaining statements by labelling similar ones
4. Reviewing the categories and merging them in case of similarity or overlap
After performing the above process, we expect major categories of challenges to emerge (see the sketch after this paragraph).
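A minimal Python sketch of steps 2 and 3, assuming the statements have already been transcribed. The keyword-to-category mapping below is purely illustrative of how similar statements get labelled; the actual coding in this study was done manually.

```python
# Illustrative sketch; category keywords are assumptions, not this
# study's coding scheme.
statements = [
    "Dashboards show too many alerts at once",
    "Dashboards show too many alerts at once",      # duplicate, dropped in step 2
    "Hard to map charts to business KPIs",
    "Too much manual tuning before the dashboard is useful",
]

category_keywords = {
    "Information Overload": ["too many", "overload"],
    "KPI Definition": ["kpi", "business"],
    "Manual Work": ["manual"],
}

unique = list(dict.fromkeys(statements))            # step 2: dedupe, keep order

labelled: dict[str, list[str]] = {}
for s in unique:                                    # step 3: label similar items
    for category, words in category_keywords.items():
        if any(w in s.lower() for w in words):
            labelled.setdefault(category, []).append(s)

for category, items in labelled.items():            # step 4 would merge overlaps
    print(category, "->", items)
```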
2.7 Secondary Rounds of Feedback Gathering
Having more accurate factors in hand from the previous step, it is now time for the second run of feedback gathering. In this phase, the questions are not open-ended, and panel members are instead asked to express their opinions on specific items. In some cases, they need to approve/reject or rank the items.
3. Related Works and Research Gaps
There has been much research into different aspects of visualization. Although not all of it relates to the field of cybersecurity, the challenges identified there can still be useful.
As a relevant example, Olshannikova, et al. (2015) provided an intensive overall review of the different aspects of data visualization, including big data visualization, different types of visual elements (e.g. colour, shape, size, etc.), and even cognitive psychology principles. From such a universal review, they concluded that an optimal visualization method has to balance the use of different criteria; otherwise, the results might become too complex in case of an overload of visual elements and, eventually, not human-readable.
3.1 Inherent Data Visualization Challenges
There are some inherent challenges in many kinds of data visualization, including that of cybersecurity data objects. These challenges are agnostic to the tools, development approaches, and data domain. It is no wonder that such challenges, the high volume of data, the complexity of the data, and so on, are common across many related works.
In one study, Li, et al. (2020) worked on the implementation of an event-sequence visualization tool called SSRDVis. The tool is supposed to visualize and detect abnormal cases using a "rare detection module". One challenge they addressed during their work was extracting meaningful information from a massive amount of sequential data, as sequential data are highly complex. They mentioned that although there are some methods for the analysis of event sequences, creating an overall view over several event sequences is still a challenge. For instance, they mentioned the challenge of applying machine learning methods (e.g. clustering) to sequences directly (even though such methods work well in vector space).
Eventually, they proposed a framework for the visualization of events that considers the variance of time. According to them, users would like to know "What does a typical event look like? How do the features and events interact with each other?"
In their work, besides the challenge of processing a massive number of features and timesteps, they also addressed the complexity of event processing over timesteps, as the rate of occurrence varies between events and many identical events appear at different intervals. Another challenge is the detection of events based on features spanning up to hundreds of timesteps. Figure 13 demonstrates their proposed framework and the process of visualizing the events.
Data quality seems to be another challenge in the context of visualization. This challenge was addressed by Borovina and Ferreira (2017), along with the identification of data defects. For instance, detecting some data defects requires knowledge of the data context, meaning this needs to be supervised by a human. On the other hand, just visualizing the data might not necessarily help to detect some of the data defects. In their case study, they followed a qualitative approach to derive a set of implications for a visual data-quality assessment system.
Figure 13. A framework of event visualization (Li, et al., 2020)
3.2 Human Factor Challenges
A fundamental challenge of how to transform raw data into a "human-understandable format" without losing its meaning has been addressed by some researchers (Paul, et al., 2015).
Besides that, many recent related works address a common challenge: in the traditional data-visualization design process, the end-user is either ignored or not involved. For instance, Sethi and Wills (2017) mentioned that the development of visualization tools in cybersecurity suffers significantly from insufficient involvement of the end-user, as well as from a lack of standardization and guidelines in both design and evaluation. They also addressed the issue of the generic design of visualization tools, which in some cases makes them ineffective for some users.
In other research, Vieane, et al. (2016) tried to address the human factor gaps in different areas of cybersecurity. A major gap they mentioned is that the study of human cognitive aspects in cybersecurity is still in its infancy and more research is needed in this area.
Studies that include both the implementation of a tool and its evaluation can provide significant knowledge and insight for discovering the challenges users face nowadays, even with market-leading products. In some cases, researchers start implementing a tool with an architecture in mind, but after implementation they do not get the results they wanted from the users; in other words, users did not necessarily behave as expected. Discovering the various root causes of such failures can be a great help in identifying visualization challenges.
Chen, et al. (2014) not only addressed the issue of the "traditional single expert analysis system", but also designed and launched an online tool to highlight the importance of collaboration and higher engagement of more people. In their development, they mentioned two major challenges: the complexity of the input dataset, including its interrelations, and the dynamic handling of threshold settings for the input.
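The "dynamic handling of threshold settings" can be illustrated with a common generic recipe, a rolling mean plus k standard deviations; this is a textbook approach, not necessarily the one Chen, et al. implemented.

```python
from statistics import mean, stdev

# Generic adaptive-threshold sketch (not necessarily Chen et al.'s approach):
# flag a value as anomalous when it exceeds rolling mean + k * rolling std.
def is_anomalous(history: list[float], value: float,
                 k: float = 3.0, window: int = 20) -> bool:
    recent = history[-window:]
    if len(recent) < 2:
        return False                      # not enough data to estimate spread
    threshold = mean(recent) + k * stdev(recent)
    return value > threshold

traffic = [100.0, 104.0, 98.0, 101.0, 99.0, 103.0, 97.0, 102.0]
print(is_anomalous(traffic, 180.0))       # True: far above the adaptive threshold
print(is_anomalous(traffic, 105.0))       # False: within normal variation
```

Because the threshold is recomputed from recent history, it tracks shifts in the baseline instead of relying on a fixed, manually tuned cut-off.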
A visualization tool called Charticulator, implemented by Ren, et al. (2019), is based on a constraint-based layout approach. The tool was implemented using HTML5 and other front-end technologies in order to enable users to design their own visualizations interactively. It transforms the specification of the desired chart into a set of mathematical constraints, then computes and generates visualization layouts that satisfy those constraints, using a constraint-solving algorithm.
After implementation, they performed a user acceptance test and noticed three challenges (besides the natural challenge of low speed due to the constraint-solving algorithms):
• Usability: how the user can place, align, rotate the text, etc.
• Conditional visibility of texts
• Legend creation: users had difficulty adding legends to the charts
They tried to fix these challenges by optimizing the tool itself. However, they noticed that even with the fixes they applied, the third challenge persisted: although the tool had a button for adding legends, users were still experiencing difficulties in adding them. A valuable insight by one of the users gives a very important clue: "I'm still thinking in a regular [Microsoft] Excel format." (Ren, et al., 2019). This clearly shows how the human factor and the user mindset can have an impact on the ways of working.
3.3 CTA Research
Cognitive task analysis (CTA), as a method that focuses on the unobservable task activities of people (Wei & Salvendy, 2004), might be an option for understanding user behaviour and, eventually, designing effective visualization based on that. Of course, we should consider that CTA results might vary significantly based on training time, required skill, and so on (Wei & Salvendy, 2004).
One example is a CTA work by Champion, et al. (2012) that identified three different team
performance factors in cybersecurity:
• The structure of the team
• The communication of the team
• Information overload.
The last item is a very important factor for us, as this huge load of information is addressed not only in other CTA works such as Gutzwiller et al. (2016), but also in other reviews and tool developments (Chen, et al., 2014; Sethi & Wills, 2017; Sharafaldin, et al., 2019; Staheli, et al., 2014; Bakirtzis, et al., 2018).
Although the mentioned CTA works were not directly about visualization, the findings can indirectly be a great insight for any visualization designer, showing how a user can be drowned in a massive amount of information while performing a simple task, and how this impacts performance. Another important key finding of Champion et al. is that in cybersecurity tasks, situation awareness is moderate to low (Champion, et al., 2012). Besides, Gutzwiller et al. (2016) reported another key finding: as security analysts need the information just at the time of decision formulation, they have to keep pieces of the information "in the mind". These points can again be an indirect but vital insight for visualization designers, as the capabilities of feature selection, tailoring, and customization are mentioned in other research studies (Chen, et al., 2014; Fischer & Keim, 2014; Sharafaldin, et al., 2019; Wagner, et al., 2015).
3.4 Tool Specific Behaviours
It seems some challenges arise due to the specific behaviour of the visualization tools. In some cases, the approach to designing a dashboard can be improved (e.g. implementing a new approach by adding an action history layer (Wu, et al., 2020)). In other cases, the challenges can be resolved by optimization and fixes.
Considering the vital role of any SIEM tool, Sönmez and Günel (2018) performed an evaluation of well-known commercial SIEM tools on the current market. Table 7 demonstrates their comparison summary. As clearly appears, the tools behave differently in various areas, which might cause challenges for the users. For instance, according to their comparison, some tools are more difficult to integrate with custom data.
Table 7. Comparison of famous SIEM tools (Sönmez & Günel, 2018)
According to a study by Sarno, et al. (2016), SIEM tools have "three principal weaknesses" while being used in critical infrastructure protection:
• In the case of different security policies, SIEM tools often do not have the capability to resolve policy conflicts.
• Not all current SIEM tools are able to monitor, identify, and control all possible data flows for a perimeter.
• Ensuring the integrity of the history and logs, which are going to be used for further forensic activities; this integrity could be ensured via encryption or signing of the data.
Many different visualization tools were investigated by Wu, et al. (2020). In their work, they noticed that most of them do not record what actions were performed over the course of conducting a data visualization. This can be interpreted as a challenge, as users are limited to some undo/redo actions and lack a semantic chain of actions. Having identified this challenge, they proposed a new framework called VisAct consisting of three different layers, including an "Action Tracker". Figure 14 demonstrates their proposed framework and its different layers.
Figure 14. VisAct framework architecture (Wu, et al., 2020).
A noticeable challenge that emerged during the implementation of the Charticulator tool by Ren, et al. (2019) was people's mindset regarding the direct manipulation of charts. According to them, people who have worked with vector graphic tools (e.g. Adobe Illustrator) believe that "everything is manipulable", so they expect to be able to modify the outcome at any time, while the visualization tool created by them (based on constraint-solving algorithms) could not possibly do that, as it was designed to generate the layouts mathematically. They also mentioned the user interface challenges their users had, for example: "It was sometimes difficult determining what I needed to click to reveal other properties/options."
VisComposer is another tool for information visualization, by Mei, et al. (2018). In their work, they focused on programmability capabilities as well as the user interface and user interactivity. After the evaluation of their work, they interviewed the participants. Although they got some positive feedback, half of the participants had experienced UI challenges due to its complexity, as it required a lot of user interactions to perform the visual mapping and transformation of the data.
3.5 Evaluation Challenges
In a study, Elmqvist and Yi (2015) worked on the evaluation of data visualization. They gathered
a list of 20 evaluation patterns, in either qualitative or quantitative form, in 5 different categories:
• Exploration
• Control
• Generalization
• Validation
• Presentation.
In their work, they tried to provide approaches for evaluating different visualization solutions. Although this is an important matter, it still seems to be at an abstract level, and putting it into practice is not an easy task.
The evaluation of visualization approaches and dashboard tools is another challenge. This was studied by Staheli, et al. (2014), who addressed eight evaluable components of a visualization practice. Figure 15 demonstrates the components and their connections.
Figure 15. Evaluable components of a visualization practice (Staheli, et al., 2014)
In a related work, Sharafaldin, et al. (2019) introduced an evaluation framework for network
security visualization. Table 8 demonstrates the summary of their criteria.
Table 8. Evaluation criteria conducted based on (Sharafaldin, et al., 2019)
Evaluation Criteria | Description
Data source coverage | How sophisticated is the visualization tool at handling multiple sources of input data?
Interoperability | How sophisticated is the visualization tool at integrating its services with other tools, e.g. exchanging information?
Flexibility and Interactivity | How easily can users interact with the system?
Scalability | How does the tool handle big data, and how does it visualize big data efficiently?
Machine Assistance | Does the tool really facilitate users in solving their problems? Is it popular among users?
Validation evaluation | Given a set of use cases, has the tool been evaluated in practice? The use cases cover: how an individual cybersecurity user uses the tool; how a team of cybersecurity users collaborates using the tool; how it can be used to solve a real-world problem; and whether the tool has documentation.
Attack Coverage | How many attack patterns are covered by the tool?
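As an illustration of how such criteria could be operationalized, here is a minimal weighted-scoring sketch over a subset of the criteria. The tool names, scores, and weights are invented; Sharafaldin, et al. do not prescribe this particular arithmetic.

```python
# Invented scores (0-5) per criterion for two hypothetical tools;
# weights and numbers are assumptions, not part of the framework itself.
criteria_weights = {"data_source_coverage": 0.3, "interoperability": 0.2,
                    "scalability": 0.3, "machine_assistance": 0.2}

tools = {
    "Tool A": {"data_source_coverage": 4, "interoperability": 3,
               "scalability": 2, "machine_assistance": 5},
    "Tool B": {"data_source_coverage": 3, "interoperability": 4,
               "scalability": 5, "machine_assistance": 2},
}

for name, scores in tools.items():
    total = sum(criteria_weights[c] * s for c, s in scores.items())
    print(f"{name}: {total:.1f} / 5")     # Tool A: 3.4, Tool B: 3.6
```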
Figure 16 displays a classic approach to measuring the efficiency of the mental activity needed to understand a phenomenon. In this approach, high efficiency is achieved when the reading performance is high while the mental effort is low (Gerven, et al., 2003). The efficiency score (E) of a user is the perpendicular distance from the E = 0 line in a plane whose axes are the standardized (z-score) learning performance and mental effort; comparing the means of three different groups (A, B, and C) against the E = 0 line then serves as the measure:

E = (Z_Performance − Z_Mental Effort) / √2
Figure 16. Instructional efficiency measurement (Gerven, et al., 2003)
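A worked sketch of the formula in Python, with invented scores: performance and mental-effort ratings are standardized into z-scores across all participants, then combined as in the equation above.

```python
from statistics import mean, pstdev

def z_scores(values: list[float]) -> list[float]:
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Invented example data: one entry per participant.
performance   = [70.0, 85.0, 60.0, 90.0]   # e.g. reading-performance scores
mental_effort = [6.0, 4.0, 7.0, 3.0]       # e.g. self-rated mental effort

efficiency = [(zp - zm) / 2 ** 0.5
              for zp, zm in zip(z_scores(performance), z_scores(mental_effort))]
print([round(e, 2) for e in efficiency])   # positive = efficient reading
```

As expected from the definition, the participants with high performance and low effort come out with positive E, and vice versa.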
SvEm is a security visualization effectiveness measurement framework introduced by Garae, et al. (2018). In their proposed framework, the main effectiveness metrics are:
• Visual clarity
• Visibility
• Distortion rates
• User response (viewing) times
They argued that effectiveness measurement in many other related works is usually based on technical measurements such as performance, image quality, clarity, etc., while effectiveness does not necessarily mean that. According to them, the more suitable visualizations are those whose story and rationale the viewing user can understand without needing external help in interpretation.
To do so, they tried to take cognition, perception, and insight into account. Figure 17 displays the components of their proposed framework and their relations.
Figure 17. Components of SvEm Model (Garae, et al., 2018)
3.6 Design Challenges
In one study, Paul, et al. (2015) worked on an alternative, human-centred design approach. In contrast to the "traditional" approach, with a design based on answering pre-defined problems, they first developed a visual concept with the end-user (human) in focus, regardless of the input data or user requirements. They mentioned that this approach works better for new problems that do not have a strong solution in place, and that it is not supposed to replace traditional approaches.
Figure 18. The traditional process of visualization: Data → Analytics → Visualization → Context → Human (Paul, et al., 2015)
Other research, by Bakirtzis, et al. (2018), addressed a simple yet vital challenge in the design phase of a visualization tool: the lack of access to real security data for the system designers. They also highlighted that sometimes the lack of information is due to historical logs, such as patches applied to resolve a vulnerability in an older version of a software system.
Some related visualization challenges were addressed by Marty (2008), such as poor data quality, where the data might not contain the information it is supposed to have for the visualization. According to him, another challenge is deciding on the amount of data to be visualized in each graph in a dashboard.
In his book, Marty mentioned how vital it is to select the proper tool (chart) to display the right information. Figure 19 displays a simplified decision tree for choosing the right type of diagram to use in a dashboard. As demonstrated in the figure, it is primarily important to know what is going to be explained (from the tool perspective) and what needs to be understood by the user (from the user perspective).
Figure 19. Sample decision-tree for choosing the right graph (Marty, 2008)
An important aspect of this approach is considering the number of dimensions of the input data. For example, a histogram might be a good option for comparing 3 different dimensions (e.g. comparing different urban development factors in three big cities), but it will not be the best option for comparing hundreds of items; in that case, a treemap diagram is a much better approach. Figure 20 displays a treemap chart that visualizes the status of 10,000 records of a firewall, according to whether they were passed or blocked (a minimal code sketch of this idea follows below).
Figure 20. Visualizing approx. 10,000 records of a firewall (Marty, 2008)
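As a rough illustration of why a treemap scales where a histogram does not, the following sketch aggregates a synthetic firewall log of about 10,000 records and renders the pass/block groups as a treemap; the column names, port values, and the choice of Plotly Express are assumptions for this example, not taken from Marty (2008).

```python
import pandas as pd
import plotly.express as px

# Hypothetical firewall log with roughly 10,000 records.
log = pd.DataFrame({
    "action": ["pass", "block", "pass", "block", "pass"] * 2000,
    "dest_port": ["443", "22", "80", "3389", "53"] * 2000,
})

# Aggregate the records into counts per (action, port) group; the treemap
# then encodes each group's record count as a rectangle area, which scales
# to far more items than a histogram's handful of bars.
counts = log.groupby(["action", "dest_port"]).size().reset_index(name="records")
fig = px.treemap(counts, path=["action", "dest_port"], values="records",
                 color="action")
fig.show()
```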
A related challenge, addressed by Görtler, et al. (2018), concerns generating treemap diagrams when visualizing deep hierarchical data that include uncertainties. To resolve the issue, they developed a model to visualize data (hierarchical in nature) that are affected by uncertainty. Figure 21 presents how their method generates a treemap chart using a model that propagates the uncertainty characteristics of the data through a hierarchical structure.
Figure 21. Generation of a treemap chart using Bubble Treemap method (Görtler, et al., 2018)
Figure 22 displays a time-table graph used to compare traffic over time. As it appears, such a diagram is capable of highlighting gaps and patterns that result from periodic behaviours.
Figure 22. A Time table graph that shows behaviour patterns (Marty, 2008)
3.7 Other Challenges
Sharafaldin, et al. (2019) identified nine different categories of network security visualization, based on classifying recently published works:
1. Host/Server monitoring
2. Attack patterns
3. Internal/External and Internal/Internal monitoring
4. Routing behaviour
5. IDS monitoring
6. Configuration visualization
7. Steganography visualization
8. Proxy server monitoring
9. Encrypted traffic monitoring
Although this list covers many aspects of visualization, it clearly shows that cloud-specific visualizations are missing. For example, according to Sharmaa, et al. (2016), cloud providers usually provide security “as a service”, such as monitoring of identity and access management.
A significant challenge addressed by Wagner, et al. (2015), who performed a survey of visualization systems with a focus on malware analysis, is the tremendous number of newly identified malware samples and their extensive growth rate.
They divided malware visualization into three categories:
• Individual malware analysis
• Malware comparison
• Malware summarization
As a result of their work, they compiled the “future challenges” presented in Table 9.
Table 9. Visualization challenges according to (Wagner, et al., 2015)
Overlap of malware visualization categories: The different categories (mentioned above) overlap. The visualization tool has to be very sophisticated to switch between individual malware analysis and comparative analysis.
Handling multiple data sources: How a tool gets its input data from multiple sources (integration) and combines the data.
Capability of customization and tailoring: How users can tailor and customize the tool to create their own visualization dashboard.
Capability of interactive adaptive changes: Does the visualization tool have the capability of detecting and using knowledge of expert users' behaviour?
Segregation of visualization and data analytics: In most systems, these two are so tied together that scalability and customization based on the problem become very difficult.
3.8 Research Gaps
Considering the previous related works, it seems that considerable research has been performed on visualization challenges in the field of cybersecurity. However, cybersecurity, and consequently its visualization, is changing rapidly nowadays due to the expansion of cloud environments and digital transformation.
Thus, one obvious gap is what the visualization challenges look like in the current situation, where many companies have adopted cloud technologies while keeping their on-premise infrastructure. Are they still experiencing the previous challenges already covered by some of the related works, or has there been a change in either the major challenge headlines or their priority?
Considering the advances in cloud environments and the many companies migrating their infrastructure and applications to cloud-based environments, how does this differ from classical on-premise infrastructure? Is there any difference between visualizing cloud data and on-premise data? And considering the existing challenges, how likely are they to occur in the real world?
3.9 Research Questions
The research gaps show that we need an updated investigation of the current situation regarding cybersecurity data-visualization challenges. From the previous related works, we cannot infer which challenges still exist today, or whether new challenges have emerged. Therefore, answering the following research questions will greatly help to verify known challenges and to identify any recently evolved major challenge:
1- What are the main challenges that security professionals face in using the current tools that visualize security-related data objects?
2- How significant are these challenges in real-world scenarios?
4. Empirical Work
This section elaborates on the details of the empirical research study.
4.1 Results from the First Round of Interviews
After performing the first round of interviews, explained in chapter 2.5, more than 120 statements were gathered from the panel members. Applying the narrow-down step, explained in the chapter Review and Narrow Down Factors, the results were condensed into 42 grouped statements in 13 categories. The categories are covered alphabetically in the following sub-chapters.
4.1.1 Business Context Correlation
The correlation of dashboards with the company's strategy, vision, risk, and decision-making processes seems to be an obvious challenge to many experts. This also concerns how to connect the outcome of the visualization dashboards to the business context in order to use them for a better understanding of the organization's status. This looks more like a management and organizational challenge. Some experts mentioned that organizations are unable to answer some key success-factor questions such as:
• Do we, in the end, have a proper security practice in place or not?
• Is our security posture satisfactory?
• Are we suffering from a skills gap in the field of cybersecurity?
• Do our assets have proper protection?
• How can we “see” the budget spent on internal security training in the dashboards?
The other major topic here is how the results could be used in decision-making processes, for example, when the management of an organization is about to decide whether or not to prioritize a specific function (e.g. risk management, the security incident team, etc.).
According to the first round of results, another discovered challenge in this context is the missing “management view”, as security visualization dashboards are usually designed for technical security professionals.
According to one of the experts: “Managers are interested in how secure we are, and not how many incidents or vulnerabilities we had covered”.
4.1.2 Customization of general-purpose tools
Discussions show that there is a fundamental challenge in today's dashboard visualization tools. Vendors try to build a product that can be sold to as many customers as possible. Hence, it has to be generalized in a way that covers many different usages. On the other hand, experts believe that custom products are usually a better fit for their domains, as they are designed and adjusted specifically for those domains.
In the case of having many different custom products for each domain of usage, other challenges such as price, maintenance, and the integration of the tools to get the big picture will emerge.
The outcome of this trade-off is general-purpose systems that security experts need to customize to their needs, which of course takes time and effort.
Experts mentioned that visualization tools usually come with some pre-defined templates for the most common reports and alerts, but in many cases, they need to spend time creating their own custom reports.
They also said that requesting changes in the products from vendors costs a lot, as such changes are considered customer-specific requests. The chance of getting changes into the well-known visualization tools is low, as it depends on many different factors such as cost, time to delivery, backward compatibility, the market, and so on.
4.1.3 Dashboard Design
This category illustrates the challenges in designing the dashboards. It covers both the challenges that designers face and those that end-users attribute to the design.
From the end-user perspective, the following challenges were addressed:
• Sometimes there is no meaningful story behind the visual elements, and the end-user needs some explanation to understand the chart.
• Sometimes the visual elements are not clear, or they are missing explanations of abbreviations.
• In the case of very general dashboards, people might need to export the data and create their own visualizations.
In the case of designing a dashboard, experts addressed the following major topics:
• People's preferences:
People have a vast variety of preferences. For example, some people really like pie charts while others avoid them.
• Customization:
This is very similar to the challenge already covered in chapter 4.1.2.
In big enterprises, due to the vast number of required visual reports, there is a conflict between the need for general-purpose reports and particular-usage reports. One challenge of creating these custom reports is the additional workload for maintenance, troubleshooting, change management, and so on.
• Change of requirements and scope creep:
In visualization projects, experts feel that new requirements and change requests sometimes lead to scope creep.
4.1.4 Data Quality
Experts addressed various aspects of data-quality challenges. Some of the challenges can result in low trust in the visualization tools, while some are due to the behaviour of the visualization tools themselves.
1. False positives: False positives are very common in visualization dashboards. Of course, they seem to be a challenge of the input data or of the processing engine of the SIEM tools.
2. Dependencies: While gathering data from different sources, there is always a dependency on them. For example, if an external link to a system that provides potential threats breaks, the data will not land in the SIEM tool, and thus the visualization dashboard will not show current information.
3. Data cleaning and pre-processing: Processing the massive input data is a challenge.
4. Aggregation and standardisation: Data gathered from different sources do not necessarily share the same quality standards. They might also have different formats that need to be standardised (see the sketch after this list).
5. Data-quality challenges in master data: Sometimes, the master-data system suffers from low-quality data (e.g. in an asset management system, some assets might not have correct data fields such as ID, owner, usage, etc.).
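As a minimal sketch of the aggregation-and-standardisation step above, the following example maps two hypothetical source formats onto one common event schema; all field names, formats, and severity mappings are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SecurityEvent:
    # Hypothetical common schema that downstream dashboards consume.
    timestamp: datetime
    source: str
    severity: str
    message: str

def from_firewall_csv(row: dict) -> SecurityEvent:
    # One adapter per source maps its vendor-specific fields to the schema.
    sev = {"0": "low", "1": "medium", "2": "high"}.get(row["prio"], "unknown")
    return SecurityEvent(
        timestamp=datetime.fromtimestamp(int(row["epoch"]), tz=timezone.utc),
        source="firewall", severity=sev, message=row["msg"])

def from_ids_json(obj: dict) -> SecurityEvent:
    return SecurityEvent(
        timestamp=datetime.fromisoformat(obj["ts"]),
        source="ids", severity=obj.get("severity", "unknown").lower(),
        message=obj["alert"])

events = [
    from_firewall_csv({"epoch": "1700000000", "prio": "2", "msg": "port scan"}),
    from_ids_json({"ts": "2023-11-14T22:13:20+00:00",
                   "severity": "High", "alert": "SQLi attempt"}),
]
print(events)
```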
4.1.5 Human Factor
The factors addressed in this category relate to the behaviour of the users. Experts mentioned that although technical integrations are possible, different teams sometimes do not collaborate enough. This could be due to team loads, organizations' internal processes and bureaucracies, and even organizational politics.
Another identified challenge is that people's mindset is usually conditioned to react mostly to negative situations in data visualization. For example, people tend to double-check when the dashboards show red flags. The risk in such behaviour is that, due to a technical integration difficulty, some negative results might not be detected or transmitted to the visualization tool, while the end-user assumes there is nothing negative.
This factor was also a difficult one, as some participants with more years of experience had seen more human-factor challenges, while the other participants were neutral.
4.1.6 Information Overload
Two major challenges were addressed by the panel members:
1. How to get the right level of detail to present to the users?
In different scenarios, users get much more detail than they need to perform a specific task, and they would like to filter some of it out.
Note: Some tools offer a “drill-down” capability, meaning the end-user sees a high-level report and can then click on a part of it to see more details (see the sketch after this list). Although users consider this a very good approach, in some cases, after one or two levels of drilling down, the details become either confusing or simply too much, besides taking a long time to prepare and display.
Some users mentioned that some tools, or some designed dashboards, do not support removing the data columns that are not needed. Therefore, they have to do it manually in a third-party tool like MS Excel.
2. Too many visual elements
This is very common when users have to use “all in one” dashboards. Sometimes, very general-purpose dashboards display too many elements (e.g. for monitoring a network).
In such situations, the user might perceive conflicts/paradoxes between different visual elements, and making a meaningful correlation among those elements requires mental processing. According to one expert: “I need visualization to make it easier, not more complicated”.
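The following is a tiny sketch of the drill-down pattern described in the note above, using two aggregation levels over hypothetical alert data.

```python
import pandas as pd

# Hypothetical alert counts, tagged with two levels of detail.
alerts = pd.DataFrame({
    "region": ["EU", "EU", "US", "US", "US"],
    "system": ["mail", "vpn", "mail", "vpn", "web"],
    "count":  [12, 3, 7, 21, 5],
})

# Level 0: the high-level view shown on the dashboard.
print(alerts.groupby("region")["count"].sum())

# Level 1: "clicking" a region drills down one level into its systems.
print(alerts[alerts["region"] == "US"].groupby("system")["count"].sum())
```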
4.1.7 Integration and Interoperability
Nowadays, every company utilizes many different IT components across a variety of domains. Hence, integrations among these components are inevitable in order to aggregate the data and produce meaningful results. This is even more obvious in the case of having master data.
Experts identified several different challenges, as follows:
• In big enterprises, it is common for different teams to have their own tools and dashboards. Although this diversity can be good for using customized tools, it adds complexity when aggregating the data and integrating different software components for automated data gathering.
• Integrating old legacy systems is challenging, as they might not support modern integration techniques.
• Delays in getting data from different sources, especially external ones.
• Tools might overlap in some areas. One example is using one tool for monitoring some network devices and another tool for updating some of those devices, so some devices need to be registered twice. In such cases, conflicting entries in the tools can lead to integration issues, as they need to be correlated and aggregated (see the sketch after this list).
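As a minimal sketch of the last point, the following example correlates overlapping device inventories from two hypothetical tools; the inventories, column names, and the key-normalization step are assumptions made for illustration.

```python
import pandas as pd

# Hypothetical device inventories from two overlapping tools.
monitor = pd.DataFrame({"hostname": ["fw1", "sw2", "rtr3"],
                        "status":   ["up", "up", "down"]})
patcher = pd.DataFrame({"hostname": ["FW1", "rtr3", "srv9"],
                        "patch_level": ["2020-03", "2019-11", "2020-01"]})

# Normalize the shared key before merging, then use an outer join so that
# devices registered in only one tool remain visible as gaps.
monitor["hostname"] = monitor["hostname"].str.lower()
patcher["hostname"] = patcher["hostname"].str.lower()
merged = monitor.merge(patcher, on="hostname", how="outer", indicator=True)
print(merged)   # "_merge" shows which tool(s) know about each device
```

The outer join keeps devices known to only one tool visible, which is exactly the kind of conflicting, twice-registered entry the experts described.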
4.1.8 KPI Definition
Key Performance Indicators (KPIs) are very common in many organizations. They are supposed to aggregate the data and display the status of a specific parameter. Thus, they are heavily used in visualization dashboards.
Experts mentioned some obvious challenges in this category:
• Designing KPIs based on the available data rather than the business problem
In some cases, people/organizations do not know what to monitor, but they are looking for KPIs that show them the overall status.
• Wrong KPIs for answering questions
Some designed KPIs are misaligned with reality. For example, merely having the number of patched devices does not necessarily show whether the company is in a good situation or not. A better approach could be working with the patch curve of the last quarter (see the sketch after this list).
• Wrong questions, looking for KPIs
Sometimes, people focus on finding KPIs to answer questions that do not really bring any useful value. This is correlated to the business context as well.
• Focus on one parameter only
Sometimes, different factors need to be included to answer a business question. For example, to get a proper view of vulnerability management, the organization should not derive it from the progress during a specific time period alone.
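A minimal sketch of the “patch curve” idea mentioned above: instead of reporting a raw count, track patch coverage and its change over time. All numbers and column names here are hypothetical.

```python
import pandas as pd

# Hypothetical patch records: one row per week.
patch = pd.DataFrame({
    "week":    pd.date_range("2020-01-06", periods=4, freq="W"),
    "patched": [610, 640, 700, 790],
    "total":   [1000, 1000, 1020, 1020],
})

# A raw count ("790 devices patched") says little on its own. A trend-style
# KPI puts it in context: patch coverage and its week-over-week change.
patch["coverage"] = patch["patched"] / patch["total"]
patch["weekly_change"] = patch["coverage"].diff()
print(patch[["week", "coverage", "weekly_change"]])
```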
4.1.9 Manual Work
Experts mentioned that in some cases, possibly caused by a lack of automation/integration, there is still a need for manual work to extract, check, and build custom reports. In many different scenarios, users need to export pieces of information from different sources and use a third-party application, e.g. MS Excel, to aggregate them and build a custom report.
This is not only due to integration challenges; building and maintaining integrations between different systems also costs time and effort, which might not be affordable for teams, specifically if the reports are used by only a limited number of users.
4.1.10 Setup and Maintenance
Every visualization tool needs to be installed, fine-tuned, and maintained. Experts believe that these activities need dedicated administration, time, and budget. Such activities also require users to keep themselves up to date. Although vendors usually help in setting up the tools, the involvement of in-house resources is needed as well.
An inherent difficulty in identifying this challenge is that some participants had a more architectural, higher-level view of their organizations, so they seemed to judge the setup-and-maintenance challenge according to their experiences of late deliveries, lack of resources, etc., while the more technical participants tended toward the view that “a proper setup is not an issue”.
4.1.11 Tool Specific
According to the experts, visualization tools have their own limitations, boundaries, and software issues as well. Some major challenges in this category are as follows:
• Component coverage:
Some tools have difficulties covering the company's needs. For example, a tool might not support some firewalls or routers.
• AI-based tools take time to learn new patterns.
• Resource costs are high:
Some tools use a lot of computational or storage resources. In some cases, this becomes more obvious when the end-user needs to search over the history of the data.
• No mobile user interface:
Some tools are designed to be used only via desktop computers, so administrators cannot manage some features via their mobile phones when quick access is needed.
• Fancy UI, but low focus on UX:
Some tools focus on the visualization part rather than actually improving the landscape, or at least the user experience (UX). This is most obvious when a user needs to go through too many steps to perform a task.
• Price:
Some tools are fairly expensive for smaller companies, so they either have to use free or open-source tools, or purchase tools that cover only part of their needs. This adds complexity for future integration, maintenance, and decommissioning of the tools.
• Open-source/general tools are complex, less flexible, and take much more time to master. Besides, they do not necessarily have all the functionality organizations want.
• Tool boundaries and limitations:
Each tool has some boundaries. For example, one tool might not be able to support more than 1000 devices. Sometimes, the limitations are due to licensing and the high cost of moving to higher license plans.
4.1.12 Training and Skills
Using any tool requires skills and competences. The usage includes installation and maintenance, deployment, integration, tool development (some tools have coding functionality), reports and queries, operational activities, tuning, and so on.
One challenge addressed by the experts is that some of the tools, specifically the famous complex ones, take a significant amount of time to master. This means a new user of these tools costs the organization time, and hiring people already skilled in the area is costly as well.
This challenge becomes more obvious considering that some vendors do not offer training for free, so a user cannot learn the tool until he/she joins a team/company that is using it. Besides, some tools do not offer best practices, expert online forums, or self-training materials.
Experts also noted that organizations suffer from a skills gap. For example, some teams might not be aware of cloud concepts, so they keep developing their area based on the older ways of working.
4.1.13 Trust and Reliability
One vital question for many tools could be: how can we trust their outcome? Experts mentioned that, regarding the visualization tools, the information displayed on the dashboards cannot be trusted as the only factor in making decisions. Hence, there is always a need to double-check before making a decision. For example, if a dashboard shows that an attack is happening, it needs to be checked against false positives before being raised to the incident response team.
Of course, the experts stated that the tools themselves are usually reliable (in terms of expected functionality, bugs, etc.), but the input data, or the way the reports, thresholds, alarms, etc. are designed and set, are normally the sources of trust issues.
4.2 Second Round of Interviews
After discovering ideas in the first round of panel interviews and gathering the major categories of challenges, the second round was conducted to evaluate the consensus on the findings and to prioritize them.
Two outcomes were expected:
1- Which challenges did most of the experts agree upon?
2- How likely is each challenge to occur?
A combination of the above will help us answer the research questions properly. Thus, in this round, each interview was divided into two parts:
• Part I: presenting the categories and asking how much each expert agrees with each of them. The options to evaluate consensus were: strongly agree, agree, neutral, disagree, strongly disagree.
The questions were similar to: Do you believe “x” is a challenge in security visualization dashboards?
• Part II: presenting the categories in random order and asking each expert to prioritize them from the most significant challenge to the least significant. This is to discover how likely each challenge is to actually occur, according to their experience.
The questions were like: Considering the list of identified challenges, how do you prioritize them from the most important challenge to the least important?
Interview times varied. In some cases, experts could answer all the questions in less than 15 minutes; in other cases, where experts needed more clarification of the terms, it took up to 30 minutes.
In some cases, the participants needed clarification regarding challenge titles they had not mentioned themselves. For example, more technical participants asked: “What do you mean by business context correlation?”
In such cases, they were provided with an overview of the other participants' thinking, and best efforts were made to ensure that the researcher's opinion did not drive or bias the participants' opinions.
4.3 Achieved Consensus
Generally speaking, there were seldom disagreements in the second round. Aggregating the results from the second round, Table 10 demonstrates the outcome in two dimensions: likelihood and severity.
As the results show, data quality is a very common and severe challenge in data visualization. The challenges that come after it are dashboard design, KPI definition, integration, and business context correlation.
Table 10. Details of experts' consensus on identified challenges (rows: likelihood; columns: severity)

Likelihood  | Minor               | Moderate              | Significant           | Severe
Very Likely | Training and Skills | Manual Work           | Dashboard Design      | Data Quality
Likely      | Customization       | Information Overload  | Trust and Reliability | KPI Definition; Integration; Business Context Correlation
Possible    | Tool Specific       | Setup and Maintenance | Human Factor          | (none)
4.3.1 Technical-level vs. Organizational-level challenges
Some of the identified challenges were more technically oriented, including data quality, setup and maintenance, tool-specific issues, and dashboard design, while others tended to be at the organizational level, such as the human factor, business context correlation, and trust. The remaining challenges partially cover both areas. Figure 23 displays this concept.
This could be the key to why, in some cases, more senior people (in higher roles in the organizational hierarchy) strongly agree with an identified challenge, while more technically oriented participants are either neutral or do not hold a “strongly agree” opinion.
Figure 23. Technical vs. organizational root causes
4.3.2 Other Findings
In the second round, two other topics were covered with the experts. The first finding concerns gathering security data objects from cloud environments. Since, in cloud environments, consumers depend on the logs gathered by the cloud providers, they most likely cannot access some of the actual low-level logs, such as Syslog or detailed firewall traffic logs. Half of the interviewed experts believed that although this is not an issue, they would “prefer” to have access to those logs, while one-third of the rest thought this is a challenge and would “need” access to those logs.
The other finding concerned their opinions on 3D visual elements in dashboards. One-third of the experts believed, based on their experience, that 3D visual elements add more complexity than value. More than half of the rest stated that although these might be useful, they are not a priority for them.
5. Conclusion
Reviewing our research questions, this study aimed to find the main challenges that security professionals face in using the current visualization tools. Another target was to find out how significant these challenges are in real-world scenarios. After performing the empirical work, the results are presented in Table 10.
The results show that the major challenges lie either at the technical level or at the organizational level, while some overlap. Analysing them, we can conclude that the most considerable challenges in the visualization of security-related data objects currently stem from the way of working and from the design/architecture of the data and information. According to the results, the top challenges are data quality, integration and interoperability, business context, and dashboard design. The first two are clearly data-driven, and the latter two concern the way the dashboards are designed and correlated to the business context.
This research shows that buying more sophisticated or complex tools will not necessarily bring much value to industry if the architecture and business context are ignored. On the other hand, it shows that merely making the visualization tools fancier (with animations, 3D elements, etc.) does not seem to be a priority for the users.
One important insight that can be inferred from the detailed results is the impact of seniority on the answers. For example, when it comes to challenges at the organizational level, such as the human factor and business context correlation, participants with less than average experience tended to stay neutral or not strongly agree.
During the empirical work, many insights were gathered from the participants that did not directly point out challenges, but rather what they think is worth considering. Those ideas and insights are gathered in the appendixes: the lessons they have learned from their past experiences in Appendix 1 – Lessons Learned, and their views on the features they would like to have in Appendix 2 – Desirable Features.
For the research community, it is important to pay more attention to how the data/information architecture could be the root cause of many other similar challenges in different areas.
5.1 Limitations
There are some limitations to this work. Although the number of panel members meets the research requirements, access to more qualified panel members in different geographies was a limitation. This matters all the more because even the interviewed experts had very limited time, so conducting more rounds of interviews with them was another challenge.
On the other hand, since the detailed architectures of the commercial tools are not publicly available, this research had to treat them as black boxes. Besides, it was not possible to interview the designers of well-known commercial products. Their point of view could have had a significant impact on finding the obstacles in the architecture and production of the commercial tools that are hidden from end-users.
Another limitation was that, due to privacy considerations, asking more specific business-oriented questions was not possible.
6. Future Work
There are various possibilities for extending this work. One approach could be a more in-depth study of each challenge, specifically the more significant ones such as data quality. Another area of future work could be how the decision-making processes in current organizations could be correlated with security data visualization; for example, how a visualization dashboard can help detect a skills gap in an enterprise.
As addressed earlier, participants had different points of view in the case of organizational-level challenges (e.g. the human factor). So perhaps more in-depth research focusing on the organizational-level challenges could widen the range of extracted knowledge.
Appendix 1 – Lessons Learned
These are some of the insights and lessons learned while interviewing the participants.
• Before going for a new visualization and reporting tool, assess your in-house capabilities: skills, organizational awareness, existing solutions, team maturity, and so on. Ignoring the organizational maturity and just buying a fancy visualization tool does not bring much value.
• KPI definition should not depend on the tool. If we are not mature enough to define KPIs, the visualization tool does not really help.
• Regardless of how fancy the dashboard is, if the input data is inaccurate, the result will not be satisfactory and might even be misleading.
• Some visualization tools look very fancy, but the functionality behind them can be really poor.
• It is vital to ask yourself:
o Who am I helping by visualizing the data?
o Who benefits from it?
o What can you do with the tool?
o How could it be useful for me/others?
• While designing a dashboard, you might meet people who have no clue how to visualize what they need. In that case, it is better to make a mock-up first (without perfect details), present it to the user, and adjust it based on the feedback. Continue this iteration until reaching a good level of consensus. The whole point is: do not go for a perfect thing at the beginning.
• In big enterprises, one tool and one perspective are often not possible.
• A system that is specific to an area usually works much better than general-purpose tools.
• It is vital to spot any incident at a very early stage; otherwise it could spread quite quickly across the networks.
• We should always focus on risks and priorities, and then consider whether visualization solves any problem. Does it address any risk? Can it help us identify our priorities?
• The technology is always there, but are we using its capabilities correctly?
• Visualization does not mean just showing some charts and graphs. In 60-80 percent of cases, people need both the visual elements (for presentations) and the numbers as a table.
• If users feel they get irrelevant visualizations, they will not take them seriously. For them, it will be just like a blank page with no real value.
• Currently, there is an extreme focus on external communication and not on the internal network.
• Doing something fun with the data does not necessarily mean doing something useful.
• Some people have requirements that could be simplified: something like “show everything in one view”.
• Having software engineering knowledge could help in automating and reducing manual work.
• Cloud is still a new concept to many people, so the cloud providers are in the process of learning from customers (and even from each other). That could be a reason why there are so many frequent updates. It indicates that your visualization solution should not depend on one specific cloud feature, as it might be subject to change.
• Unlike on-prem environments, in which vertical teams (e.g. separate network, server, and Active Directory teams) are common, in cloud environments the product team often has to set up and maintain the environment itself. This reveals a challenge: each team might build its own visualization approach, and integrating them could be challenging.
Appendix 2 – Desirable Features
Participants were asked about the features they desire in security visualization tools: “What features would you really like to have, which are either missing in the current visualization tools or need improvement?”
The major items are described as follows:
• The possibility to share the visualization elements (charts, graphs, etc.) in a very easy way, for instance, just clicking share and sending them via email.
• Using heat maps helps a lot in capturing the more intensive areas (e.g. heavy network traffic, unpatched servers, etc.).
• More financial KPIs. For example, cost per alert: how much are we paying for each alert?
• Integration with third-party visualization tools, for example, Microsoft Visio.
• Forward-navigation and drill-down capabilities help a lot.
• A network topology map that covers all well-known devices. Although this is supported by many tools, not all of them support all the famous devices.
• The possibility to switch between an executive mode (an eagle's-eye view over the globe) and an operational mode (more technical).
• Automatically summarizing historical data.
• End-to-end visualization of network connectivity, for example, how a packet could travel from server A to server B.
• The capability of generating infographics.
• The possibility to detect misconfigurations, for example, avoiding routing loops.
• Integration with asset management master data. For example, when clicking on a router in the network topology, the user can see the asset information such as the owner, contact info, etc.
• Performing visual-aid testing. For example, can we reach an SQL server called SQ1 from an application server called AP1?
• The capability of converting a hand-drawn map to an electronic map.
References
Avella, J., 2016. Delphi panels: Research design, procedures, advantages, and challenges. International Journal of Doctoral Studies, Issue 11, pp. 305-321.
Bakirtzis, G., Simon, B. J., Fleming, C. H. & Elks, C. R., 2018. Looking for a Black Cat in a Dark
Room: Security Visualization for Cyber-Physical System Design and Analysis. Berlin, Germany,
2018 IEEE Symposium on Visualization for Cyber Security (VizSec), pp. 1-8.
Borovina, J. M. & Ferreira, J. E., 2017. Visualization properties for data quality visual assessment:
An exploratory case study. Information Visualization, 16(2), pp. 93-112.
Brocke, J. v. et al., 2009. Reconstructing the Giant: On the Importance of Rigour in Documenting
the Literature Search Process. Verona, Italy, s.n., pp. 2206-2217.
Bryant, B. D. & Saiedian, H., 2020. Improving SIEM alert metadata aggregation with a novel kill-
chain based classification model. Computers & Security, Volume 94, p. 101817.
Champion, M. A., Rajivan, P., Cooke, N. J. & S. Jariwala, 2012. Team-based cyber defense
analysis. New Orleans, LA, s.n., pp. 218-221.
Chen, S. et al., 2014. OCEANS: online collaborative explorative analysis on network security..
New York, NY, USA, Association for Computing Machinery, p. 1–8.
Cinque, M., Cotroneo, D. & Pecchia, A., 2018. Challenges and Directions in Security Information
and Event Management (SIEM). Memphis, TN, USA, IEEE, pp. 95-99.
Diamond, I. R. et al., 2014. Defining consensus: A systematic review recommends methodologic
criteria for reporting of Delphi studies. Journal of Clinical Epidemiology, 67(4), pp. 401-409.
Elmqvist, N. & Yi, J. S., 2015. Patterns for visualization evaluation. Information Visualization,
14(3), p. 250–269.
Fan, X., Li, C. & Dong, X., 2019. A real-time network security visualization system based on
incremental learning (ChinaVis 2018). Journal of Visualization, Volume 22, p. 215–229.
Fink-Hafner, D. et al., 2019. Delphi Method: Strengths and Weaknesses. Advances in Methodology
& Statistics/Metodoloski zvezki, 16(2), pp. 1-19.
Fischer, F. & Keim, D. A., 2014. NStreamAware: real-time visual analytics for data streams to enhance situational awareness. New York, NY, USA, Association for Computing Machinery.
Garae, J., Ko, R. K. L. & Apperley, M., 2018. A Full-Scale Security Visualization Effectiveness
Measurement and Presentation Approach. New York, 2018 17th IEEE International Conference
On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International
Conference On Big Data Science And Engineering (TrustCom/BigDataSE), pp. 639-650.
Gerven, F. P., Tuovinen, J. E., Tabbers, H. & Van, P. W. M., 2003. Cognitive Load Measurement
as a Means to Advance Cognitive Load Theory. Educational Psychologist, 38(1), pp. 63-71.
Gutzwiller, R. S., Hunt, S. M. & Lange, D. S., 2016. A task analysis toward characterizing cyber-
cognitive situation awareness (CCSA) in cyber defense analysts. San Diego, CA, s.n., pp. 14-20.
Görtler, J., Schulz, C., Weiskopf, D. & Deussen, O., 2018. Bubble Treemaps for Uncertainty
Visualization. IEEE Transactions on Visualization and Computer Graphics, 24(1), pp. 719-728.
Keeney, S., Hasson, F. & McKenna, H. P., 2001. A critical review of the Delphi technique as a
research methodology for nursing. International Journal of Nursing Studies, 38(2), pp. 195-200.
Lee, J.-H., Kim, Y. S., Kim, J. H. & Kim, I. K., 2017. Toward the SIEM architecture for cloud-
based security services. Las Vegas, NV, USA, IEEE.
Li, C. et al., 2020. SSRDVis: Interactive visualization for event sequences summarization and rare
detection. Journal of Visualization, Volume 23, p. 171–184.
Liu, L., Silver, D. & Bemis, K., 2020. Visualizing events in time-varying scientific data. Journal of Visualization, Volume 23, pp. 353-368.
Marty, R., 2008. Applied Security Visualization. s.l.:Addison-Wesley Professional.
Mei, H. et al., 2018. VisComposer: A Visual Programmable Composition Environment for
Information Visualization. Visual Informatics, 2(1), pp. 78-81.
Novikova, E. S., Bekeneva, Y. A. & Shorov, A. V., 2017. Towards visual analytics tasks for the
security information and event management. St. Petersburg, Russia, IEEE, pp. 90-93.
Okoli, C. & Pawlowski, S. D., 2004. The Delphi method as a research tool: an example, design
considerations and applications. Information & Management, 42(1), pp. 15-29.
Olshannikova, E., Ometov, A., Koucheryavy, Y. & Olsson, T., 2015. Visualizing Big Data with
augmented and virtual reality: challenges and research agenda. Journal of Big Data, 2(1).
Paul, C. L., Rohrer, R. & Nebesh, B., 2015. A “design first” approach to visualization innovation.
Computer Graphics and Applications, 35(1), pp. 8-12.
Polonsky, M. J. & Waller, D. S., 2018. Designing and Managing a Research Project: A Business
Student's Guide - Fourth edition.
Raghav, R. S., Pothula, S., Vengattaraman, T. & Ponnurangam, D., 2016. A survey of data
visualization tools for analyzing large volume of data in big data platform. Coimbatore, IEEE, pp.
1-6.
Ren, D., Lee, B. & Brehmer, M., 2019. Charticulator: Interactive Construction of Bespoke Chart
Layouts. IEEE Transactions on Visualization and Computer Graphics, 25(1), pp. 789-799.
Sarno, C. D., Garofalo, A., Matteucci, I. & Vallini, M., 2016. A novel security information and
event management system for enhancing cyber security in a hydroelectric dam. International
Journal of Critical Infrastructure Protection, Volume 13, pp. 39-51.
Sethi, A. & Wills, G., 2017. Expert-interviews led analysis of EEVi — A model for effective
visualization in cyber-security. Phoenix, AZ, USA, 2017 IEEE Symposium on Visualization for
Cyber Security (VizSec).
Sharafaldin, I., Lashkari, A. H. & Ghorbani, A. A., 2019. An evaluation framework for network
security visualizations. Computers & Security, Volume 84, pp. 70-92.
Sharmaa, D. H., Dhote, C. A. & Potey, M. M., 2016. Identity and Access Management as Security-as-a-Service from Clouds. Procedia Computer Science, Volume 79, pp. 170-174.
Staheli, D. et al., 2014. Visualization evaluation for cyber security: Trends and future directions.
New York, NY, USA, Association for Computing Machinery, p. 49–56.
Streubert, H. J. & Carpenter, D. R., 1999. Qualitative Research in Nursing: Advancing the Humanistic Imperative. Philadelphia: s.n.
Sönmez, F. Ö. & Günel, B., 2018. Evaluation of Security Information and Event Management
Systems for Custom Security Visualization Generation. ANKARA, Turkey, IEEE, pp. 38-44.
Tan, K.-A.et al., 2020. Addressing Coronavirus Disease 2019 in Spine Surgery: A Rapid National
Consensus Using the Delphi Method via Teleconference. Asian Spine Journal, Volume Online
First.
Wagner, M. et al., 2015. A Survey of Visualization Systems for Malware Analysis. Cagliari, Italy, The Eurographics Association, pp. 105-125.
Wei, J. & Salvendy, G., 2004. The cognitive task analysis methods for job and task design: review
and reappraisal. Behaviour & Information Technology, 23(4), pp. 273-299.
Vernon, W., 2009. The Delphi technique: A review. International Journal of Therapy and
Rehabilitation, 16(2), pp. 69-76.
Vieane, A. et al., 2016. Addressing human factors gaps in cyber defense. Proceedings of the
Human Factors and Ergonomics Society Annual Meeting, 60(1), pp. 770-773.
Wu, H. et al., 2020. VisAct: a visualization design system based on semantic actions. Journal of Visualization, pp. 339-352.