
> Software > Artificial Intelligence > Tech Careers > Wearables

FEBRUARY 2019 www.computer.org


www.computer.org/jobs

Looking for the BEST Tech Job for You? Come to the Computer Society Jobs Board to meet the best employers in the industry—Apple, Google, Intel, NSA, Cisco, US Army Research, Oracle, Juniper...

Take advantage of the special resources for job seekers—job alerts, career advice, webinars, templates, and resumes viewed by top employers.


STAFF

Editor: Cathy Martin
Publications Operations Project Specialist: Christine Anthony
Publications Marketing Project Specialist: Meghan O'Dell
Production & Design: Carmen Flores-Garvey
Publications Portfolio Managers: Carrie Clark, Kimberly Sperka
Publisher: Robin Baldwin
Senior Advertising Coordinator: Debbie Sims

Circulation: ComputingEdge (ISSN 2469-7087) is published monthly by the IEEE Computer Society. IEEE Headquarters, Three Park Avenue, 17th Floor, New York, NY 10016-5997; IEEE Computer Society Publications Office, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720; voice +1 714 821 8380; fax +1 714 821 4010; IEEE Computer Society Headquarters, 2001 L Street NW, Suite 700, Washington, DC 20036.

Postmaster: Send address changes to ComputingEdge-IEEE Membership Processing Dept., 445 Hoes Lane, Piscataway, NJ 08855. Periodicals Postage Paid at New York, New York, and at additional mailing offices. Printed in USA.

Editorial: Unless otherwise stated, bylined articles, as well as product and service descriptions, reflect the author’s or firm’s opinion. Inclusion in ComputingEdge does not necessarily constitute endorsement by the IEEE or the Computer Society. All submissions are subject to editing for style, clarity, and space.

Reuse Rights and Reprint Permissions: Educational or personal use of this material is permitted without fee, provided such use: 1) is not made for profit; 2) includes this notice and a full citation to the original work on the first page of the copy; and 3) does not imply IEEE endorsement of any third-party products or services. Authors and their companies are permitted to post the accepted version of IEEE-copyrighted material on their own Web servers without permission, provided that the IEEE copyright notice and a full citation to the original work appear on the first screen of the posted copy. An accepted manuscript is a version which has been revised by the author to incorporate review suggestions, but not the published version with copy-editing, proofreading, and formatting added by IEEE. For more information, please go to: http://www.ieee.org/publications_standards/publications/rights/paperversionpolicy.html. Permission to reprint/republish this material for commercial, advertising, or promotional purposes or for creating new collective works for resale or redistribution must be obtained from IEEE by writing to the IEEE Intellectual Property Rights Office, 445 Hoes Lane, Piscataway, NJ 08854-4141 or [email protected]. Copyright © 2019 IEEE. All rights reserved.

Abstracting and Library Use: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy for private use of patrons, provided the per-copy fee indicated in the code at the bottom of the first page is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923.

Unsubscribe: If you no longer wish to receive this ComputingEdge mailing, please email IEEE Computer Society Customer Service at [email protected] and type “unsubscribe ComputingEdge” in your subject line.

IEEE prohibits discrimination, harassment, and bullying. For more information, visit www.ieee.org/web/aboutus/whatis/policies/p9-26.html.

IEEE COMPUTER SOCIETY computer.org • +1 714 821 8380


IEEE Computer Society Magazine Editors in Chief

Computer: David Alan Grier (Interim), Djaghe LLC
IEEE Software: Ipek Ozkaya, Software Engineering Institute
IEEE Internet Computing: George Pallis, University of Cyprus
IT Professional: Irena Bojanova, NIST
IEEE Security & Privacy: David Nicol, University of Illinois at Urbana-Champaign
IEEE Micro: Lizy Kurian John, University of Texas, Austin
IEEE Computer Graphics and Applications: Torsten Möller, University of Vienna
IEEE Pervasive Computing: Marc Langheinrich, University of Lugano
Computing in Science & Engineering: Jim X. Chen, George Mason University
IEEE Intelligent Systems: V.S. Subrahmanian, Dartmouth College
IEEE MultiMedia: Shu-Ching Chen, Florida International University
IEEE Annals of the History of Computing: Gerardo Con Diaz, University of California, Davis


FEBRUARY 2019 • VOLUME 5, NUMBER 2

10 Managing Energy Consumption as an Architectural Quality Attribute

41 A Different Lens on Diversity and Inclusion: Creating Research Opportunities for Small Liberal Arts Colleges

46 P2PLoc: Peer-to-Peer Localization of Fast-Moving Entities

51 Earables for Personal-Scale Behavior Analytics

Subscribe to ComputingEdge for free at www.computer.org/computingedge.

Software

10 Managing Energy Consumption as an Architectural Quality Attribute

RICK KAZMAN, SERGE HAZIYEV, ANDRIY YAKUBA, AND DAMIAN A. TAMBURRI

17 Silver Bullet Talks with Ksenia Dmitrieva-Peguero GARY MCGRAW

Artificial Intelligence

21 Software Engineering for Machine-Learning Applications: The Road Ahead

FOUTSE KHOMH, BRAM ADAMS, JINGHUI CHENG, MARIOS FOKAEFS, AND GIULIANO ANTONIOL

25 From Raw Data to Smart Manufacturing: AI and Semantic Web of Things for Industry 4.0

PANKESH PATEL, MUHAMMAD INTIZAR ALI, AND AMIT SHETH

Tech Careers

33 Artificial Intelligence and IT Professionals SUNIL MITHAS, THOMAS KUDE, AND JONATHAN WHITAKER

41 A Different Lens on Diversity and Inclusion: Creating Research Opportunities for Small Liberal Arts Colleges

WENDI K. SAPP AND MARY ANN LEUNG

Wearables

46 P2PLoc: Peer-to-Peer Localization of Fast-Moving Entities

ASHUTOSH DHEKNE, UMBERTO J. RAVAIOLI, AND ROMIT ROY CHOUDHURY

51 Earables for Personal-Scale Behavior Analytics

FAHIM KAWSAR, CHULHONG MIN, AKHIL MATHUR, AND ALESSANDRO MONTANARI

Departments

4 Magazine Roundup
8 Editor's Note: New Considerations in Software Development

Page 6: Artifi cial Intelligence Tech Careers Wearables … · Vienna IEEE Pervasive Computing Marc Langheinrich, University of Lugano Computing in Science & Engineering Jim X. Chen, George

4 February 2019 Published by the IEEE Computer Society 2469-7087/19/$33.00 © 2019 IEEE

CS FOCUS

Magazine Roundup

The IEEE Computer Society's lineup of 12 peer-reviewed technical magazines covers cutting-edge topics ranging from software design and computer graphics to Internet computing and security, from scientific applications and machine intelligence to visualization and microchip design. Here are highlights from recent issues.

Computer

Increasing Women and Underrepresented Minorities in Computing: The Landscape and What You Can Do

In this article from the October 2018 issue of Computer, two experts on increasing women and underrepresented minorities in computing discuss the current landscape in academia and industry, presenting steps that organizations—and you, the reader—can take to bring about changes to increase diversity and inclusion.

Computing in Science & Engineering

HPC Opens a New Frontier in Fuel-Engine Research

The authors of this article from the September/October 2018 issue of Computing in Science & Engineering discuss how an industry-led research team at the US Department of Energy's Argonne National Laboratory recently conducted a computationally guided combustion system optimization on the IBM Blue Gene/Q Mira supercomputer. The team used a high-fidelity simulation approach to optimize the fuel spray and combustion bowl geometry of a heavy-duty diesel engine using a gasoline-like fuel. The accelerated simulation time allowed the team to evaluate an unprecedented number of design variations and improve the production design using a new fuel.

IEEE Annals of the History of Computing

Interview with Charles Bigelow

Charles Bigelow's career parallels the development of digital font technology. He has designed fonts and consulted about font technology to many of the companies that created desktop publishing systems. He has also written extensively on digital font technology and taught at RISD, Stanford, and RIT. Read more in the July–September 2018 issue of IEEE Annals of the History of Computing.

IEEE Computer Graphics and Applications

OpenSpace: Bringing NASA Missions to the Public

This article from the September/October 2018 issue of IEEE Computer Graphics and Applications presents OpenSpace, an open-source astro-visualization software project designed to bridge the gap between scientific discoveries and their public dissemination. A wealth of data exists for space missions from NASA and other sources. OpenSpace brings together this data and combines it in a range of immersive settings.

Through non-linear storytelling and guided exploration, interactive immersive experiences help the public to engage with advanced space mission data and models, and thus be better informed and educated about NASA missions, the solar system, and outer space. The authors demonstrate this capability by exploring the OSIRIS-REx mission.

IEEE Intelligent Systems

Towards Musicologist-Driven Mining of Handwritten Scores

Historical musicologists have been seeking objective and powerful techniques to collect, analyze, and verify their findings for many decades. The aim of this study, published in the July/August 2018 issue of IEEE Intelligent Systems, is to show the importance of such domain-specific problems to achieve actionable knowledge discovery in the real world. The focus is on finding evidence for the chronological ordering of J.S. Bach's manuscripts by proposing a musicologist-driven mining method for extracting quantitative information from early music manuscripts. Bach's C-clefs were extracted from a wide range of manuscripts under the direction of domain experts, and with these, the classification of C-clefs was conducted. The proposed methods were evaluated on a dataset containing over 1,000 clefs extracted from Bach's manuscripts. The results show more than 70-percent accuracy for dating Bach's manuscripts. Dating of Bach's lost manuscripts was quantitatively hypothesized, providing a rough barometer to be combined with other evidence to evaluate musicologists' hypotheses.

IEEE Internet Computing

Efficient Cloud Provisioning for Video Transcoding: Review, Open Challenges, and Future Opportunities

Video transcoding is the process of encoding an initial video sequence into multiple sequences of different bitrates, resolutions, and video standards, so that it can be viewed on devices of various capabilities and with various network access characteristics. Because video coding is a computationally expensive process and the amount of video in social-media networks drastically increases every year, large media providers' demand for transcoding cloud services will continue rising. This article, which appears in the September/October 2018 issue of IEEE Internet Computing, surveys the state of the art of related cloud services. It also summarizes research on video transcoding and provides indicative results for a transcoding scenario of interest related to Facebook. Finally, it illustrates open challenges in the field and outlines paths for future research.

IEEE Micro

Not in Name Alone: A Memristive Memory Processing Unit for Real In-Memory Processing

Data movement between processing and memory is the root cause of the limited performance and energy efficiency in modern von Neumann systems. To overcome the data-movement bottleneck, the authors of this article from the September/October 2018 issue of IEEE Micro present the memristive Memory Processing Unit (mMPU)—a real processing-in-memory system in which the computation is done directly in the memory cells, thus eliminating the necessity for data transfer. Furthermore, with its enormous inner parallelism, this system is ideal for data-intensive applications that are based on single instruction, multiple data (SIMD)—providing high throughput and energy efficiency.

IEEE MultiMedia

A Crossmodal Approach to Multimodal Fusion in Video Hyperlinking

With the recent resurgence of neural networks and the proliferation of massive amounts of unlabeled multimodal data, recommendation systems and multimodal retrieval systems based on continuous representation spaces and deep-learning methods are becoming of great interest. Multimodal representations are typically obtained with auto-encoders that reconstruct multimodal data. In this article from the April–June 2018 issue of IEEE MultiMedia, the authors describe an alternative method to perform high-level multimodal fusion that leverages crossmodal translation by means of symmetrical encoders cast into a bidirectional deep neural network (BiDNN). Using the lessons learned from multimodal retrieval, they present a BiDNN-based system that performs video hyperlinking and recommends interesting video segments to a viewer. Results established using TRECVID's 2016 video hyperlinking benchmarking initiative show that their method obtained the best score, thus defining the state of the art.

IEEE Pervasive Computing

Teaching Pervasive Computing in Liberal Arts Colleges

In this article from the July–September 2018 issue of IEEE Pervasive Computing, the authors reflect on the critical role of liberal arts education in fostering creative, collaborative, and ethical innovators for pervasive computing. They discuss why liberal arts education is important as a foundation for advanced studies and leadership in ubiquitous computing, and they share their experiences teaching pervasive computing in liberal arts colleges.

IEEE Security & Privacy

Fingerprinting for Cyber-Physical System Security: Device Physics Matters Too

Due to the increasing attacks against cyber-physical systems, it is important to develop novel solutions to secure these critical systems. System security can be improved by using the physics of process actuators (that is, devices). Device physics can be used to generate device fingerprints to increase the integrity of responses from process actuators. Read more in the September/October 2018 issue of IEEE Security & Privacy.

IEEE Software

Software Engineering’s Top Topics, Trends, and ResearchersThis article, which is part of the September/October 2018 issue of IEEE Software on software engi-neering’s 50th anniversary, off ers an overview of the twists, turns, and numerous redirections seen over the years in the software engi-neering (SE) research literature. Nearly a dozen topics have domi-nated the past few decades of SE research, and these have been redi-rected many times. Some are gain-ing popularity, whereas others are becoming increasingly rare.


ADVERTISER INFORMATION

Advertising Personnel

Debbie Sims: Advertising Coordinator
Email: [email protected]
Phone: +1 714 816 2138 | Fax: +1 714 821 4010

Advertising Sales Representatives (display)

Central, Northwest, Southeast, Far East: Eric Kincaid
Email: [email protected]
Phone: +1 214 673 3742 | Fax: +1 888 886 8599

Northeast, Midwest, Europe, Middle East: David Schissler
Email: [email protected]
Phone: +1 508 394 4026 | Fax: +1 508 394 1707

Southwest, California: Mike Hughes
Email: [email protected]
Phone: +1 805 529 6790

Advertising Sales Representative (Classifieds & Jobs Board)

Heather Buonadies
Email: [email protected]
Phone: +1 201 887 1703

Advertising Sales Representative (Jobs Board)

Marie Thompson
Email: [email protected]
Phone: +1 714 813 5094

IT Professional

A Cognitive Assistance Framework for Supporting Human Workers in Industrial Tasks

Cognitive systems are capable of human-like actions such as perception, learning, planning, reasoning, self- and context-awareness, interaction, and performing actions in unstructured environments. The authors of this article from the September/October 2018 issue of IT Professional present an implemented framework for an interactive cognitive system enabling human-centered, adaptive industrial assistance in cooperative, complex human-in-the-loop assembly tasks. The functionality of the cognitive system includes enabling perception and awareness, understanding and interpreting situations, reasoning, decision making, and autonomous acting.


IEEE MultiMedia serves the community of scholars, developers, practitioners, and students who are interested in multiple media types and work in fields such as image and video processing, audio analysis, text retrieval, and data fusion.

Read it today! www.computer.org/multimedia


EDITOR'S NOTE

New Considerations in Software Development

Software development has traditionally focused on highly visible qualities such as performance, reliability, and user-friendly design. However, as computing becomes more ubiquitous and central to society, software engineers have been recognizing the need to address new considerations—such as energy consumption and security—that are often less noticeable to the user but are no less important.

In this issue of ComputingEdge, IEEE Software's "Managing Energy Consumption as an Architectural Quality Attribute" argues that energy should be treated as one of the key considerations in architectural design, along with modifiability, performance, and availability. The authors use a case study to explain how measuring and modeling energy use can lead to insights on ways to save energy. IEEE Security & Privacy's "Silver Bullet Talks with Ksenia Dmitrieva-Peguero" discusses how awareness of secure coding practices has grown and what still needs to happen to make software more secure.

Artificial intelligence (AI) is also a concern for today's software engineers. "Software Engineering for Machine-Learning Applications: The Road Ahead," from IEEE Software, covers the inherent challenges of machine learning–based systems and ideas for making them more accurate, testable, and applicable. Meanwhile, the authors of IEEE Intelligent Systems' "From Raw Data to Smart Manufacturing: AI and Semantic Web of Things for Industry 4.0" show how AI is helping revolutionize our factories.

AI is also having a profound effect on software engineering jobs, potentially changing the types of roles humans have in the software development process. IT Professional's "Artificial Intelligence and IT Professionals" predicts that low-level programming tasks will increasingly be performed by AI. Another concern when it comes to careers in technology is creating equal opportunities for students to participate in research. In Computing in Science & Engineering's "A Different Lens on Diversity and Inclusion: Creating Research Opportunities for Small Liberal Arts Colleges," the authors discuss programs that help small colleges collaborate with large research institutions.

Finally, this issue of ComputingEdge includes two articles about wearables. Computer's "P2PLoc: Peer-to-Peer Localization of Fast-Moving Entities" proposes wearable Internet of Things devices for real-time group-motion tracking. IEEE Pervasive Computing's "Earables for Personal-Scale Behavior Analytics" presents an in-ear multisensory stereo device with applications in areas like healthcare and communication.


PURPOSE: The IEEE Computer Society is the world’s largest association of computing professionals and is the leading provider of technical information in the field.

MEMBERSHIP: Members receive the monthly magazine Computer, discounts, and opportunities to serve (all activities are led by volunteer members). Membership is open to all IEEE members, affiliate society members, and others interested in the computer field.

COMPUTER SOCIETY WEBSITE: www.computer.org

OMBUDSMAN: Direct unresolved complaints to [email protected].

CHAPTERS: Regular and student chapters worldwide provide the opportunity to interact with colleagues, hear technical experts, and serve the local professional community.

AVAILABLE INFORMATION: To check membership status, report an address change, or obtain more information on any of the following, email Customer Service at [email protected] or call +1 714 821 8380 (international) or our toll-free number, +1 800 272 6657 (US):

• Membership applications
• Publications catalog
• Draft standards and order forms
• Technical committee list
• Technical committee application
• Chapter start-up procedures
• Student scholarship information
• Volunteer leaders/staff directory
• IEEE senior member grade application (requires 10 years practice and significant performance in five of those 10)

PUBLICATIONS AND ACTIVITIES

Computer: The flagship publication of the IEEE Computer Society, Computer publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications.

Periodicals: The society publishes 12 magazines, 15 transactions, and two letters. Refer to membership application or request information as noted above.

Conference Proceedings & Books: Conference Publishing Services publishes more than 275 titles every year.

Standards Working Groups: More than 150 groups produce IEEE standards used throughout the world.

Technical Committees: TCs provide professional interaction in more than 30 technical areas and directly influence computer engineering conferences and publications.

Conferences/Education: The society holds about 200 conferences each year and sponsors many educational activities, including computing science accreditation.

Certifications: The society offers three software developer credentials. For more information, visit www.computer .org/certification.

2019 BOARD OF GOVERNORS MEETINGS

31 January – 1 February: Sheraton Anaheim Hotel, Anaheim, CA
6 – 7 June: Hyatt Regency Coral Gables, Miami, FL
(TBD) November: Teleconference

EXECUTIVE COMMITTEE

President: Cecilia Metra
President-Elect: Leila De Floriani
Past President: Hironori Kasahara
First VP: Forrest Shull
Second VP: Avi Mendelson
Secretary: David Lomet
Treasurer: Dimitrios Serpanos
VP, Member & Geographic Activities: Yervant Zorian
VP, Professional & Educational Activities: Kunio Uchiyama
VP, Publications: Fabrizio Lombardi
VP, Standards Activities: Riccardo Mariani
VP, Technical & Conference Activities: William D. Gropp
2018–2019 IEEE Division V Director: John W. Walz
2019 IEEE Division V Director Elect: Thomas M. Conte
2019–2020 IEEE Division VIII Director: Elizabeth L. Burd

BOARD OF GOVERNORS

Term Expiring 2019: Saurabh Bagchi, Leila De Floriani, David S. Ebert, Jill I. Gostin, William Gropp, Sumi Helal, Avi Mendelson
Term Expiring 2020: Andy Chen, John D. Johnson, Sy-Yen Kuo, David Lomet, Dimitrios Serpanos, Forrest Shull, Hayato Yamana
Term Expiring 2021: M. Brian Blake, Fred Douglis, Carlos E. Jimenez-Gomez, Ramalatha Marimuthu, Erik Jan Marinissen, Kunio Uchiyama

EXECUTIVE STAFF

Executive Director: Melissa Russell
Director, Governance & Associate Executive Director: Anne Marie Kelly
Director, Finance & Accounting: Sunny Hwang
Director, Information Technology & Services: Sumit Kacker
Director, Marketing & Sales: Michelle Tubb
Director, Membership Development: Eric Berkowitz

COMPUTER SOCIETY OFFICES

Washington, D.C.: 2001 L St., Ste. 700, Washington, D.C. 20036-4928 • Phone: +1 202 371 0101 • Fax: +1 202 728 9614 • Email: [email protected]

Los Alamitos: 10662 Los Vaqueros Cir., Los Alamitos, CA 90720Phone: +1 714 821 8380 • Email: [email protected]

Asia/Pacific: Watanabe Building, 1-4-2 Minami-Aoyama, Minato-ku, Tokyo 107-0062, Japan • Phone: +81 3 3408 3118 • Fax: +81 3 3408 3553 • Email: [email protected]

MEMBERSHIP & PUBLICATION ORDERS

Phone: +1 800 272 6657 • Fax: +1 714 821 4641 • Email: [email protected]

IEEE BOARD OF DIRECTORS

President & CEO: Jose M.D. Moura
President-Elect: Toshio Fukuda
Past President: James A. Jefferies
Secretary: Kathleen Kramer
Treasurer: Joseph V. Lillie
Director & President, IEEE-USA: Thomas M. Coughlin
Director & President, Standards Association: Robert S. Fish
Director & VP, Educational Activities: Witold M. Kinsner
Director & VP, Membership and Geographic Activities: Francis B. Grosz, Jr.
Director & VP, Publication Services & Products: Hulya Kirkici
Director & VP, Technical Activities: K.J. Ray Liu

revised 5 December 2018


THE PRAGMATIC ARCHITECT

Editor: Eoin Woods, [email protected]

Managing Energy Consumption as an Architectural Quality Attribute

Rick Kazman, Serge Haziyev, Andriy Yakuba, and Damian A. Tamburri

ENERGY USED TO be free, or so we thought. In the past, software architects rarely considered software's energy consumption. Those days are gone. With mobile devices as the primary form of computing for most people,1 with the increasing industry and government adoption of the IoT, and with the ubiquity of cloud services as the backbone of our computing infrastructure, energy has become an issue architects can no longer ignore. Energy is no longer "free" and unlimited. Mobile devices' energy efficiency affects us all, and large corporations are increasingly concerned with their server farms' energy efficiency. Forbes reported that in 2016, datacenters globally accounted for more energy consumption (by 40 percent) than the entire UK—about 3 percent of all energy consumed worldwide.2

At both the low and high ends, computational devices' energy consumption has become a crucial concern. This means that we, as architects, now must add energy efficiency to the long list of competing qualities we consider when designing a system. And, like every other quality attribute, it involves nontrivial tradeoffs: energy use versus performance, availability, modifiability, security, or time to market.

We can't hope to address all these concerns here, but we want to make two important general points:

• Treating energy efficiency as a quality attribute is no different from treating any other architectural quality.
• We can, with a small effort in experimentation and prototyping, and small design changes, substantially improve an application's energy use.

Both of these points are good news for architects! Their most immediate consequence is that we can reason about energy consumption architecturally. And, by making a relatively small investment in design experiments and design changes, we're repaid by enormous savings in energy use.

To illustrate these two points, we report here on a small case study we performed on the design of an IoT application—an automated weather station. The station reports telemetry from sensors related to the ambient temperature, humidity, wind speed, rainfall, leaf wetness, soil temperature, and so on. All the collected data is processed and used to help farmers achieve more efficient plant growth.

In this domain, energy efficiency is paramount: these weather stations are left unattended for long periods of time, might be snow-covered, and need to work in low-light conditions for weeks on end. The automated weather station we describe here was initially designed with attention to energy efficiency, but without explicitly modeling or measuring energy use. In the following, we describe our experiments, the design changes they helped motivate, their tradeoffs, and their energy savings.

The Experiments

The automated weather station is connected to a 12 V DC power source. Every 15 minutes it wakes up from a low-power deep-sleep mode and reports telemetry to the Azure cloud using a 3G connection. After that, it suspends all running tasks and falls back into deep-sleep mode.

Between the DC power supply and the weather station, we inserted a current meter based on the MAX471 current-sensing amplifier. (For the MAX471 data sheet, see http://html.alldatasheet.com/html-pdf/73441/MAXIM/MAX471/125/1/MAX471.html.) This meter was connected to an analog sense pin of an AVR MCU (microcontroller). The MCU registered readings every 1 ms; after 100 readings, it sent the average value over a UART (universal asynchronous receiver–transmitter) to a PC. A simple logging application on the other side of the wire converted the UART input into a CSV (comma-separated values) file. A simple Python script then calculated energy consumption on the basis of the CSV file and plotted a graph.
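To make this pipeline concrete, here is a minimal sketch of such a post-processing script, not the authors' actual code. The CSV column name and file name are our assumptions; the 0.1 s sample period follows from averaging 100 readings taken at 1 ms each, and the 12 V supply is given above.

```python
import csv

SUPPLY_VOLTAGE = 12.0   # V; the station runs from a constant 12 V DC source
SAMPLE_PERIOD_S = 0.1   # s; each logged value averages 100 readings at 1 ms

def energy_wh(csv_path):
    """Integrate logged current samples into energy in watt-hours.

    Applies P = I * V per sample and accumulates E = P * T.
    """
    total_wh = 0.0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):           # assumes a 'current_a' column
            power_w = float(row["current_a"]) * SUPPLY_VOLTAGE
            total_wh += power_w * SAMPLE_PERIOD_S / 3600
    return total_wh

print(f"Work-mode energy: {energy_wh('work_mode.csv'):.5f} Wh")
```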

To calculate energy consumption, we used these formulas:

E = P × T,

where P represents power and T represents time, and

P = I × V

for DC, where I represents current and V represents voltage.

Because voltage is constant in these devices, we assumed that the power consumed depended fully on the current draw and time. In all experiments, we measured the total energy consumption in watt-hours (Wh).

To simplify measurements and further calculations, we split the timeline for the automated weather system into two main components: work mode and sleep mode. Figure 1 depicts a sample plot of work mode, showing the power (current) consumed over 309 seconds. The average current was 0.0295 A, and the energy consumption was 0.03048 Wh.

Sleep mode consumed little energy, as we expected. The power consumption was nearly zero, with a watchdog process causing minor spikes of energy consumption. For example, for a sleep mode of 490 s, the average current was 0.001013 A, and the energy consumption was 0.0016538 Wh. Thus, sleep mode consumed roughly 3 percent as much energy as work mode.

On the basis of these measurements and some small simplifying assumptions, we estimated the energy consumption for 1 hour (assuming our typical sleep interval of 15 minutes):

• Work + Sleep Duration = 309/60 + 15 = 20.15 min
• Cycles (C) per h = 60/20.15 = 2.9776
• Average Power Consumption per h = (Ework + Esleep) × C = (0.03048 + 0.0002025 × 15) × C = 0.0335175 × 2.9776 = 0.0998 Wh
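This arithmetic generalizes to a small duty-cycle model that can be reused for each experiment below. A minimal sketch in Python; the function and parameter names are ours, not the article's:

```python
def hourly_consumption_wh(work_wh, work_s, sleep_wh_per_min, sleep_min):
    """Average energy per hour for a wake/sleep duty cycle.

    work_wh:          energy consumed during one work phase (Wh)
    work_s:           duration of one work phase (s)
    sleep_wh_per_min: sleep-mode energy per minute of sleep (Wh)
    sleep_min:        length of the sleep interval (min)
    """
    cycle_min = work_s / 60 + sleep_min       # one full wake/sleep cycle
    cycles_per_hour = 60 / cycle_min
    energy_per_cycle = work_wh + sleep_wh_per_min * sleep_min
    return energy_per_cycle * cycles_per_hour

# Baseline: 309 s of work consuming 0.03048 Wh, then 15 min of sleep at
# 0.0002025 Wh/min (derived from the 490 s sleep measurement above).
print(hourly_consumption_wh(0.03048, 309, 0.0002025, 15))  # ~0.0998 Wh
```

Fed the measured work-phase figures from each experiment below, the same function reproduces the per-hour values computed in the text.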

Given this baseline, we now describe our experiments.

Experiment 1

In this experiment, we changed the telemetry messages' payload format from plaintext to Google protocol buffers. The assumption was that the reduced message length should result in less time to send data and fewer chances to fail. However, this came with a tradeoff: increased memory and CPU use to transform messages to and from the protocol-buffer format.
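The size reduction comes from dropping self-describing field names and textual number encoding. The toy comparison below illustrates the idea, using Python's struct module as a stand-in for protocol buffers; the field set is hypothetical, not the station's actual schema.

```python
import json
import struct

reading = {"temp_c": 21.4, "humidity": 63.0, "wind_ms": 4.2, "rain_mm": 0.0}

# Plaintext: self-describing, but every message repeats the key names.
plaintext = json.dumps(reading).encode()

# Binary: both ends share the schema, so only the packed values are sent.
binary = struct.pack("<4f", *reading.values())

print(len(plaintext), len(binary))  # the JSON payload is roughly 4x larger
```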

Table 1 shows the kinds of messages being sent and their sizes, using plaintext and protocol buffers.

FIGURE 1. A sample plot of work mode for an automated weather station. The average current was 0.0295 A, and the energy consumption was 0.03048 Wh.



But did this change save energy? Sending these messages using protocol buffers required 293 s, with an average current of 0.027362 A (see Figure 2). The power consumption was 0.0267519 Wh. Using the previous formula to calculate the average energy consumption per hour, we got:

• Work + Sleep Duration = 293/60 + 15 = 19.883 min
• C per h = 60/19.883 = 3.017
• Average Power Consumption per h = (0.027362 + 0.0002025 × 15) × C = 0.0303995 × 3.017 = 0.0917 Wh

Furthermore, while taking these measurements and observing power consumption graphs, we noticed a strange idle period, just before the

device went into sleep mode (see the area in the green rectangle in Figure 2). Debugging and studying this issue led us to discover a bug in EEPROM persistence functionality and the peripheral-device-scanning logic. The bug’s details aren’t im-portant for this research; what’s important is that we didn’t notice this problem until we measured and visualized the power consumption. Because the device had been func-tioning as expected, it wasn’t obvious that there was a bug until we mea-sured the energy.

Following are the new measure-ments of device power consumption.

Table 1. For experiment 1, the kinds of messages being sent and their sizes (in bytes), using plaintext and Google protocol buffers.

Message                | Plaintext: Header / Payload / Total | Protocol buffers: Header / Payload / Total
Status                 | 375 / 822 / 1,197                   | 387 / 275 / 662
Current telemetry      | 374 / 188 / 562                     | 399 / 40 / 439
Historical telemetry*  | 384 / 460 / 844 (5 records)         | 402 / 400 / 802 (10 records)

* Historical telemetry was sent in batches. In the plaintext version, the batch contained five records. With protocol buffers, we could pack 10 records into a batch message, saving 42 bytes.

FIGURE 2. Energy use during message transmission, using protocol buffers. Using the buffers improved energy consumption by 8 percent.


By fixing this bug, we decreased the time required to perform all duties during the polling cycle by almost two-thirds, from 293 to 99 s (see Figure 3). The energy consumption was 0.0116034 Wh. Using the previous values, we recalculated the total power consumption:

• Work + Sleep Duration = 99/60 + 15 = 16.65 min
• C per h = 60/16.65 = 3.603
• Average Power Consumption per h = (0.0116034 + 0.0002025 × 15) × C = 0.014641 × 3.603 = 0.0527 Wh

The difference, compared to this experiment's original results, was 0.0917 – 0.0527 = 0.039 Wh (42 percent). The lesson is that you can't manage it if you don't measure it! By measuring energy consumption, we were able to increase our understanding of the inner workings of the automated-weather-station application.

Experiment 2

This experiment aimed to reduce the number of polling cycles, leading to longer sleep times with significantly less power consumption. The tradeoff here is that more data would be sent in fewer batches. This could have the effect of increasing the time and power needed to send a batch, and it also introduced the need for a send-fail-retry mechanism in case of message failure.

As we discovered from experiment 1, the weather station could send up to 10 historical records in a single batch. A single batch had the energy profile shown in Figure 4, consuming 0.00127 Wh over approximately 10.5 s.

On the basis of this profile, we could now estimate the energy use. That is, given our measurements, we could now form a model of the quality attribute that was no different from a model of performance or availability.

FIGURE 3. Energy use during message transmission after a bug fix. The difference, compared to this experiment's original results, was a 42 percent improvement.

FIGURE 4. The energy profile of a single batch transmission. On the basis of this profile, we could estimate the energy use.



For simplicity's sake, we chose a polling interval of 1 hour. So, the historical data would be sent in batches of four records (one record each 15 min). The predicted power consumption was:

• Work Time = 99 s + 10.5 s ≈ 110 s
• Consumed Power during Awake = 0.0116034 + 0.00127 = 0.012874 Wh
• Sleep Time = 3,600 s = 60 min
• Consumed Power during Sleep Time = 60 × 0.0002025 = 0.01215 Wh
• C per h = 3,600/(110 + 3,600) = 0.97
• Estimated Average Power Consumption per h = (Ew + Es) × C = (0.012874 + 0.01215) × 0.97 = 0.02427 Wh
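The prediction is the same duty-cycle arithmetic with a 60-minute sleep interval. A quick sketch using the figures above; the variable names are ours:

```python
work_s = 99 + 10.5             # bug-fixed polling cycle plus one batch send
work_wh = 0.0116034 + 0.00127  # energy consumed while awake
sleep_wh_per_min = 0.0002025   # from the earlier sleep-mode measurement
sleep_min = 60                 # one wake-up per hour

cycle_min = work_s / 60 + sleep_min
cycles_per_hour = 60 / cycle_min
predicted = (work_wh + sleep_wh_per_min * sleep_min) * cycles_per_hour
print(predicted)               # ~0.0243 Wh, matching the estimate above
```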

For this experiment, we set up a weather station with a polling interval of 3,600 seconds and ran it for 8,225 seconds (a little over two hours); see Figure 5. The average current draw was 0.0011344 A, and the total power consumption was 0.0312867 Wh. So, we calculated the average power consumption per hour as

E/(Duration/3,600) = 0.0312867/(8,225/3,600) = 0.0312867/2.2847 = 0.013694 Wh

Comparing the results with our estimated consumption of 0.02427 Wh, we realized that there was some error in our calculations or our model. Taking a closer look at the raw recording, we noticed that most of the sleep time was recorded as 0 A, meaning that the actual energy consumption was close to the meter's tolerances. So, we were unable to measure such low current consumption precisely in real time, and could only trust sleep mode measurements made over long runs.

This insight highlights a new lesson: the need to build both architectural models and prototypes. On one hand, models let us efficiently reason about architectural quality attributes and their tradeoffs. On the other hand, prototypes mandate empirical testing, thus incrementally refining the assumptions built into both models and prototypes.

The difference in power consumption from Experiment 1 was 0.0527 – 0.013694 = 0.039006 Wh (42 percent). The difference from the original model (before we improved the design) was 0.0998 – 0.013694 = 0.086106 Wh (86 percent). Table 2 summarizes these differences.

FIGURE 5. The energy profile of a weather station running for several hours with a polling interval of 3,600 seconds. Most of the sleep time was recorded as 0 A, meaning that the actual energy consumption was close to the meter's tolerances.

Table 2. The differences between the experiments.

Setup        | Description                                     | Consumption per hour (Wh) | Total energy savings (%)
Original     | Plaintext payload and a 15-min polling interval | 0.0998                    | 0
Experiment 1 | Binary format                                   | 0.0917                    | 8
             | Binary format + bug fix                         | 0.0527                    | 47
Experiment 2 | A polling interval of 1 h                       | 0.0137                    | 86


As you can see, with some modest changes to the design, we reaped enormous energy savings, which would greatly increase the robustness of the IoT devices and the system as a whole. Despite operating in sometimes adverse conditions, these systems must never lose data, even if they're kept offline for 24 hours. With this 86 percent energy improvement, we could now meet this requirement.

As computing moves toward greater scale and mobility, energy use will inevitably become a key concern for architects. We hope we've convinced you that energy can be treated like any other architectural quality attribute. It's no different, from the perspective of architectural design, than modifiability, performance, or availability. It can be modeled and prototyped, and we can reason about the design tradeoffs required to achieve better energy use. Of course, the design primitives, models, tools, and tradeoffs are specific to energy use, but the fundamental principles and reasoning methods for architectural design don't change.

As we said before, you can't manage what you don't measure. So, architects need to begin thinking about making their software more energy aware, monitoring it and adapting it to environmental or application conditions.

References

1. T. Bindi, "Mobile and Tablet Internet Usage Surpasses Desktop for First Time: StatCounter," ZDNet, 2 Nov. 2016; https://www.zdnet.com/article/mobile-and-tablet-internet-usage-exceeds-desktop-for-first-time-statcounter.
2. R. Danilak, "Why Energy Is a Big and Rapidly Growing Problem for Data Centers," Forbes, 15 Dec. 2017; https://www.forbes.com/sites/forbestechcouncil/2017/12/15/why-energy-is-a-big-and-rapidly-growing-problem-for-data-centers/#65451c435a30.

ABOUT THE AUTHORS

RICK KAZMAN is a professor of information technology management at the University of Hawai'i at Manoa and a researcher at Carnegie Mellon University's Software Engineering Institute. Contact him at [email protected].

SERGE HAZIYEV is the senior vice president of SoftServe's Advanced Technology Group. Contact him at [email protected].

ANDRIY YAKUBA is a senior IoT engineer at SoftServe. Contact him at [email protected].

DAMIAN A. TAMBURRI is an assistant professor in the Jheronimus Academy of Data Science and Technical University of Eindhoven. Contact him at [email protected].


This article originally appeared in IEEE Software, vol. 35, no. 5, 2018.


SUBSCRIBE AND SUBMIT

For more information on paper submission, featured articles, calls for papers, and subscription links visit:

www.computer.org/tsusc

SCOPE

The IEEE Transactions on Sustainable Computing (T-SUSC) is a peer-reviewed journal devoted to publishing high-quality papers that explore the different aspects of sustainable computing. The notion of sustainability is one of the core areas in computing today and can cover a wide range of problem domains and technologies ranging from software to hardware designs to application domains. Sustainability (e.g., energy efficiency, natural resources preservation, using multiple energy sources) is needed in computing devices and infrastructure and has grown to be a major limitation to usability and performance.

Contributions to T-SUSC must address sustainability problems in different computing and information processing environments and technologies, and at different levels of the computational process. These problems can be related to information processing, integration, utilization, aggregation, and generation. Solutions for these problems can call upon a wide range of algorithmic and computational frameworks, such as optimization, machine learning, dynamical systems, prediction and control, decision support systems, meta-heuristics, and game-theory to name a few.

T-SUSC covers pure research and applications within novel scope related to sustainable computing, such as computational devices, storage organization, data transfer, software and information processing, and efficient algorithmic information distribution/processing. Articles dealing with hardware/software implementations, new architectures, modeling and simulation, mathematical models and designs that target sustainable computing problems are encouraged.


IEEE TRANSACTIONS ON SUSTAINABLE COMPUTING: SUBMIT TODAY

Page 19: Artifi cial Intelligence Tech Careers Wearables … · Vienna IEEE Pervasive Computing Marc Langheinrich, University of Lugano Computing in Science & Engineering Jim X. Chen, George


INTERVIEW

Editor: Gary McGraw, [email protected]


Hear the full podcast at www.computer.org/silverbullet. Show links, notes, and an online discussion can be found at www.cigital.com/silverbullet.

Silver Bullet Talks with Ksenia Dmitrieva-Peguero

Gary McGraw | Synopsys

Ksenia Dmitrieva-Peguero is a principal consultant in Synopsys' Software Integrity Group. She has many years of hands-on experience in software and systems security and is an expert in many practices including penetration testing, static analysis tool design and execution, customization and deployment, and threat modeling. Throughout her career as a consultant, Dmitrieva-Peguero has established and evolved secure coding guidance and best practices for many different firms and has delivered numerous software security training sessions. She's passionate about cutting-edge web technologies and probing systems security, and she speaks regularly around the world on topics such as HTML5, CSP, and JavaScript.

You've been doing hands-on software security in the real world for many years. How has the field evolved since you started doing consulting seven years ago?

Seven years ago, a lot of people didn't know what we were talking about when we talked about security. We would look for people to hire with programming experience, and we didn't care about security experience. You would learn everything on the job. These days, there are degrees in security, and we expect candidates to know security basics and have some experience. When you talk to clients, they know what software security is. They've definitely heard of penetration testing and static analysis. Today, the awareness is much higher.

Do you think we've made progress as a field in those seven years, beyond raising awareness?

There are still problems, but I think we're making progress. The field has definitely grown. There's more demand and more understanding of the kinds of problems we're trying to solve, of the difference between software security and all the other things that have to do with IT and security in a company. In terms of quality, I'm not sure.

Maybe the quality remained constant?

There's more software and therefore more bugs, and there are more things to be attacked. Now we have software in mobile devices and cars and everywhere else. So, it might look like there are more security issues, but it's probably also society being more aware of the issues. It's hard to say if we've made any qualitative progress.

If you could fix any practice area in software security today, which one would it be?

Probably threat modeling. Architecture review follows because that's the more complicated one, and people have very different descriptions and understanding of what it is. Everybody has their own version of threat modeling and risk analysis.

It might be because we can build a program to go through our code and look for bugs, but we can't build a similar system to go through a design and look for flaws.

Absolutely. There's less tooling because the process is much more involved. Maybe with the development of artificial intelligence we'll be better able to develop some sort of automation for understanding design problems.

Frameworks like Angular, Node, and Express have been popping up all over the place. In your view, do all frameworks have the same security posture?

Definitely not. All frameworks have the problem of building security in and how much each framework developer or maintainer wants to make security a part of it. Unfortunately, these days that's not the highest priority for the framework creators. Very few even bother with it. Then developers are left to solve all the security problems themselves while they're using the framework. Instead of having the features built in, they have to reinvent the wheel every single time.

On one hand, security should be the responsibility of the framework creators and maintainers. On the other hand, the developers using the framework should demand it. When they choose a framework, they should ask about its security posture. I don't see any questions on forums like Stack Overflow saying, "Hey, which framework is more secure?" Usually you see a question like, "Hey, which framework has more visual components, features, or is faster?"

So, the developers building the frameworks don't really deal with security, and the developers using the frameworks can screw everything up from a security perspective.

Right. For validation, developers might write their own routine to validate email addresses, and that will most likely be code copied from Stack Overflow or another GitHub repository where the quality isn't guaranteed. Depending on the developer, the framework, and how well the library is integrated with the framework, they might be using a third-party library, which is often a better solution—a little more validated, updated, and secure. But in both cases, they have opportunities to screw things up because even if you use a third-party library or a plug-in, it usually requires some configuration. We've seen these examples in Angular where a plug-in comes with a pretty good default security setting, but it doesn't satisfy some features that developers need. So developers turn off security settings; they change them and then the plug-in doesn't do the job it's supposed to do.

You've spent time digging into Angular and thinking about how to automate aspects of security analysis. What have you learned about Angular, and what have you done to get that into automation?

Angular is interesting. On one hand, it's client-side code. We don't trust anything that runs on the client. A lot of things can be bypassed. But on the other hand, the way the templating is done in Angular provides good protection from cross-site scripting even if malicious data is passed through the server-side code. In terms of automatically finding security problems, that's still a big question. There isn't a good tool for JavaScript today. It's possible to find some dataflow and cross-site scripting issues, but the issues that have to do with the configuration of the framework or plug-ins are harder because they are updated so often. We're always chasing the updates that happen every couple of months. When a new version of Angular comes out, not everything changes, but there might be enough changes that our automated tools don't find things anymore.

Have you talked to the developers building Angular? Are they aware of what you're doing?

We haven't communicated directly with them. But from the Angular developer standpoint, when they introduce new features, they often say, "Hey, this is a breaking build, and we don't really care about the older versions. We don't have the requirements or the desire to support whatever was built before. So, we're just going to go ahead and start a new version."

You have to figure out what they did, and then figure out how to adjust whatever you built to work again every time?

If a developer started using Angular 1.6 before Angular 2 came out, sometimes they'll just stick with the old version because they'd have to rewrite their whole application due to the breaking changes in the framework. Not every company will do that. If developers are still using the older 1.6 version, the tools that were built for 1.6 will still work. But they won't work for a new application.

Do you think version controlling and keeping everything somewhat similar in terms of its security posture are the biggest open problems?

I think one of the problems is keeping up with all the versions and upgrading to the new features and plug-ins. The second problem is keeping up with the variety of frameworks. Last year, Angular was number one, but I think it's fading out and React is stepping in. Now we'll need to build automation tools for React. Six months from now, it'll be something else.

Here's a trick question. What's more important: code review or architecture risk analysis [ARA]?

It depends on the application. If it's a standard web app that has a database, a back end, and a front end, you might not gain as much doing ARA. You could start with a code review and get more bang for your buck that way. But if it's software in a car that isn't standard or another very complicated application then, yes, we should definitely start with ARA.

How do you think code review and ARA are related?

If you're reviewing a complex system, you'll start with an ARA, which will help you identify potential issues. To find out if these issues actually exist in the system, you would do a code review. You would look at something and say, "Hey, this system is talking to this third-party back end that has its own protocol. How about we review the code of this third-party back end and this protocol and how the communication happens?" In doing code review of this scope, we might find some issues.

You've experienced and lived in different cultures all over the world. Do diverse cultures approach computer security differently, or is it the same?

I don't think there's much difference in the countries I've lived and worked in. I'm from Russia and have worked in Europe and the US. I didn't see any differences in how people understand or relate to computer security. Maybe in other cultures, it's different. I think if you look at other areas like education or psychology or maybe medicine, there might be cultural differences, but if you look at technical stuff—math and computers—I think it's pretty straightforward in any culture.

You give a lot of talks all over the place. What's your favorite conference to attend or speak at?

One of my favorites so far has been AppSec in Europe. There's a great concentration of web technology experts, which is my playground. I really enjoy communicating and interacting with and listening to all these amazingly smart people. In Europe, I think people are very relaxed and friendly and less commercialized than in the US. The conference is more like the exchange of ideas and not the exchange of business cards and promotional materials.

As a hardcore technologist and a woman, what's your view of sexism in the field? Do you think it's harder to gain respect as a technologist if you're a woman?

Yes, unfortunately, it's there. It's harder to gain respect for sure, and I think it's especially hard to gain respect at the middle level. When you're at a higher level—for example, when speaking at a big conference—there are requirements to have men and women represented equally. And a conference might actually tend to select talks from women more often than from men. At the middle level, working with clients and establishing your position and trust as a woman can be very challenging.

You have to sort of prove yourself a little bit more?

Yes. I've been in situations where I was working with a male colleague and the client would interact with him but not with me.

Even though you probably knew more than the other guy?

Sometimes, yes.

Oh, come on, always. One last question. One of the coolest things you do is competitive ballroom dancing. When did you start dancing, and did you ever think about going pro?

It's been about 10 years. I've never thought about going pro because I started dancing in Russia. Most people in Russia start ballroom dancing when they are six or seven years old. If you're starting after high school, there's no way you can become a professional—it's treated only as a hobby. In the US, you can become a pro, but you have to make your life out of it. For me, it's a hobby that I get a lot of enjoyment from.

The Silver Bullet Podcast with Gary McGraw is cosponsored by Cigital (part of Synopsys) and this magazine and is syndicated by SearchSecurity.

Gary McGraw is vice president of security technology at Synopsys. He's the author of Software Security: Building Security In (Addison-Wesley, 2006) and eight other books. McGraw received a BA in philosophy from the University of Virginia and a dual PhD in computer science and cognitive science from Indiana University. Contact him via garymcgraw.com.

About Ksenia Dmitrieva-Peguero

Ksenia Dmitrieva-Peguero is a principal consultant at Synopsys, where she leads a JavaScript research group that concentrates on common web application vulnerabilities and best practices. Her key areas of expertise include Web 2.0, JavaScript, HTML5, and Content Security Policy. Dmitrieva-Peguero loves studying new technologies, finding their vulnerabilities, and discovering ways to protect them. She presents at conferences frequently, including AppSec Europe, BSides Security London, and RSA Asia Pacific and Japan. Dmitrieva-Peguero received an MS in computer science from George Washington University. Outside of the office, she is a competitive ballroom dancer. She lives in Virginia with her husband and their brand-new baby girl.


This article originally appeared in IEEE Security & Privacy, vol. 15, no. 5, 2017.


IEEE TRANSACTIONS ON BIG DATA

SUBMIT TODAY

SCOPE

The IEEE Transactions on Big Data (TBD) publishes peer-reviewed articles with big data as the main focus. The articles provide cross-disciplinary innovative research ideas and applications results for big data, including novel theory, algorithms, and applications. Research areas for big data include, but are not restricted to, big data analytics, big data visualization, big data curation and management, big data semantics, big data infrastructure, big data standards, big data performance analyses, intelligence from big data, scientific discovery from big data, and security, privacy, and legal issues specific to big data. Applications of big data in the fields of endeavor where massive data is generated are of particular interest.

SUBSCRIBE AND SUBMIT

For more information on paper submission, featured articles, calls for papers, and subscription links, visit: www.computer.org/tbd


Editor: Giuliano Antoniol, Polytechnique Montréal, [email protected]

Editor: Steve Counsell, Brunel University, [email protected]

Editor: Phillip Laplante, Pennsylvania State University, [email protected]

INVITED CONTENT

Software Engineering for Machine-Learning Applications: The Road Ahead

Foutse Khomh, Bram Adams, Jinghui Cheng, Marios Fokaefs, and Giuliano Antoniol

THE NEED AND desire for more automation and intelligence have led to breakthroughs in machine learning (ML) and artificial intelligence (AI), yet we still experience failures and shortcomings in the resulting software systems. The main reason is the shift in the development paradigm induced by ML and AI. Traditionally, software systems are constructed deductively, by writing down the rules that govern the system behaviors as program code. However, with ML techniques, these rules are inferred from training data (from which the requirements are generated inductively). This paradigm shift makes reasoning about the behavior of software systems with ML components difficult, resulting in software systems that are intrinsically challenging to test and verify.

Given the critical and increasing role of ML- and AI-based systems in our society, it's imperative for both the software engineering (SE) and ML communities to research and develop innovative approaches to address these challenges. In fact, the learned behavior of an ML-based system might be incorrect, even if the learning algorithm is implemented correctly, a situation in which traditional testing techniques are ineffective. A critical problem is how to effectively develop, test, and evolve such systems, given that they don't have (complete) specifications or even source code corresponding to some of their critical behaviors.

Motivated by these challenges, we organized the First Symposium on Software Engineering for Machine Learning Applications (SEMLA) at Polytechnique Montréal on 12 and 13 June 2018, with the kind support of Polytechnique Montréal's Department of Computer Engineering and Software Engineering, the Institute for Data Valorization (IVADO), SAP, and Red Hat. The event attracted around 160 participants from all over the world, including students, academics, and industrial practitioners.

SEMLA's main objective was to create a space in which SE and ML experts could come together to discuss challenges, new insights, and practical ideas regarding the engineering of ML- and AI-based systems. The program included talks and panels presented by renowned academic researchers and industrial practitioners, including keynote speakers David Parnas, Lionel Briand, and Yoshua Bengio. The full program is at http://semla.polymtl.ca. Here, we summarize some key challenges these experts identified.

System Accuracy

The first topic concerned the accuracy of systems built using ML and AI models, and the responsibilities of engineers building them. For example, one keynote speaker mentioned three categories of AI research:

• building programs that imitate human behavior to better understand human thinking (used in psychology research),

• building programs that play games well (challenging and fun), and


• demonstrating that practical computerized products can use the same methods that humans use (risky and often naive).

He stressed that researchers should be very concerned about AI systems in the third category because they can't guarantee 100 percent accuracy or correct answers in all cases. He also raised concerns that people are using the Turing test to falsely claim intelligence in systems. He commented, "Turing did not claim that his test was a test for artificial intelligence!"

In response, a leading AI expert stated that AI’s goal is not to achieve 100 percent accuracy because

• humans are also far from 100 percent accuracy in their daily tasks, and

• AI technology's strength comes from the ability to abstract up from different factors of variation between environments, to obtain models that can generalize and transfer to situations that weren't encountered before.

He further explained that AI technologies' main challenge is the curse of dimensionality—that is, the need for sufficient, labeled data to cover all important factors (features) of a given problem. AI, in fact, needs more training data than humans do!

Whereas the key properties of techniques such as deep learning (for example, compositionality, encoding into a simpler domain, and conditional computation) aim to reduce dimensionality's impact, applications of AI still risk being limited to domains in which labeled data is cheap. Although labeled data is somehow abundant in some SE domains (such as defect prediction), other domains (such as requirement elicitation) are more challenging. Overall, AI's full impact on SE is still unclear.

Because of AI and ML systems' intrinsic imperfection, one panelist argued, only harmless AI technology or applications should be released to the public, since the responsibility of every engineer is to protect the public. He also mentioned that the public should be informed accurately of the AI technology it's being exposed to. For example, instead of touting a "100 percent self-driving car," automotive companies should advertise their products as "AI-assisted cars," with a clear list of the ways in which AI is assisting.

Another panelist emphasized that AI isn't a panacea. He illustrated how simple techniques could give the illusion of AI, or how the blind application of AI wouldn't improve the workflow of workers. For example, in principle, an intelligent robot could easily replace a human worker to hand another worker the right tool for a given job, but not if the worker afterward throws the tool back on a pile. (The robot will have a hard time retrieving the right tool from an unordered pile.) However, using an intelligent robot to return tools in an ordered fashion (which is a different problem) could allow other robots later on to be deployed to hand over tools to workers. If a traditional computer science algorithm can solve a problem, we should just use that.

System Testing

The second hot topic our experts discussed was the difficulty of testing ML and AI systems. Our panelists debated whether we should tackle the testing of those systems the same way we do the testing of traditional systems, since an AI system's behavior might be incorrect even if the learning algorithms are implemented correctly. One keynote speaker explained how in complex cyber-physical systems (CPSs), when no clear specifications of the intended systems exist (that is, humans have a lot of knowledge but can't formalize it), only AI can approximate the system's intended behavior by learning models from the available data.

This is a clear improvement over the manual design of models and controllers. However, it pushes most of the risk toward the trained models' quality. So, how can we perform adequate quality assurance (QA) of AI models, given that the number of environments in which the models will be deployed is unlimited and that the human operator will require a detailed explanation of any failures?

Fortunately, we can use AI technology to reduce the search space of the environments to be tested, nudging QA techniques to those environments most likely to have failures or violate important safety constraints. Such an approach could even work in the system-of-systems context of CPSs, where each sensor and actuator must be validated not only in isolation but also in close integration with each other.
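To make the idea concrete, here is a minimal sketch (ours, not from the SEMLA talks) of how a learned model could steer QA toward failure-prone environments: train a classifier on past test outcomes, then spend the testing budget on the candidate environments it scores as most likely to fail. The feature encoding, model choice, and run_simulation stub are all hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

def run_simulation(env):
    # Stand-in for the actual CPS test harness.
    print("testing environment", np.round(env, 2))

rng = np.random.default_rng(0)

# Each row encodes one candidate environment (say, vehicle speed,
# road friction, sensor noise); labels mark past runs that failed.
past_envs = rng.random((200, 3))
past_failed = (past_envs[:, 0] + past_envs[:, 2] > 1.3).astype(int)
model = LogisticRegression().fit(past_envs, past_failed)

# Score a large pool of candidates and test only the 50 riskiest.
candidates = rng.random((10_000, 3))
failure_prob = model.predict_proba(candidates)[:, 1]
for env in candidates[np.argsort(failure_prob)[::-1][:50]]:
    run_simulation(env)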

However, this QA doesn't guard against hardware failure. So, hardware systems should incorporate fault-tolerance mechanisms to cope with such failures. One audience participant also observed that hardware could incorporate fault-tolerance mechanisms to mitigate the effect of AI model errors, improving AI systems' robustness.

Another major challenge is that humans, once they've started trusting AI in their daily tasks, could begin adapting their behavior to the AI assistance. For example, a study in Munich showed how assisted braking initially reduced the number of accidents, until drivers relied too much on the assisted braking and drove more aggressively. So, when is an AI-enabled product ready for release to the public? Although four million miles of test drives can't prevent a serious accident in the next mile, how much information in the test drives can be used to debug and fix the corresponding fault?

An additional important question discussed at SEMLA is humans' role in an AI-driven world. Are humans obsolete once AI technologies become mainstream? The panelists unanimously disagreed because humans are essential for putting the decisions of AI into context. Although the outcome (and potential failures) of the AI impacts the humans' recommendations, those recommendations are also a human filter for AI failures. Further sociological research is necessary to study how AI technology affects human behavior.

Industrial Applications

SEMLA's second day was devoted to industrial applications of AI. These industrial speakers discussed the current state of AI in industry and the challenges they face when applying AI models. They also discussed some open problems and what they consider to be their biggest needs and top priorities.

For example, a presenter from Google Brain pinpointed several programming-language issues involved in ML libraries, models, and frameworks. Different approaches in the current libraries have different advantages and disadvantages. Creating an efficient syntax for automatic differentiation that can deliver ease of implementation, performance, usability, and flexibility is important but difficult. Testing and debugging these implementations are also salient challenges. Industrial practitioners further mentioned that collaboration among experts from different fields is important for developing ML applications.

ABOUT THE AUTHORS

FOUTSE KHOMH is an associate professor at Polytechnique Montréal, where he leads the SWAT (Software Analytics and Technology) Lab. Contact him at [email protected].

MARIOS FOKAEFS is an assistant professor in Polytechnique Montréal's Department of Computer Engineering and Software Engineering. Contact him at [email protected].

BRAM ADAMS is an associate professor at Polytechnique Montréal, where he leads the MCIS (Maintenance, Construction and Intelligence of Software) Lab. Contact him at [email protected].

GIULIANO ANTONIOL is a professor of software engineering in Polytechnique Montréal's Department of Computer Engineering and Software Engineering. Contact him at [email protected].

JINGHUI CHENG is an assistant professor in Polytechnique Montréal's Department of Computer Engineering and Software Engineering. Contact him at [email protected].


Healing the Rift

From these two days of intensive discussions, two key questions emerged:

• How should software development teams integrate the AI model lifecycle (training, testing, deploying, evolving, and so on) into their software process?

• What new roles, artifacts, and activities come into play, and how do they tie into existing agile or DevOps processes?

Answering these questions requires combined knowledge in SE and ML.

Unfortunately, a rift exists between these two communities, which we tried to understand through SEMLA. One reason for this rift is that stakeholders in the AI community focus on algorithms and their performance characteristics, whereas stakeholders in the SE community focus on implementing and deploying those algorithms.

So far, no real venue has integrated both fields, yet intersections exist between them, one of which is testing. The notion of coming up with ways to break a system is integral to ML, and the scale of test sets (thousands to millions of instances) is huge compared to the number of test cases in software systems. On the other hand, missing test cases or test cases that are known to fail (but are rare) are a larger issue for ML than for regular software systems.

We believe that the SE and ML communities should work together to solve the critical challenges of assuring the quality of AI and software systems in general. We have a lot to benefit from each other!


This article originally appeared in IEEE Software, vol. 35, no. 5, 2018.


IEEE Security & Privacy magazine provides articles with both a practical and research bent by the top thinkers in the field.

• Stay current on the latest security tools and theories and gain invaluable practical and research knowledge.
• Learn more about the latest techniques and cutting-edge technology.
• Discover case studies, tutorials, columns, and in-depth interviews and podcasts for the information security industry.

computer.org/security


DEPARTMENT: Internet of Things

From Raw Data to Smart Manufacturing: AI and Semantic Web of Things for Industry 4.0

Pankesh Patel, Fraunhofer USA
Muhammad Intizar Ali, National University of Ireland
Amit Sheth, Kno.e.sis Center

AI techniques combined with recent advancements in the Internet of Things, Web of Things, and Semantic Web—jointly referred to as the Semantic Web of Things—promise to play an important role in Industry 4.0. As part of this vision, the authors present a Semantic Web of Things for Industry 4.0 (SWeTI) platform. Through realistic use case scenarios, they showcase how SWeTI technologies can address Industry 4.0's challenges, facilitate cross-sector and cross-domain integration of systems, and develop intelligent and smart services for smart manufacturing.

Industry 4.0 refers to the Fourth Industrial Revolution—the recent trend of automation and data exchange in manufacturing technologies. To fully realize the Industry 4.0 vision, manufacturers need to unlock capabilities such as vertical integration through connected and smart manufacturing assets of a factory, horizontal integration through connected discrete operational systems of a factory, and end-to-end integration throughout the entire supply chain. Several architectures and conceptual platforms (for example, RAMI 4.0 in Europe, IIRA in the US) have been proposed to develop Industry 4.0 applications.1 However, these reference architectures are largely missing the granularity of Semantic Web and AI technologies. We believe that combined AI and Semantic Web technologies are a good fit for the plethora of complex problems related to interoperability and automated, flexible, and self-configurable systems such as Industry 4.0 systems.2 In this article, we present a Semantic Web of Things for Industry 4.0 (SWeTI) platform for building Industry 4.0 applications.

We first present a representative set of Industry 4.0 use cases, followed by a layered SWeTI platform for building these use cases. Each layer contains a variety of tools and techniques to build smart applications that can process raw sensory information and support smart manufacturing using Semantic Web and AI techniques. Finally, we present our existing tools and middleware for realizing the SWeTI platform.



USE CASES

Below is a representative set of Industry 4.0 use cases that can potentially leverage Semantic Web technologies.

Use Case 1: Deep Integration

In this use case, an integrated and holistic view of a factory is established to improve decision-making across different departments and to reduce overall complexity. This includes the interlinking of diverse data sources such as sensor measurements (for example, temperature, vibration, pressure, power), the manufacturing execution system (for example, work orders, material needed for the production, incoming material number), business processes, work force, and so on. Although much of this data is already captured by IT systems, it remains largely inaccessible in an integrated way without investing significant manual effort. Thus, the objective of this use case is to make all data available in a unified model to support users (factory planners, machinists, controllers, field technicians, and so forth) in decision-making. For example, consider the following scenarios:

• A factory planner needs input from diverse sources regarding order plans, machine maintenance schedules, workforce availability, and so on.3

• A field technician must quickly troubleshoot an onsite industrial asset, and is seeking a solution that combines a summary of the problem, including difficulty and time estimates; links to relevant manuals and necessary parts; additional physical tools to resolve the problems and the current location of these tools; and, if the problem is difficult to resolve, additional support from people with the necessary expertise.

• During production, a machinist needs to know which tools are required to perform the task at hand, the location of these tools and materials, and quality control standards to be adhered to.3

Use Case 2: Horizontal Integration

This use case extends the vertical integration of all factory operations into the horizontal dimension, knitting together the relevant players in a manufacturing supply chain—the raw materials and parts suppliers, logistics, inventory of supplied goods, production process, warehouses and distributors of finished products, sales and marketing, customers—through an interconnected network of Internet of Things (IoT) devices and external information sources such as social media and web services (for example, financial services or weather forecasting), overseen via an overarching semantic-enabled engine. Some examples include the following:

• A smart factory manager wants to optimize supply chain and warehouse facilities to ensure that the right amount of raw material is always available in the warehouse to support production processes. Because the factory produces customized products, allowing customers to choose their food products' ingredients, it is difficult to estimate the amount and type of raw material required to fulfill customer orders. However, efficient data-mining and machine-learning techniques that harness social media data can provide insights into ongoing trends and customer preferences, which will help the warehouse manager optimize the supply chain and ensure that the right amount of raw ingredients is always available in the warehouse to replenish the processing machine at the food production factory. (A forecasting sketch follows this list.)

• A production manager in a manufacturing unit needs an integrated view of the supply chain, including raw materials and the distribution network, to optimize internal manufacturing processes. A horizontal integration of smart factory production processes, supply, and the distribution network including fleet management data and external datasets such as traffic congestion, weather, and social media is required to build an optimal strategy for the production of perishable food products. Using this integrated information, the production manager can adapt internal business processes on the fly.
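As a toy illustration of the first scenario, the sketch below regresses past order volumes on a weekly social-media mention count and uses the fit to size the next raw-material order. All series, numbers, and the 15 percent safety buffer are invented for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression

weekly_mentions = np.array([120, 135, 160, 158, 190, 210, 240])  # trend proxy
orders = np.array([80, 90, 104, 100, 125, 138, 155])             # past demand

model = LinearRegression().fit(weekly_mentions.reshape(-1, 1), orders)
predicted = model.predict(np.array([[260]]))[0]  # next week's trend estimate

safety_stock = 1.15  # 15% buffer so production is never starved
print(f"order about {predicted * safety_stock:.0f} units of raw ingredients")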


Use Case 3: Autonomous System

This class of use cases deals with enabling factory devices to cooperate to achieve the factory's overall objectives. The following are just a few examples:

• Self-organization. When a production order comes down to the factory, machines can communicate and exchange information with one another to organize resources to complete the orders on time. Resource allocation is determined at runtime, rather than preallocated, depending on the machines' current conditions, including the factory's current workload, machine availability and maintenance schedule, backlog of customer orders, and machine capacity. Moreover, resource allocation could consider external electricity-rate data to achieve the goal of reducing the factory's energy consumption and carbon footprint. (A minimal allocation sketch follows this list.)

• Flexible manufacturing. Decentralized control is very useful when market demands lead to the introduction of new machines in factories. The new machines can participate by simply announcing their services and features during the resource allocation process. This illustrates the flexibility and adaptability of a factory, where new machines can be integrated in a plug-and-produce fashion according to market demands with minimal downtime.

• Fault tolerance. The highly dynamic and self-organizing features result in a fault-tolerant system—faulty sensors can be replaced by discovering new sensors with similar functionality to prevent downtime during the production process.
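The following sketch illustrates the self-organization example above: an order is routed at runtime to the machine whose current backlog, maintenance state, and energy cost make it cheapest to use. The Machine fields and cost weights are invented; a real system would negotiate such values over the semantic layers described below.

from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    queue_hours: float     # current backlog
    kwh_per_unit: float    # energy needed per produced unit
    in_maintenance: bool

def allocation_cost(m, units, price_per_kwh):
    if m.in_maintenance:
        return float("inf")  # machines under maintenance never win
    energy_cost = m.kwh_per_unit * units * price_per_kwh
    return m.queue_hours + 0.1 * energy_cost  # weights are illustrative

machines = [
    Machine("press-1", 6.0, 2.0, False),
    Machine("press-2", 1.5, 3.5, False),
    Machine("press-3", 0.0, 2.2, True),
]

# The external electricity rate feeds directly into the decision.
best = min(machines, key=lambda m: allocation_cost(m, 500, 0.12))
print("order routed to", best.name)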

A SEMANTIC WEB OF THINGS FOR INDUSTRY 4.0 PLATFORM

In this section, we present our SWeTI platform's layered architecture for Industry 4.0 application design. Figure 1 depicts the overall architecture, beginning with the data processing pipeline at the device, sensor, or machine level and moving toward intelligent autonomous applications or dashboards at the application layer. In what follows, we briefly describe each layer and its components and functionality.

Figure 1. A layered view of the Semantic Web of Things for Industry 4.0 (SWeTI) platform.


Device Layer

At the factory floor, devices range from production machines (such as PLCs, industrial motors, pumps, and robots) to smart devices and tools (such as smartwatches, glasses, sensors, and smartphones) that provide extended human–machine interface and functionality. From a connectivity viewpoint, these could be legacy devices (which communicate through legacy protocols such as Profibus, Modbus, and OPC) as well as new equipment with embedded technology that allows these devices to communicate through recent IoT standards such as OPC-UA, MQTT, Bluetooth, and so on.
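A minimal sketch of the second kind of connectivity: a smart device or retrofit gateway publishing one sensor reading over MQTT with the Eclipse Paho client (the call style matches paho-mqtt 1.x; version 2.x additionally requires a callback_api_version argument to Client). The broker address, topic, and payload layout are placeholders, not part of the SWeTI platform.

import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.factory.local", 1883)  # hypothetical broker

reading = {
    "deviceID": "pump-17",
    "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "temperature": 74.2,
    "vibration": 0.031,
}
client.publish("factory/line1/pump-17/measurements", json.dumps(reading))
client.disconnect()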

Edge Layer

The edge layer transforms data generated by factory floor devices into information. Typically, industrial gateways are deployed at this layer; these are relatively powerful devices compared to those in the lowest layer. The broad functionality at this layer is as follows:

• Interoperability. For instance, OPC-UA specifies real-time communication of plant data between control devices from different vendors (such as ABB or Siemens). The Production Performance Management Protocol (PPMP; https://bit.ly/2P0WGUc) describes the payload definition of three types of messages generated from industrial assets: machine messages, measurement messages, and process messages. This protocol is independent of transport protocols such as HTTP/REST, MQTT, and AMQP. (A payload sketch follows this list.)

• Analytics-based actions. Actions include reactions to various production events, such as executing rule-based alerts and/or sending commands back (for example, reconfiguration of industrial machines) to the production equipment and tools.

• Edge analytics. These provide data aggregation techniques, data filtering, and cleansing techniques to refine data, implementing a device-independent model as a base for decisions, and analytical logic for specific device domains and types.
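To give a feel for the interoperability item, here is a sketch of a PPMP-style measurement message, loosely following the Eclipse Unide measurement-message payload; check the field names against the current specification before relying on them.

import json

message = {
    "content-spec": "urn:spec://eclipse.org/unide/measurement-message#v2",
    "device": {"deviceID": "pump-17"},
    "measurements": [{
        "ts": "2019-02-01T09:30:10.123+01:00",
        "series": {
            "$_time": [0, 250, 500],  # millisecond offsets from "ts"
            "temperature": [74.1, 74.2, 74.6],
            "vibration": [0.030, 0.031, 0.029],
        },
    }],
}
payload = json.dumps(message)  # ship over HTTP/REST, MQTT, or AMQP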

Greengrass (https://aws.amazon.com/greengrass) is an edge-analytics software solution from Amazon. Similarly, Microsoft offers Azure IoT Edge. Greengrass has a small footprint that can run on gateway devices such as the BeagleBone and Raspberry Pi. Using Lambda functions, AWS Greengrass provides data filtering and computation capabilities. Developers can push small analytical capabilities by deploying Lambda functions from the AWS cloud (for further details, see "On Using the Intelligent Edge for IoT Analytics"4).
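The sketch below shows the kind of function such an edge runtime could execute: it aggregates raw temperature samples and forwards only a summary, attaching a rule-based alert when a threshold is crossed. The handler signature follows the AWS Python Lambda convention; the threshold, event shape, and print stand-in are assumptions, and a real Greengrass deployment would publish via the Greengrass SDK rather than print.

import json

TEMP_ALERT_C = 85.0  # illustrative threshold

def function_handler(event, context):
    readings = [r for r in event.get("temperature", []) if r is not None]
    if not readings:
        return
    summary = {
        "deviceID": event.get("deviceID", "unknown"),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    if summary["max"] > TEMP_ALERT_C:
        summary["alert"] = "overtemperature"  # rule-based action
    print(json.dumps(summary))  # stand-in for publishing upstream

if __name__ == "__main__":
    function_handler({"deviceID": "pump-17", "temperature": [74.1, 90.3]}, None)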

Cyber Layer

The cyber layer acts as a distributed information hub. Gathering massive amounts of information from diverse distributed sources, this layer prepares the ground for the data-analytic layer's specific analytics. A key form these information sources take is enterprise-wide knowledge: Industry 4.0 envisions access to data across the different players (logistics, customers, distributors, and suppliers) in the supply chain. Information from various machines on factory floors (through the edge layer or directly from the device layer) is pushed to form a linked network of information (that is, Linked Data; https://bit.ly/29YZz5b) generated from production machines. Linked Data is a natural fit because it provides an abstraction layer on top of a distributed set of data stored across the supply chain.
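A small rdflib sketch of what such Linked Data could look like: one machine observation published against a made-up factory namespace plus the W3C SOSA vocabulary, so any supply-chain partner can link to the same machine URI. The namespace and identifiers are invented for illustration.

from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

FACT = Namespace("http://factory.example.org/")
SOSA = Namespace("http://www.w3.org/ns/sosa/")

g = Graph()
g.bind("fact", FACT)
g.bind("sosa", SOSA)

obs = FACT["obs/42"]
g.add((obs, RDF.type, SOSA.Observation))
g.add((obs, SOSA.madeBySensor, FACT["pump-17/temp-sensor"]))
g.add((obs, SOSA.hasSimpleResult, Literal(74.2, datatype=XSD.double)))

print(g.serialize(format="turtle"))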

An alternate technology solution could be blockchain (https://bit.ly/2nna7Ac). A blockchain is decentralized and distributed across peer-to-peer networks; each participant in the network can read and write data, and the network remains in sync. For instance, blockchain can be used for timely industrial asset maintenance, sharing necessary information across different organizations.5 Repair partners could monitor the blockchain for maintenance needs to minimize downtime in factory production and record their work on the blockchain for further use. The regulators of industrial plant equipment would have access to asset records, allowing them to provide timely certification to ensure that an asset is safe for factory workers.
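The toy sketch below illustrates only the tamper-evidence property that makes a shared ledger attractive for maintenance records: each entry hashes its predecessor, so silently rewriting history breaks verification. A real blockchain adds consensus and peer replication, which this omits.

import hashlib
import json

def add_record(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev": block["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"] or (i > 0 and block["prev"] != chain[i - 1]["hash"]):
            return False
    return True

log = []
add_record(log, {"asset": "pump-17", "work": "bearing replaced", "by": "repair-co"})
add_record(log, {"asset": "pump-17", "work": "safety certification", "by": "regulator"})
print("maintenance log valid:", verify(log))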


Domain knowledge sources include Linked Open Data (LOD), Linked Open Reasoning (LOR), and Linked Open Services (LOS). These approaches reuse data, rules, and services, respectively, from the web.6 For instance, Sensor-based Linked Open Rules (S-LOR)6 is a data set of interoperable rules used to interpret data produced by sensors. LOS is intended to share and reuse services and applications that can be used to compose cross-domain Industry 4.0 applications.

External sources are also available. Real-time data streams from external sources such as social media and tracking devices from logistics, weather, traffic, and news feeds can be brought into the cyber layer. This data can be linked with other information (for example, events affecting supply shipments, possibly with the support of information extraction techniques such as event identification) at the data-analytic layer for strategic optimization, such as route network improvements. This would provide a fully transparent view of the supply chain and let managers make decisions to keep the supply chain flow moving.

An alternative to the distributed information hub is to use the cloud to build Industry 4.0 applications. Cloud-based manufacturing is a centralized one-stop shop that lets manufacturers apply industrial analytics on top of stored data. For example, Microsoft Azure (https://bit.ly/2KLKbs9) uses the term "data lake" for big data storage: a cloud-based centralized repository that allows users to store structured and unstructured data at any scale. Users can run different analytics, from simple analytics such as data visualization to complex analytics such as real-time analytics, machine learning, and big data processing.

Additional analytics on top of the data sources described above enable several capabilities, described in the next section.

Data-Analytic Layer
The massive amount of data available in the cyber layer forms a distributed data lake, creating an opportunity to add industrial analytics on top of it by leveraging AI algorithms. The purpose of industrial analytics is to identify invisible relationships in the data at the application layer (discussed in the next section), enabling decision-makers to make optimized decisions.

The industrial analytics algorithms can be on-premise (written by an enterprise) or cloud-based. Various cloud vendors let manufacturers store and integrate data, apply industrial analytics available in a cloud marketplace, and develop customized business solutions leveraging cloud-based tools. As an example, GE developed Predix (www.predix.io/catalog), an industrial Internet platform that offers a marketplace to deploy various apps and services, including predictive maintenance, anomaly detection, and algorithms for the intelligent edge. Similarly, the Azure AI gallery (https://gallery.azure.ai) offers a catalog service for finding machine learning algorithms for the industrial Internet. Microsoft has integrated AzureML (https://studio.azureml.net), a visual programming environment for developing machine-learning algorithms, so AzureML developers can make their algorithms available to the community by publishing them to the Azure AI gallery. Recently, Siemens launched MindSphere (https://siemens.mindsphere.io; hosted on AWS), a cloud-based Industry 4.0 operating system that lets manufacturers connect their industrial machines to the cloud and offers a marketplace (like an app store) of deployment-ready industrial applications.

Application Layer
The application layer presents the knowledge acquired from the cyber layer to users (such as domain experts and decision-makers) so that correct decisions can be taken. This layer is about building meaningful, customized applications on top of the services and data exposed by the data-analytic layer and presenting the acquired knowledge to the user in an appropriate manner. A broad variety of machine learning approaches7 have been developed to analyze and extract higher-level information from diverse IoT data.
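As one illustrative example of such an approach (a sketch using scikit-learn on synthetic data, not a method prescribed here), an isolation forest trained on normal machine cycles can flag anomalous ones for the application layer to surface:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic "normal" cycles: (temperature, vibration) pairs.
    rng = np.random.default_rng(42)
    normal_cycles = rng.normal(loc=[60.0, 2.0], scale=[3.0, 0.5], size=(500, 2))

    model = IsolationForest(contamination=0.01, random_state=42)
    model.fit(normal_cycles)

    suspicious = np.array([[85.0, 6.5]])  # a new, unusual reading
    print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal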



In recent years, industrial vendors have demonstrated a wide variety of Industry 4.0 applications, combining a range of technologies such as digital twins, chatbots/natural language processing, and augmented reality. For instance, by combining the functionality exposed by the data-analytic layer with an industrial asset, developers can create a digital twin, a virtual representation of that asset. An extension of this work is to integrate different interfaces with the digital twin. GE Digital has demonstrated an extended demo of a digital twin (https://youtu.be/2dCz3oL2rTw); customers can ask the digital twin questions about its performance and potential issues in natural language and receive answers back in natural language. Moreover, customers can interact with the digital twin through augmented reality (AR) devices such as the Microsoft HoloLens and obtain a 3D view of an asset (for example, a steam turbine) to analyze its internal parts.

The Industry 4.0 application-building approaches are divided into two broad categories:

• Rapid application development (RAD). To address the long-development-time problem, RAD tools provide abstractions that hide low-level programming details. For example, Node-RED (https://nodered.org) offers a node to read data from an OPC-UA server. Similarly, Kura Wires (www.eclipse.org/kura) offers visual programming constructs to acquire data using industrial protocols such as Modbus and OPC-UA (the sketch after this list shows the kind of call such tools wrap).

• Cloud-based tools. These tools offer services that implement common application-development functions, such as connecting industrial assets to the cloud, data storage, and data visualization. Developers can configure these tools and develop applications very rapidly. Various vendors have started offering cloud-based tools for building Industry 4.0 applications. For instance, Amazon Lex (https://aws.amazon.com/lex) offers a service for building conversational interfaces into any application using text and voice. Microsoft Azure Accelerators (https://bit.ly/2P1qEan) offer preconfigured and customizable solutions. Developers can deploy these solutions in minutes, connect factory-floor assets, and deploy Industry 4.0 solutions such as remote monitoring, connected factory, and predictive maintenance.
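As a rough illustration of the abstraction such tools wrap, a generic OPC-UA client library reduces reading a machine value to a few lines. This sketch assumes the community python-opcua package; the endpoint and node id are hypothetical:

    from opcua import Client  # community python-opcua package

    # Hypothetical endpoint and node id; real projects browse the server's
    # address space or consult vendor documentation to locate node ids.
    client = Client("opc.tcp://gateway.local:4840")
    client.connect()
    try:
        temperature = client.get_node("ns=2;i=1234").get_value()
        print("machine temperature:", temperature)
    finally:
        client.disconnect()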

SWETI PLATFORM COMPONENTS
We continue to leverage our existing tools and middleware to build Industry 4.0 applications. We plan to use some open source tools for Industry 4.0 (https://bit.ly/2gIbVow) from the Eclipse Foundation. Here, we present our existing open source tools for building Industry 4.0 applications.

IoTSuite
IoTSuite (https://bit.ly/2M9FldX) is a framework for prototyping IoT applications that makes application development easier by hiding development-related complexity from developers. It takes platform-independent high-level specifications as input, parses them, and generates platform-specific code, resulting in a distributed software system collaboratively hosted by heterogeneous IoT devices. The high-level specifications describe sensors, actuators, storage devices, and computational components, as well as deployment properties of devices. Thus, developers need not concern themselves with the platform- and runtime-specific aspects of development. The key characteristic of IoTSuite that makes it suitable for our Industry 4.0 projects is its flexibility:

• It can generate code for different target programming languages (for example, C, C++, Python, and Java) because each code generator is exposed as a plug-in. To support a new target language, IoTSuite developers simply write a plug-in that generates code in that language (see the sketch after this list).

• It can plug in different runtime systems (for example, MQTT, CoAP, and OPC-UA) because it exposes well-defined interfaces for runtime systems. IoTSuite developers simply implement the runtime-specific interfaces to plug in a target runtime system.
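The plug-in idea can be sketched as follows; this is illustrative Python only, since the article does not show IoTSuite's actual interfaces, so the names here are hypothetical:

    from abc import ABC, abstractmethod

    class CodeGenerator(ABC):
        @abstractmethod
        def generate(self, component: dict) -> str:
            """Emit target-language source for one computational component."""

    class PythonGenerator(CodeGenerator):
        def generate(self, component: dict) -> str:
            # A real generator would translate the full high-level spec.
            return f"def {component['name']}(inputs):\n    pass\n"

    GENERATORS = {"python": PythonGenerator()}  # new languages plug in here
    print(GENERATORS["python"].generate({"name": "temperature_filter"}))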


This framework has been used for industrial-grade devices such as ABB's RIO 600 as well as popular devices such as the Raspberry Pi, Arduino, and Android devices. The framework's current version integrates standards such as OPC-UA, MQTT, CoAP, and WebSocket, and generates code for targets such as Node.js, Android, and Java.

Machine-to-Machine Measurement
The Machine-to-Machine Measurement (M3) framework (https://bit.ly/2OWMWdk) is intended to help build cross-domain IoT applications. It uses semantic technologies to achieve interoperability among heterogeneous IoT systems. Reasoning over semantically annotated data produced by IoT devices can help create user suggestions. For example, a body temperature of 38 degrees Celsius could be associated with a naturopathy application that suggests home remedies when a constant high fever is sensed. This framework uses LOD, LOR, Linked Open Vocabularies (LOV), and LOS to enhance interoperability and derive meaningful knowledge from data.6

The ACEIS middleware (https://bit.ly/2OYQtYA) contains a set of tools designed for IoT data analytics and uses Semantic Web technologies to build various components, including automated streaming-data discovery, on-the-fly integration, event detection, and context-aware decision support systems.8 This middleware can be used to build smart applications for Industry 4.0.

CONCLUSION
Industry 4.0 is an emerging area of research. In this article, we described the example of the SWeTI platform, which combines Semantic Web technologies, AI, and data analytics to support building smart IoT applications for Industry 4.0. We presented a set of realistic use-case scenarios to advocate for the SWeTI platform.

ACKNOWLEDGMENTS
We acknowledge the fruitful contributions of the SWeTI workshop (https://swetiworkshop.wordpress.com) participants and contributors. A productive discussion that concluded the workshop inspired some of the ideas presented in this article. This work is partially funded by SFI under grants SFI/16/RC/3918 and SFI/12/RC/2289.

REFERENCES
1. I. Grangel-González et al., "The Industry 4.0 Standards Landscape from a Semantic Integration Perspective," Proc. 22nd IEEE Int'l Conf. Emerging Technologies and Factory Automation (ETFA), 2017, pp. 1–8.

2. A. Sheth, “Internet of Things to Smart IoT Through Semantic, Cognitive, and Perceptual Computing,” IEEE Intelligent Systems, vol. 31, no. 2, 2016, pp. 108–112.

3. N. Petersen et al., “Monitoring and Automating Factories Using Semantic Models,” Proc. 6th Joint Int’l Conf. Semantic Technology (JIST), 2016, pp. 100–115.

4. P. Patel, M.I. Ali, and A. Sheth, “On Using the Intelligent Edge for IoT Analytics,” IEEE Intelligent Systems, vol. 32, no. 5, 2017, pp. 64–69.

5. D. Miller, “Blockchain and the Internet of Things in the Industrial Sector,” IT Professional, vol. 20, no. 3, 2018, pp. 15–18.

6. A. Gyrard et al., “Building the Web of Knowledge with Smart IoT Applications,” IEEE Intelligent Systems, vol. 31, no. 5, 2016, pp. 83–88.

7. M.S. Mahdavinejad et al., “Machine Learning for Internet of Things Data Analysis: A Survey,” Digital Communications and Networks, vol. 4, no. 3, 2017, pp. 161–175.

8. F. Gao et al., “Automated Discovery and Integration of Semantic Urban Data Streams: The ACEIS Middleware,” Future Generation Computer Systems, vol. 76, 2017.



ABOUT THE AUTHORS
Pankesh Patel is a senior research scientist at Fraunhofer USA's Center for Experimental Software Engineering (CESE). His current focus is the implementation of Industry 4.0 techniques and methodologies in commercial environments. Contact him at [email protected].

Muhammad Intizar Ali is an adjunct lecturer, research fellow, and research unit leader of the Reasoning, Querying, and IoT Data Analytics Unit at the Insight Centre for Data Analytics, National University of Ireland. His research interests include the Semantic Web, the Internet of Things, and data analytics. Contact him at [email protected].

Amit Sheth is the LexisNexis Ohio Eminent Scholar and executive director of the Ohio Center of Excellence in Knowledge-Enabled Computing (Kno.e.sis), and a Fellow of IEEE and AAAI. Contact him at [email protected].

This article originally appeared in IEEE Intelligent Systems, vol. 33, no. 4, 2018.



FROM THE EDITORS

Artificial Intelligence and IT Professionals

Sunil Mithas, Muma College of Business at the University of South Florida
Thomas Kude, ESSEC Business School
Jonathan Whitaker, University of Richmond Robins School of Business

As "self-programming techniques" manifest in the form of artificial intelligence (AI), many are wondering how AI will affect IT professionals. For example, some predict that AI could reduce the number of jobs for software developers by 70 percent in India, which accounts for 65 percent of global IT offshore work and 40 percent of IT-enabled business-process work.1

However, such dire predictions are not new. It is helpful to recall a similar prediction almost 60 years ago, when Herbert Simon, a Nobel Prize winner sometimes called "the founding father of AI," predicted that "self-programming techniques" would lead to the extinction of the computer programming occupation by 1985. Simon noted:2

“…we can dismiss the notion that computer programers [sic] will become a powerful elite in the automated corporation. It is far more likely that the programing occupation will be-come extinct (through the further development of self-programing techniques) than that it will become all powerful. More and more, computers will program themselves….”

While massive industrial and technical developments—including personal computers in the 1980s; the World Wide Web in the 1990s; outsourcing and offshoring in the 2000s; and social media, mobile computing, and cloud computing in the 2010s—created some peaks and valleys, the computer programming occupation has continued its inexorable growth, belying the initial pessimism.

Rather than attempt a blunt prediction of future decades, we approach the question of how AI will affect IT professionals by first identifying the factors that influence the demand for software programmers, then discussing how these factors relate to AI, and finally articulating the likely impact of AI on IT professionals.

FACTORS THAT INFLUENCE THE DEMAND FOR PROGRAMMERS AND IT PROFESSIONALS
We begin by delving into what software programmers do and how those activities are affected by technical developments. To "program" is to develop a series of instructions or operations to be performed by a mechanism such as a computer. The personal computer revolution of the 1980s and the advent of the World Wide Web in the 1990s greatly increased the information intensity of industrial activity by allowing numerous occupational tasks to be codified and standardized.3 The ability to access world-class intra-firm efficiencies through the personal computer and inter-firm efficiencies through the World Wide Web drove demand for software such as enterprise resource planning (ERP) software. In turn, this demand for software accelerated the number of computer science degrees during the mid-1980s and late 1990s, as shown in Figure 1. In this way, the codification and standardization enabled by IT created significant new demand for IT and the software programming profession during the 1980s and 1990s.

Figure 1. Degrees in computer and information sciences conferred by US postsecondary institutions. (Source: Digest of Education Statistics, 2016, US National Center for Education Statistics, https://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2017094)

While the increase of outsourcing and the advent of offshoring during the 2000s might not have changed the overall demand for software programming, they certainly shifted and reallocated that demand across firms and geographies. Outsourcing and offshoring of software development and other occupations during the past two decades were enabled by higher levels of modularization. Modularization is the decomposition of a product or service into components such that the components can be performed independently by separate people in different firms or geographies and later be reintegrated. We find evidence of the impact of modularization on software programmers in developed economies in the relatively flat level of employment for software programmers in the US (see Figure 2) and for information and communication technology (ICT) specialists in the European Union during the 2000s. Meanwhile, India's National Association of Software and Service Companies (NASSCOM) reports that the number of IT and business process outsourcing (BPO) professionals in India increased almost ten-fold, from 430,000 in 2001 to 3,860,000 in 2017.

To place these figures in context, the research firm IDC estimates that there were 11,000,000 global software development professionals in 2014. Combining this IDC estimate with the US Census Bureau data in Figure 2 and the NASSCOM data above suggests that about 15 percent of software development professionals are in the US and about 30 percent are in India.

Figure 1 shows that the number of bachelor's degrees in computer and information science declined by 26 percent from 2004-2005 to 2009-2010, but then increased sharply from 2009-2010 to reach an all-time high in 2014-2015. The sharp decline in computer and information science degrees might have been due to concerns about offshoring, because we do not see a decline in two related fields (business and engineering/engineering technologies) from 2004-2005 to 2009-2010: the number of bachelor's degrees in business increased by 15 percent, and the number of bachelor's degrees in engineering and engineering technologies increased by 12 percent.


Note that the number of computer and information science bachelor's degrees surged by 50 percent from 2009-2010 to 2014-2015 as concerns about offshoring subsided, an increase far greater than the 2 percent increase in business degrees and the 30 percent increase in engineering/engineering technologies degrees during the same timeframe.

It is important to note that the impacts of offshoring vary for different types of IT occupations. For example, in our related research we found that information-intensive and high-skill occupations experienced higher employment growth, despite a slight decline in salary growth, in the US from 2000-2004, suggesting that many information-intensive service occupations have a tacit component that makes them more difficult to relocate offshore. In one of our research papers, we note that total employment for computer and information systems managers (a more tacit and less codifiable occupation) increased 20 percent, from 283,480 in 2000 to 341,250 in 2015, while wages increased 76 percent, from $80,250 to $141,000, during the same timeframe. In contrast, employment for computer programmers (a more codifiable occupation) declined 45 percent from 2000 to 2015, and wages for computer programmers increased 38 percent from 2000 to 2015, only half the rate of increase for manager positions. More recently, firms are using crowdsourcing or micro-sourcing through platforms such as Amazon Mechanical Turk or upwork.com to outsource or offshore some activities. However, the extent to which these platforms will negatively impact the work of software programmers that involves complex workflows remains debatable.4

Figure 2. Computer programmer employment and wages in the US. (Sources: 1970 data from US Census Bureau supplementary report number PC(S1)-32, https://www.census.gov/library/publications/1973/dec/population-pc-s1-32.html; 1980 data from US Census Bureau supplementary report number PC80(S1)-8, https://www.census.gov/content/dam/Census/library/publications/1983/demo/pc80-s1-8.pdf; 1987-1996 data from US General Accounting Office report GAO/HEHS-98-159R, https://www.gao.gov/products/HEHS-98-159R; 1997-2017 data from US Bureau of Labor Statistics Occupational Employment Statistics, https://www.bls.gov/oes/tables.htm)

In addition to the offshoring of information-intensive activities, another factor that facilitated the disaggregation of business processes is the global movement of labor. The cross-border movement of IT professionals continues to attract significant debate on immigration, visa issues, and the employment and wages of IT professionals in developed economies.5 Even firms that offshored and outsourced realized the limits of outsourcing, and progressive firms kept at least some critical IT capabilities and programmers onshore and in-house for strategic reasons.

The above findings in the context of technological and organizational developments during the 1980s, 1990s, and 2000s can inform the discussion of "extinction" or "substitution" for the software programming occupation, because activities that can be codified, standardized, and modularized are also more likely to be automated through AI.

While the modularization of software development has reduced the complexity of individual activities, complexity is increased by the need to coordinate work across software teams and integrate individual modules to create a product that is Apple-simple, Google-fast, and SAP-reliable at the same time.6 The complexity of software programming jobs has also increased due to changes in system development methodologies and the rise of agile methods, which call for closer collaboration among software developers and customers, integrating design thinking and related approaches.7

For example, given the difficulty of eliciting precise requirements from clients upfront, agile software development insists on constant customer feedback and collaboration, which might be difficult to achieve in a disaggregated work mode.8 Because of the need for innovation and closer collaboration between software developers and users, firms are realizing the value of investing in their internal technical capabilities and digital transformation by bringing more software development in-house, sometimes moving toward a hybrid model that includes both on-premise and cloud computing.9

The foregoing discussion suggests that Simon's 1960 prediction about computer programmers becoming extinct needs to be seen in the context of major industrial and technical developments. Simon made his prediction before the advent of the personal computer and the World Wide Web, before changes in software development methodologies and the role of IT departments in firms, and before many other wider trends in technology, business, and society.10 As part of these developments, the demand for computer programmers has increased (see Figure 2), and the IT profession, which consisted mostly of code development in the 1960s, has now diversified into many different job descriptions and responsibilities.11 For example, in addition to computer programmers, software developers, and web developers, current US Bureau of Labor Statistics computer occupations include information security analysts, network and computer systems administrators, computer network architects, computer user support specialists, and computer network support specialists.

SOFTWARE DEVELOPMENT IN THE AGE OF ARTIFICIAL INTELLIGENCE
While Simon wrote that "more and more, computers will program themselves" during a time of great anticipation for AI, this anticipation did not come to fruition at that time. Now that we are again at a time of enthusiasm based on recent advances in AI and machine learning, there is a need to take a more thoughtful perspective on how the factors discussed above relate to AI, and on how AI is likely to influence the software development profession.

We consider two different roles of AI in software development: (1) AI as a tool to program software, and (2) AI as the software itself, sometimes referred to as Software 2.0.12 Making some cautious but informed predictions, it is likely that both of these roles will be relevant for software development in the future and that there will be a place for human software developers in both.

The first role, AI as a tool to program software, means that AI directly writes program code or indirectly helps human programmers write program code, in the sense of instructions for computers. Consequently, the tasks of human software developers will follow the general trajectory of automation and outsourcing: high-level tasks carried out by human programmers will move to even higher levels of value creation, while lower-level programming tasks will increasingly be performed by AI.13 This is in line with earlier conjectures that emerging technologies can destroy some jobs; in this case, AI will replace jobs that involve lower-level programming. Thus, AI will substitute for humans by simplifying the entire job (robots replacing workers) or substituting some activities within a job that are amenable to rule-based logic (similar to automated teller machines taking over some functions of a human teller). These trends have been underway for some time and are already visible across industries.

However, technologies also create new jobs (for example, data scientists), change the mix of jobs in the economy, and alter the nature of activities within jobs.14 For example, AI might complement humans in jobs that require pattern recognition or case-based reasoning. In the case of software development, recent discussions suggest that AI assistance might help human programmers avoid errors and strategic mistakes when coding.15 For example, AI could act as a pair-programming partner, reducing the resource needs for the established agile practice of pair programming. AI could also be useful in test-driven development, where humans would focus on writing test cases and AI would create code that satisfies the tests.15 In this way, software development would be conducted through human-machine interaction.16 Going beyond software development, other occupations that are menial and/or prone to error, and therefore good candidates for AI-enabled displacement, include cashiers, laboratory technicians, accountants, auditors, and tax preparers.
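To make that division of labor concrete, consider a sketch in which the human contribution is the test and the implementation stands in for AI-generated code; this is illustrative only, and the function name and invoice format are hypothetical:

    # Human-written specification, expressed as a pytest-style test case.
    def test_parse_invoice_total():
        assert parse_invoice_total("Total: $1234.50") == 1234.50

    # Stand-in for an AI-generated implementation that satisfies the test.
    def parse_invoice_total(line: str) -> float:
        amount = line.rsplit(":", 1)[1].strip().lstrip("$")
        return float(amount)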

The second role, AI as the software itself, suggests that we would not use traditional programming code, in the sense of instructions for what the computer should do, but would replace program code with AI (Software 2.0).12 However, we believe that traditional program code will continue to be relevant. For example, Brynjolfsson and Mitchell17 suggest that AI/Software 2.0 is particularly useful for stable tasks. Thus, traditional programming might still be needed in more dynamic environments, such as frequently changing customer requests or more exploratory research projects.

It seems feasible that some code will be replaced by AI and that new problems will be addressed through AI instead of traditional code. For example, traditional instructions could be replaced by the weights in a neural network; we see this already in translation services, speech recognition, and video gaming.12 But even in such a context, we would likely still need software developers. The work of software developers might shift away from traditional coding and toward designing and developing the architecture that brings together AI modules to solve a problem, toward development tasks related to data governance, and/or toward activities requiring judgment rather than rule-based decisions. Furthermore, the omnipresence of AI in ubiquitous or experiential computing18 will create a continuing need for software developers. Relatedly, recent efforts to create explainable AI (XAI) and to address ethical questions related to AI, such as bias and discrimination, will likely continue to require software programmers.19
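The contrast can be sketched in a few lines: the same decision expressed once as an explicit rule and once as learned weights. This is illustrative only, using scikit-learn and a made-up spam heuristic:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # "Software 1.0": an explicit rule a programmer wrote down.
    def spam_rule(num_links: int) -> bool:
        return num_links > 5

    # "Software 2.0": the same decision carried by learned weights.
    X = np.array([[0], [1], [2], [8], [10], [12]])
    y = np.array([0, 0, 0, 1, 1, 1])
    model = LogisticRegression().fit(X, y)

    print(spam_rule(9), model.predict([[9]])[0])  # -> True 1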

To further explore the potential impacts of AI on various IT occupations, Table 1 shows US Bureau of Labor Statistics 2016-2017 data for the average wage and current number of positions, and 2016-2026 projections for the growth of each occupation. These IT occupations each include at least 100,000 employees and reasonably represent the range of average wages among IT professionals.

The Bureau of Labor Statistics classifies the growth of each occupation into one of four categories: decline, average growth (5 to 9 percent), faster-than-average growth (10 to 14 percent), and much-faster-than-average growth (15 percent and up). Taking these Bureau of Labor Statistics projections at face value, the projected growth for each IT occupation provides some clues about how AI could affect various IT occupations.

The projection that the software developer (applications) and web developer occupations will grow much faster than average from 2016-2026 suggests that AI is expected to complement (not displace) traditional programming over the next decade. Similarly, the much-faster-than-average growth projection for information security analysts suggests that AI could create demand for IT professionals who can address both cybersecurity and privacy considerations when bad actors use AI capabilities to design more sophisticated attacks.20



Table 1. 10-year projected growth for various IT occupations. (Source: US Bureau of Labor Statistics Occupational Employment Statistics, https://www.bls.gov/oes/tables.htm)

Position                                      2016-2017 wages   2016-2017 employment   2016-2026 projected employment growth
Software developers (applications)            $101,790          831,000                Much faster than average (15%+)
Web developers                                $67,990           163,000                Much faster than average (15%+)
Information security analysts                 $95,510           100,000                Much faster than average (15%+)
Computer user support specialists             $50,210           637,000                Faster than average (10-14%)
Database administrators                       $87,020           120,000                Faster than average (10-14%)
Network and computer systems administrators   $81,100           391,000                Average (5-9%)
Computer network support specialists          $62,340           199,000                Average (5-9%)
Computer network architects                   $104,650          163,000                Average (5-9%)

The projection that the computer user support specialist occupation will grow faster than average also suggests a complementary role for AI over the next decade, as additional applications and functionality will draw additional users. Similarly, the need for effective management of the additional data that results from additional applications and functionality is consistent with the projection that the database administrator occupation will grow faster than average, reinforcing a complementary role for AI. However, the projection that the network and computer systems administrator, computer network support specialist, and computer network architect occupations will grow at only an average rate suggests that AI is expected to automate computation-intensive tasks such as managing the flow of network traffic. It will be interesting to examine the extent to which these projections map to reality as the capabilities of AI unfold over time and reveal the extent to which AI complements or substitutes for activities in IT occupations.

CONCLUSION
AI will bring many changes to the IT profession. While it is difficult to predict precisely how these changes will unfold, just as it was difficult for Simon to predict six decades ago, there are reasons to believe that software development will change and that, with appropriate investments in human capital, software programmers should be able to respond to changes in technologies and customer needs.21

For computer science students, we do not expect any major short-term changes in curricula, as students still need to learn the basics of computer programming. However, over time we expect that more entry-level computer programming concepts will trickle down into high school curricula and coding boot camps, and more advanced concepts such as AI and machine learning will extend beyond computer and information science degrees into other majors such as business and the natural sciences. On the whole, there are reasons to be optimistic about the future of software programmers and IT professionals, because the seamless integration of "human and computer intelligence to solve interesting and important problems that impact the future of work, organizations, and broader society" will continue the high demand for their talents and creativity.22

REFERENCES
1. V. Ganesh, "Automation to Kill 70% of IT Jobs," Hindu BusinessLine, blog, November 2017; https://www.thehindubusinessline.com/info-tech/automation-to-kill-70-of-it-jobs/article9960555.ece.

2. H.A. Simon, "The Corporation: Will It Be Managed by Machines?," Management and the Corporations, M.L. Anshen and G.L. Bach, eds., McGraw-Hill, 1960.

3. S. Mithas and J. Whitaker, “Is the World Flat or Spiky? Information Intensity, Skills and Global Service Disaggregation,” Information Systems Research, vol. 18, no. 3, 2007, pp. 237–259.

4. D. Retelny, M.S. Bernstein, and M.A. Valentine, "No Workflow Can Ever Be Enough: How Crowdsourcing Workflows Constrain Complex Work," Proc. ACM Human-Computer Interaction, 2017.

5. S. Mithas and H.C. Lucas, “Are Foreign IT Workers Cheaper? U.S. Visa Policies and Compensation of Information Technology Professionals,” Management Science, vol. 56, no. 5, 2010, pp. 745–765.

6. S. Earley et al., “From BYOD to BYOA, Phishing, and Botnets,” IT Professional, vol. 16, no. 5, 2014, pp. 16–18.

7. C.T. Schmidt et al., “How Agile Practices Influence the Performance of Software Development Teams: The Role of Shared Mental Models and Backup,” Proceedings of the 34th International Conference on Information Systems, 2014.

8. K. Schwaber and M. Beedle, Agile Software Development with Scrum, Prentice Hall, 2001.

9. J. Bennett, “Why GM Hired 8,000 Programmers,” The Wall Street Journal, blog, February 2015; http://www.wsj.com/articles/gm-built-internal-skills-to-manage-internet-sales-push-1424200731?KEYWORDS=why+GM+hired.

10. S. Mithas and F.W. McFarlan, “What Is Digital Intelligence?,” IT Professional, vol. 19, no. 4, 2017, pp. 3–6.

11. R. Moncarz, “Training for Techies: Career Preparation in Information Technology,” Occupational Outlook Quarterly, vol. 46, no. 3, 2002, pp. 38–45.

12. A. Karpathy, “Software 2.0,” Medium, blog, November 2017; https://medium.com/@karpathy/software-2-0-a64152b37c35.

13. P. Smith, “SAP Founder and CEO Say Governments Must Act on AI Challenge as Google Lays Out Core Principles,” The Australian Financial Review, June 2018; https://www.afr.com/technology/sap-founder-and-ceo-say-governments-must-act-on-ai-challenge-as-google-lays-out-core-principles-20180609-h116c4.

14. F. Levy and R.J. Murnane, The New Division of Labor: How Computers are Creating The Next Job Market, Russell Sage Foundation, 2004.

15. I. Huston, “AI Is Not the End of Software Developers: A Data Scientist’s Take on Software 2.0.,” Built to Adapt, blog, January 2018; https://builttoadapt.io/ai-is-not-the-end-of-software-developers-28d80df3c331.

16. B. McDermott, “Machines Can’t Dream,” SAP, January 2018; https://news.sap.com/2018/01/impact-of-artificial-intelligence-machines-cant-dream.

17. E. Brynjolfsson and T. Mitchell, “What Can Machine Learning Do? Workforce Implications,” Science, vol. 358, no. 6370, 2017, pp. 1530–1534.

18. Y. Yoo, “Computing in Everyday Life: A Call for Research on Experiential Computing,” MIS Quarterly, vol. 34, no. 2, 2010, pp. 213–231.

19. G. Nott, “‘Explainable Artificial Intelligence’: Cracking Open the Black Box of AI,” Computer World, April 2017.

20. I. Bojanova et al., “Cybersecurity or Privacy,” IT Professional, vol. 18, no. 5, 2016, pp. 16–17.

21. S. Murugesan, “Stay Professionally Fit, Always,” IT Professional, vol. 19, no. 6, 2017, pp. 4–7.



22. H. Jain et al., “Special Issue of Information Systems Research-Humans, Algorithms, and Augmented Intelligence: The Future of Work, Organizations, and Society,” Information Systems Research, vol. 29, no. 1, 2018, pp. 250–251.

ABOUT THE AUTHORS
Sunil Mithas is a world-class scholar and professor at the Muma College of Business at the University of South Florida. His research interests include strategies for managing innovation and excellence for corporate transformation, focusing on the role of technology and other intangibles. Mithas is the author of the books Digital Intelligence: What Every Smart Manager Must Have for Success in an Information Age (Finerplanet, 2016) and Dancing Elephants and Leaping Jaguars: How to Excel, Innovate, and Transform Your Organization the Tata Way (2014). He is a member of IT Professional's editorial board. Contact him at [email protected].

Thomas Kude is an associate professor at ESSEC Business School in France. His current research focuses on digital ecosystems, agile software development, and IT management. In his research, Kude regularly works with companies in the software industry and beyond. Kude received a PhD from the University of Mannheim in Germany, and his work has been published in renowned academic journals and presented at international conferences. Contact him at [email protected].

Jonathan Whitaker is an associate professor at the University of Richmond Robins School of Business. Prior to his academic career, he worked as a technology consultant with Price Waterhouse and A.T. Kearney. He earned an MBA from the University of Chicago and a PhD from the University of Michigan, and his research has been published in leading academic journals and profiled in the Wall Street Journal and MIT Sloan Management Review. Contact him at [email protected].

This article originally appeared in IT Professional, vol. 20, no. 5, 2018.


DEPARTMENT: Diversity and Inclusion

A Different Lens on Diversity and Inclusion: Creating Research Opportunities for Small Liberal Arts Colleges

Much of the focus on diversity originates from the characteristics of an individual—race, gender, socioeconomic status, and so on. However, when we consider the concept of diversity from another angle, what do we see? What are the fundamental reasons for considering diversity? In the world of science, technology, engineering, and mathematics (STEM) research, a magnifying glass has been placed on the inequities among researchers. We have all heard of organizations or movements promoting women and minorities to pursue STEM careers or providing support for those already in the profession. However, there is an additional facet to the idea of underrepresented groups in STEM: small liberal arts colleges. Affiliation with small liberal arts colleges might translate to decreased opportunities. Nevertheless, some programs attempt to bridge this gap.

INFLUENCES OF DIVERSITY
Standard definitions of diversity (from the perspective of protected classes of individuals) have historically centered around a person's race or ethnicity. More recently, the focus has grown to include socioeconomic status, religion, sex, and age.1 Beyond the perspective of protected classes, a new emphasis has been placed on the productivity of diverse groups.2 When a diverse group of people comes together to solve a problem, unique ideas and solutions are created. In the world of STEM, this kind of broad-minded perspective is required to tackle the multifaceted scientific issues that exist today. This is especially true when it comes to computational science and engineering, an innately multidisciplinary field.

Another aspect of diversity that is rarely explored is the individual's association with an academic institution, which can affect student and faculty opportunities.3 For example, if an undergraduate student wishes to perform medical research, she might find it easier to do so at a university that includes a medical school. Otherwise, that student might need to look for outside opportunities, perhaps as a summer intern, which could be quite competitive and present barriers such as family responsibilities, money for travel, and time commitments.4 Similarly, a member of the faculty might be looking to gather information and data to draw some conclusions about his classroom teaching style, but might not have the luxury of time to devote to it because his emphasis is on performing research in his field. Perhaps if the student and the professor had access to different resources, both could accomplish their goals within their institutions.

Wendi K. Sapp, Oak Ridge National Laboratory

Mary Ann Leung, Sustainable Horizons Institute

Editor: Mary Ann Leung, [email protected]


However, no one institution can meet everyone's needs. There is not a single university that holds all possible resources. Academic institutions focus on aspects of higher learning, which can broadly be boiled down to two categories: creating new knowledge and imparting knowledge. It is in these categories that we find the distinction between research universities and small liberal arts institutions. Granted, other academic institutions, such as community colleges, might fall into these categories too, but this article focuses on research universities and small liberal arts colleges.

Some of us might be familiar with the differences between research universities and small liberal arts colleges. Probably the most well-known difference between these two institutions is the way that faculty are expected to spend their time. At a large research university, the professors primarily focus on training graduate students, performing research, and publishing the results in journals. This kind of activity contributes knowledge. In contrast, small liberal arts professors spend more time teaching courses and mentoring students.5 Many professors engage their students in research activities, even if resources might be limited. Professors who are employed by small liberal arts colleges typically choose to do so because they enjoy teaching and having a closer relationship with students, which is made possible, in part, by the small class sizes.6 Faculty at small liberal arts colleges understand the importance of undergraduate research experiences. However, developing research projects for undergraduate students is not a simple task.

Small liberal arts colleges, despite some of the resource challenges, successfully produce graduates who will later obtain advanced degrees in research fields. In fact, Cech found evidence of this while analyzing the National Science Foundation's Survey of Earned Doctorates data.7 Cech states, “Only about 8 percent of students who attend four-year colleges or universities are enrolled in baccalaureate colleges (a category that includes national liberal arts colleges). Among the students who obtain PhDs in science, 17 percent received their undergraduate degree at a baccalaureate college. Thus, these colleges are about twice as productive as the average institution in training eventual PhDs.”8

SMALL LIBERAL ARTS COLLEGES ARE UNDERREPRESENTED
Professors, students, and staff from small liberal arts colleges might be at a disadvantage when it comes to national grants, award programs, research opportunities, and access to educational tools and resources. When applications are made to nationally competitive programs, applications that are affiliated with larger institutions could appear stronger because of their lengthy list of previous scientific activities. These individuals and groups from larger institutions have probably had more opportunities historically than applicants from small liberal arts colleges.

At a small liberal arts college in Frederick, Maryland, Dr. Xinlian Liu enjoys the mentoring relationships he can build with his students. Like many professors who decide to work at small liberal arts colleges, he finds that creating opportunities for his students and himself is often tricky. The time allotted to teaching and spending time with students can reduce the amount of time left for research. This, paired with the added difficulty of accessing supercomputers and other computing resources, causes Xinlian to fear that he will only be able to produce students who can read about technology in class but never get their hands on it. He enjoys teaching courses on high-performance computing (HPC) but states “there are a [lack of] resources available to support curricula such as [HPC].” Upon considering the field of HPC, he feels that students who attend small liberal arts colleges are “among the least represented in the field.” To improve his students' awareness of computational education, Dr. Liu volunteers as an XSEDE Campus Champion, has founded an Nvidia CUDA Education Center, and mentors Blue Waters undergraduate interns.


OPPORTUNITIES TO ENGAGE WITH SMALL LIBERAL ARTS COLLEGES
Faculty at small liberal arts colleges and at large research institutions once performed the same duties as graduate students. These individuals begin with similar experiences, but as they advance, their strengths, accomplishments, and preferences often shape their careers. After graduation, some individuals might pursue positions that emphasize teaching and administration, while others might seek out posts that emphasize creating new knowledge and significant grant-writing. Ultimately, faculty, regardless of the institution at which they work, have a similar skill set. Additionally, there is an inherent curiosity that often accompanies those involved in STEM fields. This curiosity does not disappear merely because an individual no longer performs research.

Encouraging faculty from small liberal arts colleges to pursue research collaboration opportunities can help improve the quality of research projects by providing fresh perspectives and new skills. Additionally, these professors can include their students in the research activities.

Dr. Liu's efforts to encourage students to consider careers in HPC had a lasting effect on two of his students in the summer of 2017. The Sustainable Research Pathways (SRP) program—a partnership between the Sustainable Horizons Institute (SHI) and the Lawrence Berkeley National Laboratory (LBNL)—was looking for faculty and students from colleges and universities to be matched with researchers at LBNL. Dr. Liu jumped at the opportunity to pursue research collaborations with scientists at LBNL.

During the program's first three years, it cultivated more than 30 new research collaborations. The program begins with a matching workshop. At this one-day event, faculty showcase their research projects during a poster session while researchers from the laboratories deliver presentations about their work. Afterward, the researchers and faculty participate in speed meetings to explore matches. Once matches are made, plans are hatched to begin the summer research experience. Overall, 87 percent of workshop attendees match with staff, indicating a high degree of mutual interest in collaboration.

Among past workshop attendees, 9 percent of faculty have come from liberal arts colleges (see Figure 1). “The goal of the SRP Program is to not only support faculty in building research collaborations, but also to develop a pipeline of diverse candidates through students who participate.”9 During the summer research program, students accompany faculty who have been matched. So far, 40 students have participated in the SRP program.

Figure 1. The composition of the SRP workshop attendees’ home institutions.



In 2017, after the initial matching event, Dr. Liu learned that he matched with researcher Dr. Silvia Crivelli, and that two students would accompany him to LBNL that summer to work on the challenging problem of applying deep learning algorithms to 3D structures.

“Throughout the summer, Dr. Crivelli involved me in several projects she was working on,” Dr. Liu said of the intensive 10-week summer research program. He is sure that the experience has revived his aptitude for performing cutting-edge research. Dr. Crivelli was glad to have Xinlian's machine learning and convolutional neural networks expertise in the lab. Furthermore, Dr. Crivelli observed of Xinlian that “he seemed to have a passion for research and he doesn't have many opportunities to pursue it as a professor in a small liberal arts college. I thought the lab internship would be a great opportunity for him to get back to research.” She was optimistic that the summer would be productive and fruitful—and she was right. After a summer of rigorous research activities, the team's hard work paid off when their poster was accepted to the Supercomputing Conference (SC17).

Tom Corcoran, one of Dr. Liu's students, says this about the success: “Having my hard work publicly validated and recognized by being accepted into such a prestigious conference will undoubtedly improve my chances of being able to conduct similar work in the future, and to be accepted into a strong graduate program at a good school.” Rafael Zamora-Resendiz, the second student to accompany Dr. Liu to LBNL, also understands the implications of his involvement in a program such as SRP. He believes the SRP program helped to provide a platform for small liberal arts colleges: “I hope our project communicates that students from smaller institutions, like Hood College, are very capable of producing competitive work.”

Both students are planning to pursue their heightened interest in HPC in graduate school, a trajectory that neither of them would have considered before this experience. Tom and Rafael are first-generation scholars who lacked confidence that their applications to graduate school would be competitive. In fact, more than 50 percent of the SRP applicants' students were first-generation scholars. For most first-generation students, attending college and obtaining a bachelor's degree can be daunting; indeed, only 24 percent of first-generation students graduate with a degree after eight years.10 But now, with a prestigious poster acceptance to SC17 and advanced research experience, not only are Rafael and Tom completing their bachelor's degrees, they are well on their way to graduate school and rewarding careers.

CONCLUSION
After experiencing the success of a program such as SRP, Dr. Liu maintains a hopeful and enthusiastic attitude toward the ongoing research collaborations that resulted from the program. He is excited about “continued work with Dr. Crivelli for years to come.” The SRP workshop events helped to ensure a good match between participants. Xinlian believes this was “the foundation of everything else.” Research collaborations with faculty at small liberal arts institutions will have a lasting influence on the students who take part, leading to greater diversity in STEM research fields.

The impact that the SRP Program has had on Dr. Liu and his students is a lasting one. In addition to the direct benefit to the researchers in SRP, visiting faculty often extend the impact of the program by using their research experience in the classroom at their home institutions, co-publishing papers, presenting findings, and continuing their collaborations at the laboratory during subsequent summers with new groups of students.

ACKNOWLEDGMENTS
The SRP Program is funded through Lawrence Berkeley National Laboratory by the Office of Advanced Scientific Computing Research, US Department of Energy, under contract no. DE-AC02-05CH11231.


REFERENCES
1. T. Haring-Smith, “Broadening Our Definition of Diversity,” Lib. Educ., vol. 98, 2012, pp. 6–13.
2. E. Ostrom, “The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies,” Perspect. Polit., vol. 6, 2008, pp. 828–829.
3. M. Clarke, “The Impact of Higher Education Rankings on Student Access, Choice, and Opportunity,” High. Educ. Eur., vol. 32, 2007, pp. 59–70.
4. M. Falasca, “Barriers to Adult Learning: Bridging the Gap,” Aust. J. Adult Learn., vol. 51, 2011, pp. 583–590.
5. K.A. Feldman, “Class Size and College Students' Evaluations of Teachers and Courses: A Closer Look,” Res. High. Educ., vol. 21, 1984, pp. 45–116.
6. I. Puri, “Don't Overlook Liberal Arts Schools: Small Class Size and Access to Faculty,” Huffington Post, 21 June 2016; www.huffingtonpost.com/ishan-puri/dont-overlook-liberal-art_b_10574942.html.
7. T.B. Hoffer et al., “Doctorate Recipients from United States Universities: Summary Report 2000,” Survey of Earned Doctorates, report, Nat'l Science Foundation, 2001.
8. T.R. Cech, “Science at Liberal Arts Colleges: A Better Education?,” Daedalus, vol. 128, 1999, pp. 195–216.
9. H. Haskell and M.A. Leung, “Changing Lives and Building a Pipeline through Sustainable Research Pathways,” Sustainable Horizons Institute, 2017; shinstitute.org/changing-lives-and-building-a-pipeline-through-srp.
10. X. Chen and C.D. Carroll, “First-Generation Students in Postsecondary Education: A Look at Their College Transcripts,” Nat'l Center for Education Statistics, government report NCES 2005–171, July 2005.

ABOUT THE AUTHORS
Wendi K. Sapp is a contracted systems engineer and technical writer in the National Center for Computational Sciences at Oak Ridge National Laboratory. Contact her at [email protected].

Mary Ann Leung is president and founder of the Sustainable Horizons Institute. Contact her at [email protected].


This article originally appeared in Computing in Science & Engineering, vol. 20, no. 4, 2018.


THE IOT CONNECTION

P2PLoc: Peer-to-Peer Localization of Fast-Moving Entities
Ashutosh Dhekne, Umberto J. Ravaioli, and Romit Roy Choudhury, University of Illinois at Urbana-Champaign

P2PLoc envisions wearable Internet of Things devices that compute the relative positions of each user, resulting in a topology or configuration of mobile users that can be tracked in real time for group-motion applications.

FROM THE EDITOR
Precise indoor location systems are coming of age, and enabling context-aware operations will result in a wide range of effective Internet of Things (IoT) applications.1 However, for many of these systems to operate, they need location reference points, or some fixed nodes of known location. This article makes the case that there are many peer-to-peer location applications that only require the relative positions of the mobile nodes, and explores the issues that need to be considered to accurately determine their topology. —Roy Want, Google; [email protected]

Localization has been extensively studied in various contexts, both indoor and outdoor. Yet, emerging applications continue to ask for new requirements that challenge existing localization mechanisms. For instance, a team of soccer or basketball players might seek their precise positions during a game—this is valuable to coaching and sports analytics applications. As another example, a swarm of wirelessly connected IoT drones carrying chemical probes might need to fly in precise formations to analyze water samples from a polluted lake while constantly reporting their sensor readings and each drone's relative position in the swarm to a central aggregator. Similarly, an army troop on the ground or a group of first responders


in a disaster-relief effort could benefit from the ability to continuously visualize their group's configuration. However, GPS might not be adequately precise or even available on a battlefield, and environmental infrastructure might not be available, for example, on a basketball court.

As a solution, P2PLoc (peer-to-peer localization) envisions wearable IoT devices on users' arms or wrists that exchange wireless messages to ultimately compute the relative positions of each group member. The outcome is a topology or configuration of mobile users that can be tracked in real time. We believe this can be a valuable primitive to various group-motion applications.

Existing localization approaches are usually based on creating a large database of received signal strength from a few fixed access points and then matching the measured signal strength to report the approximate location of the user. However, in the sports or army contexts, we might not have the liberty to create such a fingerprint of the entire arena. Instead, we propose to use the time wireless signals take to travel between two devices as a measure of the distance between them. The precision of this time measurement directly correlates with the bandwidth of the wireless signal used. Therefore, we use ultra-wideband (UWB) radios with a 1 GHz bandwidth. When used with a packet-handshake protocol called two-way ranging (TWR), today's UWB platforms can estimate the distance between two devices with about 10 cm precision without clock synchronization.

A group of players or military personnel can be abstracted as a network topology (see Figure 1), with each node representing an individual and the edges representing the distance between them. Given n nodes, TWR performed between every pair generates the distance of each edge in the network. These distances naturally over-determine the system, producing the relative topology graph (relative because the produced topology could be a rotated version of the true topology) of all the participating devices. Our goal is to localize dynamic nodes whose locations change over time. Tracking mobile nodes requires fast collection time to prevent measurements from becoming too stale. Collecting each pairwise distance is not possible because each TWR handshake consumes time and there are O(n²) pairwise distances to be measured, scaling poorly as the number of devices, n, grows. Of course, instead of over-determining the system through n² measurements, we can still solve the topology with ⌈3n/2⌉ pairwise measurements. This would significantly reduce the total localization time. Thus, the essential question for this approach comes down to which O(n) pairwise distance measurements will result in fast and accurate tracking of the topology.
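To make the topology-solving step concrete, the sketch below recovers relative 2D coordinates from a full matrix of noisy pairwise distances using classical multidimensional scaling. This is only an illustration of the over-determined case, not the authors' solver; the player layout and the 10-cm noise figure (taken from the TWR precision quoted above) are assumptions.

```python
# Sketch: recovering a relative topology (up to rotation, reflection, and
# translation) from pairwise TWR distances via classical multidimensional
# scaling (MDS). Illustrative only; not the P2PLoc solver.
import numpy as np

def relative_topology(dist):
    """Return 2D relative coordinates from an n x n pairwise distance matrix."""
    n = dist.shape[0]
    d2 = dist ** 2
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ d2 @ j                    # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[-2:][::-1]        # two largest eigenvalues -> 2D
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Five players on a court, ranged with ~10 cm UWB noise (per the article).
rng = np.random.default_rng(7)
truth = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 1.5]], dtype=float)
dist = np.linalg.norm(truth[:, None] - truth[None, :], axis=2)
dist += rng.normal(0, 0.10, dist.shape)
np.fill_diagonal(dist, 0.0)
dist = (dist + dist.T) / 2                   # symmetrize the noisy measurements
print(relative_topology(dist))               # coordinates up to rigid motion
```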

There are three main factors to consider when choosing the pairs:

› the total number of wireless message exchanges while executing the TWR protocol;
› the geometric dilution of precision, which changes with the topology; and
› occlusions caused by humans that make some links unusable.

TWR PROTOCOL OPTIMIZATIONS
Figure 2a shows the original TWR protocol. It is simply a ping-pong of messages with precisely measured timings at both participating devices. It comprises three time-stamped messages exchanged between a device pair—an initiator and a responder. We obtain the time of flight by averaging the difference between the two round-trip times and the two turn-around times. For a group of devices, one would require all three messages to be exchanged between each selected device pair. This would mean 3 · d messages need to be exchanged to obtain d distance measurements (see Figure 2b). Also, we need at least three distance measurements for every node to uniquely solve a topology. By carefully picking the edges, it is possible to obtain a solution in ⌈3n/2⌉ distance measurements for n nodes. Figure 2b shows one such careful choice.
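The time-of-flight arithmetic described above is simple enough to state in a few lines. The following sketch follows the article's averaging rule; the variable names and example timestamps are ours, and real UWB hardware reports these quantities in device ticks rather than seconds.

```python
# Sketch of the distance math behind two-way ranging (TWR): time of flight
# is the average of (round-trip time minus turn-around time) over the two
# exchanges, halved, per the article's description.
C = 299_702_547.0  # approximate speed of light in air, m/s

def twr_distance(rtt1, tat1, rtt2, tat2):
    """Distance from the POLL/RESP/FINAL handshake.
    rtt*: round-trip times measured at each end (seconds).
    tat*: turn-around (response) delays at each end (seconds)."""
    tof = ((rtt1 - tat1) + (rtt2 - tat2)) / 4.0   # each difference is 2 * ToF
    return tof * C

# Two devices 10 m apart (ToF ~ 33 ns); the turn-around delays need not be
# equal, and averaging both directions helps cancel clock-rate offsets.
tof = 10.0 / C
print(twr_distance(2 * tof + 100e-6, 100e-6, 2 * tof + 150e-6, 150e-6))  # ~10.0
```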

The original TWR protocol is designed for one-to-one distance measurement. However, the broadcast nature of wireless channels permits one-to-many operations, providing an opportunity to reduce the total number of messages exchanged. As shown in Figure 2, the initiator's POLL message can be heard by all other nodes. They can then take turns to send the RESP message back to the initiator. A single FINAL message then suffices for all the responders to calculate their distance from the initiator. A further optimization is possible where all initiators take turns to send their POLL messages, and the responders take turns

Figure 1. A group of players abstracted as a graph. Nodes represent the players and edges denote each distance measurement performed.


to send only one RESP message each. This is followed by the initiators sending FINAL messages. Just three initiators are required to solve the topology. This optimized ranging protocol and the resultant set of distance measurements are shown in Figures 2c and 2d, respectively. While this protocol is suboptimal in the number of distances measured, it is optimal in the number of wireless messages exchanged. Ultimately, minimizing the message exchanges speeds up localization.
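The counts behind this trade-off are easy to tabulate. The sketch below reproduces the numbers annotated in Figure 2 for a seven-node group, under our reading of the optimized protocol (three broadcast POLLs, one broadcast RESP per node, three broadcast FINALs); treat the accounting as an estimate rather than the authors' exact schedule.

```python
# Back-of-the-envelope message/measurement counts for the three ranging
# schemes in Figure 2. n = 7 reproduces the figure's annotations.
import math

def per_pair_all(n):          # every pair ranged with 3-message TWR
    d = n * (n - 1) // 2
    return d, 3 * d

def per_pair_minimal(n):      # ceil(3n/2) carefully chosen edges (Figure 2b)
    d = math.ceil(3 * n / 2)
    return d, 3 * d

def broadcast_optimized(n):   # 3 initiators; POLLs and FINALs are broadcast,
    msgs = 3 + n + 3          # and every node sends one RESP (Figures 2c/2d)
    d = 3 * (n - 3) + 3       # each responder ranges to 3 initiators, plus
    return d, msgs            # the 3 initiator-to-initiator edges

for scheme in (per_pair_all, per_pair_minimal, broadcast_optimized):
    print(scheme.__name__, scheme(7))  # -> (21, 63), (11, 33), (15, 13)
```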

DILUTION OF PRECISION
The optimized TWR protocol provides an upper bound on the system update rate for n nodes. However, it makes no claims about the accuracy obtained through a particular choice of three initiator nodes. If all distance measurements were precise, this choice would not matter. However, if distance measurements have even small errors, such as those introduced by hardware noise, then the localization accuracy can be severely affected by the choice of initiator nodes. The dark overlapping area in Figure 3 shows the region of confusion—node T could be anywhere within this region. Observe how the geometry of the initiators (labeled A1, A2, and A3) affects this dark region.

This problem, called geometric dilution of precision (DoP), occurs in GPS receivers as well. GPS DoP solutions2,3 should be applicable in this situation. However, there is a key difference in the way GPS estimates DoP and what would be required in a short-range system like ours. GPS only uses the angles between vectors formed by the initiator positions but ignores the magnitude. While this works reasonably well for GPS (because of the very large distance between Earth and the satellites), ignoring the magnitude can lead to poor choices in short-range systems. Instead, we calculate the estimated localization error directly and select the best initiators.
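One way to calculate the estimated localization error directly is to simulate noisy ranging for each candidate initiator triple and keep the triple whose least-squares solution lands closest to the truth. The sketch below does exactly that; the node layout, noise level, and linearized solver are our assumptions, not the P2PLoc implementation.

```python
# Sketch: choosing initiators by direct error estimation rather than
# angle-only GPS-style DoP. Illustrative assumptions throughout.
import itertools
import numpy as np

def solve(anchors, ranges):
    """Linearized trilateration: subtract the first range equation."""
    a0, r0 = anchors[0], ranges[0]
    a = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         - (ranges[1:] ** 2 - r0 ** 2))
    return np.linalg.lstsq(a, b, rcond=None)[0]

def expected_error(anchors, target, sigma=0.10, trials=200):
    """Mean position error over simulated rangings with sigma-level noise."""
    rng = np.random.default_rng(0)
    true_r = np.linalg.norm(anchors - target, axis=1)
    errs = [np.linalg.norm(solve(anchors, true_r + rng.normal(0, sigma, 3))
                           - target) for _ in range(trials)]
    return float(np.mean(errs))

nodes = np.array([[0, 0], [1, 0.2], [8, 0.5], [4, 6], [7, 7]], dtype=float)
target = np.array([4.0, 3.0])
best = min(itertools.combinations(range(len(nodes)), 3),
           key=lambda idx: expected_error(nodes[list(idx)], target))
print("best initiator triple:", best)
```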

HUMAN OCCLUSIONS
Accurate distance measurements depend on the wireless device's ability to identify the direct line-of-sight (LOS) path between two nodes. This can become challenging due to body blocking when the device is worn by humans. Non-line-of-sight (NLOS) paths can then be misinterpreted as being the first path, causing large ranging errors. Figure 4 demonstrates the impact of body blocking with a set of nodes (blue squares) arranged in a semicircle around a person wearing a UWB device on his or her arm. Distance estimates for devices blocked by the person's body are significantly scattered and erroneous (red streaks), while those obtained by non-occluded devices are more precise (green dots). Using distance estimates from occluded nodes to solve for the topology can cause severe localization errors. In a fast-moving topology, human occlusions are common, and a scheme that does not cater to such situations will fail miserably. Thus, even if DoP considerations indicate a set of initiators to be optimal, occlusions might render that choice infeasible.

Determining occlusion based on link quality between every node pair is time-consuming. Fortunately, because every device can overhear all ongoing communication, a device can

Figure 2. (a) The original two-way ranging (TWR) protocol. Each distance measurement needs three messages. (b) For triangulation, each node must have at least three edges. (c) Our modified TWR protocol to minimize the number of wireless messages exchanged in a group of nodes. (d) Only three nodes initiate TWR—a minimal number of messages exchanged ensures faster overall protocol time. (Panel annotations: 1 distance measurement from 3 messages in (a); 11 distances from 33 messages in (b); 15 distances from 13 messages in (c)/(d).)


deduce its link quality with all other devices just by listening to the channel without incurring time costs. Each device independently deduces occlusions and produces an exclusion list, which is updated at every round of the pipelined TWR protocol.
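A minimal version of such passive occlusion tracking might look like the following. The link-quality metric (first-path power relative to total received power, a common UWB NLOS indicator) and the threshold are assumptions; the article does not specify the exact heuristic.

```python
# Sketch: building an exclusion list from passively overheard packets.
# Every device already hears each POLL/RESP/FINAL on the channel, so links
# can be scored without extra airtime. Metric and threshold are assumed.
class OcclusionTracker:
    def __init__(self, threshold_db=-6.0, alpha=0.3):
        self.threshold_db = threshold_db   # below this, treat link as occluded
        self.alpha = alpha                 # exponential smoothing factor
        self.quality = {}                  # peer id -> smoothed quality (dB)

    def overheard(self, peer_id, first_path_db, total_db):
        """Update a link estimate from any packet overheard from peer_id."""
        q = first_path_db - total_db       # ~0 dB in line of sight; strongly
        if peer_id in self.quality:        # negative when a body blocks it
            q = (1 - self.alpha) * self.quality[peer_id] + self.alpha * q
        self.quality[peer_id] = q

    def exclusion_list(self):
        """Peers whose ranges should be skipped in the next TWR round."""
        return [p for p, q in self.quality.items() if q < self.threshold_db]

tracker = OcclusionTracker()
tracker.overheard("node3", first_path_db=-92.0, total_db=-80.0)  # body-blocked
tracker.overheard("node5", first_path_db=-81.0, total_db=-80.0)  # clear LOS
print(tracker.exclusion_list())  # -> ['node3']
```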

EVALUATION PLATFORM AND RESULTS
We invited 10 volunteers to wear UWB arm bands while playing basketball. The volunteers took specific positions on a basketball court, creating a topology. Each UWB node (https://www.decawave.com/products/evk1000-evaluation-kit), shown in Figure 5, ran our modified TWR protocol and chose a set of appropriate initiator nodes based on DoP and occlusions. The volunteers moved into 22 different topologies, mimicking important positions in a basketball game. Overall, the 75th percentile localization accuracy for all the volunteers across all topologies was around 0.8 m. Of course, some topologies provided poor occlusion-free choices, causing a relatively long tail. In a real game, we expect such cases to be few and short-lived. To measure the impact of occlusions alone, we repeated the game by mounting the UWB nodes on tripods. The resulting localization error stayed under 0.2 m, showing the significant impact of human occlusions.

IMPLEMENTATION IN IOT DEVICES
We have discussed specific optimizations and pitfalls in implementing a peer-to-peer relative localization scheme in the context of sports and other group activities. Whereas we used UWB devices for performing the distance measurements, the fundamentals discussed here remain applicable for any ranging technology. Recent advancements in the IEEE 802.11-REVmc protocol4 allow for wireless time-of-flight measurements on commodity WiFi devices and access points, which can easily be adapted to perform the peer-to-peer measurements envisioned in this article. IoT devices that support this technology can be built today. P2PLoc can thus transform ad hoc playgrounds into sports-analytics arenas without relying on expensive tracking technology.

Figure 3. The node's location can be estimated to be anywhere in the region of confusion. The shape and area of this region depend on both the magnitude and the angle of the radius vectors formed by the initiators.

Figure 4. Effect of human occlusions on estimated distance. Ultra-wideband (UWB) devices that are blocked by a person's body obtain erroneous distance estimates.


The proliferation of IoT devices in everyday sensing and analytics is pushing the envelope for location tracking. P2P location tracking is well suited for many of these applications due to its low energy footprint and extreme robustness under any environmental condition. Despite the challenges of peer-to-peer location tracking, our results demonstrate the feasibility and suggest the vast utility of such a primitive. P2PLoc is only a first step in this direction, enabling accurate and fast tracking of a team of devices in the absence of external infrastructure.

REFERENCES
1. R. Want, W. Wang, and S. Chesnutt, “Accurate Indoor Location for the IoT,” Computer, vol. 51, no. 8, 2018, pp. 66–70.
2. R.B. Langley, “Dilution of Precision,” GPS World, May 1999, pp. 52–59; www2.unb.ca/gge/Resources/gpsworld.may99.pdf.
3. P. Massatt and K. Rudnick, “Geometric Formulas for Dilution of Precision Calculations,” Navigation, 1990, pp. 379–391; https://onlinelibrary.wiley.com/doi/abs/10.1002/j.2161-4296.1990.tb01563.x.
4. IEEE Std. P802.11-REVmc, IEEE Approved Draft Standard for Information Technology—Telecommunications and Information Exchange between Systems—Local and Metropolitan Area Networks—Specific Requirements Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, IEEE, 2016; https://ieeexplore.ieee.org/document/7558098.

Figure 6. Overall localization error remains under 1 m for most cases, even with human occlusions. Without human occlusions, localization error is under 20 cm. CDF: cumulative distribution function. (Plot annotations: 75% = 0.82 m; 90% = 2.65 m; curves shown for volunteer-based and tripod-based runs.)

ABOUT THE AUTHORS
ASHUTOSH DHEKNE is a PhD student at the University of Illinois at Urbana-Champaign. Contact him at [email protected].
UMBERTO J. RAVAIOLI is an analyst at Toyon Research Corporation. Contact him at [email protected].
ROMIT ROY CHOUDHURY is a professor and Jerry Sanders III Scholar at the University of Illinois at Urbana-Champaign. Contact him at [email protected].

Figure 5. A wearable arm band carrying a UWB device. (Labeled components: UWB antenna, armband, UWB device, power source.)


This article originally appeared in Computer, vol. 51, no. 10, 2018.


DEPARTMENT: WEARABLE COMPUTING

Earables for Personal-Scale Behavior Analytics

The rise of consumer wearables promises to have a profound impact on people's lives by going beyond counting steps. Wearables such as eSense—an in-ear multisensory stereo device—for personal-scale behavior analytics could help accelerate our understanding of a wide range of human activities in a nonintrusive manner.

The era of wearables has arrived. As more established forms of wearables such as timepieces, rings, and pendants get a digital makeover, they are reshaping our everyday experiences with new, useful, exciting, and sometimes entertaining services. Millions of people now wear commercial wearables on a daily basis to quantify their physical activity, social lifestyle, and health.1 However, for wearables to have a broader impact on our lives, next-generation consumer wearables must expand their monitoring capabilities beyond a narrow set of exercise-related physical activities.

One of the barriers for modern wearables in modeling richer and broader human activities is the limited diversity of the embedded sensors. Inertial sensors, such as gyroscopes, are constrained to track motion activities; microphones are primarily limited to conversational activity sensing; and radio-wave-based sensing primarily works for proxemic context detection. Also, many non-commercial academic endeavors exploring newer wearables with ambitious modeling targets suffer from poor ergonomics, limiting large-scale studies and experimental trials. We argue that one way to address these challenges is to leverage a form factor with an already established function and augment it with diverse, established sensing modalities while maintaining its ergonomics and comfort. Indeed, several recent wearable initiatives, including smart eyeglasses2 and smart wristbands,3 aimed at modeling medical-grade biomarkers but allowed open data access to spur new research. We envision that wearables equipped with multimodal sensing and real-time data accessibility could foster new research leading toward a comprehensive understanding of various human behavioral traits in a nonintrusive manner. Such understanding will uncover opportunities for new and useful applications in the areas of precise, predictive, and personalized medicine; digital, physical, mental, and social well-being; cognitive assistance; and sensory human communication experiences.

Fahim Kawsar, Nokia Bell Labs, Cambridge
Chulhong Min, Nokia Bell Labs, Cambridge
Akhil Mathur, Nokia Bell Labs, Cambridge
Alessandro Montanari, Nokia Bell Labs, Cambridge

Department Editors: Oliver Amft, [email protected]; Kristof Van Laerhoven, [email protected]


To this end, we present eSense, an in-ear multi-sensory high definition wireless stereo device in an aesthetically pleasing and ergonomically comfortable form factor. eSense offers a set of APIs to allow developers to record real-time data streams of different sensors and to reconfigure different system parameters suitable for behavioral inferences. We briefly discuss the system's design and development and its potential in a wide range of research studies and applications in personal-scale behavior analytics.

DESIGN DYNAMICS
Wearables with many embedded sensors are now mainstream. Many of these wearables manifest in traditional forms—for example, a wristband, smartwatch, or pendant. As such, a new wearable form demands careful design assessment from multiple perspectives including ergonomics, aesthetics, functionality, and interaction usability. We have chosen the wireless earbud for efficient, robust, and multimodal sensing of behavioral attributes for the following benefits:

• Established functionality. Earbuds are already integrated into people's lives, and provide access to high-definition music during work, commuting, and exercise. Earbuds also allow people to make hands-free calls. eSense, first and foremost, is a comfortable earpiece capable of producing a high-definition wireless audio experience in a compelling form, but it has added sensing functionalities. Consequently, eSense seamlessly augments established earpieces without demanding changes to users' current behaviors.

• Unique placement. The ear is a relatively stationary part of the body, so placement of a sensing unit in the ear offers two concrete benefits. First, due to the stationary nature, sensory signals (accelerometer, gyroscope, audio, and so on) are less susceptible to motion artifacts and external noises. So, sensor data carries accurate and precise information concerning recognition of physical and conversational activities in comparison to other wearable devices such as those worn on the wrist. Such signal clarity has a profound impact on the robustness of sensory models. Second, placement in the ear enables earables to monitor head and mouth movements, in addition to whole-body movements, in a noninvasive way. This unique capability creates opportunities for many novel applications in the areas of personal health, dietary monitoring, and attention management.

• Privacy-preserving interaction. Earbuds are intimate and discreet, enabling users to have immediate and hands-free access to information in a privacy-preserving and socially acceptable way. Other wearable form factors typically demand explicit actions and attention from users, but earbuds can deliver auditory information even when a user is mindfully engaged in a physical or social activity.

Wireless earbuds provide users with freedom of movement and hands-free interaction, minimizing situational disability and fragmentation of attention. In addition, earables can be worn for many hours without any impact on primary motor or cognitive activities. These advantages collectively shaped our design decision to select the earbud as an ideal wearable platform for personal-scale behavior analytics.

ESENSE
eSense is designed to be able to track a set of head- and mouth-related behavioral activities including speaking, eating, drinking, shaking, and nodding, as well as a set of whole-body movements. Automatically tracking these activities has profound value to various applications in the areas of quantified lifestyle, computational social science, healthcare, and well-being. While these application areas benefit from a diverse range of sensory signals including audio, motion, orientation, photoplethysmogram (PPG), temperature, and galvanic skin response, we designed eSense with three sensory channels—audio, motion, and Bluetooth Low Energy (BLE) radio. Three aspects influenced this design decision: the physical dimension of the eSense printed circuit board to maintain the aesthetics and comfort, the minimization of signal interference from adjacent sensors, and the maximization of battery life to offer the primary functional service of high-definition music playback.

Figure 1. eSense system. (a) Audio, motion, and Bluetooth Low Energy radio sensing are powered by a CSR processor and a 40-mAh battery. (b) Schematic design. (c) Front and back of the printed circuit board.

Hardware
Two main concerns drove the hardware design of eSense (see Figure 1): physical size and functional requirements. Physically, we wanted eSense to equal the size of a standard wireless earbud (including battery, electronics, and all outside connections) to ensure that it could be worn naturally with comfort. Functionally, we wanted eSense to let users reprogram the sensors and recharge the battery. Taking both of these concerns into account, we used a custom-designed 15 × 15 × 3 mm PCB. eSense is composed of a Qualcomm CSR8670, a programmable Bluetooth dual-mode flash audio system-on-chip (SoC), with one microphone; a TDK MPU6050 six-axis inertial measurement unit (IMU) including a three-axis accelerometer, a three-axis gyroscope, and a digital motion processor; a two-state button; a circular LED; and associated power-regulation and battery-charging circuitry. There is no internal storage or real-time clock. We opted for an ultra-thin 40-mAh LiPo battery to power the system. This battery offers a reasonable energy profile: 3.0 h of inertial sensing at 50 Hz, or 1.2 h of simultaneous audio sensing at 16 kHz and inertial sensing at 50 Hz. The carrier casing is equipped with an external battery, enabling recharging of the eSense earbuds on the go. Each earbud weighs 20 g and is 18 × 20 × 20 mm.
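
As a quick sanity check on these figures, the average current draws implied by the quoted runtimes can be recovered with simple arithmetic. The Python sketch below assumes an ideal discharge of the 40-mAh pack and ignores regulator losses and battery derating, so the results are rough estimates rather than measured values.

# Rough battery arithmetic implied by the figures in the text:
# a 40-mAh pack, 3.0 h of IMU-only sensing, 1.2 h of audio + IMU sensing.
# Assumes ideal discharge; real packs lose capacity to regulator
# inefficiency, temperature, and aging.
CAPACITY_MAH = 40.0

def implied_current_ma(runtime_h: float) -> float:
    """Average current draw implied by a runtime, in mA."""
    return CAPACITY_MAH / runtime_h

print(f"IMU @ 50 Hz:          ~{implied_current_ma(3.0):.1f} mA")  # ~13.3 mA
print(f"Audio @ 16 kHz + IMU: ~{implied_current_ma(1.2):.1f} mA")  # ~33.3 mA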

Firmware
We developed energy-aware firmware that implements the classic Bluetooth stack, including the Advanced Audio Distribution Profile (A2DP) for high-definition audio streaming and mono-channel recording. The firmware also implements the full BLE radio stack for delivering the accelerometer and gyroscope data and for configuring different parameters. A set of BLE characteristics exposes these functionalities: setting the sampling rate and duty cycle of the microphone and IMU, setting the advertisement packet interval and connection interval of BLE, and receiving the sensor data. We have also designed standard BLE characteristics for receiving the battery voltage and advertisement packets. In our design, continuous bidirectional audio streaming uses classic Bluetooth, and motion data streaming uses BLE. To accommodate simultaneous audio and motion data transfer, eSense implements a multiplexing protocol transparently, without requiring any modification to the host device's stack. Finally, to enable continuous proximity sensing, eSense broadcasts advertisement packets continuously, and the advertisement interval can be configured programmatically to maintain the right balance between battery life and application requirements.
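
To make the motion data path concrete, here is a minimal host-side sketch of subscribing to an IMU characteristic over BLE, written in Python with the cross-platform bleak library. The device address, characteristic UUID, and packet layout are placeholders invented for illustration; they are not eSense's published protocol, which is exposed through the characteristics described above.

# Minimal sketch: receive BLE notifications carrying IMU samples.
# DEVICE_ADDRESS, IMU_CHAR_UUID, and the packet layout are hypothetical;
# consult the eSense documentation for the real characteristic set.
import asyncio
import struct

from bleak import BleakClient

DEVICE_ADDRESS = "AA:BB:CC:DD:EE:FF"                    # placeholder
IMU_CHAR_UUID = "0000abcd-0000-1000-8000-00805f9b34fb"  # placeholder

def on_imu_packet(_sender, data: bytearray) -> None:
    # Assume six little-endian int16s: ax, ay, az, gx, gy, gz.
    ax, ay, az, gx, gy, gz = struct.unpack("<6h", data[:12])
    print(f"accel=({ax}, {ay}, {az})  gyro=({gx}, {gy}, {gz})")

async def main() -> None:
    async with BleakClient(DEVICE_ADDRESS) as client:
        await client.start_notify(IMU_CHAR_UUID, on_imu_packet)
        await asyncio.sleep(10.0)  # stream notifications for 10 s
        await client.stop_notify(IMU_CHAR_UUID)

asyncio.run(main())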

Middleware
To support application development with eSense, we have created a thin middleware for the iOS and Android operating systems, and Node.js middleware for desktop platforms. This middleware lets developers connect to and configure eSense and ingest sensory data in real time. It also offers a set of predefined audio- and movement-based context primitives that developers can readily use in their applications in an event-driven manner.
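
The exact middleware API is platform specific, but the event-driven style it describes looks roughly like the following Python sketch. The class, method, and event names here are hypothetical stand-ins for illustration, not the middleware's real interface.

# Hypothetical illustration of the event-driven context-primitive style;
# the real middleware (iOS/Android/Node.js) uses its own names and types.
from collections import defaultdict
from typing import Any, Callable

class ContextHub:
    """Toy event hub mimicking subscription to context primitives."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def on(self, primitive: str, handler: Callable[[Any], None]) -> None:
        self._handlers[primitive].append(handler)

    def emit(self, primitive: str, payload: Any) -> None:
        for handler in self._handlers[primitive]:
            handler(payload)

hub = ContextHub()
hub.on("nod", lambda p: print("nod detected:", p))
hub.on("speaking", lambda p: print("speech segment:", p))

# A recognition pipeline would emit primitives as they are detected:
hub.emit("nod", {"confidence": 0.92, "timestamp_s": 12.4})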

SIGNAL CHARACTERISTICS
We have experimentally explored the differential characteristics of the BLE, audio, and inertial signals captured by eSense. Comparing eSense to two other forms, a smartphone and a smartwatch, we looked at several key factors that impact behavior analytics pipelines, including sampling variability, the signal-to-noise ratio (SNR), placement invariance, and sensitivity to motion artifacts. The experimental results suggest that eSense is robust in modeling these signals and in most conditions demonstrates comparable, and often superior, performance in signal stability and noise sensitivity.

Figure 2 shows the SNR of motion and audio signals received under identical physical activity conditions—sitting, standing, and walking—for eSense, an LG Urbane 2 smartwatch, and a Google Nexus 6 smartphone in the pocket. For the inertial sensors, we derived the SNR by computing the power ratio of the signal and noise on a decibel (dB) scale. The noise profile—electrical noise and sensor biases—was obtained from the stationary states of the devices. We computed the SNR for audio signals as the power ratio of signal and noise in the context of speech during different physical activities. We obtained the noise profile in the absence of speech during these activities; before calculating the SNR in dB, this noise was removed from the recorded signal through spectral subtraction. For inertial sensing, we observe that the earbud and smartwatch carry more information about the target physical activities than a smartphone. With respect to audio sensing, the earbud provides the highest SNR in all physical activity conditions owing to the device's unique placement, which is less affected by the acoustic profile of human activities.

Figure 2. Signal-to-noise ratio (SNR) of accelerometer, gyroscope, and microphone signals of eSense, a smartphone, and a smartwatch under different activity conditions. eSense’s earbud captures differential signals in all cases.
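
The SNR metric itself is straightforward to reproduce. The Python sketch below computes the signal-to-noise power ratio in dB from a signal segment and a noise-only reference segment, as described above; the synthetic arrays stand in for real recordings, and this simple full-band estimate omits the spectral-subtraction step used for the audio channel.

# SNR as the power ratio of signal to noise, in dB. The noise reference
# comes from a stationary (or no-speech) segment, as in the text.
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """SNR in dB given a signal segment and a noise-only segment."""
    p_signal = np.mean(np.square(signal.astype(float)))
    p_noise = np.mean(np.square(noise.astype(float)))
    return 10.0 * np.log10(p_signal / p_noise)

# Synthetic stand-ins for real recordings:
rng = np.random.default_rng(0)
noise = 0.05 * rng.standard_normal(16_000)    # sensor noise floor
t = np.linspace(0.0, 1.0, 16_000, endpoint=False)
signal = np.sin(2 * np.pi * 5.0 * t) + noise  # activity plus noise

print(f"SNR = {snr_db(signal, noise):.1f} dB")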

APPLICATION LANDSCAPE
The most plausible consumer-focused applications for multisensory wearables such as eSense are smart music players that can react to social and emotional contexts, fitness trackers, and affective communication tools. Building upon many academic research efforts, several commercial-grade earbuds today, such as Bragi's The Dash, Bose SoundSport, Jabra Elite Sport, and Sony Xperia, offer a superior noise-cancelling music experience augmented with fitness coaching. Two other active application areas that are gaining momentum are in-ear personal assistants and real-time language translators. A number of crowdfunded projects are currently exploring these services with wearables.

There are, however, many other application areas in which we envision earables providing significant benefits. In fact, over the past two decades, seminal research in the ubiquitous computing and wearable computing domains has sought to achieve useful, engaging, and sometimes ambitious behavioral analytics with ear-worn sensing devices, including continuous monitoring of cardiovascular function, heart rate, and stress,4,5 measurement of oxygen consumption and blood flow,6 tracking of eating episodes,7 dietary and swallowing activities,8,9 and several other biomarkers. Here we briefly look at potential applications for eSense and similar earables in these areas.

Health and Well-being
eSense can be effectively used to monitor head- and mouth-related behavioral activities, including speaking, eating, drinking, shaking, and nodding, as well as a set of whole-body movements. It can also be extended to detect more minute head and neck movements to augment clinical medicine applications related to neck and head injury. Moreover, with eSense's conversational-activity monitoring capabilities, social interactions can be quantified to help treat different mental health conditions and provide well-being feedback. Figure 3 shows a workplace well-being application that models eSense sensory streams for a variety of physical, digital, and social well-being metrics10 and provides personalized, actionable feedback in conversational and visual representations.

Figure 3. A personalized well-being feedback app for eSense that captures physical, mental, and social well-being at the workplace.

Cognitive Assistant
In recent years, we have seen conversational agents such as Siri, Cortana, and Google Assistant enter mainstream use by millions of users on a daily basis. However, these agents are not yet capable of understanding users' physical, social, and environmental contexts. We envision that a platform such as eSense will pave the path for these agents to be hyperaware of their users' context, and thereby able to offer personalized services more persuasively and integrate seamlessly with our inner cognition loop. Also, subtle gestures such as nodding or shaking the head can act as semi-implicit interaction techniques with these agents; a sketch of such gesture detection follows below. Such cognitive agents will be extremely beneficial for the mobile workforce (for example, field service professionals and retail workers) as well as structured-workplace workers (for example, office workers and call center workers), providing them with situational guidance and activity-aware recommendations and enabling them to adhere to workplace safety regulations precisely.
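
As an illustration of how such semi-implicit gestures might be detected, the Python sketch below classifies a short gyroscope window as a nod or a shake by comparing energy on the pitch and yaw axes. The axis mapping and threshold are assumptions for illustration, not eSense's actual recognition pipeline.

# Toy nod/shake classifier over a gyroscope window (N x 3, deg/s).
# Axis mapping (x = pitch, z = yaw) and the 60 deg/s threshold are
# illustrative assumptions, not eSense's shipped gesture pipeline.
import numpy as np

def classify_head_gesture(gyro: np.ndarray, thresh_dps: float = 60.0) -> str:
    pitch = np.abs(gyro[:, 0])  # assumed nod axis
    yaw = np.abs(gyro[:, 2])    # assumed shake axis
    if pitch.max() < thresh_dps and yaw.max() < thresh_dps:
        return "none"           # no deliberate head motion in this window
    return "nod" if pitch.sum() > yaw.sum() else "shake"

# Example: a synthetic 1-s window at 50 Hz with pitch-dominant motion.
rng = np.random.default_rng(1)
window = rng.standard_normal((50, 3)) * 5.0
window[:, 0] += 80.0 * np.sin(np.linspace(0, 2 * np.pi, 50))  # nod motion
print(classify_head_gesture(window))  # -> "nod"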

Driving Behavior Monitoring
One specialized behavior analytics application for eSense is tracking drivers' head movements to ensure that drivers are awake, alert, and looking in the right direction. The hands-free and immediate interaction affordance of eSense can also be leveraged as a contextual communication interface to provide drivers with relevant information about their routes and to prevent them from reading texts while driving.

Contextual Notification Management
Timely delivery of notifications on mobile devices has been an active area of study in the past few years. Researchers have primarily focused on understanding the receptivity of mobile notifications and predicting opportune moments to deliver notifications to optimize metrics such as response time, engagement, and emotion. eSense's ability to understand situational context could be incorporated into designing effective notification delivery mechanisms in the future.

Lifelogging
eSense could also enable lifelogging using nonvision modalities. While today's lifelogging applications are primarily vision based, audio, motion, and proximity sensing can collectively identify and capture users' memorable and vital everyday events and can help them intuitively recall their past experiences.

Computational Social Science Research
There is a rich body of literature on tracking face-to-face interactions and the impact of space on enabling them. These works often apply to controlled environments and settings such as offices. eSense could be used as a platform for conducting such studies at scale, thereby opening up opportunities for new insights in computational social science.

CONCLUSION
Wearable devices such as eSense hold enormous potential for accelerating our understanding of a wide range of human activities in a nonintrusive manner. As such, we plan to share this platform with ubiquitous computing researchers and create an open data repository for eSense-driven research and research on other wearables. Academic researchers are encouraged to visit http://www.esense.io and register their interest in participating in an early access pilot project in the fourth quarter of 2018 using donated eSense units, associated software libraries, and a data-sharing platform. We will also showcase eSense at the 2018 UbiComp and ISWC conferences. We expect that these efforts will collectively help our community achieve the next breakthroughs in wearable sensing systems, especially in understanding the dynamics of human behavior in the real world.

REFERENCES
1. O. Amft and K. Van Laerhoven, "What Will We Wear after Smartphones?," IEEE Pervasive Computing, vol. 16, no. 4, 2017, pp. 80–85.

2. F. Wahl et al., "Personalizing 3D-Printed Smart Eyeglasses to Augment Daily Life," Computer, vol. 50, no. 2, 2017, pp. 26–35.

3. M.-Z. Poh, N.C. Swenson, and R.W. Picard, “A Wearable Sensor for Unobtrusive, Long-Term Assessment of Electrodermal Activity,” IEEE Trans. Biomedical Eng., vol. 57, no. 5, 2010, pp. 1243–1252.

4. Y. Tomita and Y. Mitsukura, “An Earbud-Based Photoplethysmography and Its Application,” Electronics and Communications in Japan, vol. 101, no. 1, 2018, pp. 32–38.

5. S. Nirjon et al., “MusicalHeart: A Hearty Way of Listening to Music,” Proc. 10th ACM Conf. Embedded Network Sensor Systems (SenSys 12), 2012, pp. 43–56.

6. S.F. LeBoeuf et al., “Earbud-Based Sensor for the Assessment of Energy Expenditure, Heart Rate, and VO2max,” Medicine and Science in Sports and Exercise, vol. 46, no. 5, 2014, pp. 1046–1052.

7. A. Bedri et al., “EarBit: Using Wearable Sensors to Detect Eating Episodes in Unconstrained Environments,” Proc. ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 1, no. 3, 2017; doi.org/10.1145/3130902.

8. K. Taniguchi et al., “Earable RCC: Development of an Earphone-Type Reliable Chewing-Count Measurement Device,” J. Healthcare Eng., 2018; doi.org/10.1155/2018/6161525.

9. O. Amft et al., “Analysis of Chewing Sounds for Dietary Monitoring,” Proc. 7th Int’l Conf. Ubiquitous Computing (UbiComp 05), 2005, pp. 56–72.

10. A. Mashhadi et al., “Understanding the Impact of Personal Feedback on Face-to-face Interactions in the Workplace,” Proc. 18th ACM Int’l Conf. Multimodal Interaction (ICMI 16), 2016, pp. 362–369.

ABOUT THE AUTHORS
Fahim Kawsar leads pervasive systems research at Nokia Bell Labs in Cambridge, UK, and holds a Design United Professorship at TU Delft. Contact him at [email protected].

Chulhong Min is a research scientist at Nokia Bell Labs in Cambridge, UK. Contact him at [email protected].

Akhil Mathur is a research scientist at Nokia Bell Labs in Cambridge, UK. Contact him at [email protected].

Alessandro Montanari is a research scientist at Nokia Bell Labs in Cambridge, UK. Contact him at [email protected].

This article originally appeared in IEEE Pervasive Computing, vol. 17, no. 3, 2018.
