Atollic TrueSTUDIO® for ARM®
Quick Start Guide for ARM®
User Guide
Document Data
ii | Page
COPYRIGHT © Copyright 2009-2014 Atollic AB. All rights reserved. No part of this document may be reproduced or distributed without prior written consent of Atollic AB. The software product described in this document is furnished under a license and may only be used, or copied, according to the license terms.
TRADEMARKS
Atollic, Atollic TrueSTUDIO, Atollic TrueINSPECTOR, Atollic TrueVERIFIER, Atollic TrueANALYZER and Atollic TrueSTORE and the Atollic logotype are trademarks or registered trademarks owned by Atollic. ARM, ARM7, ARM9 and Cortex are trademarks, or registered trademarks, of ARM Limited. ECLIPSE is a registered trademark of the Eclipse foundation. Microsoft, Windows, Word, Excel and PowerPoint are registered trademarks of Microsoft Corporation. Adobe and Acrobat are registered trademarks of Adobe Systems Incorporated. All other product names are trademarks, or registered trademarks, of their respective owners.
DISCLAIMER
The information in this document is subject to change without notice and does not represent a commitment of Atollic AB. The information contained in this document is assumed to be accurate, but Atollic assumes no responsibility for any errors or omissions. In no event shall Atollic AB, its employees, its contractors, or the authors of this document be liable for any type of damage, losses, costs, charges, claims, demands, claims for lost profits, fees, or expenses of any nature or kind.
DOCUMENT IDENTIFICATION
TS-UG-ARM November 2012
REVISION
2nd February 2013 – Applies to Atollic TrueSTUDIO® v4.0
3rd June 2013 – Applies to Atollic TrueSTUDIO® v4.1
4th September 2013 – Applies to Atollic TrueSTUDIO® v4.2
5th November 2013 – Applies to Atollic TrueSTUDIO® v4.3
6th April 2014 – Applies to Atollic TrueSTUDIO® v5.0
7th June 2014 – Applies to Atollic TrueSTUDIO® v5.1
Atollic AB, Science Park, Gjuterigatan 7, SE-553 18 Jönköping, Sweden
+46 (0) 36 19 60 50
Email: [email protected] Web: www.atollic.com
Atollic Inc., 241 Boston Post Rd. West (1st Floor), Marlborough, Massachusetts 01752, United States
+1 (973) 784 0047 (Voice) +1 (877) 218 9117 (Toll Free)
Email: [email protected] Web: www.atollic.com
Table of Contents
About this document ............................................................. 25
Intended Readers ..................................................................................... 25
Document Conventions ............................................................................ 26
Getting Started .................................................... 27
Introduction .............................................................................................. 28
Support ..................................................................................................... 30
Lite, Pro and Premium Versions ............................................................... 31
Preparing for Start .................................................................................... 32
Workspaces & Projects ................................................................................... 32
Perspectives & Views ...................................................................................... 33
Views ................................................................................................ 36
Starting the Program ................................................................................ 38
Starting with different language ..................................................................... 40
Managing Existing Workspaces ....................................................................... 41
Copy Properties Between Workspaces ............................................ 41
Keeping Track on Java Heap Space .................................................. 41
Creating a New Project ............................................................................. 43
One-Click Example Project Installation ........................................................... 50
Using an Existing Project ................................................................................. 51
Creating a Static Library .................................................................................. 51
Creating a Makefile Project From Existing Code ............................................. 53
Configuring the Project’s Build settings ................................................... 55
Debug and Release Build Settings ................................................................... 59
Compiler Optimization .................................................................................... 59
Link Time Optimization (LTO) .......................................................................... 61
Changing Tool Chain Version .......................................................................... 63
Building the Project .................................................................................. 66
Enable Build on Save ....................................................................................... 67
Rebuild All ....................................................................................................... 67
Headless Build ................................................................................................. 68
Logging ............................................................................................................ 70
The Build Size .................................................................................................. 70
Creating a .hex-File .......................................................................................... 72
Command Line Patterns .................................................................................. 73
Create .list-Files ................................................................................ 74
Temporary Assembly File ................................................................................ 75
Building One File ............................................................................................. 75
Linking the Project .................................................................................... 78
Include Libraries .............................................................................................. 78
Referring Project ............................................................................................. 80
Dead Code Removal ........................................................................................ 81
Linker Script ..................................................................................................... 81
Generate A New Linker Script ......................................................................... 82
Place Code In a New Memory Region ............................................................. 84
Place Variables At Specific Addresses ............................................................. 85
Linking In a Block of Binary Data ..................................................................... 87
Managing Existing Projects ....................................................................... 88
The Index ......................................................................................................... 88
Finding Include Paths, Macros etc. ................................................................. 90
Debugging ................................................................................................. 93
Starting the Debugger ..................................................................................... 94
Managing the Debug Configurations ............................................................ 100
Customize the Debug Perspective ................................................................ 102
Debugging ..................................................................................................... 103
Terminate, Rebuild and Re-launch ................................................ 104
Disassembly View ........................................................................... 104
Breakpoints .................................................................................... 105
Expressions..................................................................................... 107
Live Expressions ............................................................................. 107
Local Variables ............................................................................... 108
Fill Memory With a Byte Pattern ................................................... 109
SFRs ................................................................................................ 110
Fault Analyzer ................................................................................ 112
Terminal View ................................................................................ 113
Stopping the Debugger ................................................................................. 115
Upgrading the GDB Server ............................................................................ 116
Configure Segger’s GDB Server ..................................................................... 117
Change Flash Caching..................................................................... 118
Enable Log File ............................................................................... 118
Settings Command Line Option ..................................................... 119
Debugging Two Targets at the Same Time ................................................... 121
First Alternative - Local GDB-server Using GUI Options ................ 121
Second Alternative - Remote GDB-server Using Command-line
Options ........................................................................................... 122
I/O redirection ........................................................................................ 123
Position Independent Code .................................................................... 126
Installing 3rd party plugins ...................................................................... 129
Serial Wire Viewer Tracing ..................................132
Using Serial Wire Viewer Tracing ........................................................... 133
Serial Wire Debug (SWD) .............................................................................. 133
Serial Wire Output (SWO) ............................................................................. 133
Serial Wire Viewer (SWV) .............................................................................. 133
Instrumentation Trace Macrocell (ITM) ........................................................ 134
Starting SWV Tracing .............................................................................. 135
The SWV Views ....................................................................................... 142
The Timeline Graphs ..................................................................................... 144
Statistical Profiling ......................................................................................... 144
Exception Tracing .......................................................................................... 146
Exception Data ............................................................................... 146
Exception Statistics ........................................................................ 147
Printf() Redirection Over ITM ................................................................. 149
Change the Trace Buffer Size ................................................................. 150
Common SWV Problems ........................................................................ 151
Instruction Tracing ..............................................152
Instruction Tracing .................................................................................. 153
Enable Trace .................................................................................................. 153
Writing a Trace Port Configuration File ......................................... 155
Configuring the Tracing Session .................................................................... 156
ETM Trace Port Configuration File Reference ............................................... 158
Add Trace Trigger .......................................................................................... 158
Add Trace Trigger in the Editor ...................................................... 159
Managing Trace Triggers ............................................................................... 160
Start Trace Recording .................................................................................... 161
Analyzing the Trace ....................................................................................... 161
Display Options .............................................................................. 162
Search the Trace Log ...................................................................... 163
Exporting a Trace Log .................................................................................... 163
RTOS-Aware Debugging ......................................165
RTOS Kernel Awareness Debugging ....................................................... 166
Segger embOS ........................................................................................ 167
Requirements ................................................................................................ 167
Finding the Views .......................................................................................... 167
System Information ....................................................................................... 168
Task List ......................................................................................................... 169
Timers ............................................................................................................ 170
Resource Semaphores ................................................................................... 171
Mailboxes ...................................................................................................... 172
HCC Embedded eTaskSync ..................................................................... 174
Requirements ................................................................................................ 174
Finding the View ............................................................................................ 174
Task List ......................................................................................................... 175
FreeRTOS and OpenRTOS ....................................................................... 176
Requirements ................................................................................................ 176
Finding the Views .......................................................................................... 176
Task List ......................................................................................................... 177
Queues .......................................................................................................... 178
Semaphores .................................................................................................. 179
Timers ............................................................................................................ 180
Freescale MQX ........................................................................................ 182
Requirements ................................................................................................ 182
Finding the View ............................................................................................ 182
Task Summary ............................................................................................... 182
Task Details ................................................................................................... 183
Quadros RTXC ......................................................................................... 185
Requirements ................................................................................................ 185
Finding the Views .......................................................................................... 185
Kernel Information ........................................................................................ 186
Tasks (Task List and Stack Info) ..................................................................... 186
Task List tab .................................................................................... 187
Stack Info tab ................................................................................. 188
Alarms ........................................................................................................... 189
Counters ........................................................................................................ 190
Event Sources ................................................................................................ 190
Exception Backtrace ...................................................................................... 191
Exceptions ..................................................................................................... 192
Mailboxes ...................................................................................................... 193
Mutexes......................................................................................................... 194
Partitions ....................................................................................................... 195
Pipes .............................................................................................................. 196
Queues .......................................................................................................... 197
Semaphores .................................................................................................. 198
Express Logic ThreadX ............................................................................ 200
Requirements ................................................................................................ 200
Finding the Views .......................................................................................... 200
Thread List ..................................................................................................... 201
Semaphores .................................................................................................. 202
Mutexes......................................................................................................... 203
Message Queues ........................................................................................... 204
Event Flags .................................................................................................... 205
Timers ............................................................................................................ 206
Memory Block Pools...................................................................................... 206
Memory Byte Pools ....................................................................................... 207
TOPPERS/ASP .......................................................................................... 209
Requirements ................................................................................................ 209
Finding The Views ......................................................................................... 209
Tasks .............................................................................................................. 210
Static Information Tab ................................................................... 210
Current Status Tab ......................................................................... 211
Dataqueues ................................................................................................... 212
Static Information Tab ................................................................... 212
Current Status Tab ......................................................................... 213
Event Flags .................................................................................................... 214
Static Information Tab ................................................................... 214
Current Status Tab ......................................................................... 215
Mailboxes ...................................................................................................... 215
Static Information Tab ................................................................... 216
Current Status Tab ......................................................................... 216
Memory Pools ............................................................................................... 217
Static Information Tab ................................................................... 217
Current Status Tab ......................................................................... 218
Cyclic Handlers .............................................................................................. 219
Static Information Tab ................................................................... 219
Current Status Tab ......................................................................... 220
Alarm Handlers .............................................................................................. 220
Static Information Tab ................................................................... 221
Current Status Tab ......................................................................... 221
Micrium µC/OS-III ................................................................................... 223
Requirements ................................................................................................ 223
Finding the Views .......................................................................................... 223
System Information ....................................................................................... 224
Task List ......................................................................................................... 226
Semaphores .................................................................................................. 227
Mutexes......................................................................................................... 228
Message Queues ........................................................................................... 229
Event Flags .................................................................................................... 230
Timers ............................................................................................................ 231
Memory Partitions ........................................................................................ 232
Source Code Review ...........................................234
Introduction to Code Reviews ................................................................ 235
Planning a Review – Review ID Creation ................................................ 237
Creating a Review ID ..................................................................................... 238
Tailoring a Review Id Template ..................................................................... 243
Conducting a Source Code Review ......................................................... 245
Individual Phase ............................................................................................ 248
Team Phase ................................................................................................... 250
Rework Phase ................................................................................................ 252
Additional Settings ........................................................................................ 253
Static Code Analysis ............................................255
Introduction ............................................................................................ 256
Coding Standards Compliance ...................................................................... 256
Source Code Metrics ...................................................................................... 257
Reports .......................................................................................................... 257
Selecting Coding Standard Rules ............................................................ 258
Running a Code Inspection ..................................................................... 261
Analyzing the Results .............................................................................. 263
The Violation Summary View ........................................................................ 263
The Rule Description View ............................................................................ 265
Violation View ............................................................................................... 266
Introduction to Code Complexity Analysis .................................................... 269
Recommended Level of Code Complexity .................................................... 270
When to Do Code Analysis ............................................................................ 271
The Code Metrics View ................................................................................. 271
Generate a Report .................................................................................. 275
Remove Analysis Results ........................................................................ 276
Summary ................................................................................................. 277
Test Automation .................................................278
Introduction ............................................................................................ 279
Function Tests ............................................................................................... 279
Integration Tests ........................................................................................... 281
Concepts ........................................................................................................ 282
Function Testing Tutorial ........................................................................ 285
The Code To Be Tested .................................................................................. 285
Initial Steps .................................................................................................... 286
Excluding Files and Folders ........................................................................... 287
Adding Functions To Test .............................................................................. 289
Define Expected Results ................................................................................ 296
Defining Advanced Types Of Expected Values .............................. 297
Running Tests ................................................................................................ 298
Analyzing The Test Results ............................................................................ 303
Analyzing The Test Quality ............................................................................ 304
Advanced Input And Output Dependencies ................................................. 306
Integration Test Tutorial ......................................................................... 309
The Code To Be Tested .................................................................................. 309
Creating The Test Scenario............................................................................ 309
Managing The Test Configuration .......................................................... 317
Managing Test Suites .................................................................................... 317
Managing Tests ............................................................................................. 319
Managing Test Cases ..................................................................................... 319
Setup And Teardown Code ..................................................................... 322
Creating Setup And Teardown Definitions .................................................... 322
Adding Code .................................................................................................. 323
Using Setup And Teardown Definitions ........................................................ 324
Detailed Test Control .............................................................................. 326
Test Drivers ................................................................................................... 326
Instrumentation Macros ............................................................................... 326
Input Macros .................................................................................. 326
Test Result Macros ......................................................................... 327
Output Macros ............................................................................... 329
Other Macros ................................................................................. 330
Testing String Variables ................................................................................. 332
Generate Reports ................................................................................... 337
Important Notes ..................................................................................... 338
Project Sharing .............................................................................................. 338
Synchronize TrueVERIFIER ............................................................................ 338
Test Quality Measurement .................................340
Introduction ............................................................................................ 341
Why Perform Code Coverage Analysis? ........................................................ 342
Different Types of Analysis ............................................................................ 342
Block Coverage ............................................................................... 343
Function Coverage ......................................................................... 343
Function Call Coverage .................................................................. 344
Branch Coverage ............................................................................ 344
Tool Support .................................................................................................. 346
Performing Code Coverage Analysis ...................................................... 347
Preparations .................................................................................................. 347
Starting Code Coverage Analysis ................................................................... 347
Generate a CSV-File ................................................................................ 358
Measurement Strategy And Resource Usage ........................................ 359
List of Figures
Figure 1 - Workspaces and Projects ............................................................. 33
Figure 2 – Editing Perspective ...................................................................... 34
Figure 3 - Switch Perspective ....................................................................... 35
Figure 4 - Switch Perspective ....................................................................... 35
Figure 5 – Toolbar Buttons for Perspectives and Views .............................. 35
Figure 6 - View Menu toolbar button .......................................................... 36
Figure 7 - Show View Dialog Box .................................................................. 37
Figure 8 – Toolbar Buttons for Perspectives and Views .............................. 37
Figure 9 - Workspace Launcher .................................................................... 38
Figure 10 - Information Center .................................................................... 39
Figure 11 – Information Center Menu Command ....................................... 40
Figure 12 – Information Center Toolbar Button (A) ..................................... 40
Figure 13 – Manage Workspaces ................................................................. 41
Figure 14 – Display Java Heap Space Status ................................................. 42
Figure 15 – Project Creation Buttons ........................................................... 43
Figure 16 - Starting the Project Wizard ........................................................ 43
Figure 17 - C Project Configuration .............................................................. 44
Figure 18 - TrueSTUDIO® Hardware Configuration ...................................... 45
Figure 19 - TrueSTUDIO® Software Configuration ........................................ 46
Figure 20 - TrueSTUDIO® Debugger Configuration ...................................... 47
Figure 21 - Select Configurations ................................................................. 48
Figure 22 - Project Explorer View ................................................................. 49
Figure 23 – Editor View ................................................................................ 49
Figure 24 – Project Creation Buttons ........................................................... 50
Figure 25 – Atollic TrueSTORE ...................................................................... 50
Figure 26 – Selection of Existing Project File ............................................... 51
Figure 27 – Selection of Static Library Project ............................................. 52
Figure 28 – Create a Makefile Project from existing code ........................... 53
Figure 29 – Locate the code and select <none> .......................................... 53
Figure 30 – Edit the PATH variable ............................................................... 54
Figure 31 – Build Settings Toolbar Button ................................................... 55
Figure 32 – Build Settings Menu Selection................................................... 55
Figure 33 - Project Properties Dialog Box .................................................... 56
Figure 34 – Tool Settings, Miscellaneous Options ....................................... 57
Figure 35 – Target Settings Dialog Box......................................................... 58
Figure 36 – Compiler Optimization Settings for a Project ........................... 60
Figure 37 – Compiler Optimization Settings for a File ................................. 60
Figure 38 – Linker LTO Settings for a Project ............................................... 62
Figure 39 – Linker LTO Settings for a Project ............................................... 62
Figure 40 – Build Settings Toolbar Button ................................................... 63
Figure 41 – Tool Chain Version tab .............................................................. 63
Figure 42 – Manage the Build Configurations.............................................. 64
Figure 43 – Create New Configuration ......................................................... 65
Figure 44 – Old Tool Chain Version for the New Build Configuration ......... 65
Figure 45 - Build Toolbar Button .................................................................. 66
Figure 46 – Build on Save ............................................................................. 67
Figure 47 – Rebuild Toolbar Button ............................................................. 67
Figure 48 – Rebuild Active Configuration Menu Selection .......................... 68
Figure 49 – Open the Properties view ......................................................... 71
Figure 50 – Open the Properties view ......................................................... 72
Figure 51 – Output format Selection ........................................................... 73
Figure 52 – Build Settings Toolbar Button ................................................... 74
Figure 53 – Generate –list Files .................................................................... 74
Figure 54 – Enable the Build Automatically Menu Item .............................. 76
Figure 55 – Build Selected File(s) .................................................................. 77
Figure 56 – GNU Linker manual link .............................................................. 78
Figure 57 – Include a library .......................................................................... 79
Figure 58 – Set Project References ............................................................... 80
Figure 59 – Set Project References ............................................................... 81
Figure 60 – Select New, Other… .................................................................... 82
Figure 61 – Select New, Other… .................................................................... 83
Figure 62 – Enter the name of the script ...................................................... 84
Figure 63 – The Indexer Picks up the Documentation for a Function ......... 88
Figure 64 – Workspace Indexer Settings ...................................................... 89
Figure 65 – Project Indexer Settings ............................................................ 90
Figure 66 – Scanner Discovery Settings ....................................................... 91
Figure 67 – Preprocessor Include Paths, Macros etc. .................................. 92
Figure 68 – Start Debug Session Toolbar Button ......................................... 94
Figure 69 - Debug Configuration Dialog Box ................................................ 94
Figure 70 – The Configure Debug Toolbar Button ....................................... 95
Figure 71 - Debug Configuration, Debugger Panel for the SEGGER J-Link ... 96
Figure 72 - Debug Configuration, Debugger Panel for the ST-Link .............. 96
Figure 73 - Debug Configuration, Startup Scripts Panel .............................. 98
Figure 74 – Debug Perspective ..................................................................... 99
Figure 75 – The Configure Debug Toolbar Button ..................................... 100
Figure 76 – The target ELF-file in Debug Session Configuration ................ 101
Figure 77 – Customize Perspective Dialog Box .......................................... 102
Figure 78 - Run Menu ................................................................................. 103
Figure 79 - Run Control Command Toolbar ............................................... 103
Figure 80 – Terminate, Rebuild and Re-launch Toolbar Button ................ 104
Figure 81 – Instruction Stepping Button .................................................... 104
Figure 82 – Disassembly View .................................................................... 104
Figure 83 - Toggle Breakpoint Context Menu ............................................ 105
Figure 84 – Breakpoints View..................................................................... 105
Figure 85 – Breakpoints Properties ............................................................ 106
Figure 86 – Conditional Breakpoint ........................................................... 106
Figure 87 – Expressions View ..................................................................... 107
Figure 88 – Drag and Drop of Variable to the Expressions View ............... 107
Figure 89 – Live Expressions View .............................................................. 108
Figure 90 – Live Expressions view Number Format ................................... 108
Figure 91 – Variables View ......................................................................... 109
Figure 92 - The Memory Fill toolbar button............................................... 109
Figure 93 - The Memory Fill dialog ............................................................. 109
Figure 94 - SFRs Menu Command .............................................................. 110
Figure 95 - SFR View ................................................................................... 111
Figure 96 – SFR View Buttons .................................................................... 111
Figure 97 – CMSIS-SVD Settings Properties Panel ..................................... 111
Figure 98 – Terminal View .......................................................................... 113
Figure 99 – Terminal Toolbars .................................................................... 113
Figure 100 – Terminal Settings ................................................................... 114
Figure 101 - The Terminate Menu Command ............................................ 115
Figure 102 - C/C++ Editing Perspective ...................................................... 116
Figure 103 – Changing the path to the GDB Server ................................... 117
Figure 104 – GDB Server Control Panel – General Tab ............................... 118
Figure 105 – GDB server Control panel – Settings tab ................................ 119
Figure 106 – Debug Configuration – Connect to remote GDB server........ 120
Figure 107 – New, Other… .......................................................................... 123
Figure 108 – Select Minimal System Calls Implementation ........................ 124
Figure 109 – Select location and Heap implementation ............................. 124
Figure 110 – Add –fPIE for Assembler and C Compiler ............................... 126
Figure 111 – Use –fPIE for Linker ............................................................... 127
Figure 112 – Remove the monitor reset command .................................... 128
Figure 113 – Select Install New Software ................................................... 129
Figure 114 – Enter Download Site and select plugins ................................. 130
Figure 115 – Accept license agreements ................................................... 130
Figure 116 – The plugins are installed ....................................................... 131
Figure 117 – Different types of tracing ....................................................... 134
Figure 118 – Open Debug Configurations Toolbar Button ........................ 135
Figure 119 – Change ST-Link Debug Configuration for SWV ...................... 135
Figure 120 – Change SEGGER J-Link Debug Configuration for SWV .......... 136
Figure 121 – SWV Data Trace Menu Command ......................................... 137
Figure 122 – Configure Serial Wire Viewer Button .................................... 137
Figure 123 – The Serial Wire Viewer Settings Dialog ................................. 138
Figure 124 – The Start/Stop Trace Button ................................................. 141
Figure 125 – Resume Debug Button .......................................................... 141
Figure 126 – Empty SWV Data Button ....................................................... 141
Figure 127 – Several SWV Views Displayed Simultaneously...................... 143
Figure 128 – Statistical Profiling Configuration .......................................... 145
Figure 129 – Statistical Profiling View ........................................................ 145
Figure 130 – Exception Tracing Configuration ........................................... 146
Figure 131 – Exception View, Data Tab ...................................................... 146
Figure 132 – Exception View, Statistics Tab ............................................... 147
Figure 133 – Serial Wire Viewer Preferences............................................. 150
Figure 134 – Enable tracing in the debug configuration ............................ 154
Figure 135 – Configuration Toolbar Button ............................................... 156
Figure 136 - Trace Configuration ................................................................ 157
Figure 137 - Trace Configuration ................................................................ 158
Figure 138 – Add Trace Trigger in the Editor ............................................. 160
Figure 139 – Trace Trigger in the Editor ...................................................... 160
Figure 140 – Trace Trigger in the Editor ...................................................... 160
Figure 141 – Record Toolbar Button .......................................................... 161
Figure 142 - The trace log view .................................................................. 161
Figure 143 - Trace restarted ....................................................................... 162
Figure 144 – Display Options Toolbar Button ............................................ 163
Figure 145 – Search Toolbar Button .......................................................... 163
Figure 146 – Export Toolbar Button ........................................................... 163
Figure 147 - Exporting the trace log ........................................................... 164
Figure 148 - View Top Level Menu ............................................................. 167
Figure 149 - embOS Show View Toolbar Button ......................................... 168
Figure 150 - embOS System Information View .......................................... 168
Figure 151 - embOS System Information View (Fault Condition) .............. 168
Figure 152 - embOS Task List View ............................................................ 169
Figure 153 - embOS Timers View ............................................................... 171
Figure 154 - embOS Resource Semaphores View ...................................... 172
Figure 155 - embOS Mailboxes View ......................................................... 172
Figure 156 – eTaskSync Show View Toolbar Button .................................. 174
Figure 157 - eTaskSync Task List View ....................................................... 175
Figure 158 – FreeRTOS View Top Level Menu ........................................... 176
Figure 159 – FreeRTOS Show View Toolbar Button ................................... 177
Figure 160 - FreeRTOS Task List View ........................................................ 177
Figure 161 - FreeRTOS Queues View .......................................................... 179
Figure 162 - FreeRTOS Semaphores View .................................................. 180
Figure 163 - FreeRTOS Timers View ........................................................... 181
Figure 164 – MQX Show View Toolbar Button ........................................... 182
Figure 165 - MQX Task Summary View ...................................................... 183
Figure 166 - MQX Task Details ................................................................... 184
Figure 167 – RTXC Show View Toolbar Button .......................................... 185
Figure 168 – RTXC Kernel Information View .............................................. 186
Figure 169 - RTXC Task List tab in Task view .............................................. 187
Figure 170 – RTXC Task Stack Info .............................................................. 188
Figure 171 - RTXC Alarms View .................................................................. 189
Figure 172 - RTXC Counters View ............................................................... 190
Figure 173 - RTXC Event Sources View ....................................................... 191
Figure 174 - RTXC Exception Backtrace View ............................................. 192
Figure 175 - RTXC Exceptions View ............................................................ 192
Figure 176 - RTXC Mailboxes View ............................................................. 193
Figure 177 - RTXC Mutexes View ............................................................... 194
Figure 178 - RTXC Partitions View .............................................................. 195
Figure 179 - RTXC Pipes View ..................................................................... 196
Figure 180 - RTXC Queues View ................................................................. 197
Figure 181 - RTXC Semaphores View ......................................................... 198
Figure 182 – ThreadX View Top Level Menu .............................................. 200
Figure 183 - ThreadX Show View Toolbar Button ...................................... 201
Figure 184 - ThreadX Thread List View ...................................................... 201
Figure 185 - ThreadX Semaphores View .................................................... 203
Figure 186 - ThreadX Mutexes View .......................................................... 203
Figure 187 - ThreadX Message Queues View ............................................. 204
Figure 188 - ThreadX Event Flags View ...................................................... 205
Figure 189 - ThreadX Timers View ............................................................. 206
Figure 190 - ThreadX Memory Block Pools View ....................................... 207
Figure 191 - ThreadX Memory Byte Pools View ........................................ 208
Figure 192 – TOPPERS Show View Toolbar Button .................................... 209
Figure 193 – TOPPERS Tasks Static Information Tab ................................. 210
Figure 194 – TOPPERS Tasks Current Status Tab ....................................... 211
Figure 195 – TOPPERS Dataqueues Static Information Tab ....................... 212
Figure 196 – TOPPERS Dataqueues Current Status Tab ............................. 213
Figure 197 – TOPPERS Event Flags Static Information Tab ........................ 214
Figure 198 – TOPPERS Event Flags Current Status Tab .............................. 215
Figure 199 – TOPPERS Mailboxes Static Information Tab ......................... 216
Figure 200 – TOPPERS Mailboxes Current Status Tab ............................... 216
Figure 201 – TOPPERS Memory Pools Static Information Tab ................... 217
Figure 202 – TOPPERS Memory Pools Current Status Tab ........................ 218
Figure 203 – TOPPERS Cyclic Handlers Static Information Tab .................. 219
Figure 204 – TOPPERS Cyclic Handlers Current Status Tab........................ 220
Figure 205 – TOPPERS Alarm Handlers Static Information Tab ................. 221
Figure 206 – TOPPERS Alarm Handlers Current Status Tab ....................... 221
Figure 207 - View Top Level Menu ............................................................. 224
Figure 208 - Show View Toolbar Button .................................................... 224
Figure 209 - µC/OS-III System Information View ....................................... 225
Figure 210 - µC/OS-III Task List View .......................................................... 226
Figure 211 - µC/OS-III Semaphores View ................................................... 228
Figure 212 - µC/OS-III Mutexes View ......................................................... 229
Figure 213 - µC/OS-III Message Queues View............................................ 230
Figure 214 - µC/OS-III Event Flags View ..................................................... 231
Figure 215 - µC/OS-III Timers View ............................................................ 231
Figure 216 - µC/OS-III Memory Partitions View ......................................... 232
Figure 217 – Atollic TrueSTUDIO Support for the Code Review Workflow 235
Figure 218 – Project Properties Menu Selection ....................................... 238
Figure 219 - GUI for Creating and Managing Code Reviews ...................... 238
Figure 220 - Dialog for Creating a New Review ID ..................................... 239
Figure 221 - Dialog for Managing the Work Product of a Review ............. 239
Figure 222 - Add Reviewers to the Review ................................................ 240
Figure 223 - Choose Author for the Review Session .................................. 240
Figure 224 - Review Comment Parameter Options ................................... 241
Figure 225 - Setting Default Options for Review Parameters .................... 241
Figure 226 - Naming the Review Issue Data Folder ................................... 242
Figure 227 - Filter Settings for the Different Phases .................................. 242
Figure 228 - Editing the DEFAULT Review Template ................................. 244
Figure 229 - Code Review Selected via Open Perspective Command ....... 245
Figure 230 - The Code Review Perspective ................................................ 245
Figure 231 – The Code Review Table View ................................................ 246
Figure 232 – The Code Review Editor View ............................................... 247
Figure 233 - Individual Phase Selected in the Code Review Toolbar ......... 248
Figure 234 - Reviewer ID Selection Dialog ................................................. 248
Figure 235 - The Source Code Button & Drop-Down Menu ...................... 249
Figure 236 - Add Code Review Issue... ....................................................... 249
Figure 237 – A Code Review Issue in the Review Editor View ................... 249
Figure 238 - Review Marker Displayed on Editor Line 101 ........................ 250
Figure 239 - Team Phase Toolbar Button ................................................... 250
Figure 240 - Code Review Editor View Content in Team Phase ................. 251
Figure 241 - Review Markers and Tooltip Information in the Editor ......... 251
Figure 242 - Team Phase Toolbar Button ................................................... 252
Figure 243 - Code Review Editor View Content in the Rework Phase ....... 253
Figure 244 - Accessing Code Review Preference Settings ......................... 253
Figure 245 - Customize Filters Applied for All Phases ................................ 254
Figure 246 - Customize Visible Code Review Table Columns .................... 254
Figure 247 - The Rule Setting Panel ........................................................... 258
Figure 248 - Selecting Rules ....................................................................... 259
Figure 249 - Excluding files and folders from analysis ............................... 260
Figure 250 – Opening the Static Code Analysis Perspective ...................... 261
Figure 251 - Starting a Code Analysis Session ............................................ 261
Figure 252 - The Static Code Analysis Perspective ..................................... 262
Figure 253 - MISRA violations in the editor ............................................... 262
Figure 254 - The static code analysis perspective ...................................... 263
Figure 255 - The Inspection Summary View (Rule mode) .......................... 264
Figure 256 - The Inspection Summary View (Severity mode) .................... 264
Figure 257 - The Inspection Summary View (Source mode) ...................... 265
Figure 258 - The Inspection Summary View (Syntax Error mode) ............. 265
Figure 259 - The Rule Description View ..................................................... 266
Figure 260 - The Violation View ................................................................. 267
Figure 261 - Individual Violations ............................................................... 267
Figure 262 – Generate Report Toolbar Button .......................................... 267
Figure 263 - The Report Generator Dialog Box .......................................... 268
Figure 264 - The Open Folder Message Box .............................................. 268
Figure 265 - View the Generated Report ................................................... 269
Figure 266 – The Code Metric View (Module Mode)................................. 272
Figure 267 – Metric Mode Toolbar Button ................................................ 272
Figure 268 - The Code Metric View (File Mode) ........................................ 273
Figure 269 – Metric dropdown list ............................................................. 273
Figure 270 – Metric Mode Toolbar Button ................................................ 273
Figure 271 - The Code Metric View (Function Mode) ................................ 274
Figure 272 – Generate Report Toolbar Button .......................................... 275
Figure 273 - The Report Generator Dialog Box .......................................... 275
Figure 274 - The Open Folder Message Box .............................................. 275
Figure 275 – Rule Violation Markers .......................................................... 276
Figure 276 – Remove Violation Button ...................................................... 276
Figure 277 – Example of Auto-Generated Unit Tests ................................ 280
Figure 278 - Test scenarios ......................................................................... 282
Figure 279 - Switch Perspective ................................................................. 286
Figure 280 – The Unit Test Perspective ..................................................... 287
Figure 281 - The File and Folder Exclusion Dialog Box ............................... 288
Figure 282 - Selecting the Project Root Item ............................................. 289
Figure 283 - Unit Test Buttons .................................................................... 289
Figure 284 - The Function Selection Dialog Box ......................................... 290
Figure 285 – Function filtering ................................................................... 291
Figure 286 - Selecting functions to test ..................................................... 292
Figure 287 - Input and output dependencies ............................................ 293
Figure 288 - The Unit Test Perspective with test suites added.................. 294
Figure 289 - Test drivers in the unittest folder .......................................... 294
Figure 290 - Test suites and tests in the Unit Test view ............................ 295
Figure 291 – Unit Test Toolbar Buttons ..................................................... 295
Figure 292 - The test cases in the selected test ......................................... 296
Figure 293 - Define expected return codes................................................ 297
Figure 294 - Selecting test suites and tests ................................................ 298
Figure 295 – Run Unit Test Button ............................................................. 298
Figure 296 - Unit Test Launch Configurations ............................................ 299
Figure 297 – The Debugger JTAG Probe Configuration ............................. 300
Figure 298 - The Unit Test Debugger Startup Script Tab ........................... 301
Figure 299 - The progress view ................................................................. 302
Figure 300 - The Unit Test Perspective after a Test Session ...................... 302
Figure 301 - Test result overview ............................................................... 303
Figure 302 - Detailed test results ............................................................... 303
Figure 303 - Test coverage of the functions in the project ......................... 304
Figure 304 - The test coverage of the functions in a test .......................... 305
Figure 305 - The test coverage for each test case ...................................... 305
Figure 306 - The test coverage toolbar buttons ......................................... 305
Figure 307 - More complex input and output dependencies .................... 307
Figure 308 - More inputs and output dependencies ................................. 307
Figure 309 - Creating a test scenario ......................................................... 310
Figure 310 - Select functions for the scenario ........................................... 311
Figure 311 - Functions are selected ........................................................... 311
Figure 312 - Functions are added to the test scenario .............................. 312
Figure 313 - The input and output dependencies ...................................... 313
Figure 314 - The generated test scenario .................................................. 314
Figure 315 - The FTP Download Scenario .................................................. 314
Figure 316 - Input and output values for all functions in the test scenario 315
Figure 317 - The test scenario implementation in the test driver file ....... 315
Figure 318 - The test configuration form ................................................... 318
Figure 319 – Adding or importing more tests to a test suite ..................... 318
Figure 320 - Editing a test .......................................................................... 319
Figure 321 - Import and export test data ................................................... 319
Figure 322 - The Test cases view toolbar ................................................... 321
Figure 323 - The Assist Code tab ................................................................ 322
Figure 324 - Adding a setup or teardown definition .................................. 323
Figure 325 - Setup and teardown implementations in the unittest folder 323
Figure 326 - Edit the setup and teardown implementation files ............... 324
Figure 327 - Adding a setup definition to a test suite ................................ 325
Figure 328 - Select the setup definition to use .......................................... 325
Figure 329 - Advanced failure detection .................................................... 329
Figure 330 - Test driver instrumented for diagnosis output ...................... 331
Figure 331 - Character pointers are not string variables by default .......... 333
Figure 332 - Open the test driver in the editor .......................................... 334
Figure 333 - Modified test driver ............................................................... 335
Figure 334 - The test vector now accepts strings ...................................... 335
Figure 335 - Test cases with character strings ........................................... 336
Figure 336 - Test results with strings ......................................................... 336
Figure 337 - The Generate Unit Test Report Button .................................. 337
Figure 338 – Export Unit Test Result .......................................................... 337
Figure 339 – Unit Test Toolbar Buttons ..................................................... 339
Figure 340 - Reanalyze the project ............................................................ 339
Figure 341 – Switch to Coverage Perspective ............................................ 348
Figure 342 – Coverage Perspective ............................................................ 349
Figure 343 - Exclude files and folders from analysis .................................. 350
Figure 344 - Starting a Code Coverage Analysis Session ............................ 350
Figure 345 – Launch Configuration Dialog ................................................. 351
Figure 346 – Select and Configure the Debugger ...................................... 351
Figure 347 – The Startup Analyze Panel .................................................... 352
Figure 348 - Analysis, Instrumentation and Rebuild .................................. 352
Figure 349 - Code Coverage Perspective with no Analysis Results ............ 353
Figure 350 - Hardware Connection Selected ............................................. 354
Figure 351 - Visualization of Code Coverage Analysis Results ................... 355
Figure 352 - Using the MC/DC view to understand why the test coverage is
not better ................................................................................................... 355
Figure 353 - Terminating an Analysis Session ............................................ 357
Figure 354 – Return to C/C++ Perspective Toolbar Button ....................... 357
Figure 355 – Generate Report Toolbar Button .......................................... 358
Figure 356 - The Export CSV file Dialog Box ............................................... 358
List of Tables
Table 1 – Typographic Conventions .............................................................. 26
Table 2 – Exception Data Columns ............................................................. 147
Table 3 – Exception Statistics Columns ...................................................... 148
Table 4 – embOS System Variables ............................................................ 169
Table 5 – embOS Task Parameters ............................................................. 170
Table 6 – embOS Timer Parameters .......................................................... 171
Table 7 – embOS Resource Semaphore Parameters ................................. 172
Table 8 – embOS Mailbox Parameters ....................................................... 173
Table 9 – eTaskSync Task Parameters ........................................................ 175
Table 10 – FreeRTOS Task Parameters....................................................... 178
Table 11 – FreeRTOS Queue Parameters ................................................... 179
Table 12 – FreeRTOS Semaphore Parameters ........................................... 180
Table 13 – FreeRTOS Timer Parameters .................................................... 181
Table 14 – MQX Task Parameters .............................................................. 183
Table 15 – RTXC Kernel Information .......................................................... 186
Table 16 – RTXC Task List Parameters ........................................................ 188
Table 17 – RTXC Stack Info ......................................................................... 188
Table 18 – RTXC Alarm Parameters ........................................................... 189
Table 19 – RTXC Counter Parameters ........................................................ 190
Table 20 – RTXC Event Source Parameters ................................................ 191
Table 21 – RTXC Exception Backtrace Parameters .................................... 192
Table 22 – RTXC Exception Parameters ..................................................... 193
Table 23 – RTXC Mailbox Parameters ........................................................ 193
Table 24 – RTXC Mutex Parameters ........................................................... 195
Table 25 – RTXC Partition Parameters ....................................................... 196
Table 26 – RTXC Pipe Parameters .............................................................. 197
Table 27 – RTXC Queue Parameters .......................................................... 198
Table 28 – RTXC Semaphore Parameters ................................................... 199
Table 29 – ThreadX Thread Parameters ..................................................... 202
Table 30 – ThreadX Semaphore Parameters ............................................. 203
Table 31 – ThreadX Mutex Parameters ..................................................... 204
Table 32 – ThreadX Message Queue Parameters ...................................... 205
Table 33 – ThreadX Event Flag Parameters ............................................... 205
Table 34 – ThreadX Timer Parameters ....................................................... 206
Table 35 – ThreadX Memory Block Pool Parameters ................................ 207
Table 36 – ThreadX Memory Byte Pool Parameters .................................. 208
Table 37 – TOPPERS Tasks Static Information ........................................... 210
Table 38 – TOPPERS Tasks Current Status ................................................. 211
Table 39 – TOPPERS Dataqueue Static Information .................................. 212
Table 40 – TOPPERS Dataqueues Current Status ....................................... 213
Table 41 – TOPPERS Event Flags Static Information .................................. 214
Table 42 – TOPPERS Event Flags Current Status ........................................ 215
Table 43 – TOPPERS Mailboxes Static Information .................................... 216
Table 44 – TOPPERS Mailboxes Current Status.......................................... 217
Table 45 – TOPPERS Memory Pools Static Information ............................. 218
Table 46 – TOPPERS Memory Pools Current Status ................................... 218
Table 47 – TOPPERS Cyclic Handlers Static Information ............................ 219
Table 48 – TOPPERS Cyclic Handlers Current Status .................................. 220
Table 49 – TOPPERS Alarm Handlers Static Information ........................... 221
Table 50 – TOPPERS Alarm Handlers Current Status ................................. 222
Table 51 – µC/OS-III System Variables ....................................................... 226
Table 52 – µC/OS-III Task Parameters ........................................................ 227
Table 53 – µC/OS-III Semaphore Parameters ............................................ 228
Table 54 – µC/OS-III Mutexes Parameters ................................................. 229
Table 55 – µC/OS-III Message Queue Parameters ..................................... 230
Table 56 – µC/OS-III Event Flag Parameters .............................................. 231
Table 57 – µC/OS-III Timer Parameters ...................................................... 232
Table 58 – µC/OS-III Memory Partitions Parameters ................................ 233
Table 59 – Atollic TrueSTUDIO Support for the Code Review Workflow... 236
Table 60 - Code Review Toolbar Buttons ................................................... 246
Table 61 - Code Review Table View Toolbar Button Description .............. 247
Table 62 – The Code Review Editor View Toolbar Button Description ...... 248
Table 63 – Relation between Complexity Level and Risk of Bugs .............. 270
Table 64 – Risk of Introducing New Bug When Correcting Existing........... 271
Table 65 – TrueVERIFIER Terms and Concepts .......................................... 284
Table 66 – Defining Advanced Types Of Expected Values ......................... 298
Table 67 – Input macros ............................................................................. 327
Table 68 – Test result macros .................................................................... 328
Table 69 – Output macros .......................................................................... 329
Table 70 – Test result macros .................................................................... 330
Introduction
ABOUT THIS DOCUMENT

Welcome to the Atollic TrueSTUDIO® for ARM® User Guide. This document describes how to use Atollic TrueSTUDIO® and the bundled products:
Atollic TrueINSPECTOR® – static source code inspection
Atollic TrueANALYZER® – dynamic test and code coverage analysis
Atollic TrueVERIFIER® – embedded test automation
INTENDED READERS

This document is primarily intended for users of Atollic TrueSTUDIO®, or any of the bundled products Atollic TrueINSPECTOR®, Atollic TrueANALYZER® and Atollic TrueVERIFIER®.
The Atollic TrueSTUDIO® Pro and Premium products include a full version of Atollic TrueINSPECTOR®.

Atollic TrueANALYZER® and Atollic TrueVERIFIER® run only in demo mode in the Atollic TrueSTUDIO® Pro and Lite products.
DOCUMENT CONVENTIONS

The text in this document is formatted to ease understanding and to provide clear, structured information on the topics covered. The following typographic conventions apply:
Table 1 – Typographic Conventions
Style Use
Command Keyboard Command or Source Code Section.
Object Name Name of a User Interface Object (Menu, Menu Command, Button, Dialog Box, etc.) that appears on the computer screen.
Cross Reference Cross reference within the document, or to an external document.
Product Name Name of Atollic product.
Identifies instructions specific to the Graphical User Interface (GUI).
Identifies instructions specific to the Command Line Interface (CLI).
Identifies Help Tips and Hints.
Identifies a Caution.
GETTING STARTED

This section provides information on how to begin using Atollic TrueSTUDIO® for ARM®.
The following topics are covered:
Introduction
Preparing for Start
Starting the Program
Creating a New Project
Configuring the Project
Building the Project
Debugging
INTRODUCTION

Welcome to Atollic® TrueSTUDIO® and its optional add-on modules! The TrueSTUDIO product family is the most powerful C/C++ IDE available for the development of high-quality ARM® software.

Atollic believes the world deserves better embedded software, and our vision is to provide the best tools available anywhere to create high-quality embedded software. Unlike traditional C/C++ development tools, the TrueSTUDIO IDE guides you through the process of developing high-quality software. TrueSTUDIO brings the best principles from the automotive and aerospace industries to all embedded developers. We offer seven specific functionality modules to help improve software quality:
By using the above modules for software engineering, team collaboration, code analysis, automatic software testing and test quality measurement, embedded products of superior quality can be released. Make sure to use the advanced productivity and quality enhancers available in Atollic TrueSTUDIO and its optional add-on modules!
SUPPORT

Atollic provides technical support from our European and USA offices, and via our international distributors in various countries all over the world.

Technical support is granted to customers with an Atollic TrueSTUDIO Pro or Premium license and a valid Support & Upgrade Agreement (SUA). Always state the SUA license number when requesting support.
Contact your local distributor or Atollic support team.
For technical support in Europe and world-wide:
For technical support in North, Central and South America:
The Atollic support team always tries to resolve problems with the fastest possible response time. The support lead time can be reduced dramatically by taking the following important proactive steps:
Always include the SUA-license number.
For installation-related problems, always attach the install.log file located in the installation directory, %installdir%\ (e.g. C:\Program Files (x86)\Atollic\TrueSTUDIO for ARM Pro 4.3.0).
For any other type of problem, always attach the workspace log file. This file is named .log and is located in %WORKSPACE%\.metadata\
For problems related to building, attach the project build log. By default it is found in %WORKSPACE%\.metadata\.plugins\org.eclipse.cdt.ui\myProject.build.log
When requesting that a node-locked license be killed so that it can be installed on another computer, always include the Kill Code.
Include the USB key number for problems with a USB license key. The number is found on the USB license key.
LITE, PRO AND PREMIUM VERSIONS

Atollic TrueSTUDIO® is available in a free Lite version and the commercial Pro and Premium versions. The free Lite version may be used for commercial development, but has a code-size limitation, lacks some advanced features, and has a legal license limitation. The commercial Pro version has none of these limitations.
The Premium version also includes the Atollic TrueVERIFIER® unit testing tool and the Atollic TrueANALYZER® MC/DC-level code coverage analysis tool in a single seamless and smoothly integrated package. The TrueSTUDIO Premium version is designed to help embedded systems developers to achieve two important, but often mutually exclusive, goals of achieving higher software quality, while at the same time, meeting ever increasing time-to-market requirements for product releases.
For more information on the features and differences between the Lite and Pro version, please read the Atollic TrueSTUDIO® Feature Comparison Guide, which is available here:
www.atollic.com/index.php/feature-comparison
Lite Version License Summary
Atollic TrueSTUDIO® Lite is a free, reduced version of Atollic TrueSTUDIO® Pro. The Lite version contains almost all features needed for development and debugging, but lacks some of the more advanced features of the other versions. It also has a code-size limitation.

The Lite version may be used for commercial development, provided that the end-user manual of the developed product clearly states that the product was developed using Atollic TrueSTUDIO®. The manual must also include a link to www.atollic.com.

This requirement applies as long as at least one of the product's developers uses the Lite version; it may be omitted only if all developers use the Pro or Premium versions.
PREPARING FOR START

Atollic TrueSTUDIO® is built on the ECLIPSE™ framework, and thus inherits some characteristics that may be unfamiliar to new users. The following sections outline important information for users without previous experience of ECLIPSE™.

WORKSPACES & PROJECTS

As Atollic TrueSTUDIO® is built on the ECLIPSE™ framework, the ECLIPSE™ project and workspace model applies. The basic concept is outlined below:
A workspace contains projects. Technically, a workspace is a directory containing project directories or references to them.
A project contains files. Technically, a project is a directory containing files that may be organized in sub-directories.
A single computer may hold several workspaces at various locations in the file system. Each workspace may contain many projects.
The user may switch between workspaces, but only one workspace can be active at any one time.
The user may access any project within the active workspace. Projects located in another workspace cannot be accessed, unless the user switches to that workspace.
Switching workspace is a quick way of shifting from one set of projects to another set of projects. It will trigger a quick restart of the product.
In practice, the project and workspace model facilitates a well-structured hierarchy of workspaces, containing projects, containing files.
PERSPECTIVES & VIEWS

Atollic TrueSTUDIO® is a very powerful product with a great many docking views, loaded with various features. Displaying all docking views simultaneously would overload the user with information that may not be relevant to the task at hand.
To overcome this problem, docking views can be organized in perspectives, where a perspective contains a number of predefined docking views. A perspective typically handles one development task, such as:
C/C++ Code Editing
Debugging
Bug Database Access
Version Control
Code Review
Static Source Code Analysis
Code Coverage Analysis
Figure 1 - Workspaces and Projects
As an example, the C/C++ Editing perspective displays docking views that relate to code editing, such as Editor Outline and Class Browser. The Debug perspective displays docking views that relate to Debugging, such as Breakpoints and CPU Registers.
Switching from one perspective to another is a quick way to hide some docking views and display others.
Atollic TrueSTUDIO® comes with a number of pre-configured perspectives. Developers may modify these, or create entirely new ones, at will. Atollic TrueSTUDIO is designed around the philosophy that one perspective shall be used for one task, and that a perspective should not contain many GUI objects from other perspectives.

As the figure below outlines, the idea is that one perspective shall be used for each work task. It is, however, clear that one perspective, the C/C++ editing perspective, is the "master" perspective where developers spend most of their time. Therefore, that perspective is the center-point of Atollic TrueSTUDIO; developers temporarily jump to other perspectives to perform other tasks and, when finished, return to the editing and building perspective.
The C/C++ Editing perspective is also the perspective that is opened when Atollic TrueSTUDIO is started the first time.
Figure 2 – Editing Perspective
We think it is valuable to use the editing and building perspective as the master perspective, and all other perspectives used temporarily for other work tasks.
To switch perspective, select the Open Perspective toolbar button or use the menu command View, Open Perspective:
Figure 3 - Switch Perspective
Alternatively, click any of the perspective buttons in the top right corner of the main window (only the last few ones active are displayed here):
Figure 4 - Switch Perspective
Figure 5 – Toolbar Buttons for Perspectives and Views
One can always return to the default C/C++ Editing and Building perspective by clicking the Return to editor and building perspective toolbar button (D). Editing and Building is the main activity for most C/C++ developers, hence the dedicated button.
VIEWS
When Atollic TrueSTUDIO is started for the first time, the C/C++ Editing Perspective is activated by default. This perspective does not show all available docking views by default, to reduce information overload. The same principle applies to all perspectives.
To access additional features built into the product, open additional docking views. To do this, select the View toolbar button (C) or use the menu command View:
Figure 6 - View Menu toolbar button
The above list of views, while comprehensive, is still not complete. This list only contains the most common views for the work task related to the currently selected Perspective. To access even more docking views, select Other… from the list. This opens the Show View dialog box. Double click on any docking view to open it and access the additional features:
Figure 7 - Show View Dialog Box
The remaining toolbar buttons related to perspectives and views are:
Figure 8 – Toolbar Buttons for Perspectives and Views
Information Center (A) – Displays the welcome screen shown the first time the product was started after installation. More details follow in "Starting the Program" below.
Perspective (B) – A shortcut that opens a perspective of the user’s choice.
STARTING THE PROGRAM

After installing Atollic TrueSTUDIO® according to the TrueSTUDIO® Installation Guide, start the program by performing the following steps (applies to Microsoft® Windows® Vista® and Windows 7®):
1. Open the Microsoft® Windows® Start Menu
2. Click on All Programs
3. Open the Atollic folder
4. Open the TrueSTUDIO® for ARM® folder
5. Click on the Atollic TrueSTUDIO® item in this folder
6. Wait for the program to start, and the Workspace Launcher dialog to be displayed.
Figure 9 - Workspace Launcher
This dialog enables the user to select the name and location of the Active Workspace. The Active Workspace is a folder that will hold all projects currently accessible by the user. The user may open any existing project in the workspace. Any newly created projects will be stored in the workspace.
7. Enter the full name (with path) of the workspace folder to be used for the current session. Alternatively, browse to an existing workspace folder, or use the default workspace folder. This is located within the home directory of the current user, e.g. C:\Users\User\Atollic\TrueSTUDIO. If the appointed workspace folder does not yet exist, it will be created.
8. Click on the OK button
9. Wait for the Information Center window to be displayed.
Figure 10 - Information Center
The user must have write-access to the home directory to be able to start Atollic TrueSTUDIO®.
Atollic recommends locating the Active Workspace folder only a few levels below the file system root. This is to avoid exceeding the Windows® path length limit, which can cause build errors if file paths become longer than Windows can handle.
This window enables the user to quickly reach information regarding the product, and how to use it, by clicking on the corresponding hypertext links. It is not required to read all material before using the product for the first time. Rather, it is recommended to consider the Information Center as a collection of reference information to return to, whenever required during development.
The Information Center window may be reached at any time via the Help, Welcome menu command or via the Information Center toolbar button.
Figure 11 – Information Center Menu Command
Figure 12 – Information Center Toolbar Button (A)
10. Start using Atollic TrueSTUDIO® by closing the Welcome page (click the "X" in the Welcome page tab above its main window area). The Information Center window is closed, but may be restored at any time, as described above.
STARTING WITH A DIFFERENT LANGUAGE

Start Atollic TrueSTUDIO from the command line using one of the following options:

English: TrueSTUDIO.exe -nl en
Japanese: TrueSTUDIO.exe -nl ja
Korean: TrueSTUDIO.exe -nl ko
Simplified Chinese: TrueSTUDIO.exe -nl zh
MANAGING EXISTING WORKSPACES

The workspaces known to Atollic TrueSTUDIO can be managed by selecting Window, Preferences and, in the Preferences dialog, selecting General, Startup and Shutdown, Workspaces.
Figure 13 – Manage Workspaces
However, removing a workspace from this list does not remove its files, and removing the files from the file system does not remove the workspace from this list.
COPY PROPERTIES BETWEEN WORKSPACES
To copy workspace preferences from one workspace to another, select File, Export in the menu. Then, in the panel, select General, Preferences. Press the Next button and, on the next page, enable Export All and enter a filename of your choice.
Then select File, Switch Workspace and your new workspace. Atollic TrueSTUDIO will then restart and open with the new workspace.
In the menu select File, Import and in the panel select General, Preferences. Press the Next button and on the next page enable Import All and enter your file name. The preferences will now be the same in the two workspaces.
KEEPING TRACK OF JAVA HEAP SPACE

To keep track of how much Java heap space is used, select Window in the menu and then Preferences.
Figure 14 – Display Java Heap Space Status
Select the General node and enable Show heap status. The currently used and available Java heap space is then displayed in the lower right corner of the workspace window. The garbage collector can also be triggered manually there.
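The maximum heap available to the IDE is set with standard Eclipse JVM arguments in the product's .ini file (typically TrueSTUDIO.ini, next to TrueSTUDIO.exe; the file name and the values below are illustrative assumptions, not documented settings). Lines after -vmargs are passed to the Java VM:

```text
-vmargs
-Xms256m
-Xmx1024m
```

Increasing the -Xmx value can help if the heap status indicator shows the heap constantly near its limit.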
CREATING A NEW PROJECT

Atollic TrueSTUDIO® supports both Managed and Unmanaged projects. Managed projects are handled entirely by the IDE and may be configured via GUI settings. Unmanaged projects require a makefile, which must be maintained manually.
The toolbar has three buttons for quick creation of new projects.
Figure 15 – Project Creation Buttons
To create a new Managed Mode C project, perform the following steps:
1. Click the button to create a C Project (A).
As an alternative, select the File, Create new C Project… menu command to start the Atollic TrueSTUDIO® project wizard.
Figure 16 - Starting the Project Wizard
2. Wait for the C Project configuration page to be displayed.
Figure 17 - C Project Configuration
Enter a Project name (such as “MyProject”), select Embedded C Project as Project type, and Atollic ARM® Tools as Toolchains. Click the Next button.
3. Wait for the TrueSTUDIO® Hardware configuration page to be displayed.
Figure 18 - TrueSTUDIO® Hardware Configuration
Configure the hardware settings according to your evaluation board or custom board design.
The default selection for floating point operations is either Software implementation, Mix HW/SW implementation or Hardware implementation, depending on the selected microcontroller.
Some microcontrollers have floating point support implemented in hardware. For such microcontrollers, the selection Hardware implementation is more efficient, and will thus be default.
However, this setting will not work properly on devices that do not have floating point support in hardware. In such a case, Software implementation will be default.
If no specific Vendor is selected, a template linker script file named template_flash.ld will be generated. The appropriate values for your particular target board must be entered manually into this file; otherwise, the linking process will fail.
Please note that evaluation boards may have hardware switches for configuration of Code location in RAM or FLASH. The setting selected in the project wizard must correspond to the settings on the board.
The Mix HW/SW implementation is intended for projects that include libraries not compiled for hardware floating point. With this implementation, function calls do not use the FPU registers, as they would in a pure Hardware implementation. The FPU is, however, still used inside the project's own functions.
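For reference, these three options correspond to GCC's -mfloat-abi compiler option (this mapping is an assumption about the command lines the IDE generates; check the build log for the exact options used for your project). A hypothetical Cortex-M4 example:

```text
arm-atollic-eabi-gcc -mfloat-abi=soft ...                      Software implementation
arm-atollic-eabi-gcc -mfloat-abi=softfp -mfpu=fpv4-sp-d16 ...  Mix HW/SW implementation
arm-atollic-eabi-gcc -mfloat-abi=hard -mfpu=fpv4-sp-d16 ...    Hardware implementation
```

With -mfloat-abi=softfp, floating-point code uses the FPU internally but passes arguments in integer registers, which keeps it link-compatible with soft-float libraries.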
When finished, click the Next button.
4. Wait for the TrueSTUDIO® Software configuration page to be displayed.
Figure 19- TrueSTUDIO® Software Configuration
Select the desired Runtime library to be used. For information about the differences between Newlib–nano and the regular Newlib, please refer to the Newlib-nano readme file, accessible from the Information Center (Figure 10).
If the target board has a limited amount of memory, the Use tiny printf/sprintf/fprintf (small code size) setting is recommended. If there are no particular memory requirements, use the default selection.
When finished, click the Next button.
5. Wait for the TrueSTUDIO® Debugger configuration page to be displayed.
Figure 20 - TrueSTUDIO® Debugger Configuration
Atollic TrueSTUDIO® supports several different types of JTAG probes. Select the probe to be used during debugging.
When finished, click the Next button.
If using an RTOS, it is recommended to generate a system calls file and select the Fixed Heap size option. This option requires that the _Min_Heap_Size symbol is defined in the linker script (.ld) file, which is stored in the root directory of the currently selected project. The heap size defined by _Min_Heap_Size must meet the heap size required by the application.
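As an illustration, the heap symbol may be defined near the top of the .ld file like this (a minimal sketch; the sizes and the _Min_Stack_Size symbol are example values only and must be adjusted to the application's actual requirements):

```ld
/* Minimum heap and stack sizes reserved by the linker script (example values) */
_Min_Heap_Size  = 0x2000;  /* 8 KiB heap  */
_Min_Stack_Size = 0x400;   /* 1 KiB stack */
```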
6. Wait for the TrueSTUDIO® Select Configurations page to be displayed.
Figure 21 - Select Configurations
Keep the default selections. Click on the Finish button.
A new Managed Mode C-project is now created. Atollic TrueSTUDIO® generates target specific sample files in the project folder to simplify development.
7. Expand the project folder (“MyProject” in this example) and the src subfolder in the Project Explorer docking view.
Figure 22 - Project Explorer View
8. Double click on the main.c file in the Project Explorer tree to open the file in the editor.
Figure 23 – Editor View
ONE-CLICK EXAMPLE PROJECT INSTALLATION

The Atollic TrueSTORE® system is a repository with hundreds of free example projects for various evaluation boards. Atollic TrueSTUDIO® users can easily find the latest available set of example projects on our server, as well as download and install them into the Atollic TrueSTUDIO® Active Workspace, with a single mouse-click! Any example application of interest is up and running on the target hardware in less than a minute.
To find the examples relevant to a specific target board, select the Download button in the toolbar (C in the image below).
Figure 24 – Project Creation Buttons
Wait for the Atollic TrueSTORE® Dialog box to open.
Figure 25 – Atollic TrueSTORE
The Atollic TrueSTORE® requires an internet connection to work, as all example projects are stored on our internet server.
Select the project(s) of interest and click Finish. The project is imported into the Active Workspace and is immediately ready to be built and executed on the target board. The whole process typically takes less than a minute.
USING AN EXISTING PROJECT

To use an existing project in Atollic TrueSTUDIO®, double-click the .project file located within the project folder to open it.
Figure 26 – Selection of Existing Project File
Wait for Atollic TrueSTUDIO® to start, as a result of double-clicking the .project file.
CREATING A STATIC LIBRARY

To create a static library project, select File, New, C Project in the top menu, and in the wizard dialog that pops up select Static Library, Embedded C Library and Atollic ARM Tools.
Please note that if the File Browser is configured not to display file extensions, two nameless icons will appear in the file list, representing the .project and the .cproject files. The use of files without a filename is an unfortunate heritage from the ECLIPSE™ framework.
Atollic recommends that example projects obtained from outside Atollic TrueSTUDIO®, e.g. from a chip vendor, be located only a few levels below the file system root. This is to avoid exceeding the Windows® path length limit.
Figure 27 – Selection of Static Library Project
The project will then be built as an archive file named lib{project-name}.a, for instance libMyLibrary.a.
The command line tools can be used to create an archive file from an object file without first creating a library project:

1. Open a Windows command prompt (cmd.exe).

2. Change to the ARMTools\bin folder in the TrueSTUDIO installation folder: cd C:\%installdir%\ARMTools\bin

3. Run the archive command: arm-atollic-eabi-ar -r libStaticLibrary.a src\syscalls.o
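The resulting archive can be verified with the same ar tool; the -t option lists the object files it contains. The sketch below uses GNU binutils ar, which shares its interface with arm-atollic-eabi-ar, and a placeholder file standing in for a real compiled object file:

```shell
# Create a stand-in "object" file (ar archives any file; a real build produces a .o)
printf 'placeholder' > syscalls.o
# -r inserts the file into the archive, creating the archive if it does not exist
ar -r libStaticLibrary.a syscalls.o
# -t lists the archive members
ar -t libStaticLibrary.a
```

The final command prints the member name, confirming that the object file was packed into the archive.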
It is recommended to always place a library and the library code in a separate project and never include them in the main project.

If the library project should be recompiled at the same time as a project that includes it, the library project should be added as a reference to that project. Select Project, Properties and, under Project References, select the library project.
CREATING A MAKEFILE PROJECT FROM EXISTING CODE To import an existing makefile project, select File, New, Makefile Project with Existing Code.
Figure 28 – Create a Makefile Project from existing code
Enter the name of the new project and the location of the existing code.
Make sure to select <none> as the Toolchain for the Indexer Settings.
Figure 29 – Locate the code and select <none>
Then add the path to the existing toolchain to the system PATH environment variable. This can be done from within Atollic TrueSTUDIO by selecting Project, Properties and then C/C++ Build, Environment.

Locate the PATH variable in the list, select it and click Edit. If PATH cannot be located, click Add and enter PATH in the Name textbox.
Figure 30 – Edit the PATH variable
In the Value textbox, add the full path to the toolchain location, as well as any other locations containing executables invoked by the makefile. Separate the paths with ";".
CONFIGURING THE PROJECT’S BUILD SETTINGS Managed Mode projects can be configured using dialog boxes. Unmanaged Mode projects require a manually maintained makefile.
Atollic TrueSTUDIO® provides extensive GUI controls for configuration of command line tool options using a simple point-and-click mechanism.
To configure a Managed Mode project, perform the following steps:
1. Select a project in Project Explorer docking view.
2. Click on the Build settings toolbar button or select Project, Build Settings….
Figure 31 – Build Settings Toolbar Button
Figure 32 – Build Settings Menu Selection
3. The project Properties dialog box is displayed.
4. Expand the C/C++ Build item in the tree in the left column. Then select the Settings item to display the build Settings panel.
Figure 33 - Project Properties Dialog Box
5. Select panels as desired and configure the command line tool options using the GUI controls.
Advanced users may wish to enter command line options manually. This can be done in the Miscellaneous panel for any tool.
Figure 34 – Tool Settings, Miscellaneous Options
6. Some project build settings are relevant for both Managed Mode and Unmanaged Mode projects. For instance, the selected microcontroller or evaluation board may affect both the compiler options used during a Managed Mode build and the behavior of other components in Atollic TrueSTUDIO, such as the SFR docking view and the debugger.
Project build settings relevant for both Managed Mode projects and Unmanaged Mode projects are collected in the Target Settings panel.
The Build Settings are sometimes also referred to as the Build Configurations.
Figure 35 – Target Settings Dialog Box
7. When the project configuration is complete, click the OK button to accept the new settings.
Changing to a different hardware target will cause a new linker script file (.ld) to be generated, with FLASH and RAM settings adjusted to the memory size of the new target device.
However, libraries, header files, etc. will not be generated automatically for the new target! These must be added manually to the project.
Furthermore, the new linker script needs to be edited manually to include the linker settings found in the Tool Settings, C Linker, General panel.
If the target device needs to be changed, Atollic recommends generating a new project for that target. Copy the source code from the current project to the new project. This is to avoid the need for manual editing of the linker file.
DEBUG AND RELEASE BUILD SETTINGS A project created in Atollic TrueSTUDIO contains two build configurations by default: Debug and Release. Two settings differentiate them: the Debug configuration is built with debugging information and without optimization.

Settings made in the project mostly affect only the current configuration. When most of the development is done and it is time to switch to the Release configuration, many settings made during development may be missing from it. To make sure that the Release configuration contains all necessary settings, it may be easiest to create a new Release configuration, copy the settings from the Debug configuration, and then just change the debug information level and the optimization level.
1. Select the project in the Project Explorer and right click Project, Manage Build Configurations…
2. Delete the old Release configuration, or whichever configuration lacks the settings in use.
3. Click New…
4. Name the new configuration, e.g. Release.
5. Select to copy settings from the existing Debug configuration.
6. Click OK.
7. Select the new Release configuration and click Set Active.
8. Close the dialog by clicking the OK button.
Next, open Project, Properties, and navigate to C/C++ Build, Settings, Tool Settings. In the Debugging node, for the Assembler and the C/C++ Compiler, set the debug level to none. Then select an optimization level in the Optimization node for the C/C++ Compiler.

The build output folder is named after the active build configuration, so when the project is built with the new Release configuration active, the .elf file will be located in the Release folder.
COMPILER OPTIMIZATION The GNU C/C++ compiler (and hence Atollic TrueSTUDIO) has several levels of compiler optimization: -O0 for no optimization, up to -O3 for full optimization. There is also a level for size optimization (-Os), which is commonly required in embedded projects, a level for speed optimization (-Ofast), and a level that enables only optimizations that do not interfere with the debugging experience (-Og).
To enable compiler optimization in the commercial versions of Atollic TrueSTUDIO, select an optimization option from the dropdown list in the C/C++ Build, Settings, Tool Settings, C Compiler, Optimization panel in the Project Properties dialog box. The optimization options can also be set per file in the File Properties dialog box, found by right-clicking an individual file.
Figure 36 – Compiler Optimization Settings for a Project
Figure 37 – Compiler Optimization Settings for a File
The optimization setting is per build configuration. By default, the Debug configuration is built with -O0 and the Release configuration with -Os.

In addition to the simplified optimization settings mentioned above, about 100 optimization settings can be set individually using various command line options and #pragmas. Consult the compiler manual for details; it can be opened from the Information Center view or from the Windows start menu.
To define a specific optimization level for a single function, use the optimize attribute on the function:

void __attribute__((optimize("O1"))) myFunc(unsigned char data)
{
    /* The code that needs -O1 optimization */
}
LINK TIME OPTIMIZATION (LTO) With LTO, compiling an individual file does not produce ordinary object code but an intermediate internal representation, between the original source code and assembly code. When the linker performs the final link it therefore has access to more optimizable information about each file, and a globally optimized program is generated.

Because of the way this works, using the feature fully requires providing the linker tool with some extra information that is usually supplied only to the compiler tool. This extra information can be any optional extra flag that has been added to the compiler invocation.

In most cases, however, it is only required to add the following flags to the Linker tool Miscellaneous field:

-flto -ffunction-sections -fdata-sections -Os
Figure 38 – Linker LTO Settings for a Project
The optimization flag (-Os) should have the same value as the optimization flag for the compiler; see page 59 for more information.

It is also required to change the Compiler tool settings and add the -flto flag to its Miscellaneous field.
Figure 39 – Compiler LTO Settings for a Project
CHANGING TOOL CHAIN VERSION When upgrading to a new version of Atollic TrueSTUDIO, it is a good idea not to switch the tool chain immediately as well. To change to an older version of the Atollic ARM Tools tool chain, or to the PC tool chain, click the Build settings toolbar button.
Figure 40 – Build Settings Toolbar Button
Select the Tool Chain Version tab.
Figure 41 – Tool Chain Version tab
There are three options available here:
Default – This option will use the tool chain in the currently running installation of TrueSTUDIO.
Fixed TrueSTUDIO version – if other versions of TrueSTUDIO are installed on the computer, this option allows the user to select which installation the tool chain is taken from. That version will then be used even if the installation folder for the selected version is changed.
Fixed tool chain location – Used to point to a specific folder.
These settings are saved individually for each project’s build configuration, making it possible to have different build configurations use different tool chain versions. This allows, for example, a quick regression test to be set up.
To create a new Debug Build configuration for an older version of the tool chain, do the following:
1. Right click the project and select Build Configurations, Manage…
Figure 42 – Manage the Build Configurations
2. In the panel select New… to create a new Build Configuration.
3. Enter a suitable name for the new Build Configuration. Use one word without white space, such as OldToolChain, then press OK, and OK again in the Manage Configurations panel.
When working with a version control system in a team, the second option is strongly recommended for a project. That way all developers will use the same tool chain even if using different versions of Atollic TrueSTUDIO.
Figure 43 – Create New Configuration
4. In the Tool Chain Version tab it is now possible to set the Default version of the tool chain for the normal Debug Build Configuration and a Fixed TrueSTUDIO version for the OldToolChain Build Configuration.
Figure 44 – Old Tool Chain Version for the New Build Configuration
BUILDING THE PROJECT To start a build, click the Build toolbar button. Only files that have changed since the last build, or that depend on changed files or settings, will be built.
Figure 45 - Build Toolbar Button
The Build result is displayed in the Console window. At the end are the code size figures. For example:
Print size information
text data bss dec hex filename
66232 2808 4004 73044 11d54 GSM lib cb1.elf
Print size information done
The values are organized according to memory sections and areas. By default, the linker arranges the memory into the sections text, data and bss. More information is found in the linker script file (.ld).

The dec and hex figures give the total of text, data and bss, in decimal and hexadecimal respectively. Below the filename header is the name of the .elf file.
ENABLE BUILD ON SAVE To enable Atollic TrueSTUDIO to automatically build a file when it is saved, the Build Behavior setting needs to be changed.
In the top menu select Project, Properties and in the Properties panel select C/C++ Build. Open the Behavior tab and enable Build on resource save.
Figure 46 – Build on Save
REBUILD ALL To force a Rebuild of all files included in the project, click on the Rebuild toolbar button or select the menu command Project, Rebuild Project.
Figure 47 – Rebuild Toolbar Button
Figure 48 – Rebuild Active Configuration Menu Selection
HEADLESS BUILD It is possible to start a build process from the operating system command line also for Managed Mode projects. This is intended for projects that are integrated into script-controlled builds, such as nightly builds on build servers for continuous integration. The IDE GUI is never displayed in this case, and the user does not have to interact with the IDE at all.

To run a headless build on a virtual machine server, the USB-locked or the floating network license type is needed for the server. Atollic TrueSTUDIO runs only as a script during a headless build, so there is no need for the remote desktop license type.

The IDE installation folder, e.g. C:\Program Files (x86)\Atollic\TrueSTUDIO for ARM Pro 4.3.0\ide, contains the file headless.bat, which is used for running headless builds.
Option Description
-data {[uri:/]/path/to/workspace} This option is always required and selects which workspace to use for the headless build. If the selected workspace folder does not exist, it will be created automatically.
-import {[uri:/]/path/to/project} Optionally import a project into the workspace before the headless build starts.
-importAll {[uri:/]/path/to/projectTreeURI} Optionally import several projects into the workspace before the headless build starts.
-build {projname_reg_exp} Build all build configurations of the selected project. If the project name contains wildcards (? and *), all matching projects will be built.
-build {projname_reg_exp/configname} Build the selected project using only the selected build configuration. If the project name contains wildcards (? and *), all matching projects will be built.
-build all Build all configurations of all projects in the selected workspace.
-cleanBuild {projname_reg_exp} Rebuild all build configurations of the selected project. If the project name contains wildcards (? and *), all matching projects will be rebuilt.
-cleanBuild {projname_reg_exp/configname} Rebuild the selected build configuration of the selected project. If the project name contains wildcards (? and *), all matching projects will be rebuilt.
-cleanBuild all Rebuild all build configurations of all projects in the selected workspace.
-I {include_path} Additional include path to add to tools.
-include {include_file} Additional include file to pass to tools.
-D {prepoc_define} Additional preprocessor defines to pass to the tools.
-E {var=value} Replace/add value to environment variable when running all tools.
-Ea {var=value} Append value to environment variable when running all tools.
-Ep {var=value} Prepend value to environment variable when running all tools.
-Er {var} Remove/unset the given environment variable.
An option argument is parsed as a string, a comma separated list of strings, or a boolean, depending on the type of option.
Example:
headless.bat -data "C:\Users\User\headless\buildWS" -importAll "C:\Users\User\headless\checkOutDir" -cleanBuild all > "C:\Users\User\headless\build.log"

This command will create a temporary workspace folder buildWS for this build. It will import all projects from the folder checkOutDir (not copy, just import into the temporary workspace) and build all build configurations defined in each project. The build results will be stored in the folder checkOutDir. A log file, build.log, will be created in the folder headless.
LOGGING To enable build logging, right-click the project and select Properties. Then select C/C++ Build, Logging.

The logs can then by default be found in WORKSPACE_PATH\.metadata\.plugins\org.eclipse.cdt.ui\MyProject.build.log
THE BUILD SIZE After building a project, object files and an application binary file (typically in ELF format) exist under the Debug or Release folder in the Project Explorer view file tree.
To study the properties (such as code or data size) of an object file, open the Properties view.
To open the Properties view, press the Show View toolbar button and select the Properties view.
Figure 49 – Open the Properties view
Then select the object file in the Project Explorer view. The Properties view will display a large number of properties, including the code and data sizes of the object module.
To study the properties (such as code or data size) of a linked application binary file, open the Properties view and select the ELF file in the Project Explorer view.
Figure 50 – Open the Properties view
The Properties view will display a large number of properties, including the code and data sizes of the complete application.

Data is normally stored in the "data" segment and code is normally stored in the "text" segment.
CREATING A .HEX-FILE To convert your program to another output format, do the following:
1. Open up Project, Properties, C/C++ Build, Settings, Tool Settings, Other, Output format
2. Check the box Convert build output and choose a format in the dropdown menu.
Figure 51 – Output format Selection
3. Build the project
The converted output will be located in the output directory of the currently active build configuration, typically the Debug or Release directory.
Other supported file formats are: Binary, S-record, S-record with symbols and Verilog Hex Dump.
To manually create .hex and .bin files, add Post-build steps in the Build Steps tab:
arm-atollic-eabi-objcopy -O binary myfile.elf myfile.bin
arm-atollic-eabi-objcopy -O ihex myfile.elf myfile.hex
arm-atollic-eabi-objcopy -O srec myfile.elf myfile.srec
COMMAND LINE PATTERNS The Command Line Pattern is used to assemble the parts that build up the command line used to build the project.

To find it, press the Build Settings toolbar button.
Figure 52 – Build Settings Toolbar Button
In C/C++ Build, Settings, select the Tool Settings tab. Each of the tools in the toolchain (Assembler, Compiler, Linker and Other) has its own pattern that can be modified.

The pattern consists of the replaceable variables COMMAND, FLAGS, OUTPUT_FLAG, OUTPUT_PREFIX, OUTPUT and INPUTS.

The default command line pattern is:

${COMMAND} ${FLAGS} ${OUTPUT_FLAG}${OUTPUT_PREFIX}${OUTPUT} ${INPUTS}
White space and other characters are significant and are copied to the created command.
Environment variables can also be used in the pattern. They are defined in Project, Properties, C/C++ Build, Environment.
CREATE .LIST-FILES To get list files with assembler information when the files in the project are compiled, the build settings for the C/C++ compiler need to be updated.

In C/C++ Build, Settings, select the Tool Settings tab and then C Compiler. In the Expert settings Command Line Pattern, add -Wa,-aln=${OUTPUT}.list as shown below.
Figure 53 – Generate .list Files
TEMPORARY ASSEMBLY FILE Save the temporary assembly file by adding the -save-temps flag to the compiler.
In the menu select Project, Properties, C/C++ Build, Settings. Open the Tool Settings tab, then C Compiler, Miscellaneous. Add -save-temps and rebuild the project.

The assembly file will be located in the build output directory and will be called FILENAME.s. There will also be a FILENAME.i file containing the preprocessed C code.
BUILDING ONE FILE Enabling the build option that builds only a single file in a project is a bit complicated. It cannot be done while the default setting Build Automatically is enabled, and disabling that setting also disables the Build on resource save behavior.

In the top menu select Window, Customize Perspective and in the dialog open the Menu Visibility tab. Expand the Project node and enable the Build Automatically option.
Figure 54 – Enable the Build Automatically Menu Item
Press OK, then go to the Project menu and make sure Build automatically is unchecked for the project.
The popup menu option Build Selected File(s) is now enabled.
Figure 55 – Build Selected File(s)
LINKING THE PROJECT For detailed information about the linker, please read The GNU Linker manual. It can be found by pressing the Information Center toolbar button to open the Information Center view; there, locate Document center, Build tools and press the Linker link.

The manual can also be found in the Windows start menu under Atollic, TrueSTUDIO, Manuals, Command line Tools, Linker.
Figure 56 – GNU Linker manual link
This chapter will explain some of the more common problems encountered during linking.
INCLUDE LIBRARIES To include a library in a project, right-click on the project where the library will be included and select Properties, C/C++ Build, Settings. Then select the Tool Settings tab, C Linker, Libraries.
Figure 57 – Include a library
1. In the Libraries list, add the name of the library - not the path! The name is the filename without the "lib" prefix and without the file extension (.a). It is important not to include those parts of the name; this is a GCC convention.
Example: For a library-file named libmyLibrary.a add the name myLibrary.
2. In the Library Paths list, set the path to where the library is located. Do not include the name of the library in the path.
Example: ../../myLibrary/Debug, the path to the archive file of the library project myLibrary residing in the same workspace as the application project.

The included libraries can also be found by right-clicking the project, selecting C/C++ General, and opening the Libraries tab and the Library Paths tab.

When a project refers to another project, be that a library or a normal project, that information needs to be provided to Atollic TrueSTUDIO. See Referring Project on page 80 for more information.
REFERRING PROJECT Whenever one project uses code from another project, the projects should refer to each other.

In the top menu select Project, Properties and select Project References. Then mark the referred project.
Figure 58 – Set Project References
The referred project will now be rebuilt when needed, and the Indexer will be able to find functions from the library and open them. To do that, press the Ctrl key and, in the editor, click the library function where it is used. The source file in the library will then be opened in an editor.

The Call Hierarchy can also be created for the library’s functions. To view it, select the function name and press Ctrl+Alt+H. The call hierarchy will then be displayed in the Call Hierarchy view.

If a project needs to refer to a specific build of another project, instead select Project, Properties, then C/C++ General, Paths and Symbols; open the References tab and select the build configuration that the current project refers to.
Figure 59 – Set Project References
With this way of referring between specific build configurations, the involved projects will not be rebuilt more than necessary.
The references can also have priority among each other.
DEAD CODE REMOVAL Linker optimization is the process where the linker removes unused code and data sections from the output binary. Runtime and middleware libraries typically include many functions that are not used by every application, wasting valuable memory unless removed from the output binary.
To enable linker optimization, select the Remove unused code and/or the Remove unused data checkboxes in the Project wizard as appropriate (at project creation time).
LINKER SCRIPT The linker (.ld) script file defines where things end up in memory.
For example the following part:
/* The program code and other data goes into FLASH */
.text :
{
. = ALIGN(4);
*(.text) /* .text sections (code) */
*(.text*) /* .text* sections (code) */
*(.rodata) /* .rodata sections (constants, strings, etc.) */
*(.rodata*) /* .rodata* sections (constants, strings, etc.) */
*(.glue_7) /* glue arm to thumb code */
*(.glue_7t) /* glue thumb to arm code */
*(.eh_frame)
KEEP (*(.init))
KEEP (*(.fini))
. = ALIGN(4);
_etext = .; /* define a global symbol at end of code */
} >FLASH
This tells the linker to put all .text, .rodata etc. sections in the FLASH region. It is possible to define new memory regions and to decide where a specific code section shall end up. Please read the Linker manual, accessible from Manuals, Command line tools, Linker in the TrueSTUDIO Windows start menu entry, for details about how the linker works. Sections 3.6 and 3.7 may be of particular interest.

When building a Project Wizard generated project in TrueSTUDIO, a .map file is created in the build output folder (Debug/Release). This file shows exactly at which address the linker has placed each piece of code and data.

The initialization of the .data section is done in the startup code generated by the TrueSTUDIO Project Wizard. It simply copies the initial values of the variables to RAM.
GENERATE A NEW LINKER SCRIPT From time to time there is a need for a new linker script, for instance when changing the target platform for an existing project.

To automatically create a new linker script, start by selecting the project to add the script to. Right-click the project and select New, Other…
Figure 60 – Select New, Other…
1. In the dialog that then pops up select C/C++ and then Linker script.
Figure 61 – Select New, Other…
2. Click Next.
3. The target must now be selected properly. This is the chance to select a new target. The current settings can be found by right-clicking the project and selecting Properties, C/C++ Build, Settings.
4. Click Finish.
The script is now generated.
To use the new script, right-click the project, select Properties, and in the dialog select C/C++ Build, Settings; in the panel select the Tool Settings tab, C Linker, General.
Enter the name of the new linker script.
Figure 62 – Enter the name of the script
The check-box Do not use standard start files gives two options to execute user-defined code before entering main, instead of modifying the Reset-handler. Both are triggered by the libc_init_array call in the startup code.
Option 1: Constructors for objects, or constructor functions, are placed in a list called .init_array. These functions are then executed one by one by libc_init_array.

Option 2: Add code to a .init section. libc_init_array will run the _init function, which executes all instructions added to the .init section. The crti and crtn files contain the start and end of the _init function.
PLACE CODE IN A NEW MEMORY REGION Modify the memory regions of the .ld linker script. This is an example of a linker script file containing the following memory regions:
MEMORY
{
FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 128K
RAM (xrw) : ORIGIN = 0x20000000, LENGTH = 16K
MEMORY_B1 (rx) : ORIGIN = 0x60000000, LENGTH = 0K
}
Add a new region by editing the file. In this example the IP_CODE region is added.
MEMORY
{
FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 64K
IP_CODE (x) : ORIGIN = 0x08010000, LENGTH = 64K
RAM (xrw) : ORIGIN = 0x20000000, LENGTH = 8K
MEMORY_B1 (rx) : ORIGIN = 0x60000000, LENGTH = 0K
}
Place the following code a bit further down in the script, between the .data { ... } and the .bss { ... } sections:
.ip_code :
{
*(.IP_Code*);
} > IP_CODE
This tells the linker to place all sections named .IP_Code* into the IP_CODE memory region, which is specified to start at target memory address 0x08010000.
In the C-code, tell the compiler which functions should go to this section by adding __attribute__((section(".IP_Code"))) before the function declaration.
Example:
__attribute__((section(".IP_Code"))) int placed_logic()
{
/* TODO - Add your application code here */
return 1;
}
The placed_logic() function will now be placed in the IP_CODE memory region by the linker.

PLACE VARIABLES AT SPECIFIC ADDRESSES The first step in placing variables at a specified address in memory is to create a new memory region in the linker script (the .ld file). Consider an example of a linker script file containing the following memory regions:
MEMORY
{
FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 128K
RAM (xrw) : ORIGIN = 0x20000000, LENGTH = 16K
MEMORY_B1 (rx) : ORIGIN = 0x60000000, LENGTH = 0K
}
Add a new memory region by editing the file. In this example the MYVARS region is added.
MEMORY
{
FLASH (rx) : ORIGIN = 0x08000000, LENGTH = 64K
MYVARS (x) : ORIGIN = 0x08010000, LENGTH = 64K
RAM (xrw) : ORIGIN = 0x20000000, LENGTH = 8K
MEMORY_B1 (rx) : ORIGIN = 0x60000000, LENGTH = 0K
}
Now add the memory section. Place the following a bit further down in the script, between the .data { ... } and the .bss { ... } sections:
.myvars :
{
*(.myvars*);
} > MYVARS
This tells the linker to place all sections named .myvars* from the input into the .myvars output section in the MYVARS memory region, which is specified to start at target memory address 0x08010000.
A section can be called almost anything except some predefined names such as data.
Now the variables need to be put in that region.
To be absolutely certain that the order stays the same even if the variables are spread over multiple files, add each variable to its own section, and then define the order of the variables in the linker script.

For example, the C code could be:
__attribute__((section(".myvars.VERSION_NUMBER"))) uint32_t VERSION_NUMBER;
__attribute__((section(".myvars.CRC"))) uint32_t CRC;
__attribute__((section(".myvars.BUILD_ID"))) uint16_t BUILD_ID;
__attribute__((section(".myvars.OTHER_VAR"))) uint8_t OTHER_VAR;
And then decide the order in the linker script by adding the specially named sections like:
.myvars :
{
*(.myvars.VERSION_NUMBER)
*(.myvars.CRC)
*(.myvars.BUILD_ID)
*(.myvars*);
} > MYVARS
LINKING IN A BLOCK OF BINARY DATA The scenario is that a file with binary data, named ../readme.txt, needs to be placed in memory.

The reference in the C file might then look like this, using the incbin directive and the allocatable ("a") flag on the section:
asm(".section .binary_data,\"a\";"
".incbin \"../readme.txt\";"
);
That section is then added in the linker script, with instructions that it should be placed in flash:
.binary_data :
{
_binary_data_start = .;
KEEP(*(.binary_data));
_binary_data_end = .;
} > FLASH
This block can then be accessed from the C code with code similar to the following:
extern int _binary_data_start;
int main(void)
{
int *bin_area = &_binary_data_start;
…
}
MANAGING EXISTING PROJECTS
THE INDEX In Atollic TrueSTUDIO there is an important mechanism called the Indexer, which creates a database of the source and header files. That database is called the Index and provides the information for all navigation and content assist in Atollic TrueSTUDIO: where a function is located and used, where a preprocessor symbol is defined, and where global variables are defined. Try pressing Ctrl and clicking on a function that is used somewhere in the code; the editor will jump to its definition. Hovering over it will display its comments and documentation.
Figure 63 – The Indexer Picks up the Documentation for a Function
The Indexer runs in the background and keeps track of all changes in the project.

The Indexer is normally customized per workspace, but can also be configured on a per-project basis. To customize the Indexer per workspace, in the menu select Window, Preferences and in the Preferences dialog select C/C++, Indexer.

It is a good idea to have only the currently active projects open; close the rest of the projects in the workspace. This makes the Indexer work faster and reduces the memory used by TrueSTUDIO. It also makes it easier to find errors and bugs in the code.

A project is closed by right-clicking it and selecting Close Project.
Figure 64 – Workspace Indexer Settings
To customize the Indexer settings per project, right-click the project and select Properties. In the Properties dialog select C/C++ General and then Indexer.
Figure 65 – Project Indexer Settings
Select Enable project specific settings. It is a good idea to skip large files and files with many hundreds of includes; this prevents the Java heap from running out of space. If the project is version-controlled, it is also a good idea to store the settings within the project.
From time to time the Index fails to keep track of the changes in the project, most likely because some include files have been changed or missed. The Index database then needs to be rebuilt. To do that, right-click the project and select Index, Rebuild.
If this does not solve the problem, or if the Indexer's database file (the .pdom file) is corrupted, open the workspace folder and locate the hidden directory .metadata\.plugins\org.eclipse.cdt.core. Delete the file YOUR_PROJECT_NAME.pdom and restart Atollic TrueSTUDIO. The Index is then rebuilt from scratch.
The most likely reason for a corrupted .pdom file is that TrueSTUDIO crashed during indexing. That can happen if Atollic TrueSTUDIO runs out of Java heap space; see Keeping Track on Java Heap Space on page 41 for more information about the Java heap.
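A shell sketch of the deletion step, run from the workspace root with TrueSTUDIO closed (the project name is a placeholder; on Windows, use del in a Command Prompt):

```shell
# Location of the Indexer database inside the workspace
PDOM=".metadata/.plugins/org.eclipse.cdt.core/YOUR_PROJECT_NAME.pdom"

# Delete it; TrueSTUDIO rebuilds the Index from scratch on the next start
rm -f "$PDOM"
```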
FINDING INCLUDE PATHS, MACROS ETC. For the Indexer to work correctly, it needs to be fed information about all symbols and included files. The mechanism that provides this information is called Scanner Discovery. It uses Language Setting Providers to try to provide the information automatically.
The preferences for the Scanner Discovery mechanism can be found by selecting Window in the menu and then Preferences.
Figure 66 – Scanner Discovery Settings
In the Preferences panel select C/C++, Build, Settings and then to the right the Discovery-tab.
A list of the available Language Setting Providers is then displayed. A Language Setting Provider is a specialized mechanism to discover settings. Some providers call the tool chain to obtain built-in compiler symbols and include paths; others scan the build output for that information. The found entries are then stored in the workspace (shared) or per project. The Atollic ARM Tools Language Settings provider is not shared between workspaces.
By selecting a provider, that individual provider can be configured. If the provider has found and is sharing entries in the workspace, those entries can be removed by pressing Clear Entries. That can be a good idea if the paths to included files are wrong. Enabling Allocate console in the Console View sends output to the console each time the provider runs.
The project- and build-configuration-specific settings and entries can be found by selecting the project, then selecting Project, Properties in the menu, and in the Properties panel for the project selecting C/C++ General, Preprocessor Include Paths, Macros etc.; select the Providers tab first.
Note that the Scanner Discovery mechanism has been rewritten: the old Discovery Options property for projects is deprecated and replaced by the new Preprocessor Include Paths, Macros etc. property.
Figure 67 – Preprocessor Include Paths, Macros etc.
When using a version control system it is best to enable Store entries in project setting folder.
The Entries tab displays the found entries for the different providers. At the top are the CDT User Setting Entries; selecting this allows user-defined entries to be added.
By pressing Restore Default all locally stored entries will be removed.
It is recommended to always press Restore Default when changing tool chain version or upgrading Atollic TrueSTUDIO. This replaces the old method of clearing discovered entries found in the deprecated Discovery Options properties.
DEBUGGING Atollic TrueSTUDIO® includes a very powerful graphical debugger based on the GDB command line debugger. Atollic TrueSTUDIO® also bundles GDB servers for some of the supported JTAG probes, including the ST-LINK, SEGGER J-Link, and P&E Micro JTAG probes.
Atollic TrueSTUDIO® automatically starts and stops the GDB server as required during debugging, thus creating a seamless integration of the GDB server.
To prepare for debugging with an ST-LINK JTAG probe connected to your electronic board, perform the following steps:
1. Verify that the RAM and FLASH configuration switches on the target board are set to match the memory configuration of the Atollic TrueSTUDIO® project. Note: Not all boards have such configuration options.
2. Determine whether the board supports JTAG-mode or SWD-mode debugging, or both, and if Serial Wire Viewer (SWV) operation is supported. Note that the physical connector for the JTAG probe may be identical, regardless of the modes supported. Consult the hardware Circuit Diagram or a Hardware Designer within your organization to determine the actual debug modes supported.
3. Connect the JTAG cable between the JTAG probe and the target board.
4. Connect the USB cable between the PC and the JTAG probe.
5. Make sure the target board has a proper power supply attached.
Once the steps above are performed, a debug session in Atollic TrueSTUDIO® can be started.
The settings and information in this Debugging chapter mainly cover the Segger J-Link and ST-LINK probes. For more information about the P&E Micro JTAG probe, please read this application note: http://atollic.com/download/applicationnotes/AN1205_Using_PEMicro_gdbserver_Guide_Draft.pdf
STARTING THE DEBUGGER Perform the following steps to start the debugger:
1. Select your project in Project Explorer docking view to the left.
2. Click on the Debug toolbar button (the insect icon) or press the F11 key to start the debug session.
Figure 68 – Start Debug Session Toolbar Button
Alternatively, start the debug session by right-clicking on the project name in the Project Explorer docking view. Then select Debug As, Embedded C/C++ Debugging from the context menu.
3. The first time debugging is started for a project, Atollic TrueSTUDIO® displays a dialog box that lets the user confirm the debug configuration before launching the debug session. After the first debug session has been started, this dialog box is not displayed again.
Figure 69 - Debug Configuration Dialog Box
The debug configurations can also be reached by clicking the Configure Debug toolbar button.
Figure 70 – The Configure Debug Toolbar Button
4. The Main panel contains information on the project and executable to debug. The settings in the Main panel do not normally have to be changed.
5. Click on the Debugger panel to display it. The panel contains information on the JTAG probe to use, its configuration, and how to start it. Some settings are probe-specific.
6. Open the Debug probe drop down list. Select the JTAG probe to be used during the debug session.
Figure 71 - Debug Configuration, Debugger Panel for the SEGGER J-Link
Figure 72 - Debug Configuration, Debugger Panel for the ST-Link
7. Select debug probe Interface: SWD or JTAG, depending on the capabilities of the target board and the selected JTAG probe.
8. If the SWD interface was selected in step 7, please proceed as follows; otherwise skip to step 9.
For the ST-Link JTAG probe:
The SWV settings include the option Wait for sync packet. Enabling this option ensures that a larger part of the received data packets are correct (complete), but may also lead to an increased number of packets being lost, especially if the data load (number of packets) is high.
For the SEGGER J-Link JTAG probe:
The initial speed of the debug connection can be configured. Atollic recommends starting at an initial speed of 4000 kHz. If communication does not work as expected at that speed,
please try another value. Proceeding stepwise in this manner leads to a quicker launch of the debug session.
The JTAG Scan Chain settings are specific to the JTAG interface and are thus disabled.
To use tracing, select the appropriate Trace system, such as the Serial Wire Viewer (SWV) feature or the Embedded Trace Buffer (ETB).
9. If the JTAG interface was selected in step 7, please proceed as follows; otherwise skip to step 11.
Some JTAG probes can be used for multi target and multi core debugging in a JTAG Scan Chain. This requires the configuration of JTAG Scan Chain settings.
In most cases, Atollic TrueSTUDIO® is able to automatically detect these settings, in which case the Auto option has been selected.
If manual configuration is required, please select the Manual option. Then select Position and IRPre for each core. For more information, please refer to the documentation from the microcontroller manufacturer and/or the manufacturer of the target board.
10. If any other debug probe than the ST-Link JTAG probe was selected in step 6, the following Misc settings are relevant:
Atollic TrueSTUDIO® is able to automatically recognize and launch J-Link scripts at the start of a debug session. If a script is needed to debug a wizard-created project, the wizard will also automatically create one.
To manually select the J-Link script to be launched, please enable the Use J-Link script file option and browse to the desired script file.
To be able to use the Live Expressions view during debugging, the Live Expression mechanism has to be enabled during startup. It is enabled by default.
11. Click on the Startup Scripts panel to display it.
Figure 73 - Debug Configuration, Startup Scripts Panel
12. The Startup Script panel contains the initialization scripts that are sent to the GDB debugger upon debugger start. The scripts can contain any GDB or GDB server commands that are compatible with the application, JTAG probe and target board. The Startup Script tab is also where GDB script programs are defined.
The Target Hardware Initialization tab is for the script used to initialize the hardware, and the Target Software Startup Scripts tab is for the scripts used to initialize the software. In most cases the Target Hardware Initialization script can be empty, but if some hardware needs to be configured before software can be loaded to the target, commands can be added here. For example, in some systems the external data/address bus and DRAM refresh control need to be initialized before software can be loaded.
The Debug tab contains the script used for normal debugging, and the Analyze and Unit Test tabs contain GDB scripts used by the Atollic TrueANALYZER® and Atollic TrueVERIFIER® products. In this tutorial, leave the default selections unaltered.
13. Click on the OK button to start the debug session.
14. Atollic TrueSTUDIO® launches the debugger, and switches to the Debug perspective, which provides a number of docking views and windows suitable for debugging.
Figure 74 – Debug Perspective
MANAGING THE DEBUG CONFIGURATIONS A majority of Atollic TrueSTUDIO® users will focus on the Debug build configuration, in order to build, download, and investigate the behavior of the software during execution. The build configuration named Debug has two important properties:
Complete symbolic information is emitted by the tool chain to help the user navigate the information in the source code, during the debug process.
The lowest level of optimization is normally used, to maintain a direct relationship between source code and machine code. If too high an optimization level is used, large portions of the generated machine code may be removed or rearranged during optimization. This limits the ability to map source code to machine code and consequently makes it harder to follow the execution at source code level in a debugger.
When the software is considered to behave as required, a Release build configuration, with no symbolic debug information, and a high level of optimization, is usually built.
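In terms of GCC options (the compiler used by the TrueSTUDIO tool chain), the difference between the two configurations typically boils down to flags such as the following. This is a sketch only; the exact defaults generated for a project may differ:

```
# Debug build configuration: full symbolic debug information, no optimization
CFLAGS_DEBUG   = -g3 -O0

# Release build configuration: no debug information, higher optimization
CFLAGS_RELEASE = -O2
```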
After switching from the Debug to the Release build configuration, the target board can be programmed by launching a debug session. During this process, caution must be exercised to prevent unexpected results.
Care must be taken to understand how Atollic TrueSTUDIO® determines which executable image will be loaded into the target with the current project settings.
It is possible to create multiple debug launch configurations. To do this, click on the Configure Debug toolbar button.
Figure 75 – The Configure Debug Toolbar Button
This brings up the list of existing debug launch configurations. Right-clicking an existing configuration shows options to create a new configuration, duplicate the existing one, or delete it.
The easiest way to create a new configuration is to duplicate an existing one, edit the configuration settings in the dialog box, and then rename it. In this way multiple debug launch configurations are easily created. The user may toggle among the debug launch configurations in the list, and launch the most suitable session for the task at hand.
If the user does not explicitly choose a debug launch configuration from the existing list, Atollic TrueSTUDIO® launches the most recently used debug launch configuration.
Assume that a user has created a build configuration named Debug, and a debug launch configuration that loads the ELF-file, created by the Debug build, to the target. Assume further that the user launches a debug session to debug this ELF-file.
Following this, the user switches to the build configuration named Release, and launches a new debug session by clicking on the debugging icon. Atollic TrueSTUDIO® will fetch the most recent debug launch configuration, which specifies that the ELF-file from the Debug build configuration, and not the Release ELF-file, is to be programmed into the target.
Atollic TrueSTUDIO® has no means of automatically selecting the ELF-file associated with the currently active build configuration (Debug or Release), when a debug session is started. The build image used will always be the one specified in the debug launch configuration, regardless of the active build configuration.
This behavior is different from some other development environments that automatically reconfigure the debug launch mechanism, to use the ELF-file from the currently active build configuration.
In Atollic TrueSTUDIO®, the user must create a debug launch configuration that explicitly refers to the particular ELF-file that is to be loaded, when a debug session is started.
Example: The user generates a project from the Project Wizard and builds an ELF-file using the build configuration named Debug. The debug session configuration dialog box shows the location of the ELF-file:
Figure 76 – The target ELF-file in Debug Session Configuration
To create a debug launch configuration that refers to the Release ELF-file, instead of the Debug ELF-file, simply change Debug in the above path to Release. It is recommended to rename the debug launch configuration to clearly mark it as a Release configuration.
To load the Release ELF-file into the target, start a debug session based on this debug launch configuration.
If desired, other properties of the new debug launch configuration can be edited as well. For example, the temporary breakpoint at the first line of main() may be omitted by commenting out the corresponding line in the GDB initialization script. This is done via the Debug dialog box, in the Startup Scripts, Target Software Startup Scripts panel.
CUSTOMIZE THE DEBUG PERSPECTIVE The Debug perspective and other perspectives in Atollic TrueSTUDIO® can be enhanced with several toolbar buttons and menus by selecting the Window, Customize Perspective menu command.
Figure 77 – Customize Perspective Dialog Box
DEBUGGING Once the debug session has been started, Atollic TrueSTUDIO® switches automatically to the Debug perspective, sets a breakpoint at main(), resets the processor, and executes the startup code until execution stops at the first executable program line inside main().
The Debug perspective is now active. The next program line to be executed is highlighted in the source code window. A number of execution control functions are available from the Run menu:
Figure 78 - Run Menu
Alternatively, the execution control commands are available in the Debug view toolbar.
Figure 79 - Run Control Command Toolbar
TERMINATE, REBUILD AND RE-LAUNCH
By pressing this toolbar button, the current debug session is terminated, the source code is built (modified source code only), a new build image generated and the debug session is re-launched – all with just one mouse-click.
Figure 80 – Terminate, Rebuild and Re-launch Toolbar Button
DISASSEMBLY VIEW
A common user action, not available from the Run menu, is to switch between C/C++ level stepping in the C/C++ source code window, and assembler level instruction stepping in the Disassembly view.
Click on the instruction stepping button to activate assembler level instruction stepping in the Disassembly view. Click it once more to return to C/C++ level stepping in the C/C++ source code window.
Figure 81 – Instruction Stepping Button
Figure 82 – Disassembly View
By right-clicking in the left part of the view, the Function Offset can also be displayed.
BREAKPOINTS
A standard code breakpoint at a source code line can easily be inserted by double-clicking in the left editor margin, or by right-clicking the mouse in the left margin of the C/C++ source code editor. A context menu will appear in the latter case.
Figure 83 - Toggle Breakpoint Context Menu
Select the Toggle Breakpoint menu command to set or remove a breakpoint at the corresponding source code line.
More complicated types of breakpoints, such as watchpoints and event breakpoints (for PC projects), are configured in the Breakpoints view.
Figure 84 – Breakpoints View
CONDITIONAL BREAKPOINT When a normal breakpoint is set, the program breaks each time that line is reached. If that is not the desired behavior, a condition can be set on the breakpoint that determines whether the program should actually break there or not.
Set a breakpoint at a line. Right-click it and open the Breakpoint Properties... The Breakpoint Properties can also be opened from the Breakpoints view.
The following view is opened.
Figure 85 – Breakpoints Properties
Enter a condition. In the example below the condition is “g1==100”, where g1 is a global variable; a local stack variable can also be used.
Figure 86 – Conditional Breakpoint
What happens when running now is that the GDB server breaks each time the line is executed, but GDB tests the condition and resumes execution if the variable g1 is not equal to 100. This method can be used when debugging an RTOS with several tasks, if the RTOS kernel has a variable the breakpoint condition can test to determine which task is running. The only drawback of this method is that it takes some time for GDB to evaluate the condition.
The conditions are written in C style, so more complex conditions such as “g1%2==0” can also be used.
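The same conditions can also be set from a GDB console or a Target Software Startup Script; a sketch using standard GDB commands (the file name, line number, and breakpoint number are hypothetical):

```
# Break at main.c line 42 only when the condition holds
break main.c:42 if g1 == 100

# Attach a condition to an already existing breakpoint, number 2
condition 2 g1 % 2 == 0

# Remove the condition again (the breakpoint itself remains)
condition 2
```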
EXPRESSIONS
The Expressions view displays many different types of data, including global variables, local variables and CPU core registers. The Expressions view also allows users to create mathematical expressions that are evaluated automatically, such as (Index * 4 + Offset).
The information is updated whenever the debug execution is halted.
Figure 87 – Expressions View
An expression is displayed in many formats simultaneously, and the view can parse complicated data types and display complex data types like a C-language struct. Furthermore, CPU core registers may be added to the view, in addition to local and global variables.
Users may drag and drop variables from the editor into the Expressions view. This applies to complex data types as well.
Figure 88 – Drag and Drop of Variable to the Expressions View
The value of variables and writeable registers may also be changed via the Expressions view.
LIVE EXPRESSIONS
The Live Expressions view works much like the Expressions view, with the exception that all expressions are sampled live during debug execution.
The view displays many different types of global variables. It also allows users to create mathematical expressions that are evaluated automatically, such as (Index * 4 + Offset).
Figure 89 – Live Expressions View
An expression is displayed in many formats simultaneously, and the view can parse complicated data types and display complex data types like a C-language struct.
The sample speed is determined by the number of Expressions being sampled. An increased number of Expressions being sampled will result in a slower sample rate.
Only one number format is used at a time, to speed up the sampling. To change the format, use the drop-down arrow.
Figure 90 – Live Expressions view Number Format
LOCAL VARIABLES
The Variables view auto-detects and displays the values of local variables. It provides extensive information about each variable, such as its value in hex/dec/bin format. The contents of complex variable types are also displayed.
The Live Expressions view requires a Segger J-Link probe and a Segger J-Link GDBServer v4.78h or later.
To be able to use the Live Expressions view during debugging, the Live Expression mechanism has to be enabled during startup. This is enabled by default when a Segger J-Link probe is selected in the debug configuration.
Please read the Starting the Debugger section for more information.
Figure 91 – Variables View
FILL MEMORY WITH A BYTE PATTERN
In the Memory view and the Memory Browser view there is a toolbar button called Open Memory Fill Dialog.
Figure 92 - The Memory Fill toolbar button
The Memory Fill dialog is opened when the toolbar button is pressed.
Figure 93 - The Memory Fill dialog
The filled area is up to, but not including, the end address.
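In other words, the dialog fills the half-open range [start address, end address). A small C sketch of the same semantics, with the hypothetical target addresses replaced by a host-side buffer so it can run anywhere:

```c
#include <assert.h>
#include <string.h>

/* Fill [start, end) with a byte pattern; the end address itself is
   left untouched, matching the Memory Fill dialog's behavior. */
static void fill_range(unsigned char *start, unsigned char *end,
                       unsigned char pattern)
{
    memset(start, pattern, (size_t)(end - start));
}
```

For example, filling with start 0x2000 and end 0x2004 writes exactly four bytes.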
SFRS
Special Function Registers (SFRs) can be viewed, accessed and edited via the SFRs view. The view displays the information for the current project. It will change its content if another project is selected.
To open the view, select the View, SFRs menu command.
Figure 94 - SFRs Menu Command
The SFRs view can also be useful in the C/C++ Editing perspective; however, then only the names and addresses of the registers are displayed.
Figure 95 - SFR View
Figure 96 – SFR View Buttons
The toolbar buttons are found at the top right corner of the view. The Base format buttons (A) are used to change the base in which the register values are displayed.
The Configure SVD settings button (B) opens up the CMSIS-SVD Settings Properties Panel for the current project.
Figure 97 – CMSIS-SVD Settings Properties Panel
Two CMSIS-SVD (System View Description) data files can be specified for the project. All SVD files must comply with the syntax outlined in the CMSIS-SVD specification found on the ARM® website. If this requirement is not met, the SFR view is likely not to show any register information.
The Device file field should point to the device description file, which includes the information for the whole device. Other views may fetch information from the SVD file specified in this field; Atollic therefore recommends only using this field for SVD files containing a full system description. This field typically only changes if the microcontroller vendor releases a new version of the SVD files. Updated files can be obtained from the CMSIS category on http://www.ARM.com.
The Custom file field can be used to define special function registers related to custom hardware, in order to simplify viewing of different register states. Another possible use case is to create an SFR favorites file containing a subset of the content in the Device file, for example frequently checked registers. If a Custom file is specified, a new top node containing the register information from the Custom file is created in the SFR view.
Both fields may be changed by the user and both fields may be used at the same time.
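A heavily trimmed sketch of what a Custom SVD file can look like (the peripheral and register names are made up; a real file must follow the full CMSIS-SVD schema, including its mandatory elements):

```xml
<device schemaVersion="1.1">
  <name>MyCustomRegs</name>
  <addressUnitBits>8</addressUnitBits>
  <width>32</width>
  <peripherals>
    <peripheral>
      <name>MYPERIPH</name>
      <baseAddress>0x40001000</baseAddress>
      <registers>
        <register>
          <name>CTRL</name>
          <addressOffset>0x0</addressOffset>
          <size>32</size>
          <access>read-write</access>
        </register>
      </registers>
    </peripheral>
  </peripherals>
</device>
```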
FAULT ANALYZER
The Fault Analyzer view helps developers identify and resolve hard-to-find system faults that occur when the CPU has been driven into a fault condition by the application software. The fault analyzer feature interprets information extracted from the Cortex-M nested vectored interrupt controller (NVIC) to identify the reason for the fault.
Some conditions that trigger faults are:
accessing invalid memory locations
accessing memory locations on misaligned boundaries
executing an undefined instruction
division by zero
Within the debugger, after a fault has occurred, the code line where the fault occurred will be displayed. The user can view the reasons for the error condition. Faults are broadly categorized into bus, usage and memory faults.
Bus faults occur when an invalid access attempt is made across the bus, either of a peripheral register or a memory location.
Usage faults are the result of illegal instructions or other program errors.
Memory faults include attempts to access an illegal location, or violations of rules maintained by the memory protection unit (MPU).
To further aid fault analysis, an exception stack frame visualization option provides a snapshot of the MCU register values at the time of the crash. Isolating the fault to an individual instruction allows the developer to reconstruct the MCU condition at the time the faulty instruction was executed.
In the Debugger perspective the Fault Analyzer view is opened from the menu. Select the menu command View, Fault Analyzer or use the toolbar icon Show View to open a drop down list; then select Fault Analyzer.
FAULT ANALYZER VIEW The Fault Analyzer view has five main sections which can be expanded and collapsed. The sections contain different kinds of information to help understand why a particular fault has occurred. The sections are Hard Fault Details, Bus Fault Details, Usage Fault Details, Memory Management Fault Details and Register Content During Fault Exception.
TERMINAL VIEW
A terminal is included to allow I/O communication with the target using Serial/SSH/Telnet communication.
Figure 98 – Terminal View
It can be located by selecting the Open View toolbar button and then select Terminal in the dropdown list.
Figure 99 – Terminal Toolbars
To start using the terminal, press button A. This will open up the Terminal Settings Dialog.
Figure 100 – Terminal Settings
Select what type of connection is preferred. That will most likely be Serial communication.
For more information on how to redirect I/O to the Terminal, see the chapter about I/O redirection on page 123.
STOPPING THE DEBUGGER When the debug session is completed, the running application must be stopped.
1. Stop the target application by selecting the Run, Terminate menu command, or by clicking on the Terminate toolbar button in the Debug view.
Figure 101 - The Terminate Menu Command
2. Atollic TrueSTUDIO® now automatically switches to the C/C++ editing perspective.
Figure 102 - C/C++ Editing Perspective
UPGRADING THE GDB SERVER Some GDB probe manufacturers, such as Segger, upgrade their GDB servers more frequently than new versions of Atollic TrueSTUDIO are released. To use the latest version, download it from the manufacturer's website and install it in the preferred folder.
Then change the setting that specifies where the server is installed. Select the top menu Window, Preferences, then open Run/Debug, Embedded C/C++ Application, Debug Hardware, and the name of the GDB probe used. The path to the newly installed GDB server can be entered there.
Figure 103 – Changing the path to the GDB Server
CONFIGURE SEGGER’S GDB SERVER Segger’s GDB server can be configured in various ways, for example for logging and flash handling.
Perform the following steps to configure Segger’s GDB server.
Connect the JTAG probe to the computer.
Open a command window (cmd) in Windows and move to the folder for the installed GDB server:
cd %TrueSTUDIO installation folder%\Servers\J-Link_gdbserver
(or where the GDB server is installed).
Start the GDB server with
JLINK.exe
There should now be a new icon in the Windows notification area (by default in the lower right corner of Windows) for the Segger J-Link GDB server. Right-click it to open the Control panel for the GDB server.
Figure 104 – GDB Server Control Panel – General Tab
A good idea is now to deselect Start minimized and Always on top in the General tab.
CHANGE FLASH CACHING
The Memory view does not always reflect exactly what is stored in flash on the target: if the running program itself alters the flash contents, the Memory view does not reflect the change.
To fix this, go to the Settings tab and deselect Allow caching of flash contents.
ENABLE LOG FILE
Perform the following steps to enable logging to a log file in Segger’s GDB server:
1. Open the Control Panel as described above.
2. Then open the Settings tab and enter a name of a log file.
3. Close, stop the running GDB server and restart debugging.
4. The GDB server should now save information in the new log file.
Figure 105 – GDB Server Control Panel – Settings Tab
SETTINGS COMMAND LINE OPTION
There is a command line option to the Segger GDB server to include a settings file when debugging. To make use of this in Atollic TrueSTUDIO, set the Debug Configuration to Connect to remote GDB server.
Figure 106 – Debug Configuration – Connect to remote GDB server
Then start the GDB server manually from the command line.
A typical command line for an STM32F107 evaluation board is the following:
JLinkGDBServerCL.exe -port 2331 -CPU Cortex-M -device STM32F107VC -endian little -speed 4000 -if swd
Now add -SettingsFile C:\tmp\ExampleSettingsFile.txt to the command.
DEBUGGING TWO TARGETS AT THE SAME TIME Multiprocessor debugging is possible using two Segger J-Links connected to two different microcontrollers at the same time; both probes are connected to one PC on different USB ports. For clarity, assume the developer has two different microcontrollers: HW_A and HW_B.
In Atollic TrueSTUDIO this typically requires only one running instance of Atollic TrueSTUDIO, containing one project for each microcontroller. The default port used for Segger J-Link is 2331; it is presented in the Debugger tab in the Debug Configurations dialog. The developer needs to change the port for one of the projects to use another port, e.g. port 2341.
FIRST ALTERNATIVE - LOCAL GDB-SERVER USING GUI OPTIONS
The debug configuration for the project can use the GDB connection selection Autostart local GDB server. However, note that since two J-Links are connected to the PC, the Segger J-Link software will display a GUI where the user must select which J-Link is to be associated with which hardware board. The developer needs to make this selection quickly and start the GDB server. When the selection is made, the GDB server starts, connects to the board using the selected probe, and GDB connects to the GDB server. If the selection is not made fast enough, the debug session in Atollic TrueSTUDIO will time out because there was no server to connect to.
When the Debug Configuration has been configured for both projects, so that each board is associated with a specific probe, the user may first try to debug each board individually. When this is confirmed to work, it is time to debug both targets at the same time. Proceed as follows:
1. Start debugging HW_A.
2. Atollic TrueSTUDIO switches automatically to the Debug perspective when the debug session is started. Switch back to the C/C++ perspective.
3. Select the project for HW_B and start debugging it. The Debug perspective opens again.
4. There are now two application stacks/nodes in the Debug view, one for each project (hardware). When changing the selected node in the Debug view, the editor, Variables view, etc. are updated to present information for the selected project/board.
Getting Started
122 | P a g e
SECOND ALTERNATIVE - REMOTE GDB-SERVER USING COMMAND-LINE OPTIONS
It may be easier to start the GDB server manually and change the Debug Configuration to Connect to remote GDB server. This setting is made in the Debugger tab in the Debug Configurations dialog. If Connect to remote GDB server is selected, the developer must start the GDB server manually before starting the debug session. To start the Segger J-Link GDB server manually, follow this procedure:

1. Open a Windows Console (Command Prompt).

2. Change directory to the location where the GDB server is installed, normally: C:\Program Files (x86)\Atollic\TrueSTUDIO for ARM Pro 4.1.0\Servers\J-Link_gdbserver

3. Start the GDB server. For example, to start it on port 2341 using the SWD interface mode: JLinkGDBServerCL.exe -port 2341 -if SWD -select usb=123456789 (123456789 is the serial number of the probe.)

Start another GDB server in a second command prompt in a similar way, using another port number, and let it connect to the second J-Link. When both Segger J-Link GDB servers are running, the developer can debug the two projects individually or multi-target. Please note that each Debug Configuration needs to use the same port as its GDB server is listening on, and Connect to remote GDB server shall be selected.
I/O REDIRECTION The C runtime library includes many functions, some of which typically handle I/O. The I/O-related runtime functions include printf(), fopen(), fclose(), and many others.

It is common practice to redirect the I/O from these functions to the actual embedded platform, such as redirecting printf() output to an LCD display or a serial cable, or redirecting file operations like fopen() and fclose() to Flash file system middleware. Atollic TrueSTUDIO® also comes with an integrated Terminal that can be used to display redirected I/O; see page 113 for more information.
The free Atollic TrueSTUDIO® Lite does not support I/O redirection, and instead has do-nothing stubs compiled into the C runtime library.
Atollic TrueSTUDIO® Pro and Premium do support I/O redirection. To enable I/O
redirection the file syscalls.c should be included and built into the project:
1. In the Project Explorer view, right-click on the project and select New, Other...
Figure 107 –New, Other…
2. Expand System calls.
3. Select Minimal System Calls Implementation and click Next.
Figure 108 –Select Minimal System Calls Implementation
4. Click Browse... and select the src folder as the new file container. Also select the Heap Implementation. There is a dynamic heap implementation, which is the default, and a fixed one intended for RTOS use. If the latter is selected, the linker script linker_script.ld must also be modified in accordance with the instructions.
Figure 109 –Select location and Heap implementation
5. Click OK.
6. Click on Finish and verify that "syscalls.c" is added to your project.
To redirect the printf() to the target output, the _write() function needs to be modified. Exactly how this is done depends on the hardware and library implementation. Here is an example:
int _write(int file, char *ptr, int len)
{
  int index;

  for (index = 0; index < len; index++) {
    __io_putchar(*ptr++); /* Your target output function */
  }

  return len;
}
Another way to redirect the I/O is to use SWV tracing and redirect the output to the SWV console. More about this can be found in the section covering Serial Wire Viewer Tracing.
POSITION INDEPENDENT CODE When working on a bootloader, for instance, position independence is a great help. PIC (Position Independent Code) is addressed relative to the program counter: if it is compiled for address 0 but placed at 0x81000, it still runs properly. The compiler option -fPIE makes the compiler generate position-independent code for an executable. Add this option to the tool settings for the Assembler and the C Compiler in the Miscellaneous settings in the Build Configuration.
Figure 110 –Add –fPIE for Assembler and C Compiler
Also use the -fPIE option for the linker. For example, in the Miscellaneous settings, the Other options field for the C Linker may look like:
-Wl,-cref,-u,Reset_Handler -fPIE
Figure 111 – Use –fPIE for Linker
Make sure that the stack pointer is set up correctly. Normally this is done by issuing a monitor reset command as part of the startup script for the debug session. However, now the start code needs to set the stack pointer instead; do this by adding the following assembly line at the top of the Reset_Handler() function located in the startup file:
ldr sp, =_estack
This will make sure that the stack pointer is initialized when the Reset_Handler() function runs. Since the monitor reset command is not used any more, it needs to be removed from the Debug Startup Script. Do this by opening your debug configuration (press the Debug Configuration button) and switching to the Startup Scripts tab, which contains all commands used to launch a debug session. Comment out the monitor reset line by adding a # sign in front of it.
Figure 112 –Remove the monitor reset command
Also, in some examples the SystemInit() function must be changed where it relocates the vector table:

SCB->VTOR = FLASH_BASE | 0x20000; /* Vector Table Relocation in Internal FLASH */

If not changed, this SystemInit() function will relocate the interrupt vectors to the beginning of flash.
To test that the code starts where it should, also comment out the continue command from the Debug Startup script. This will suspend execution on the first instruction in the Reset_Handler(), making it possible to debug the start-up code.
INSTALLING 3RD PARTY PLUGINS It is possible to install hundreds of additional third-party ECLIPSE™ plugins in Atollic TrueSTUDIO®, for users who want even more functionality in their TrueSTUDIO IDE.
ECLIPSE™ plugins are easily found by searching the Eclipse Marketplace (http://marketplace.eclipse.org/). However, please bear in mind that not all plugins for ECLIPSE™ are compatible with Atollic TrueSTUDIO®.
1. To install a plugin select Help, Install New Software…
Figure 113 –Select Install New Software
2. Enter the URL of the update site for the plugin. If the URL is not known, All Available Sites can be selected.
Select the appropriate plugins. Please remember that not all ECLIPSE™ plugins are compatible with Atollic TrueSTUDIO®.
Click the Next button.
If no direct internet connection is available, the plugin can be downloaded in archive form on a computer with an internet connection and then manually moved to the computer with the TrueSTUDIO installation. Add the archive file by clicking the Add button, then select Archive and choose the downloaded file.
Figure 114 –Enter Download Site and select plugins
3. Review the items to be installed and click the Next button.
4. Read the license agreements and click accept if the terms are found acceptable. Then click the Finish button.
Figure 115 – Accept license agreements
5. The plugins are now automatically downloaded and installed.
Figure 116 – The plugins are installed
6. Restart Atollic TrueSTUDIO® and the plugins are ready to be used.
SERIAL WIRE VIEWER TRACING

This section provides information on how to use Serial Wire Viewer (SWV) tracing in Atollic TrueSTUDIO® for ARM.
The following topics are covered:
Using Serial Wire Viewer Tracing
Start SWV Tracing
The Timeline graphs
Statistical profiling
Printf() redirection over ITM
Change the Trace Buffer Size
Common SWV problems
USING SERIAL WIRE VIEWER TRACING To use system analysis and real-time tracing in compatible ARM® processors, a number of different technologies interact: Serial Wire Viewer (SWV), Serial Wire Debug (SWD) and Serial Wire Output (SWO). These technologies are part of the ARM® CoreSight™ debug technology and are explained below.
SERIAL WIRE DEBUG (SWD) Serial Wire Debug (SWD) is a debug port similar to JTAG, and provides the same debug capabilities (run, stop on breakpoints, single-step) but with fewer pins. It replaces the JTAG connector with a 2-pin interface (one clock pin and one bi-directional data pin). The SWD port itself does not provide for real-time tracing.
SERIAL WIRE OUTPUT (SWO) The Serial Wire Output (SWO) pin can be used in combination with SWD and is used by the processor to emit real-time trace data, thus extending the two SWD pins with a third pin. The combination of the two SWD pins and the SWO pin enables Serial Wire Viewer (SWV) real-time tracing in compatible ARM® processors.
Please note that the SWO is just one pin and it is easy to set a configuration that produces more data than the SWO is able to send.
SERIAL WIRE VIEWER (SWV) Serial Wire Viewer is a real-time trace technology that uses the Serial Wire Debug (SWD) port and the Serial Wire Output (SWO) pin. Serial Wire Viewer provides advanced system analysis and real-time tracing without the need to halt the processor to extract the debug information.
Serial Wire Viewer (SWV) provides the following types of target information:
Event notification on data reading and writing
Event notification on exception entry and exit
Event counters
Timestamp and CPU cycle information
Based on this trace data, modern debuggers can provide developers with advanced debugger capabilities.
INSTRUMENTATION TRACE MACROCELL (ITM) The Instrumentation Trace Macrocell (ITM) enables applications to write arbitrary data to the SWO pin, which can then be interpreted and visualized in the debugger in various
ways. For example, ITM can be used to redirect printf() output to a console view in the debugger. The standard is to use port 0 for this purpose.
The ITM port has 32 channels, and by writing different types of data to different ITM channels, the debugger can interpret or visualize the data on various channels differently.
Writing a byte to the ITM port takes only one write cycle, thus taking almost no execution time from the application logic.
Figure 117 –Different types of tracing
STARTING SWV TRACING To use the Serial Wire Viewer (SWV) in Atollic TrueSTUDIO®, the JTAG Probe must support SWV. Older JTAG Probes, such as ST-LINK V1, don’t.
The GDB server must also support SWV. The ST-LINK gdbserver must be of version 1.4.0 or later, and the SEGGER J-LINK gdbserver must be of version 4.32.A or later. Older GDB server versions that may be installed must be upgraded to the versions included in the Atollic TrueSTUDIO® product package in order to use SWV tracing.
To use SWV the board must support SWD. Please note that devices based on ARM Cortex-M0 and Cortex-M0+ cores do not support SWV tracing.
1. Open the Atollic TrueSTUDIO® debug configuration dialog by selecting the current project in the Project Explorer, and clicking the Configure Debug toolbar button.
Figure 118 – Open Debug Configurations Toolbar Button
The Debug configuration panel is then opened.
Figure 119 – Change ST-Link Debug Configuration for SWV
Figure 120 – Change SEGGER J-Link Debug Configuration for SWV
2. Enable SWV by selecting the SWD interface.
3. For the ST-Link JTAG probe:
Check the SWV Enable checkbox.
For the SEGGER J-Link JTAG probe:
Select SWV (Serial Wire Viewer) as the Trace system.
4. Enter the Core Clock frequency. This must correspond to the value set by the application program to be executed.
5. Enter the desired SWO Clock frequency. The supported values depend on the JTAG Probe, and the SWO Clock is derived from the Core Clock by an integer prescaler (so the Core Clock must be an integer multiple of the SWO Clock).
6. Switch to the Debug perspective by starting a debug session as described earlier in this document. A debug session must be running to enable configuration and start of the Serial Wire Viewer tracing capabilities; switching to the Debug perspective alone is not sufficient.
7. Pause the target execution by clicking the yellow Pause button.
8. Open one of the SWV views. For first-time users, Atollic recommends the SWV Trace log view because it will give a good view of the incoming SWV packages and how well the tracing is working.
Thus, select the View, SWV, SWV Trace log menu command.
Figure 121 – SWV Data Trace Menu Command
9. Open the SWV settings panel by clicking on the Configure Serial Wire Viewer button in the SWV Trace log view toolbar.
Figure 122 – Configure Serial Wire Viewer Button
10. Configure the data to be traced, and the trace method.
Figure 123 – The Serial Wire Viewer Settings Dialog
A. Information about the current clock settings for this session.
B. Events that can be traced:

CPI – Cycles per instruction. For each cycle beyond the first that an instruction uses, an internal counter (the DWT CPI count) is incremented by one. The counter can count up to 256 and is then reset to 0. Each time that happens, one of these packages is sent. This is one aspect of processor performance and is used to calculate instructions per second. The lower the value, the better the performance.

SLEEP – Sleep cycles. The number of cycles the CPU is in sleep mode, counted in the DWT Sleep Count register. Each time the CPU has been in sleep mode for 256 cycles, one of these packages is sent. This is useful when debugging power consumption or waits on external devices.

FOLD – Folded instructions. A counter for how many instructions are folded (removed). For every 256 instructions folded (taking zero cycles), one of these events is sent. Counted in the DWT Fold count register. Branch folding is a technique where, on the prediction of most branches, the branch instruction is completely removed from the instruction stream presented to the execution pipeline. Branch folding can significantly improve the performance of branches, taking the CPI for branches below 1.

EXC – Exception overhead. The DWT Exception Count register keeps track of the number of cycles the CPU spends on exception overhead. This includes stack operations and returns, but not the time spent processing the exception code. When the counter overflows, one of these events is sent. Used to calculate what the exception handling actually costs the program.

LSU – Load Store Unit cycles. The DWT LSU Count register counts the total number of cycles the processor spends processing an LSU operation beyond the first cycle. When the counter overflows, one of these events is sent. With this measurement, the time spent on memory operations can be tracked.

EXETRC – Trace exceptions. Whenever an exception occurs, one of these events is sent. These events can be monitored in the SWV Exception Trace view and the SWV Exception Timeline view. From these views you can also jump to the exception handler code for that exception.
C. PC Sampling. Enabling this starts sampling the Program Counter at some cycle interval. Since the SWO pin has limited bandwidth, it is not a good idea to sample too fast. Experiment with this to sample often, but not too often. The sampling results are used, among other things, for the Statistical Profiling view.

D. Timestamps – Must be enabled to know when an event occurred. The Prescaler should only be changed as a last resort to reduce overflow packages.

E. Data Trace – Up to four different symbols or areas of memory can be traced, for example the value of a global variable. To do that, enable one comparator and enter the name of the variable or the memory address to trace. The value of the traced variables can be displayed both in the Data Trace view and the Data Trace Timeline graph.

F. ITM stimulus ports – Enable one or more of the 32 ITM ports. The most common use is to send information programmatically and almost non-intrusively. For instance, the CMSIS function ITM_SendChar() is used to send characters to port 0; see below. The packages from the ITM ports are displayed in the SWV Console view and the ITM Timeline graph.
Overflow while running SWV is an indication that SWV is configured to trace more data than the SWO pin is able to process. In such a case, decrease the amount of data traced.
To use any of the timeline views in Atollic TrueSTUDIO®, enable Timestamps. The default Prescaler value is 1. Keep this value, unless problems occur related to SWV package overflow.
It is possible to trace up to four different C variable symbols, or fixed numeric areas of the memory.
Below are three examples for the SWV-trace configuration:
Example 1: To trace the value of a global variable, enable a Comparator and enter the name of the variable or the memory address to be traced.
The value of the traced variables is displayed both in the Data Trace view and the Data Trace Timeline graph.
Example 2: To profile the program execution, enable the PC-sampling. In the beginning a high value for the Cycles/sample is recommended.
The result from the PC-sampling is then displayed in the SWV Statistical Profiling view.
Example 3: To trace the exceptions occurring during program execution, enable the Trace Event EXETRC: Trace Exceptions.
Information about the exceptions is then displayed in the SWV Exception Trace Log view and the SWV Exception Timeline Graph.
11. Save the SWV configuration in Atollic TrueSTUDIO® by clicking the OK button. The configuration is saved together with other debug configurations and will remain effective until changed.
12. Press the Start/Stop Trace button to send the SWV configuration to the target board and start SWV trace recording. The board will not send any SWV packages until it is properly configured. The SWV configuration must be resent if the configuration registers on the target board are reset. Actual tracing will not start until the target starts to execute.

Atollic recommends limiting the amount of data traced. Most ARM®-based microcontrollers read and write data faster than the maximum SWO pin throughput. Too much trace data results in data overflow, lost packages and possibly corrupt data. For optimum performance, trace only data vital to the task at hand.
Figure 124 – The Start/Stop Trace Button
Please note that tracing cannot be configured while it is running. Pause debugging and stop tracing before attempting to send a new configuration to the board. Each new or changed configuration must be sent to the board to take effect.
The configuration is sent to the board when the Start/Stop Trace-button is pressed.
13. Start the target execution again by pressing the green Resume Debug button.
Figure 125 – Resume Debug Button
14. Packages should now be arriving in the SWV Trace Log view (and possibly other views too, dependent on trace configuration).
Collected data can be cleared by pressing the Empty SWV-Data button. All the timers are also restarted when this button is pressed.
Figure 126 – Empty SWV Data Button
THE SWV VIEWS
The views that display SWV trace data are:

SWV Trace Log - Lists all incoming SWV packages in a spreadsheet. Useful as a first diagnostic of the trace quality. The data in this view can be copied to other applications in CSV format by selecting the rows to copy and typing CTRL-C. The copied data can be pasted into another application with CTRL-V.
SWV Trace Timeline Graph – A graph displaying all SWV-packages received as a function of time.
SWV Exception Trace Log – The view has two tabs. The first is similar to the SWV Trace Log, but is restricted to Exception events and has additional information about the type of event. The data can be copied and pasted into other applications. Each row is linked to the code of the corresponding exception handler: double-click on an event and the corresponding interrupt handler source code opens in the editor view.

The second tab displays statistical information about the Exception events. This information may be of great value when optimizing the code. Hypertext links to the exception handler source code in the editor are included.

SWV Exception Timeline Graph – A graph displaying the distribution of exceptions over time. Remember that each exception sends up to three SWV packages. Double-click on the event in the tooltip and the code for the exception handler opens in the editor view.
SWV Console - Prints readable text output from the target application. Typically this is done via printf() with output redirected to ITM channel 0. Other ITM channels can get their own console view too.
SWV ITM Timeline Graph – A graph displaying the distribution of ITM-packages over time. This can be used for code block execution time visualization.
SWV Data Trace – Tracks up to four different symbols or areas in the memory. For example, global variables can be referenced by name.
SWV Data Trace Timeline Graph – A graphical display that shows the distribution of variable values over time. Applies to the variables or memory areas in the SWV Data Trace.
SWV Statistical Profiling – Statistics based on Program Counter (PC) sampling. Shows the amount of execution time spent within various functions. This is useful when optimizing code. The data can be copied and pasted into other applications. The view is updated when debugging is suspended.
More than one SWV view may be open at the same time, for simultaneous tracking of various events.
Figure 127 – Several SWV Views Displayed Simultaneously
THE TIMELINE GRAPHS All the timeline graphs, except the Data Trace Timeline, have some common features:
Any graph can be saved as an image file by clicking the camera icon.
The graphs show the time in seconds by default.
The zoom range is limited while debugging is running. More details are available when debugging is paused.
Zoom in: Double-click on the left mouse button. Zoom out: Double-click on the right button or use the corresponding toolbar buttons in the view.
The tooltip shows the number of packages in each bar. Except for the Trace Timeline Graph, the content of bars with fewer than 50 packages is shown in a detailed view.
The Data Trace Timeline displays distinct values for variables during execution and has different features than the above graphs.
STATISTICAL PROFILING This is a way to obtain information about the amount of execution time spent within various functions. It is not based on code analysis, but on statistical sampling of which parts of the code execute. This is a technical limitation of the SWV protocol.
1. Configure SWV to send Program Counter samples, as described below. Enable PC Sampling (A) and Timestamps.

At the given Core Clock cycle interval, SWV reports the Program Counter values to Atollic TrueSTUDIO®. Atollic recommends beginning with a high Cycles/sample value for the PC sampling. This ensures that the interface will not overflow.
Figure 128 –Statistical Profiling Configuration
2. Open the Statistical Profiling view by selecting View, SWV Statistical Profiling. It will be empty, since no data has been collected.
3. Push the red Start/Stop Trace button to send the configuration to the board.
4. When you start executing code in the target system, Atollic TrueSTUDIO® starts collecting statistics about function usage via SWV.
5. Suspend (Pause) the debugging. The collected data is displayed in the view. The longer the debugging session, the more statistics will be collected.
Figure 129 – Statistical Profiling View
EXCEPTION TRACING
To make it possible to trace the exceptions encountered during execution, the exception packages need to be enabled. Open the SWV Configuration as described above.

Enable EXETRC: Trace Exceptions. This will generate Trace Exception packages. Disable all other packages not needed at the moment.
Figure 130 – Exception Tracing Configuration
EXCEPTION DATA
The exception packages are displayed in the SWV Exception Trace Log view. The view has two tabs, the Data tab and the Statistics tab.
Figure 131 – Exception View, Data Tab
The columns in the Data tab are:
Name Description
Index The ID of the exception package, shared with the other SWV packages.

Type Normally each exception generates three packages: Exception entry, Exception exit and then an Exception return package. TrueSTUDIO displays all three.

Name The name of the exception provided by the manufacturer, and the exception or interrupt number.

Peripheral The peripheral for the exception.

Function The name of the interrupt handler function for this interrupt. Updated when debugging is paused and cached during the whole debug session. Double-clicking the function opens it in the source code editor.

Cycles The timestamp for the exception in cycles.

Time(s) The timestamp for the exception in seconds.

Extra info Optional extra information about the package.
Table 2 – Exception Data Columns
EXCEPTION STATISTICS
The exception statistics are collected whenever Exception packages are received by SWV. They can be found in the SWV Exception Trace Log view, in the Statistics tab.
Figure 132 – Exception View, Statistics Tab
The statistics can be accessed by selecting the Statistics tab in the view. The available columns are described in the table below:
Name Description
Exception The name of the exception provided by the manufacturer, and the exception or interrupt number.

Handler The name of the interrupt handler for this interrupt. Updated when debugging is paused and cached during the whole debug session. Double-clicking the handler opens that function in the source code editor.
% of This exception type's share, as a percentage, of all exceptions.

Number of The total number of entry packets of this exception type received by SWV.

% of exception time The share of the execution time for all exceptions spent in this exception type.

% of debug time The share of the total execution time of this debug session spent in this exception type. All the timers are restarted when the Empty SWV-Data button is pressed.
Total runtime The total execution time in cycles for this exception type.
Avg runtime The average execution time in cycles for this exception type.
Fastest The execution time in cycles for the fastest exception of this exception type.
Slowest The execution time in cycles for the slowest exception of this exception type.
First The first encounter of an entry event for this exception type in cycles.
First(s) The first encounter of an entry event for this exception type in seconds.
Latest The latest encounter of an entry event for this exception type in cycles.
Latest(s) The latest encounter of an entry event for this exception type in seconds.
Table 3 – Exception Statistics Columns
PRINTF() REDIRECTION OVER ITM Since SWV enables target software to send data back to the debugger using any of the 32 ITM channels, this feature can be used to redirect printf() output back to the ITM console view in the debugger (ITM channel 0 is typically used for printf-redirection).
1. To make printf() send ITM packages, the file syscalls.c must be configured. If no syscalls.c file was generated when the project was created, the following steps can be performed to generate it:
In the Project explorer, right click on the project and select New, Other...
Expand System calls.
Select "Minimal System Calls Implementation" and click Next.
Click Browse... and select the src folder as new file container and click OK.
Click on Finish and verify that syscalls.c is added to your project.
2. Inside the syscalls.c file, replace the _write() function with the following code:
int _write(int file, char *ptr, int len)
{
  /* Implement your write code here; this is used
     by puts() and printf(), for example */
  int i;

  for (i = 0; i < len; i++)
    ITM_SendChar(*ptr++);

  return len;
}
3. In syscalls.c, include the file where the function ITM_SendChar() is defined. This function is part of the ARM CMSIS core support and is located in the file core_cm3.h or core_cm4.h. Which core_cmX.h file applies depends on definitions specific to the device being used.

4. Drop the file in the Include Browser view, and check that the file is included in the project. Also note its location within the project. Include the core_cmX.h file in syscalls.c.
CHANGE THE TRACE BUFFER SIZE The incoming SWV-packages are saved in the Serial Wire Viewer Trace buffer. It has a default maximum size of 2 000 000 packages. To trace more packages, this figure must be increased.
Select the menu command Window, Preferences. In the dialog, select Run/Debug, Embedded C/C++ Application and then Serial Wire Viewer.
Figure 133 – Serial Wire Viewer Preferences
The buffer is stored on the heap. The allocated heap is displayed by first selecting Window, Preferences, General, and then enabling "Show heap status". The current heap usage and allocated memory are displayed in the lower right corner. There is an upper limit to the amount of memory Atollic TrueSTUDIO® can allocate. This limit can be increased to store more information during a debug session.
Proceed as follows:
Navigate to the Atollic TrueSTUDIO® installation directory. Open the folder in which the IDE is stored.
Edit the TrueSTUDIO.ini file and change the -Xmx512m parameter to the desired size in megabytes.
Save the file and try launching Atollic TrueSTUDIO® again.
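For example, to give the IDE roughly one gigabyte of Java heap, the line in TrueSTUDIO.ini would be changed as follows. The value 1024 MB is illustrative; how large a value works depends on the RAM available on the host PC:

```
-Xmx1024m
```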
COMMON SWV PROBLEMS Common reasons for SWV not tracing are:
The Core Clock of the target is incorrectly set. It is very important to select the right Core Clock. If the frequency of the target Core Clock is unknown, it can sometimes be found by setting a breakpoint in a program loop and opening the Variables view when the breakpoint is hit. Right-click and select Add Global Variables..., and select SystemCoreClock. This global variable must, according to the CMSIS standard, be set to the correct speed of the Core Clock.

In CMSIS standard libraries there should be a function called SystemCoreClockUpdate(). This can be called in main() to set the SystemCoreClock variable; then use the Variables view to track it.

For most devices whose libraries do not follow the CMSIS standard, the Core Clock setting can be found in the startup code. It is often named SYSCLK, or a similar abbreviation. Also note that if the software dynamically changes the CPU clock speed at runtime, SWV might stop working, as the clocking suddenly becomes wrong during execution.
SWV is not enabled in the currently used debug configuration.
The SWV configuration has not been sent to the target board.
Some manufacturers, such as Energy Micro, disable the SWO pin by default. In this case, enable SWO with a function call, such as DBG_SWOEnable().
The SWO receives too much data. Reduce the amount of data enabled for tracing.
The JTAG Probe, the GDB server, the target board, or possibly some other part, does not support SWV.
INSTRUCTION TRACING This section provides information on how to do Instruction Tracing with Atollic TrueSTUDIO® for ARM.
The following topics are covered:
Enable Trace
Configuring Start and Stop Triggers
Start Trace Recording
Analyzing the Trace
Exporting the Trace
INSTRUCTION TRACING Atollic TrueSTUDIO® supports instruction tracing, provided that trace-enabled hardware is being used. Instruction tracing records the execution flow of the processor in real-time. The recorded trace buffer can then be analyzed to locate the cause of software errors. Instruction tracing is particularly useful when debugging problems that only occur sporadically.
Atollic TrueSTUDIO supports instruction tracing using both the ETM and the ETB methods:
ETM tracing: ETM tracing works with many Cortex-M devices but requires using an ETM-trace enabled JTAG probe. Atollic TrueSTUDIO supports ETM tracing using the Segger J-Trace JTAG probe. The J-Link and ST-LINK probes cannot be used for ETM tracing as they have no trace buffer. The trace buffer in ETM-compatible trace probes is typically many megabytes in size.
ETB tracing: ETB tracing can only be used with Cortex devices that have this feature enabled in the silicon. ETB tracing can be done using any of the supported JTAG probes, including Segger J-Link, as the trace buffer is not located in the JTAG probe but inside the target device. This adds to the chip cost, and the feature is therefore not supported by all chip vendors. The on-chip ETB trace buffer is tiny; typically only 2 KB or 4 KB.
Both ETM and ETB tracing record all executed machine code instructions, until the hardware limits are reached. A trace buffer is filled very quickly even though it is highly compressed. The compressed trace buffer in a JTAG probe with a 16 MB trace buffer typically expands into 200 MB of uncompressed machine-readable data, and to 2-3 GB of human-readable data. Instruction tracing thus quickly generates a huge amount of data.
ENABLE TRACE Instruction tracing (using ETM or ETB) must be enabled in the debug configuration. To enable instruction tracing, first open the debug configuration dialog box:
ETM/ETB tracing may not work at the maximum CPU clock speed. Please check the user manual from the board or microcontroller manufacturer for any trace clock limitations.
Figure 134 – Enable tracing in the debug configuration
Perform the following steps to configure a project for instruction tracing:
1. In the Debug probe dropdown list, select Segger J-Trace (for ETM and ETB tracing) or Segger J-Link (ETB tracing only).
2. In the Trace systems dropdown list, select the ETM or ETB trace system.
3. Ensure the Probe buffer size setting corresponds to the JTAG probe in use (ETM tracing only).
4. For ETM tracing, make sure the Trace Port config selection points to a file that sets up the ETM trace pins of the device in a way that works for the target board in use. The commands in the file are sent to the target when trace recording is started for the first time in a debug session. See the section below for information on how to write a new trace port configuration file for custom-designed or unsupported boards.
5. Click OK to save the settings.
The project is now configured to use ETM or ETB tracing.
WRITING A TRACE PORT CONFIGURATION FILE
To be able to use ETM tracing, the trace port pins must be configured. If the application software does not configure these pins for tracing, a specific trace port configuration file can be used for this purpose. The configuration file is implemented in GDB script syntax, and configures the special function registers (SFRs) related to the ETM trace port pins.
Atollic TrueSTUDIO comes with ready-made trace port configuration files for most of the supported boards, but it is possible to edit them, or create new ones, to enable ETM tracing on new boards. If any changes are needed, it is recommended to copy the ready-made file to the project folder, or some other folder on the file system, make the changes in the copy, and make sure the Trace Port config selection in the debug configuration points to the correct file.
See the example below for the Freescale Kinetis TWR-K70FN1M0 board:
# SIM_SOPT2: Debug trace clock select = Core/system clock
set *((unsigned long*) 0x40048004) |= 0x00001000
# SIM_SCGC5: Port A Clock Gate Control = Clock enabled
set *((unsigned long*) 0x40048038) |= 0x00000200
# SIM_CLKDIV4: Trace clock divider divisor = 0x0,
# Trace clock divider fraction = 0x0
set *((unsigned long*) 0x40048068) &= ~0x0000000F
if ($tracePortWidth == 1)
# Trace Port 1-bit configuration
# PORTE_PCR6: Pin Mux Control = Alternative 7,
# Drive Strength Enable = High drive strength
set *((unsigned long*) 0x40049018) = 0x00000740
# PORTE_PCR10: Pin Mux Control = Alternative 7,
# Drive Strength Enable = High drive strength
set *((unsigned long*) 0x40049028) = 0x00000740
end
if ($tracePortWidth == 2)
# Trace Port 2-bit configuration
# PORTE_PCR6: Pin Mux Control = Alternative 7,
# Drive Strength Enable = High drive strength
set *((unsigned long*) 0x40049018) = 0x00000740
# PORTE_PCR9: Pin Mux Control = Alternative 7,
# Drive Strength Enable = High drive strength
set *((unsigned long*) 0x40049024) = 0x00000740
# PORTE_PCR10: Pin Mux Control = Alternative 7,
# Drive Strength Enable = High drive strength
set *((unsigned long*) 0x40049028) = 0x00000740
end
if ($tracePortWidth == 4)
# Trace Port 4-bit configuration
# PORTE_PCR6: Pin Mux Control = Alternative 7,
# Drive Strength Enable = High drive strength
set *((unsigned long*) 0x40049018) = 0x00000740
# PORTE_PCR7: Pin Mux Control = Alternative 7,
# Drive Strength Enable = High drive strength
set *((unsigned long*) 0x4004901C) = 0x00000740
# PORTE_PCR8: Pin Mux Control = Alternative 7,
# Drive Strength Enable = High drive strength
set *((unsigned long*) 0x40049020) = 0x00000740
# PORTE_PCR9: Pin Mux Control = Alternative 7,
# Drive Strength Enable = High drive strength
set *((unsigned long*) 0x40049024) = 0x00000740
#PORTE_PCR10: Pin Mux Control = Alternative 7,
# Drive Strength Enable = High drive strength
set *((unsigned long*) 0x40049028) = 0x00000740
end
CONFIGURING THE TRACING SESSION Once the JTAG probe and trace system have been configured, and a debug session has been started, the tracing session itself can be configured.
To configure trace, suspend the debug session and open the Trace Log view (Select View in the top menu and then ETM/ETB, Trace Log).
In the Trace Log view toolbar, click on the Configuration toolbar button.
Figure 135 – Configuration Toolbar Button
The Trace Configuration dialog box will be displayed:
Figure 136 - Trace Configuration
Configure the Trace Port Width dropdown list to match the number of pins used for ETM tracing on the hardware board.
Using the Stall processor on FIFO full checkbox, select one of these two options:
Stall the processor when the ETM trace FIFO buffer becomes full. With this setting, no trace data is lost but the timing behavior of the application can be changed.
Do not stall the processor when the ETM trace FIFO buffer becomes full. With this setting, the processor will always continue to run at full speed but trace data may be lost.
Some devices support timestamps. Enabling the timestamps can be useful if timing information is needed. It will however reduce the amount of other information available.
ETM TRACE PORT CONFIGURATION FILE REFERENCE When the Segger J-Trace probe is used and ETM tracing is selected, the Debugger tab in the Debug Configurations dialog contains a reference to a trace port configuration file. By default, this file is located in the Atollic TrueSTUDIO installation. This means that if a project made with an older Atollic TrueSTUDIO version is imported into a newer version, the file reference requires the earlier version to still be installed. Please update the reference to point to the ETM trace port configuration file available in the new installation.
Example of the location of an ETM trace port configuration file for STM32F1xx: C:\Program Files (x86)\Atollic\TrueSTUDIO for ARM Pro 5.0.0\ide\plugins\com.atollic.truestudio.tsp.stm32_1.0.0.201403271711\tsp\etm\stm32f1xx.init
ADD TRACE TRIGGER The trick with instruction tracing is to trace only where tracing is needed. Otherwise, the important information can be impossible to locate in the huge amount of collected data, or it may be lost because it occurred too long before debugging was suspended and the trace information was uploaded.
There are four hardware triggers that can be set to start and stop the tracing on different conditions.
To access them, open the Trace Configuration as above and select Add Trace Triggers…
The triggers can also be added from the Breakpoints view.
Figure 137 - Trace Configuration
For each of the triggers 0-3, it is possible to define that the trigger shall start or stop tracing, if its configured conditions are met. Each trigger has the following options:
The Action to perform when the condition is triggered:
o Trace Start: Starts collecting trace data
o Trace Stop: Stops collecting trace data
The Type of memory access that triggers the action:
o PC: Triggered when execution reaches an address
o Data Read: Triggered when data is read from an address
o Data Write: Triggered when data is written to an address
o Data Read/Write: Triggered when data is read from or written to an address
Enter the address to trigger on in the Expression/Address field. This field accepts:
o Numeric address constants such as 0xffff0010
o Numeric address ranges such as 0xffff0010 to 0xffff001f
o Function symbols such as “main”
o Variable symbols such as “MyGlobalCounter”
o It is also possible to define mathematical expressions like “main + 7”
Typically, at least one trigger is configured to start tracing, and another trigger is configured to stop tracing. Once the trace start and stop conditions have been configured, click OK to save the trace trigger settings.
ADD TRACE TRIGGER IN THE EDITOR
Start and Stop Trace Triggers can also be added directly in the C/C++ editor ruler and the Disassembly view. These triggers work like breakpoints, except that they do not suspend execution. Instead, they start or stop the collection of trace data when execution reaches that line.
Right click on the ruler to the left in the editor window and select Add Trace Trigger.
Figure 138 – Add Trace Trigger in the Editor
A new trigger will be created, and trace data starts to be collected when execution reaches that line of code.
Figure 139 –Trace Trigger in the Editor
MANAGING TRACE TRIGGERS All the Trace Triggers are visible from the Breakpoints view.
Figure 140 –Trace Trigger in the Editor
From this view the triggers can easily be deactivated, activated, removed, and even added.
Bear in mind that the hardware supports up to a maximum of four simultaneous Trace Triggers.
START TRACE RECORDING Once tracing has been enabled, click the Record toolbar button in the Trace Log view to enable recording of trace data.
Figure 141 – Record Toolbar Button
With trace recording enabled, start target execution. When execution is suspended, the Trace Log view is filled with the recorded instruction trace (provided the trace start trigger condition was fulfilled).
ANALYZING THE TRACE When suspending execution, the trace buffer is uploaded to the Trace Log view. It is filled with the recorded instruction stream, along with other data that is provided by analyzing the trace recording.
Figure 142 - The trace log view
The Trace Log view shows detailed information on what the processor was doing up to the point of suspending execution.
Please note the column with graphical icons that annotate the Trace Log view with information about execution flow branches:
Call a new function
Return from a function
Jump up in the code
Jump down in the code
Iterate on the same instruction
A conditional branch was not taken
At the end of the view, the End of Trace marker is displayed. This marker is added to the Trace Log each time the buffer overflows, and it indicates that some trace data has most likely been lost.
Figure 143 - Trace restarted
The other important marker is the Trace restarted marker. It indicates that the target was not able to generate all the trace information without affecting the performance of the running application, and some data has been lost.
To overcome this issue, enable Stall processor on FIFO full in the Trace Configuration.
DISPLAY OPTIONS
The Log Trace view supports several different display options:
Function call tracing
C tracing
C/Assembler mixed mode tracing
Assembler tracing
Raw trace packet log
Use the different Display Options Toolbar Buttons to switch between the different view-modes.
Enabling Stall processor on FIFO full will slow down the processor in some situations and hence affect the timing of the execution. For some real-time applications this is unacceptable.
Function call tracing displays which function the execution is in, and from where it was called or returned.
Figure 144 – Display Options Toolbar Button
SEARCH THE TRACE LOG
The recorded trace buffer can become very large. Atollic TrueSTUDIO supports appended trace buffers totaling 100 million lines. For this reason, a search function is available to enable users to find important information in the potentially huge dataset.
Figure 145 – Search Toolbar Button
Using the search feature, it is possible to search for data of particular interest. For example, assume a system crash sometimes happens because a variable has an illegal value. By searching the instruction trace for the address of the variable, it is possible to find the code that modifies the variable and assigns the illegal value that causes the system crash.
EXPORTING A TRACE LOG It is possible to save the trace log by clicking on the Export Trace toolbar button in the Trace Log view.
Figure 146 – Export Toolbar Button
The trace log can be saved to either comma separated value files (*.csv) that can be imported into Microsoft® Excel®, or to human readable ASCII text files (*.txt).
Configure the trace record range to export using the From Index and To Index fields.
As the saved trace log becomes approximately 200 times larger than its compressed size in the JTAG probe trace buffer, the saved trace log can optionally be split into multiple files, to avoid exported trace logs that are several gigabytes in size (for example, the 16 MB compressed trace buffer in Segger J-Trace expands to 2-3 GB when saved to a human-readable trace log file in *.CSV or *.TXT format).
Figure 147 - Exporting the trace log
Select the filename and folder to use for the export using the Browse button. In the Save As dialog box, select the desired filename and folder, select the *.CSV or *.TXT file format, and click Save to return to the Export Trace dialog box. Click OK to start exporting the trace log.
RTOS Kernel Awareness Debugging
165 | P a g e
RTOS-AWARE DEBUGGING This section provides information on how to debug Real Time Operating Systems (RTOS) with Atollic TrueSTUDIO® for ARM.
The following topics are covered:
RTOS Kernel Awareness Debugging
Segger embOS
FreeRTOS and OpenRTOS
Freescale MQX
Express Logic ThreadX
Micrium uC/OS-III
HCC Embedded eTaskSync
Quadros RTXC
TOPPERS/ASP
RTOS KERNEL AWARENESS DEBUGGING This chapter provides information regarding the Atollic TrueSTUDIO Real Time Operating Systems kernel awareness debug features.
Several different Real Time Operating Systems are supported and the current state of the RTOS kernel and the various RTOS kernel objects can easily be inspected in a set of dedicated views, in the Atollic TrueSTUDIO Debug perspective.
SEGGER EMBOS The kernel awareness features for Segger embOS in Atollic TrueSTUDIO provide the developer with a detailed insight into the internal data structures of the embOS kernel. During a debug session, the current state of the embOS kernel and the various embOS kernel objects such as tasks, mailboxes, semaphores and software timers, can easily be inspected in a set of dedicated views, in the Atollic TrueSTUDIO Debug perspective.
REQUIREMENTS The kernel awareness features require Segger embOS version 3.80 or later.
FINDING THE VIEWS A number of debugger views are available in the Atollic TrueSTUDIO Debug perspective when debugging an application containing the embOS real-time operating system.
These views can be opened from the Show View toolbar dropdown list button.
Figure 148 - View Top Level Menu
Please note that the level of information available in the different views in Atollic TrueSTUDIO depends on the options used when the embOS kernel was built. This manual refers to an embOS kernel built with the debug and profiling (DP) build options. Please note that microcontrollers based on the ARM Cortex-M0 and Cortex-M0+ cores do not support Serial Wire Viewer tracing.
Figure 149 -embOS Show View Toolbar Button
SYSTEM INFORMATION
The embOS System Information view displays a number of system variables available in the embOS kernel, such as status, number of tasks, etc.
Figure 150 - embOS System Information View
This view also provides descriptive fault information messages for any fault conditions detected by the OS kernel.
Figure 151 - embOS System Information View (Fault Condition)
The available system variables are described in the table below:
Name Description
OS_Status The current status of embOS.
OS_Time The number of system ticks since program start.
OS_NumTasks The number of created tasks.
OS_pCurrentTask The address (TCB) and name of the currently running task.
OS_pActiveTask The address (TCB) and name of the next running task.
embOS build The build options of the currently running embOS kernel. In the example, debugging and profiling information (DP) is available.
Table 4 – embOS System Variables
TASK LIST The embOS Task List view displays detailed information regarding all available tasks in the target system. The task list is updated automatically each time the target execution is suspended.
There is one column for each type of task parameter, and one row for each task. If the value of any parameter for a particular task has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 152 - embOS Task List View
Please note that, for performance reasons, stack analysis (the Stack Info column) is disabled by default. To enable stack analysis, use the Stack analysis toggle toolbar button in the View toolbar:
The available parameters are described in the table below:
Name Description
N/A Indicates the currently running task. The currently running task is indicated by a green arrow symbol.
Prio The task priority.
Id The task identifier (TCB address).
Name The task name.
Status The current status of the task. The type of object that currently blocks a task is presented in parenthesis.
Timeout The timeout value (OS_Delay) and in parenthesis the point in time when the timeout will occur.
Preemptions The number of times the task has been preempted by a higher priority task.
Waitable Object The address of the object the task is waiting for.
Events The event mask of the task. A value of 0x0 means that the task is not waiting on any events.
Stack Info The amount of used stack space, the available stack space and the stack start address. [Used/Total@Address]. Note! This feature must be enabled in the View toolbar.
Activations The number of times the task has been activated.
Round Robin The number of remaining time slices (ticks) and the time slice reload value, during round robin scheduling.
Table 5 – embOS Task Parameters
TIMERS The embOS Timers view displays detailed information regarding all available software timers in the target system. The timers view is updated automatically each time the target execution is suspended.
There is one column for each type of timer parameter, and one row for each timer. If the value of any parameter for a particular timer has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 153 - embOS Timers View
The available parameters are described in the table below:
Name Description
Id The timer identifier (address).
Hook The address and name of the function that is called when the timer expires.
Time The current timer value (ticks) and in parenthesis the point in time when the timer expires.
Period The timer time period (ticks).
Active Shows whether the timer is active or not. 1 = Active 0 = Not active
Table 6 – embOS Timer Parameters
RESOURCE SEMAPHORES The embOS Resource Semaphores view displays detailed information regarding all available resource semaphores in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of semaphore parameter, and one row for each semaphore. If the value of any parameter for a particular semaphore has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 154 - embOS Resource Semaphores View
The available parameters are described in the table below:
Column Description
Id The resource semaphore identifier (address).
Owner The address (TCB) and name of the task currently owning the semaphore.
Use counter The semaphore use counter. Keeps track of how many times the semaphore has been claimed by a task.
Waiting tasks The address (TCB) and name of all tasks waiting on the semaphore.
Table 7 – embOS Resource Semaphore Parameters
MAILBOXES The embOS Mailboxes view displays detailed information regarding all available mailboxes in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of mailbox parameter, and one row for each mailbox. If the value of any parameter for a particular mailbox has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 155 - embOS Mailboxes View
The available parameters are described in the table below:
Column Description
Id The mailbox identifier (address).
Messages The current number of messages and the maximum number of messages the mailbox can hold.
Message size The size (in bytes) of a message item.
pBuffer The address of the message buffer.
Waiting tasks The address (TCB) and name of all tasks waiting on the mailbox.
Table 8 – embOS Mailbox Parameters
HCC EMBEDDED ETASKSYNC The kernel awareness features for eTaskSync in Atollic TrueSTUDIO provide the developer with a detailed insight into the task structures of the eTaskSync kernel. During a debug session, the current state of the tasks can be easily inspected in a dedicated view, in the Atollic TrueSTUDIO Debug perspective.
REQUIREMENTS The kernel awareness features described in this document are based on eTaskSync version 3.01.
FINDING THE VIEW One view is available in the Atollic TrueSTUDIO Debug perspective when debugging an application containing the eTaskSync real-time operating system.
It is available from the Show View toolbar dropdown list button.
Figure 156 – eTaskSync Show View Toolbar Button
TASK LIST The eTaskSync Task List view displays detailed information regarding all available tasks in the target system. The task list is updated automatically each time the target execution is suspended.
There is one column for each type of task parameter, and one row for each task. If the value of any parameter for a particular task has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 157 - eTaskSync Task List View
The available parameters are described in the table below:
Name Description
N/A Indicates the currently running task. The currently running task is indicated by a green arrow symbol.
Name The name assigned to the task.
ID The task base ID.
Prio The task's actual priority.
Original Prio The original (base) priority of the task.
Remaining ticks The number of remaining ticks.
Time Slice The time slice. This column is only present if the SYNC_TIME_SLICE_ENABLE option is set at compile time in hcc/src/config/config_sync.h.
State The state of the task.
Table 9 – eTaskSync Task Parameters
FREERTOS AND OPENRTOS As FreeRTOS and OpenRTOS are technically identical, we will only refer to FreeRTOS here, but the information applies equally to both.
The kernel awareness features for FreeRTOS in Atollic TrueSTUDIO provide the developer with a detailed insight into the internal data structures of the FreeRTOS kernel. During a debug session, the current state of the FreeRTOS kernel and the various FreeRTOS kernel objects such as tasks, mailboxes, semaphores and software timers, can be easily inspected in a set of dedicated views, in the Atollic TrueSTUDIO Debug perspective.
REQUIREMENTS In order for the FreeRTOS Queues and the FreeRTOS Semaphores views to be able to locate the appropriate RTOS kernel data structures, the associated kernel objects need to be added to the FreeRTOS queue registry. Please consult the FreeRTOS reference manual for details.
FINDING THE VIEWS A number of debugger views are available in the Atollic TrueSTUDIO Debug perspective when debugging an application containing the FreeRTOS real-time operating system.
These views are available from the Show View toolbar dropdown list button.
Figure 158 – FreeRTOS View Top Level Menu
Figure 159 – FreeRTOS Show View Toolbar Button
TASK LIST The FreeRTOS Task List view displays detailed information regarding all available tasks in the target system. The task list is updated automatically each time the target execution is suspended.
There is one column for each type of task parameter, and one row for each task. If the value of any parameter for a particular task has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 160 - FreeRTOS Task List View
Please note that, for performance reasons, stack analysis (the Min Free Stack column) is disabled by default. To enable stack analysis, use the Stack analysis toggle toolbar button in the View toolbar:
The available parameters are described in the table below:
Name Description
N/A Indicates the currently running task. The currently running task is indicated by a green arrow symbol.
Name The name assigned to the task.
Priority (Base/Actual) The task base priority and actual priority. The base priority is the priority assigned to the task. The actual priority is a temporary priority assigned to the task due to the priority inheritance mechanism.
Start of Stack The address of the stack region assigned to the task.
Top of Stack The address of the saved task stack pointer.
State The current state of the task.
Event Object The name of the resource that has caused the task to be blocked.
Min Free Stack The stack “high watermark”. Displays the minimum number of bytes left on the stack for a task. A value of 0 (most likely) indicates that a stack overflow has occurred. Note! This feature must be enabled in the View toolbar.
Table 10 – FreeRTOS Task Parameters
QUEUES The FreeRTOS Queues view displays detailed information regarding all available queues in the target system. The queues view is updated automatically each time the target execution is suspended.
There is one column for each type of queue parameter, and one row for each queue. If the value of any parameter for a particular queue has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 161 - FreeRTOS Queues View
The available parameters are described in the table below:
Name Description
Name The name assigned to the queue in the queue registry.
Address The address of the queue.
Max Length The maximum number of items that the queue can hold.
Item Size The size in bytes of each queue item.
Current Length The number of items currently in the queue.
#Waiting Tx The number of tasks currently blocked waiting to send to the queue.
#Waiting Rx The number of tasks currently blocked waiting to receive from the queue.
Table 11 – FreeRTOS Queue Parameters
SEMAPHORES The FreeRTOS Semaphores view displays detailed information regarding all available synchronization objects in the target system, including:
Mutexes
Counting semaphores
Binary semaphores
Recursive semaphores
The view is updated automatically each time the target execution is suspended.
There is one column for each type of semaphore parameter, and one row for each semaphore. If the value of any parameter for a particular semaphore has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 162 - FreeRTOS Semaphores View
The available parameters are described in the table below:
Column Description
Name The name assigned to the object in the queue registry.
Address The address of the object.
Type The type of the object.
Size The maximum number of owning tasks.
Free The number of free slots currently available.
#Blocked tasks The number of tasks currently blocked waiting for the object.
Table 12 – FreeRTOS Semaphore Parameters
TIMERS The FreeRTOS Timers view displays detailed information regarding all available software timers in the target system. The timers view is updated automatically each time the target execution is suspended.
There is one column for each type of timer parameter, and one row for each timer. If the value of any parameter for a particular timer has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 163 - FreeRTOS Timers View
The available parameters are described in the table below:
Name Description
Name The name assigned to the timer.
Period The time (in ticks) between timer start and the execution of the callback function.
Type The type of timer. Auto-Reload timers are automatically reactivated after expiration. One-Shot timers expire only once.
Id The timer identifier.
Callback The address and name of the callback function executed when the timer expires.
Table 13 – FreeRTOS Timer Parameters
FREESCALE MQX The kernel awareness features for MQX in Atollic TrueSTUDIO provide the developer with a detailed insight into the task structures of the MQX kernel. During a debug session, the current state of the tasks can be easily inspected in a dedicated view, in the Atollic TrueSTUDIO Debug perspective.
REQUIREMENTS The kernel awareness features described in this document are based on MQX version 4.0.
FINDING THE VIEW One view is available in the Atollic TrueSTUDIO Debug perspective when debugging an application containing the MQX real-time operating system.
It is available from the Show View toolbar dropdown list button.
Figure 164 –MQX Show View Toolbar Button
TASK SUMMARY The MQX Task Summary view displays information regarding all available tasks in the task chain, which is part of the kernel data. The task list is updated automatically each time the target execution is suspended. The items in the view can be expanded to provide more details about the tasks.
There is one column for each type of task parameter, and one row for each task. If the value of any parameter for a particular task has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 165 - MQX Task Summary View
The available parameters are described in the table below:
Name Description
Name The name assigned to the task.
ID The task base ID.
TD The Task Descriptor for the task.
Priority The task priority.
State The state of the task. The currently running task has the state Active.
Task Error Code OK or the Error Code for the task.
Table 14 – MQX Task Parameters
TASK DETAILS This is not a separate view; instead, each task in the MQX Task Summary view can be expanded to display more information about that task.
Figure 166 - MQX Task Details
In this mode the view will display extensive information about each task.
A task’s creator is also recursively displayed with its own details. The recursion is never infinite: every creator chain eventually reaches the System task, which terminates the recursion.
QUADROS RTXC The kernel awareness features for the Quadros RTXC RTOS in Atollic TrueSTUDIO provide the developer with a detailed insight into the internal data structures of the RTXC kernel. During a debug session, the current state of the RTXC kernel and the various RTXC kernel objects such as tasks, semaphores, mailboxes, etc., can be easily inspected in a set of dedicated views, in the Atollic TrueSTUDIO Debug perspective.
REQUIREMENTS The kernel awareness features described in this document are based on RTXC version 2.1.2.
FINDING THE VIEWS The Quadros RTXC kernel awareness views are available in the Atollic TrueSTUDIO Debug perspective when debugging an application containing the RTXC real-time operating system.
The views can be accessed from the Show View toolbar dropdown list button.
Figure 167 – RTXC Show View Toolbar Button
KERNEL INFORMATION The RTXC Kernel Information view displays general information about the kernel.
Figure 168 – RTXC Kernel Information View
The available system variables are described in the table below:
Name Description
Kernel Version A sixteen-bit quantity defining the version number of the Quadros RTXC kernel.
System RAM Base The base address of the system RAM.
System RAM Size The size of the system RAM.
System RAM Unused The amount of unused system RAM.
Stack Base The base address of the kernel stack.
Stack Size Displays the size of the kernel stack.
Stack Unused The number of bytes unused, high watermark.
Task Scheduling The task scheduler information (on/off).
Table 15 – RTXC Kernel Information
TASKS (TASK LIST AND STACK INFO) The RTXC Tasks view contains one Task List tab and one Stack Info tab. Each tab displays detailed information regarding all available tasks in the target system.
TASK LIST TAB
The RTXC Task List tab displays detailed information regarding all available tasks in the target system. The task list is updated automatically each time the target execution is suspended.
There is one column for each type of task parameter, and one row for each task. If the value of any parameter for a particular task has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 169 - RTXC Task List tab in Task view
The available parameters are described in the table below:
Name Description
N/A Indicates the currently running task. The currently running task is indicated by a green arrow symbol.
# The task id.
Name The name assigned to the task.
Priority The priority for the task.
Entry The task’s entry point address (in hexadecimal).
Arguments The task’s environment arguments address.
Tick Slice Ticks remaining / total ticks.
State The task’s current state.
Table 16 – RTXC Task List Parameters
STACK INFO TAB
The RTXC Stack info tab displays detailed stack information for each task. The stack information list is updated automatically each time the target execution is suspended.
There is one column for each type of stack information, and one row for each task. If the value of any parameter for a particular task has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 170 – RTXC Task Stack Info
The available parameters are described in the table below:
Name Description
# The task id.
Name The name assigned to the task.
Address The base address of the task’s stack.
Size The amount of memory allocated for the stack.
Used The number of bytes used, high watermark.
Spare The amount of stack space left over.
Table 17 – RTXC Stack Info
ALARMS The RTXC Alarms view displays detailed information regarding all available alarms in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of alarm parameter, and one row for each alarm. If the value of any parameter for a particular alarm has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 171 - RTXC Alarms View
The available parameters are described in the table below:
Name Description
# The object id.
Name The name assigned to the alarm.
Wait order The alarm’s wait order, which can be either Priority or FIFO.
Counter The alarm’s parent counter.
State The alarm’s current state.
Initial The alarm’s initial period in ticks.
Recycle The alarm’s recycle value.
Remain The number of remaining ticks.
Waiter(s) The task(s) waiting on the alarm, if any. Only the first 5 tasks are shown.
Table 18 – RTXC Alarm Parameters
COUNTERS The RTXC Counters view displays detailed information regarding all available counters in the target system. The counter information is updated automatically each time the target execution is suspended.
There is one column for each type of parameter, and one row for each counter. If the value of any parameter for a particular counter has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 172 - RTXC Counters View
The available parameters are described in the table below:
Name Description
# The object id.
Name The name assigned to the counter.
Parent The parent event source.
Accumulator The counter’s accumulator.
Count The counter’s count value.
Modulus The counter’s modulus.
Table 19 – RTXC Counter Parameters
EVENT SOURCES The RTXC Event Sources view displays detailed information regarding all available event sources in the target system. The event source information is updated automatically each time the target execution is suspended.
There is one column for each type of parameter, and one row for each event source. If the value of any parameter for a particular event source has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 173 - RTXC Event Sources View
The available parameters are described in the table below:
Name Description
# The object id.
Name The name assigned to the event source.
Counter(s) The counter(s) associated with this event source.
Accumulator The event source’s accumulator.
Table 20 – RTXC Event Source Parameters
EXCEPTION BACKTRACE The RTXC Exception Backtrace view displays detailed backtrace information during an exception.
Each line represents an exception that is either executing, or was preempted by the item above it. The topmost line shows the active component, which preempted the component listed on the second line, which in turn preempted the third, and so on.
Figure 174 - RTXC Exception Backtrace View
The available parameters are described in the table below:
Name Description
# The exception id. A zero represents the Kernel.
Name The name of the exception.
Registers The saved register context for the exception (in hexadecimal).
Table 21 – RTXC Exception Backtrace Parameters
EXCEPTIONS The RTXC Exceptions view displays one line entry for each exception in the application.
Figure 175 - RTXC Exceptions View
The available parameters are described in the table below:
Name Description
# The object id.
Name The name assigned to the exception.
Vector The vector number.
Level The interrupt level.
Old Handler Previous handler address.
Table 22 – RTXC Exception Parameters
MAILBOXES The RTXC Mailboxes view displays detailed information regarding all available mailboxes in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of mailbox parameter, and one row for each mailbox. If the value of any parameter for a particular mailbox has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 176 - RTXC Mailboxes View
The available parameters are described in the table below:
Name Description
# The object id.
Name The name assigned to the mailbox.
Wait order The mailbox’s wait order, which can be either Priority or FIFO.
Current The current number of messages in the mailbox.
Usage The total number of messages that have been placed in the mailbox. Mailbox statistics must be enabled to display this information.
Waiter(s) The task(s) waiting on the mailbox, if any. Only the first 5 tasks are shown.
Table 23 – RTXC Mailbox Parameters
MUTEXES The RTXC Mutexes view displays detailed information regarding all available mutexes in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of mutex parameter, and one row for each mutex. If the value of any parameter for a particular mutex has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 177 - RTXC Mutexes View
The available parameters are described in the table below:
Name Description
# The object id.
Name The name assigned to the mutex.
Wait order The mutex’s wait order, which can be either Priority or FIFO.
Inversion Shows whether or not priority inversion is enabled for the mutex.
Owner The task currently owning the mutex.
Nest level The current ownership nesting level.
Usage The total number of releases performed on the mutex. Mutex statistics must be enabled to display this information.
Conflicts The number of contentions that have occurred. Mutex statistics must be enabled to display this information.
Waiter(s) The task(s) waiting on the mutex, if any. Only the first 5 tasks are shown.
Table 24 – RTXC Mutex Parameters
PARTITIONS The RTXC Partitions view displays detailed information regarding all available partitions in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of parameter, and one row for each partition. If the value of any parameter for a particular partition has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 178 - RTXC Partitions View
The available parameters are described in the table below:
Name Description
# The object id.
Name The name assigned to the partition.
Wait order The partition’s wait order, which can be either Priority or FIFO.
Available The current number of available blocks in the partition.
Total The total number of blocks in the partition.
BSize The size of each block in the partition.
Usage The usage count for the partition. Partition statistics must be enabled to display this information.
Worst The low watermark for the available blocks in the partition. Partition statistics must be enabled to display this information.
Waiter(s) The task(s) currently waiting on the partition, if any. Only the first 5 tasks are shown.
Table 25 – RTXC Partition Parameters
PIPES The RTXC Pipes view displays detailed information regarding all available pipes in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of pipe parameter, and one row for each pipe. If the value of any parameter for a particular pipe has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 179 - RTXC Pipes View
The available parameters are described in the table below:
Name Description
# The object id.
Name The name assigned to the pipe.
Buffers The maximum number of buffers.
Size The size of each buffer.
Full The current number of full buffers.
Empty The current number of empty buffers.
Usage The usage count for the pipe. Pipe statistics must be enabled to display this information.
Worst The maximum full buffer count. Pipe statistics must be enabled to display this information.
Table 26 – RTXC Pipe Parameters
QUEUES The RTXC Queues view displays detailed information regarding all available queues in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of queue parameter, and one row for each queue. If the value of any parameter for a particular queue has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 180 - RTXC Queues View
The available parameters are described in the table below:
Name Description
# The object id.
Name The name assigned to the queue.
Wait Order The queue’s wait order, which can be either Priority or FIFO.
Width The size of each entry in the queue.
Depth The maximum number of entries in the queue.
Current The current number of entries in the queue.
Usage The total number of accesses to the queue. Queue statistics must be enabled to display this information.
Worst The maximum number of entries that have been in the queue. Queue statistics must be enabled to display this information.
Waiter(s) The task(s) currently waiting on the queue, if any. Only the first 5 tasks are shown.
Table 27 – RTXC Queue Parameters
SEMAPHORES The RTXC Semaphores view displays detailed information regarding all available semaphores in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of semaphore parameter, and one row for each semaphore. If the value of any parameter for a particular semaphore has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 181 - RTXC Semaphores View
The available parameters are described in the table below:
Name Description
# The object id.
Name The name assigned to the semaphore.
Wait Order The semaphore’s wait order, which can be either Priority or FIFO.
Signal Type The semaphore’s signal type. Can be either Single or Multiple.
Count The semaphore’s current count.
Usage The semaphore’s usage count. Semaphore statistics must be enabled to display this information.
Waiter(s) The task(s) currently waiting on the semaphore, if any. Only the first 5 tasks are shown.
Table 28 – RTXC Semaphore Parameters
EXPRESS LOGIC THREADX The kernel awareness features for the Express Logic ThreadX® real-time operating system in Atollic TrueSTUDIO provide the developer with a detailed insight into the internal data structures of the ThreadX kernel. During a debug session, the current state of the ThreadX kernel and the various ThreadX kernel objects such as threads, mailboxes, semaphores and software timers can be easily inspected in a set of dedicated views, in the Atollic TrueSTUDIO Debug perspective.
REQUIREMENTS The kernel awareness features described in this document are based on ThreadX Cortex-M4/GNU version G5.5.5.0.
FINDING THE VIEWS A number of debugger views are available in the Atollic TrueSTUDIO Debug perspective when debugging an application containing the ThreadX real-time operating system.
These views are available from the Show View toolbar dropdown list button.
Figure 182 – ThreadX View Top Level Menu
Figure 183 - ThreadX Show View Toolbar Button
THREAD LIST The ThreadX Thread List view displays detailed information regarding all available threads in the target system. The thread list is updated automatically each time the target execution is suspended.
There is one column for each type of thread parameter, and one row for each thread. If the value of any parameter for a particular thread has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 184 - ThreadX Thread List View
Please note that, for performance reasons, stack analysis (the Stack Usage column) is disabled by default. To enable stack analysis, use the Stack analysis toggle button in the View toolbar:
The available parameters are described in the table below:
Name Description
N/A Indicates the currently running thread. The currently running thread is indicated by a green arrow symbol.
Name The thread name.
Priority The thread priority.
State The state of the current thread. The name of the object that currently suspends a thread is presented in parentheses. For sleeping threads, the remaining sleep time (ticks) is presented.
Run Count The thread’s run counter.
Stack Start The start address of the stack area.
Stack End The end address of the stack area.
Stack Size The size of the stack area (bytes).
Stack Ptr The address of the thread stack pointer.
Stack Usage The maximum stack usage (bytes).
Table 29 – ThreadX Thread Parameters
SEMAPHORES The ThreadX Semaphores view displays detailed information regarding all available resource semaphores in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of semaphore parameter, and one row for each semaphore. If the value of any parameter for a particular semaphore has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 185 - ThreadX Semaphores View
The available parameters are described in the table below:
Column Description
Name The name of the semaphore.
Count The current semaphore count.
Suspended The threads currently suspended because of the semaphore state.
Table 30 – ThreadX Semaphore Parameters
MUTEXES The ThreadX Mutexes view displays detailed information regarding all available mutexes in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of mutex parameter, and one row for each mutex. If the value of any parameter for a particular mutex has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 186 - ThreadX Mutexes View
The available parameters are described in the table below:
Column Description
Name The name of the mutex.
Owner The thread that currently owns the mutex.
Owner Count The mutex owner count (number of get operations performed by the owner thread).
Suspended The threads currently suspended because of the mutex state.
Table 31 – ThreadX Mutex Parameters
MESSAGE QUEUES The ThreadX Message Queues view displays detailed information regarding all available message queues in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of message queue parameter, and one row for each message queue. If the value of any parameter for a particular message queue has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 187 - ThreadX Message Queues View
The available parameters are described in the table below:
Column Description
Name The name of the message queue.
Address The address of the message queue.
Capacity The maximum number of entries allowed in the queue.
Used The current number of used entries in the queue.
Free The current number of free entries in the queue.
Message size The size (in 32-bit words) of each message entry.
Suspended The threads currently suspended because of the message queue state.
Table 32 – ThreadX Message Queue Parameters
EVENT FLAGS The ThreadX Event Flags view displays detailed information regarding all available event flag groups in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of parameter, and one row for each event flag group. If the value of any parameter for a particular event flag group has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 188 - ThreadX Event Flags View
The available parameters are described in the table below:
Column Description
Name The name of the event flag group.
Flags The current value of the event flag group.
Suspended The threads currently suspended because of the state of the event flag group.
Table 33 – ThreadX Event Flag Parameters
TIMERS The ThreadX Timers view displays detailed information regarding all available software timers in the target system. The timers view is updated automatically each time the target execution is suspended.
There is one column for each type of timer parameter, and one row for each timer. If the value of any parameter for a particular timer has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 189 - ThreadX Timers View
The available parameters are described in the table below:
Name Description
Name The name of the software timer.
Remaining The remaining number of ticks before the timer expires.
Re-init The timer re-initialization value (ticks) after expiration. Contains the value 0 for One-Shot timers.
Functions The address and name of the function that will be called when the timer expires.
Table 34 – ThreadX Timer Parameters
MEMORY BLOCK POOLS The ThreadX Memory Block Pools view displays detailed information regarding all available memory block pools in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of parameter, and one row for each memory block pool. If the value of any parameter for a particular memory block pool has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 190 - ThreadX Memory Block Pools View
The available parameters are described in the table below:
Column Description
Name The name of the block pool.
Address The block pool starting address.
Used The current number of allocated blocks.
Free The current number of free blocks.
Size The total number of blocks available.
Block size The size (bytes) of each block.
Pool size The total pool size (bytes).
Suspended The threads currently suspended because of the state of the memory block pool.
Table 35 – ThreadX Memory Block Pool Parameters
MEMORY BYTE POOLS The ThreadX Memory Byte Pools view displays detailed information regarding all available memory byte pools in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of parameter, and one row for each memory byte pool. If the value of any parameter for a particular memory byte pool has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 191 - ThreadX Memory Byte Pools View
The available parameters are described in the table below:
Column Description
Name The name of the byte pool.
Address The byte pool starting address.
Used The current number of allocated bytes.
Free The current number of free bytes.
Size The total number of bytes available.
Fragments The number of fragments.
Suspended The threads currently suspended because of the state of the memory byte pool.
Table 36 – ThreadX Memory Byte Pool Parameters
TOPPERS/ASP The kernel awareness features for the TOPPERS RTOS in Atollic TrueSTUDIO provide the developer with a detailed insight into the internal data structures of the TOPPERS kernel. During a debug session, the current state of the TOPPERS kernel and the various TOPPERS kernel objects such as tasks, semaphores, mailboxes, etc., can be easily inspected in a set of dedicated views, in the Atollic TrueSTUDIO Debug perspective.
Each view for the TOPPERS RTOS contains two tabs - one tab for the hardcoded Static Information and one tab for the Current dynamic status.
REQUIREMENTS The kernel awareness features described in this document are based on TOPPERS/ASP Release 1.7.0.
FINDING THE VIEWS The views are available in the Atollic TrueSTUDIO Debug perspective when debugging an application containing the TOPPERS real-time operating system.
They are available from the Show View toolbar dropdown list button.
Figure 192 – TOPPERS Show View Toolbar Button
TASKS The TOPPERS Tasks view displays detailed information regarding all available tasks in the target system. The task list is updated automatically each time the target execution is suspended.
There is one column for each type of task parameter, and one row for each task. If the value of any parameter for a particular task has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
STATIC INFORMATION TAB
Figure 193 – TOPPERS Tasks Static Information Tab
The available system variables are described in the table below:
Name Description
ID The task base ID.
Auto start Whether the task auto starts. Displays Yes or No.
Initial Prio The initial priority for the task.
Entry The task entry function name or address.
Tex routine The task exception function name or address.
Entry Arg Displays the exinf value in hexadecimal form.
Stack Area The task stack bottom address in hexadecimal form.
Stack Size The task’s stack size in decimal form.
Table 37 – TOPPERS Tasks Static Information
CURRENT STATUS TAB
Figure 194 – TOPPERS Tasks Current Status Tab
The available system variables are described in the table below:
Name Description
ID The task base ID.
Current Prio The current priority for the task.
Status Displays Running, Dormant, Ready, Waiting, Suspended, Waiting-Suspended or Unknown.
Waiting object When Status is Waiting, this column displays Delay, Sleep, Recv DTQ, Recv PDTQ, Semaphore, EventFlag, Send DTQ, Send PDTQ, Mailbox or Mempool.
Remaining time When Status is Waiting, this column displays the remaining waiting time, or Forever.
Pending Request (active) Pend or blank.
Pending Request (wake-up) Pend or blank.
Enable Tex Enable, Disable or blank.
Tex Pattern Displays the texptn value in hexadecimal form, or blank.
Sp The stack pointer in hexadecimal form.
Remaining Stack The calculated remaining stack as an integer.
Table 38 – TOPPERS Tasks Current Status
DATAQUEUES The TOPPERS Dataqueues view displays detailed information regarding all available data queues in the target system. The list is updated automatically each time the target execution is suspended.
There is one column for each type of data queue parameter, and one row for each data queue. If the value of any parameter for a particular data queue has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
STATIC INFORMATION TAB
Figure 195 – TOPPERS Dataqueues Static Information Tab
The available system variables are described in the table below:
Name Description
ID The Data Queue base ID.
Send Task Queueing Order Displays Priority or FIFO.
Capacity The data queue quantity as a decimal value.
Dataqueue Area The data queue area address in hexadecimal form.
Table 39 – TOPPERS Dataqueue Static Information
CURRENT STATUS TAB
Figure 196 – TOPPERS Dataqueues Current Status Tab
The available system variables are described in the table below:
Name Description
ID The Data Queue base ID.
Queuing Data Count Displays the count value in decimal form.
Blocking (receive) Displays Yes if a task is waiting to receive on this object, otherwise No.
First Waiting Task (receive) Displays the ID of the first task waiting to receive, or blank if no task is waiting.
Blocking (send) Displays Yes if a task is waiting to send on this object, otherwise No.
First Waiting Task (send) Displays the ID of the first task waiting to send, or blank if no task is waiting.
Queuing Data Top Displays the address of the first queued data item in hexadecimal form, when data is queued.
Table 40 – TOPPERS Dataqueues Current Status
EVENT FLAGS The TOPPERS Event Flags view displays detailed information regarding all available event flags in the target system. The list is updated automatically each time the target execution is suspended.
There is one column for each type of event flag parameter, and one row for each event flag. If the value of any parameter for a particular event flag has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
STATIC INFORMATION TAB
Figure 197 – TOPPERS Event Flags Static Information Tab
The available system variables are described in the table below:
Name Description
ID The Event Flag ID.
Multi-task Wait Displays Yes if enabled, otherwise No.
Task Queueing Order Displays Priority or FIFO.
Auto Clear Displays Yes if enabled, otherwise No.
Initial Pattern Displays the iflgptn value in hexadecimal form.
Table 41 – TOPPERS Event Flags Static Information
CURRENT STATUS TAB
Figure 198 – TOPPERS Event Flags Current Status Tab
The available system variables are described in the table below:
Name Description
ID The Event Flag ID.
Current Pattern Displays the flgptn value in hexadecimal form.
Blocking Displays Yes if a task is waiting on this object, otherwise No.
First Waiting Task Displays the ID of the first waiting task, or blank if no task is waiting.
Table 42 – TOPPERS Event Flags Current Status
MAILBOXES The TOPPERS Mailboxes view displays detailed information regarding all available mailboxes in the target system. The list is updated automatically each time the target execution is suspended.
There is one column for each type of mailbox parameter, and one row for each mailbox. If the value of any parameter for a particular mailbox has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
STATIC INFORMATION TAB
Figure 199 – TOPPERS Mailboxes Static Information Tab
The available system variables are described in the table below:
Name Description
ID The Mailbox ID.
Task Queueing Order Displays Priority or FIFO.
Message Queueing Order Displays Priority or FIFO.
Max Priority of Message The maximum message priority in decimal form.
Table 43 – TOPPERS Mailboxes Static Information
CURRENT STATUS TAB
Figure 200 – TOPPERS Mailboxes Current Status Tab
The available system variables are described in the table below:
Name Description
ID The Mailbox ID.
Blocking Displays Yes if a task is waiting on this object, otherwise No.
First Waiting Task Displays the ID of the first waiting task, or blank if no task is waiting.
Msg Queueing Displays Yes if there are messages in this mailbox, otherwise No.
Msg Queueing Count Displays 0 when the mailbox is empty, otherwise the number of posted messages.
Table 44 – TOPPERS Mailboxes Current Status
MEMORY POOLS The TOPPERS Memory Pools view displays detailed information regarding all available memory pools in the target system. The list is updated automatically each time the target execution is suspended.
There is one column for each type of memory pool parameter, and one row for each memory pool. If the value of any parameter for a particular memory pool has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
STATIC INFORMATION TAB
Figure 201 – TOPPERS Memory Pools Static Information Tab
The available system variables are described in the table below:
Name Description
ID The Memory Pool ID.
Task Queueing Order Displays Priority or FIFO.
Block Count The number of blocks in this Memory Pool.
Block Size The size of one block in bytes, in decimal form.
Table 45 – TOPPERS Memory Pools Static Information
CURRENT STATUS TAB
Figure 202 – TOPPERS Memory Pools Current Status Tab
The available system variables are described in the table below:
Name Description
ID The Memory Pool ID.
Allocs The number of allocated blocks in decimal form.
Frees The number of free blocks in decimal form.
Blocking Displays Yes if a task is waiting on this object, otherwise No.
First Waiting Task Displays the ID of the first waiting task when a task is waiting on this object.
Table 46 – TOPPERS Memory Pools Current Status
CYCLIC HANDLERS The TOPPERS Cyclic Handlers view displays detailed information regarding all available cyclic handlers in the target system. The list is updated automatically each time the target execution is suspended.
There is one column for each type of cyclic handler parameter, and one row for each cyclic handler. If the value of any parameter for a particular cyclic handler has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
STATIC INFORMATION TAB
Figure 203 – TOPPERS Cyclic Handlers Static Information Tab
The available system variables are described in the table below:
Name Description
ID The Cyclic Handler ID.
Auto Startup Displays Yes if the Cyclic Handler is set to start automatically, otherwise No.
Cyclic Handler Entry Cyclic Handler Entry function name or address in hexadecimal form.
Handler Entry Argument The handler argument value in hex form.
Cyclic Interval The Cyclic interval in ms.
Phase Time The Phase interval in ms.
Table 47 – TOPPERS Cyclic Handlers Static Information
CURRENT STATUS TAB
Figure 204 – TOPPERS Cyclic Handlers Current Status Tab
The available system variables are described in the table below:
Name Description
ID The Cyclic Handler ID.
Starting Displays Yes if the Cyclic Handler is started, otherwise No.
Rest time until cyclic event The remaining time until the next cyclic event, in ms (decimal), when the Cyclic Handler is started.
Table 48 – TOPPERS Cyclic Handlers Current Status
ALARM HANDLERS The TOPPERS Alarm Handlers view displays detailed information regarding all available alarm handlers in the target system. The list is updated automatically each time the target execution is suspended.
There is one column for each type of alarm handler parameter, and one row for each alarm handler. If the value of any parameter for a particular alarm handler has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
STATIC INFORMATION TAB
Figure 205 – TOPPERS Alarm Handlers Static Information Tab
The available system variables are described in the table below:
Name Description
ID The Alarm Handler ID.
Alarm Handler Entry Alarm Handler Entry function name or address in hexadecimal form.
Handler Entry Argument The handler argument value in hex form.
Table 49 – TOPPERS Alarm Handlers Static Information
CURRENT STATUS TAB
Figure 206 – TOPPERS Alarm Handlers Current Status Tab
The available system variables are described in the table below:
Name Description
ID The Alarm Handler ID.
Starting Displays Yes if the Alarm Handler is started, otherwise No.
Rest time until alarm The remaining time until the alarm, in ms (decimal), when the Alarm Handler is started.
Table 50 – TOPPERS Alarm Handlers Current Status
Source Code Reviews
223 | P a g e
MICRIUM µC/OS-III The kernel awareness features for Micrium µC/OS-III™ in Atollic TrueSTUDIO provide the developer with a detailed insight into the internal data structures of the µC/OS-III kernel. During a debug session, the current state of the µC/OS-III kernel and the various µC/OS-III kernel objects, such as tasks, memory partitions, message queues, semaphores and software timers, can easily be inspected in a set of dedicated views in the Atollic TrueSTUDIO Debug perspective.
REQUIREMENTS The kernel awareness features described in this document are based on µC/OS-III V3.02.00.
FINDING THE VIEWS A number of debugger views are available in the Atollic TrueSTUDIO Debug perspective when debugging an application containing the µC/OS-III real-time operating system.
These views are available from the Show View toolbar dropdown list button.
Please note that the level of information available in the different views in Atollic TrueSTUDIO depends on the configuration of the µC/OS-III RTOS. If a feature is not enabled, the views presented in this document may contain columns showing “N/A” (Not Applicable) or “0” instead of the expected values when debugging the target system. The Micrium µC/OS-III User's Guide contains information on how the different features can be enabled in the operating system.
For example, to enable the statistics task in os_cfg.h: #define OS_CFG_STAT_TASK_EN 1u
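Several of the profiling-related columns described below depend on further configuration switches. The names in the following fragment are taken from µC/OS-III V3.02 and are shown as a hedged example only; verify them against the os_cfg.h shipped with your kernel before relying on them:

```c
/* Example os_cfg.h switches (names from uC/OS-III V3.02; verify against
 * your own configuration file). Setting these to 1u enables the data
 * behind several kernel-awareness columns. */
#define OS_CFG_DBG_EN                   1u  /* kernel debug variables          */
#define OS_CFG_TASK_PROFILE_EN          1u  /* per-task CPU usage, CtxSwCtr    */
#define OS_CFG_STAT_TASK_STK_CHK_EN     1u  /* stack checking by the stat task */
#define OS_CFG_SCHED_LOCK_TIME_MEAS_EN  1u  /* scheduler lock time measurement */
```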
Figure 207 - View Top Level Menu
Figure 208 - Show View Toolbar Button
SYSTEM INFORMATION The µC/OS-III System Information view displays a number of system variables available in the µC/OS-III kernel, such as state, version, CPU usage, different counter information, etc.
Figure 209 - µC/OS-III System Information View
The available system variables are described in the table below:
Name Description
µC/OS-III State The current status of µC/OS-III.
µC/OS-III Version The version of the RTOS.
CPU Usage The actual CPU usage of all tasks.
Idle Task Counter The idle task counter.
Statistic Task Counter The statistic task counter.
Tick Task Counter The tick task counter.
Timer Task Counter The timer task counter.
Context Switches The total number of context switches.
Interrupt Nesting Counter The interrupt nesting level counter.
Max Interrupt Disable Time The maximum interrupt disabled time (µs).
Scheduler Lock Nesting Counter The counter for the nesting level of the scheduler lock.
Max Scheduler Lock Time The maximum amount of time the scheduler was locked, irrespective of which task did the locking.
Table 51 – µC/OS-III System Variables
TASK LIST The µC/OS-III Task List view displays detailed information regarding all available tasks in the target system. The task list is updated automatically each time the target execution is suspended.
There is one column for each type of task parameter, and one row for each task. If the value of any parameter for a particular task has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 210 - µC/OS-III Task List View
The available parameters are described in the table below:
Name Description
N/A Indicates the currently running task, which is marked with a green arrow symbol.
Name The task name.
Prio The task priority. A low number indicates high priority.
State The current state of the task.
Pend On The type of the object the task is waiting on and, in parentheses, the name of the actual object.
Ticks Rem The amount of time (ticks) remaining for a delayed task to become ready-to-run or for a pending task to time out.
CPU Usage The task CPU usage.
CtxSwCtr The number of times the task has executed (switched in).
IDT (Interrupt Disable Time) The maximum amount of time (µs) interrupts have been disabled by the task.
SLT (Scheduler Lock Time) The maximum amount of time (µs) the scheduler has been locked by the task.
Stack Info The stack information: Used/Free/Size, expressed in number of stack entries.
Stack Usage The stack usage.
Task Queue Task queue information: Current/Maximum/Size.
Task Queue Sent Times Task queue sent times: Latest/Maximum. The amount of time (µs) it took for a message to be sent and actually read by the task.
Task Sem Ctr The number of times the task has been signaled while the task was not able to run.
Task Sem Signal Times Task semaphore signal times: Latest/Maximum. The amount of time (µs) it took for the task to execute after the semaphore was signaled.
Table 52 – µC/OS-III Task Parameters
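The Stack Usage column is derived from the Stack Info triple. A minimal sketch of the calculation (a hypothetical helper, not TrueSTUDIO or µC/OS-III code), with all quantities expressed in stack entries:

```c
#include <stdint.h>

/* Hypothetical helper: percentage shown in the "Stack Usage" column,
 * computed from the Used and Size fields of "Stack Info"
 * (Used/Free/Size, all in stack entries). */
static uint32_t stack_usage_percent(uint32_t used_entries, uint32_t size_entries)
{
    if (size_entries == 0U) {
        return 0U;              /* guard: no stack information available */
    }
    return (used_entries * 100U) / size_entries;
}
```

For example, a task with Stack Info 64/192/256 would show 25 % usage.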
SEMAPHORES The µC/OS-III Semaphores view displays detailed information regarding all available resource semaphores in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of semaphore parameter, and one row for each semaphore. If the value of any parameter for a particular semaphore has changed since
the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 211 - µC/OS-III Semaphores View
The available parameters are described in the table below:
Column Description
Item The semaphore item counter.
Name The name of the semaphore.
Counter The current semaphore count.
Time Stamp The semaphore last signal time (µs).
Pend List Entries The number of tasks pending on the semaphore.
Pend List List of tasks pending on the semaphore. Highest priority tasks are sorted first in the list.
Table 53 – µC/OS-III Semaphore Parameters
MUTEXES The µC/OS-III Mutexes view displays detailed information regarding all available mutexes in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of mutex parameter, and one row for each mutex. If the value of any parameter for a particular mutex has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 212 - µC/OS-III Mutexes View
The available parameters are described in the table below:
Column Description
Item The mutex item counter.
Name The name of the mutex.
Owner The name of the task that currently owns the mutex.
Owner Org Prio The owning task's original priority (the priority may have been raised due to priority inheritance).
Owner Nest Ctr The owning task's nesting counter: the number of times the owning task has acquired the mutex.
Time Stamp Latest release time (µs).
Pend List Entries Number of tasks pending on the mutex.
Pend List List of tasks pending on the mutex. Highest priority tasks are sorted first in the list.
Table 54 – µC/OS-III Mutexes Parameters
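The Owner Org Prio column exists because of priority inheritance: while a higher-priority task pends on the mutex, the owner temporarily runs at that task's priority. A sketch of the rule (illustrative helper, not kernel source; a lower number means higher priority):

```c
#include <stdint.h>

/* Illustrative: the priority the owner effectively runs at while tasks
 * pend on its mutex. The original priority ("Owner Org Prio") is kept
 * so it can be restored when the mutex is released. */
static uint8_t effective_owner_prio(uint8_t owner_org_prio,
                                    uint8_t highest_pending_prio)
{
    return (highest_pending_prio < owner_org_prio)
               ? highest_pending_prio   /* inherit the higher priority */
               : owner_org_prio;
}
```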
MESSAGE QUEUES The µC/OS-III Message Queues view displays detailed information regarding all available message queues in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of message queue parameter, and one row for each message queue. If the value of any parameter for a particular message queue has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 213 - µC/OS-III Message Queues View
The available parameters are described in the table below:
Column Description
Item The message queue item counter.
Name The name of the message queue.
Size The maximum number of entries allowed in the queue.
Entries The current number of entries in the queue.
Max entries The peak number of entries in the queue.
Pend List Entries The number of tasks pending on the queue.
Pend List List of tasks pending on the queue. Highest priority tasks are sorted first in list.
Table 55 – µC/OS-III Message Queue Parameters
EVENT FLAGS The µC/OS-III Event Flags view displays detailed information regarding all available event flag groups in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of parameter, and one row for each event flag group. If the value of any parameter for a particular event flag group has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 214 - µC/OS-III Event Flags View
The available parameters are described in the table below:
Column Description
Item The event flag group item counter.
Name The name of the event flag group.
Flags The current value of the event flag group.
Time Stamp The last time the group was posted to.
Pend List Entries The number of tasks pending on the event flag group.
Pend List List of tasks pending on the event flag group. Highest priority tasks are sorted first in list.
Table 56 – µC/OS-III Event Flag Parameters
TIMERS The µC/OS-III Timers view displays detailed information regarding all available software timers in the target system. The timers view is updated automatically each time the target execution is suspended.
There is one column for each type of timer parameter, and one row for each timer. If the value of any parameter for a particular timer has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 215 - µC/OS-III Timers View
The available parameters are described in the table below:
Name Description
Item The timer item counter.
Name The name of the software timer.
Type The type of the timer.
State The state of the timer.
Match The time when the timer expires.
Remain The time remaining before the timer expires.
Delay The expiration time for one-shot timers and initial delay for periodic timers.
Period The timer period (for periodic timers).
Callback The address and name of the function to call when the timer expires.
Table 57 – µC/OS-III Timer Parameters
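The Match and Remain columns are related through the current tick counter. A sketch of the relationship (hypothetical helper; µC/OS-III performs this bookkeeping internally):

```c
#include <stdint.h>

/* Hypothetical helper: ticks remaining before a timer expires, given
 * the absolute expiry time ("Match") and the current tick counter. */
static uint32_t timer_remain(uint32_t match, uint32_t now)
{
    return (match > now) ? (match - now) : 0U;   /* 0 once expired */
}
```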
MEMORY PARTITIONS The µC/OS-III Memory Partitions view displays detailed information regarding all available memory partitions in the target system. The view is updated automatically each time the target execution is suspended.
There is one column for each type of parameter, and one row for each memory partition. If the value of any parameter for a particular memory partition has changed since the last time the debugger was suspended, the corresponding row will be highlighted in yellow.
Figure 216 - µC/OS-III Memory Partitions View
The available parameters are described in the table below:
Column Description
Item The memory partition item counter.
Name The name of the memory partition.
Total The number of memory blocks available from the partition.
Free The number of free memory blocks available from the partition.
Block size The size of each memory block in the partition.
Table 58 – µC/OS-III Memory Partitions Parameters
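The Total, Free and Block size columns map onto the classic fixed-block partition layout: a free list threaded through the blocks themselves. The following is an illustrative model written under that assumption, not µC/OS-III source:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of a fixed-block memory partition (not uC/OS-III
 * source). "Total", "Free" and "Block size" correspond directly to the
 * view columns described above. */
typedef struct {
    void     *free_list;   /* head of the linked list of free blocks */
    uint32_t  blk_size;    /* Block size: bytes per block            */
    uint32_t  nbr_max;     /* Total: blocks in the partition         */
    uint32_t  nbr_free;    /* Free: blocks currently available       */
} mem_partition_t;

/* Carve 'storage' into nbr_max blocks of blk_size bytes each.
 * blk_size must be at least sizeof(void *) to hold the link. */
static void part_create(mem_partition_t *p, void *storage,
                        uint32_t nbr_max, uint32_t blk_size)
{
    uint8_t *blk = (uint8_t *)storage;
    uint32_t i;

    p->free_list = storage;
    p->blk_size  = blk_size;
    p->nbr_max   = nbr_max;
    p->nbr_free  = nbr_max;
    for (i = 0U; i < nbr_max - 1U; i++) {        /* link block i -> i+1 */
        *(void **)(blk + i * blk_size) = blk + (i + 1U) * blk_size;
    }
    *(void **)(blk + (nbr_max - 1U) * blk_size) = NULL;
}

static void *part_get(mem_partition_t *p)
{
    void *blk = p->free_list;
    if (blk != NULL) {
        p->free_list = *(void **)blk;            /* pop from free list */
        p->nbr_free--;
    }
    return blk;
}

static void part_put(mem_partition_t *p, void *blk)
{
    *(void **)blk = p->free_list;                /* push back on list  */
    p->free_list  = blk;
    p->nbr_free++;
}

/* Demo: 4 blocks of 16 bytes; allocate two, release one. */
static uint32_t part_demo(void)
{
    static uint64_t storage[8];   /* 64 bytes, aligned for pointers */
    mem_partition_t p;
    void *a;

    part_create(&p, storage, 4U, 16U);
    a = part_get(&p);
    (void)part_get(&p);
    part_put(&p, a);
    return p.nbr_free;            /* 4 - 2 + 1 */
}
```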
SOURCE CODE REVIEW This section provides information on how to perform source code reviews and hold code review meetings with Atollic TrueSTUDIO® for ARM®.
This section covers information on the following topics:
Introduction to source code reviews and code review meetings
The Review perspective and related views
Creating and configuring review sessions
Performing a 3-step source code review
Additional reading and available templates and appendices
INTRODUCTION TO CODE REVIEWS Atollic TrueSTUDIO® for ARM® has integrated tool support for performing source code reviews and code review meetings. Code review is one of the most cost-effective ways of improving software quality. To learn more about code reviews, please visit the white paper section on the Atollic website and read our white paper on source code review. The tool support can be deployed in projects of any size, from a single developer to a larger team. In this chapter, the project is assumed to contain more than one team member.
In order to put the concepts and terminology used in Atollic TrueSTUDIO into context, the two flow charts below are provided.
The first flow chart shows a commonly deployed software review workflow. The second flow chart shows the individual source code review steps available in Atollic TrueSTUDIO. The dashed lines between the two flow charts map the steps of one flow chart to the corresponding steps in the other.
Figure 217 – Atollic TrueSTUDIO Support for the Code Review Workflow
Process step Traditional content Tool support in Atollic TrueSTUDIO
Planning / Create review
The review is planned by the moderator. A review topic is determined and the work product is outlined.
Create the review, choose authors, work product and configure problem types, resolution types, severity levels, etc.
Overview / Individual or Group phase
The author describes the background of the work product and the reviewers are educated regarding the work product at hand.
Once a review is created, a start-up meeting may be held where the author and the reviewers go through the work product in order to get an overview.
Preparation / Individual phase
Each reviewer examines the work product to identify possible defects.
Each reviewer examines the work product to identify possible defects.
Review meeting / Group phase
During this meeting the chairman goes through the work product, part by part, and the reviewers point out the defects found for each part.
The accumulated defects found by the group of reviewers are discussed.
Rework
The author makes changes to the work product according to the action plans from the inspection meeting.
The author makes changes to the work product according to the action plans from the review meeting.
Follow-up The changes made by the author are checked to make sure everything is correct.
Table 59 – Atollic TrueSTUDIO Support for the Code Review Workflow
PLANNING A REVIEW – REVIEW ID
CREATION A prerequisite for efficiently deploying code reviews within your project team is access to a shared Atollic TrueSTUDIO project, either using a version control system (recommended) or a network drive. All review comments are saved as XML-formatted files in a selectable folder within the Atollic TrueSTUDIO project.
The comments may thus be shared between reviewers using a commonly accessed version control system. This is a big advantage, as no server-side database needs to be installed, configured and administered to perform code reviews. The normal version control system, such as Git or Subversion, is used for team collaboration.
In order to perform a code review, the first step is to create a review ID for the specific code review session. Creating a review ID is typically done by a moderator, who may be a team leader or an employee from the quality assurance department. This is a simple operation where the user is prompted to configure the following options:
Review ID (= name) and description
Review comment classification types
Review comment severities
Review comment resolution decisions
Review comment statuses
Work product content
Authors/Reviewers for the review
The steps to create a specific code review session can be greatly simplified by taking the time to create a project, or company standard, review template. All future reviews can then be based on this review template. The moderator will thus only be required to configure most of the above options once for each TrueSTUDIO workspace. This is described in the next chapter.
Review comments are stored as XML formatted files in a selectable folder within the TrueSTUDIO project; one file for each reviewer. The overall review settings are saved as a hidden XML formatted file in the project root folder in the workspace.
A review ID is tightly connected to inspection of resources (files) for one TrueSTUDIO project. A review ID should preferably not contain whitespace, as it will be part of the review storage file name.
CREATING A REVIEW ID In order to create a review ID the user must access the properties of the Atollic TrueSTUDIO project that contains the desired work product. This is done by performing the following steps:
1. Select the project in Project Explorer docking view.
2. Click on the Build Settings toolbar button or right-click on the project in the Project Explorer view and click Properties.
Figure 218 – Project Properties Menu Selection
3. Select the Review node
Figure 219 - GUI for Creating and Managing Code Reviews
4. In the dialog box, the user may choose to add a new review, or edit or remove an existing one.
5. Click New to add a new Review ID.
Figure 220 - Dialog for Creating a New Review ID
6. Give the review a Review ID, i.e. a name. It is recommended not to use whitespace, as the ID will be part of the file name. Also provide a short description for the meeting. Click Next.
7. The next step determines the work product for the meeting. Choose which files will be subject to this review. Use the buttons to Add and Remove files.
Figure 221 - Dialog for Managing the Work Product of a Review
8. Create a Reviewer ID by clicking Add and entering a reviewer name. Repeat for each reviewer that will attend the meeting. The review issues collected by each reviewer will be stored in corresponding XML-formatted files. It is recommended not to use whitespace in the reviewer IDs.
Figure 222 - Add Reviewers to the Review
9. Select an author among the reviewers. The review issues identified in the Team Phase will be assigned to the author by default. Naturally, an explicit assignment overrides the default.
Figure 223 - Choose Author for the Review Session
10. In this step, it is possible to configure available parameter options for the review comments. The parameter options are:
Review comment classification types
Review comment severities
Review comment resolution decisions
Review comment statuses
Figure 224 - Review Comment Parameter Options
11. It is possible to set a default option for each of the above review parameters. This will be used unless an option is chosen explicitly when a review comment is created or modified.
Figure 225 - Setting Default Options for Review Parameters
12. Choose the folder name where review issue data will be saved within the project. This folder will be stored at the root level of the corresponding project. It is possible to put the review issue data in a subfolder, e.g. “ProjectName/reviews/MileStone1_2013-01-02/”, by using “/” (forward slash). That example would be specified as: “reviews/MileStone1_2013-01-02”
Figure 226 - Naming the Review Issue Data Folder
13. In the final step the user can customize which information shall be shown in the Review Table view during the three different phases: Individual, Team and Rework. This is done by setting up filters. These filters can be toggled on and off in the Code Review Table view during the inspection.
Figure 227 - Filter Settings for the Different Phases
Individual Phase – The default filter allows the reviewers to view their own comments only. It is recommended to keep this filter, so that reviewers are not biased by each other’s review comments.
Team Phase – The default filter allows the moderator and the reviewers to view only comments which have “Resolution: Unset”. This means that only review comments that still require a decision are shown.
Rework Phase – This phase is relevant to reviewers that have had review issues assigned to them (typically the author) during the Team Phase. The default filter allows such a reviewer to view only the issues assigned to him or her that, in addition, have “Status: Open”.
Click Finish to save all settings for the specific Review ID.
14. As a final, and very important, step, make sure to commit the review settings file which resides in the project root folder and is called .code_review_properties to the version control system. Configuration files are typically hidden from the rest of the project resources by using a leading “.” (dot-character) in the filename. A file with a leading “.” in the filename will not be shown by the Project Explorer docking view. In order to commit this file the user must open the Navigator docking view which also shows hidden configuration files.
TAILORING A REVIEW ID TEMPLATE The DEFAULT review ID contains the template settings on which all future reviews will be based. A company conducting regular code reviews can save a lot of time by making sure that the DEFAULT review ID correlates well with the terminology used in the company process for code reviews and issue tracking. The following information is transferred from the template to any freshly created review ID:
Review comment classification types
Review comment severities
Review comment resolution decisions
Review comment statuses
Default selections
Authors for the review
Phase filter selections
The DEFAULT review ID template can be edited from the Review panel in the Project Properties by selecting DEFAULT and clicking Edit…
Figure 228 - Editing the DEFAULT Review Template
The user may also choose to remove a Review ID by clicking Remove… in the Review panel in the Project Properties. This will remove the corresponding sections from the review settings file and all individual reviewer files containing the individual review issue data.
CONDUCTING A SOURCE CODE REVIEW The source code review is conducted in a separate perspective called the Code Review perspective. This is accessed from the Open perspective toolbar button, or from the menu command View > Open perspective > Code Review.
Figure 229 - Code Review Selected via Open Perspective Command
The Code Review perspective contains a number of unique views and toolbar buttons.
Figure 230 - The Code Review Perspective
The Code Review perspective has a toolbar adapted for navigation of review issues. The toolbar has the following buttons and functionality:
Button Name Description
Refresh Opens/refreshes the code review session
Individual Phase Log into the individual phase and add code review comments
Team Phase Log into the Team Phase, and perform a code review meeting
Rework Phase Log into the Rework Phase, and correct the problems assigned to you at the code review meeting
Table 60 - Code Review Toolbar Buttons
The following docking views are primarily associated to the code review perspective:
The main editor area – The editor area of the perspective is needed to review the source code files.
The Code Review Table view – This is the list of review issues. Different sets of issues will be listed depending on the selected Phase and Reviewer.
The Code Review Editor view – An editor showing the current issue being created or modified. The editor view provides different toolbar buttons depending on the current phase of the review.
Figure 231 – The Code Review Table View
Button Name Description
Go to the source code Select a file from the work product to review
Edit the code review Edit the settings for this specific code review
Add code review issue Adds a code review issue associated with the code line the cursor is currently on
Delete code review issue Deletes the currently selected code review issue
Filters… Apply the filter setup for this code review
Table 61 - Code Review Table View Toolbar Button Description
Figure 232 – The Code Review Editor View
Button Name Description
Go to the source code Select a file from the work product to review
Next Next review issue in the review table. Also saves any changes made to the current review issue.
Previous Previous review issue in the review table. Also saves any changes made to the current review issue.
Save Save changes to current review issue
Clear Clears the content of the editor
Table 62 – The Code Review Editor View Toolbar Button Description
INDIVIDUAL PHASE In order to start working in the individual phase and add review comments, the reviewer must use their own associated reviewer ID to log into a review session. The user does this by clicking on the toolbar button Individual Phase in the Code Review perspective.
Figure 233 - Individual Phase Selected in the Code Review Toolbar
The Review ID selection dialog will appear when the user clicks any of the three code review phase related toolbar buttons. A Review ID must be chosen so that the associated work product is shown to the reviewer. The user must also choose his or her name from the Reviewer ID drop-down menu. This ensures that all review issues found are associated with the specified Reviewer ID.
Figure 234 - Reviewer ID Selection Dialog
When a user has logged into the individual phase of a certain Review ID, it is possible for him or her to start adding review comments. This is done by reviewing the work product.
The work product can be browsed by using the Go to the source code button in the Code Review Table.
Figure 235 - The Source Code Button & Drop-Down Menu
By selecting a file from the drop-down menu, it will be opened in the Editor window of the IDE. Right-click on any code line, or in the editor margin, and a menu containing the Add Code Review Issue… option will appear.
Figure 236 - Add Code Review Issue...
When clicking Add Code Review Issue… the reviewer will be hyper-linked into the Code Review Editor view where a new review issue is being created.
Figure 237 – A Code Review Issue in the Review Editor View
The top of the Code Review Editor view shows information about who found the review issue, in which file and at which code line. The user may also choose to select a code block, right-click, and then click Add Code Review Issue… In this case the content of the code block will be copied into the description field of the review issue. The type and severity fields are mandatory information for each review issue.
The type field identifies the type of review issue and the severity field defines the severity level for the current issue.
After entering all information into the review issue being added, click the Save button in the Code Review Editor view. Upon saving, the review issue will become visible in the Code Review Table view. A review marker will also be added to the left margin of the main editor window.
Figure 238 - Review Marker Displayed on Editor Line 101
Go through the different files included in the work product and add review comments. When an individual user has finished reviewing the work product, he/she must remember to commit the .review file to the version control system. This enables other reviewers to access the review issues by retrieving the files them from the version control system.
The path to the .review file was specified during the review configuration phase. By default, the file is saved in the review subfolder within the project folder.
TEAM PHASE In this phase the team members gather in a code review meeting to discuss all the review issues found by the individual reviewers. Before starting this phase it is important that the review moderator ensures that all reviewers have committed their .review files, and then updates his or her own local copy of each .review file from the version control system. If this is not done properly, the issues from one or more reviewers will not be taken into account, and will not show up in the collaborative Code Review Table view.
To start the team phase (code review meeting), click on the Team Phase toolbar button in the Code Review perspective.
Figure 239 - Team Phase Toolbar Button
The Review ID Selection dialog will appear where the user is prompted to select a Project, a Review ID and a Reviewer ID. The Reviewer ID in this phase is typically the author of the work product under review or the moderator hosting the meeting.
All review issues collected by all reviewers are now displayed in the Code Review Table view (provided that the review comment files have been committed to the version control system, and have subsequently been updated to the computer being used for the code review meeting).
Click on any review issue in the Code Review Table view and its content will be shown in the Code Review Editor view. If an existing review issue is modified in this phase, make sure to click the Save, Next, or Previous button; any of these saves the changes. By double-clicking on any review issue in the Code Review Table view, the associated source code lines are also shown in the editor area of the IDE.
Review issues can also be navigated from the Code Review Editor view by using the Next and Previous buttons. The Go to the source code button allows jumping from the Code Review Editor view into the source code.
Figure 240 - Code Review Editor View Content in Team Phase
The Assigned To field contains the author’s Review ID by default, but can be changed to the Reviewer ID of any other team reviewer. When the group has reached a decision on how to handle the review issue at hand, the Resolution field must be changed to reflect this decision. The Annotation field allows additional information to be added.
By single-clicking on a review marker in the source code, the summary and description of the review issues for the specific code line will be shown in a tooltip.
Figure 241 - Review Markers and Tooltip Information in the Editor
Review markers are also subject to the filter that is configured for the current phase of the code review. For example, if the filter is set to show Resolution: Unset issues only, then only review markers associated with such review issues will be shown.
When all review issues have been handled in the code review meeting, i.e. a reviewer has been assigned and a resolution has been chosen for every review comment, it is important to remember to commit the .review files to the version control system. Once this is done, all reviewers can access the decisions from the Team Phase, and the code review can enter the Rework Phase.
REWORK PHASE In this phase each reviewer works on the review issues that were assigned to him or her at the code review meeting, in order to implement the agreed resolutions. Before starting this phase it is important that each reviewer updates the folder containing the .review files from the version control system. If this is not done, the reviewer will not be able to access any assigned-to or resolution information from the code review meeting.
To start this phase click on the Rework Phase toolbar button in the Code Review perspective.
Figure 242 - Rework Phase Toolbar Button
The Review ID Selection dialog will appear, where the user is prompted to select a Project, a Review ID and a Reviewer ID.

In the Code Review Table view, the user will only see the review issues assigned to the reviewer selected in the Review ID Selection dialog when entering this phase. The purpose of this phase is that each reviewer addresses the review issues assigned to him or her.
The Code Review Editor view now contains the fields Status, Resolution and Revision. The Status field allows the status of each review issue to be changed. The Resolution field simply states the agreed resolution decided at the code review meeting; it cannot, and should not, be changed in this phase. The Revision field provides the possibility to write a comment related to the implemented resolution.
Figure 243 - Code Review Editor View Content in the Rework Phase
When all fields are filled in for the review issue at hand, the reviewer must click Save before the Next and Previous buttons can be used to view the next review issue to fix. When updated statuses for all review issues have been saved, the Code Review Table view will be empty. The reviewer must then remember to commit the .review file to the version control system so that the moderator can verify that everything has been fixed.
ADDITIONAL SETTINGS The Code Review Table view can also be customized temporarily without overwriting the .code_review_properties file. This is done from the Preference settings found in the Code Review Table view toolbar.
Figure 244 - Accessing Code Review Preference Settings
In this customization dialog the user may change filters that are applied on the Code Review Table view in order to show only review issues that have a certain parameter combination. It is also possible to tailor which columns, and thereby which parameters, are visible in each phase.
Figure 245 - Customize Filters Applied for All Phases
Figure 246 - Customize Visible Code Review Table Columns
STATIC CODE ANALYSIS This section provides information on how to do static source code analysis with Atollic TrueINSPECTOR®, which is included in all Atollic TrueSTUDIO for ARM Pro and Premium installations.
This section covers information on the following topics:
Code complexity analysis
Selecting coding standard rules
Running a code inspection
Analyzing the code inspection results
Static Code Analysis
256 | P a g e
INTRODUCTION Atollic TrueSTUDIO Pro and Premium includes Atollic TrueINSPECTOR®, a professional tool for static source code analysis that helps you to find potential bugs automatically. By using Atollic TrueINSPECTOR, you can easily improve the quality of your software product.
Automated static source code analysis is the process in which a software tool analyzes the source code of an application, and automatically detects potential bugs or other types of problems.
The tool parses the source code of the application and analyzes how the source code is written. Static source code analysis is typically divided into two different areas:
Coding standards compliance
Source code metrics
Atollic TrueINSPECTOR performs static source code analysis and generates source code metrics. The source code is validated against a database of formal coding standards, and coding constructs that are known to be error-prone are detected automatically, thus reducing the number of errors. This in turn reduces development/debug/test time, reduces development cost, and improves the quality of the software product.
CODING STANDARDS COMPLIANCE MISRA (The Motor Industry Software Reliability Association) was established as a collaboration between various vendors in the automotive industry, with the purpose to promote best practices in developing safety-critical systems for road vehicles and other types of embedded systems.
MISRA-C:2004 is a coding standard for the C programming language, developed by MISRA. Its purpose is to identify a subset of the C language that improves safety, portability, reliability and maintainability. MISRA-C:2004 contains 141 coding rules, which limit the flexibility of how the source code can be written. Following the MISRA-C coding standard ensures that unsafe or unreliable coding constructs are not used, thus improving software quality and reducing the time spent on debugging.
Atollic TrueINSPECTOR performs MISRA-C:2004 checking, automatically verifying source code compliance and pointing out any code lines that break any of the coding standard rules. The analysis results are presented in textual form as well as in easy-to-understand graphical charts.

Atollic TrueINSPECTOR runs in demo mode in Atollic TrueSTUDIO Lite.

Atollic TrueINSPECTOR only works with managed mode projects, not makefile mode projects.
Developers can configure the rules to enable or disable at specific code inspection sessions, and rule violations are directly connected to the corresponding lines in the C/C++ editor. For each violation, Atollic TrueINSPECTOR gives an example of code that triggers the violation, and provides an example of the recommended coding style that solves the reported problem.
SOURCE CODE METRICS In addition to the benefits achieved by complying with a coding standard that enforces best-practice and defensive coding, code metrics can be used to further reduce the likelihood of software problems and to improve maintainability.
Additionally, Atollic TrueINSPECTOR® computes source code metrics at project, file and function level. This enables developers to view statistics on parameters like the number of lines in files, the number of lines in functions, the commenting level, and C function code complexity.
With a better understanding of which parts of the code are too complex, they can be rewritten using less complex coding constructs. Avoiding complex code constructs is a good way to improve maintainability and reduce the risk of errors, thus reducing development time and increasing product quality.
REPORTS Atollic TrueINSPECTOR generates various types of reports that can be exported in Microsoft® Word®, Microsoft® Excel®, HTML and PDF formats, for offline analysis or as technical proof of compliance for management or customers. See the chapter Generate a Report on page 275.
SELECTING CODING STANDARD RULES Before a code analysis session is started, please configure the coding standard rules as follows.
1. Select the project in the Project explorer, and select the File, Properties menu command to open the Properties dialog box. Then select the Testing, Rule Setting panel:
Figure 247 - The Rule Setting Panel
2. Expand the MISRA 2004 node, and select one of the rules:
Figure 248 - Selecting Rules
The Description field explains the purpose of the selected coding standard rule. Any rule can be enabled or disabled as appropriate using the corresponding checkboxes. Use the Import… and Export… buttons to store rule selection configurations in an external file.
3. Optionally exclude files or folders from analysis, using the Testing, Exclusion, TrueINSPECTOR panel.
Figure 249 - Excluding files and folders from analysis
Click the OK button to apply any changes made, or Cancel to exit with no changes.
RUNNING A CODE INSPECTION Once the coding standard rules have been configured, a source code analysis session can be started:
1. Switch to the Static code analysis perspective by clicking the Open Perspective toolbar button and select the Static code analysis perspective.
Figure 250 – Opening the Static Code Analysis Perspective
2. Select the project or file to be analyzed; then press the Static code analysis toolbar button.
Figure 251 - Starting a Code Analysis Session
If the Static Code Analysis button is grayed out, no resource that can be analyzed has been selected. In such a case, please select a file or a project to analyze in the Project Explorer.
3. Atollic TrueINSPECTOR starts analyzing the source code and presents the analysis results in various views. The Static code analysis perspective has several new docking views specific to source code analysis:
Figure 252 - The Static Code Analysis Perspective
Additionally, the editor is annotated with MISRA violations information:
Figure 253 - MISRA violations in the editor
ANALYZING THE RESULTS Once a static code analysis session has been performed, a number of docking views display the results:
The Violation summary view
The Violation view
The Code Metric view
The Rule Description view
The screenshot below shows the default view layout in the static source code analysis perspective:
Figure 254 - The static code analysis perspective
THE VIOLATION SUMMARY VIEW The Violation Summary view gives an overview of the result, including a sorted text list and a graphical pie chart for the most important results. By selecting different values in the Table category drop-down list, the view can display other types of information as well.
The Rule table category mode displays statistics on how many times different MISRA rules have been violated:
Figure 255 - The Inspection Summary View (Rule mode)
The Severity table category mode displays statistics on how many rules of the different MISRA severity levels (Required or Advisory) have been violated:
Figure 256 - The Inspection Summary View (Severity mode)
The Source table category mode displays statistics on how many MISRA violations there are in each file:
Figure 257 - The Inspection Summary View (Source mode)
The Syntax Error table category mode displays statistics on how many files could not be analyzed due to syntax errors:
Figure 258 - The Inspection Summary View (Syntax Error mode)
THE RULE DESCRIPTION VIEW By clicking on any rule in the various other views, the Rule Description view will explain the details of that rule, including the bad coding practice that triggers the violation and a recommendation for an alternative, better, coding practice. The Rule Description view is thus a very important tool for learning and improving the coding style over time.

The Total Violations figure expresses the total number of violations. It is a count of the number of rules that are violated; duplicates on the same line are not counted.
Select a rule in the Violation Summary view list to activate the Rule Description view:
Figure 259 - The Rule Description View
VIOLATION VIEW The Violation view lists the actual violations (connected to specific file:line instances) sorted by file name:
Figure 260 - The Violation View
A hash-sign (#) before the rule name indicates that the rule is global. A global rule is a rule that needs to be checked in more than one file to be confirmed as violated.
By using the drop-down arrow toolbar button, you can select other sorting and grouping options, for example group by MISRA rule number instead:
Figure 261 - Individual Violations
By double-clicking on any violation, the corresponding file:line is opened automatically in the editor.
To generate a report, click on the Generate report toolbar button.
Figure 262 – Generate Report Toolbar Button
A dialog box is now displayed. Enter the name of the report(s), the desired output directory as well as the file format.
Figure 263 - The Report Generator Dialog Box
Click on the Export button to generate the reports. Once the report(s) are exported, Atollic TrueINSPECTOR gives you the option to open the folder where the reports were generated:
Figure 264 - The Open Folder Message Box
Click the Open Folder button to view the folder containing the report files. Double-click to open any of the generated reports. Atollic TrueINSPECTOR selects a suitable viewer (such as Adobe® Acrobat® Reader or Microsoft® Excel®) depending on the file type.
Figure 265 - View the Generated Report
INTRODUCTION TO CODE COMPLEXITY ANALYSIS Measuring and managing software complexity is a great way of reducing the risk of bugs, lowering development and testing costs, as well as simplifying maintenance. To support development engineers in this important task, Atollic TrueSTUDIO is the first embedded IDE to include a source code metrics and code complexity measurement feature as standard.
Code complexity is usually presented using a standard measure called the cyclomatic value of code complexity. This measure was introduced by Thomas McCabe in 1976. It calculates the amount of decision logic in a software block, typically a C function. It is therefore an excellent predictor of bug probability, and can also be used to provide judgments on the soundness and confidence for a software implementation in general.
The cyclomatic value of code complexity measures the number of independent execution paths (the structural complexity) through a function, assigning a numerical value to the complexity of each C function. The more iterations and conditional code a function has, the higher its complexity level.
A complex function is likely to contain more errors, and to be more difficult to understand, test and maintain. Overly complex functions should be simplified by rewriting them or splitting them into several smaller functions, thus creating less error-prone functions of reduced complexity.
In addition to aiding decisions on rewriting and refactoring source code, code complexity analysis can also be used to guide manual source code inspections. For various reasons, many companies do not perform source code reviews, despite this being one of the most cost-effective ways of improving software quality quickly. Code complexity analysis can, however, be used to flag certain risky functions, which would benefit from a code inspection even if most other functions are not inspected manually.
The cyclomatic complexity level also equals the minimum number of test cases that must be executed to cover all possible execution paths through a C function (i.e. full branch coverage) during software testing. Code complexity analysis therefore also helps in creating or assessing test procedures. In general, it is advisable to carefully test functions with a high complexity level.
RECOMMENDED LEVEL OF CODE COMPLEXITY While there are many good reasons to limit code complexity in software projects, the exact number to use as an upper limit has been much debated in the industry. Thomas McCabe originally proposed that functions with a complexity level up to 10 are acceptable, but complexity limits as high as 15 have been used in the industry as well. In safety-critical projects, on the other hand, the acceptable limit may be only 3 or 5.
In other words, an organization can use any complexity limit it prefers. However, it should not accept complexity figures in excess of 10 unless it is willing to devote the additional testing effort required by the more complex software implementation. Regardless of the exact limit, it should be considered alarming if a function becomes more complex than level 20.
A frequently quoted table provides the following rules-of-thumb advice:
Complexity level   Type of function                         Risk of bugs
1-4                A simple function                        Low
5-10               A well-structured function               Low
11-20              A complex function                       Moderate
21-50              An alarmingly complex function           High
>50                An error-prone and untestable function   Very high
Table 63 – Relation between Complexity Level and Risk of Bugs
The following, often quoted, table lists the probability of a new bug being introduced when fixing another already existing bug:
Complexity level   Risk of introducing a new bug when fixing another already existing bug
1-10               5%
20-30              20%
30-60              40%
>60                60%
Table 64 – Risk of Introducing New Bug When Correcting Existing
Finally, to aid in source code review decisions, a good rule of thumb is that functions with a code complexity level of 10 or more ought to undergo a manual source code review.
WHEN TO DO CODE ANALYSIS A common misconception is that a static source code analysis tool is only needed at the end of a project, as a final step where some problematic code sections are fixed.

In fact, nothing could be further from the truth. No one will start rewriting tens of thousands of lines of problematic code at the end of a project. The recommended approach is to use this type of tool hourly or daily from the project start, ensuring gradual and iterative development where all code additions are checked and fixed as they are added. Good tool support is thus of great importance.
Atollic TrueINSPECTOR’s code metrics and code complexity analysis features are deeply integrated into the Atollic TrueSTUDIO IDE, thus offering a very easy-to-use solution that fits into the workflow of the daily development work.
THE CODE METRICS VIEW In addition to validating the source code against a formal coding standard, Atollic TrueINSPECTOR also provides source code metrics and code complexity measurements.
The Code Metric view displays statistics of the source code. The default mode is to display information at module or Build Configuration level:
Figure 266 – The Code Metric View (Module Mode)
The Code Metrics will change when using different Build Configurations.
The code metric value SLOC stands for Source Lines of Code: the number of lines with C code. SLOC is typically used to predict the amount of effort required to develop a program, as well as to estimate programming productivity or maintainability once the software is produced.
Click on the Metric mode toolbar button in the Code Metric view to switch to File Mode:
Figure 267 – Metric Mode Toolbar Button
Atollic TrueINSPECTOR now displays detailed information for each file:
Figure 268 - The Code Metric View (File Mode)
Figure 269 – Metric dropdown list
The Metric dropdown list is a filtering tool that selects the files outside a specific limit value. That way, files with too few comments, which may need to be rewritten, can quickly be identified.
The code metric value PLOC stands for Physical Lines of Code: the number of lines in a file, including comment lines but excluding whitespace. SLOC stands for Statement Lines of Code: the number of lines in a file containing C statements, excluding whitespace and comment lines. The file Comment Ratio is computed as:
Comment Ratio = (PLOC – SLOC) / PLOC
Click on the Metric mode toolbar button one more time to switch to Function Mode:
Figure 270 – Metric Mode Toolbar Button
Atollic TrueINSPECTOR now displays detailed information for each function, including code complexity:

Figure 271 - The Code Metric View (Function Mode)

Data in the columns PLOC, SLOC and Comment Ratio is only shown for files that have been selected for analysis and for included header files.
Note that the cyclomatic code complexity value is listed for each function. It is recommended to refactor or simplify the implementation of functions with very high complexity values (the column can be sorted in ascending or descending order by clicking on the column header).
The Metric dropdown list is also a very useful tool here to identify functions that need to be rewritten.
Atollic recommends rewriting functions with a high complexity level, in order to reduce the risk of programming errors and improve maintainability.
GENERATE A REPORT Click on the Generate report toolbar button in the Code Metric view:
Figure 272 – Generate Report Toolbar Button
A dialog box is now displayed. Enter the name of the report(s), the desired output directory as well as file format:
Figure 273 - The Report Generator Dialog Box
Click on the Export button to generate the reports. Once the report(s) are exported, Atollic TrueINSPECTOR® gives you the option to open the folder where the reports were generated:
Figure 274 - The Open Folder Message Box
Click the Open Folder button to view the folder containing the report files. Double-click to open any of the generated reports. Atollic TrueINSPECTOR selects a suitable viewer (such as Adobe® Acrobat® Reader or Microsoft® Excel®) depending on the file type.
REMOVE ANALYSIS RESULTS Rule violations will result in rule violation markers that will remain in the project until removed manually. This way the violations can be handled during normal development in the C/C++ Perspective.
Figure 275 – Rule Violation Markers
To remove the rule violation markers, select the Violation View and press the red Remove Violation button.
Figure 276 – Remove Violation Button
SUMMARY By using the static source code analysis features of Atollic TrueINSPECTOR, developers get many benefits that can improve software quality, in particular, validation against industry best-practice coding standards and source code metrics including code complexity measurements.
Atollic strongly recommends deploying Atollic TrueINSPECTOR in any software development project, thus elevating software quality to a completely new level.
Test Automation
278 | P a g e
TEST AUTOMATION This section provides information on how to use Atollic TrueVERIFIER®. Atollic TrueVERIFIER is fully integrated into Atollic TrueSTUDIO Premium. A demo version is included in all other Atollic TrueSTUDIO installations.
This section covers:
Creating unit tests
Creating and managing test cases
Running tests
Analyzing test results
INTRODUCTION Atollic TrueSTUDIO Premium includes Atollic TrueVERIFIER, as a deeply integrated module for professional in-target software test automation. Atollic TrueVERIFIER automates testing of your software, and is a very powerful tool for improving the quality of your embedded system.
Atollic TrueVERIFIER allows the user to create test cases with or without looking at the source code implementation.
The tests can be created before the implementation is written (manually created test cases only, using empty function stubs), or after the source code implementation is written (manually, or automatically generated test cases).
Atollic TrueVERIFIER automates the process of creating and running tests. Two types of test suites can be created:
Function tests
Integration tests
For function tests, the generated test suite checks that each C function behaves as expected. For integration tests, the generated test suite builds a more complete, real-world type of use case, testing that several C functions work as expected when put together into a larger scenario.
Additionally, Atollic TrueVERIFIER provides code coverage information, showing how thoroughly the tests exercise each function.
FUNCTION TESTS For function tests, Atollic TrueVERIFIER auto-generates and auto-executes test suites that exercise as many as possible of all potential execution paths, as the illustration below shows:
Figure 277 – Example of Auto-Generated Unit Tests
In this trivial example, the source code of the function MyFunc() is analyzed, and a test suite is created that makes a number of function calls, one for each important combination of input parameter values. In this case the function has only one input parameter, so tests are created for the minimum and maximum values of the input parameter's data type, along with the value 0 (and some values around it), as well as any other parameter values that were found to modify the execution flow inside the function (such as the value 50 in the example above).
Please note that test cases are auto-generated only for expressions whose value can be calculated completely by statically analyzing the code. In case an expression depends on dynamically calculated values, for instance the return value of a function call, no test case is generated for that expression. In such a case, a test case must be added manually.
Once the test suite(s) have been created, expected return values and affected global variable values can be defined. The tests are then executed automatically on the target board, and the test results and achieved test coverage are uploaded and displayed in the IDE.
In more detail, Atollic TrueVERIFIER performs the following tasks automatically:
Analyses the source code of the project.
Calculates the test cases needed to exercise as many as possible of the potential execution paths in each C function included in the test suite.
Generates an optimized set of unit tests in C source code.
Builds the unit test implementation.
Downloads the unit tests to the target board using the same JTAG probe being used for Atollic TrueSTUDIO debugging.
Executes the unit tests in the target board with execution-path monitoring.
When the tests are completed, test results as well as rigorous test coverage information are uploaded and presented in Atollic TrueSTUDIO (i.e. the quality of the test coverage is measured automatically).
In Atollic TrueVERIFIER, one or more test suites can be defined. Each test suite contains tests for one or more C functions. Test suites are therefore a way of grouping tests for various C functions into logical groups, such as a test suite for graphic functions, another test suite for mathematical functions, etc.
When testing a C function, typically many test cases are needed to exercise as many as possible of all the potential execution paths in the function. In this context, a test case can be considered to be a function call with a defined set of input parameter values. Several test cases for a function can thus be considered to be several function calls to that function, using different combinations of input parameter values.
As all test cases are executed in the target board with full execution-path monitoring, Atollic TrueVERIFIER provides a stringently defined value for the quality of the test cases being run (in practice, how many of all the possible combinations of potential execution paths and branch conditions have been executed by the current test suite). Atollic TrueVERIFIER provides test coverage information for Block Coverage, the more advanced Branch Coverage, and even Modified Condition/Decision Coverage (MC/DC).
INTEGRATION TESTS For integration tests, Atollic TrueVERIFIER creates a more complete, real-life type of test scenario testing that several functions perform a task as expected together.
Figure 278 - Test scenarios
Integration tests using test scenarios are created by selecting a number of functions, rearranging their calling order, and combining them into one test that runs all the functions in the defined order. The test scenarios can be modified manually to provide more advanced types of scenarios, for example by wrapping function calls in if-statements and for-loops.
CONCEPTS Some concepts and their usage in Atollic TrueVERIFIER are outlined in the table below:
Test system: Atollic TrueSTUDIO with the Atollic TrueVERIFIER module enabled, as well as a JTAG probe and the target board where the test suites will be executed. The test system can run one or many test suites automatically.

Test suite: A test suite contains the set of tests to be run. A test typically drives the testing of one C function, so a test suite tests a set of functions. A test suite can contain one or many tests as desired. You can define as many test suites as you like in a project. Technically, each test suite driver is implemented in an auto-generated C file located in the <ProjectLocation>\unittest\ folder.

Test: A test is typically the definition of the testing of one function in a test suite. The test can have many test cases, in order to check many combinations of input parameter values.

Test vector: The set of test cases for a test. It can be considered to be the number of function calls and their input parameter values used to test one C function, along with the expected return codes and affected output values for each combination of input parameter values.

Test case: One test case drives one test of a function. As such, a test case can be considered to be one function call with predefined input parameter values.

Test driver: The C code that drives a test. Test drivers are located in the <ProjectLocation>\unittest\ folder.

Instrumentation macro: C macros used to implement the test driver. The test driver of a test can be instrumented with assertion macros that check the functional behavior and determine whether the result of a test case is the expected one or not.

Test result: When test cases have been executed on the target board, each test case provides a test result of one of these three types:
Success - The function behaved as expected.
Failure - The function did not behave as expected.
Error - The test could not be executed for some reason, such as the JTAG probe not being connected.
Test coverage
A quality measurement that provides information on how well one or multiple tests have tested the function. Technically, it measures how many of the potential execution paths and branch conditions have been exercised by the test.
Table 65 – TrueVERIFIER Terms and Concepts
FUNCTION TESTING TUTORIAL
For the purpose of this tutorial, we will be testing two trivial C functions running on an STM32F4 Discovery Kit evaluation board from STMicroelectronics®. The board is connected to the PC using an ST-LINK JTAG probe.
THE CODE TO BE TESTED
The functions to test in this tutorial are inserted into main.c and are defined as follows:
typedef struct
{
int a;
int b;
} MyValueStruct;
int a = 0;
int MySimpleFunc( int x )
{
if( x < 25 )
return 0;
else
return 1;
}
int MyComplexFunc( int x, MyValueStruct *MyStruct)
{
if( a < 8 )
{
a = x;
return MyStruct->a;
}
else
return MyStruct->b;
}

It is a good idea not to perform the actual unit testing in the same project as the development is done in. Instead, the code should be linked into a dedicated testing project. That way, test code and data can be separated from the development code and data. It will also reduce the time it takes to build the test binary. This allows testing a function during development without a successful build of the whole project; only the tested function and its dependencies need to rebuild successfully.
The function MySimpleFunc() takes an integer parameter, and returns 0 or 1 depending on the value of the input parameter. The function MyComplexFunc() takes two input parameters: an integer and a pointer to a complex data structure, in this case a struct containing two integer variables. The function MyComplexFunc() also has a dependency on a global variable, which it also modifies.
INITIAL STEPS
Before unit testing your project, make sure that the project can be built properly and that you can debug it using the normal Atollic TrueSTUDIO debugger (i.e. compile the project, download the resulting binary file to the target board, and ensure that it is possible to single-step through a couple of source code lines).
When done, switch to the Unit Test perspective. To switch perspective, select the Open Perspective toolbar button.
Figure 279 - Switch Perspective
Then select the Unit Test perspective to switch to TrueVERIFIER.
Figure 280 – The Unit Test Perspective
EXCLUDING FILES AND FOLDERS
Atollic TrueVERIFIER instruments the code in the selected project for the purpose of code coverage analysis. This is necessary to measure the achieved test quality during test execution.
The code coverage instrumentation must not be applied to files that contain code that executes before the main() function is called during the power-on-reset and boot-up phase. The C language runtime environment is not properly initialized at this early stage. Consequently, instrumenting this code will lead to a program crash when executing the test in the target board.
Therefore, all files containing C code that executes before the main() function must be excluded from test analysis and instrumentation. However, the file containing the main() function must not be excluded; excluding this file will lead to build errors upon test execution.
It may also be desirable to exclude 3rd party libraries (such as an RTOS, device drivers or other middleware) from the test analysis, as TrueVERIFIER then does not have to spend time analyzing source code that is not likely to be tested anyway. Also please note that it is not possible to test hardware interrupt service routines (ISRs).
To exclude files and folders from testing, please select the project in the Project Explorer view, right-click and select the Properties menu command.
Drill down to the Testing, Exclusion, TrueVERIFIER panel and select any files and folders to exclude. In this example, we have excluded the device driver library that is implemented in the Libraries and Utilities folders, as well as two C files that belong to the hardware drivers, and may contain C code that runs before main() is called.
Figure 281 - The File and Folder Exclusion Dialog Box
Exclude the relevant files and folders from your project. Close the dialog box by clicking the OK or Cancel button.
Atollic TrueVERIFIER cannot run the tests in hardware unless all C files that execute code before main() are excluded.

The source file containing main() must NOT be excluded in the exclusion filters. If the exclusion filters contain the source file implementing main(), Atollic TrueVERIFIER will display a dialog box: “Build failed. Please check build log”. The build console will also show a build error: “multiple definitions of ‘main’”.
ADDING FUNCTIONS TO TEST
To enable the project for testing, please click on the project root name in the Project Explorer view:
Figure 282 - Selecting the Project Root Item
The toolbar buttons related to Unit testing are:
Figure 283 - Unit Test Buttons
A. Add unit tests. All selected C functions are grouped into one test suite. The C functions are tested in isolation (one-by-one).
B. Add unit tests. Each selected C-function gets its own individual test suite. The C functions are tested in isolation (one-by-one).
C. Add a unit test scenario. All selected C functions are grouped into one test suite. The C functions are tested by calling them in the pre-defined sequence.
D. Generate Unit test report.
E. Launch Unit test (i.e., run the tests in the target board).
For the purposes of this tutorial, click on toolbar button B to create unit tests where each function will have an individual test suite. A reminder to exclude all startup files is now displayed. Click OK to dismiss the warning dialog box.
Atollic TrueVERIFIER now parses the source code files in the project and detects the C functions available for testing. The currently selected project, along with available functions for testing, is displayed in the dialog box below.
Figure 284 - The Function Selection Dialog Box
The Test target functions edit field can be used to filter the function name list in case the project contains a large number of functions. As the functions MySimpleFunc() and MyComplexFunc() are to be tested, we type “My” in the Test target functions edit field to filter the list.
Figure 285 – Function filtering
The list of functions available for testing is reduced to a more convenient size. In this tutorial, we will auto-generate unit tests for the functions MySimpleFunc() and MyComplexFunc(). Thus, select those functions for testing in the function list.
In addition, select the checkbox Generate test data for the selected test target functions. This ensures the auto-generated test suites are initialized with an auto-generated set of test cases in the test vector for each function.
Figure 286 - Selecting functions to test
These settings will enable auto-generation of:
A test suite with unit tests for MySimpleFunc()
A test suite with unit tests for MyComplexFunc()
Test data, i.e. Atollic TrueVERIFIER will parse the source code inside the selected functions, and determine the various input parameter values that will affect the execution flow in the function. Atollic TrueVERIFIER will then auto-generate test cases with different combinations of parameter values, ensuring that as many combinations of potential execution paths as possible will be tested.
Click Next to move on to the next wizard step. In this step, Atollic TrueVERIFIER will display the input and output dependencies for each of the C functions that are to be added for testing.
Figure 287 - Input and output dependencies
It is possible to drill down into complex data structures, as has been done in the screenshot above. By selecting the corresponding checkboxes, it is possible to configure whether checking shall be done for the detected input and output dependencies.
In the example above, Atollic TrueVERIFIER has detected that the function MyComplexFunc() has input dependencies on the parameter x, on some members of the complex data structure pointed to by the parameter MyStruct, and on the global variable a. Furthermore, checking of the return code is enabled, as is checking of the modified value of the global variable a.
Now click the Finish button to auto-generate the new test suites according to the configurations made. Expand the test suite tree nodes in the Unit test view and select the test MySimpleFunc_test0. Atollic TrueVERIFIER will now look like this:
Figure 288 - The Unit Test Perspective with test suites added
Test drivers for each test (i.e. each C function to be tested) are auto-generated in the unittest\ folder. The test drivers are normal C files and can be edited if needed to accommodate for more advanced testing needs.
Figure 289 - Test drivers in the unittest folder
As this tutorial adds tests for the functions MySimpleFunc() and MyComplexFunc(), the following two test driver C files have been auto-generated in the unittest folder:
MyComplexFunc_test0.c
MySimpleFunc_test0.c
As can be seen in the screenshot below, Atollic TrueVERIFIER displays the generated test suites in the Unit test view.
Figure 290 - Test suites and tests in the Unit Test view
Selecting a test in a test suite (such as MySimpleFunc_test0) in the Unit Test view has two effects:
The test configuration form is opened in the main editor area
The Test cases view is filled with the test vector (test case data) for the testing of the C function tested by the selected test.
The Unit Test view provides an overview of test status and test results for each of the functions to test. This includes the number of test cases that have been run for each function (with success, failure or error if the test could not be run). The test status is displayed for each test in the format of (Runs/Errors/Successes).
Successes – The number of test cases detecting correct behavior
Errors – The number of tests that could not be executed, for example due to build or debug errors.
Failures – The number of test cases failing
In the Unit Test view there are some important toolbar buttons.
Figure 291 – Unit Test Toolbar Buttons
A. Synchronize unit tests. Manually trigger a re-parsing of the source code. This might be needed if the source code has been changed outside of Atollic TrueSTUDIO.
B. Filter the view and only show tests that have not been run yet.
C. Filter the view and only show tests that have failure or error status.
DEFINE EXPECTED RESULTS
Once test suites and tests have been created to test the selected functions, it is necessary to define the expected results for each test case in each test. This is done in the Test cases view, which contains one column with test data for each test case.
One test case equals one call to the function being tested, so each column contains the input parameter values for the specific function call being made by that particular test case.
The Test cases view contains the test cases (function calls) for one test (one function). Select the desired test in the Unit Test view, and the Test cases view will be updated to show the test cases of that particular test. The Test cases view thus displays the test cases (function calls) that have been auto-generated by Atollic TrueSTUDIO. By default, test cases are auto-generated to test the MIN and MAX values of the data type being used, values around 0, and values around any other numeric constant in the source code that can be determined to affect the execution flow.
In this case, select the test MySimpleFunc_test0 in the Unit Test view, and the Test cases view will look like this:
Figure 292 - The test cases in the selected test
The input parameter values to use during testing of this C function are auto-generated, but can be edited manually. It is also possible to delete test cases, and add more test cases, testing other combinations of input parameter values.
To fill in the expected return code for each test case (i.e. for each combination of input parameter values), enter the expected return code value in the int _rMySimpleFunc row.
Again, study the implementation of the function we are to test (MySimpleFunc):
int MySimpleFunc( int x )
{
if( x < 25 )
return 0;
else
return 1;
}
For any value where the input parameter x is less than 25, the return code is expected to be 0, otherwise 1 (normally this should be determined by knowing what the function is supposed to do, not by studying the actual implementation, as the implementation may contain errors that we want to detect using the tests).
In the screenshot below, this has been defined. But notice that a “wrong” return code has been defined for test case #5, where the function is called with the input parameter x being 24. For this case, the code implementation should return 0, but we have defined the expected return code to be 1. This is done to simulate an error, where the actual return code differs from the expected one.
Figure 293 - Define expected return codes
In effect, we have created an error in the test vector instead of having an error in the code. But it will show what happens when the actual return code is not equal to the expected one.
With test suites now created for testing of two functions, and expected return code values defined for one of the functions, we can run the test suites in the hardware board.
DEFINING ADVANCED TYPES OF EXPECTED VALUES
When checking for expected output values (return codes or modified global variables), it is possible to use a more flexible approach than just entering a fixed expected value. Atollic TrueVERIFIER supports using expressions to define more advanced types of expected values, using logical operators and value ranges. The table below outlines the accepted operators for expected value definitions:
Operator Description Example

& AND operator. Cannot be used with the OR operator. 10 & 11, ~30 & 7~20
| OR operator. Cannot be used with the AND operator. 13 | 15~19, 3 | 7
! NOT operator. !13~20, !30
~ Range. 50~70, ~50, 20~

Table 66 – Defining Advanced Types of Expected Values
RUNNING TESTS
When running test suites and tests, it is sometimes useful to manually select which test suites or individual tests should be executed. This is done using the checkboxes adjacent to the test suites and tests in the Unit Test view. In the screenshot below, both test suites and their corresponding tests will be executed.
Figure 294 - Selecting test suites and tests
To run the tests, select the project in the Project Explorer view and then click the Launch Unit test icon to start the test on the target board.
Figure 295 – Run Unit Test Button
The first time a project is downloaded to the target board, a dialog box with launch configuration properties is displayed – this is not displayed for projects that have previously been run on the target:
Figure 296 - Unit Test Launch Configurations
Select the Debugger tab and ensure that the settings for the JTAG probe to use are correct (for example, we need to select the ST-LINK JTAG probe for the board being used in this example).
Figure 297 – The Debugger JTAG Probe Configuration
Then switch to the Startup Unit Test panel, which provides the possibility to modify the debugger initialization script to use when connecting to the hardware. For this tutorial, make no changes unless required by a different board setup.
Figure 298 - The Unit Test Debugger Startup Script Tab
Click the OK button to start the unit test. The source code is now analyzed. Unit test source code files are generated and built, resulting in a binary file. Finally, the binary file is downloaded and executed on the target board.
Running large sets of test suites and tests can sometimes take a long time. Open the Progress view to get detailed information on what Atollic TrueVERIFIER is doing:
Figure 299 - The progress view
Once tests are completed, the docking views in the Unit test perspective provide information on the test results.
Figure 300 - The Unit Test Perspective after a Test Session
ANALYZING THE TEST RESULTS
Once a test run has completed, the Unit Test view shows the status of the executed test suites and tests. In the screenshot below, the test suite and test for MyComplexFunc() have 9 successful test cases (as no expected return code value was defined for that test, all of them pass). But the test suite and test for MySimpleFunc() had one failing test case, which is also highlighted using different icons.
Figure 301 - Test result overview
As can be seen in the screenshot below, the Test cases view shows detailed information on the test results for each test case.
Figure 302 - Detailed test results
More specifically, the Test cases view provides test results in the following ways:
For each test case (column), the Result row shows “success”, “failure” or, if the test could not be run, “error”.
The return code row now has two values in the form x/y for each test case. The value x is the expected value defined by the user earlier in this tutorial, and the value y is the actual return code received when testing the function in the hardware board.
The Diagnosis row may contain text strings printed by the test driver.
Test cases that failed (the actual return code did not match the expected one) are highlighted in red color as well.
ANALYZING THE TEST QUALITY
In addition to analyzing the test result, it is also of interest to analyze the test quality that was achieved by the tests. This can be done using the Coverage view. The Coverage view works in three different modes:
Display of the test coverage per function in the project
Display of the test coverage achieved by all test cases for one function
Display of the test coverage achieved by one test case for one function
Click on the Show All option in the panel to the left. The test coverage (test quality) achieved is listed for all functions in the project:
Figure 303 - Test coverage of the functions in the project
Click on a test name (such as MySimpleFunc_test0) in the panel to the left. The test coverage (test quality) by all test cases is displayed for the tested function:
Figure 304 - The test coverage of the functions in a test
Expand a test name (such as MySimpleFunc_test0) in the panel to the left, and all test cases for that test will be listed. Select any of the test cases, and the test coverage (test quality) achieved by that test case is displayed:
Figure 305 - The test coverage for each test case
Figure 306 - The test coverage toolbar buttons
The toolbar buttons are:
A. Show in coverage - Highlight covered and uncovered lines in the source code editor.
B. Refresh - If a test has been executed but the view is not updated, click the Refresh button to update it.
C. Link with editor - Show the selected function's source code in the editor.
Using the Coverage view, it is thus easy to analyze the achieved test coverage, and determine if additional test cases are needed to provide better test quality.
ADVANCED INPUT AND OUTPUT DEPENDENCIES
So far, the tutorial has only performed meaningful testing of MySimpleFunc(), as no expected return codes were defined for MyComplexFunc(). Let us again study the implementation of MyComplexFunc():
typedef struct
{
int a;
int b;
} MyValueStruct;
int a = 0;
int MyComplexFunc( int x, MyValueStruct *MyStruct)
{
if( a < 8 )
{
a = x;
return MyStruct->a;
}
else
return MyStruct->b;
}
The function takes a pointer to a complex data structure as its second parameter. We can also notice that the execution flow of the function depends on a global variable a, which can be modified by the function as well. And so this function has several new complexities, which we did not see when we tested MySimpleFunc() earlier:
Pointer variables and complex data structures must be handled as input dependencies
Global variables must be handled as input dependencies
Modified global variables must be handled as output values, similar to return codes
By selecting the MyComplexFunc_test0 in the Unit Test view, the Test cases view will show all the auto-generated test cases for the MyComplexFunc() function.
Figure 307 - More complex input and output dependencies
By studying the Test cases view for the test MyComplexFunc_test0, it can be noticed that there are now more input and output dependencies than in the earlier test of MySimpleFunc().
Figure 308 - More inputs and output dependencies
For example, Atollic TrueVERIFIER has detected that the global variable a can affect the execution flow if it has a value around 8. It is thus listed as an input dependency, and when the function is tested, it will be tested in an environment where the global variable a will have values around 8 too.
Additionally, Atollic TrueVERIFIER has detected that the function can modify the value of the global variable a, and so it is possible to define expected output values for a too, in addition to the return code.
INTEGRATION TEST TUTORIAL
In the previous sections, testing of individual C functions has been covered. This section will outline how to use test suites that are configured for test scenarios. A test scenario is a test that typically performs a more complete, real-life type of test. A test scenario tests several functions interacting together, as opposed to testing just one function in isolation.
For example, a test scenario may be to connect to a server over TCP/IP, download a file, open it to extract information, and then close the file and the TCP/IP channel.
Test scenarios can include functions that are implemented in any file, but the test driver file may need to be modified to #include necessary dependencies.
THE CODE TO BE TESTED
The functions to test in this tutorial are inserted into main.c and are defined as follows:
int FTP_Connect( char *Address )
{
return 1;
}
int FTP_ReadFile( char *Filename, int Size )
{
return 1;
}
int FTP_Disconnect()
{
return 1;
}
This simplified example will show how a test scenario can be created to test the more real-life use-case of connecting to an FTP server, downloading a file, and disconnecting. In other words, an integration test will be created and executed.
CREATING THE TEST SCENARIO
A test scenario can be created using the corresponding toolbar button (C) below.
Figure 1 - Toolbar buttons to add new test suites
The Create new test suite wizard is then opened:
Figure 309 - Creating a test scenario
Initially, the test scenario is empty as no functions have been selected. Click the Add button to select what C functions should take part in the test scenario. The function selection dialog box will then be displayed:
Figure 310 - Select functions for the scenario
In this example, 3 functions for FTP access are selected, to create a test scenario of connecting to an FTP server, downloading a file, and disconnecting.
Figure 311 - Functions are selected
Click OK to accept the function selection. The wizard now lists the selected functions, although they are not in the correct calling sequence:
Figure 312 - Functions are added to the test scenario
Select the function FTP_ReadFile and click on the Up button, to position it between the FTP_Connect and FTP_Disconnect functions. Then click Next to see the detected input and output dependencies of all the functions in the test scenario:
Figure 313 - The input and output dependencies
Click Finish to generate the test scenario. The Unit test perspective now looks like this:
Figure 314 - The generated test scenario
Select the test suite node in the test configuration form, and change the test suite name to “FTP_Download_scenario”:
Figure 315 - The FTP Download Scenario
Then select the test node and notice the Test cases view contains the input parameter values and return codes and output parameter values for all functions in the test scenario.
Figure 316 - Input and output values for all functions in the test scenario
The generated test driver file in the unittest\ folder implements the test scenario, by transferring the input values from the Test cases view to the test driver, calling the functions in sequence, and transferring the return codes from the test driver back to the Test cases view.
Figure 317 - The test scenario implementation in the test driver file
The test driver can be modified as desired, to create a more advanced type of test scenario, for example by wrapping some function calls in if-statements or for-loops, etc.
The test scenarios are executed in the same way as function tests, by clicking on the test start button in the toolbar.
MANAGING THE TEST CONFIGURATION
When testing a project with Atollic TrueVERIFIER, it might sometimes be necessary to add, edit, or delete test suites, tests, and test cases. Test suites and tests are managed in the test configuration form in the main editor area, while test cases are managed in the Test cases view.
MANAGING TEST SUITES
To add more test suites, click on one of the main toolbar buttons:
Figure 2 - Toolbar buttons to add new test suites
A. Add unit tests. All selected C functions are grouped into one test suite. The C functions are tested in isolation (one-by-one).
B. Add unit tests. Each selected C-function gets its own individual test suite. The C functions are tested in isolation (one-by-one).
C. Add a unit test scenario. All selected C functions are grouped into one test suite. The C functions are tested by calling them in the pre-defined sequence.
To manage an existing test suite, select it in the Unit Test view and it will be highlighted in the test configuration form in the main editor area.
Figure 318 - The test configuration form
The test suite name can be changed in the right part of the form, and it is possible to enter a description of the test suite. By right-clicking on the test suite name in the test configuration panel, a context menu is opened where it is possible to add or import more tests (i.e. let the test suite test more functions), as well as to delete the test suite. The setup and teardown options will be discussed in a subsequent section and are not covered here.
Figure 319 – Adding or importing more tests to a test suite
MANAGING TESTS
To edit a test, select it in the test configuration form. It is possible to define a description of the test. A link to the C file containing the function(s) tested by this test is provided as well.
Figure 320 - Editing a test
To re-generate test cases, click on the Generate Data button. If the function being tested has three or more parameters, the Pairwise Combination option generates many more test cases, while the Flat Combination option does not generate different permutations of parameter value combinations. The Pairwise Combination and Flat Combination settings have no effect on functions with fewer than three parameters.
Using the import and export buttons, test data can be imported or exported.
Figure 321 - Import and export test data
MANAGING TEST CASES
Atollic TrueVERIFIER analyses the source code of the application to test, and auto-generates a number of test cases, i.e. function calls with a suitable set of input parameter values. These are selected by parsing the source code of the functions.
Let’s for example study the MyFunc1() function below:
int MyFunc1( int x )
{
if( x < -10 || x > 65 )
{
return 1;
}
else
return 0;
}
For this function, there is only one function parameter, of type integer. Please note that two particular values of the parameter x affect the execution flow: -10 and 65. If x is less than -10 or greater than 65, a different execution path is taken.
Obviously it is important to test these input parameter values, as they do in fact affect the execution flow of the function. For complete testing though, it is also of interest to test other “corner case” values, like MAXINT, MININT, zero, and one or two values next to these values.
For the above function, Atollic TrueVERIFIER will automatically generate unit tests with input parameters for x as follows: -2147483648, -11, -10, -9, -1, 0, 1, 64, 65, 66 and 2147483647. This is the test vector for the function MyFunc1().
In C program syntax, unit tests with the following function calls have been auto-generated:
MyFunc1(-2147483648);
MyFunc1(-11);
MyFunc1(-10);
MyFunc1(-9);
MyFunc1(-1);
MyFunc1(0);
MyFunc1(1);
MyFunc1(64);
MyFunc1(65);
MyFunc1(66);
MyFunc1(2147483647);
While it is extremely helpful to get the test cases auto-generated, it is sometimes of interest to manually add more test cases. Reasons can for example be:
Test cases are created for empty function stubs, before the implementation is written.
More test cases are needed to improve the test coverage (test quality)
The toolbar in the Test cases view provides commands for managing the test cases. Several of these commands only work if the column heading has been clicked to select the full column.
Figure 322 - The Test cases view toolbar
A. Save changes to the expected output values or added descriptions
B. Cut a test case to the clipboard.
C. Copy a test case to the clipboard
D. Paste a test case from the clipboard
E. Add a new test case
F. Delete a test case
G. Delete all test cases
H. Delete the test results from the previous test run
I. Filter test cases using different filter options
SETUP AND TEARDOWN CODE
Sometimes, there is a need to write special code that prepares the H/W or S/W environment for the test, and closes it down in a controlled manner after the test. Reasons for this may be the need to open a file before it can be read from or written to during the test, and to close it afterwards. Or to initialize a hardware peripheral before sensor data can be read, and to shut the peripheral down after the test.
For this purpose, Atollic TrueVERIFIER provides the possibility to “hook in” setup and teardown code, which runs before or after a test suite, as well as before or after each test, as appropriate.
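A minimal sketch of such a pair might look like the following. The function names are illustrative only (the actual names are whatever you define in the Assist Code tab), and tmpfile() stands in for any real resource such as a data file or a hardware peripheral:

```c
#include <stdio.h>

/* Illustrative setup/teardown pair (names are hypothetical, not
 * generated by TrueVERIFIER). tmpfile() stands in for whatever
 * resource the tests actually need. */
FILE *g_log = NULL;

void suite_setup(void)
{
    g_log = tmpfile();      /* prepare the environment before the tests run */
}

void suite_teardown(void)
{
    if (g_log != NULL) {
        fclose(g_log);      /* close it down in a controlled manner */
        g_log = NULL;
    }
}
```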
CREATING SETUP AND TEARDOWN DEFINITIONS Any number of setup and teardown code implementations can be defined. The setup and teardown definitions can later be connected to specific test suites or tests; they are thus created independent of where they are to be used, and they can therefore be reused as many times as appropriate in different test suites or tests.
To create setup or teardown definitions, open the test configuration form by clicking on a test suite or a test in the Unit test view. At the bottom of the test form, click the Assist Code tab.
Figure 323 - The Assist Code tab
The Assist Code tab shows a list of all setup and teardown code definitions. To create new setup code or teardown code definitions, click the Add button.
Select the Setup or Teardown radio button depending on whether initialization or shutdown code is to be defined. Additionally, give the setup or teardown definition a name and, optionally, a description. Finally, browse to the file used with the setup or teardown definition using the Connected source browse button.
Figure 324 - Adding a setup or teardown definition
Click Finish to complete the setup or teardown definition and return to the Assist Code tab in the test configuration form. Press CTRL-S or click the save toolbar button to save the changes. New setup and teardown definitions will not be available for use until the Assist Code tab has been saved.
Once the setup and teardown definitions have been created, C code must be added to them to make them useful.
ADDING CODE Once the changes to the Assist Code tab have been saved, a file is auto-generated in the unittest\ folder for each setup and teardown definition that has been added. The file names are equal to the names that were defined in the Assist Code tab.
Figure 325 - Setup and teardown implementations in the unittest folder
If the source code file is not listed when browsing the Connected Source list, the user must synchronize the project to have all the latest code changes taken into consideration. Do this by clicking the Synchronize unit tests button, see Figure 291.
Double-click on any of the auto-generated setup or teardown files to open it in the C/C++ editor. Edit the setup code or teardown code as appropriate, to make them initialize and shutdown the S/W or H/W environment as needed for different test suites or tests.
Figure 326 - Edit the setup and teardown implementation files
Once the setup and teardown definitions are created, and code implementations have been added to make them useful, they can be connected to various test suites and tests to make them initialize the environment before a test-suite or test, or make them shutdown the environment after a test-suite or test. This is done in the SuiteList tab in the test form.
USING SETUP AND TEARDOWN DEFINITIONS Once setup and teardown definitions have been created in the Assist Code tab, and they have been implemented in the unittest\ folder files, they can be connected to various test suites or tests.
To do this, right-click on any test suite or test in the configuration tree in the SuiteList tab. In the context menu, select Add, and choose Setup or Teardown depending on which action you want to add to the test suite or test.
Figure 327 - Adding a setup definition to a test suite
This creates a setup node under the test suite node in the tree. Expand the test suite node and select the setup node. In the right part of the test configuration form, click the Browse button to open the setup definition selection dialog box.
Figure 328 - Select the setup definition to use
Then select the desired setup or teardown definition in the list and click OK.
During a test run, the test suite or test will now execute the selected setup code or teardown code as desired to make the appropriate initialization and shutdown steps needed to make the test run work in its surrounding environment.
DETAILED TEST CONTROL The test drivers and setup and teardown implementations are stored as normal C files in the unittest\ folder. These files are auto-generated but can be edited manually for advanced use cases.
TEST DRIVERS A test driver function manages the execution of a test by feeding input values to the functions under test and calling them with the appropriate test data. It also returns the output values to the Test cases view.
Test driver functions are marked as such using #pragma cstest. Test driver functions have no input parameters and no return value:
#pragma cstest
void test_function( void )
{
…
}
If the return type of the function to be tested is a primitive type, the macro that checks the return code is created automatically.
INSTRUMENTATION MACROS Atollic TrueVERIFIER provides various macros enabling the user to modify the test driver for more advanced use.
INPUT MACROS
Input macros transfer the input values from the Test cases view to the function call in the test driver.
Macro                   Parameters
CS_INT_INPUT(_t, _s)    _t: integer type name
                        _s: test data name
CS_UINT_INPUT(_t, _s)   _t: unsigned integer type name
                        _s: test data name
CS_FLT_INPUT(_t, _s)    _t: floating point type name
                        _s: test data name
CS_STR_INPUT(_t, _s)    _t: char type name
                        _s: test data name
Table 67 – Input macros
This example shows how an input macro transfers a parameter from the Test cases view to the function call in the test driver:
void MySimpleFunc_test0()
{
/* Do not modify CS_TEST_PARAMETER() */
CS_TEST_PARAMETER(MySimpleFunc_test0);
/* FUNCTION PARAMETER, RETURN VARIABLES */
int x = 0;
int _rMySimpleFunc;
/* INPUT */
x = CS_INT_INPUT(int,x);
/* CALL TEST TARGET FUNCTION */
/* int MySimpleFunc(int) */
_rMySimpleFunc = MySimpleFunc (x);
/* OUTPUT */
CS_INT_OUTPUT(_rMySimpleFunc,"_rMySimpleFunc");
}
TEST RESULT MACROS
Test result macros let Atollic TrueVERIFIER know whether the test driver considers a test to have failed. By default, Atollic TrueVERIFIER can check if return codes and modified global variables have the expected values, but in some advanced test situations this is not enough. Using the test result macros, developers can add advanced failure detection to determine whether a test should fail.
Macro                     Parameters
CS_ASSERT(_b)             _b: conditional expression
CS_ASSERT_MSG(_b, _msg)   _b: conditional expression
                          _msg: message to output if the conditional expression is not true
Table 68 – Test result macros
This example shows a test driver that has been instrumented to perform additional checks on the behavior of the function being tested. In this case, it is considered an error if the return code is not equal to 1 for test case number 4 and above. Using the test result macros, failure detection of arbitrary complexity can be designed.
void MySimpleFunc_test0()
{
/* Do not modify CS_TEST_PARAMETER() */
CS_TEST_PARAMETER(MySimpleFunc_test0);
/* FUNCTION PARAMETER, RETURN VARIABLES */
int x = 0;
int _rMySimpleFunc;
/* INPUT */
x = CS_INT_INPUT(int,x);
/* CALL TEST TARGET FUNCTION */
/* int MySimpleFunc(int) */
_rMySimpleFunc = MySimpleFunc (x);
if( CS_TESTCASENO() > 3 )
{
CS_ASSERT_MSG( _rMySimpleFunc == 1, "RC!=1 for TC>3" );
}
/* OUTPUT */
CS_INT_OUTPUT(_rMySimpleFunc,"_rMySimpleFunc");
}
The screenshot below shows how the test result macros cause test case number 4 and above to fail in the Test cases view if the return code is not equal to one. The failure message string is displayed in the Diagnosis field too.
Figure 329 - Advanced failure detection
OUTPUT MACROS
Output macros transfer the actual output values (such as the return code) from the test driver to the Test cases view.
Macro                    Parameters
CS_INT_OUTPUT(_v, _s)    _v: integer type variable
                         _s: test data name
CS_UINT_OUTPUT(_v, _s)   _v: unsigned integer type variable
                         _s: test data name
CS_FLT_OUTPUT(_v, _s)    _v: floating point type variable
                         _s: test data name
CS_STR_OUTPUT(_v, _s)    _v: char* or char[] type variable
                         _s: test data name
Table 69 – Output macros
This example shows how an output macro transfers the actual return code to the Test cases view:
void MySimpleFunc_test0()
{
/* Do not modify CS_TEST_PARAMETER() */
CS_TEST_PARAMETER(MySimpleFunc_test0);
/* FUNCTION PARAMETER, RETURN VARIABLES */
int x = 0;
int _rMySimpleFunc;
/* INPUT */
x = CS_INT_INPUT(int,x);
/* CALL TEST TARGET FUNCTION */
/* int MySimpleFunc(int) */
_rMySimpleFunc = MySimpleFunc (x);
/* OUTPUT */
CS_INT_OUTPUT(_rMySimpleFunc,"_rMySimpleFunc");
}
OTHER MACROS
Additional macros print text strings to the Diagnosis field of the Test cases view, or return the number of the test case currently being executed.
Macro             Parameters
CS_LOG(_msg)      _msg: log message
CS_TESTCASENO()   (no parameters; returns the number of the currently executing test case)
Table 70 – Other macros
An example of a test driver that prints the test case number in the Diagnosis field of the Test cases view is shown below:
void MySimpleFunc_test0()
{
char Str[ 256 ];
/* Do not modify CS_TEST_PARAMETER() */
CS_TEST_PARAMETER(MySimpleFunc_test0);
/* FUNCTION PARAMETER, RETURN VARIABLES */
int x = 0;
int _rMySimpleFunc;
/* INPUT */
x = CS_INT_INPUT(int,x);
/* CALL TEST TARGET FUNCTION */
/* int MySimpleFunc(int) */
_rMySimpleFunc = MySimpleFunc (x);
sprintf( Str, "TC=%d", CS_TESTCASENO() );
CS_LOG( Str );
/* OUTPUT */
CS_INT_OUTPUT(_rMySimpleFunc,"_rMySimpleFunc");
}
With the test driver modification above, the Test cases view displays the information in the Diagnosis field as shown here:
Figure 330 - Test driver instrumented for diagnosis output
TESTING STRING VARIABLES Atollic TrueVERIFIER supports basic data-types as well as complex data-structures including pointers, structures, etc. A special case, however, is the handling of tests using character strings. In the C programming language, a string is defined as a pointer to a character (char *) that points to valid data bytes in consecutive memory locations until a terminating zero byte is found.
The problem is that the syntax for a pointer to a character and the syntax for a string variable are the same. If Atollic TrueVERIFIER detects that a function parameter or variable to be used in a test is of type char *, it has no way of knowing whether the function wants to use this as a pointer to a character, or treat it as a zero-terminated string.
The user must therefore help Atollic TrueVERIFIER resolve this ambiguity. By default, Atollic TrueVERIFIER treats all char * parameters and variables as pointers to a character, which is the literal interpretation. In effect, this becomes a pointer to a small integer data-type.
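The ambiguity can be illustrated with two small functions, invented here for illustration only, that receive the exact same char * type yet interpret it differently; nothing in the type system tells the two uses apart:

```c
#include <string.h>

/* Both parameters have the same type (const char *), but one
 * function treats it as a zero-terminated string and the other
 * as a pointer to a single character. (Illustrative only; these
 * functions are not part of TrueVERIFIER.) */
size_t as_string_length(const char *s)
{
    return strlen(s);   /* scans memory until the terminating zero byte */
}

char as_single_char(const char *p)
{
    return *p;          /* reads exactly one character, nothing more */
}
```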
To make Atollic TrueVERIFIER treat char * parameters and variables as strings, two steps must be performed manually:
The test driver source code file must have its use of instrumentation macros changed from pointer-to-character (i.e. small integer) to string.
Once the change has been made to the test driver file, click the Synch Test toolbar button in the Unit test view to notify Atollic TrueVERIFIER that the test driver has been modified.
Here follows an example of how to do it:
First create a test suite containing a test of the function MyStrFunc(). This function takes a string as its second parameter, and returns the number of characters in the string:
int MyStrFunc( int x, char* str )
{
int y = 0;
y = strlen(str);
return y;
}
Since this function takes a char * parameter, Atollic TrueVERIFIER will by default consider this to be a pointer to a small integer data-type. This is also reflected in the test vector in the Test cases view; the generated test values for the char * parameter are integer values.
Figure 331 - Character pointers are not string variables by default
Now open the corresponding test driver file in the unittest\ folder, and scroll to the /* INPUT */ section of the test driver for the MyStrFunc() function.
Figure 332 - Open the test driver in the editor
To change the data-type of the char * parameter from being a pointer to a character to become a zero-terminated string, change the last line that defines the second parameter:
/* INPUT */
x = CS_INT_INPUT(int,x);
str[0] = CS_UINT_INPUT(char,str0);
To:
/* INPUT */
x = CS_INT_INPUT(int,x);
str = CS_STR_INPUT(char[200], str);
The test driver source code should now look like this:
Figure 333 - Modified test driver
Once the test driver source code has been updated and saved according to the above example, click the Synch Test toolbar button in the Unit test view to let Atollic TrueVERIFIER know it needs to update its internal data structures following the change in the test driver.
After some processing, the Test cases view is now updated with refreshed knowledge about the data-type of the char * parameter:
Figure 334 - The test vector now accepts strings
Once Atollic TrueVERIFIER has detected a parameter value is to be treated as a string data-type, string values can be entered into the test vector input value list:
Figure 335 - Test cases with character strings
As the function to be tested returns the number of characters in the input parameter string, the screenshot above also contains definitions of the expected return code for each input value in the test vector.
After running the test, it can be seen that the actual return codes match the expected ones, and so all test cases using string variables passed with success in this example.
Figure 336 - Test results with strings
GENERATE REPORTS To be able to properly follow up all tests and make sure all bugs are fixed, correct and detailed reports are vital. To create the reports, click on the Generate Unit test Report button (C).
Figure 337 - The Generate Unit Test Report Button
A dialog box will be displayed with report generation options, such as file format and export folder location.
Figure 338 – Export Unit Test Result
IMPORTANT NOTES This section contains additional information related to the use of Atollic TrueVERIFIER.
PROJECT SHARING Atollic TrueVERIFIER stores data about the test configurations in various database files in the .csdata/ut folder, which is located in the project root folder. In order to share the test configuration of a project amongst several users on different computers, for example by checking-in and checking-out the project using a version control system, the .csdata/ut folder needs to be properly managed and distributed with the project.
Therefore, make sure the .csdata/ut folder is properly checked-in and checked-out, or otherwise included with the project, when moving a TrueVERIFIER-enabled project from one location to another.
The different files in the .csdata/ut folder MUST be modified as an atomic operation by one user at a time only. If this is not enforced, the test data will become corrupt and unusable. Therefore, make sure only one user at a time modifies any of the files in the .csdata/ut folder.
This can, for example, be enforced with version control systems that support a "locking" mechanism, whereby one user locks the files in the folder such that no other user can modify them before they are unlocked by the first user. Such a mechanism ensures only one user at a time can modify the files in the .csdata/ut folder, so several users can move the test data between computers without the risk of corrupting it.
In addition to the test data files in the .csdata/ut folder, the test drivers and setup/teardown implementations in the unittest folder must also be checked-in and checked-out. These are standard C files, however, and are managed like any other source code file in the project.
SYNCHRONIZE TRUEVERIFIER Atollic TrueVERIFIER automatically keeps the test database and the project source code in synch (re-parsing and updating the test data if needed) in the following situations:
When editing test vectors (parameter or return code editing)
When adding more functions to test
When running the tests
In rare situations however, Atollic TrueVERIFIER needs to be manually synched, for example when source code is changed outside the IDE. When source code is changed outside Atollic TrueVERIFIER, the test database and the source code may be inconsistent.
Manual synch is sometimes also needed when importing or updating source code from a version control system, and sometimes when adding, editing, deleting or renaming source code files. To manually synchronize Atollic TrueVERIFIER, press the Sync Test button in the Unit Test view toolbar.
Figure 339 – Unit Test Toolbar Buttons
If the test database and source code should become corrupt and a Synch Test command does not help, it is also possible to make a deeper reanalysis of the full project. During this operation, all test data is recollected from the source code. To do a full reanalysis, right-click on the project in the Project Explorer view and select Reanalysis in the context menu:
Figure 340 - Reanalyze the project
TEST QUALITY MEASUREMENT This section provides information on how to use Atollic TrueANALYZER®. Atollic TrueANALYZER is fully integrated into Atollic TrueSTUDIO Premium. A demo version is included in all other Atollic TrueSTUDIO installations.
This section covers:
Different types of code coverage analysis
Starting a code coverage analysis session
Retrieving and analyzing the results
Test Quality Measurement
341 | P a g e
INTRODUCTION Atollic TrueSTUDIO Premium includes Atollic TrueANALYZER® as a module providing advanced features for system-level test quality measurements. Test quality is measured by analyzing how large a portion of the execution paths and branch decisions in a software product have been exercised when running one or more test cases on the target board. Test quality is measured by performing code coverage measurement, and is sometimes also called test coverage measurement.
TrueANALYZER performs dynamic execution flow analysis as the application runs in the target board during test, and it provides code coverage analysis up to the level of Modified Condition/Decision Coverage (MC/DC). This is the most stringent level of test quality measurement, and is required for testing of safety critical systems, such as avionics and flight control systems.
It is important to understand that code coverage analysis does not measure the quality of the software; rather, it measures the quality of the test procedures. By performing test quality measurement, test cases can be improved until they test enough, or everything, depending on the requirements of your project. A code coverage analysis tool can thus be considered a test debugger, or a test case debugger.
For example, if a microwave oven is to be tested, one of the test cases in the test specification may include the following steps:
Switch the “Mode” button to “Defrost”
Put a frozen chicken in the microwave oven
Click the “Start” button
Check that the chicken is completely defrosted
When running such a system-level test case under TrueANALYZER monitoring, TrueANALYZER will detect which software execution paths and branch decision conditions were in fact exercised in the target board by performing this test case. It is thus easy to check if the test cases in the test specification test enough parts of the software or not. If some parts of the software are not exercised by the test cases, the test specification can be improved and additional test cases can be added until the test coverage is good enough.
With test cases that are known to be good enough, the improved test cases will in turn detect more bugs, and offer test confidence and quality confidence as well.
Atollic TrueANALYZER runs in demo mode in Atollic TrueSTUDIO Lite and Pro.
Also note Atollic TrueANALYZER only works with managed mode projects, not makefile mode projects.
WHY PERFORM CODE COVERAGE ANALYSIS? All embedded developers are familiar with debugging; the process used to understand the behavior of a program, such that an error can be corrected. Debugging however, requires that an error has been detected in the first place. Programming errors not yet known to exist cannot be debugged or corrected.
For example, a NASA report (www.nasa.gov/pdf/418878main_FSWC_Final_Report.pdf) concludes that software products developed by a good software team, using a very good development process, are typically released with one bug for every thousand lines of code. A project with 100,000 lines of code will thus most likely ship with 100 bugs that were never detected during the testing phase. Moreover, these figures are for good teams with a very good development process; it is likely many embedded projects do not use development processes that are considered "very good" by NASA standards.
A recent real-world example is Chrysler, which had to recall 475,000 cars in May 2013 because a software upgrade was needed to fix an error that had caused 26 accidents and 2 injuries up to that point. Imagine the cost of recalling almost half a million cars worldwide, and the savings if a software upgrade had not been needed.
Knowing what test quality is being achieved gives both test confidence and quality confidence. Code coverage analysis is a powerful tool to measure and debug test cases, such that they find more of the bugs that most likely do exist in the software – whether you are aware of them or not. A code coverage analysis tool can find more programming errors, which enables you to release a software product of higher quality.
Code coverage analysis finds areas in the program that may not be covered by existing test cases, enabling you to create additional tests that cover otherwise untested parts.
It is important to realize that code coverage helps to assess the quality of the test cases, not the quality of the actual source code. However, better test cases will lead to higher source code quality indirectly, since a substantially larger portion of the faults will be discovered before release.
Performing code coverage analysis by hand can be very time consuming. Atollic TrueANALYZER automates this task.
DIFFERENT TYPES OF ANALYSIS There are many different types of code coverage analysis, some very basic and others that are very rigorous and complicated to perform without advanced tool support. Different types of code coverage analysis can be classified formally as follows.
BLOCK COVERAGE
This type of code coverage analysis is sometimes included in standard debuggers. Block coverage is a very weak type of code coverage analysis. It can only verify what blocks of code have been executed, but not under what circumstances. For example, it does not test what alternative execution paths have been taken.
FUNCTION COVERAGE
This basic type of code coverage analysis is sometimes included in standard debuggers too. Function coverage can only report whether or not a function has been called. It does not disclose anything about the execution within the function, or how or why the function was called. And it does not test all the potential ways to call the function.
FUNCTION CALL COVERAGE
Function call coverage measures how many of the possible function calls have actually occurred. For example, a function may be called from many different places. To reach 100% function call coverage, all of these calls must have been made.
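The distinction can be sketched as follows (all names are invented for illustration): the helper below has two call sites, so 100% function call coverage requires both sites to execute, even though function coverage of the helper is already satisfied after the first call:

```c
/* Function call coverage counts call sites, not just functions.
 * (Illustrative sketch; a real tool records this automatically.) */
int helper_calls = 0;

void helper(void)     { helper_calls++; }
void caller_one(void) { helper(); }   /* call site 1 */
void caller_two(void) { helper(); }   /* call site 2 */
```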
BRANCH COVERAGE
Branch coverage is a more advanced type of code coverage analysis. It measures to what extent all code blocks have been exercised and all execution paths have been executed. In complex branch decisions, the analysis does not consider the inner details that affect the value of the expression (i.e. the conditions that formed the branch decision are not considered).
To achieve branch coverage, it is sometimes necessary to execute a code section two or more times, to make sure all alternate execution paths are taken (if-else and switch statements, for example, require several execution passes to cover all their alternate execution paths).
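For example, this illustrative if-else (not taken from the manual) needs at least two execution passes, one per branch, before its branch coverage is complete:

```c
/* Illustrative only: a single call exercises one path; full
 * branch coverage of the if-else requires a second call that
 * takes the other path. */
int sign_of(int x)
{
    if (x >= 0)
        return 1;    /* covered by a pass with x >= 0 */
    else
        return -1;   /* covered by a pass with x < 0  */
}
```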
MODIFIED CONDITION/DECISION COVERAGE
Modified condition/decision coverage (MC/DC) is a very advanced type of code coverage analysis, and basically builds on top of Branch coverage to create an even more rigorous code coverage analysis.
To fulfill MC/DC coverage for a code section, the following must apply:
Every entry and exit point must be invoked at least once.
Every condition in all decisions must take each possible outcome at least once.
Every decision must have taken each possible outcome at least once.
Each condition in a decision must be shown to independently affect the decision outcome.
A condition is shown to independently affect a decision outcome by varying only that condition while holding all other conditions fixed.
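As a sketch of the last two points, consider a decision with two conditions, a && b (the function is invented for illustration). Three test cases suffice for MC/DC, versus four for exhaustive testing: a baseline where both conditions are true, plus one case per condition in which only that condition is flipped, showing it independently changes the outcome:

```c
#include <stdbool.h>

/* MC/DC for the decision (a && b):
 *   a=1, b=1 -> true    baseline
 *   a=0, b=1 -> false   only a changed: a independently affects outcome
 *   a=1, b=0 -> false   only b changed: b independently affects outcome
 * (Illustrative function; not part of TrueANALYZER.) */
bool both_sensors_ok(bool a, bool b)
{
    return a && b;
}
```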
Safety-critical software is often required to be tested such that it fulfills MC/DC-level code coverage analysis successfully. Testing on MC/DC level is for example required for flight control system software and avionics software. But any software project benefits from rigorous code coverage analysis:
Companies with a good reputation want to avoid damaging it by releasing products of poor quality.
Products that are very difficult or expensive to upgrade in the field, are good candidates for rigorous testing.
Products that are produced in high volumes should be tested formally.
Safety-critical and semi-safety-critical products should be tested formally.
Note that in the MC/DC example above, the code section must be executed many times, such that different combinations of condition values in the branch decision are taken, to test not only all different execution paths but also all conditions that can affect the execution path decision.
TOOL SUPPORT Atollic TrueANALYZER is a very powerful tool for code coverage analysis. It performs the following types of code coverage:
Block coverage
Function coverage
Function call coverage
Branch coverage
Modified condition/decision coverage (MC/DC)
To achieve such rigorous code coverage analysis, Atollic TrueANALYZER performs the following tasks:
Analyzes the application source code
Instruments & rebuilds a copy of the application source code
Downloads the instrumented application to target using a JTAG probe
Executes the instrumented application with dynamic execution flow monitoring on the target board
On request, uploads latest analysis result to the PC for visualization
PERFORMING CODE COVERAGE ANALYSIS Follow the instructions below to prepare for, and perform, a code coverage analysis session to measure the test quality of an application running in a target board.
PREPARATIONS Before performing a code coverage analysis session, Atollic TrueANALYZER must establish a connection with the target board. This is commonly done using the same JTAG probe used for ordinary debugging with Atollic TrueSTUDIO.
To prepare for code coverage analysis, using an ST-LINK or SEGGER JTAG probe connected to your electronic board, perform the following steps:
1. Verify that the RAM/FLASH switch on the board (if available) is set in the right mode (it must match the Atollic TrueSTUDIO project configuration).
2. Find out if your board supports JTAG-mode or SWD-mode debug connectors. The JTAG probe settings must be configured accordingly.
3. Connect the JTAG cable between the JTAG dongle and the target board.
4. Connect the USB cable between the PC and the JTAG dongle.
5. Make sure that a proper power supply is connected to the target board.
Once the steps above are performed, a code coverage analysis session in Atollic TrueANALYZER can be started.
STARTING CODE COVERAGE ANALYSIS Starting a code coverage analysis session is very easy. But before doing that, please check the following prerequisites:
Make sure that a suitable project is selected in Atollic TrueSTUDIO.
Ensure that the project has been built without errors and that the build process has resulted in a binary file containing the application to be analyzed.
It is highly recommended to always place a library and the library code in a separate project and never include them in the main project. That way, code that is tested separately does not have to be included in the code coverage analysis.
Ensure that the application binary can be downloaded to the target board using the JTAG probe and the Atollic TrueSTUDIO debugger.
Ensure that the application can be debugged on the target board using the JTAG probe and the Atollic TrueSTUDIO debugger.
Provided that the pre-requisites outlined above are fulfilled, perform the following steps to start a code coverage analysis session:
1. Switch to the Coverage perspective by pressing Open Perspective and selecting the perspective.
Figure 341 – Switch to Coverage Perspective
Unless the application can be debugged and executed successfully on the target board using the Atollic TrueSTUDIO debugger and the associated JTAG probe, code coverage analysis cannot be performed. Thus, make sure debugging works as expected before starting any code coverage analysis.
Figure 342 – Coverage Perspective
2. Select the project by clicking on the Project name (root node) in the Atollic TrueSTUDIO Project Explorer view.
3. Exclude files (or folders with files) that should not be instrumented or measured.
In particular, all C files containing code that executes before main() during the power-on-reset and startup procedure must in most cases be excluded, as the instrumentation does not work before the C runtime environment is properly initialized.
Folders and files excluded from quality measurement will not be instrumented by Atollic TrueANALYZER, so that code keeps its real-time behavior. Exclusion is also a good way to reduce the memory overhead introduced by the instrumented code.
4. Right-click on the project root node in the Project Explorer view, and select Properties to open the project properties dialog box. Then navigate to the Testing > Exclusion > TrueANALYZER panel, and exclude files and folders as appropriate. Click OK to save any changed settings.
Figure 343 - Exclude files and folders from analysis
5. Press the Analyze button to start a code coverage measurement session.
Figure 344 - Starting a Code Coverage Analysis Session
6. The launch configuration dialog box is displayed:
Figure 345 – Launch Configuration Dialog
7. Select the Debugger panel and make sure the settings are correct for your board and JTAG probe:
Figure 346 – Select and Configure the Debugger
8. Select the Analyze tab under Startup Scripts, Target Software Startup Scripts. Enter any debugger initialization required by the target board or the JTAG probe:
Figure 347 – The Startup Analyze Panel
9. Click Analyze to start analysis of the source code. A copy of the source code will be instrumented and rebuilt. This may take several minutes:
Figure 348 - Analysis, Instrumentation and Rebuild
10. In the Analysis view, the text localhost:<port number> is displayed in the project tree. This tree item is the “handle” to the target connection (JTAG probe connection to the target board). Initially this tree item also displays information regarding the download of the instrumented application to the target, and other connection related information.
Figure 349 - Code Coverage Perspective with no Analysis Results
11. In the Analysis view, click on the text localhost:<port number> to select the hardware connection to the target board. Note that the Refresh button in the Analysis view toolbar becomes enabled:
Figure 350 - Hardware Connection Selected
12. At any time during execution of the application on the target board, click on the Refresh button. This refreshes the code coverage analysis data in Atollic TrueANALYZER (i.e. updated test quality measurements are uploaded and displayed).
The Coverage view shows an overall summary of the complete application, as well as the analysis results for each C function.
By clicking on a function name, the corresponding source code file is opened and the function implementation is displayed in the editor. The source code lines that have been executed are highlighted with color coding:
Figure 351 - Visualization of Code Coverage Analysis Results
13. Interact with the target board (for example by pressing buttons, sending communication packages, or activating various sensors) as desired, to simulate various use cases as part of the test procedures.
14. To obtain the latest analysis results, simply click the Refresh button at any time. This shows how good the test quality is after running a test case, but not where or why the test coverage fell short.
15. For each C function, it is possible to analyze where the test cases do not reach better test coverage, and why. This is done in the MC/DC view.
Figure 352 - Using the MC/DC view to understand why the test coverage is not better
The MC/DC view lists all branch decisions in the selected function, such as if-statements and switch-statements. Expand a branch decision to view its conditions. The Covered column displays whether each condition in the branch decision fulfills MC/DC coverage.
With a specific branch decision selected in the MC/DC view, the truth table of the fulfilled conditions is displayed at the bottom of the view. Using this truth table, it is easy to detect which conditions are not yet fulfilled, and thus how to achieve better test coverage for that branch decision.
Using the Coverage and MC/DC views, it is thus possible to understand both what test coverage (test quality) is achieved by the test procedures, as well as where and why the test cases do not provide better test coverage.
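The MC/DC criterion behind these views can be illustrated with a small sketch: for each condition in a branch decision, MC/DC requires a pair of test vectors that differ only in that condition and flip the overall decision outcome (an "independence pair"). The script below is an illustrative model only, not TrueANALYZER's implementation; the decision `a && b` is a hypothetical example.

```python
from itertools import product

def mcdc_pairs(decision, n_conds):
    """For each condition index, collect pairs of input vectors that
    differ only in that condition and change the decision outcome."""
    rows = list(product([False, True], repeat=n_conds))
    pairs = {i: [] for i in range(n_conds)}
    for a in rows:
        for b in rows:
            # Find the positions where the two vectors differ
            diff = [i for i in range(n_conds) if a[i] != b[i]]
            if len(diff) == 1 and decision(*a) != decision(*b):
                pairs[diff[0]].append((a, b))
    return pairs

# Hypothetical branch decision "if (a && b)" with two conditions
pairs = mcdc_pairs(lambda a, b: a and b, 2)
```

A condition with an empty pair list cannot be shown to independently affect the outcome, which corresponds to an unfulfilled row in the MC/DC truth table.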
16. To terminate the code coverage analysis session, first make sure the localhost:<port number> text is selected in the Analysis view. Then click the Terminate button in the toolbar:
Figure 353 - Terminating an Analysis Session
Execution is stopped on the target board and Atollic TrueANALYZER releases the connection to the gdbserver.
17. Switch to the C/C++ perspective using the Window, Close Perspective menu command.
Figure 354 – Return to C/C++ Perspective Toolbar Button
Please remember to terminate the code coverage analysis session, as the connection to the target board otherwise remains in use. It will block future attempts to launch Atollic TrueSTUDIO debugger sessions, Atollic TrueANALYZER code coverage analysis sessions, or Atollic TrueVERIFIER test sessions.
GENERATE A CSV-FILE

The test coverage result can be exported as a CSV file that can later be inspected in an external program, such as Microsoft Excel. Click the Export CSV-file toolbar button in the Coverage view:
Figure 355 – Generate Report Toolbar Button
A dialog box is now displayed. Enter the name of the report.
Figure 356 - The Export CSV file Dialog Box
Click on the Save button to generate the report.
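Besides Excel, the exported CSV can be post-processed with a script, for example to flag functions below a coverage target. The column names below are assumptions for illustration, not the documented TrueANALYZER export format; adjust them to match the headers in your actual file.

```python
import csv
import io

# Hypothetical CSV layout standing in for an exported coverage report;
# the real column names produced by TrueANALYZER may differ.
sample = io.StringIO(
    "Function,Statement coverage (%),MC/DC coverage (%)\n"
    "main,100,100\n"
    "parse_packet,80,62\n"
)

# Collect functions whose MC/DC coverage is below a 75% target
below_target = [row["Function"]
                for row in csv.DictReader(sample)
                if float(row["MC/DC coverage (%)"]) < 75]
print(below_target)
```

In practice the `io.StringIO` stand-in would be replaced by `open("report.csv")` on the file saved from the dialog box above.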
MEASUREMENT STRATEGY AND RESOURCE USAGE

Some code coverage solutions are based on analyzing recorded instruction trace data. With this methodology, the amount of recorded data is massive even for very short measurement periods. For example, a JTAG probe with a 16MB compressed trace buffer can typically store a test run of only about 0.1 seconds on a 168MHz Cortex-M4. The 16MB compressed trace buffer expands to about 200MB of data in uncompressed machine-readable format, and to about 2GB in human-readable format. Even with gigabytes of recorded trace data, only short periods of execution time can be measured. This strategy therefore does not work for test quality measurement during system tests of non-trivial length.
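The figures above imply a very high trace data rate, which the following back-of-the-envelope sketch makes explicit. All values are illustrative assumptions taken from the example, not measured data.

```python
# Assumed figures from the example above
buffer_mb = 16.0        # compressed trace buffer in the JTAG probe
expansion = 200.0 / 16  # compressed -> uncompressed ratio (~12.5x)
capture_s = 0.1         # capture window on a 168 MHz Cortex-M4

# Uncompressed data produced during one full buffer capture,
# and the sustained data rate this corresponds to
uncompressed_mb = buffer_mb * expansion
rate_mb_per_s = uncompressed_mb / capture_s
print(uncompressed_mb, rate_mb_per_s)
```

A sustained rate on the order of gigabytes per second of uncompressed trace data is why buffer-based trace capture is limited to fractions of a second.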
As fractions of a second are typically not enough to test and debug the quality of real-world test procedures (system tests typically involve users pushing buttons, stepper motors moving mechanics, pumps pouring water into washing machines, interaction with other systems, etc.), a different approach is needed.
For this reason, Atollic TrueANALYZER uses instrumentation instead of instruction trace recording. This enables unlimited execution time, since the length of execution does not affect the amount of data recorded.
On the other hand, the instrumentation uses some memory resources in the target board:
In general, the RAM needed on the target board equals the sum over all instrumented functions of (cyclomatic complexity + 1) bytes. For example, if the application has two functions, one with cyclomatic complexity 12 and the other with cyclomatic complexity 7, the total RAM usage is (12+1) + (7+1) = 21 bytes. Branch decisions with multiple conditions also add one byte of RAM for each condition.
In general, the Flash memory needed on the target board scales with the total sum of the cyclomatic complexity of all functions being analyzed, although the exact number of bytes depends on many factors, including compiler version and optimization level.
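The RAM rule above can be sketched as a small estimator. This is a sketch of the stated rule only; the function name and the `extra_conditions` parameter are illustrative, not part of any TrueANALYZER API.

```python
def estimate_ram_bytes(complexities, extra_conditions=0):
    """Estimate instrumentation RAM use per the rule above:
    (cyclomatic complexity + 1) bytes per instrumented function,
    plus one byte per extra condition in multi-condition decisions."""
    return sum(c + 1 for c in complexities) + extra_conditions

# The document's example: two functions with complexity 12 and 7
print(estimate_ram_bytes([12, 7]))  # → 21
```

Running the estimator over all functions reported in a complexity analysis gives a quick check of whether the instrumented application will fit in target RAM.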
If the instrumented application becomes too large to fit on the target board, it is possible to exclude files and folders from instrumentation, thereby instrumenting only parts of the system.