IBM Research
© 2006 IBM Corporation
CDT Static Analysis Features
CDT Developer Summit - Ottawa
Beth Tibbitts [email protected] 20, 2006
This work has been supported in part by the Defense Advanced Research Projects Agency (DARPA) under contract No. NBCH30390004.
The Problem

Static analysis of C programs is useful.
The existing Abstract Syntax Tree (AST) in the Eclipse CDT provides basic navigation and information, but more is needed.
CDT AST Extensions

Enhance the existing CASTNode and Visitor with bottom-up traversal
Add additional graphs:
– Call Graph
– Control Flow Graph
– Data Dependence Graph
Traversal of these new graphs is available in:
– Topological Order
– Reverse Topological Order
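The topological orders listed above can be sketched with Kahn's algorithm. This is a minimal, self-contained illustration, not the CDT graph API: the class name, the string-keyed adjacency map, and the sample edges are all assumptions made for the example.

```java
import java.util.*;

// Illustrative sketch (not CDT API): topological ordering of an acyclic
// call graph, using Kahn's algorithm over a caller -> callees map.
public class TopoOrder {
    public static List<String> topological(Map<String, List<String>> graph) {
        // Count incoming edges for every node, including callees
        // that never appear as keys.
        Map<String, Integer> indegree = new HashMap<>();
        for (String n : graph.keySet()) indegree.putIfAbsent(n, 0);
        for (List<String> succs : graph.values())
            for (String s : succs) indegree.merge(s, 1, Integer::sum);

        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : indegree.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());

        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String n = ready.remove();
            order.add(n);
            // A successor becomes ready once all its callers are emitted.
            for (String s : graph.getOrDefault(n, List.of()))
                if (indegree.merge(s, -1, Integer::sum) == 0) ready.add(s);
        }
        return order;
    }

    public static void main(String[] args) {
        // Acyclic variant of the deck's example: main calls a and foo, foo calls gee.
        Map<String, List<String>> g = new HashMap<>();
        g.put("main", List.of("a", "foo"));
        g.put("foo", List.of("gee"));
        System.out.println(topological(g));  // prints [main, a, foo, gee]
    }
}
```

Reverse topological order, the other traversal the deck mentions, is simply `Collections.reverse(order)` on the result.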
Bottom-up AST traversal
org.eclipse.cdt.core.dom.ast.ASTVisitor
org.eclipse.cdt.core.dom.ast.c.CASTVisitor
Existing:
public int visit(IASTxxx..){
return PROCESS_CONTINUE;
}
New:
public int leave(IASTxxx..){
return PROCESS_CONTINUE;
}
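The difference between the existing visit() and the new leave() callback is the direction of the walk: visit() fires on the way down, leave() fires after all children have been processed. A standalone sketch of that protocol, with a toy Node type standing in for the real IASTNode/ASTVisitor classes (the type names and the PROCESS_CONTINUE value here are illustrative, not the CDT definitions):

```java
import java.util.*;

// Simplified stand-in for the CDT visitor pattern: visit() is called
// top-down on entry to a node, leave() is called bottom-up on exit,
// after every child has already been visited and left.
public class BottomUp {
    public static class Node {
        public final String name;
        public final List<Node> children;
        public Node(String name, Node... children) {
            this.name = name;
            this.children = List.of(children);
        }
    }

    public static final int PROCESS_CONTINUE = 3;  // value illustrative

    public interface Visitor {
        int visit(Node n);   // existing: top-down
        int leave(Node n);   // new: bottom-up
    }

    public static void accept(Node n, Visitor v) {
        v.visit(n);
        for (Node c : n.children) accept(c, v);
        v.leave(n);          // children are fully processed at this point
    }

    public static void main(String[] args) {
        Node tree = new Node("expr", new Node("lhs"), new Node("rhs"));
        List<String> trace = new ArrayList<>();
        accept(tree, new Visitor() {
            public int visit(Node n) { trace.add("visit " + n.name); return PROCESS_CONTINUE; }
            public int leave(Node n) { trace.add("leave " + n.name); return PROCESS_CONTINUE; }
        });
        // prints [visit expr, visit lhs, leave lhs, visit rhs, leave rhs, leave expr]
        System.out.println(trace);
    }
}
```

Note that leave(expr) fires last: by then both subexpressions have been completed, which is what makes bottom-up synthesis of analysis results possible.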
Call Graph
[Call graph figure: main calls foo and a; foo → gee → kei → foo]

Recursive calls detected: a cycle is detected on foo, gee and kei.
#include "mpi.h"
#include "stdio.h"

void foo(int x);
void gee(int x);
void kei(int x);

void foo(int x) {
    x++;
    gee(x);
}

void gee(int x) {
    x *= 3;
    kei(x);
}

void kei(int x) {
    x = x % 10;
    foo(x);
}

void a(int x) {
    x--;
}

int main3(int argc, char* argv[]) {
    int x = 0;
    foo(x);
    a(x);
}
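The cycle through foo, gee and kei that the slide reports can be found with a depth-first search over the call graph, keeping the current path to spot back edges. A hedged sketch (the class name and string-keyed graph representation are assumptions for the example, not the CDT call-graph API):

```java
import java.util.*;

// Illustrative cycle detection on a caller -> callees map: a callee that
// already appears on the current DFS path closes a cycle.
public class CallCycles {
    public static List<String> findCycle(Map<String, List<String>> calls) {
        Set<String> done = new HashSet<>();
        for (String start : calls.keySet()) {
            List<String> cycle = dfs(start, calls, new ArrayList<>(), done);
            if (cycle != null) return cycle;
        }
        return null;  // acyclic
    }

    private static List<String> dfs(String n, Map<String, List<String>> calls,
                                    List<String> path, Set<String> done) {
        int i = path.indexOf(n);
        if (i >= 0)  // back edge: the cycle is the path suffix from n onward
            return new ArrayList<>(path.subList(i, path.size()));
        if (done.contains(n)) return null;  // already explored, no cycle via n
        path.add(n);
        for (String callee : calls.getOrDefault(n, List.of())) {
            List<String> cycle = dfs(callee, calls, path, done);
            if (cycle != null) return cycle;
        }
        path.remove(path.size() - 1);
        done.add(n);
        return null;
    }

    public static void main(String[] args) {
        // The call graph of the sample program above.
        Map<String, List<String>> g = Map.of(
            "main", List.of("foo", "a"),
            "foo", List.of("gee"),
            "gee", List.of("kei"),
            "kei", List.of("foo"),
            "a", List.of());
        // prints the foo/gee/kei cycle (rotation depends on where DFS starts)
        System.out.println(findCycle(g));
    }
}
```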
Control Flow Graph & Data Dependence Graph – sample program
#include <stdio.h>
#include <string.h>
#include "mpi.h"
// Sample MPI program
int main(int argc, char* argv[]){
printf("Hello MPI World the original.\n");
int my_rank; /* rank of process */
int p; /* number of processes */
int source; /* rank of sender */
int dest; /* rank of receiver */
int tag=0; /* tag for messages */
char message[100], *tmp; /* storage for message */
MPI_Status status ; /* return status for receive */
int * array;
/* start up MPI */
array = (int *)malloc(sizeof(int) * 10);
MPI_Init(&argc, &argv);
/* find out process rank */
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
/* find out number of processes */
MPI_Comm_size(MPI_COMM_WORLD, &p);
MPI_Barrier(MPI_COMM_WORLD);
if (my_rank !=0){
/* create message */
sprintf(message, "Greetings from process %d!", my_rank);
dest = 0;
/* use strlen+1 so that '\0' get transmitted */
MPI_Send(message, strlen(message)+1, MPI_CHAR,
dest, tag, MPI_COMM_WORLD);
MPI_Barrier(MPI_COMM_WORLD);
}
else{
printf("From process 0: Num processes: %d\n",p);
for (source = 1; source < p; source++) {
MPI_Recv(message, 100, MPI_CHAR, source, tag,
MPI_COMM_WORLD, &status);
printf("%s\n",message);
}
MPI_Barrier(MPI_COMM_WORLD);
}
/* shut down MPI */
MPI_Finalize();
free(array);
return 0;
}
Control Flow Graph

[Control-flow-graph figure for the sample program: entry → printf → declarations (my_rank, p, source, dest, tag, message/tmp, status, array) → array = malloc() → MPI_Init() → MPI_Comm_rank() → MPI_Comm_size() → MPI_Barrier() → branch on my_rank != 0. The then-branch runs dest = 0, sprintf, MPI_Send(), MPI_Barrier(); the else-branch runs printf, then the for loop (source = 1; source < p; source++) around MPI_Recv and printf, then MPI_Barrier(). The branches meet at a join block, followed by MPI_Finalize(), free(array), return 0 → exit.]
Data Dependence Graph (DDG)

[DDG figure: the same nodes as the control flow graph, with data-flow edges overlaid on the control-flow edges; the legend distinguishes control flow from data flow. Work in progress (graph not complete).]
Summary
MPI Barrier Analysis uses these structures
Is this valuable as an addition to CDT?
Other future plans:
– Parallel Tools Platform (PTP) Analysis:
  • Static and dynamic analysis of MPI, OpenMP, and LAPI programs for detection of common errors
  • Code refactorings for performance optimization, e.g. refactoring for improved computation / communication overlap
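The MPI Barrier Analysis mentioned in the summary relies on the control flow graph: if the two arms of a rank-dependent branch execute different numbers of MPI_Barrier calls, some processes can block forever. A deliberately simplified sketch of that idea (this is not the PTP implementation; statements are modeled as plain strings, and real analysis must handle loops and interprocedural calls):

```java
import java.util.*;

// Toy version of barrier matching: compare barrier counts on the two
// arms of a branch whose condition depends on the process rank.
public class BarrierCheck {
    public static long barriers(List<String> stmts) {
        // Count statements that are MPI_Barrier calls.
        return stmts.stream().filter(s -> s.startsWith("MPI_Barrier")).count();
    }

    // True when both branches synchronize the same number of times.
    public static boolean matched(List<String> thenArm, List<String> elseArm) {
        return barriers(thenArm) == barriers(elseArm);
    }

    public static void main(String[] args) {
        // The if/else from the sample program: one barrier on each path, so OK.
        List<String> thenArm = List.of("sprintf(...)", "MPI_Send(...)",
                                       "MPI_Barrier(MPI_COMM_WORLD)");
        List<String> elseArm = List.of("printf(...)", "MPI_Recv(...)",
                                       "MPI_Barrier(MPI_COMM_WORLD)");
        System.out.println(matched(thenArm, elseArm));  // prints true
    }
}
```

Dropping either MPI_Barrier from the sample program would make the check fail, which is exactly the class of error the analysis is meant to flag.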