
ITCS 4/5010 GPU Programming, UNC-Charlotte, B. Wilkinson, Jan 14, 2013, CUDAProgModel.ppt

CUDA Programming Model

These notes will introduce:

• Basic GPU programming model
• CUDA kernel
• Simple CUDA program to add two vectors together
• Compiling the code on a Linux system


Programming Model

Historically, GPUs were designed for creating image data for displays.

That application involves manipulating image pixels (picture elements), and often the same operation is applied to each pixel.

SIMD (single instruction, multiple data) model -- an efficient mode of operation in which the same operation is done on every data element at the same time.


SIMD (Single Instruction Multiple Data) model

Also known as data parallel computation (pattern). One instruction specifies the operation:

Instruction: a[] = a[] + k

[Figure: a row of ALUs, each applying the instruction to one element of the array, a[0] through a[n-1].]

Very efficient if this is what you want to do. One program. Computers can be designed to operate this way.
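As a concrete illustration (an assumed example, not from the slides; kernel launches and thread IDs are introduced on later slides), the instruction above can be written as a CUDA kernel in which each thread updates one element:

__global__ void addK(int *a, int k) {   // hypothetical kernel name
    int i = threadIdx.x;                // this thread's element index
    a[i] = a[i] + k;                    // same operation, different data (SIMD style)
}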


Single Instruction Multiple Thread Programming Model (SIMT)

A version of SIMD used in GPUs.

GPUs use a thread model to achieve very high parallel performance and to hide memory latency.

Multiple threads each execute the same instruction sequence.

On a GPU, a very large number of threads (tens of thousands) is possible.

Threads are mapped onto the available processors on the GPU (hundreds of processors, all executing the same program sequence).


Programming applications using the SIMT model

Matrix operations -- very amenable to SIMT
• Same operations done on different elements of matrices

Some “embarrassingly” parallel computations, such as Monte Carlo calculations (see the sketch after this list)
• Monte Carlo calculations use random selections
• Random selections are independent of each other

Data manipulations
• Some sorting can be done quite efficiently
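As a rough sketch of the Monte Carlo point above (an assumed example, not from the slides; it uses the cuRAND device library and the thread ID introduced on later slides), each thread can draw its own independent random samples, here to estimate pi:

#include <curand_kernel.h>

__global__ void montePi(int *hits, int samplesPerThread, unsigned long long seed) {
    int id = threadIdx.x;
    curandState state;
    curand_init(seed, id, 0, &state);      // independent random stream per thread
    int count = 0;
    for (int s = 0; s < samplesPerThread; s++) {
        float x = curand_uniform(&state);  // random point in the unit square
        float y = curand_uniform(&state);
        if (x * x + y * y <= 1.0f) count++;
    }
    hits[id] = count;                      // host sums counts; pi ≈ 4 * total / samples
}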


To write a SIMT program, one needs to write a code sequence that all the threads on the GPU will execute.

In CUDA, this code sequence is called a kernel routine.

Kernel code is regular C, except that one typically needs to use the thread ID in expressions to ensure that each thread accesses different data:

Example (a CUDA kernel routine -- all threads do this, each with its own thread ID):

index = ThreadID;
c[index] = a[index] + b[index];
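As a minimal sketch (the complete vector-add example appears on a later slide), the fragment fits inside a full kernel like this, with CUDA's built-in threadIdx.x serving as the thread ID:

__global__ void vecAdd(int *a, int *b, int *c) {
    int index = threadIdx.x;            // each thread gets a unique ID
    c[index] = a[index] + b[index];     // so each thread adds a different pair
}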


CPU and GPU memory

• A program, once compiled, has code executed on the CPU and (kernel) code executed on the GPU.

• There are separate memories on the CPU and GPU.

Hence one needs to: *

• Explicitly transfer data from the CPU to the GPU for GPU computation, and

• Explicitly transfer results in GPU memory back to CPU memory.

[Figure: CPU with its main memory and GPU with its global memory, with arrows labeled “Copy from CPU to GPU” and “Copy from GPU to CPU”.]

* True for CUDA version 3. Version 4 (May 2011) can eliminate the explicit transfers.
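A minimal sketch of the round trip just described (the individual steps are detailed on the following slides; devA, devC, a, c, and size are assumed to be set up as shown there):

cudaMemcpy(devA, a, size, cudaMemcpyHostToDevice);   // CPU to GPU, before the kernel
// ... kernel executes on the GPU, computing into devC ...
cudaMemcpy(c, devC, size, cudaMemcpyDeviceToHost);   // GPU to CPU, after the kernel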


Basic CUDA program structure

int main(int argc, char **argv) {

1. Allocate memory space in device (GPU) for data
2. Allocate memory space in host (CPU) for data

3. Copy data to GPU

4. Call “kernel” routine to execute on GPU
   (with CUDA syntax that defines the number of threads and their physical structure)

5. Transfer results from GPU to CPU

6. Free memory space in device (GPU)
7. Free memory space in host (CPU)

return;
}


1. Allocating memory space in “device” (GPU) for data

Use CUDA malloc routines:

int size = N * sizeof(int);   // space for N integers

int *devA, *devB, *devC;      // devA, devB, devC are device memory pointers

cudaMalloc((void**)&devA, size);
cudaMalloc((void**)&devB, size);
cudaMalloc((void**)&devC, size);

Derived from Jason Sanders, “Introduction to CUDA C,” GPU Technology Conference, Sept. 20, 2010.
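Not shown on the slide: cudaMalloc returns a cudaError_t, so allocations can be checked (a sketch, assuming <stdio.h> and <stdlib.h> are included):

cudaError_t err = cudaMalloc((void**)&devA, size);
if (err != cudaSuccess) {   // allocation failed
    fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(err));
    exit(1);
}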


2. Allocating memory space in “host” (CPU) for data

Use regular C malloc routines:

int *a, *b, *c;
…
a = (int*)malloc(size);
b = (int*)malloc(size);
c = (int*)malloc(size);

or statically declare variables:

#define N 256
…
int a[N], b[N], c[N];


3. Transferring data from host (CPU) to device (GPU)

Use CUDA routine cudaMemcpy

cudaMemcpy(devA, a, size, cudaMemcpyHostToDevice);

cudaMemcpy(devB, b, size, cudaMemcpyHostToDevice);

where:

devA and devB are pointers to the destination in the device, and a and b are pointers to the source data in the host.

(In each call, the first argument is the destination and the second is the source.)


4. Declaring “kernel” routine to execute on device (GPU)

CUDA introduces a syntax addition to C: triple angle brackets mark a call from host code to device code, and contain the organization and number of threads in two parameters:

myKernel<<< n, m >>>(arg1, … );

n and m define the organization of thread blocks and the threads in a block.

For now, we will set n = 1, which says one block, and m = N, which says N threads in this block.

arg1, … are the arguments to routine myKernel, typically pointers to device memory obtained previously from cudaMalloc.


Example – Adding two vectors A and B

#define N 256

__global__ void vecAdd(int *a, int *b, int *c) {   // kernel definition
    int i = threadIdx.x;
    c[i] = a[i] + b[i];
}

int main() {
    // allocate device memory and copy data to device
    // (device memory pointers devA, devB, devC)

    vecAdd<<<1, N>>>(devA, devB, devC);   // grid of one block, N threads in block
    …
}

Loosely derived from the CUDA C Programming Guide, v3.2, 2010, NVIDIA.

Declaring a Kernel Routine

Each of the N threads performs one pair-wise addition:

Thread 0:   devC[0] = devA[0] + devB[0];
Thread 1:   devC[1] = devA[1] + devB[1];
…
Thread N-1: devC[N-1] = devA[N-1] + devB[N-1];

Notes: threadIdx is the CUDA structure that provides the thread ID within the block. A kernel is defined using the CUDA specifier __global__ (two underscores on each side).


5. Transferring data from device (GPU) to host (CPU)

Use CUDA routine cudaMemcpy

cudaMemcpy(c, devC, size, cudaMemcpyDeviceToHost);

where:

devC is a pointer in the device and c is a pointer in the host.

(Again, the first argument is the destination and the second is the source.)


6. Free memory space in “device” (GPU)

Use CUDA cudaFree routine:

cudaFree(devA);
cudaFree(devB);
cudaFree(devC);


7. Free memory space in host (CPU) (if CPU memory allocated with malloc)

Use the regular C free routine to deallocate memory if it was previously allocated with malloc:

free(a);
free(b);
free(c);


Complete CUDA program

Adding two vectors, A and B, with N elements in A and B, and N threads (without code to load the arrays with data).

#define N 256

__global__ void vecAdd(int *A, int *B, int *C) {
    int i = threadIdx.x;
    C[i] = A[i] + B[i];
}

int main(int argc, char **argv) {
    int size = N * sizeof(int);
    int a[N], b[N], c[N], *devA, *devB, *devC;

    cudaMalloc((void**)&devA, size);
    cudaMalloc((void**)&devB, size);
    cudaMalloc((void**)&devC, size);

    cudaMemcpy(devA, a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(devB, b, size, cudaMemcpyHostToDevice);

    vecAdd<<<1, N>>>(devA, devB, devC);

    cudaMemcpy(c, devC, size, cudaMemcpyDeviceToHost);

    cudaFree(devA);
    cudaFree(devB);
    cudaFree(devC);

    return 0;
}
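For example, to compile and run the program (a sketch, assuming the file is saved as vecadd.cu and nvcc is installed as on the following slides):

nvcc -o vecadd vecadd.cu
./vecadd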


Compiling CUDA programs: “nvcc”

NVIDIA provides nvcc -- the NVIDIA CUDA “compiler driver”.

nvcc separates out the code for the host and the code for the device.

A regular C/C++ compiler is used for the host (and needs to be available).

The programmer simply uses nvcc instead of the gcc/cc compiler on a Linux system.

Command-line options include flags for GPU features.


Compiling code - Linux

Command line:

nvcc -O3 -o <exe> <source_file> -I/usr/local/cuda/include -L/usr/local/cuda/lib -lcuda -lcudart

A CUDA source file that includes device code has the extension .cu

A regular C compiler needs to be installed for the CPU.

A Makefile is convenient -- see next.

See “The CUDA Compiler Driver NVCC” from NVIDIA for more details

In the command line above: -O3 sets the optimization level, if you want optimized code; -I gives directories for #include files; -L gives directories for libraries; -lcuda -lcudart name the (dynamic) CUDA libraries to be linked (core and runtime, both needed).


Very simple sample Makefile

NVCC = /usr/local/cuda/bin/nvcc
CUDAPATH = /usr/local/cuda

NVCCFLAGS = -I$(CUDAPATH)/include
LFLAGS = -L$(CUDAPATH)/lib64 -lcuda -lcudart -lm

prog1:
	cc -o prog1 prog1.c -lm

prog2:
	cc -I/usr/openwin/include -o prog2 prog2.c -L/usr/openwin/lib -L/usr/X11R6/lib -lX11 -lm

prog3:
	$(NVCC) $(NVCCFLAGS) $(LFLAGS) -o prog3 prog3.cu

prog4:
	$(NVCC) $(NVCCFLAGS) $(LFLAGS) -I/usr/openwin/include -o prog4 prog4.cu -L/usr/openwin/lib -L/usr/X11R6/lib -lX11 -lm

prog1 is a regular C program; prog2 is a C program with X11 graphics; prog3 is a CUDA program; prog4 is a CUDA program with X11 graphics.


Compilation process

[Figure: nvcc sends the host code to gcc and the device code to the ptxas assembler; the two object files are combined into one executable.]

The nvcc “wrapper” divides the code into host and device parts.

The host part is compiled by a regular C compiler.

The device part is compiled by the NVIDIA “ptxas” assembler.

The two compiled parts are combined into one executable:

nvcc -o prog prog.cu -I/includepath -L/libpath

The executable file is a “fat binary,” with both host and device code.


Executing Program

Simply type the name of the executable created by nvcc:

./prog1

The file includes all the code for the host and for the device in a “fat binary” file.

The host code starts running.

When the first device kernel is encountered, the GPU code is physically sent to the GPU and the function is launched on the GPU.

Present NVIDIA GPUs do not have instruction caches, so this process is repeated for each call. I am told by NVIDIA the overhead is very small.*

* Correction from previous slides.


Compiling and executing on a Windows system

One can use Microsoft Visual Studio and a PC with an NVIDIA GPU card.

The basic setup is described in “Configuring Microsoft Visual Studio 2008 for CUDA Toolkit Version 3.2,” B. Wilkinson and Brian Nacey, Feb 24, 2012, found at

http://coitweb.uncc.edu/~abw/SIGCSE2011Workshop/ConfiguringVSforCUDA.pdf

but NVIDIA now provides a fully configured NVIDIA Nsight Visual Studio Edition, found at

http://developer.nvidia.com/nvidia-nsight-visual-studio-edition

and an Eclipse version, found at

http://developer.nvidia.com/nsight-eclipse-edition


NVIDIA Nsight Visual Studio Edition

http://developer.nvidia.com/nvidia-nsight-visual-studio-edition


Questions