Agenda
• Quantum Monte Carlo
• Star-HPC
• Parallel Matlab
• GPGPU Computing
• Hybrid Parallel Computing (MPI + OpenMP)
• OpenFOAM
Administration
• The last lecture has been moved: instead of Thursday, 6/1/11, it will take place on Sunday, 2/1/11, 17:00-20:00, in building 90, room 141.
• Final projects – several students have not yet chosen a topic or have not emailed me their topic. *** It is urgent to settle the projects ***
News…
• AMD: 16 cores in 2011
– How can a scientist continue to program in the old-fashioned serial way? Do you want to utilize only 1/16 of the power of your computer?
Parallel Computing!
Star-HPC
• http://web.mit.edu/star/hpc/index.html
• StarHPC provides an on-demand computing cluster configured for parallel programming in both OpenMP and OpenMPI technologies. StarHPC uses Amazon's EC2 web service to completely virtualize the entire parallel programming experience, allowing anyone to quickly get started learning MPI and OpenMP programming.
MatlabMPI
http://www.ll.mit.edu/mission/isr/matlabmpi/matlabmpi.html#introduction
Add to Matlab path:
vdwarf2.ee.bgu.ac.il> cat startup.m
addpath /usr/local/PP/MatlabMPI/src
addpath /usr/local/PP/MatlabMPI/examples
addpath ./MatMPI
xbasic
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Basic Matlab MPI script that
% prints out a rank.
%
% To run, start Matlab and type:
%
%   eval( MPI_Run('xbasic',2,{}) );
%
% Or, to run on different machines, type:
%
%   eval( MPI_Run('xbasic',2,{'machine1' 'machine2'}) );
%
% Output will be piped into two files:
%
%   MatMPI/xbasic.0.out
%   MatMPI/xbasic.1.out
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% MatlabMPI
% Dr. Jeremy Kepner
% MIT Lincoln Laboratory
% [email protected]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Initialize MPI.
MPI_Init;

% Create communicator.
comm = MPI_COMM_WORLD;

% Modify common directory from default for better performance.
% comm = MatMPI_Comm_dir(comm,'/tmp');

% Get size and rank.
comm_size = MPI_Comm_size(comm);
my_rank = MPI_Comm_rank(comm);

% Print rank.
disp(['my_rank: ',num2str(my_rank)]);

% Wait momentarily.
pause(2.0);

% Finalize Matlab MPI.
MPI_Finalize;
disp('SUCCESS');
if (my_rank ~= MatMPI_Host_rank(comm))
  exit;
end
An interesting new article
URL: http://www.computer.org/portal/c/document_library/get_file?uuid=2790298b-dbe4-4dc7-b550-b030ae2ac7e1&groupId=808735
>> GPUstart
Copyright gp-you.org. GPUmat is distributed as Freeware.
By using GPUmat, you accept all the terms and conditions
specified in the license.txt file. Please send any suggestion
or bug report to [email protected].
Starting GPU
- GPUmat version: 0.270
- Required CUDA version: 3.2
There is 1 device supporting CUDA
CUDA Driver Version:  3.20
CUDA Runtime Version: 3.20

Device 0: "GeForce 310M"
  CUDA Capability Major revision number: 1
  CUDA Capability Minor revision number: 2
  Total amount of global memory: 455475200 bytes
- CUDA compute capability 1.2
...done
- Loading module EXAMPLES_CODEOPT
- Loading module EXAMPLES_NUMERICS -> numerics12.cubin
- Loading module NUMERICS -> numerics12.cubin
- Loading module RAND
A = rand(100, GPUsingle); % A is on GPU memory
B = rand(100, GPUsingle); % B is on GPU memory
C = A+B;                  % executed on GPU
D = fft(C);               % executed on GPU
Executed on GPU
A = single(rand(100)); % A is on CPU memory
B = double(rand(100)); % B is on CPU memory
C = A+B;               % executed on CPU
D = fft(C);            % executed on CPU
Executed on CPU
Let’s try this
GPGPU Demos
OpenCL demos are here:
C:\Users\telzur\AppData\Local\NVIDIA Corporation\NVIDIA GPU Computing SDK\OpenCL\bin\Win64\Release
And
C:\Users\telzur\AppData\Local\NVIDIA Corporation\NVIDIA GPU Computing SDK\SDK Browser
My laptop has an Nvidia GeForce 310M with 16 CUDA cores
Hybrid MPI + OpenMP Demo
Machine file:
hobbit1
hobbit2
hobbit3
hobbit4
Each hobbit has 8 cores
mpicc -o mpi_out mpi_test.c -fopenmp
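The contents of mpi_test.c are not shown on the slide; the following is a minimal sketch of what such a hybrid test program might look like (the structure is an assumption, not the course's actual file). Each MPI process reports its rank, and the OpenMP threads inside it report their thread ids:

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[]) {
    int rank, size;

    /* One MPI process per machine-file entry. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Inside each process, OpenMP spawns a team of threads
       (up to 8 on each hobbit node). */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d on MPI rank %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads(), rank, size);
    }

    MPI_Finalize();
    return 0;
}

With an MPICH-style launcher this would typically be started with something like mpirun -np 4 -machinefile <machine file> ./mpi_out, giving 4 MPI processes (one per hobbit) times up to 8 OpenMP threads each.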
[Diagram: MPI distributes processes across the hobbit nodes; OpenMP spawns threads within each node's 8 cores]
An idea for a final project!
cd ~/mpi
Program name: hybridpi.c
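hybridpi.c itself is not reproduced in these slides; as a sketch of how such a hybrid pi program is commonly structured (names and constants here are illustrative, not the actual course file), each MPI rank integrates a strided share of 4/(1+x^2) over [0,1] with an OpenMP sum reduction, and MPI_Reduce combines the partial results:

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[]) {
    const long n = 100000000;   /* total number of integration steps */
    int rank, size;
    long i;
    double h, local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    h = 1.0 / (double)n;

    /* Each rank takes every size-th interval; within the rank,
       OpenMP threads share the loop via a sum reduction. */
    #pragma omp parallel for reduction(+:local)
    for (i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* Combine the partial sums from all ranks on rank 0. */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.12f\n", pi);

    MPI_Finalize();
    return 0;
}

It compiles the same way as the demo above: mpicc -o hybridpi hybridpi.c -fopenmp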