A Concrete Example of Step-by-Step Optimization on a GPU

10 June 2016

In this post I will look at how to port the thermal diffusion code from the CPU Optimization Example to the GPU. In this precise case I will port it to CUDA, but we can imagine that comparable performance can be achieved with other GPGPU languages. Before starting this port, it can be interesting to have a quick look at the CPU version.

The example code

I chose this specific code, which I used recently to demonstrate some performance points, because it is a very simple code that we can parallelize very easily. The code is a simulation of 3D temperature diffusion using a finite-difference approach. The discussion here is not about the algorithm's efficiency, but about the optimization process.

It's the same code as in the CPU version, but without the reduction of the variation:

#include <stdlib.h>
#include <stdio.h>
#include <ctime>
#include <chrono>
#include <cstring>
#ifdef __linux__
	#include <unistd.h>
	#include <sys/resource.h>
#endif


#define valueLevel0(x,y,z) (Te_in[sizeX*sizeY*(z)+sizeX*(y)+(x)])


#define localVariation(x,y,z) (1.f*(-lam)*( \
					-(valueLevel0(x+1,y,z)+valueLevel0(x-1,y,z))	 \
					-(valueLevel0(x,y+1,z)+valueLevel0(x,y-1,z))	 \
					-(valueLevel0(x,y,z+1)+valueLevel0(x,y,z-1))	 \
					+6*valueLevel0(x,y,z)	 \
					)*dt)
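
For reference, this macro is just the explicit (FTCS) 7-point finite-difference scheme for the heat equation dT/dt = lam * laplacian(T), with dx = dy = dz = 1 absorbed into the constants:

T_new(i,j,k) = T(i,j,k) + lam * dt * ( T(i+1,j,k) + T(i-1,j,k) + T(i,j+1,k) + T(i,j-1,k) + T(i,j,k+1) + T(i,j,k-1) - 6*T(i,j,k) )

Such an explicit scheme is only stable for dt <= dx*dx/(6*lam), which is why max_dt is set to dx*dx/lam/6.1 in main, just under the limit.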


float compute(float * Te_out, float * Te_in, float * Cp, int sizeX, int sizeY, int sizeZ, float dx, float dy, float dz, float dt, float lam){

	// 'variation' is kept from the CPU version; without the reduction it always stays 0
	float variation=0.f;
	for (int k = 1; k < sizeZ-1; ++k)
	{
		for (int j = 1; j < sizeY-1; ++j)
		{
			for (int i = 1; i < sizeX-1; ++i)
			{
				Te_out[sizeX*sizeY*(k)+sizeX*(j)+(i)]=valueLevel0(i,j,k)+localVariation(i,j,k);
			}
		}
	}
	return variation;
}

int main(int argc, char const *argv[])
{

	int nx=128;
	int ny=128;
	int nz=128;
	float dx=1;
	float dy=1;
	float dz=1;
	float lam=1.0f;
	float max_dt=dx*dx/lam/6.1;
	int rad=10;


	float *Te=(float*)malloc(nx * ny * nz * sizeof(float));
	float *Cp=(float*)malloc(nx * ny * nz * sizeof(float));
	float *Te_bis=(float*)malloc(nx * ny * nz * sizeof(float));

	memset(Te,0,nx * ny * nz * sizeof(float));
	memset(Cp,0,nx * ny * nz * sizeof(float));
	memset(Te_bis,0,nx * ny * nz * sizeof(float));

//init

	int centerX=0.5*(nx)*dx;
	int centerY=0.5*(ny)*dy;
	int centerZ=0.5*(nz)*dz;
	for (int k = 0; k < nz; ++k)
	{
		for (int j = 0; j < ny; ++j)
		{
			for (int i = 0; i < nx; ++i)
			{
				int positionX=i*dx-centerX;
				int positionY=j*dy-centerY;
				int positionZ=k*dz-centerZ;
				if(positionX*positionX + positionY*positionY + positionZ*positionZ < rad*rad) {
					Te[i+nx*j+k*nx*ny]=10;
				}
			}
		}
	}
	// end init

	//computation
	float totaltime=0.f;
	int facteur=1;  // after optimization the time may become too small, so we average over a bigger run
	auto t_start = std::chrono::high_resolution_clock::now();
	{

		float dt = max_dt;
		float variation=0.f;
		for (int i = 0; i < facteur*5000/2; ++i)
		{
			compute(Te_bis,Te,Cp,nx,ny,nz,dx,dy,dz,dt,lam);
			variation=compute(Te,Te_bis,Cp,nx,ny,nz,dx,dy,dz,dt,lam);
		}
	}
	auto t_end = std::chrono::high_resolution_clock::now();
	totaltime = std::chrono::duration<double, std::milli>(t_end-t_start).count();
	printf("time = %.3f s \n",totaltime/facteur/1000. );

	return 0;
}

First analysis

Memory

As in the CPU analysis, the memory is managed in two big arrays, and each memory position is computed at each memory access. On the GPU, a contiguous-memory approach gives good performance too; more generally, a memory access pattern that is efficient on the GPU is also efficient on a CPU.
The size of the problem stays at 8 MB for each array (128 × 128 × 128 cells × 4 bytes per float). That is definitely a very small problem for a GPU, and we would probably see a better speedup with bigger arrays.

Possible points of speedup

With a quick overview of the whole code, it looks possible to "kernelize" (I don't know if this word exists, but I'll use it anyway!) the for-loops.

Speed test

To check the speed-up of each improvement, I will use several different GPUs.

0. GeForce 750M
Kepler, 2 GB
1. Tesla K20c
Kepler, 5 GB
1.1 Tesla K20c without ECC*
Kepler, 5 GB
2. Titan Black
Kepler, 6 GB
3. Titan X
Maxwell, 12 GB

And maybe soon on a Pascal.

* To disable ECC on a Tesla, just do:

sudo nvidia-smi -g 0 --ecc-config=0
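
You can check the current state with:

nvidia-smi -q -d ECC

Note that the new ECC mode only takes effect after the next reboot of the machine.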

Porting

This part deals with the port from the CPU to the GPU, without any big specific optimization. The first step is to convert the compute function into a CUDA kernel.

To achieve this conversion, we remove the loops and turn the function into a kernel with 3D indexes. The guard condition below plays the role of the 1..size-2 loop bounds: it discards the out-of-range threads created by the grid rounding and leaves the boundary cells untouched.
(We will see later that adding "const" and "__restrict__" brings some extra improvement; I checked, and in this specific case it is around 10%.)

__global__ void compute(float * Te_out, float * Te_in, float * Cp, int sizeX, int sizeY, int sizeZ, float dx, float dy, float dz, float dt, float lam){

	const int i = threadIdx.x + blockIdx.x * blockDim.x;
	const int j = threadIdx.y + blockIdx.y * blockDim.y;
	const int k = threadIdx.z + blockIdx.z * blockDim.z;

	if((i>0) && (i< (sizeX-1)) && (j>0) && (j< (sizeY-1)) && (k>0) && (k< (sizeZ-1)))
	{
		Te_out[sizeX*sizeY*(k)+sizeX*(j)+(i)]=valueLevel0(i,j,k)+localVariation(i,j,k);
	}
}

After this modification we still need a few extra changes to use the kernel (memory allocation, transfers, kernel calls, ...):

#include <stdlib.h>
#include <stdio.h>
#include <ctime>
#include <chrono>
#include <cstring>
#ifdef __linux__ 
	#include <unistd.h>
	#include <sys/resource.h>
#endif


#define valueLevel0(x,y,z) (Te_in[sizeX*sizeY*(z)+sizeX*(y)+(x)])


#define localVariation(x,y,z) (1.f*(-lam)*( \
					-(valueLevel0(x+1,y,z)+valueLevel0(x-1,y,z))	 \
					-(valueLevel0(x,y+1,z)+valueLevel0(x,y-1,z))	 \
					-(valueLevel0(x,y,z+1)+valueLevel0(x,y,z-1))	 \
					+6*valueLevel0(x,y,z)	 \
					)*dt)


__global__ void compute(float * Te_out, float * Te_in, float * Cp, int sizeX, int sizeY, int sizeZ, float dx, float dy, float dz, float dt, float lam){

	const int i = threadIdx.x + blockIdx.x * blockDim.x;
	const int j = threadIdx.y + blockIdx.y * blockDim.y;
	const int k = threadIdx.z + blockIdx.z * blockDim.z;

	if((i>0) && (i< (sizeX-1)) && (j>0) && (j< (sizeY-1)) && (k>0) && (k< (sizeZ-1)))
	{
		Te_out[sizeX*sizeY*(k)+sizeX*(j)+(i)]=valueLevel0(i,j,k)+localVariation(i,j,k);
	}
}

int main(int argc, char const *argv[])
{

	if (argc>1)
	{
		cudaSetDevice(atoi(argv[1]));
	}
	#ifdef __linux__
		// raise the process priority to reduce timing noise (Linux only)
		int which = PRIO_PROCESS;
		id_t pid;
		int priority = -20;
		int ret;

		pid = getpid();
		ret = setpriority(which, pid, priority);
	#endif

	int nx=128;
	int ny=128;
	int nz=128;
	float dx=1;
	float dy=1;
	float dz=1;
	float lam=1.0f;
	float max_dt=dx*dx/lam/6.1;
	int rad=10;


	float *Te=(float*)malloc(nx * ny * nz * sizeof(float));
	float *Cp=(float*)malloc(nx * ny * nz * sizeof(float));
	float *Te_bis=(float*)malloc(nx * ny * nz * sizeof(float));

	// zero everything so the boundary cells (never touched below) are defined
	memset(Te,0,nx * ny * nz * sizeof(float));
	memset(Cp,0,nx * ny * nz * sizeof(float));
	memset(Te_bis,0,nx * ny * nz * sizeof(float));

	float *Te_d=NULL;
	cudaMalloc((void**)&Te_d,nx * ny * nz * sizeof(float));
	float *Cp_d=NULL;
	cudaMalloc((void**)&Cp_d,nx * ny * nz * sizeof(float));
	float *Te_bis_d=NULL;
	cudaMalloc((void**)&Te_bis_d,nx * ny * nz * sizeof(float));

	// init: interior cells set to 1 (boundaries stay at 0)
	#pragma omp parallel default(none) firstprivate(Te,Te_bis,Cp,nx,ny,nz)
	#pragma omp for collapse(2) schedule(static) nowait
	for (int k = 1; k < nz-1; ++k)
	{
		for (int j = 1; j < ny-1; ++j)
		{
			#pragma omp simd
				for (int i = 1; i < nx-1; ++i)
				{
					Te[i+nx*j+k*nx*ny]=1.f;
					Cp[i+nx*j+k*nx*ny]=1.f;
					Te_bis[i+nx*j+k*nx*ny]=1.f;
				}		
		}
	}

	int centerX=0.5*(nx)*dx;
	int centerY=0.5*(ny)*dy;
	int centerZ=0.5*(nz)*dz;
	#pragma omp parallel default(none) firstprivate(Te,nx,ny,nz,centerY,centerX,centerZ,rad,dx,dy,dz)
	#pragma omp for collapse(2) schedule(static) nowait
	for (int k = 0; k < nz; ++k)
	{
		for (int j = 0; j < ny; ++j)
		{
			for (int i = 0; i < nx; ++i)
			{
				int positionX=i*dx-centerX;
				int positionY=j*dy-centerY;
				int positionZ=k*dz-centerZ;
				if(positionX*positionX + positionY*positionY + positionZ*positionZ <rad*rad) {
					Te[i+nx*j+k*nx*ny]=10;
				}
			}
		}
	}
	// end init

	//copy init memory
	
	cudaMemcpy(Te_d, Te, nx * ny * nz * sizeof(float), cudaMemcpyHostToDevice);
	cudaMemcpy(Cp_d, Cp, nx * ny * nz * sizeof(float), cudaMemcpyHostToDevice);
	cudaMemcpy(Te_bis_d, Te_bis, nx * ny * nz * sizeof(float), cudaMemcpyHostToDevice);


	//computation
	float totaltime=0.f;
	float totaltimeCuda=0.f;
	int facteur=20;  // after optimization the time may become too small, so we average over a bigger run

	dim3 block(32,1,1);
	dim3 grid(ceil(float(nx)/block.x),ceil(float(ny)/block.y),ceil(float(nz)/block.z));
	cudaStream_t stream1=NULL;

	printf("block = %d, %d, %d\n", block.x,block.y,block.z);
	printf("grid = %d, %d, %d\n", grid.x,grid.y,grid.z);

	cudaStreamCreate(&stream1);

	cudaEvent_t start, stop;

	cudaEventCreate(&start);
	cudaEventCreate(&stop);

	auto t_start = std::chrono::high_resolution_clock::now();
	cudaEventRecord(start, stream1);
	{

		float dt = max_dt;
		for (int i = 0; i < facteur*5000/2; ++i)
		{
			
			compute<<<grid, block, 0, stream1>>>(Te_bis_d,Te_d,Cp_d,nx,ny,nz,dx,dy,dz,dt,lam);
			compute<<<grid, block, 0, stream1>>>(Te_d,Te_bis_d,Cp_d,nx,ny,nz,dx,dy,dz,dt,lam);
		}
		cudaEventRecord(stop, stream1);
	}

	cudaEventSynchronize(stop);
	cudaEventElapsedTime(&totaltimeCuda, start, stop);
	cudaEventDestroy(start);
	cudaEventDestroy(stop);

	cudaStreamSynchronize(stream1);
	auto t_end = std::chrono::high_resolution_clock::now();
	cudaStreamDestroy(stream1);
	cudaMemcpy(Te,Te_d, nx * ny * nz * sizeof(float), cudaMemcpyDeviceToHost);
	stream1=NULL;
	totaltime = std::chrono::duration<double, std::milli>(t_end-t_start).count();
	printf("time = %.3f s \n",totaltime/facteur/1000. );
	printf("time cuda = %.3f s \n",totaltimeCuda/facteur/1000. );

	cudaError_t error = cudaGetLastError();
	if(error != cudaSuccess)
	{
		// print the CUDA error message and exit
		printf("CUDA error: %s\n", cudaGetErrorString(error));
	}

	return 0;
}

To compile the CUDA code, I use the NVIDIA compiler nvcc with the architecture flag matching the card: 35 for the Tesla and the Titan Black, 52 for the Titan X, and 30 for the 750M. I will give the examples with the 35 architecture.

nvcc -O3 -std=c++11 cuda_V1.cu -lcublas -arch compute_35 -o cuda_V1

We can explain the difference between the results of the Tesla and the Titan Black (both are from the same processor generation) in a very simple way, just by checking the specifications of the cards, and especially the memory bandwidth.

We can see that the ratio is around 0.6 for both the computation time and the memory bandwidth.
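
For reference (spec-sheet numbers, worth double-checking): the Tesla K20c peaks at about 208 GB/s and the Titan Black at about 336 GB/s, and 208 / 336 ≈ 0.62 — very close to the timing ratio, which is a strong hint that this kernel is memory-bound.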

Optimization

Now we seek to obtain the best performance from the GPUs.

Block size!

Finding the optimal block size is sometimes trivial, but most of the time it is a nightmare! I have read a lot about the subject, and yet I haven't found an easy and reliable way to determine it. Most of the time, when the product of the dimensions is 512 or 1024, the results are not too bad, especially with small kernels.
Besides, this magic value can change depending on the hardware. In this specific case I found a solution which doesn't look too bad: {64,8,1}. As you may have understood, I determined it by an empirical approach (see the sketch below).
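
To be concrete, here is a minimal sketch of what I mean by an empirical approach. It assumes the compute kernel and the device buffers (Te_d, Te_bis_d, Cp_d) plus nx, ny, nz, dx, dy, dz, dt and lam from the listing above are in scope; the candidate list is arbitrary.

	// empirical block-size sweep: time a fixed number of kernel launches
	// for each candidate shape and print the result
	dim3 candidates[] = { dim3(32,1,1), dim3(64,1,1), dim3(128,1,1),
	                      dim3(256,1,1), dim3(64,8,1), dim3(128,4,1) };
	for (const dim3 &block : candidates)
	{
		dim3 grid((nx + block.x - 1) / block.x,
		          (ny + block.y - 1) / block.y,
		          (nz + block.z - 1) / block.z);

		cudaEvent_t start, stop;
		cudaEventCreate(&start);
		cudaEventCreate(&stop);

		cudaEventRecord(start);
		for (int it = 0; it < 100; ++it)  // enough launches to smooth out noise
			compute<<<grid, block>>>(Te_bis_d,Te_d,Cp_d,nx,ny,nz,dx,dy,dz,dt,lam);
		cudaEventRecord(stop);
		cudaEventSynchronize(stop);

		float ms = 0.f;
		cudaEventElapsedTime(&ms, start, stop);
		printf("block (%u,%u,%u): %.3f ms\n", block.x, block.y, block.z, ms);

		cudaEventDestroy(start);
		cudaEventDestroy(stop);
	}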

Const and restrict

The compiler doesn't look very efficient at finding on its own which variables can be treated as const or restrict. The main reason for this, from my point of view, is simply that the kernel cannot be inlined, so the compiler has no extra information about the pointers. By adding const and __restrict__ ourselves, we tell it that the loads cannot alias the store; on Kepler (sm_35) this also lets it route the loads through the read-only data cache (the __ldg() path), which suits a stencil well.

So let's change the header of the kernel.

__global__ void compute(float * __restrict__ Te_out, const float * __restrict__ Te_in, const float * __restrict__ Cp, const int sizeX, const int sizeY, const int sizeZ, const float dx, const float dy, const float dz, const float dt, const float lam){

Reduce the number of threads

Currently we are creating one thread per cell of the array; it can be a good idea to give several cells to each thread. That reduces the thread-scheduling overhead, and the compiler can probably find some memory-load optimizations.

#include <stdlib.h>
#include <stdio.h>
#include <ctime>
#include <chrono>
#include <cstring>
#ifdef __linux__ 
	#include <unistd.h>
	#include <sys/resource.h>
#endif

#define X_FACTOR 1
#define Y_FACTOR 1
#define Z_FACTOR 4

#define valueLevel0(x,y,z) (Te_in[sizeX*sizeY*(z)+sizeX*(y)+(x)])


#define localVariation(x,y,z) (1.f*(-lam)*( \
					-(valueLevel0(x+1,y,z)+valueLevel0(x-1,y,z))	 \
					-(valueLevel0(x,y+1,z)+valueLevel0(x,y-1,z))	 \
					-(valueLevel0(x,y,z+1)+valueLevel0(x,y,z-1))	 \
					+6*valueLevel0(x,y,z)	 \
					)*dt)


__global__ void compute(float * __restrict__ Te_out, const float * __restrict__ Te_in, const float * __restrict__ Cp, const int sizeX, const int sizeY, const int sizeZ, const float dx, const float dy, const float dz, const float dt, const float lam){

	int i = (threadIdx.x + blockIdx.x * blockDim.x)*X_FACTOR;
	int j = (threadIdx.y + blockIdx.y * blockDim.y)*Y_FACTOR;
	int k = (threadIdx.z + blockIdx.z * blockDim.z)*Z_FACTOR;

	#pragma unroll
	for (int step = 0; step < Z_FACTOR; ++step)
	{
		if((i>0) && (i< (sizeX-1)) && (j>0) && (j< (sizeY-1)) && (k>0) && (k< (sizeZ-1)))
		{
			Te_out[sizeX*sizeY*(k)+sizeX*(j)+(i)]=valueLevel0(i,j,k)+localVariation(i,j,k);
		}
		k++;
	}

}

int main(int argc, char const *argv[])
{

	if (argc>1)
	{
		cudaSetDevice(atoi(argv[1]));
	}
	#ifdef __linux__
		// raise the process priority to reduce timing noise (Linux only)
		int which = PRIO_PROCESS;
		id_t pid;
		int priority = -20;
		int ret;

		pid = getpid();
		ret = setpriority(which, pid, priority);
	#endif

	int nx=128;
	int ny=128;
	int nz=128;
	float dx=1;
	float dy=1;
	float dz=1;
	float lam=1.0f;
	float max_dt=dx*dx/lam/6.1;
	int rad=10;


	float *Te=(float*)malloc(nx * ny * nz * sizeof(float));
	float *Cp=(float*)malloc(nx * ny * nz * sizeof(float));
	float *Te_bis=(float*)malloc(nx * ny * nz * sizeof(float));

	// zero everything so the boundary cells (never touched below) are defined
	memset(Te,0,nx * ny * nz * sizeof(float));
	memset(Cp,0,nx * ny * nz * sizeof(float));
	memset(Te_bis,0,nx * ny * nz * sizeof(float));

	float *Te_d=NULL;
	cudaMalloc((void**)&Te_d,nx * ny * nz * sizeof(float));
	float *Cp_d=NULL;
	cudaMalloc((void**)&Cp_d,nx * ny * nz * sizeof(float));
	float *Te_bis_d=NULL;
	cudaMalloc((void**)&Te_bis_d,nx * ny * nz * sizeof(float));

	// init: interior cells set to 1 (boundaries stay at 0)
	#pragma omp parallel default(none) firstprivate(Te,Te_bis,Cp,nx,ny,nz)
	#pragma omp for collapse(2) schedule(static) nowait
	for (int k = 1; k < nz-1; ++k)
	{
		for (int j = 1; j < ny-1; ++j)
		{
			#pragma omp simd
				for (int i = 1; i < nx-1; ++i)
				{
					Te[i+nx*j+k*nx*ny]=1.f;
					Cp[i+nx*j+k*nx*ny]=1.f;
					Te_bis[i+nx*j+k*nx*ny]=1.f;
				}		
		}
	}

	int centerX=0.5*(nx)*dx;
	int centerY=0.5*(ny)*dy;
	int centerZ=0.5*(nz)*dz;
	#pragma omp parallel default(none) firstprivate(Te,nx,ny,nz,centerY,centerX,centerZ,rad,dx,dy,dz)
	#pragma omp for collapse(2) schedule(static) nowait
	for (int k = 0; k < nz; ++k)
	{
		for (int j = 0; j < ny; ++j)
		{
			for (int i = 0; i < nx; ++i)
			{
				int positionX=i*dx-centerX;
				int positionY=j*dy-centerY;
				int positionZ=k*dz-centerZ;
				if(positionX*positionX + positionY*positionY + positionZ*positionZ <rad*rad) {
					Te[i+nx*j+k*nx*ny]=10;
				}
			}
		}
	}
	// end init

	//copy init memory
	
	cudaMemcpy(Te_d, Te, nx * ny * nz * sizeof(float), cudaMemcpyHostToDevice);
	cudaMemcpy(Cp_d, Cp, nx * ny * nz * sizeof(float), cudaMemcpyHostToDevice);
	cudaMemcpy(Te_bis_d, Te_bis, nx * ny * nz * sizeof(float), cudaMemcpyHostToDevice);


	//computation
	float totaltime=0.f;
	float totaltimeCuda=0.f;
	int facteur=20;  // after optimization the time may become too small, so we average over a bigger run

	dim3 block(128,4,1);
	dim3 grid(ceil(float(nx)/X_FACTOR/block.x),ceil(float(ny)/Y_FACTOR/block.y),ceil(float(nz)/Z_FACTOR/block.z));
	cudaStream_t stream1=NULL;

	printf("block = %d, %d, %d\n", block.x,block.y,block.z);
	printf("grid = %d, %d, %d\n", grid.x,grid.y,grid.z);

	cudaStreamCreate(&stream1);

	cudaEvent_t start, stop;

	cudaEventCreate(&start);
	cudaEventCreate(&stop);

	auto t_start = std::chrono::high_resolution_clock::now();
	cudaEventRecord(start, stream1);
	{

		float dt = max_dt;
		for (int i = 0; i < facteur*5000/2; ++i)
		{
			
			compute<<<grid, block, 0, stream1>>>(Te_bis_d,Te_d,Cp_d,nx,ny,nz,dx,dy,dz,dt,lam);
			compute<<<grid, block, 0, stream1>>>(Te_d,Te_bis_d,Cp_d,nx,ny,nz,dx,dy,dz,dt,lam);
		}
		cudaEventRecord(stop, stream1);
	}

	cudaEventSynchronize(stop);
	cudaEventElapsedTime(&totaltimeCuda, start, stop);
	cudaEventDestroy(start);
	cudaEventDestroy(stop);

	cudaStreamSynchronize(stream1);
	auto t_end = std::chrono::high_resolution_clock::now();
	cudaStreamDestroy(stream1);
	cudaMemcpy(Te,Te_d, nx * ny * nz * sizeof(float), cudaMemcpyDeviceToHost);
	stream1=NULL;
	totaltime = std::chrono::duration<double, std::milli>(t_end-t_start).count();
	printf("time = %.3f s \n",totaltime/facteur/1000. );
	printf("time cuda = %.3f s \n",totaltimeCuda/facteur/1000. );

	cudaError_t error = cudaGetLastError();
	if(error != cudaSuccess)
	{
		// print the CUDA error message and exit
		printf("CUDA error: %s\n", cudaGetErrorString(error));
	}

	return 0;
}

Here I propose a loop over Z, but I don't think it would change much if it were done over Y instead. Along X it's more complicated, because we would lose the memory contiguity inside warps, as the sketch below illustrates.
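
To make this concrete, here is what a hypothetical (and not recommended) X variant would look like, reusing the macros from above. Lane L of a warp now starts at index L*X_FACTOR, so consecutive lanes read addresses X_FACTOR floats apart and each warp load spans X_FACTOR times more cache lines:

// hypothetical X variant, shown only to illustrate the coalescing problem
// (assumes X_FACTOR > 1 and the valueLevel0/localVariation macros above)
__global__ void compute_x(float * __restrict__ Te_out, const float * __restrict__ Te_in, const int sizeX, const int sizeY, const int sizeZ, const float dt, const float lam){

	int i = (threadIdx.x + blockIdx.x * blockDim.x)*X_FACTOR;
	const int j = threadIdx.y + blockIdx.y * blockDim.y;
	const int k = threadIdx.z + blockIdx.z * blockDim.z;

	#pragma unroll
	for (int step = 0; step < X_FACTOR; ++step)
	{
		// the loads of a warp are now strided instead of contiguous
		if((i>0) && (i< (sizeX-1)) && (j>0) && (j< (sizeY-1)) && (k>0) && (k< (sizeZ-1)))
		{
			Te_out[sizeX*sizeY*(k)+sizeX*(j)+(i)]=valueLevel0(i,j,k)+localVariation(i,j,k);
		}
		i++;
	}
}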

Again with an empirical approach, I found that 4 is the best value. We can imagine that this value depends on the size of the problem (the number of cells) and on the number of cores in the GPU: we want to increase the value as much as possible, to have as few threads as possible, but in a way that keeps the GPU fully occupied.
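
If you want to reproduce this kind of sweep without editing the source each time, one small change (not in the original code) is to guard the define so it can be overridden from the nvcc command line:

#ifndef Z_FACTOR	// allows overriding: nvcc ... -DZ_FACTOR=8
#define Z_FACTOR 4
#endif

and then build each candidate, for example (cuda_V2.cu being whatever this version's file is named):

nvcc -O3 -std=c++11 -DZ_FACTOR=8 cuda_V2.cu -arch compute_35 -o cuda_V2_z8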

Conclusion

The final performance on the GPU is good, and it looks better than on the CPU, but we shouldn't forget that this problem was designed for GPU parallelization.

We can also see that there are a lot of magic numbers that we need to find to get good performance, and sometimes it can be complicated to determine these values!

GPU Times

(chart of the measured times per version on each GPU)

I will prepare another article about GPU performance, in a less ideal case.