In this research, I used the double-double (DD) precision data type for calculations.
DD precision here is a pseudo extended precision, implemented not in hardware but
in software.
A DD precision value represents one number with a pair of double precision values.
A single operation in DD precision requires about a dozen double precision operations,
so DD calculations are slow and need to be accelerated.
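The abstract does not show how DD arithmetic is built; as a minimal sketch, software DD addition is commonly constructed from error-free transformations such as Knuth's two-sum. The function names below are illustrative, not taken from the thesis.

```python
def two_sum(a, b):
    """Knuth's error-free addition: returns (s, e) with s + e == a + b exactly."""
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def dd_add(a_hi, a_lo, b_hi, b_lo):
    """Add two double-double numbers (simplified variant).

    Each DD number is a pair (hi, lo) of doubles with |lo| much smaller
    than |hi|, so the pair carries roughly twice the precision of one double.
    """
    s, e = two_sum(a_hi, b_hi)   # exact sum of the high parts
    e += a_lo + b_lo             # fold in the low parts
    return two_sum(s, e)         # renormalize into a (hi, lo) pair
```

Counting the floating-point operations above shows why DD arithmetic costs about a dozen double operations per addition, which is the overhead the abstract refers to.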
The purpose of my research is to find out how much matrix multiplication in DD precision can be accelerated.
For the acceleration, I used General-Purpose computing on GPUs (GPGPU),
a method of parallel processing that uses GPUs.
With OpenCL, I can also use multi-core CPUs for parallel acceleration.
Using OpenCL, matrix multiplication was about 480 times faster than non-parallel processing on the GPU,
and about 28 times faster on the multi-core CPU.
I also tested LU factorization as an application of the accelerated matrix multiplication.
As a result, it was about 12 times faster on the GPU and about 9 times faster on the multi-core CPU.
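The abstract does not detail why LU factorization benefits from fast matrix multiplication; the usual reason is that a right-looking LU spends most of its time in a trailing-submatrix update, which in blocked form becomes a matrix-matrix multiplication. A minimal unblocked sketch in plain Python (not the thesis code, and without pivoting for brevity):

```python
def lu_inplace(A):
    """Right-looking LU factorization without pivoting, in place.

    After the call, the strict lower triangle of A holds L (with an implied
    unit diagonal) and the upper triangle holds U.  The inner double loop is
    the trailing-submatrix update; in a blocked implementation it becomes a
    matrix-matrix multiplication, which is the part a GPU can accelerate.
    """
    n = len(A)
    for k in range(n):
        for i in range(k + 1, n):
            A[i][k] /= A[k][k]                # column of L
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]  # GEMM-like rank-1 update
    return A
```

Because the update loop dominates the work, speeding up the multiplication speeds up the whole factorization, though by a smaller factor than for matrix multiplication alone, consistent with the numbers reported above.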
| 76 | |
| 77 | |
| 79 | |
| 80 | |