MultiCore and (Multi-)GPU Computing
Progress in modern desktop computer architectures has driven numerical analysts and scientific computing researchers to investigate shared-memory parallelization techniques across a wide range of numerical algorithms. Multicore CPUs have been widely addressed in basic linear algebra libraries such as BLAS and LAPACK. Threaded variants of these libraries have enabled many algorithms to exploit the multiple cores available in modern CPUs. Other algorithms, especially those addressing large and sparse problems, still require further research.
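As a minimal sketch of the point about threaded BLAS variants: in the example below, a single high-level matrix product call is serviced by whatever BLAS backend NumPy is linked against (e.g., OpenBLAS or MKL), which distributes the work across cores without any change to the calling code. The environment variable name and thread count are backend-dependent assumptions, not part of the original text.

```python
import os

# Cap the BLAS thread pool before the BLAS backend is loaded.
# (OMP_NUM_THREADS is honored by common backends such as OpenBLAS and MKL;
# the exact variable depends on the build -- an assumption here.)
os.environ.setdefault("OMP_NUM_THREADS", "4")

import numpy as np

n = 512
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n))
b = rng.standard_normal((n, n))

# One call into dgemm; the threaded BLAS splits the multiply across
# the available cores transparently to the caller.
c = a @ b

# Sanity check against the direct definition on a small slice.
assert np.allclose(c[0, :3], a[0] @ b[:, :3])
```

This transparency is exactly why threaded BLAS/LAPACK reached multicore hardware first: dense kernels hide all parallelism behind a fixed interface, whereas large sparse algorithms expose irregular memory access that the paragraph notes still needs dedicated research.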