This blog forms part of the http://performanceGuru.com website, which focuses on performance tuning for UNIX-like operating systems (including Linux).
Tuesday, March 28, 2006
links for 2006-03-27
Just as BPM (business process management) technology is markedly different from conventional approaches to application support, the methodology of BPM development is markedly different from traditional software implementation techniques. With CPI (continuous process improvement) …
There has never been a better time to increase memory capacity. Consider that the increased use of memory-intensive applications such as video encoders has already caused 1 GB configurations to go mainstream.
In part 3 of this series, we take a closer look at a cluster-based text-file analysis utility that has been implemented using both the Message Passing Interface (MPI) system and the Parallel Virtual Machine (PVM) library. The utility is a simple cluster application …
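The article's utility isn't reproduced here, but the scatter/count/reduce pattern such an MPI or PVM text-analysis tool typically follows can be sketched without a cluster at all. In this sketch (the function names are illustrative, not taken from the article) each "rank" is simulated sequentially; under real MPI, each chunk would go to a separate process via MPI_Scatter and the counters would be merged with MPI_Reduce:

```python
# Sketch of the scatter / per-rank count / reduce pattern behind a
# cluster text-analysis utility. Ranks are simulated sequentially here;
# MPI or PVM would run each chunk in its own process on the cluster.
from collections import Counter

def scatter(lines, n_ranks):
    """Deal the input file's lines out into n_ranks roughly equal chunks."""
    return [lines[r::n_ranks] for r in range(n_ranks)]

def count_words(chunk):
    """Per-rank work: count word frequencies in this rank's lines."""
    counts = Counter()
    for line in chunk:
        counts.update(line.split())
    return counts

def reduce_counts(partials):
    """Root-rank reduction: merge the per-rank counters into one total."""
    total = Counter()
    for p in partials:
        total.update(p)
    return total

lines = ["to be or not to be", "that is the question"]
chunks = scatter(lines, n_ranks=2)
totals = reduce_counts(count_words(c) for c in chunks)
print(totals["to"])  # -> 2: each word is tallied across all ranks
```

Because the per-rank work is independent, the result is identical however the lines are dealt out, which is exactly what makes the problem cluster-friendly.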
Software engineering issues notwithstanding, the migration to 64-bit architectures poses new challenges in the linguistic domain as well. Join me in a brief lesson in memory storage jargon, the interesting etymology behind it and -- a pinch of Greek.
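The linguistic trouble is concrete: "word", "long" and "pointer" name different widths under the ILP32 and LP64 data models, and those widths are exactly what changes in a 64-bit migration. A quick way to inspect the host's data model from Python (this check is mine, not from the article):

```python
# Inspect the host's C data model: these are the type widths whose
# names ("long", "pointer", "word") the 64-bit jargon fights over.
# ILP32: int, long and pointers are all 4 bytes.
# LP64 (most 64-bit UNIX systems): long and pointers grow to 8.
import ctypes
import struct

sizes = {
    "int": ctypes.sizeof(ctypes.c_int),
    "long": ctypes.sizeof(ctypes.c_long),          # 4 on ILP32/LLP64, 8 on LP64
    "long long": ctypes.sizeof(ctypes.c_longlong), # 8 everywhere
    "pointer": struct.calcsize("P"),               # 8 on any 64-bit platform
}
for name, n in sizes.items():
    print(f"{name:9s} {n} bytes")
```

On a 64-bit Linux box this prints 4/8/8/8; on 64-bit Windows (LLP64) `long` stays at 4, which is why the bare word "long" is such unreliable jargon.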
The first phase of designing a parallel algorithm consists of analyzing the problem to identify exploitable concurrency, usually by using the patterns of the Finding Concurrency design space.
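A toy illustration of that analysis step (the example is mine, not from the article): an element-wise computation has fully independent iterations and decomposes trivially, while a prefix sum as naively written chains each iteration on the previous one and so exposes no concurrency in that formulation.

```python
# Hunting for exploitable concurrency in two small loops.

# Element-wise work: no iteration reads another's result, so any
# iteration order (or full parallel execution) gives the same answer.
def squares(xs):
    return [x * x for x in xs]

# Prefix sum: iteration i reads the running total produced by
# iteration i-1, so this formulation is a sequential dependency chain.
def prefix_sums(xs):
    out, running = [], 0
    for x in xs:
        running += x
        out.append(running)
    return out

print(squares([1, 2, 3]))      # [1, 4, 9]  -- order-independent
print(prefix_sums([1, 2, 3]))  # [1, 3, 6]  -- order-dependent chain
```

Spotting the dependency chain is the point of the analysis: the first loop maps straight onto a parallel decomposition, while the second needs to be reformulated (e.g. as a parallel scan) before any concurrency can be exploited.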
The software development process now requires a working knowledge of parallel and distributed programming. The requirement for a piece of software to work properly over the Internet, on an intranet, or over some network is almost universal.
THE SON of the Alpha EV7 bus, known these days as HyperTransport, aimed to provide versatile functionality from day one -- whether we talk about cache-coherent NUMA-like SMP (but without most of the NUMA latency penalties) or high-performance …
"Adaptive supercomputing will cause a paradigm shift in the way users select and use HPC systems. Adaptive supercomputing is necessary to support the future needs of HPC users as their need for higher performance on more complex applications outpaces Moore's Law."
As a number of large-scale, multinational experiments prepare to go online in the next 2-3 years, a new generation of data retrieval and transmission techniques and tools will be required. The data yielded by these experiments will be prolific, and a diverse …