- Visual Studio 2008 (any SKU would do, including Visual C# Express Edition)
- Parallel Extensions to .NET Framework 3.5 June 2008 CTP
Archive for the ‘Parallel Programming’ Category
Yesterday I co-presented at the Washington, D.C. area HPCPAL Jumpstart event. The purpose of the event was to provide IT professionals and developers with an overview of the upcoming release of Microsoft's high performance computing (HPC) platform, Windows HPC Server 2008. I presented the developer portion of the event, while Ed Leary, the other presenter, delivered two sessions aimed at IT professionals and focused on cluster deployment and management. My presentation highlighted the following three topics:
- Jobs, job templates, job scheduler, scheduling policies, manual job submission, and programmatic job submission and monitoring.
- Windows Communication Foundation (WCF) Broker – a completely new programming model and runtime system for building applications that leverage computational clusters interactively to offload embarrassingly parallel computations. As you might have already guessed, this feature uses WCF for application-to-cluster interactions.
- Microsoft’s implementation of the Message Passing Interface (MPI) standard, MPI tracing support, and MPI.NET. Microsoft’s MPI implementation, known as MS-MPI, is based on the MPICH2 implementation from Argonne National Laboratory. MPI.NET is an extremely elegant and efficient wrapper over MS-MPI developed at Indiana University. MPI.NET puts the full power of MPI, until now accessible only to a relatively small group of Fortran/C/C++ developers, in the hands of .NET developers.
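To give a feel for the MPI.NET programming model mentioned above, here is a minimal sketch of a pi estimator. This is a hedged illustration, not one of my samples: the type and method names (`MPI.Environment`, `Communicator.world`, `Reduce`, `Operation<T>.Add`) come from the MPI.NET library, and the program assumes MS-MPI and MPI.NET are installed and that it is launched under mpiexec.

```csharp
using System;
using MPI;

class PiCalculator
{
    static void Main(string[] args)
    {
        // MPI.Environment initializes MS-MPI on construction
        // and finalizes it on Dispose.
        using (new MPI.Environment(ref args))
        {
            Intracommunicator world = Communicator.world;

            int terms = 1000000;
            double partial = 0.0;
            // Each rank sums a strided slice of the Leibniz series for pi/4;
            // the iterations are independent, so no communication is needed here.
            for (int i = world.Rank; i < terms; i += world.Size)
                partial += (i % 2 == 0 ? 1.0 : -1.0) / (2 * i + 1);

            // Combine the partial sums on rank 0.
            double quarterPi = world.Reduce(partial, Operation<double>.Add, 0);
            if (world.Rank == 0)
                Console.WriteLine("pi is approximately {0}", 4.0 * quarterPi);
        }
    }
}
```

Run with something like `mpiexec -n 8 PiCalculator.exe`; on a cluster, the same binary is typically submitted as an MPI task through the job scheduler.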
My slides and code samples, as well as Ed’s slides, can be downloaded from here. The following prerequisites must be installed in order to compile the code samples:
- .NET Framework 3.5
- Visual Studio 2008
- HPC 2008 SDK Beta 2 (Enroll in the Beta program in order to download; a Live ID is required for enrollment.)
You will need to build a Windows HPC Server 2008 Beta 2 cluster to be able to run the samples. You can find all the necessary software downloads and instructions here.
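The programmatic job submission covered in the first topic above can be sketched with the scheduler API from the HPC 2008 SDK. Treat this as a hedged outline rather than a copy of my samples: the types come from the `Microsoft.Hpc.Scheduler` namespace, and `"headnode"` and the command line are placeholders you would replace with your cluster's head node name and your own executable.

```csharp
using Microsoft.Hpc.Scheduler;

class SubmitJob
{
    static void Main()
    {
        // Connect to the cluster's head node ("headnode" is a placeholder).
        IScheduler scheduler = new Scheduler();
        scheduler.Connect("headnode");

        // A job is a container for one or more tasks.
        ISchedulerJob job = scheduler.CreateJob();
        ISchedulerTask task = job.CreateTask();
        task.CommandLine = "ping -n 10 localhost"; // placeholder command
        job.AddTask(task);

        // Passing null credentials submits under the caller's identity
        // (prompting for a password if one is not cached).
        scheduler.SubmitJob(job, null, null);
    }
}
```

From here, the same API surface can be used to monitor the job's state as it moves through the scheduling policies described in the session.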
The free performance lunch is over. During the past few decades we witnessed continuous growth in single-core processor performance, powered for the most part by increases in processor clock speed and improvements in instruction-level parallelism. All applications benefited from this trend and, even unchanged, ran faster with each passing year. Unfortunately, it appears that these two engines have run out of steam, at least for the time being. Moving forward, and for the foreseeable future, transistor budgets will increasingly be spent on increasing the number of processor cores per chip, leading to multi-core chips at first (4- to 8-core chips are a reality today) and many-core chips in the future (tens to potentially hundreds of cores per chip). To take advantage of this new hardware reality, the whole software stack – operating systems, programming frameworks, programming languages, tools, and applications – will have to adjust. From now on, to ride the performance wave again, developers will have to specifically tailor their applications to scale out on multi-/many-core processor architectures. That is a challenging task today, given the state of the tools and abstractions provided by platform and tool vendors.

Recognizing this shift and the existing technology gap, Microsoft has been conducting research and development in the area of parallel programming for a number of years now. Yesterday, Microsoft announced the June CTP release of the Parallel Extensions to .NET Framework 3.5. Parallel Extensions is a library aimed at significantly reducing the complexity of parallel application development using the .NET Framework. It is a compact and simple install – just a single DLL and a documentation file. It is purely additive to the .NET Framework 3.5 and doesn’t affect existing applications at all. So, please download it, try it out, and provide feedback. Since this is a CTP, there is still plenty of opportunity left to shape the final product.
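To give a feel for what the library offers, here is a minimal sketch of its parallel loop construct. This is a hedged example based on my understanding of the June 2008 CTP, where `Parallel.For` lives in the `System.Threading` namespace of the CTP's `System.Threading.dll`; the array size and the work done per iteration are arbitrary.

```csharp
using System;
using System.Threading; // Parallel.For ships in the CTP's System.Threading.dll

class ParallelLoopSample
{
    static void Main()
    {
        int n = 2000;
        double[] input = new double[n];
        double[] output = new double[n];
        for (int i = 0; i < n; i++) input[i] = i;

        // Sequential version, for comparison:
        // for (int i = 0; i < n; i++) output[i] = Math.Sqrt(input[i]);

        // Parallel version: the library partitions the iteration range
        // across the available cores. The iterations must be independent,
        // which is the case here since each writes a distinct element.
        Parallel.For(0, n, delegate(int i)
        {
            output[i] = Math.Sqrt(input[i]);
        });

        Console.WriteLine(output[n - 1]);
    }
}
```

The appeal is that the loop body stays almost unchanged while the library takes care of partitioning and thread management, which is exactly the kind of complexity reduction the CTP is after.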