VizStack - A framework to help you use your GPUs

19 May 2010

VizStack 1.1-2 Released on SourceForge!

Download it NOW from SourceForge.

Support for multiple remote desktop sessions per GPU is surely the most awaited feature of this release. It is possible to control the number of users that share a GPU on a per-GPU basis.
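
To make the per-GPU sharing idea concrete, here is a minimal sketch of the allocation logic. This is not VizStack's actual API or configuration syntax; the class and function names below are made up purely for illustration, but they capture the concept of a per-GPU share limit.

    # Illustrative only: hypothetical names, NOT VizStack's real API.
    # Each GPU carries its own share limit; a new remote desktop session is
    # only admitted onto a GPU that still has a free slot.

    class GPU:
        def __init__(self, index, share_limit):
            self.index = index              # GPU index on the node
            self.share_limit = share_limit  # max concurrent sessions on this GPU
            self.sessions = []              # users currently on this GPU

    def allocate_session(gpus, user):
        """Place 'user' on the first GPU that still has a free session slot."""
        for gpu in gpus:
            if len(gpu.sessions) < gpu.share_limit:
                gpu.sessions.append(user)
                return gpu.index
        raise RuntimeError("No GPU has a free session slot")

    # Per-GPU limits: GPU 0 allows two shared sessions, GPU 1 is exclusive.
    gpus = [GPU(0, share_limit=2), GPU(1, share_limit=1)]
    print(allocate_session(gpus, "alice"))   # -> 0
    print(allocate_session(gpus, "bob"))     # -> 0
    print(allocate_session(gpus, "carol"))   # -> 1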

You will also find several enhancements and fixes that make life better (I'll mention only the most important ones here):

  • Binary packages for RHEL5, SLES11, Ubuntu 9.10 and SLES10.
  • VizStack now uses libxml2 for parsing - so just download and install on any Linux distro!
  • VizStack can compensate for bezels in Tiled Displays using "invisible pixels". Note that current NVIDIA drivers have issues w.r.t. handling these, so this may or may not work for you.
  • The Remote Access Tools can allocate whole nodes for users. Since GPUs can now be shared, you can allocate a complete GPU all for yourself too.
  • Support for a "fast" network; this can be used by parallel applications.
  • The configure script can now generate templates for GPUs and display devices not known to it. This should make it easy to get things up and running for the first time. It also means that GPUs meant for compute purposes, e.g. Tesla series cards, should work with VizStack (though this hasn't been tested). GeForce cards should work too.
  • Templates for Displays (including EDID files), GPUs, Keyboards and Mice are loaded from the master node. There is no need to propagate these files to the slave nodes in a cluster. Also, the node configuration file is picked up only from the master node. This minimizes the impact of cluster management techniques like Golden Imaging.
  • The documentation just got better, and is now split into a User Guide and an Admin Guide; a Developer Guide also makes an appearance, though admittedly it is still basic!
  • Small fixes and face-lifts have been given to most user scripts (viz-*)
  • Some more sample scripts show usage of VizStack's Python API:
    • Script that can run applications written using the Equalizer framework
    • Script that shows how to run benchmarks in parallel. Run SPECViewPerf 9 in parallel on all GPUs of a cluster and benchmark a whole cluster in 30 minutes - sweet! Another example shows how to run the CUDA bandwidth test on all GPUs (a generic sketch of the idea appears right after this list).
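
For a feel of what the parallel benchmark samples do, here is a generic sketch that launches one copy of a benchmark per GPU on a single node, pinning each copy to a GPU via CUDA_VISIBLE_DEVICES. It does not use VizStack's Python API (which handles allocation across a whole cluster for you); the GPU count and the benchmark command are placeholders.

    import os
    import subprocess

    NUM_GPUS = 4                   # assumption: four GPUs on this node
    BENCHMARK = ["bandwidthTest"]  # placeholder command; substitute your own benchmark

    # Start one benchmark process per GPU, each writing to its own log file.
    procs = []
    for gpu in range(NUM_GPUS):
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        log = open("benchmark_gpu%d.log" % gpu, "w")
        procs.append((gpu, subprocess.Popen(BENCHMARK, env=env,
                                            stdout=log, stderr=subprocess.STDOUT)))

    # Wait for all of them to finish and report their exit codes.
    for gpu, proc in procs:
        proc.wait()
        print("GPU %d: benchmark exited with code %d" % (gpu, proc.returncode))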

If you are upgrading from an earlier release (e.g., 1.0-2), note that any XML template files you may have created are no longer valid. Please keep backups of these files. Sorry for this break in backward compatibility, but it was necessary!

I need to thank the following individuals for their contributions:

  • Simon Fowler: found a few issues in VizStack on Ubuntu, requested support for bezels, and was generally the first (and a very active) subscriber on the mailing list :-) Simon also contributed templates for the Dell 3008WFP monitor and the Quadro NVS 420 card.
  • Paul Melis: suggested source documentation changes.

Before I finish, many thanks to the following software packages:

  • The VirtualGL project for VirtualGL and TurboVNC.
  • AsciiDoc, used for VizStack's documentation. Inkscape was used to draw the images.
  • SCons, used by our build system
  • The usual suspects: libxml2, Python, Subversion and Ubuntu Linux (Linux for human beings, indeed!)
  • wxPython and paramiko, used for the Remote Access Tools
  • InnoSetup and ISTool, used to create the Windows installer
  • The ParaView project; we expect a number of VizStack users will be ParaView users as well
  • The Equalizer project, for providing such a flexible framework. Writing a VizStack script that can support all Equalizer capabilities would be a task in itself.
  • OpenSG scenegraph library, used for test programs
  • ...and all the software packages that these packages themselves depend on.