VizStack: A framework to help you use your GPUs

14 Jul 2010

VizStack 1.1-3 Released

I'm pleased to announce VizStack release 1.1-3. Download it NOW from SourceForge.

This release is primarily a bug-fix release.

  • VizStack now works well with SLURM version 2.10 and later. With release 1.1-2, jobs were not cleaned up properly, and the SSM would crash after a while.
  • Fixed an issue in the resource allocator.
  • The configure script now works on single-GPU machines running SuSE Linux.
  • Included functionality that was missing in 1.1-2: clip_last_block for tiled displays.
  • Various other small changes to code and documentation.

This release also includes one enhancement: viz-paraview can now use shared GPUs for parallel rendering. This is useful in the common case where the GPU rendering load for a model is much smaller than the cost of generating the rendering commands. So you can now put your many multi-core processors to use: VizStack will share each GPU among multiple rendering processes and configure ParaView for offscreen rendering. This should maximize both your GPU and system utilization.
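
Curious how that works? Here's a minimal sketch of the idea in Python. The helper and the X screen names below are illustrative placeholders, not VizStack's actual API; the real allocation is done by VizStack's resource allocator and the viz-paraview script:

    # A minimal sketch, assuming a round-robin mapping of rendering
    # processes onto shared GPUs. The X screen names are placeholders.
    def assign_gpus(num_render_procs, gpu_screens):
        """Map each parallel rendering process onto a GPU, round-robin,
        so that every GPU is shared by several processes."""
        return [gpu_screens[i % len(gpu_screens)]
                for i in range(num_render_procs)]

    # 16 ParaView render-server ranks sharing 2 GPUs:
    print(assign_gpus(16, [":0.0", ":0.1"]))
    # Each rank then runs with DISPLAY set to its assigned screen, and
    # ParaView is configured for offscreen rendering.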

I highly recommend that users of VizStack 1.1-2 move to this release. All it takes is a package removal and a package installation on all the nodes, and you will be up and running!

Contributors for this release:

  • Peter Tyson (CSIRO): found the interoperability issue with SLURM 2.10, and suggested the fix.
  • Simon Fowler (CSIRO): pointed out the missing clip_last_block functionality and the bad merge.
  • Wolfgang Dehn (HP): found a SLURM nodelist expansion issue.
  • Paul Melis: helped correct the documentation.
19 May 2010

VizStack 1.1-2 Released on SourceForge!

Download it NOW from SourceForge.

Support for multiple remote desktop sessions per GPU is surely the most awaited feature of this release. The number of users that share a GPU can be controlled on a per-GPU basis.
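
Conceptually, the allocator just enforces a per-GPU limit. Here is a minimal sketch in Python; the dictionary below stands in for what is really expressed in VizStack's configuration files (see the Admin Guide for the actual syntax):

    # Hypothetical illustration only; the real limits live in VizStack's
    # configuration, not in a Python dict.
    share_limit = {0: 4, 1: 1}  # GPU 0: up to 4 sessions; GPU 1: exclusive

    def can_start_session(gpu_index, active_sessions):
        """Admit a new remote desktop session only while the GPU is
        below its configured share limit."""
        return active_sessions.get(gpu_index, 0) < share_limit[gpu_index]

    print(can_start_session(0, {0: 3}))  # True: a 4th user may join GPU 0
    print(can_start_session(1, {1: 1}))  # False: GPU 1 is already taken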

You will also find several enhancements and fixes that make life better (I'll mention only the most important ones here):

  • Binary packages for RHEL5, SLES11, Ubuntu 9.10 and SLES10.
  • VizStack now uses libxml2 for parsing - so just download and install on any Linux distro!
  • VizStack can compensate for bezels in tiled displays using "invisible pixels". Note that current nVidia drivers have issues with handling these, so this may or may not work for you.
  • The Remote Access Tools can allocate whole nodes for users. Since GPUs can now be shared, you can allocate a complete GPU all for yourself too.
  • Support for a "fast" network; this can be used by parallel applications.
  • The configure script can now generate templates for GPUs and display devices not known to it. This should make it easy to get things up and running for the first time. It also means that GPUs meant for compute purposes, e.g. Tesla series cards, should work with VizStack (though this hasn't been tested). GeForce cards should work too.
  • Templates for displays (including EDID files), GPUs, keyboards and mice are loaded from the master node. There is no need to propagate these files to the slave nodes in a cluster. Also, the node configuration file is picked up only from the master node. This minimizes the impact of cluster management techniques like golden imaging.
  • The documentation just got better, and is now split into a User Guide and an Admin Guide; a Developer Guide also makes an appearance, though admittedly it is still basic!
  • Small fixes and face-lifts have been given to most user scripts (viz-*)
  • Some more sample scripts show usage of VizStack's Python API:
    • A script that can run applications written using the Equalizer framework.
    • A script that shows how to run benchmarks in parallel: run SPECViewPerf 9 on all GPUs of a cluster at once, and benchmark a whole cluster in 30 minutes - sweet! Another example shows how to run the CUDA bandwidth test on all GPUs (a sketch of the idea follows this list).
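
To give a flavour of those benchmark scripts, here is a stripped-down sketch. The list of X screens is hard-coded here and "./benchmark" is a stand-in executable; the real scripts obtain one screen per allocated GPU through the VizStack Python API:

    # A minimal sketch: launch one benchmark process per GPU, in parallel.
    import os
    import subprocess

    gpu_screens = [":0.0", ":0.1"]  # placeholders for the allocated X screens

    procs = []
    for screen in gpu_screens:
        env = dict(os.environ, DISPLAY=screen)  # bind this run to one GPU's screen
        procs.append(subprocess.Popen(["./benchmark"], env=env))

    for p in procs:
        p.wait()  # all GPUs run their benchmark concurrently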

If you are upgrading from an earlier release (e.g., 1.0-2), note that any XML template files you may have created are now invalid. Please keep backups of these files. Sorry for this break in backward compatibility, but it was necessary!

I need to thank the following individuals for their contributions:

  • Simon Fowler: found a few issues in VizStack on Ubuntu, requested support for bezels, and was the first (and a very active) subscriber on the mailing list :-) Simon also contributed templates for the Dell 3008WFP monitor and the Quadro NVS 420 card.
  • Paul Melis: suggested source documentation changes.

Before I finish, many thanks to the following software packages:

  • The VirtualGL project for VirtualGL and TurboVNC.
  • AsciiDoc, used for VizStack's documentation. Inkscape was used to draw the images.
  • SCons, used by our build system
  • The usual suspects : libxml2, Python, Subversion and Ubuntu Linux (Linux for human beings, indeed!)
  • wxPython and paramiko, used for the Remote Access Tools
  • InnoSetup and ISTool, used to create the Windows installer
  • The ParaView project; we expect a number of VizStack users will be ParaView users as well
  • The Equalizer project, for providing such a flexible framework. Writing a VizStack script that can support all Equalizer capabilities would be a task in itself.
  • OpenSG scenegraph library, used for test programs
  • The software packages underlying all of these packages...

12 May 2010

GPU Sharing Support Coming Real Soon!

A quick update: I'm excited to tell you that VizStack now has GPU sharing support. Perhaps the most important application of this is multiple remote users per GPU using VirtualGL/TurboVNC.

I'll be cutting a new release, 1.1, on Monday. VizStack has seen many other changes since 1.0-2. The key ones are:

  1. Multiple remote users per GPU using VirtualGL/TurboVNC.
  2. Support for bezels on tiled displays (a back-of-the-envelope illustration follows this list).
  3. Much improved documentation. The older manual is now split into a user guide, an admin guide and a dev guide. The user guide documents how users would typically use the utilities provided by VizStack. The dev guide shows how to program using the VizStack API, and explains several details. Note that the documentation is still a work in progress.
  4. Some more examples of using the Python API to automate tasks (running SPECViewPerf and the CUDA bandwidth benchmark).
  5. Sample script that lets you run Equalizer applications
  6. Automatic detection of unknown GPUs and display devices in the configure script.
  7. The template files for GPUs, displays, etc. are needed only on the master node, so you don't need to copy them to the other nodes.
  8. Many fixes!
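
On the bezel support in item 2, here's a back-of-the-envelope illustration in Python (all numbers made up): if two 1920x1200 panels sit side by side with a bezel gap worth roughly 160 pixels, the virtual desktop is made wider than the visible pixels, so geometry crossing the gap stays correct:

    # Bezel compensation with "invisible pixels" (illustrative numbers only).
    panel_w, bezel_px, tiles = 1920, 160, 2
    virtual_w = tiles * panel_w + (tiles - 1) * bezel_px
    print(virtual_w)  # 4000: the X server renders 4000 pixels across, but
                      # the 160 pixels behind the bezel are never displayed.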

If you want to live life on the bleeding edge, then check out the "shree" branch from SVN. If you'd rather play it safe, Monday isn't that far away!

23 Feb 2010

VizStack Release 1.0-2 is out!

I'm pleased to announce that a new release of VizStack is available - version 1.0-2. This consists of bugfixes, small enhancements and documentation changes over 1.0-1.

The major changes are:

  1. The configuration commands now work with nVidia's release 190 drivers.
  2. The configuration commands accept a "remote network" instead of a "remote netmask" (a short illustration follows this list).
  3. Two bug fixes in VizStack's SLURM support.
  4. Improved documentation.
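
In case the distinction in item 2 isn't obvious: a "remote network" such as 192.168.1.0/24 carries both the network address and the netmask in a single value. A quick illustration using Python's ipaddress module (not part of VizStack):

    import ipaddress

    # One "network" value encodes what previously took an address + netmask:
    net = ipaddress.ip_network("192.168.1.0/24")
    print(net.network_address)  # 192.168.1.0
    print(net.netmask)          # 255.255.255.0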

Please download the files from this link and give the release a spin!

18 Jan 2010

Welcome to VizStack!

Hello, and welcome! VizStack is relatively new software. It's also fairly unique in terms of its capabilities, so be sure to read the introduction and features pages for more information. The links at the top of this page let you download the software, and tell you how to contact the developers for support.

7 Jan 2010

Page setup in progress…

This blog is being created as the home page of the VizStack project. After much thought, I've chosen to use WordPress as the content management system for hosting this.

We'll populate this blog with information related to the VizStack project: who it is for; how to install, configure and use it; example use cases; links to pages and documentation; and more.
