Download it NOW from SourceForge.
Support for multiple remote desktop sessions per GPU is surely the most awaited feature of this release. You can now control the number of users that share a GPU on a per-GPU basis.
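To give a feel for what per-GPU sharing limits mean, here is a minimal, purely illustrative Python sketch of the bookkeeping involved. This is not VizStack's actual API; the class and method names (GpuSharePolicy, allocate, release) are made up for the example.

```python
class GpuSharePolicy:
    """Tracks remote desktop sessions per GPU, enforcing a per-GPU share limit."""

    def __init__(self, max_users_per_gpu):
        # max_users_per_gpu: dict mapping a GPU id to how many
        # concurrent users may share that GPU.
        self.limits = dict(max_users_per_gpu)
        self.sessions = {gpu: [] for gpu in max_users_per_gpu}

    def allocate(self, user):
        # Pick the least-loaded GPU that still has a free slot.
        candidates = [g for g, s in self.sessions.items()
                      if len(s) < self.limits[g]]
        if not candidates:
            raise RuntimeError("all GPUs are at their sharing limit")
        gpu = min(candidates, key=lambda g: len(self.sessions[g]))
        self.sessions[gpu].append(user)
        return gpu

    def release(self, gpu, user):
        # Free the slot when the user's session ends.
        self.sessions[gpu].remove(user)


# Two GPUs: gpu0 may be shared by two users, gpu1 by only one.
policy = GpuSharePolicy({"gpu0": 2, "gpu1": 1})
print(policy.allocate("alice"))
```

The real system does this allocation server-side, but the idea is the same: each GPU carries its own sharing limit, and new sessions are placed only where a slot is free.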
You will also find several enhancements and fixes that make life better (I'll mention only the most important ones here):
- Binary packages for RHEL5, SLES11, Ubuntu 9.10 and SLES10.
- VizStack now uses libxml2 for parsing - so just download and install on any Linux distro!
- VizStack can compensate for bezels in Tiled Displays using "invisible pixels". Note that current NVIDIA drivers have issues handling these, so this may or may not work for you.
- The Remote Access Tools can allocate whole nodes for users. Since GPUs can now be shared, you can allocate a complete GPU all for yourself too.
- Support for a "fast" network; this can be used by parallel applications.
- The configure script can now generate templates for GPUs and display devices not known to it. This should make it easier to get things up and running for the first time. It also means that GPUs meant for compute purposes, e.g. the Tesla series cards, should work with VizStack (this hasn't been tested, though). GeForce cards should work too.
- Templates for displays (including EDID files), GPUs, keyboards and mice are loaded from the master node. There is no need to propagate these files to the slave nodes in a cluster. Also, the node configuration file is picked up only from the master node. This minimizes the impact of cluster management techniques like Golden Imaging.
- The documentation just got better, and is now split into a User Guide and an Admin Guide; a Developer Guide also makes an appearance, though admittedly it is still basic!
- Small fixes and face-lifts have been given to most user scripts (viz-*)
- Some more sample scripts show how to use VizStack's Python API
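As a flavour of what such automation scripts look like, here is a small, hypothetical sketch: it runs a command against an allocated X display, the way a benchmark-driving script might. The helper name and the display string are assumptions for illustration; a real VizStack script would obtain the display from the Python API after allocating resources.

```python
import subprocess


def run_on_display(display, command):
    """Run `command` with DISPLAY pointed at an allocated X server."""
    # Only DISPLAY is passed through; the command path is given absolutely.
    env = {"DISPLAY": display}
    return subprocess.run(command, env=env, capture_output=True, text=True)


# In a real script the display string (e.g. ":0.0") would come from the
# allocation step; here we just pass a placeholder and a harmless command.
result = run_on_display(":0", ["/bin/echo", "benchmark would run here"])
print(result.stdout.strip())
```

The point of the Python API is exactly this kind of glue: allocate GPUs, point applications at the resulting X servers, collect results, and release the resources when done.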
If you are upgrading from an earlier release (e.g., 1.0-2), note that any XML template files you may have created are now invalid. Please keep backups of these files. Sorry for this break from backward compatibility, but it was necessary!
I need to thank the following individuals for their contributions:
- Simon Fowler: found a few issues with VizStack on Ubuntu, requested support for bezels, and was the first (and a very active) subscriber on the mailing list. Simon also contributed templates for the Dell 3008WFP monitor and the Quadro NVS 420 card.
- Paul Melis: suggested source documentation changes.
Before I finish, many thanks to the following software packages:
- The VirtualGL project for VirtualGL and TurboVNC.
- AsciiDoc, used for VizStack's documentation. Inkscape was used to draw the images.
- SCons, used by our build system
- The usual suspects : libxml2, Python, Subversion and Ubuntu Linux (Linux for human beings, indeed!)
- wxPython, and paramiko used for the Remote Access Tools
- InnoSetup and ISTool, used to create the Windows installer
- The ParaView project; we expect a number of vizstack users will be ParaView users as well
- The Equalizer project, for providing such a flexible framework. Writing out a VizStack script that can support all Equalizer capabilities would be a task by itself.
- OpenSG scenegraph library, used for test programs
- ...and the underlying software packages that all of these depend on.
A quick update. I'm excited to tell you that we have GPU sharing support in VizStack now. The most important application of this perhaps is multiple remote users per GPU using VirtualGL/TurboVNC.
I'll be cutting a new release, 1.1, on Monday. VizStack has seen many other changes since 1.0-2. The key ones are:
- Multiple Remote users per GPU using VirtualGL/TurboVNC
- Support for bezels on tiled displays
- Much improved documentation. The older manual is now split into a user guide, an admin guide and a dev guide. The user guide documents how users would typically use the utilities provided by VizStack. The dev guide shows how to program using the vizstack API, and explains several details. Note that the documentation is still a work-in-progress.
- Some more examples of using the Python API to automate tasks (running SPECViewPerf and CUDA bandwidth benchmarks).
- Sample script that lets you run Equalizer applications
- Automatic detection of unknown GPUs and display devices in the configure script.
- The template files for GPUs, displays, etc. are needed only on the master node, so you don't need to copy them to the other nodes.
- Many fixes!
If you want to live life on the bleeding edge, check out the "shree" branch from SVN. For those who want to play it safe, Monday isn't that far away!