New bio

I added a new biographical summary to the “About me” page. It covers my work at Northrop Grumman, UCI, and Hiperwall Inc.

The History of HIPerWall: Hardware and Architecture

Once we won the NSF grant to develop HIPerWall, we had to decide the exact details of the hardware to purchase and nail down the hardware and software architecture. We knew that we wanted high-resolution flat-panel monitors driven by powerful computers connected via a Gigabit Ethernet switch. We also knew that we did not want a rendering server (i.e., centralized rendering), but instead wanted the display computers to do all the work. We did want a control node that could coordinate the display computers, but its job was only to provide system state to the display nodes; each display node would independently determine what needed to be rendered on its own screen real estate and render it. We were not worried about things like software genlock or extremely tight timing coordination at the time.
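
To make that architecture concrete, here is a minimal sketch in Python. It is purely illustrative and not the actual HIPerWall code: the class names, the state format, and the content names are my assumptions. The idea is that the control node only publishes shared state describing where content sits in wall coordinates, and each display node clips that state against its own screen real estate and renders just its share.

# Minimal sketch of the "no rendering server" idea: the control node only
# broadcasts state (what content is where, in wall-wide pixel coordinates),
# and each display node independently renders whatever falls on its screens.
# Names and the state format are illustrative assumptions, not HIPerWall code.

from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def intersect(self, other: "Rect"):
        """Return the overlapping region of two rectangles, or None."""
        x1, y1 = max(self.x, other.x), max(self.y, other.y)
        x2 = min(self.x + self.w, other.x + other.w)
        y2 = min(self.y + self.h, other.y + other.h)
        if x2 <= x1 or y2 <= y1:
            return None
        return Rect(x1, y1, x2 - x1, y2 - y1)

# Hypothetical system state the control node would broadcast: content items
# placed in wall coordinates.
wall_state = [
    ("climate_model_frame.png", Rect(0, 0, 5120, 3200)),
    ("pollution_plume.mov",     Rect(5120, 1600, 2560, 1600)),
]

def render_my_share(my_tile: Rect, state) -> None:
    """What a display node does: clip each item to its own screen real estate."""
    for name, placement in state:
        visible = placement.intersect(my_tile)
        if visible is not None:
            # A real node would decode the content and draw the clipped region
            # locally; here we just report what it would draw.
            print(f"render {name}: region {visible} of the wall")

# e.g. the node driving the tile at wall position (2560, 0)
render_my_share(Rect(2560, 0, 2560, 1600), wall_state)

Because only this kind of compact state crosses the network, rather than rendered pixels from a central server, the display nodes do all the heavy lifting locally, which is what made Gigabit Ethernet sufficient for our purposes.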

We initially planned on using some ViewSonic-rebranded IBM 9-megapixel monitors that were intended for medical applications. These monitors met our “extremely high-res” requirement easily, but had three problems: they took 4 DVI inputs to drive them at full resolution, so we needed computers that could handle multiple video cards (not easy back in the AGP days); their refresh rate was something like 43 Hz when driven at full resolution, so movement and videos might not be smooth; and they were being discontinued, so they became quite hard to get.

Just as we were getting discouraged, Apple came up with what became our solution: the amazingly beautiful 30″ Cinema Display. This display, which we ultimately chose, is 4 megapixels with a reasonable bezel width (for the time), and it was nearly as expensive as the computer that drove it. It requires Dual-Link DVI because its 2560×1600 resolution is twice the resolution, and hence twice the bandwidth, of a 1920×1080 high-definition TV. At the time, the only commodity machine that could drive the displays was the PowerMac G5. Apple had an agreement with NVIDIA under which Apple was, for a while, the only company that could sell the GeForce 6800 cards with Dual-Link DVI, so if we wanted those monitors, we would have to drive them with Macs. Since Frank Wessel and I were Mac users, this was fine with us, because we liked the development environment and Mac OS X. Falko was rightly concerned that Macs had typically lagged Windows in graphics driver support from NVIDIA, which might have meant that we would miss out on, or be delayed in getting, important performance updates and capabilities. We arranged a trip to Apple HQ in Cupertino (at our own expense, though Apple did give us a nice lunch) to meet with some of Apple’s hardware and software leadership so we could make sure the G5s would work for us. We learned a few interesting things, but one that sticks with me is Apple’s philosophy of hiding hardware details. Both Windows and Linux allow programmers to set CPU affinity so a thread can be locked to a CPU to prevent cache interference and pollution due to arbitrary scheduling (this was a big problem in operating systems at the time, but has been remedied somewhat since). Apple refused to expose an affinity API, because they figured the OS knew better than the programmers, just as Steve Jobs knows better than everyone else about everything (OK, so that’s a reasonable point). While we could live with that restriction, I was amused at the (possibly correct) assumption that programmers don’t know best.
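
For readers who have not run into processor affinity, here is a small, hedged illustration of the difference (Python, not anything from HIPerWall): Linux exposes affinity through the sched_setaffinity() system call, which Python wraps as os.sched_setaffinity(), while macOS offers no equivalent way to pin a process or thread to a CPU, so the same request is simply unavailable there and the OS keeps full scheduling control.

# Illustration only: try to pin the current process to one CPU and report
# whether the operating system exposes that control at all.

import os

def pin_to_cpu(cpu: int) -> bool:
    """Try to pin the current process to a single CPU."""
    if hasattr(os, "sched_setaffinity"):       # available on Linux
        os.sched_setaffinity(0, {cpu})         # 0 means "the calling process"
        print("now restricted to CPUs:", os.sched_getaffinity(0))
        return True
    # No os.sched_setaffinity here (macOS, for example, exposes no CPU-pinning
    # API); the operating system decides where threads run.
    print("no CPU affinity control exposed on this platform")
    return False

if __name__ == "__main__":
    pin_to_cpu(0)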

Once we decided on the Apple 30″ Cinema Display and the PowerMac G5, we carefully worked to devise the right wall configuration to fit within our budget. We ended up deciding on an array of 55 monitors, 5 high and 11 wide. With the help of Greg Dawe from Calit2 at UCSD, we designed and contracted for a frame that would attach to the VESA mounts on the monitors. We ordered 10 PowerMac G5s (2.5 GHz with 2 GB RAM) and 10 monitors, so we could build a small 3×3 configuration and make sure things were working. Because these were dual-core G5s, I took one over to my Scalable Parallel and Distributed Systems (SPDS) Lab so one of my Ph.D. students could experiment to measure the cache-to-cache bandwidth. Unfortunately, someone broke into my lab (and a few others) and stole the G5, as well as a couple of Dell machines and my PowerBook G3. Since the university is self-insured and the value of everything was probably less than $5000, we didn’t get any reimbursement. As a side note, I did install a video camera in my lab, which helped capture the guy when he came back for more.

Initial HIPerWall prototype system

The 3×3 wall was a success, but we decided to drive two monitors with each Mac, because the performance was pretty good and we could save some money. Because we built the wall 5 monitors high, we couldn’t easily use vertical pairs per machine, so we paired the monitors horizontally, which required an even number of columns. We therefore decided to skip the 11th column, which is how the HIPerWall ended up with 50 monitors and only 200 megapixels of resolution. Next time, I’ll write about the software.

The History of HIPerWall: Origins

This is my attempt to relate the history of the Highly Interactive Parallelized display Wall (HIPerWall) research project, which led to the development of some of the highest-resolution tiled display walls in the world and eventually to Hiperwall Inc., which commercialized the technology. This is the first of several parts that will explore the origins, architecture, and software evolution of the HIPerWall and related projects.

The project was conceived through collaborative brainstorming between me and my colleague Falko Kuester, an expert in computer graphics and visualization. For a few years, we had been exploring project ideas to combine large-scale parallel and distributed computing with visualization to allow scientists to explore enormous data sets. An earlier proposal to build a 100-megapixel cave wasn’t funded, but it was received well enough that we felt we were on the right track.

We saw a Major Research Instrumentation (MRI) opportunity from the National Science Foundation and decided to propose a flat-panel-based high-resolution display system. There were other, somewhat similar systems being developed, including SAGE from Jason Leigh at the Electronic Visualization Laboratory at UIC. These systems made architectural and control choices that we wanted to approach differently. For example, the best SAGE systems at the time consisted of a rendering cluster connected by 10 Gbps networks to the machines driving the displays, thus turning all the data to be rendered into network traffic. We wanted to develop an approach that worked very well over 1 Gbps networks, which were becoming common and inexpensive at the time. We also intended to make the system highly interactive and flexible enough to show lots of different data types at once.

We wrote an NSF MRI proposal entitled HIPerWall: Development of a High-Performance Visualization System for Collaborative Earth System Sciences, asking for $393,533. We got a great deal of help from Dr. Frank Wessel, who led UCI’s Research Computing effort, in developing the project management approach and in reviewing and integrating the proposal. We included several Co-PIs from appropriate application disciplines, including pollution and climate modelling and hydrology.

The proposal was particularly well-timed because of the pending completion of the California Institute for Telecommunications and Information Technology (Calit2) building at UCI. The HIPerWall would have a prime spot in the visualization lab of the new building and thus would not have to fight for space with existing projects. The proposal also explored connectivity to the OptIPuter, Larry Smarr’s ambitious project to redefine computing infrastructure, and to Charlie Zender’s Earth System Modelling Facility, an IBM supercomputer at UCI.

I got the call from the program manager that we had won and needed to prepare a revised budget because, as with most NSF proposals, they were cutting the budget somewhat. I called Falko, and he quickly called the program manager back. He was sufficiently convincing and enthusiastic that all the money was restored, as the NSF project page shows.

The next part of this series will cover the hardware and initial software architecture of the HIPerWall.

My status

For those who don’t know, I left my academic position at UCI after the 2009-10 academic year and am now Chief Scientist at Hiperwall Inc. full time.

Hiperwall is keeping me very busy as we are getting ready to release our version 1.2 software. This new software is significantly enhanced from our first release. We added a powerful slideshow creator/editor, made the Sender more capable, added GPU assist and other enhancements to the Streamer, and introduced a scheduling capability to allow the wall to switch environments unattended. We also added a simple web services interface to allow control of the wall via web browsers (including iPad and iPhone) and other networked control devices. Under the hood, we provide a faster and more intuitive content manager.
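
As a rough illustration of what controlling the wall from a browser or other networked device can look like, here is a hypothetical sketch in Python. The host name, port, and endpoint below are placeholders I made up, not the actual Hiperwall web services interface; the point is only that anything able to issue an HTTP request could drive the wall this way.

# Hypothetical sketch: ask a wall controller to switch to a saved environment.
# The URL and endpoint are placeholders, not the real Hiperwall interface.

from urllib import request, parse

def activate_environment(name: str, controller: str = "http://wall-controller:8080") -> str:
    """Send an HTTP request asking the (hypothetical) controller to switch environments."""
    data = parse.urlencode({"environment": name}).encode()
    with request.urlopen(f"{controller}/schedule/activate", data=data) as resp:
        return resp.read().decode()

# Any browser, iPad, or script on the network could issue the same request.
if __name__ == "__main__":
    print(activate_environment("morning-dashboard"))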

Once the new version is done, we have lots of ideas for future capabilities, so keep your eyes on this space and hiperwall.com.

Working on research section

Orig: 1/28/2010

I’ve started filling in content on the Research section of the website, and will continue over the next few weeks.

IEEE CS Orange County Chapter slides

The slides for my talk on 9/28/09 at the Orange County IEEE Computer Society meeting are available here:

http://stephenjenks.com/UbiquitousParallelComputing.pdf