Once we won the NSF grant to develop HIPerWall, we had to decide the exact details of the hardware to purchase and nail down the hardware and software architecture. We knew that we wanted high-resolution flat panel monitors driven by powerful computers connected via a Gigabit Ethernet switch. We also knew that we did not want a rendering server (i.e., centralized rendering); instead, we wanted the display computers to do all the work. We did want a control node to coordinate the display computers, but only to provide system state: the display nodes would independently determine what needed to be rendered on their own screen real estate and do so. We were not worried about things like software genlock or extremely tight timing coordination at the time.
We initially planned on using some ViewSonic-rebranded IBM 9-megapixel monitors that were intended for medical applications. These monitors easily met our “extremely high-res” requirement, but had three problems: they took four DVI inputs to drive at full resolution, so we needed computers that could handle multiple video cards (not easy back in the AGP days); their refresh rate was something like 43 Hz when driven at full resolution, so movement and video would not be smooth; and they were being discontinued, so they became quite hard to get.
Just as we were getting discouraged, Apple came up with what became our solution: the amazingly beautiful 30″ Cinema Display. This display, which we ultimately chose, is 4 megapixels with a reasonable bezel width (for the time), and was nearly as expensive as the computer that drives it. It requires Dual-Link DVI because, at 2560×1600, it is twice the resolution, hence twice the bandwidth, of a 1920×1080 high-definition TV. At the time, the only commodity machine that could drive the display was the PowerMac G5. Apple had an exclusive arrangement with NVIDIA, so for a while it was the only company selling GeForce 6800 cards with Dual-Link DVI, which meant that if we wanted those monitors, we would have to drive them with Macs. Since Frank Wessel and I were Mac users, this was fine with us, because we liked the development environment and Mac OS X. Falko was rightly concerned that Macs had typically lagged Windows in NVIDIA graphics driver support, which might mean we would miss out on, or be delayed in getting, important performance updates and capabilities.

We arranged a trip to Apple HQ in Cupertino (at our own expense, though Apple did give us a nice lunch) to meet with some of Apple’s hardware and software leadership so we could make sure the G5s would work for us. We learned a few interesting things, but the one that sticks with me is Apple’s philosophy of hiding hardware details. Both Windows and Linux allow programmers to set CPU affinity, locking a thread to a particular CPU to prevent the cache interference and pollution caused by arbitrary scheduling (this was a big problem in operating systems at the time, but has been remedied somewhat since). Apple refused to expose an affinity API, because they figured the OS knew better than the programmers, just as Steve Jobs knows better than everyone else about everything (OK, so that’s a reasonable point).
While we could live with that restriction, I was amused at the (possibly correct) assumption that programmers don’t know best.
Once we decided on the Apple 30″ Cinema Display and the PowerMac G5, we carefully worked to devise the right wall configuration to fit within our budget. We ended up deciding on an array of 55 monitors, 5 high and 11 wide. With the help of Greg Dawe from Calit2 at UCSD, we designed and contracted for a frame that would attach to the VESA mounts on the monitors. We ordered 10 PowerMac G5s (2.5 GHz with 2 GB RAM) and 10 monitors, so we could build a small 3×3 configuration and make sure things were working. Because these were dual-core G5s, I took one over to my Scalable Parallel and Distributed Systems (SPDS) Lab so one of my Ph.D. students could experiment with it to measure the cache-to-cache bandwidth. Unfortunately, someone broke into my lab (and a few others) and stole the G5, as well as a couple of Dell machines and my PowerBook G3. Since the university is self-insured and the value of everything was probably less than $5,000, we didn’t get any reimbursement. As a side note, I did install a video camera in my lab, which helped capture the guy when he came back for more.
The 3×3 wall was a success, but we decided to drive two monitors with each Mac, because the performance was still quite good and we could save some money. Because the wall was 5 monitors high, we couldn’t easily use vertical pairs per machine, so we paired monitors horizontally and skipped the 11th column, which is how the HIPerWall ended up with 50 monitors and only 200 megapixels of resolution. Next time, I’ll write about the software.