Cloud Computing for Home Has Huge Problems

We’re getting lots of examples of Cloud Computing for use at home these days. Examples include Apple’s new iCloud, the Siri digital assistant built into the iPhone 4S, Google Documents and GMail, and cloud backup services like Mozy, Carbonite, and the one I use, CrashPlan. All of these store your data in the cloud (on servers somewhere on the Internet) and provide you services using that data. Cloud Computing means you don’t have to maintain infrastructure (servers and programs and such) and can use the services from nearly anywhere. It’s great for businesses that need to scale services quickly. So what’s the problem for home users?

The problem is that home Internet access isn’t up to the task of supporting data-intensive cloud services, and even if it were, the capacity limits our service providers impose will severely curtail the cloud’s usefulness. The examples below range from annoying to potentially catastrophic. For cloud computing to work for average people, these problems must be fixed; if they aren’t, a lot of people are going to have big problems.

Cloud backup is a great way to make sure your data is backed up to a remote location that will survive even if your house is burglarized or burns down. You run a program on your computer, and it backs your data up to the cloud whenever you have a network connection, so you always have a backup in case of disaster. The first problem anyone using these tools encounters is that the initial backup takes weeks. That’s right: the upload speed from our homes is very slow, usually on the order of one or two million bits per second, and I think the cloud backup providers throttle even further, so uploads often run below even that limit. Once the initial backup is made, future backups are incremental, sending only changed data, so they are usually fast. Problems can occur for people who use virtual machines (Parallels or VMware, for example), because the virtual disks they use tend to be many GB, so just booting a VM guarantees a significant upload, even if only the changed parts of the disk are sent. And as digital cameras keep getting better, more and larger photos pile up on our hard drives, all of which need backing up too, along with our iTunes files, digital movie copies, and so on. The outlook is ugly, because even average users will soon have hundreds of GB of data that they care about and don’t want to lose.
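To put rough numbers on that, here is a back-of-the-envelope calculation (the 200 GB backup size and the 1.5 Mbit/s up / 20 Mbit/s down speeds are my assumed figures, not anyone’s actual plan):

```java
// Rough transfer-time arithmetic for a home cloud backup.
// All figures below are illustrative assumptions, not measured values.
public class TransferTime {
    public static void main(String[] args) {
        double backupBits = 200e9 * 8;               // 200 GB expressed in bits
        double upDays = backupBits / 1.5e6 / 86400;  // at 1.5 Mbit/s up
        double downDays = backupBits / 20e6 / 86400; // at 20 Mbit/s down
        System.out.printf("Initial backup: about %.0f days of continuous upload%n", upDays);
        System.out.printf("Full restore: about %.1f days of continuous download%n", downDays);
    }
}
```

That is about 12 days of saturated uplink for the initial backup under these assumptions, and that’s before any provider-side throttling; real-world numbers stretch easily into weeks.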

All of the above is annoying, because our home Internet infrastructure stinks, but it gets worse: if you have a failure or loss and need to restore a backup of, say, 200 GB, your Internet Service Provider (ISP) may prevent it. Even at the faster download speeds, such an undertaking will take days, and with the capacity caps now being put into place by cable companies and other ISPs, we may be blocked when we hit the cap, or at least significantly slowed. Yes, I know some of the backup providers have services where, for an outrageous fee, they will mail you DVDs or maybe a hard drive with your data, but that’s on top of the usual monthly charge. So if your MacBook gets stolen, not only will you need to buy a new one, but you’ll need to pay to get your files back or risk being blocked by your ISP. Not very comforting. Perhaps cloud backup isn’t as good a deal as we thought, and we should all keep local backups as well (yes, I know that’s a good idea anyway, but it’s not nearly as low-impact and convenient as cloud backup).

Apple’s iCloud is a new player in this game, and it will cause lots of trouble. One feature, PhotoStream, automatically uploads your photos to the cloud from your iPhone and then down to iPhoto. It really works and is surprisingly nifty. It took more than a GB of photos from my wife’s new iPhone 4S, sent them to the cloud, and the next day they were in her iPhoto. That’s pretty handy! But wait: that means it uploaded a GB of photos to the cloud. Then it downloaded them again. Then iPhoto uploaded them again (at least I think that’s what it was doing when it was hogging my Internet connection all day). So we’re eating into those ISP-enforced capacity caps without even knowing it.

Even the nifty Siri assistant built into the iPhone 4S uploads your commands to the cloud for interpretation (and the results may require Internet data too). So the data plan from your phone company, unless it is unlimited, will be slowly eaten away by constant Siri use. It may not be much, but it isn’t nothing.

In short, there are companies selling us cloud services for the home that will be strongly affected by limitations imposed by our network connections and by our ISPs. Before long, these competing interests will collide and we, the consumers, will be screwed. We will have to pay more if we want to use these very handy cloud services.

I have some (not nearly comprehensive) suggestions on how to avoid such a crisis.

  1. ISPs should track data usage as cloud service usage grows and adjust their capacity caps upwards as needed, so even above-average users never hit them. The ISPs always say the caps only affect the top 1% or less, so they should keep it that way.
  2. Allow occasional exceptions to the capacity caps. If someone calls and says they are restoring a cloud backup, lift the cap that month, as long as it is a rare event.
  3. The services should let users set preferences limiting how much they upload or download, so automatic syncing never triggers these caps. (A sketch of what that might look like follows this list.)
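Here is a minimal sketch of suggestion 3. Everything in it (the class name, the budget figures, the pacing strategy) is my own illustration of one way a backup client could respect a user-configured monthly budget, not how any actual service works:

```java
import java.time.YearMonth;

// Illustrative only: pace uploads against a user-set monthly byte budget
// so automatic cloud sync can't push a household over its ISP's cap.
public class CapAwareUploader {
    private final long monthlyBudgetBytes; // the slice of the cap the user allots to backup
    private final long maxBytesPerSecond;  // steady-state pacing rate
    private long usedThisMonth = 0;
    private YearMonth month = YearMonth.now();

    public CapAwareUploader(long monthlyBudgetBytes, long maxBytesPerSecond) {
        this.monthlyBudgetBytes = monthlyBudgetBytes;
        this.maxBytesPerSecond = maxBytesPerSecond;
    }

    /** Sends a chunk if the budget allows, sleeping to honor the pacing rate;
     *  returns false when the chunk must wait for the next billing month. */
    public synchronized boolean sendChunk(byte[] chunk) throws InterruptedException {
        if (!YearMonth.now().equals(month)) { // new billing month: reset the meter
            month = YearMonth.now();
            usedThisMonth = 0;
        }
        if (usedThisMonth + chunk.length > monthlyBudgetBytes) {
            return false;                     // defer rather than blow the cap
        }
        Thread.sleep(1000L * chunk.length / maxBytesPerSecond); // crude pacing
        usedThisMonth += chunk.length;
        transmit(chunk);
        return true;
    }

    private void transmit(byte[] chunk) { /* the actual network send is elided */ }
}
```

A real client would also want to count downloads, share the meter across devices, and know the ISP’s actual billing cycle, but even this much would keep background syncing from surprising anyone.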

Essentially, these cloud computing services will transform all of us into heavy data users on our networks, so the big bandwidth hogs will no longer be people downloading porn or pirating movies and songs, but ordinary people who take photos and movies with their phones and back up their media libraries. No longer will the ISPs be able to claim that only abusers use all their bandwidth, because it might be all of us: not because we explicitly initiated anything, but because automatic programs are quietly accessing the cloud on our behalf.

New Hiperwall version significantly enhances functionality

Hiperwall Inc. today announced the new version 2.0 of the Hiperwall display wall software. The new version significantly enhances functionality of existing components and adds two new ones that are very powerful. See the announcement or the Enhancements list for an overview of what is new, but I’ll mention a few of the new features/capabilities and describe why they are significant.

  • Security: Any connections that can originate from outside the Hiperwall LAN (such as those from Senders and Secondary Controllers) use authenticated SSL connections to enhance the security and integrity of the system. Sender connections use SSL to authenticate even within the LAN.
  • Multi-Sender: The new Sender can deliver multiple portions of a computer’s screen to the display wall. This means several applications or data feeds can be shown from a single machine. Of course, the entire screen can be sent, as before. Sender performance is also improved, particularly when a Sender window is shown across a large portion of the display wall.
  • Secondary Controller: While the usual Control Node is very powerful and easy to use, Secondary Controllers are even more intuitive. Secondary Controllers can be placed anywhere in a facility to control walls distributed throughout the area. They show a low-bandwidth view of the content on the display walls, so they can be used over wireless or at home to monitor a wall’s contents and behavior. They can also focus on a single display wall (in a multi-wall configuration) or show all active objects. You can see how easy the Secondary Controller is to use in the following video.

http://www.youtube.com/watch?v=h_iXg-FDjYk

  • Share: Until now, the Sender has been able to show applications and other data on a Hiperwall from anywhere across the Internet. With Share, Senders can be shared with several Hiperwall systems, enabling collaboration and communications across distributed sites. Share automatically adjusts the data rate based on link conditions to each display wall it connects to, so systems connected via lower speed links will not slow down the data feed to systems connected via fast links.
  • Streamer: The Streamer can now send what is shown on a display device, in addition to the usual capture-device and movie-file streaming. This is not meant to replace the Sender, which sends the contents of a computer’s displays; the Streamer typically provides a higher frame rate at the expense of much higher network bandwidth.
  • Text: Generate attractive text labels and paragraphs with any installed font in any color and with colored or transparent backgrounds. This is great for digital signage or even labeling Sender or Streamer feeds.
  • Slideshows: Slideshows now have more advanced transitions, so attention-grabbing wipe and fly motions can be used.

There are many other great new features and capabilities, but the ones listed here are the ones I think will have the biggest impact on our already very easy-to-use display wall software. The Secondary Controller makes content manipulation even easier and more intuitive than before, so customers can take advantage of Hiperwall’s incredible interactivity and flexibility. Share makes sharing content among walls and among sites quick and easy. Even small features, like content previews, make the Hiperwall experience even better than before. Visit Hiperwall.com for more information.

Hiperwall Features and Software Development

Now that we have released a maintenance update to our third software release and are closing in on our fourth release (likely this Summer), I’ll comment on how our development has changed and how we focus on what to develop and when.

At the start of 2007, the HIPerWall software primarily consisted of two programs: the original TileViewer, which handled big-image viewing, and the very interactive NDVIviewer that displayed regular images, movies, video streams and more — I called it MediaViewer (more details on both can be found in this article).

I was lucky to hire Dr. Sung-Jin Kim back to the HIPerWall project as a postdoctoral researcher, and together we set about transforming the software. Note: When I write HIPerWall, it designates the research project, which is distinct from the Hiperwall company.

Sung-Jin developed a new TileViewer that could handle all the MediaViewer features as well as deal with big images much better than the original TileViewer. He added the ability to rotate anything from a playing movie to a billion pixel image in real-time and interactively. This new TileViewer formed the basis of the Hiperwall technology licensed from UCI to the Hiperwall company. Today’s product, however, bears little resemblance to that old code.

Over the years, many of the thousands of visitors we had to HIPerWall expressed their interest in running their software in high resolution on the wall. When told this entailed lots of parallel and distributed computing programming as well as a significant overhaul of their drawing code, people shied away. We decided we needed a way for people to show their applications on the tiled display without having to rewrite their code. We also wanted to provide the ability to use proprietary programs, like PowerPoint, CAD, GIS tools, etc. One way of doing this is to capture the video output of a computer via a capture card, then stream the screen to the wall. We could already stream HD video, so this was certainly a workable solution, but required very expensive (at the time) capture cards that tended to use proprietary codecs. It would also take enormous network bandwidth to stream a high-resolution PC screen. While we have this capability in the Hiperwall software today, we decided it was too brute-force and inelegant (and expensive) for the time.

I decided to use software to capture the screen and send it to the HIPerWall. I developed the ScreenSender (later renamed HiperSender, or just Sender) in Java so it could work on Mac, Windows, or Linux, yet have sufficient performance to provide a productive and interactive experience. While the original Sender was fairly primitive and brute-force, today’s Sender software can send faster than many Display Nodes can handle and uses advanced network technology that lets us display tens of Senders simultaneously without seriously taxing the network.
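For flavor, here is a bare-bones sketch of the approach (this is my illustrative code, not the actual Sender source; the host name and port are made up): grab the screen with java.awt.Robot, compress each frame, and push it over TCP.

```java
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import java.io.OutputStream;
import java.net.Socket;
import javax.imageio.ImageIO;

// Minimal screen-sender sketch: capture, JPEG-encode, send, repeat.
public class MiniScreenSender {
    public static void main(String[] args) throws Exception {
        Rectangle screen = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
        Robot robot = new Robot();
        try (Socket sock = new Socket("wall.example.com", 9000)) { // hypothetical receiver
            OutputStream out = sock.getOutputStream();
            while (true) {
                BufferedImage frame = robot.createScreenCapture(screen);
                ImageIO.write(frame, "jpg", out); // crude: one whole JPEG per frame
                Thread.sleep(100);                // ~10 fps, brute-force pacing
            }
        }
    }
}
```

A production sender obviously does far more: it sends only the regions that changed, frames its protocol properly instead of concatenating JPEGs, and adapts its rate to the network, which is roughly the difference between this sketch and what the real Sender became.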

We also started to improve the usability of the software. Initially, the software could be operated by a few key presses, but as we got more content and more capabilities, we knew we had to make a user interface. Sung-Jin and I defined an interface protocol and made a graphical user interface that let users choose content to display and view and change the properties of displayed objects.

So we had this powerful software that was starting to gain attention. First, the Los Angeles Times published a nice article on the front page of the California section, followed by a radio interview I did for a radio station that broadcasts National Academy of Engineering content, and culminating in a CNN piece that was repeated around the world.

Somewhere around this time, Jeff Greenberg of TechCoastWorks came along to see if he could help us form a company. Because he had been in the computing technology industry for years, he was able to guide our efforts to make the software easy to use for commercial purposes. Around the end of the year, Samsung became interested in licensing our product, so the real software effort began. While it is okay for research software to crash (in fact, if it does, you can claim that you’re pushing the edge), commercial software has to work as expected, and in this case, 24 hours a day, 7 days a week, for months at a time. Therefore, memory leaks that would have been okay for a short run in the lab were not acceptable, nor were crashes in corner cases. We also had to work hard to improve performance. In the HIPerWall, we used PowerMac G5s with 2 or 4 processors each and advanced graphics cards (for the time). This was a pretty nice environment for our software, but the embedded PCs in Samsung’s monitors were not quite as fast and had significantly less graphics horsepower. We used a small 2×2 Samsung wall as a test bench and made the software sufficiently robust that we demonstrated it on a huge 40-panel wall at the Samsung booth at the InfoComm show in Las Vegas in June 2008. We also had to make the software multilingual, which is not as easy as it sounds, even with Java’s support for Unicode characters. The Samsung-licensed version of the software supports 8-10 languages.
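The standard Java mechanism for this looks roughly like the following (a generic ResourceBundle sketch, not the actual product’s localization scheme; the bundle and key names are invented):

```java
import java.util.Locale;
import java.util.ResourceBundle;

// UI strings live in per-locale files (Messages_ko.properties,
// Messages_de.properties, ...) and are looked up by key at runtime,
// falling back to Messages.properties when no locale-specific file matches.
public class Localized {
    public static void main(String[] args) {
        ResourceBundle bundle = ResourceBundle.getBundle("Messages", Locale.KOREAN);
        System.out.println(bundle.getString("menu.open"));
    }
}
```

Part of why it is harder than it sounds: in the Java of that era, .properties files had to be ISO-8859-1 with every non-Latin character written as a \uXXXX escape, and layouts that fit an English label would overflow when the translated string arrived.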

Choosing features to develop has changed from making what we think is cool to making things that will help customers and help sales. Our software still handles gigapixel images with aplomb, but for the many control rooms and network operation centers (NOCs) that use Hiperwall, the popular display objects are Senders (for monitoring whatever needs monitoring) and Streamers (to keep an eye on CNN and the weather). For digital signage applications, regular images and movies are popular, along with Streamers and Senders. To coordinate these complex display layouts, we provide a way to save the state of the Hiperwall as an Environment, which can be restored easily.

We also added a Slideshow feature that can contain any of our object types with variable timing. It can even have overlays of a company logo, for example. This feature is popular both for digital signage (step through products, etc.) and control rooms where there may be more information than can comfortably fit on the wall at a given time. (Though the right answer is to buy a larger wall! 😎 )

In response to customer requests, we added scheduling capability to show different environments at different times on different days, etc. UCI’s Student Center Hiperwall system makes tremendous use of the scheduler for their very artistic content.

Another example of our responsiveness to customer needs comes from the large Hiperwall-based Samsung UD system installed at the Brussels Airport. They were using 3 infrared cameras to view passengers along the walkways and showing the streams on the tiled display along the walkway, as shown below. One camera was on the opposite side, so its video needed to be flipped horizontally. They had been using another computer to do the flip, which added some delay. Since such a flip is trivial in today’s graphics cards, we added flip options to the Streamer software, eliminating the extra hardware and the delay.
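To see why the flip costs essentially nothing, here is the whole operation in Java2D (on the GPU it is even cheaper: just mirrored texture coordinates). This is an illustrative sketch, not the Streamer’s code:

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

// Horizontal flip as a single affine transform: scale x by -1,
// then translate the mirrored image back into the visible area.
public class Flip {
    static BufferedImage flipHorizontal(BufferedImage src) {
        AffineTransform tx = AffineTransform.getScaleInstance(-1, 1);
        tx.translate(-src.getWidth(), 0);
        return new AffineTransformOp(tx, AffineTransformOp.TYPE_NEAREST_NEIGHBOR)
                .filter(src, null);
    }
}
```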


With our next release, we will add many more customer-centric features that will make Hiperwall significantly more powerful, secure, and collaborative, but I will not comment on any here until they are officially announced by the company.

On the origins of the Hiperwall name

Many people are confused by the spelling of the Hiperwall® name, often misspelling it “Hyperwall” or even “Hyper Wall.”

The name Hiperwall is a registered trademark owned by the University of California (UC Irvine, in particular) and exclusively licensed for commercial use by Hiperwall Inc.

The goal of the research project led by Falko Kuester and myself when we were UCI professors was to develop technology to drive extremely high resolution tiled display walls. Our approach differed from that of other tiled display systems in that we wanted our system to scale easily to huge sizes, so we needed to avoid the centralized rendering system (read: potential bottleneck) that most others had. Therefore, we put powerful computers behind the displays. These display nodes perform all the rendering work for their displays and have little interaction with other display nodes. A central control node simply tells the display nodes what to display but stays out of the rendering path, so it doesn’t bottleneck the system.

Because of this very distributed and highly parallel computing approach, our system is much more responsive than most other tiled display systems, so we called it the Highly Interactive Parallelized display Wall, or HIPerWall for short. The acronym is a little forced, because we had to ignore the word “display,” but the idea is pretty clear. You can see the research project logo on this image of the desktop screen for the HIPerWall Mini system we showed at Apple’s World Wide Developers Conference in 2006. At 72 million pixels of screen resolution, the HIPerWall Mini was one of the highest resolution displays in the world at the time.

You’ll note that the “IP” in HIPerWall is highlighted in a different color. This is because we based our technology on the Internet Protocol (IP) rather than proprietary protocols or networks so we could interoperate and use standard, off-the-shelf equipment. This is one of the main reasons Hiperwall systems are so cost-competitive today: we use our advanced software on COTS computers, displays, and networks to create a powerful tiled display system without proprietary servers, amplifiers, and non-scalable bottlenecks.

About the same time we built HIPerWall, NASA Ames built a much smaller tiled display named Hyperwall, which surely led to some name confusion. NASA’s current Hyperwall is even higher resolution than the original 200 MPixel HIPerWall. In the meantime, Apple has made some displays for their stores to show iOS App sales, unfortunately naming them Hyperwall, too.

So to summarize, Hiperwall is the product derived from HIPerWall the research project. NASA and Apple both have Hyperwall systems, which are unrelated to each other and unrelated to Hiperwall.

Eclipse and Yoxos

I use Eclipse for my Java development. I used to use JBuilder Turbo, but it’s now so hard to get a license for more than one computer that I’ve given up and switched to plain Eclipse.

Eclipse is a really good development environment with on-the-fly compilation, generally excellent features, and a few annoyances. One of the biggest annoyances is its update/install system, which usually doesn’t actually find updates and typically doesn’t do a good job of installing new components. One day, I tried to install the profiling tools, and the install system had such a hard time finding the components that campus security blocked access to the Hiperwall lab, because they were sure only malware would hit 70 FTP servers in a few seconds. No, it was Eclipse, as it turned out after I got us blocked a second time. So clearly no more Eclipse component installs on campus. When I tried at home, I gave up after an hour or so of it not finding the components. This isn’t necessarily the fault of the Eclipse developers – they rely on free hosting for mirrors of the files, but the mirrors may not always be up to date or even complete.

Because of these troubles, I tried and am still using Yoxos. Yoxos creates a custom Eclipse at start time, which delays the initial start quite a bit as the components are downloaded, but if nothing changes, future starts are fairly fast. It allows you to select which components you want and then downloads (from Yoxos’ servers) and installs them for you. It works very well and I haven’t had any trouble with a Yoxos-built Eclipse.

The version I’m using is currently free, but as Yoxos is a commercial entity, they charge for some services and this version may eventually cost something. Whether it will be worth the money to save hassle depends on the cost. But for now, Yoxos is a terrific way to use Eclipse and is highly recommended for Java developers.

UCI EECS Colloquium Talk 2010

I will present a talk on “Hiperwall: From Research Project to Product” at the UCI EECS Colloquium at 5PM on Wednesday, Nov. 10, in McDonnell Douglas Engineering Auditorium.

The official announcement is here.

I made minor updates on 10/10, so be sure to get the updated presentation (below).

The presentation is Colloquium Presentation 2010 updated.

NEC Display Solutions Partners with Hiperwall

NEC Display Solutions announced today that they are partnering with Hiperwall for our software to power high-resolution display walls (sorry, I can’t stand the more limiting term “video walls”).

For more information, read their press release.

The History of HIPerWall: The Research Software (2005-2006)

The HIPerWall system was a pretty impressive collection of hardware for 2005, with 50 processors (more were added later), 50 GB of RAM, more than 10 TB of storage (we got a gift of 5 TB worth of drives for our RAID system from Western Digital), and 50 of the nicest monitors available, but it was the software that really made it special. Remember that HIPerWall is an acronym for Highly Interactive Parallelized Display Wall. We took that interactivity seriously: we didn’t just want to show a 400 million pixel image of a rat brain, we wanted to let users pan and zoom the image to visually explore the content. This user interactivity set the HIPerWall software apart from the other tiled display software available at the time and is still a major advantage over competing systems.

The original software was written by Sung-Jin Kim, a doctoral student at the time who was working on distributed rendering of large images. His software, TileViewer, was originally written to use Professor Kane Kim’s TMO distributed real-time middleware, but Sung-Jin ported it to Mac OS X and IP networking so it could work on HIPerWall. TileViewer ran on both the control node and on the display nodes. The control node managed the origin and zoom level of the image, while TileViewer on the display nodes computed exactly where the display was in the overall pixel space, then loaded and rendered the appropriate portion of the image. We preprocessed the images into a hierarchical format so the right level and image tiles (hence the name) could be loaded efficiently. The images were replicated to the display nodes using Apple’s very powerful Remote Desktop software. TileViewer also allowed color manipulation of the image using Cg shaders, so we took advantage of the graphics cards’ power to filter and recolor images. TileViewer didn’t have much of a user interface beyond a few key presses, so Dr. Chris Knox, a postdoctoral scholar at the time, wrote a GTK-based GUI that allowed the user to select an image to explore and then provided zoom and movement buttons that zoomed and panned the image on the HIPerWall. The picture below shows Dr. Chris Knox and Dr. Frank Wessel examining a TileViewer image on HIPerWall. The Macs are visible on the left of the image. The one below that shows Sung-Jin Kim in front of TileViewer on HIPerWall.

TileViewer in use on HIPerWall

Sung-Jin Kim in front of HIPerWall
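To make the distributed rendering concrete, here is my reconstruction of the kind of arithmetic each display node performed (this is illustrative code, not the original TileViewer source; the tile size and pyramid depth are assumptions):

```java
// Each display node maps its own rectangle (in wall pixels) through the
// control node's origin/zoom into the image pyramid, then loads only the
// tiles it overlaps. Level 0 is full resolution; each level halves it.
public class TileMath {
    static final int TILE = 512;      // assumed tile edge length in pixels
    static final int MAX_LEVEL = 8;   // assumed pyramid depth

    /** zoom = wall pixels per full-resolution image pixel;
     *  (originX, originY) = image top-left corner in wall coordinates;
     *  (nx, ny, w, h) = this node's screen rectangle in wall coordinates. */
    static void tilesToLoad(double originX, double originY, double zoom,
                            int nx, int ny, int w, int h) {
        // Choose the level whose pixels land closest to 1:1 on the screen.
        int level = Math.min(MAX_LEVEL, Math.max(0,
                (int) Math.floor(Math.log(1.0 / zoom) / Math.log(2))));
        double levelScale = zoom * (1 << level); // wall pixels per level pixel
        // This node's rectangle expressed in level-pixel coordinates:
        int lx0 = Math.max(0, (int) ((nx - originX) / levelScale));
        int ly0 = Math.max(0, (int) ((ny - originY) / levelScale));
        int lx1 = Math.max(0, (int) ((nx + w - originX) / levelScale));
        int ly1 = Math.max(0, (int) ((ny + h - originY) / levelScale));
        for (int ty = ly0 / TILE; ty <= ly1 / TILE; ty++)
            for (int tx = lx0 / TILE; tx <= lx1 / TILE; tx++)
                System.out.printf("load level %d, tile (%d,%d)%n", level, tx, ty);
    }
}
```

The important property is that no pixel data ever flows through the control node: it broadcasts a few numbers (origin and zoom), and every display node independently works out what to fetch and draw.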

The HIPerWall was built in the newly completed Calit2 building at UCI. We knew HIPerWall was coming, so Professor Falko Kuester, the HIPerWall PI, and I, as Co-PI, worked to get infrastructure in place in the visualization lab. Falko was on the planning committee for the building, so we hoped our needs would be met. The building had good networking in place, though no user-accessible patch panels, but power was “value engineered” out. We quickly determined (blowing a few breakers in the process) that HIPerWall would need a lot more power than was available in the visualization lab at the time. The Calit2/UCI director at the time, Professor Albert Yee, agreed and ordered new power circuits for the lab. Meanwhile, postdocs Kai-Uwe Doerr and Chris Knox were busy assembling the framing and installing monitors into the 11×5 frame designed by Greg Dawe of UCSD. We had a deadline, because the Calit2 Advisory Board was to meet in the new UCI Calit2 building, and Director Larry Smarr wanted to show HIPerWall. Somewhere around 3:00 PM on the day before the meeting, the electricians finished installing the power behind the wall. At that point, we moved the racks into place, putting 5 PowerMac G5s on each rack, running Ethernet cables, and plugging the monitors and Macs into power. Once we booted the system, it turned out that TileViewer just worked. We had the system working by 6 PM, and it was a great surprise for Larry Smarr that HIPerWall was operational for the meeting the next morning.

Larry Smarr at initial HIPerWall demo

Falko Kuester at initial HIPerWall demo

Sung-Jin Kim then turned to distributed visualization of other things, like large datasets and movies, also in a highly interactive manner. The dataset he tackled first was Normalized Difference Vegetation Index data, so the new software was initially named NDVIviewer. This software allowed the import of raw data slabs that could then be color coded and rendered on the HIPerWall. In keeping with the “interactive” theme, each data object could be smoothly moved anywhere on the display wall and zoomed in or out as needed. Once again, the display node software figured out exactly what needed to be rendered where and did so very rapidly. The NDVI data comprised sets of 3D blocks of data that represented vegetation measured over a particular area over time, so each layer was a different timestep. The software allowed the user to navigate forward and backward among these timesteps in order to animate the change in vegetation. The picture below shows NDVIviewer running on HIPerWall showing an NDVI dataset.

NDVI visualization on HIPerWall

NDVIviewer was also able to show an amazing set of functional MRI (fMRI) brain scans. This 800 MB data set held fMRI brain image slices for 5 test subjects who were imaged on 10 different fMRI systems around the country to see whether machines with different calibration or from different manufacturers yield significantly different images (they sure seem to do so), for a total of 50 sets of brain scans. NDVIviewer allowed each scan to be moved anywhere on the HIPerWall, and the user could step through an individual brain by varying the depth, or through all of them simultaneously. In addition, the Cg shader image processing could be used to filter and highlight the images in real-time. Overall, this was an excellent use of the huge visualization space provided by HIPerWall and never failed to impress visitors.

fMRI dataset visualization on HIPerWall

NDVIviewer could do much more than just show data slices. It showed JPEG images with ease, smoothly sliding them anywhere on the wall. It could also show QuickTime movies, using the built-in QuickTime capability of the display node Macs to render the movies, then showing the right portions of the movies in the right place. While this capability had minimal scientific purpose, it was always impressive to visitors, because a playing movie could be resized and moved anywhere on the HIPerWall. The picture below shows a 720p QuickTime movie playing on HIPerWall.

HD movie playing on HIPerWall

Sung-Jin Kim added yet another powerful feature to NDVIviewer that allowed it to show very high-resolution 3D terrain models based on the SOAR engine. SOAR is extremely well suited for tiled display visualization, because it is a “level-of-detail” engine that renders as much as it can of the viewable area based on some desired level of detail (perhaps dependent on frame rate or user preferences). NDVIviewer’s implementation allowed the user to vary the level of detail in real-time, thus smoothing the terrain or rendering sharper detail. The movie below shows SOAR terrain rendering on HIPerWall.

Because of the power and capabilities of NDVIviewer, I started calling it MediaViewer, a name which stuck with almost everyone. An undergraduate student, Duy-Quoc Lai, doing summer research added streaming video capability to MediaViewer, so we could capture FireWire video from our Panasonic HD camera and stream it live to the HIPerWall. Starting with the addition of streaming video in 2006, we began transitioning the software to use the SPDS_Messaging library that I had developed for parallel and distributed processing research in my Scalable Parallel and Distributed Systems laboratory.

In addition to TileViewer and MediaViewer, several other pieces of software were used to drive the HIPerWall. The SAGE engine from the University of Illinois at Chicago’s Electronic Visualization Lab was the tiled display environment for the OptIPuter project, so we ran it on HIPerWall occasionally. See the movie below for an example of SAGE on HIPerWall.

Dr. Chris Knox wrote a very ambitious viewer for climate data that could access and parse netCDF data for display on the HIPerWall. This allowed us to explore data sets from the UN Intergovernmental Panel on Climate Change (IPCC) on a massive scale. We could see data from many sites at once or many times at once, or both. This outstanding capability was a fine example of what HIPerWall was intended to do. The picture below shows one version of the IPCC viewer running on HIPerWall.

IPCC climate models explored on HIPerWall
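As a rough idea of what reading such climate data involves (Chris Knox’s viewer had its own code; this sketch just uses the UCAR netCDF-Java library, and the file and variable names are invented for illustration):

```java
import ucar.ma2.Array;
import ucar.nc2.NetcdfFile;
import ucar.nc2.Variable;

// Open a netCDF file and pull one gridded variable into memory.
// "tas" is the CMIP-style short name for surface air temperature.
public class ReadClimate {
    public static void main(String[] args) throws Exception {
        NetcdfFile nc = NetcdfFile.open("ipcc_tas_example.nc"); // hypothetical file
        try {
            Variable tas = nc.findVariable("tas");
            Array data = tas.read(); // dimensions are typically (time, lat, lon)
            System.out.println("read " + data.getSize() + " values");
        } finally {
            nc.close();
        }
    }
}
```

The hard part of the real viewer was not the parsing but deciding what, of the gigabytes available, each display node should fetch and draw for its little piece of the wall.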

Doctoral student Tung-Ju Hsieh also modified the SOAR engine to run on HIPerWall. His software allowed whole-Earth visualization from high-res terrain data sets, as shown in the movie below. This project was built to explore earthquakes by showing hypocenters in 3D space and in relation to each other. As before, each display node rendered only the data needed for its displays, and only to the level of detail specified to meet the desired performance.

Doctoral student Zhiyu He modified MediaViewer to display genetic data in addition to brain imagery for a project with UCI Drs. Fallon and Potkin to explore genetic bases for schizophrenia. This research turned out to be very fruitful, as HIPerWall sped up the discovery process for Drs. Fallon and Potkin. The image below shows Dr. Fallon on the left and Dr. Potkin on the right in front of HIPerWall. Photo taken by Paul Kennedy for UCI.

Drs. Fallon and Potkin in front of HIPerWall

Another software project started on HIPerWall is the Cross-Platform Cluster Graphics Library (CGLX). This powerful distributed graphics library makes it possible to port OpenGL applications nearly transparently to tiled displays, thus supporting 3D high-resolution visualization. Professor Falko Kuester and Dr. Kai-Uwe Doerr moved to UCSD at the end of 2006 and continued development of CGLX there. CGLX is now deployed on systems around the world.

In the next article, I will cover new research software from 2007 on when I took over leadership of the project at UCI. This new software forms the basis of the technology licensed to Hiperwall Inc., significantly advanced versions of which are available as part of Samsung UD systems and as products from Hiperwall Inc. In a future post, I will cover the wonderful content we have for HIPerWall (and Hiperwall) and how easy it is to make high-resolution content these days.

Asymmetric Computing: Days of Cheap GPU Computing may be over

Reposted from my Asymmetric Computing blog.

For those of us interested in GPU computing, Greg Pfister has written an interesting article entitled “Nvidia-based Cheap Supercomputing Coming to an End” commenting on the future of NVIDIA’s supercomputing technology that has been subsidized by gamers and commodity GPUs. It looks like Intel’s Sandy Bridge architecture may end that.

If you don’t read Greg Pfister’s Perils of Parallel blog, you should. He’s been doing parallel computing for a long time and is very good at exposing the pitfalls and hidden costs of parallelism.

Added Hiperwall Description

I added a description of Hiperwall.