I’m in favor of the idea of backing up data offsite to avoid disasters and their effects. Living in California, where we overdo it in both earthquakes and fires, data safety is essential. While I was working on my dissertation research and the dissertation itself, I would back all of it up on a CD every few months and mail the CD to my wife’s relatives living across the country.
These days, it should be easier to do offsite backup, but our data is much bigger, too. Companies like Mozy and Carbonite offer unlimited online backup for reasonable monthly fees. Jungle Disk is a pay-as-you-go plan that uses Amazon S3 or RackSpace Cloud for storage (since those commercial services charge by the GB, that cost is passed on to the customer).
I use Mozy on my Windows PC and it mostly works (on the PC; I’ll talk about the Mac in a minute). The biggest problem with Mozy (and, if you read all the complaints on the web, Carbonite too) is that it is so slow. I use the PC in a building and on a campus that has gigabit speeds all the way to the internet provider, yet the upload speed was always kilobits per second or maybe 1 Mbps. And there were long periods of time where the upload rate was 0, yet the internet connection was fine. Clearly Mozy throttles bandwidth and sometimes even stops uploads. This is apparently a really big problem if you need to restore a few hundred GB of data. Mozy and the others don’t really have a good solution to this. At least the services, such as Jungle Disk, based on commercial storage clouds can’t do as much bandwidth throttling, because the cloud owners manage the networks and it is in their best interest to make access fast (and they likely don’t know or care what data is being stored there).
While Mozy is tolerable on my PC, it was completely worthless on my Mac. I designated a few hundred MB of files, including my Quicken file and other documents that I couldn’t afford to lose. Mozy claimed it was done backing up for months, yet when I checked into it, it had apparently only backed up a single 42KB file. So I had a false sense of security, but really was completely unprotected, which is intolerable. Therefore, I deleted Mozy forever from my Mac and will delete it from my PC when my subscription expires in a couple months.
I have recently been trying CrashPlan on my Mac, which is pretty nifty. It manages both online backup and backup to local storage. It seems to have a family plan for a very reasonable rate, but for the moment, I’m using the 30-day free trial, and the online backup has been fast (my cable modem’s full upload rate). The backups to local storage also work fine, though it seems to have a bit of trouble remembering that it can use a disk image on a network file server as a backup destination. Overall, CrashPlan seems like a pretty good backup solution: restoration from local backup is fast and easy, while remote restores would obviously be slower but still available in case of disaster.
I will present a talk on “Hiperwall: From Research Project to Product” at the UCI EECS Colloquium at 5PM on Wednesday, Nov. 10, in McDonnell Douglas Engineering Auditorium.
NEC Display Solutions announced today that they are partnering with Hiperwall for our software to power high-resolution display walls (sorry, I can’t stand the more limiting term “video walls”).
I just installed a really nifty WordPress plugin called WPtouch. If you are reading this on a normal PC, Mac, or Linux browser or even an iPad, you won’t see any difference. If, however, you look at the site on an iPhone or other smartphone, you’ll see a friendly mobile interface that takes less time to download and looks like a mobile App. Of course, you can switch back to the normal view at the bottom of the page.
If you run a WordPress site, give WPtouch a shot. It does exactly what it says it will do.
I’ve been intrigued with low-power Intel Atom + NVIDIA Ion combinations for a while now and have recently put together a little net-top computer with a 1.8GHz Atom (dual-core + Hyperthreading) paired with an Ion2 plus 4GB RAM and the hard disk left over after I upgraded my PS3’s drive. The unit is a very small machine – bigger than a Mac Mini, but smaller than any minitower. It has Dual-Link DVI, so it can drive high-res displays, as well as a bunch of USB ports, eSATA, and a card slot.
Given the clock speed of this machine, I would expect it to perform pretty well. While it isn’t paired with a very new, fast hard drive, it seems to boot fairly quickly and is responsive (under Windows 7 64-bit). While I haven’t tried any CUDA tests on there, I ran Prime95 to test it and burn it in, and I was surprised at how slow the Atom is. The 1.8GHz Atom takes between 8 and 10 times (not percent) longer than the 2.16GHz Core2Duo in my laptop for each of the Prime95 benchmark tests. My guess is that this is a very floating-point-intensive test, so it goes to show that Atoms stink at floating-point computations (probably like the Cell in the PS3 stinks at double-precision computations). Perhaps FP is even emulated on the Atom.
So what? So the Atom is lousy at floating-point arithmetic? All people use them for is netbooks and set-top boxes, right?
Very true, but those netbooks and set-top boxes are sold as Windows (or Linux) machines that can run normal software, not special-purpose machines, like iPads and Android phones. Why does that matter?
Well, for years, FP speed had been getting better and better, to the point that programmers were encouraged to use FP operations rather than faking it with fixed-point or making do with integer arithmetic. Games, graphics applications, and much more have become FP-intensive, since FP is so fast on normal Intel and AMD processors, yet those programs will suffer if run on an Atom. Sure, we won’t be running equation solvers on Atoms, but this new reality is bucking a trend that has made programming easier (always a good thing) while providing good performance. While clock speed has never been a very good measure of performance, now more than ever, we need to be very clear that a 1.8GHz Atom is MUCH weaker than a 1.8GHz Core2Duo at some operations. Even older processors, like the Pentium 4, will run rings around the Atom when doing floating point.
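As an aside, the fixed-point “faking it” I mentioned looks something like this minimal Q16.16 sketch (purely illustrative, not from any particular codebase): values are stored as integers scaled by 2^16, so multiplication needs only integer operations and a shift, with no FPU involved at all.

```python
# Minimal Q16.16 fixed-point sketch: 16 integer bits, 16 fractional bits.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # represents 1.0

def to_fixed(x: float) -> int:
    """Convert a float to Q16.16 fixed point."""
    return int(round(x * ONE))

def to_float(x: int) -> float:
    """Convert a Q16.16 value back to a float."""
    return x / ONE

def fixed_mul(a: int, b: int) -> int:
    """Multiply two Q16.16 values using only integer ops and a shift."""
    return (a * b) >> FRAC_BITS

a = to_fixed(1.5)
b = to_fixed(2.25)
print(to_float(fixed_mul(a, b)))  # 3.375
```

The price, of course, is limited range and precision and extra care by the programmer, which is exactly the burden fast hardware FP was supposed to lift.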
So I admire Intel’s ability to save energy with the Atom and to make it work well as a Windows host processor, but I am alarmed that they are willing to trade so much performance for power (though, frankly, Atoms are quite low power). Perhaps the power consumption is also 8-10 times (or more) lower than the Core2Duo in my laptop and is probably 50 times lower than that of a desktop PC, but the clock speed numbers are quite misleading when it comes to certain important kinds of performance. And that is bound to make some people unhappy.
The HIPerWall system was a pretty impressive collection of hardware for 2005, with 50 processors (more were added later), 50 GB of RAM, more than 10 TB of storage (we got a gift of 5TB worth of drives for our RAID system from Western Digital), and 50 of the nicest monitors available, but it was the software that really made it special. Remember that HIPerWall is an acronym for Highly Interactive Parallelized Display Wall. We took that interactivity seriously, so we didn’t just want to be able to show a 400 million pixel image of a rat brain; we wanted to allow users to pan and zoom the image to visually explore the content. This user interactivity set the HIPerWall software apart from the other tiled display software available at the time and is still a major advantage over competing systems.
The original software was written by Sung-Jin Kim, a doctoral student at the time who was working on distributed rendering of large images. His software, TileViewer, was originally written to use Professor Kane Kim’s TMO distributed real-time middleware, but Sung-Jin ported it to Mac OS X and IP networking so it could work on HIPerWall. TileViewer ran on both the control node and on the display nodes. The control node managed the origin and zoom level of the image, while TileViewer on the display nodes computed exactly where the display was in the overall pixel space, then loaded and rendered the appropriate portion of the image. We preprocessed the images into a hierarchical format so the right level and image tiles (hence the name) could be loaded efficiently. The images were replicated to the display nodes using Apple’s very powerful Remote Desktop software. TileViewer also allowed color manipulation of the image using Cg shaders, so we took advantage of the graphics cards’ power to filter and recolor images. TileViewer didn’t have much of a user interface beyond a few key presses, so Dr. Chris Knox, a postdoctoral scholar at the time, wrote a GTK-based GUI that allowed the user to select an image to explore and then provided zoom and movement buttons that zoomed and panned the image on the HIPerWall. The picture below shows Dr. Chris Knox and Dr. Frank Wessel examining a TileViewer image on HIPerWall. The Macs are visible on the left of the image. The one below that shows Sung-Jin Kim in front of TileViewer on HIPerWall.
TileViewer in use on HIPerWall
Sung-Jin Kim in front of HIPerWall
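The per-node computation described above can be sketched roughly as follows. This is my own illustration of the idea, not actual TileViewer code; the function names, the panel resolution, and the 256-pixel tile size are all assumptions:

```python
# Illustrative sketch: given the wall-wide origin and zoom broadcast by the
# control node, each display node computes its own rectangle in the image's
# pixel space, then the tile indices to load from the image pyramid.
TILE = 256  # assumed tile edge length in pixels

def node_region(col, row, disp_w, disp_h, origin_x, origin_y, zoom):
    """Rectangle (in image pixels) that display (col, row) must render."""
    x0 = origin_x + col * disp_w / zoom
    y0 = origin_y + row * disp_h / zoom
    return (x0, y0, x0 + disp_w / zoom, y0 + disp_h / zoom)

def tiles_for_region(x0, y0, x1, y1):
    """First and last tile indices covering the region at this pyramid level."""
    return (int(x0 // TILE), int(y0 // TILE),
            int(x1 // TILE), int(y1 // TILE))

# Display in column 1, row 0 of a wall of 1600x1200 panels, at 50% zoom:
region = node_region(1, 0, 1600, 1200, 0, 0, 0.5)
print(tiles_for_region(*region))  # → (12, 0, 25, 9)
```

Because each node derives its view independently from the same small piece of shared state, the control node never has to ship pixels, which is what made the interaction so responsive.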
The HIPerWall was built in the newly completed Calit2 building at UCI. We knew HIPerWall was coming, so Professor Falko Kuester, the HIPerWall PI, and I, as Co-PI, worked to get infrastructure in place in the visualization lab. Falko was on the planning committee for the building, so we hoped our needs would be met. The building had good networking in place, though no user-accessible patch panels, but power was “value engineered” out. We quickly determined (blowing a few breakers in the process) that HIPerWall would need a lot more power than was available in the visualization lab at the time. The Calit2/UCI director at the time, Professor Albert Yee, agreed and ordered new power circuits for the lab. Meanwhile, postdocs Kai-Uwe Doerr and Chris Knox were busy assembling the framing and installing monitors into the 11×5 frame designed by Greg Dawe of UCSD. We had a deadline, because the Calit2 Advisory Board was to meet in the new UCI Calit2 building and Director Larry Smarr wanted to show HIPerWall. Sometime around 3:00 PM on the day before the meeting, the electricians finished installing the power behind the wall. At that point, we moved the racks into place, putting 5 PowerMac G5s on each rack, installing Ethernet cables, and plugging the monitors and Macs into power. Once we booted the system, it turned out that TileViewer just worked. We were done making the system work by 6PM, and it was a great surprise for Larry Smarr that HIPerWall was operational for the meeting the next morning.
Larry Smarr at initial HIPerWall demo
Falko Kuester at initial HIPerWall demo
Sung-Jin Kim then turned to distributed visualization of other things, like large datasets and movies, also in a highly interactive manner. The dataset he tackled first was Normalized Difference Vegetation Index data, so the new software was initially named NDVIviewer. This software allowed the import of raw data slabs that could then be color coded and rendered on the HIPerWall. In keeping with the “interactive” theme, each data object could be smoothly moved anywhere on the display wall and zoomed in or out as needed. Once again, the display node software figured out exactly what needed to be rendered where and did so very rapidly. The NDVI data comprised sets of 3D blocks of data that represented vegetation measured over a particular area over time, so each layer was a different timestep. The software allowed the user to navigate forward and backward among these timesteps in order to animate the change in vegetation. The picture below shows NDVIviewer running on HIPerWall showing an NDVI dataset.
NDVI visualization on HIPerWall
NDVIviewer was also able to show an amazing set of functional MRI (fMRI) brain scans. This 800 MB data set held fMRI brain image slices for 5 test subjects who were imaged on 10 different fMRI systems around the country to see whether machines with different calibration or from different manufacturers yield significantly different images (they sure seem to), for a total of 50 sets of brain scans. NDVIviewer allowed each scan to be moved anywhere on the HIPerWall, and the user could step through an individual brain by varying the depth, or through all of them simultaneously. In addition, the Cg shader image processing could be used to filter and highlight the images in real-time. Overall, this was an excellent use of the huge visualization space provided by HIPerWall and never failed to impress visitors.
fMRI dataset visualization on HIPerWall
NDVIviewer could do much more than just show data slices. It showed JPEG images with ease, smoothly sliding them anywhere on the wall. It could also show QuickTime movies, using the built-in QuickTime capability of the display node Macs to render the movies, then showing the right portions of the movies in the right place. While this capability had minimal scientific purpose, it was always impressive to visitors, because a playing movie could be resized and moved anywhere on the HIPerWall. The picture below shows a 720p QuickTime movie playing on HIPerWall.
HD movie playing on HIPerWall
Sung-Jin Kim added yet another powerful feature to NDVIviewer that allowed it to show very high-resolution 3D terrain models based on the SOAR engine. SOAR is extremely well suited for tiled display visualization, because it is a “level-of-detail” engine that renders as much as it can of the viewable area based on some desired level of detail (perhaps dependent on frame rate or user preferences). NDVIviewer’s implementation allowed the user to vary the level of detail in real-time, thus smoothing the terrain or rendering sharper detail. The movie below shows SOAR terrain rendering on HIPerWall.
Because of the power and capabilities of NDVIviewer, I started calling it MediaViewer, a name which stuck with almost everyone. An undergraduate student, Duy-Quoc Lai, doing summer research added streaming video capability to MediaViewer, so we could capture Firewire video from our Panasonic HD camera and stream it live to the HIPerWall. Starting with the addition of streaming video in 2006, we started transitioning the software to use the SPDS_Messaging library that I had developed for parallel and distributed processing research in my Scalable Parallel and Distributed Systems laboratory.
In addition to TileViewer and MediaViewer, several other pieces of software were used to drive the HIPerWall. The SAGE engine from the University of Illinois, Chicago’s Electronic Visualization Lab was the tiled display environment for OptIPuter, so we ran it on HIPerWall occasionally. See the movie below for an example of SAGE on HIPerWall.
Dr. Chris Knox wrote a very ambitious viewer for climate data that could access and parse netCDF data for display on the HIPerWall. This allowed us to explore data sets from the UN Intergovernmental Panel on Climate Change (IPCC) on a massive scale. We could see data from many sites at once or many times at once, or both. This outstanding capability was a fine example of what HIPerWall was intended to do. The picture below shows one version of the IPCC viewer running on HIPerWall.
IPCC climate models explored on HIPerWall
Doctoral student Tung-Ju Hsieh also modified the SOAR engine to run on HIPerWall. His software allowed whole-Earth visualization from high-res terrain data sets, as shown in the movie below. This project was built to explore earthquakes by showing hypocenters in 3D space and in relation to each other. As before, each display node only renders the data needed for its displays and only to the level of detail specified to meet the desired performance.
Doctoral student Zhiyu He modified MediaViewer to display genetic data in addition to brain imagery for a project with UCI Drs. Fallon and Potkin to explore genetic bases for schizophrenia. This research turned out to be very fruitful, as HIPerWall sped up the discovery process for Drs. Fallon and Potkin. The image below shows Dr. Fallon on the left and Dr. Potkin on the right in front of HIPerWall. Photo taken by Paul Kennedy for UCI.
Drs. Fallon and Potkin in front of HIPerWall
Another software project started on HIPerWall is the Cross-Platform Cluster Graphics Library (CGLX). This powerful distributed graphics library makes it possible to port OpenGL applications nearly transparently to tiled displays, thus supporting 3D high-resolution visualization. Professor Falko Kuester and Dr. Kai-Uwe Doerr moved to UCSD at the end of 2006 and continued development of CGLX there. CGLX is now deployed on systems around the world.
In the next article, I will cover new research software from 2007 on when I took over leadership of the project at UCI. This new software forms the basis of the technology licensed to Hiperwall Inc., significantly advanced versions of which are available as part of Samsung UD systems and as products from Hiperwall Inc. In a future post, I will cover the wonderful content we have for HIPerWall (and Hiperwall) and how easy it is to make high-resolution content these days.
While looking at my website logs, I found an interesting service had been crawling my pages. Site Dossier (http://www.sitedossier.com/site/www.stephenjenks.com) reveals a number of interesting things about the site, including the discovery that over 3600 sites share this IP address (and by clicking the IP address, you can see what they are). I had no idea that virtual hosting was so efficient!
Kudos to Skype for adding new settings in their latest iPhone app that should address some of my earlier complaints. It looks like the app can be set to sign out of Skype immediately or after a delay when it is put in the background. This means the power drain should go away.
Of course, I’d like the best of both worlds: have the app stop communicating with the Skype servers after a time, but allow a notification message to wake it back up in time to re-establish the connection and answer an incoming call. I’m sure if such things are possible, they’ll work on it, because I assume the purpose of a Skype app is to help people use Skype as much and as well as possible.
Apple was late to the multitasking party with the iPhone. The reason Steve Jobs kept giving is that multitasking lets apps run in the background and drain the battery, and that nobody had really come up with a good way to fix that. For iOS 4, Apple defined a strict set of criteria for apps to multitask according to a set of profiles (music, VOIP, mapping, etc.). This should have helped, but it doesn’t. Apps are, of course, written by people, and some of those people are not great programmers, are lazy, or just haven’t been taught these things, so apps that use multitasking often cause huge battery drain.
Skype is the worst offender on my iPhone. It is so bad that I am tempted to delete it. There really needs to be an option to disable multitasking for some poorly written apps, like Skype (and AIM isn’t that good either). I have been using Skype more often recently, as the number is on my business cards, so it is nice to have Skype handy on the iPhone. I noticed that the battery tends to drain very quickly on my iPhone 3GS after using Skype and that killing Skype in the multitasking bar (double tap Home button, press and hold Skype until it wiggles, then tap the minus sign to kill it) makes the battery drain stop.
It is so bad that last night (well, 2 in the morning), the iPhone was fully charged with Skype idling in the background, yet it was fully drained and had shut itself off by this morning. Not only will that permanently hurt the battery a bit, it means my phone is now useless until it recharges a bit. VERY ANNOYING!
So what do I think should be done?
1) Have settings either in the OS or in apps that allow us to make them suspend rather than keep running when they are not in the foreground (I know, this is adding user-visible cruft, which Jobs hates, but if app writers can’t get it right, then we users must take things into our own hands)
2) Disable multitasking when the battery level gets below say 30% (perhaps make it user-settable – I’d choose 50% or 60%).
3) Reject apps submitted to the App Store if they use too many cycles or perform too much communication when idle in the background.
So far, multitasking in iOS 4 has not brought me any benefits on the iPhone (though I think it would help on the iPad), so I think there should be an option where we can just turn it off! And Skype should fix their damn app to stop draining the battery, dammit!