Despite the challenges of the global pandemic, with the team mostly working from home, travel restrictions, and component shortages delaying installations, the Hiperwall development team had a banner year, producing three important product releases! The team’s ability to remain productive while working remotely is a testament to their creativity and drive, and to the commitment of everyone in the company. Since we make video wall software, we need access to specialized equipment, including well-configured networks and video walls for testing, neither of which tends to be found in most people’s homes. Remote access, cameras, and some trips to the office allowed the team to perform excellent work under austere conditions.
We started the year with our benchmark Hiperwall 7.0 release. This release added fundamental improvements, including the groundbreaking Quantum viewer software to synchronize content playback across many computers driving many LED controllers. Because of this capability, Hiperwall has been deployed to drive enormous LED video walls in control rooms around the world. We also added new source management capabilities and many other features.
Mid-year, we announced a feature release, Hiperwall 7.1, which included new fault-tolerant content (called HiperFailSafe Content) as well as VMS plug-in support, so popular video management systems can interact directly with Hiperwall systems and add their content to the video wall. Many other features and performance improvements were also included.
As the third quarter ended, we released Hiperwall 7.2, which added new audio volume and muting controls. This is much more interesting and capable than it sounds because of the complexity of managing many audio devices and even more audio sources (movies, streams, etc.). We also added significant new capabilities to make our sources even more robust and compatible with industry standards.
As the new year begins, we have new features and releases under development and testing – I can’t wait to show you the great capabilities that are coming. I’m proud of our team and their accomplishments in 2021, and I expect 2022 will be even more amazing!
Hiperwall video walls have supported playing content with audio since the beginning, but we had avoided adding audio controls to the system because of the complexities described in this article. That changed with a product version we released several months ago. This article describes the journey the development team took to make the robust and powerful audio controls in the current product. Customers with maintenance contracts can upgrade to the current version and get these new audio management capabilities for no additional cost.
Why Audio?
Hiperwall makes video wall software, so why is there a need for audio in such a visual medium? Many of our customers use their Hiperwall video wall in a control room environment, where audio might be a distraction. Therefore, whatever audio features we add must allow operators to easily mute the system to avoid disturbing the people monitoring and using the control room video wall. Many other customers, however, use their Hiperwall video wall in some sort of public-facing (or employee-facing) application where audio is commonly used. In the past, we advised such customers to use a multi-input audio mixer to manage their audio with fine-grained control.
Why is Audio Management a Challenge?
The Hiperwall video wall software is a distributed and parallel computing system that uses multiple display computers to draw content on video wall displays, including LCD, projector, and LED walls, which allows it to scale from very small systems to enormous systems with hundreds of displays or thousands of square feet of LED tiles. Each of those display computers can have audio output and perhaps even speakers attached. Beyond that, a Hiperwall video wall can have tens or hundreds of sources and content items, many of which can have audio channels. Therefore, video wall audio control not only has to manage display computer output volume, but also volume levels for perhaps hundreds of content items, all in an easy-to-use and scalable manner.
Project Beginnings
One of our significant partners expressed the need for volume control for the display computers, because some of their customer projects could use those capabilities. Having looked into volume controls in the past, I thought, “I know how to do that. I’ll do it.”
In the early days of Hiperwall software development, I made the Hiperwall Daemon, a small program that runs on each of the display computers to manage them, transfer content and issue commands, and make sure the display software is running. Having the Daemon control the output volume of its display computer is easy, but communicating with it to make it do so was the challenge. I enhanced the state transfer protocol to allow individual volume control for each of the possibly many displays in the system. The protocol needed to be concise yet scalable, so the mechanism I added extended an existing control scheme to carry volume values when needed. I also added a volume control slider and status reporting to the “Walls” tab in the controller software so volume levels can be quickly and easily changed for one display or an entire wall at a time. The volume values also had to be consistent between fault-tolerant controllers, so I made sure the controllers coordinated.
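As a rough illustration of that idea (the names and wire format below are invented for this sketch, not the actual Hiperwall protocol), an existing per-display control record can carry an optional volume value only when one needs to change, which keeps the messages concise:

```python
# Illustrative sketch only: the real Hiperwall state transfer protocol is not shown here.
# The idea is to extend an existing per-display control record with an optional volume
# field, so messages stay compact when volume is unchanged.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisplayControl:
    display_id: str
    # ...existing control fields would live here in the real protocol...
    volume: Optional[int] = None  # 0-100; None means "leave volume unchanged"

def encode_controls(controls: list[DisplayControl]) -> str:
    """Encode controls into a compact line-based form; volume is appended only when set."""
    lines = []
    for control in controls:
        parts = [control.display_id]
        if control.volume is not None:
            parts.append(f"vol={control.volume}")
        lines.append(";".join(parts))
    return "\n".join(lines)

# Example: set one display to 40% and every display on a wall to 75%.
wall = [DisplayControl(f"wall1-disp{i}", volume=75) for i in range(1, 5)]
print(encode_controls([DisplayControl("lobby-disp1", volume=40)] + wall))
```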
Content Audio Control Development
After discussing this new capability with the team, we decided that we couldn’t just stop at device volume control; we needed to address content volume as well. Managing the volume of individual content items is more technically challenging than changing device output volume. Because each display computer can have many content items playing audio simultaneously, each object needs separate audio controls, both on the controller and built into the display software. Managing audio levels for multiple items is a lot like using an audio mixer such as the Windows mixer (right-click the speaker in your system tray and choose “Open Volume Mixer” to see what I mean). Each object can have its own volume setting, but the display computer’s output volume caps how loud it can actually play.
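A simplified way to picture that relationship (a model for illustration, not our actual mixing code) is that each object’s volume is scaled by its display computer’s output volume, which acts as the ceiling:

```python
def effective_gain(object_volume: int, device_volume: int) -> float:
    """Simplified mixer model: an object's 0-100 volume is scaled by the display
    computer's 0-100 output volume, so the device volume caps how loud it can play."""
    return (object_volume / 100.0) * (device_volume / 100.0)

# An object set to 80% on a display computer set to 50% plays at 40% of full scale.
print(effective_gain(80, 50))  # 0.4
```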
The development team members who make the display software developed mechanisms to adjust the volume level for each object open on their display(s), and that worked very well. We added object volume to the extensive list of properties we manage for each object (like size, position, rotation, transparency, etc.). We added a volume slider to the controller so the operator could change the volume of a selected content object. We then built mechanisms to propagate the volume properties throughout the system and between fault-tolerant controllers. We also extended the existing Environment mechanism so volume levels are saved with each Environment and restored when the Environment is loaded. We even added volume control to our XML-based web services style interface so third-party programs can control content volume. But we weren’t done yet…
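To make the Environment idea concrete (the object names and property layout below are invented for this sketch), saving an Environment snapshots each object’s properties, volume included, so loading it later restores the audio levels along with size and position:

```python
# Illustrative sketch: an Environment shown as a plain dictionary of per-object properties.
objects = {
    "news-stream": {"x": 0, "y": 0, "width": 1920, "height": 1080, "volume": 60},
    "promo-movie": {"x": 1920, "y": 0, "width": 1920, "height": 1080, "volume": 25},
}

def save_environment(objs: dict) -> dict:
    """Snapshot every object's properties, including its volume level."""
    return {name: dict(props) for name, props in objs.items()}

def load_environment(objs: dict, snapshot: dict) -> None:
    """Restore every object's saved properties, volume included."""
    for name, props in snapshot.items():
        objs[name] = dict(props)

snapshot = save_environment(objects)
objects["news-stream"]["volume"] = 0      # the operator changes things later...
load_environment(objects, snapshot)       # ...loading the Environment restores the level
print(objects["news-stream"]["volume"])   # 60
```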
Audio Muting Policies
Being able to quickly and easily mute audio content is critical – we do it all the time with our TVs, radios, and phones, but when you have many outputs and potentially hundreds of audio sources, it becomes a lot more complicated. We first had to decide which operations made sense. Of course, we needed the ability to mute all content, but we had to decide what it meant to mute audio. Was it just setting the volume of all the content items to 0? That’s easy, but it doesn’t remember the old value in case we want to unmute some or all of the content. Your TV remembers the volume it was at before it was muted, so we decided customers would expect that kind of behavior, even though a TV is only playing one thing at a time, while we can show many items at once. Therefore, we had to make muting reversible rather than just setting an object’s volume to 0. This meant maintaining a bit more state and a lot more logic to mute and unmute properly. Of course, muted objects can be unmuted as a group or individually.
Beyond muting everything, we also devised a “mute all but this” mechanism, which allows the operator to designate an object that should be the focus of attention by eliminating any potentially distracting audio from other sources. While this is not something TVs would have, it is very beneficial in a video wall environment with many sources that include audio.
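In simplified form (an illustrative sketch, not our actual implementation), the two policies look something like this: muting remembers each object’s previous volume so it can be restored, and “mute all but this” silences every audio source except the chosen one:

```python
# Simplified sketch of the two muting policies described above.
class AudioObject:
    def __init__(self, name: str, volume: int):
        self.name = name
        self.volume = volume           # current volume, 0-100
        self._premute_volume = None    # remembered volume while muted

    @property
    def muted(self) -> bool:
        return self._premute_volume is not None

    def mute(self) -> None:
        if not self.muted:
            self._premute_volume = self.volume
            self.volume = 0

    def unmute(self) -> None:
        if self.muted:
            self.volume = self._premute_volume
            self._premute_volume = None

def mute_all(objects) -> None:
    for obj in objects:
        obj.mute()

def unmute_all(objects) -> None:
    for obj in objects:
        obj.unmute()

def mute_all_but(focus, objects) -> None:
    """Keep one object audible and silence every other audio source."""
    for obj in objects:
        if obj is focus:
            obj.unmute()
        else:
            obj.mute()

# Example: three audio-bearing objects; focus attention on the briefing video.
objs = [AudioObject("briefing", 70), AudioObject("news", 50), AudioObject("radio", 30)]
mute_all_but(objs[0], objs)
print([(o.name, o.volume) for o in objs])   # briefing stays at 70, others drop to 0
unmute_all(objs)                            # news and radio return to 50 and 30
```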
Convenient and Simple Controls
Because audio has the potential to be disruptive and distracting, we had to make sure the controls to mute everything were easy to find and always available. In addition to the “Mute All” capability in the audio controls, we added a very obvious system Mute button to the controller so it can be used quickly and easily. Like its “Mute All” counterpart, this Mute button can mute or unmute all the content at once. It uses intuitive icons to show its state, rather than the text labels used by the button in the audio controls.
In addition to the fault-tolerant controller software, we have another way of controlling the video walls called HiperOperator. This is a very easy-to-use graphical application that allows several, possibly remote, operators to manipulate content on the video wall simultaneously. In keeping with the simplicity of HiperOperator, we had to add very simple audio controls to it. HiperOperator makes it very easy to manipulate the properties of individual objects, including applying filters, rotating them, and even making them transparent using a simple menu. We added audio volume and muting to that menu, making it extremely convenient to set the audio properties of content items. We also added the object’s volume level to the descriptive label shown with each object in HiperOperator.
As with the controllers, HiperOperator needed a very simple way to mute and unmute all audio in the system, so we added a button/icon in the corner of the display that toggles the muted state when clicked. All of these volume and mute states need to be coordinated across all controllers and HiperOperators, too, so if one user performs a volume control operation, it is reflected everywhere. As anyone who builds distributed systems knows, such coordination is easier said than done, but because we were building on already robust protocols and communications links, it turned out very nicely.
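Conceptually (and greatly simplified compared to the real protocols), the coordination behaves like a shared audio state that broadcasts every change to all connected controllers and HiperOperators:

```python
# Toy sketch of the coordination idea, not Hiperwall's protocols: every volume or mute
# change is broadcast to all connected clients so each one updates its local view.
class AudioStateHub:
    def __init__(self):
        self.state = {}        # object name -> {"volume": int, "muted": bool}
        self.listeners = []    # controller / HiperOperator callbacks

    def subscribe(self, callback) -> None:
        self.listeners.append(callback)

    def set_volume(self, name: str, volume: int) -> None:
        entry = self.state.setdefault(name, {"volume": 100, "muted": False})
        entry["volume"] = volume
        self._broadcast(name, entry)

    def set_muted(self, name: str, muted: bool) -> None:
        entry = self.state.setdefault(name, {"volume": 100, "muted": False})
        entry["muted"] = muted
        self._broadcast(name, entry)

    def _broadcast(self, name: str, entry: dict) -> None:
        for listener in self.listeners:
            listener(name, dict(entry))

hub = AudioStateHub()
hub.subscribe(lambda name, entry: print(f"controller A sees {name}: {entry}"))
hub.subscribe(lambda name, entry: print(f"HiperOperator sees {name}: {entry}"))
hub.set_volume("news-stream", 35)   # both clients print the update
```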
Development of the audio control capabilities was a very interesting and challenging activity, but more for policy reasons than technical ones. The underlying technology to change display computer output volume or individual object volume is not particularly difficult. Rather, defining the behaviors and capabilities was by far the more challenging part. Since multi-input, multi-output audio is rare outside of audio mixers, we had to do lots of prototyping and debate about how things should work. We worked with our partners to get feedback on our designs and made some changes based on their suggestions. The new audio features are the result of a great collaborative effort among the dev team and with our technical services group and our partners.
I recently wrote a blog post for the Hiperwall website on the value and quality of video wall controller solutions. I wrote the post because one of our potential customers was comparing Hiperwall to a very cheap bundled “solution” and they were being told the bundled product was comparable and good enough for what they needed. In almost no scenario was that true, so I made a list of questions at the end of the blog post that any video wall customer should ask as they are evaluating solutions.
I wrote a white paper about why resolution matters in control rooms and other environments where information density and content are important. I wrote it in response to the increasing popularity of Direct View LED (dvLED) in digital signage. Because dvLED is bright, beautiful, and almost seamless, there is a temptation to use it for control room video walls rather than the tiled LCD panels normally used. dvLED is quite costly, so some integrators may push it to increase their profit, but for the moment, control room customers will likely be unhappy with the result.
The problem with today’s dvLED tiles is that their pixel pitch is quite coarse. For digital signage and concert venues, this is fine, but it is not ideal if detailed information is to be displayed or the wall will be viewed up close. The pixel pitch (the distance between pixels) of some of the best currently available dvLED systems is more than twice that of ordinary commercial LCD monitors, which works out to more than 4 times fewer pixels for the same display area: roughly every 4 pixels on an LCD display correspond to 1 pixel on the dvLED tile. So essentially, buying a top-quality dvLED system costs a lot more and gives about a quarter of the resolution. (For this example, the assumptions are 1.3mm pitch for the dvLED and 0.6mm pitch for the LCD panel – I know 0.9mm LED is coming, but it’ll be expensive.)
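Working through the numbers with those example pitches:

```python
# Pixel pitch comparison using the example values above (1.3mm dvLED vs. 0.6mm LCD).
dvled_pitch_mm = 1.3
lcd_pitch_mm = 0.6

pitch_ratio = dvled_pitch_mm / lcd_pitch_mm   # ~2.17x coarser pitch in each direction
pixel_count_ratio = pitch_ratio ** 2          # pixel count scales with area
print(round(pitch_ratio, 2), round(pixel_count_ratio, 2))  # 2.17 4.69
```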
If we revisit this in 2 years, costs for dvLED will be down and the pixel pitch will be much closer to the 0.6mm that a common commercial LCD panel can do. Until then, for detailed information display, avoid paying a lot more for much coarser resolution.
I wrote a blog post for the Hiperwall web site about how we can put content anywhere and at pretty much any size. This was prompted by one of our competitors claiming they could do Picture-in-Picture, while we couldn’t. Well, that’s absurd, because we can put anything anywhere, so if you want a video stream in front of another video stream, just move it there. Heck, put 2 or 3 in front – that’s OK. For the blog post, I even made a video of an animated video stream becoming partially transparent as it flies over another video stream, which in turn sits in front of a very high-resolution live data feed of the air traffic map around LAX.
This capability has been part of Hiperwall for years, so we don’t really think about how powerful and different it is until we’re reminded of exactly how limited our competitors are. Picture-in-Picture is an amusing thing for a competitor to think is great, because it has fallen out of favor – the TVs I bought in the first decade of this century all had it as a feature, but modern TVs don’t bother, because it is a hassle and most people don’t use it. Now I’m not saying it isn’t a useful concept for a video wall, but I think flexible object positioning is far more capable and powerful than very limiting Picture-in-Picture features.
I wrote a post for the Hiperwall blog comparing traditional A/V technology to circuit switching in networks, while the newer IT-based visualization approaches are more like packet switching.
At Hiperwall, our tagline is “See the big picture,” and we do that well. In applications from scientific visualization to control rooms and operations centers, being able to see lots of information in great detail allows our users to understand situations more clearly and make important decisions quickly. We’ve shown billion-pixel images on Hiperwall systems, but we hadn’t had anything larger until now.
One of our developers used a capability provided by an NVIDIA library that the developers of The Witcher 3: Wild Hunt included in their video game to take an enormous game capture. He chose a resolution of more than 61,000 pixels by 34,000 pixels, so just over 2 gigapixels altogether. It took an absolute beast of a computer about a minute to render and save the resulting 1 GB file. Once the image was imported into our Hiperwall system, we could see how amazing it looked.
Here we’re showing the image fully zoomed out on our small 24 megapixel Hiperwall, so we can see Geralt on his horse. Click on the photo to see it in more detail.
Geralt in a 2Gpixel image shown on Hiperwall
When we zoom in to see his face, however, we can see the amazing detail in the rendering. We can see details of his witcher eye and the links on his armor. This image shows a zoom level of 1.0, so every pixel on the Hiperwall shows one pixel from the image.
Finally, we animated zooming in on the image, so you can see how smoothly we can manipulate images, even those with 2 billion pixels. (Sorry about the canned music – we had people talking in the room as I shot the video.)