I made a short post on LinkedIn about Content Integrity and HiperZones.
Hiperwall video wall software has always been great at accepting and displaying many sources of several types, but sources and their uses have changed over the years as what they were showing evolved. This post contains my experiences with the changing landscape of sources used in Hiperwall video wall systems around the world, particularly for control rooms and similar environments, rather than signage applications.

Early days

Our first source type was the Sender, followed closely by the Streamer. These standalone apps send data to the video wall computers in very different ways, making their uses unique.

Sender

The Sender software (now called HiperSource Sender) captures the screen, or part of the screen, of the computer it runs on and sends that to the Hiperwall system. Sender can run without even being installed on the computer (for environments that restrict software installation). The encoding of the Sender's video stream is completely CPU-based, so it runs on nearly any PC, Mac, or Linux box, and the performance scales with the CPU and network speed. Sender can send its data directly to the video wall if it is on the LAN, or it can use the HiperController as an intermediary via an encrypted channel, which allows Sender sources to come from anywhere on the Internet. We extended this capability with a product now called HiperCast (formerly Share) that can deliver multiple Sender sources to multiple Hiperwall systems around the world. Our clients use Sender for monitoring dashboards, social media feeds, desktop sharing, and more. Sender provides multiple captures on a single machine, runs in VMs, and supports KVM control of the source machine. The flexibility and utility of Sender mean customers want to use it in many situations where a high frame rate is not required. Sender also scales and resizes handily, so if a user changes the resolution or orientation of the screen of a Sender PC, it can adjust automatically.
While Sender is our oldest source type, it has undergone many performance and capability improvements over the years and remains the simplest and perhaps most flexible of our sources. Sender was built at a time when most of our customers were running custom applications to monitor and control their systems, so it was perfect for delivering those output screens to the video wall. Since the nature of such applications has changed to be more web-based, other solutions, especially HiperSource Browser, discussed later, grew in popularity.

Streamer

The HiperSource Streamer software is designed to deliver high-frame-rate, high-quality video streams to the Hiperwall video wall. It uses hardware-accelerated video capture, compression, and encoding to send a bandwidth-efficient video stream. Streamer uses our patented approach to synchronize playback on different display computers, keeping frames in sync across display boundaries for a seamless experience even if the video spans multiple display tiles. Streamer provides display capture, like Sender, but also supports capture cards, so it can stream video content captured from HDMI, SDI, or analog video sources to the wall. For display capture, Streamer also supports KVM control of the Streamer PC, which means content and applications on that PC can be interacted with while being shown on the video wall. Because Streamer requires hardware support to encode the stream, it must run on a moderately powerful PC on the Hiperwall LAN. Since Streamer is great for high-frame-rate video streams, our customers use it to play videos, show presentations that have artistic transitions, share VMS consoles or other applications where frame rate matters, and show TV stations to monitor news and weather. Streamer has evolved over the years to support more hardware types and to manage capture cards in a flexible manner. Streamer continues as the go-to source type for video-style streaming.
Streamer+

With version 8 of the Hiperwall video wall software, we added a complete re-imagining of the Streamer idea in a product called Streamer+. This new product was built from the ground up for performance, allowing more streams, more desktop captures, more capture card inputs, and higher resolution and frame rate than the original Streamer on supported hardware. The new Streamer+ has been the go-to choice for new installations, and even for upgrade customers, since it was released.

Streaming Evolution

While Sender and Streamer are well suited to sending desktops and captured streams to the Hiperwall, our customers also wanted to send networked video streams from IP cameras, video encoder boxes, and VMS gateways to their Hiperwall video walls. Since we already had a powerful synchronized streaming protocol from the Streamer, we adapted it to make HiperSource IP Streams. The IP Streams source software can ingest many types of network streams from ONVIF cameras, RTSP encoders, RTP and HTTP sources, and more. It then wraps the streams with our patented approach to synchronize playback across the multiple displays of the Hiperwall video wall and delivers the streams to the display computers. Because the IP Streams software is so efficient, a single moderate PC can ingest and process around 75 streams for simultaneous delivery and playback on a Hiperwall system. With the addition of HiperSource IP Streams, and to provide more flexibility to our customers, we combined all the source license types into a single HiperSource license type. Customers can therefore easily switch sources as their needs change, with no further license updates needed. This simple source interchangeability has worked very well for our customers, who may not know exactly what they want to use when they are defining their system. Now they can pick, choose, and change as needed. IP Streams has become a significantly popular source type for many of our clients' applications.
Some need to display streams from IP cameras, possibly via a VMS gateway, such as those provided by Milestone and Genetec. Some customers in secure or otherwise restricted environments use video encoder boxes to take the HDMI output of a computer and convert it to an RTSP stream that the IP Streams software delivers to the Hiperwall. Thus, their secure, mission-critical computers never have to be on the same network as the Hiperwall system, and the only interface to the Hiperwall network is a video cable. IP Streams also delivers content from TV decoder boxes, now that MPEG-2 streams are supported.

Large scale revolution

While Sender and Streamer have been able to capture web pages and send them to the video wall from the start, we wanted to take web content to the next level, so we developed HiperSource Browser. It is a real web browser, based on the Chromium engine, so it supports current web technologies, but it scales in amazing ways. Like most web browsers, it supports tabs with different content (web pages, PDFs, etc.), but each tab is actively rendered and sent to the Hiperwall video wall simultaneously. This means one PC can deliver several web page sources at once while the user does something else entirely. HiperSource Browser also scales in size, allowing a single web page capture to be huge. A control room customer with an enormous video wall uses HiperSource Browser to send several dashboards with tens of millions of pixels each to their giant Hiperwall. That isn't a typo – tens of millions of pixels' worth of custom dashboard data each! Other customers use Browser for normal-size web content. Because the world has migrated from custom applications to web-based applications and dashboards, HiperSource Browser has an incredibly bright future as more and more customers switch over to modern infrastructure. Its flexibility to send multiple content items of differing, possibly enormous, size keeps it in strong demand.
And because it works in virtual machine environments and doesn't interfere with other uses of the PC, it is very friendly to the staff and operators that use Hiperwall video walls. Browser is built on the Sender protocol, so it can send across the Internet via an encrypted channel, and HiperCast can deliver Browser streams to multiple Hiperwall systems at once.

The future

The current broad range of sources described here supports our customers' current use cases, but technology is always changing. Performance and feature improvements are obvious next steps, but we are always examining new use cases and customer needs. New streaming protocols are becoming popular for both audio and video, so we are watching market acceptance of those. Integration with collaboration or other products could also be in the cards as the world recovers from the disruptions of the last few years. We have great sources to get your content to your Hiperwall video wall system, but we're far from done.

I made this comment on LinkedIn about how awesome HiperZones are, so I thought I'd share it here too. HiperZones is a new capability added to Hiperwall video wall software Version 8.0.
With the recent release of Hiperwall video wall software version 8.0, "content integrity" is foremost on our minds, so I wrote a blog post explaining it and the features we offer to help achieve it. The official Hiperwall version of this post is here.

Despite the challenges of the global pandemic, with the team mostly working from home, travel restrictions, and component shortages delaying installations, the Hiperwall development team had a banner year, producing three important product releases! Our team's ability to remain productive while working remotely is a testament to the team's creativity and drive and the commitment of everyone in the company. Since we make video wall software, we need access to specialized equipment, including well-configured networks and video walls for testing, neither of which tends to be found in most people's homes. Remote access, cameras, and some trips to the office allowed the team to perform excellent work under austere conditions.

We started the year with our benchmark Hiperwall 7.0 release. This release added fundamental improvements, including the groundbreaking Quantum viewer software, which synchronizes content playback across many computers driving many LED controllers. Because of this capability, Hiperwall has been deployed to drive enormous LED video walls in control rooms around the world. We also added new capabilities for source management and many other features and additions. Mid-year, we announced a feature release, Hiperwall 7.1, which included new fault-tolerance content (called HiperFailSafe Content), in addition to VMS plug-in support, so popular video management systems can directly interact with Hiperwall systems and add their content to the video wall. Many other features and performance improvements were also included. As the third quarter ended, we released Hiperwall 7.2, which added new audio volume and muting controls.
This is much more interesting and capable than it sounds because of the complexity of managing many audio devices and even more audio sources (movies, streams, etc.). We also added significant new capabilities to make our sources even more robust and compatible with industry standards. As the new year begins, we have new features and releases under development and testing – I can't wait to show you the great capabilities that are coming. I'm proud of our team and their accomplishments in 2021, and I expect 2022 will be even more amazing!

Hiperwall video walls have supported playing content with audio since the beginning, but we had avoided adding audio controls to the system because of the complexities described in this article. That changed with a product version we released several months ago. This article describes the journey the development team took to build the robust and powerful audio controls in the current product. Customers with maintenance contracts can upgrade to the current version and get these new audio management capabilities at no additional cost.

Why Audio?

Hiperwall makes video wall software, so why is there a need for audio in such a visual medium? Many of our customers use their Hiperwall video wall in a control room environment, where audio might be a distraction. Therefore, whatever audio features we add must allow operators to easily mute the system to avoid disturbing the people monitoring and using the control room video wall. Many other customers, however, use their Hiperwall video wall in some sort of public (or employee) facing application where audio is commonly used. In the past, we advised such customers to use a multi-input audio mixer to manage their audio with fine-grained control.
Why is Audio Management a Challenge?

The Hiperwall video wall software is a distributed, parallel computing system that uses multiple display computers to draw content on video wall displays, including LCD, projector, and LED walls, which allows it to scale from very small systems to enormous systems with hundreds of displays or thousands of square feet of LED tiles. Each of those display computers can have audio output and perhaps even speakers attached. Beyond that, a Hiperwall video wall can have tens or hundreds of sources and content items, many of which can have audio channels. Therefore, video wall audio control not only has to manage display computer output volume, but also volume levels for perhaps hundreds of content items, all in an easy-to-use and scalable manner.

Project Beginnings

One of our significant partners expressed the need for volume control for the display computers, because some of their customer projects could use those capabilities. Having looked into volume controls in the past, I thought, "I know how to do that. I'll do it." In the early days of Hiperwall software development, I made the Hiperwall Daemon, a small program that runs on each of the display computers to manage them, transfer content, issue commands, and make sure the display software is running. Having the Daemon control the output volume of its display computer is easy, but communicating with it to make it do so was the challenge. I enhanced the state transfer protocol to allow individual volume control for each of the possibly many displays in the system. The protocol needed to be concise yet scalable, so the mechanism I added extended an existing control scheme to add volume values when needed. I also added a volume control slider and status reporting to the "Walls" tab in the controller software so volume levels can be quickly and easily changed for one display or an entire wall at a time.
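The "concise yet scalable" idea above can be illustrated with a tiny sketch: a per-display control message that only carries a volume field when a change is actually requested. The field names and JSON encoding here are my assumptions for illustration, not the actual Hiperwall state transfer protocol:

```python
import json
from dataclasses import dataclass
from typing import Optional

@dataclass
class DisplayCommand:
    """Hypothetical per-display control message: the volume field is
    optional, so the encoded message stays small when no volume
    change is requested."""
    display_id: str
    volume: Optional[int] = None  # 0-100; None means "unchanged"

    def to_wire(self) -> str:
        msg = {"id": self.display_id}
        if self.volume is not None:
            msg["vol"] = self.volume  # only present when needed
        return json.dumps(msg)

# Change one display's volume, or address an entire wall at once.
single = DisplayCommand("display-3", volume=40).to_wire()
wall = [DisplayCommand(f"display-{i}", volume=25).to_wire() for i in range(4)]
```

Keeping optional fields off the wire is one simple way a protocol can scale to hundreds of displays without bloating every message.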
The volume values also had to be consistent between fault-tolerant controllers, so I made sure the controllers coordinated.

Content Audio Control Development

After discussing this new capability with the team, we decided that we couldn't just stop at device volume control, but needed to address content volume as well. Managing the volume of individual content items is more technically challenging than just changing device output volume. Because each display computer can have many content items playing audio simultaneously, each object needs separate audio controls, both on the controller and built into the display software. Managing audio levels for multiple items is a lot like using an audio mixer such as the Windows mixer (right-click on the speaker in your system tray and choose "Open Volume Mixer" to see what I mean). Each object can have a volume setting, but the display computer volume limits that volume to its maximum. The development team members who make the display software developed mechanisms to adjust the volume level for each object open on their display(s), and that worked very well. We added object volume to the extensive list of properties we manage for each object (like size, position, rotation, transparency, etc.). We added a volume slider to the controller so the operator could change the volume of a selected content object. We then made mechanisms to propagate the volume properties throughout the system and between fault-tolerant controllers. We also extended the existing Environment mechanism so volume levels are saved with each Environment and restored when the Environment is loaded. We even added volume control to our XML-based web services style interface so third-party programs can control content volume.
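The mixer analogy can be expressed compactly: the display computer's output volume acts as a ceiling on each object's own setting. A tiny sketch (representing volumes as 0.0-1.0 floats is my assumption; this illustrates the policy, not the product's implementation):

```python
def effective_volume(object_volume: float, display_volume: float) -> float:
    """An object's audible level is its own setting, capped by the
    output volume of the display computer it plays on."""
    return min(object_volume, display_volume)

# Two content items on a display whose output volume is set to 0.6:
levels = [effective_volume(v, 0.6) for v in (0.9, 0.3)]
# The louder item is capped at 0.6; the quieter one plays at 0.3.
```

This is the same relationship the Windows Volume Mixer has between per-application sliders and the master output level.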
But we weren't done yet…

Audio Muting Policies

Being able to quickly and easily mute audio content is critical – we do it all the time with our TVs, radios, and phones, but when you have many outputs and potentially hundreds of audio sources, it becomes a lot more complicated. We first had to decide which operations made sense. Of course, we needed the ability to mute all content, but we had to decide what it meant to mute audio. Was it just setting the volume of all the content items to 0? That's easy, but it doesn't remember the old value in case we want to unmute some or all of the content. Your TV remembers the volume it was at before it was muted, so we decided customers would expect that kind of behavior, even though a TV is only playing one thing at a time, while we can show many items at once. Therefore, we had to make muting reversible rather than just setting an object's volume to 0. This meant maintaining a bit more state and a lot more logic to mute and unmute properly. Of course, muted objects can be unmuted as a group or individually. Beyond muting everything, we also devised a "mute all but this" mechanism, which allows the operator to designate an object that should be the focus of attention by eliminating any potentially distracting audio from other sources. While this is not something TVs would have, it is very beneficial in a video wall environment with many sources that include audio.

Convenient and Simple Controls

Because audio has the potential to be disruptive and distracting, we had to make sure the controls to mute everything were easy to find and always available. In addition to the "Mute All" capability that is part of the audio controls, we added a very obvious system Mute button to the controller so it can be used quickly and easily. Like its "Mute All" counterpart, this Mute button can mute or unmute all the content at once. It has intuitive icons to show its state rather than words like the button in the audio controls.
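The reversible-mute policy described above can be sketched as a small state machine that remembers each object's pre-mute volume. This is a simplified illustration with hypothetical names, not the shipping code:

```python
class AudioMuteState:
    """Reversible muting: remember the volume each object had when it
    was muted so unmuting restores it, like a TV's mute button."""

    def __init__(self):
        self.volumes = {}  # object id -> current volume (0-100)
        self.premute = {}  # object id -> volume saved at mute time

    def set_volume(self, obj, vol):
        self.volumes[obj] = vol

    def mute(self, obj):
        # Don't overwrite the saved value if the object is already muted.
        if obj not in self.premute:
            self.premute[obj] = self.volumes.get(obj, 0)
        self.volumes[obj] = 0

    def unmute(self, obj):
        if obj in self.premute:
            self.volumes[obj] = self.premute.pop(obj)

    def mute_all(self):
        for obj in list(self.volumes):
            self.mute(obj)

    def mute_all_but(self, focus):
        """'Mute all but this': silence every object except the focus."""
        for obj in list(self.volumes):
            if obj != focus:
                self.mute(obj)
```

Because the saved values stick around, muted objects can be unmuted individually or as a group simply by restoring them.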
In addition to the fault-tolerant controller software, we have another way of controlling the video walls called HiperOperator. This is a very easy-to-use graphical application that allows several, possibly remote, operators to manipulate content on the video wall simultaneously. In keeping with the simplicity of HiperOperator, we had to add very simple audio controls to it. HiperOperator makes it very easy to manipulate the properties of individual objects, including applying filters, rotating them, and even making them transparent using a simple menu. We added audio volume and muting to the menu, making it extremely convenient to set the audio properties of content items. We also added the object's volume level to the descriptive label shown with each object in HiperOperator. As with the controllers, HiperOperator needed a very simple way to mute and unmute all audio in the system, so we added a button/icon in the corner of the display that toggles the muted state when clicked. All of these volume and mute states need to be coordinated across all controllers and HiperOperators, too, so if one user performs a volume control operation, it is reflected everywhere. As anyone who builds distributed systems knows, such coordination is easier said than done, but because we were building on already robust protocols and communications links, it turned out very nicely.

Development of the audio control capabilities was a very interesting and challenging activity, but more for policy reasons than technical ones. The underlying technology to change display computer output volume or individual object volume is not particularly difficult. Rather, defining behaviors and capabilities was by far the more challenging part. Since multi-input, multi-output audio is rare outside of audio mixers, we had to do lots of prototyping and debating about how things should work. We worked with our partners to get feedback on our designs and made some changes based on their suggestions.
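Stripped to its essence, the coordination idea is that every controller and HiperOperator applies the same state updates, so all UIs agree. A toy sketch of that fan-out pattern (hypothetical names; this glosses over the fault tolerance and real networking that make it hard in practice):

```python
class ControlSurface:
    """A controller or operator UI holding a local copy of the shared audio state."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def apply(self, key, value):
        self.state[key] = value  # update this surface's local view

class Coordinator:
    """Fans each volume/mute change out to every connected surface."""
    def __init__(self, surfaces):
        self.surfaces = surfaces

    def update(self, key, value):
        for surface in self.surfaces:
            surface.apply(key, value)

controller = ControlSurface("controller-1")
operator = ControlSurface("hiperoperator-1")
coord = Coordinator([controller, operator])
coord.update(("volume", "news"), 55)  # one user's change...
# ...is reflected on every surface.
```

In a real distributed system the hard part is doing this reliably over a network with failures and concurrent updates, which is why building on already robust protocols mattered.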
The new audio features are the result of a great collaborative effort among the dev team and with our technical services group and our partners.

After not upgrading my mobile phone for a couple of years, I splurged and got the new iPhone 12 Pro Max (because I wanted the new camera features). The phone is enormous, as you would expect, but you may not believe exactly how enormous it is. If anyone needs to land an aircraft, this thing is about the size of an aircraft carrier deck. Maybe I'm exaggerating a little, but it is big, though my hands are big too, so it feels natural and is a beautiful phone. By the way, the camera does seem to be spectacular, and the low-light mode with the "normal" camera blew me away! But that's not why I'm writing this.

We've come to rely on our phones for so many things that upgrading to a new phone is more complicated than it was in the past. Previously, when switching to a new iPhone, I would restore the backup of the old phone and most things would work right away. A couple of apps would detect they were on new hardware and require that I log in again, but otherwise there was no transition other than newer, fancier hardware. These days, however, our phones are not just our lifelines and our entertainment – they identify and authenticate us, and therein lies the problem when upgrading to a new phone. We are all using 2-factor authentication apps (if you're not, do so. Now. I'll wait) that are tied to the hardware identity of our phone. Some of us also use our phones as car keys or house keys, again tied to specific IDs in the phone that don't transfer to a new one automatically. With this new phone, most things transferred perfectly, as expected, so I could easily log into my iCloud stuff or Dropbox or Instagram, either automatically or just by entering my credentials. The problems were with the "authenticator" apps and with my Tesla Model 3. Rightfully so, they didn't transfer over.
The Microsoft Authenticator is pretty excellent in that it has a recovery mode that allows restoring its functionality via information stored in iCloud. Luckily, my old phone was still working, so I could authenticate via the old one to allow the new one to restore the settings. If the old phone were lost or broken, things would have been a lot uglier, requiring the use of backup codes or other methods of proving identity. The Google Authenticator was much worse. It had no recovery mode, so the answer was to just disable the old one in your Google Security settings and enable the new one. Fine for Google, but other services, like HubSpot, also use the Google Authenticator, so for those I had to log in, disable the Google Authenticator, then re-enable it on the new phone. Again, because I had the old phone there, I could log in easily, but if I hadn't had it accessible, things would have been tough.

The process for switching phones for the Tesla should have been easy, but didn't work well. Adding the new phone via Bluetooth was trivial and worked well, as did logging into the Tesla app (using Microsoft Authenticator for 2FA), but adding the phone as a key for the car didn't work. I put the keycard on the console and told the app to make the phone a key, but it claimed it couldn't connect to the car. Playing with WiFi and Bluetooth didn't help. In the end, I rebooted the car computer (yes, I know that sounds crazy) and that fixed it.

So now my new phone has replaced the old one in all capacities, and I'm happy. But this should serve as a warning to us all that switching phones is more challenging than ever, and if we lose or break a phone, the trouble will be huge! Many of the backup 2FA mechanisms send a text message to your phone if the authenticator app doesn't work, which doesn't help if you can't receive the text message. My advice is to get the backup codes for your essential services and securely store them somewhere you can get to if your phone is gone.
Easier said than done…

Last week I bought the Logitech MX Vertical mouse to see if it would be better at preventing repetitive strain injury to my wrist and hand. Somehow, working from home because of the pandemic means fewer breaks and more intense work, so I could feel it in my hand and wrist (particularly my thumb, which has been bothering me). The MX Vertical is a vertical mouse with a "handshake" grip. This means your hold on the mouse is almost like when you're giving a handshake. (Remember handshakes? Those aren't going to be a thing anymore.) Your wrist ends up in a more natural position than the twisted angle needed for a regular mouse. I really wanted to like the MX Vertical because I thought it could help me and because it is crazy expensive, so I wanted to be able to rationalize my decision to buy it. But when it arrived, I hated it. There were stupidly terrible software problems with it that I'll cover later, but more importantly, it didn't feel good. The biggest problem I had with the MX Vertical is due to my big hands. The side of my hand rests on the desk when I grip the mouse in its handshake position. This makes movement terrible. My normal Logitech MX Master mouse is large enough that my hand stays clear of the desk, so movement is very smooth. Not so with the MX Vertical. I think if it had an attachable extension to support the hand, it could be improved. So I gave up on the MX Vertical. I tried to convince my wife to try it, but after my experience with it, she had little interest. But then I had the thought that a wrist rest might keep my wrist high enough that my hand wasn't dragging on the desk, so I ordered a foam wrist rest (yes, I'm buying accessories for my accessories). The wrist rest helped, so I used the MX Vertical all day yesterday and found it to be pretty good. I quickly got used to the vertical hand position, and I think it, plus the wrist rest, will help with my hand and wrist issues.
So does that mean I'm happy with it and it is the perfect mouse? No. Not at all. It is a fairly light mouse, particularly for its size, so clicking a button requires that I hold the opposite side with my thumb to keep the mouse from moving off where I wanted to click. I never had to do that with a normal mouse. The button positions are OK, but not great, though again my big hands have an influence on that. The biggest ergonomic mistake is the position of the scroll wheel. It is between the buttons, pretty much where it would be on a normal mouse, but that places it fairly far behind my fingertips, so using the mouse wheel requires significant finger movement and isn't as natural as on a normal mouse. The MX Vertical has lots of fancy features, but that means it requires software to use them. The problem is that because I had an older Logitech mouse, I already had the Logitech Options software loaded. Well, the older software completely fails when this mouse is plugged in, and it even disables the scroll wheel. Of course, installing the new version of the software over the old one means things are still broken (I installed on 2 different PCs, so it wasn't a fluke). The only way I could get it to work was to uninstall the Logitech software, uninstall the device, and then install the latest Logitech Options software; after that, it worked OK. Because clicking requires holding the mouse firmly, I wouldn't think the MX Vertical would be good for gaming, but it seems OK for normal usage. For the moment, I will use it on my main work PC. With a wrist rest, it seems comfortable, so if you feel like your old mouse is causing trouble, the MX Vertical could be a good option.

Yesterday, NASA and JPL released a magnificent panoramic image of Mars taken by the Curiosity rover. This 1.8-billion-pixel image is made up of over 1000 images stitched together. More info can be found here.
I imported the image into the Hiperwall system in our Customer Experience Center, because I love enormous images. It took a couple of minutes to import and store such an enormous file, but once imported, the Hiperwall software allowed me to move and zoom the image in real time. To take a video of it, I used Hiperwall's animation feature to start with the fully zoomed-out image (so we can see all of it), then had it slowly zoom in to an area with lots of detail until one pixel in the image was one pixel on the screen. It then held that position for a bit, then zoomed back out and repeated. All of this took just a few seconds to set up, and then I shot the video on my iPhone. The video was taken at 4K/60 FPS, but I'm not sure YouTube will offer it at that quality. This video shows our unprecedented ability to handle enormous imagery, but it also shows how easy it is to animate content on a Hiperwall. While Hiperwall is commonly known for Command and Control video walls, many of our customers use Hiperwall systems for corporate communications, live presentations, and collaboration, often in addition to their control room Hiperwalls!

If you're planning to order a Tesla: each Tesla owner (or orderer, in my case) is given a referral link that others can use to get some goodies. At the moment, when someone uses a referral link, both parties get 1000 Supercharger miles, so win-win! My link is below in case anyone would like to use it when you order a Tesla: