Enabling Visualization of Dense Data
Visualization Team in Engineering
For the past couple of years, our Visualization Engineering Team has been building software to provide insights from data collected by Hovermap. The density of that data and its numerous applications have created the need for a point cloud visualization tool that must meet many technical requirements to maximize the user experience for Hovermap customers. This series of articles delves into the areas of consideration and the innovative solutions we’ve developed to build Emesent’s Visualization Software.
The Visualization Engineering Team started building a visualization application using Unreal Engine, a 3D development platform used for games, architecture, and engineering software. Unreal ships with a LiDAR Point Cloud plugin that we could use to display our point clouds. The plugin uses octree spatial partitioning in combination with a point budget to choose the best points to display out of all the points in the clouds. The point budget limits the points displayed to what the renderer, or graphics processing unit (GPU), can sustain.
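To make the budgeted selection concrete, here is a minimal sketch, not the plugin’s actual code, of how a traversal might greedily pick the most visually relevant octree nodes until a point budget is exhausted. The `OctreeNode` class and its `relevance` score are hypothetical stand-ins:

```python
import heapq
from itertools import count

class OctreeNode:
    """A hypothetical octree node: coarse parents hold low-detail points,
    children refine them."""
    def __init__(self, relevance, point_count, children=()):
        self.relevance = relevance      # screen-space importance score
        self.point_count = point_count  # points stored at this node
        self.children = list(children)

def select_nodes(root, point_budget):
    """Greedily pick the most visually relevant nodes until the budget is spent."""
    selected, remaining = [], point_budget
    tie = count()  # tie-breaker so the heap never compares nodes directly
    heap = [(-root.relevance, next(tie), root)]
    while heap:
        _, _, node = heapq.heappop(heap)
        if node.point_count > remaining:
            continue  # doesn't fit the budget; don't refine this branch further
        selected.append(node)
        remaining -= node.point_count
        for child in node.children:
            heapq.heappush(heap, (-child.relevance, next(tie), child))
    return selected
```

In practice a relevance score would typically be derived from each node’s projected screen-space size and distance from the camera, but any scalar priority fits the same pattern.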
One of the benefits of this approach is that even when exploring billions of points, we only render the smaller subset of points currently in the viewport, and even then only the most visually relevant ones. Thus we can maintain a highly interactive frame rate while exploring this data.
The drawback to this technique is that we only explore the data the algorithm decides is relevant or important. The lower the point budget, the more these algorithmic decisions affect the final output. There are ways around this, for example, using a larger point size to blend points into surfaces. However, that is essentially synthetic and not “real” data.
Such blending isn’t necessary for Hovermap data, which produces very dense point clouds; the detail is there without needing to be synthesized. So, since starting the project, we have been extending the plugin to cater for our dense data.
One of our early extensions was to add support for the point attributes that are produced by our SLAM processing software. We included support for displaying intensity, time, ring number, range, and true color attributes.
Another was to raise our single-frame point budget to 100 million points – well beyond what Unreal offered out of the box.
But even this didn’t meet our requirements. Hovermap captures datasets with billions upon billions of points, so we needed a different approach for our final output.
To enable a full-frame render of all points, we developed a system we call Multi-Frame Rendering. With this concept, we have modified the renderer to build up the image over multiple GPU frames. The algorithm begins by traversing the octree looking for the most visually relevant nodes (collections of points); these are tagged and then rendered. It then traverses the octree again, looking for the next most visually relevant nodes that have not already been rendered. This repeats until no points remain.
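As a rough illustration, the pass-splitting logic can be sketched as follows. This uses a flattened list of node dicts as a hypothetical stand-in for the real octree traversal, and assumes the `relevance` and `point_count` fields are precomputed:

```python
def multi_frame_passes(nodes, point_budget):
    """Split nodes into successive render passes, most relevant first.

    `nodes` is a list of dicts with 'relevance' and 'point_count' keys
    (a flattened stand-in for the octree). Each pass gathers the most
    relevant not-yet-rendered nodes that fit the per-frame point budget.
    """
    pending = sorted(nodes, key=lambda n: -n["relevance"])
    passes = []
    while pending:
        frame, used, leftover = [], 0, []
        for node in pending:
            if used + node["point_count"] <= point_budget:
                frame.append(node)
                used += node["point_count"]
            else:
                leftover.append(node)
        if not frame:                      # a node exceeds the whole budget:
            frame = [leftover.pop(0)]      # render it alone to guarantee progress
        passes.append(frame)
        pending = leftover
    return passes
```

Each returned pass corresponds to one GPU frame; once every node has appeared in some pass, the full dataset has been drawn.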
Each frame is composited using depth to build up a final image of all points, be it one billion or ten billion. These images show just how amazing the density of Hovermap data is, and we can now display it all within our Viewer. Render times depend on the point budget and the rendering hardware, but it is all very quick: Multi-Frame Rendering can render a full billion points in a couple of seconds.
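Conceptually, the depth compositing works like a running depth test across passes: at each pixel, the sample closest to the camera wins. A toy sketch, with each frame represented as a purely illustrative pixel-to-sample dict rather than a real GPU buffer:

```python
def composite(frames):
    """Depth-composite a sequence of rendered frames into one image.

    Each frame maps pixel -> (depth, color); the nearest (smallest-depth)
    sample at each pixel wins, just as a GPU depth test keeps the closest
    point across successive passes.
    """
    final = {}
    for frame in frames:
        for pixel, (depth, color) in frame.items():
            if pixel not in final or depth < final[pixel][0]:
                final[pixel] = (depth, color)
    return final
```

On real hardware this accumulation happens in the depth buffer as each pass is drawn, so no extra per-pixel bookkeeping is needed in application code.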
This approach will allow our users to quickly explore Hovermap datasets containing billions of points and render full-quality images that represent the true captured data with no compromise.
In the next article in this series, we’ll look at how we built the user interface, so follow us on LinkedIn to see when it’s available.