Bentley Boosts ContextCapture Capabilities
6 Jul, 2016 By: Cyrena Respini-Irwin
Infrastructure solutions provider believes reality capture technology holds big promise for AEC projects.
Last November, during its annual Year in Infrastructure Conference, Bentley Systems announced the general availability of ContextCapture, its software that creates 3D models from digital photographs. Now, less than a year after that launch, Bentley is updating ContextCapture, and confirming its commitment to integrating reality-capture technology into AEC workflows. “We're very bullish about where the market is going,” said David Huie, product marketing manager at Bentley Systems.
ContextCapture has its roots in technology from Acute3D, a software company Bentley acquired at the beginning of 2015. Several years prior to the acquisition, Acute3D developed Smart3DCapture, a “standalone software solution optimized for large-scale automatic 3D reconstruction from photographs.”
According to Huie, ContextCapture has value for a broad swath of the AEC world, because everyone who needs to reconstruct real-world conditions — from surveying professionals to image acquisition companies to design firms — has something in common. “They all need a baseline of information about the real world in order to progress their work,” he explained.
ContextCapture is a modern entry in a series of technologies used to address that common need. Three decades ago, professionals relied on total stations; then, fifteen years ago, the use of LIDAR and point clouds increased, Huie explained. “Photogrammetric reconstruction is another of these disruptive points,” he continued. “Our belief is that with photogrammetric reconstruction, you make reality capture for everyone much more accessible, cost effective, and practical.”
Another Arrow in the Quiver
Huie stressed that ContextCapture isn’t intended to replace existing reality-capture technologies, but rather to complement them, as each offers its own advantages. Laser scanning captures high-resolution data, but is best reserved for infrequent use on sizable projects because of the financial, time, and training resources it requires.
In contrast, the high speed and low cost of collecting photographs for ContextCapture make it suited to a wider range of project sizes and frequent data collection. For example, users can fly a camera-carrying drone over a construction site as often as desired, creating new ContextCapture models on a weekly or even daily basis.
In addition, the simplicity of the photo collection process means there’s very little barrier to entry, training-wise. “Whether you’re using a drone, an airplane [or] ground-based photography … digital cameras are something almost anyone can use,” said Huie. Michael Barkasi, a senior application engineer with Bentley, agreed: “It’s not quite like taking pictures of your family on Christmas morning, but it’s still just taking pictures.” This reduces the personnel bottlenecks that can arise when only a few people in a company have the training needed to operate data collection equipment.
Scale is another factor in determining the most appropriate reality-capture method for any particular job. Huie noted that ContextCapture is capable of capturing reality at any scale, thanks to a tiling mechanism that breaks massive projects — such as city models — into smaller chunks and allows for parallelizing, splitting the task across multiple CPUs and GPUs. (All of this sharing of the load happens locally; ContextCapture does not offer a cloud processing option.) “[Each computer] can look at the job queue, grab a tile, and build it … this is [a] great asset for people that have a deadline to meet,” said Barkasi.
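ContextCapture's actual tiling engine is proprietary, but the pattern Barkasi describes — split a large area into a grid of tiles, put the tiles in a shared queue, and let each available worker grab and build one — is a standard divide-and-conquer scheme. A minimal sketch, with a hypothetical `build_tile` standing in for the real photogrammetric reconstruction:

```python
from concurrent.futures import ThreadPoolExecutor

def build_tile(tile_id):
    # Hypothetical placeholder: in the real product, each tile holds a
    # subset of the photos and is meshed by the photogrammetry engine.
    return f"mesh for tile {tile_id}"

def reconstruct(area_size, tile_size, workers=4):
    # Split the project area into tiles (rounding up so the remainder
    # still gets its own tile), then let the worker pool drain the queue.
    tile_count = (area_size + tile_size - 1) // tile_size
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(build_tile, range(tile_count)))
```

A thread pool keeps the example self-contained; the parallelism described in the article — multiple machines and GPUs pulling from one job queue — follows the same grab-a-tile, build-it loop.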
ContextCapture creates 3D models in a range of formats that can be used in CAD or geographic information system (GIS) software, published online, or viewed with a free Windows or Mac viewer or free plugin Web viewer.
A reality mesh model created with ContextCapture. To evaluate flooding risk and potential site grading options at this brownfield site in Pennsylvania, Cedarville Engineering Group spent one hour capturing photographs of the site, then created the model by running ContextCapture for one day on a single computer.
Files in the 3MX format can be used natively within Bentley MicroStation. New in this release is support for multi-resolution meshes, which are built so that mesh data can be used at different levels of detail, said Huie. The updated ContextCapture also adds support for Esri's native data type for ArcGIS Pro and Online, enabling users to integrate 3D models of any scale into the ArcGIS environment.
To scale up to the largest projects, Bentley offers the tool in two editions. ContextCapture handles tasks such as modeling railways or industrial sites documented through drone flights or vehicle-mounted cameras; ContextCapture Center is tailored to city modeling and other projects that require a very large amount of imagery and parallel processing. But the software can scale down, as well; Barkasi shared the example of an aircraft turboprop that was modeled using 150 photos, yielding a model with 0.5-cm final resolution. “The scalability is above what our competitors can do,” said Barkasi.
In the new version of ContextCapture, the amount of imagery the software can accommodate has been extended from 30 to 100 gigapixels. (ContextCapture Center, in contrast, has no cap on the imagery amount.) Huie explained that customers have become familiar with the product since its introduction, and would like to use more data now that they’re diving into it more deeply.
Although the ContextCapture technology is “extremely accessible,” said Huie, adhering to best practices for taking photographs is essential for good results. “That is the one critical skill set that needs to be developed,” he noted.
Any digital camera can be used to collect photos for ContextCapture, including the handheld variety. Even still frames extracted from video can be used, although those are of lower quality than photographs, said Barkasi.
The software requires a minimum of three images, captured in sequence, with about 70% overlap from one to the next, Barkasi explained. Ideally, users should never change the angle more than 15 degrees at a time, and they should move from one distance to another sequentially, instead of jumping abruptly from one focal length to another. “With poorly taken images, or those that don’t have enough overlap, you can’t expect them to reconstruct well,” he noted.
Moving objects such as pedestrians or cars are removed automatically, but a vehicle or person that lingers long enough to appear in the same position in multiple images may be included in the final model.
Users have the option of tying imagery to control points on the ground to make it spatially accurate, enabling precise measurement of coordinates, distances, areas, and volumes. The need for ground control varies depending on the particular use and project accuracy requirements, said Barkasi.