ContextCapture for Reality Modelling
This section provides all the step-by-step instructions you need to get from the REVO survey to a reality mesh, which can then be used for further intelligence extraction in ContextCapture Editor.
ContextCapture is a powerful tool for creating reality meshes from point clouds, images and videos. It also ships with a large database of camera specifications and coordinate systems.
Note: Please import the ZEB-CAM .opt file into your ContextCapture camera database (see the ZEB-CAM Optical File section below).
ContextCapture Editor, included with ContextCapture, is a CAD tool that helps you make use of your models. To learn more about ContextCapture itself, visit Bentley Communities and the Reality Modelling Channel on YouTube.
First, to produce a strong reality mesh, it is important to capture the data with colour in mind. If you haven’t done so already, please refer to the survey section – Scanning with color in Mind.
Once the data is properly captured, it must first be processed in HUB and then in ContextCapture.
ContextCapture Editor
Once you have created a reality mesh, you can use it in ContextCapture Editor to create BIM models, sections, volumetrics and much more. This video shows a step-by-step procedure for creating 3D CAD out of a reality mesh.
To create BIM from meshes, first convert the mesh to the 3SM format in ContextCapture.
Then, follow the steps to create lines, cylinders and digitize your model.
Processing the Video in HUB
Once the point cloud and video data are processed, save the results. Along with the defaults, the results folder should contain:
- Pointcloud trajectory – .txt
- Pointcloud – .las
- Video Trajectory – .txt
- Video – /viewerData folder – .mp4
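Before moving on to ContextCapture, it can save time to confirm that the HUB results folder actually contains all of the outputs listed above. The helper below is a hypothetical convenience sketch, not part of HUB or ContextCapture; exact file names vary between projects, so it matches by extension and by a `traj` naming pattern, both of which are assumptions.

```python
from pathlib import Path

def check_hub_results(results_dir):
    """Return a list of the expected HUB outputs that are missing.

    Hypothetical helper: matches by extension/pattern because exact
    file names differ between projects."""
    results = Path(results_dir)
    required = {
        "point cloud (.las)": list(results.glob("*.las")),
        "trajectory files (.txt)": list(results.glob("*traj*.txt")),
        "video (viewerData/*.mp4)": list(results.glob("viewerData/*.mp4")),
    }
    return [name for name, found in required.items() if not found]
```

An empty return value means every expected output was found, so the folder is ready to be imported into ContextCapture.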
Note: In the HUB processing settings you can adjust SLAM parameters such as Convergence Threshold and Bounding Box.
All resources on processing can also be found in the GeoSLAM HUB section.
Cleaning up the Point Cloud Data
A common challenge when surveying is unwanted data captured as people walk through the scan area, so it is good practice to ensure that your point cloud is free of this additional noise.
This can easily be done with a range of third-party tools. In this example, the noise caused by moving people is cleaned in CloudCompare.
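Interactive segmentation in the CloudCompare GUI is the usual way to cut out people, but for bulk stray-point noise CloudCompare also exposes a Statistical Outlier Removal (SOR) filter on its command line. The sketch below builds such a call from Python; the parameter values (6 neighbours, sigma 1.0) are illustrative only, and the script only runs CloudCompare if it is found on the PATH.

```python
import shutil
import subprocess

def build_sor_command(input_las, neighbours=6, sigma=1.0):
    """Build a headless CloudCompare call that applies an SOR filter.

    Parameter values are illustrative; tune them to your data."""
    return [
        "CloudCompare",
        "-SILENT",                            # run without the GUI
        "-O", input_las,                      # open the input cloud
        "-SOR", str(neighbours), str(sigma),  # neighbours used, sigma multiplier
        "-C_EXPORT_FMT", "LAS",               # export the result as LAS
        "-SAVE_CLOUDS",
    ]

cmd = build_sor_command("survey.las")
if shutil.which("CloudCompare"):  # only run if CloudCompare is installed
    subprocess.run(cmd, check=True)
```

SOR discards points whose distance to their neighbours is statistically abnormal, which removes scatter but not dense moving-person artefacts, so a visual check of the cleaned cloud is still recommended.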
Data Processing in ContextCapture
Once you have:
- Point Cloud and its .traj file
- Video and its .traj file
You can start creating your mesh in ContextCapture.
Once installed, the main ContextCapture components are ContextCapture Master, Engine and Acute3D Viewer. Master is the main user interface; Engine performs the processing and must be running whenever data is being processed; and the Viewer lets you view the models.
Watch the video below for a step-by-step demonstration of how to generate these models:
The model production steps
- Data Import: import your point clouds, video and corresponding trajectories into separate blocks.
- Aerotriangulation (AT): run AT on the video data as demonstrated in the video and make sure AT has completed correctly. (Note: the camera calibration file can be found here.)
- Merge the point cloud and AT blocks.
- Start the reconstruction and define its settings.
- Submit the production.
ZEB-CAM Optical File
If you do not have the optical parameters file for ZEB-CAM in your ContextCapture, please download and add this file to your camera database:
<?xml version="1.0" encoding="utf-8"?>
<OpticalProperties version="1.0">
  <Id>0</Id>
  <Name/>
  <Description/>
  <Directory/>
  <ImageDimensions>
    <Width>1920</Width>
    <Height>1440</Height>
  </ImageDimensions>
  <CameraModelType>Fisheye</CameraModelType>
  <CameraModelBand>Visible</CameraModelBand>
  <SensorSize>4.57</SensorSize>
  <FisheyeFocalMatrix>
    <M_00>1304.405768</M_00>
    <M_01>0</M_01>
    <M_10>0</M_10>
    <M_11>1304.405768</M_11>
  </FisheyeFocalMatrix>
  <FocalLength>1.97654999956405</FocalLength>
  <PrincipalPoint>
    <X>965.123</X>
    <Y>724.379</Y>
  </PrincipalPoint>
  <Exif/>
</OpticalProperties>
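After saving the file, a quick sanity check helps catch copy-paste damage before importing it into the camera database. The small reader below is a hypothetical helper (not part of ContextCapture) that parses the .opt XML with Python's standard library and pulls out the values you can verify by eye: image dimensions, camera model and the fisheye focal term.

```python
import xml.etree.ElementTree as ET

def read_optical_properties(path):
    """Parse a ZEB-CAM .opt file and return the fields worth eyeballing.

    Hypothetical validation helper; uses only the Python standard library."""
    root = ET.parse(path).getroot()
    return {
        "width": int(root.findtext("ImageDimensions/Width")),
        "height": int(root.findtext("ImageDimensions/Height")),
        "model": root.findtext("CameraModelType"),
        "fx": float(root.findtext("FisheyeFocalMatrix/M_00")),
    }
```

For the file above this should report a 1920 x 1440 image, a Fisheye camera model and a focal matrix term of 1304.405768; anything else suggests the XML was corrupted when it was copied.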