Years in the Making
OpenSpace applies advanced technology
to make construction simpler and more transparent
OpenSpace is built for speed on a foundation of computer vision, AI and data visualization. OpenSpace is the leader in tap-and-go passive video capture and automatic photo mapping. In just 18 months, our customers have captured over half a billion square feet of active construction projects, proving the speed and simplicity of our solution. And with the introduction of OpenSpace Insights, we are pioneering a new class of AI tools that will leverage those images to provide unprecedented insight into project status and progression.
The technology behind OpenSpace—which is similar to the perception and navigation systems in self-driving cars—is the culmination of nearly two decades of combined research and development, which started with the founders’ work at MIT. Our team of MIT, Caltech, Stanford and Berkeley alumni is focused on developing algorithms specifically for the construction industry.
OpenSpace Vision Engine
OpenSpace’s proprietary and patent-pending Vision Engine is the core of the first fully automated reality capture system. OpenSpace lets builders and owners capture 360 video and photos without any manual input, in a fraction of the time required by other tools. And the Vision Engine is always learning—the more you walk, the faster and more accurate it gets.
Computer vision is how computers interpret and understand digital images and videos. It’s commonly used across industries including industrial automation, medical imaging, robotics, self-driving cars and security cameras, to name a few. The Vision Engine relies on computer vision to automatically align images into a single integrated scene, recognize and label key features, and map them to floor plans. These features can then be tracked across images and space to provide a much richer understanding of the captured environment.
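The alignment step described above starts by matching features between images. The sketch below shows the idea with invented toy descriptors and a simple nearest-neighbour rule; a production system would extract real descriptors (SIFT, ORB or learned features) from the images themselves.

```python
# Toy feature matching: pair up descriptors from two frames by
# Euclidean distance. Descriptor values here are invented.

def match_features(desc_a, desc_b, max_dist=0.5):
    """Greedy nearest-neighbour matching between two descriptor sets."""
    matches = []
    for i, da in enumerate(desc_a):
        best_j, best_d = None, max_dist
        for j, db in enumerate(desc_b):
            d = sum((x - y) ** 2 for x, y in zip(da, db)) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
    return matches

frame_a = [(0.1, 0.9), (0.8, 0.2)]    # descriptors from frame A (invented)
frame_b = [(0.82, 0.18), (0.12, 0.88)]  # descriptors from frame B (invented)
print(match_features(frame_a, frame_b))  # [(0, 1), (1, 0)]
```

Once features are paired this way, the matches feed the alignment and mapping stages described above.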
3D reconstruction is the process of recreating 3D objects or spaces from 2D images. 3D reconstruction is used in medicine, surveying, robotics and mining. The Vision Engine uses 3D reconstruction to locate features in space and recreate 3D environments. Features in two images are compared, and then an estimate of camera position is computed that best aligns those features. As this process is repeated thousands of times across a full OpenSpace 360 video capture, a 3D point cloud is created. The point cloud provides a direct way to tie a feature in an image to a 3D location in space.
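To make the last step concrete, here is a minimal triangulation sketch with invented camera positions and ray directions: given two rays that sight the same feature, the feature's 3D location is the midpoint of their closest approach. Real reconstruction must estimate the camera poses first; this shows only the final placement step.

```python
# Triangulate a feature seen from two known camera positions.
# All positions and directions below are invented for illustration.

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach between two 3D rays p + t*d."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b            # zero when the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * x for p, x in zip(p1, d1))
    q2 = tuple(p + s * x for p, x in zip(p2, d2))
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Two cameras one metre apart, both sighting a feature at (0.5, 0, 1):
print(triangulate((0, 0, 0), (0.5, 0, 1), (1, 0, 0), (-0.5, 0, 1)))
# (0.5, 0.0, 1.0)
```

Repeating this for thousands of matched features yields the point cloud described above.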
Machine learning is how computers find patterns and solutions without specific instructions. Machine learning algorithms create a mathematical model based on training data to predict future results without being explicitly programmed to perform the task. The Vision Engine uses each capture and walk track as a training dataset. Every time you walk the site, the Vision Engine learns a bit more about the 3D environment you are in and can align and map the images faster and more accurately.
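The learn-from-examples loop can be sketched with the simplest possible model: an ordinary least-squares fit over training data, then prediction without any task-specific rules. The walk counts and error values below are invented; the Vision Engine's real models and features are far richer, but the pattern is the same.

```python
# Toy machine learning: fit a linear model from training examples.
# The dataset (prior walks vs. alignment error) is invented.

def fit_line(xs, ys):
    """Fit y = slope*x + intercept by least squares."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

walks = [1, 2, 3, 4]          # number of prior site walks (invented)
errors = [8.0, 6.0, 4.0, 2.0]  # alignment error after each (invented)
slope, intercept = fit_line(walks, errors)
print(slope, intercept)  # -2.0 10.0: the model learned error shrinks per walk
```

Each new capture adds rows to the training set, so the fitted model, like the Vision Engine, improves with every walk.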
Simultaneous Localization and Mapping (SLAM)
Simultaneous Localization and Mapping (SLAM) is a technique for constructing a map of an unknown environment while simultaneously moving through it. SLAM is one of the core algorithms used for self-driving car navigation. Images or scans are continuously captured as the sensor moves through the local environment, and algorithms align sequential data to estimate position and path. OpenSpace uses image-based SLAM to estimate the path of the walker on a floor plan. Subsequent captures are then fed into a machine learning system that refines the estimate and increases accuracy with every walk.
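The path-estimation half of SLAM can be illustrated with a toy example: chain noisy relative motion estimates into an absolute path, then apply a crude loop-closure correction when the walker is known to end where they started. All movement values below are invented.

```python
# SLAM-flavoured sketch: dead reckoning plus a simple loop closure.

def integrate_path(start, moves):
    """Chain relative (dx, dy) motion estimates into an absolute path."""
    path = [start]
    x, y = start
    for dx, dy in moves:
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

def close_loop(path):
    """Spread the end-to-start drift evenly along the path."""
    ex = path[-1][0] - path[0][0]
    ey = path[-1][1] - path[0][1]
    n = len(path) - 1
    return [(x - ex * i / n, y - ey * i / n)
            for i, (x, y) in enumerate(path)]

# A square walk whose final leg is under-measured, leaving drift:
raw = integrate_path((0.0, 0.0), [(1, 0), (0, 1), (-1, 0), (0, -0.9)])
fixed = close_loop(raw)
print(raw[-1], fixed[-1])  # drift before correction; back at the start after
```

Real SLAM systems do this jointly over features and poses with probabilistic filters or graph optimization, but the core idea, estimating a path and then correcting accumulated error, is the same.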
Project tracking is core to construction. Every team has a spreadsheet, a whiteboard or a Gantt chart. What’s missing is the ability to get first-hand, image-based data, process it for insight and see it in context, mapped to project plans. Builders are visual people with highly tuned spatial skills. Let’s stop forcing them into cells and boxes.
OpenSpace Insights is a suite of tracking and analysis tools that turns images into insights. Your team can’t be everywhere and see everything. OpenSpace Insights acts as a digital copilot, covering your blind spots and giving you the data needed to make better decisions. Leveraging the Vision Engine, OpenSpace Insights adds the ability to segment, classify and track specific items and systems across time and space.
Semantic segmentation is the process of grouping pixels together into logical chunks. These chunks can be purely feature based—for example, all the pixels of a yellow shirt would form one chunk, while the pixels of a red scarf would form another. Or the chunks can be associated with predefined classes and given a label. OpenSpace is using semantic segmentation to develop a series of construction-specific classes and classifiers to transform raw images into logical segments that can be tracked and counted.
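The pixel-grouping step can be sketched with a connected-components pass over a tiny labeled grid. The "image" and the drywall class below are invented stand-ins for real classifier output, which would assign a class to each pixel first.

```python
# Toy segmentation: group connected same-class pixels into chunks.
# 0 = background, 1 = a hypothetical "drywall" class (invented data).
IMG = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]

def segments(img, cls):
    """Return connected chunks of pixels of one class (4-connectivity)."""
    rows, cols = len(img), len(img[0])
    seen, chunks = set(), []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == cls and (r, c) not in seen:
                stack, chunk = [(r, c)], []
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if img[y][x] != cls:
                        continue
                    seen.add((y, x))
                    chunk.append((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                chunks.append(chunk)
    return chunks

print(len(segments(IMG, 1)))  # 2 -- two separate drywall regions
```

Each chunk is a countable, trackable unit, which is exactly what turns raw pixels into items that progress tracking can follow.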
Image search is the process of isolating and identifying a particular item in an image, and is commonly used to find a specific item (the “target”) within a noisy background (the “detractors”). OpenSpace has adapted image search to enable easy site-wide search. Simply select an object of interest in an image (some DensGlass or a light fixture, for example), and OpenSpace technology finds other similar objects in your project. We then can track how many times this object appears in the images across time and on which floors.
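The find-similar-and-count step can be sketched as a nearest-neighbour search over feature vectors, tallied per floor. The detections and vectors below are invented placeholders for what a real detector would produce for each candidate object.

```python
# Toy site-wide image search: match a target's feature vector against
# detections across the project and count hits per floor (invented data).
DETECTIONS = [
    {"floor": 1, "vec": (0.9, 0.1)},
    {"floor": 2, "vec": (0.88, 0.12)},
    {"floor": 2, "vec": (0.1, 0.9)},   # a dissimilar object
]

def find_similar(target, detections, max_dist=0.2):
    """Count detections within max_dist of the target vector, by floor."""
    hits = {}
    for det in detections:
        d = sum((a - b) ** 2 for a, b in zip(target, det["vec"])) ** 0.5
        if d <= max_dist:
            hits[det["floor"]] = hits.get(det["floor"], 0) + 1
    return hits

print(find_similar((0.9, 0.1), DETECTIONS))  # {1: 1, 2: 1}
```

Swapping the toy vectors for real image embeddings gives the select-an-object, find-it-everywhere workflow described above.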
Once images have been processed, aligned, located and segmented, they can be analyzed to deliver progress tracking. Items located using object detection or classified using semantic segmentation can be located in 3D space using the point cloud and tracked over time. The result is a quantitative map of project activity that can be used to verify work in place, maintain trade coordination and benchmark productivity.
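Once items are located and counted per capture, progress tracking reduces to differencing those counts over time. The weekly fixture counts below are invented; in practice they would come from the detection and point-cloud localization steps above.

```python
# Toy progress tracking: weekly installed-fixture counts per floor
# (invented), differenced to show work put in place each week.
COUNTS = {
    "2019-06-01": {1: 4, 2: 0},
    "2019-06-08": {1: 9, 2: 3},
}

def weekly_progress(counts):
    """For each capture date after the first, count new items per floor."""
    dates = sorted(counts)
    out = {}
    for prev, cur in zip(dates, dates[1:]):
        out[cur] = {floor: counts[cur][floor] - counts[prev].get(floor, 0)
                    for floor in counts[cur]}
    return out

print(weekly_progress(COUNTS))  # {'2019-06-08': {1: 5, 2: 3}}
```

These deltas are the quantitative map of activity: they show where work happened, verify work in place and provide a productivity benchmark per floor per week.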
Big Data Visualization
Big data visualization is the practice of rendering visual representations of large, complex datasets, making them easier to digest and understand. OpenSpace has a long history developing innovative visualizations, starting with this TED Talk where an MIT researcher describes how he recorded 90,000 hours of home video to understand how and when his infant son learned new words. It highlights the work of our founders, who were his students at the time.