Camera positioning is the next evolution of GPS.
Camera-based techniques provide centimeter-level, six-degree-of-freedom (6-DoF) positioning: from a single image, position and orientation can each be calculated along three axes. And unlike GPS, they work indoors and out. A multitude of current and emerging industries fundamentally require this capability:
• Augmented Reality
• Autonomous Robotics
• Indoor Navigation
• Emergency Response
• Real World Search
• Checkout-Free Shopping
• Connected Home
• Smart Cities
• Autonomous Vehicles
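To make the 6-DoF idea above concrete, here is a minimal Python sketch of a pose that pairs a three-axis translation with a three-axis rotation. All names here are hypothetical illustrations, not part of any proposed format:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """A camera pose: position plus orientation, three axes each (illustrative)."""
    x: float; y: float; z: float           # translation, meters
    roll: float; pitch: float; yaw: float  # rotation, radians

    def heading(self):
        """Unit vector the camera faces in the ground plane, derived from yaw."""
        return (math.cos(self.yaw), math.sin(self.yaw))

# A pose 2 m above the map origin, rotated 90 degrees about the vertical axis:
pose = Pose6DoF(0.0, 0.0, 2.0, 0.0, 0.0, math.pi / 2)
```

GPS, by contrast, yields only the translational component, which is why a single calibrated image carries strictly more information for spatial applications.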
The primary constraint of camera positioning is that it requires a map of visually distinct features. Such maps can be built from a variety of input data types — 2D images, RGB-D images, and 3D point clouds — captured by image and depth sensors on mobile devices, IoT devices, robotics, and other platforms. These platforms share the same goal: build visual maps and determine precise positioning from them.
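As a sketch of what one record in such a map of visually distinct features might hold, the snippet below stores a 3D position plus a binary descriptor per feature and matches a query descriptor by Hamming distance, as in ORB-style pipelines. The field layout is a guess for illustration, not the proposed format:

```python
from dataclasses import dataclass

@dataclass
class MapFeature:
    """One visually distinct feature in a map (illustrative layout)."""
    position: tuple   # (x, y, z) in the map frame, meters
    descriptor: int   # binary descriptor packed into an int

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def best_match(query: int, features: list) -> MapFeature:
    """Return the map feature whose descriptor is closest to the query."""
    return min(features, key=lambda f: hamming(query, f.descriptor))

features = [
    MapFeature((1.0, 2.0, 0.5), 0b10110010),
    MapFeature((4.0, 0.0, 1.5), 0b01001101),
]
match = best_match(0b10110011, features)  # one bit away from the first feature
```

Matching 2D image features against 3D map features like this is what turns a stored map into a position estimate.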
Why not share the maps?
GPS was deemed too valuable to society to remain proprietary, and the next evolution should not be any different. Interoperability between camera-equipped platforms improves coverage, update frequency, and development time.
Mapping and positioning should not have to be the core competency of spatial application developers, but without interoperable maps, the same maps and technology must be rebuilt from scratch on every platform.
Google and many other companies are currently amassing their own proprietary datasets to use for visual positioning. The only companies that believe proprietary maps are a good solution are those that believe they can be the one to own them. While the walled garden may be convenient for developers to start, it allows a single entity to dictate where and how the maps are used.
Camera Positioning Standard
An open-source framework is needed to facilitate cross-platform functionality and speed development towards universal mapping and localization for spatial applications. We are proud to propose the Camera Positioning Standard (C.P.S.) in pursuit of this objective.
C.P.S. is infrastructure for visual maps. The standard comprises a suite of visual map data and messaging formats. It enables cross-platform exchange of heterogeneous, visual map data that can be leveraged by existing and future localization techniques including classification, regression, learning, and more. The full framework will include robust format specifications and performant reference implementations compatible with common computer vision platforms.
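To give a feel for what cross-platform exchange could look like in practice, here is a hedged sketch of a localization request and response, a client sending image features plus camera intrinsics and receiving a pose. This JSON shape is purely illustrative and is not the C.P.S. messaging specification:

```python
import json

# Hypothetical localization request; not the C.P.S. wire format.
request = {
    "version": "0.1",
    "intrinsics": {"fx": 1200.0, "fy": 1200.0, "cx": 640.0, "cy": 360.0},
    "features": [  # 2D keypoints with binary descriptors (hex-encoded)
        {"u": 102.5, "v": 334.0, "descriptor": "9f3a"},
    ],
}
wire = json.dumps(request)  # portable across any camera-enabled platform

# A server could answer with a 6-DoF pose: translation plus a rotation quaternion.
response = json.loads('{"position": [1.0, 2.0, 0.5], "rotation": [0, 0, 0, 1]}')
```

Because the payload carries features rather than raw pixels, the same message could be served by classification, regression, or learned localization back ends.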
Inspired by the principles of gRPC, C.P.S. subscribes to the following:
- Free and Open — Fundamental features must be free for all to use. All artifacts will be released as open source, with licensing that facilitates rather than impedes adoption.
- Interoperability — The framework must be platform agnostic so that maps can be created and consumed by any camera-enabled platform.
- General purpose — The data formats should be applicable to a broad class of use-cases and localization approaches.
- Performant — The data formats should be designed for efficient I/O, including support for parallel processing frameworks.
- Unambiguous — Coordinate systems and reference frames should be consistent to ease interfacing with native applications and frameworks.
- Coverage and Simplicity — The framework should be available on every popular computer vision development platform and easily extensible to other platforms of choice. It should be viable on CPU and memory-limited devices.
At Fantasmo, we are already hard at work developing a proposal for a feature-based map format along with an accompanying C++ reference implementation. The reference implementation will include an encoder and decoder with a simple API to ease integration. We plan to release the reference implementation in July, with the specification draft following at the end of Q3.
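To give a feel for the kind of simple encode/decode API described above, here is a hedged Python sketch using a toy binary layout: magic bytes, a feature count, then three little-endian floats per feature. The actual C.P.S. layout is not specified here; the magic value and field order are invented for illustration:

```python
import struct

MAGIC = b"CPS0"  # hypothetical file magic; not the real specification

def encode(points):
    """Pack a list of (x, y, z) feature positions into bytes."""
    blob = MAGIC + struct.pack("<I", len(points))
    for x, y, z in points:
        blob += struct.pack("<3f", x, y, z)
    return blob

def decode(blob):
    """Unpack bytes produced by encode() back into (x, y, z) tuples."""
    assert blob[:4] == MAGIC, "not a recognized map blob"
    (count,) = struct.unpack_from("<I", blob, 4)
    return [struct.unpack_from("<3f", blob, 8 + 12 * i) for i in range(count)]

points = [(1.0, 2.0, 0.5), (4.0, 0.0, 1.5)]
assert decode(encode(points)) == points  # lossless round trip
```

A fixed, explicitly little-endian layout like this is what makes a format consumable on memory-limited devices and portable across platforms without a shared runtime.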
Additional specifications and implementations will be co-developed with the community. Likely additional early platforms include Python, iOS, Android, and ROS.