Tweet about this product: A robot delivering autonomous fast-charging for AVs & Intelligent eMobility: @PowerHydrant #MIN94 www.PowerHydrant.com

Autonomous vehicles (AVs) are most likely to be plug-in electric vehicles (PEVs). Because these vehicles will be shared, parked less frequently, and in demand more often, they will need to be charged fast and autonomously through a physically conductive interface to the supply equipment (electricity or hydrogen). This is what PowerHydrant does: it autonomously provides fast, safe recharging. While inductive wireless chargers are limited by the laws of physics, PowerHydrant exploits the smartphone and VR hardware dividend to deliver low-cost, reliable, computer-vision-driven robotics unencumbered by those performance limitations.
PowerHydrant can be a key enabler of the $5+ trillion autonomous-mobility revolution. On the economic impact of the self-driving car, Morgan Stanley estimates that self-driving vehicles could deliver:
• $1.3 trillion in annual savings to the U.S. economy ($5.6 trillion potential global impact)
• $507 billion in annual U.S. productivity gains
• $158 billion in annual U.S. fuel cost savings
• $488 billion in annual U.S. accident cost reduction savings
• $11 billion in annual U.S. savings from reducing congestion
• $138 billion in annual U.S. productivity savings from less congestion
There are 1.2 billion vehicles on the world's roads now, and over 2 billion are expected by 2035 (perhaps as early as 2030).

By exploiting opportunities and constraints specific to this application (a computer-vision-directed robotic conductive charger) and similar ones, a simple, fast, low-cost, robust robotic scheme can be demonstrated: a solid model of the vehicle, identified a priori over a radio link (as in IEEE 802.11p) between the target vehicle and the PowerHydrant, is aligned to a sensed sparse 3-D point cloud. The core algorithmic elements of this technique are stereovision disparity-based point-cloud generation and iterative closest point (ICP) registration, as developed by Tolga Birdal and others. Even with limited image data (e.g., imagery of only the front-right quadrant of the vehicle), aligning the solid model to the sparse point cloud allows the rotation and translation of any sub-component of the target vehicle to be inferred. Importantly, the technique is ideally implemented on fully commoditized components, ensuring a low-cost, reliable, and robust implementation. Because it requires neither structured light nor time-of-flight sensing, it works across all real-life lighting scenarios, from pitch dark (via non-structured IR illumination) to midday full sunlight.
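The two core steps named above — converting a stereo disparity map into a 3-D point cloud, and rigidly aligning a model to that cloud with ICP — can be sketched as follows. This is a minimal illustrative sketch, not PowerHydrant's actual implementation: the function names, parameters, and the brute-force nearest-neighbour search are all assumptions chosen for clarity, and a production system would use a calibrated stereo pipeline and an accelerated correspondence search.

```python
import numpy as np

def disparity_to_points(disparity, f, baseline, cx, cy):
    """Convert a stereo disparity map (pixels) to a sparse 3-D point cloud.
    Standard pinhole/stereo relation: Z = f * B / d for each valid pixel."""
    ys, xs = np.nonzero(disparity > 0)          # keep pixels with valid disparity
    d = disparity[ys, xs]
    Z = f * baseline / d
    X = (xs - cx) * Z / f
    Y = (ys - cy) * Z / f
    return np.column_stack([X, Y, Z])

def icp(source, target, iters=50):
    """Point-to-point ICP: align `source` (solid-model samples) to `target`
    (sensed cloud). Returns the accumulated rotation R and translation t
    such that target ≈ source @ R.T + t."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # 1. Nearest-neighbour correspondences (brute force, for clarity only).
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        nn = target[d2.argmin(axis=1)]
        # 2. Best-fit rigid transform for these correspondences (Kabsch/SVD).
        mu_s, mu_t = src.mean(0), nn.mean(0)
        H = (src - mu_s).T @ (nn - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply and accumulate the incremental transform.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Once ICP yields (R, t), the same transform maps every sub-component of the solid model (e.g., the charge-port frame) into sensor coordinates, which is how pose can be inferred even when only one quadrant of the vehicle is imaged.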