ROBOTICS

 Speech/Natural Language Recognition

Voice/speech recognition in robotics refers to technology that can process and interpret voice or speech commands and then perform a physical action or respond using speech. Amazon Alexa, Google Assistant, and Siri are some of the best examples of voice/speech recognition technology reaching the mass market. Integration of state-of-the-art, consumer-grade voice/speech recognition into robots is already happening, as with the Roomba or Ziro, and is likely to accelerate further.
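As a minimal sketch of what this pipeline looks like from the developer side, the snippet below uses the open-source Python SpeechRecognition library to turn one microphone utterance into text and dispatch a command. The phrase-to-action mapping is a hypothetical stand-in for a real robot's actuation API.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)   # calibrate to background noise
    audio = recognizer.listen(source)             # capture a single utterance

try:
    command = recognizer.recognize_google(audio).lower()  # cloud speech-to-text
except sr.UnknownValueError:
    command = ""                                  # nothing intelligible heard

# Hypothetical mapping from recognized phrases to robot actions.
if "forward" in command:
    print("robot.move_forward()")
elif "stop" in command:
    print("robot.stop()")
```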

 Computer Vision

Giving robots the ability to see and understand visual images, objects, and faces has recently been amplified by AI and deep learning techniques such as convolutional neural networks (ConvNets). Access to state-of-the-art computer vision, especially image and object recognition through open-source AI platforms like TensorFlow or AI-based vision kits like AWS DeepLens, is revolutionizing robotics. Almost every robotics market (including those for autonomous cars, drones, logistics and warehouse robots, cleaning robots, family robots, and service robots) is making use of enhanced AI vision capabilities. This has major implications for navigation, communication, safety, and manipulation going forward.
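As an illustration of how accessible this has become, the sketch below classifies a single camera frame with a pretrained MobileNetV2 network from TensorFlow's Keras applications; `frame.jpg` is a placeholder for an image captured by the robot.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")           # pretrained object recognizer

# Load one frame and shape it into the (1, 224, 224, 3) batch the model expects.
img = tf.keras.utils.load_img("frame.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")                # e.g. "coffee_mug: 0.87"
```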

 Consumer Robots

Established consumer robot companies like iRobot continue to see traction with vacuum and cleaning robots. However, poor execution by most other personal assistant and household robot companies, the demise of many high-profile firms during 2018-19 (including Anki, Mayfield Robotics, and Jibo, among others), and a lack of enthusiasm from consumers have soured the market. Even so, there is a strong likelihood that smart speakers will merge with personal assistant robots beyond 2020.

 Tactile Sensors

In robotics, tactile sensors are used for security and physical safety purposes, to control robotic movements, or to recreate human touch and dexterity in a robotic gripper arm. Recent advances in graphene-based tactile sensors have essentially re-created human touch by applying machine learning techniques. Just as robots acquire language and vision capabilities through AI, we are likely to see touch sensitivity that matches or exceeds humans' within the next few years, as tactile sensors become more robust and flexible. Touch-sensitive robots will combine visual and audio information to perform quality checks in industrial settings (such as touchscreen testing) and will give consumer robots touch-sensitive grips that allow them to shake hands like a human.
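To make the machine-learning angle concrete, here is a toy sketch that trains a classifier to tell a stable grasp from a slipping object. The synthetic 4x4 pressure-map data, the sensor layout, and the class statistics are all assumptions chosen purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: each sample is a flattened 4x4 pressure map from a
# fingertip sensor; firm grasps read higher and steadier than slipping objects.
rng = np.random.default_rng(0)
firm = rng.normal(loc=0.8, scale=0.1, size=(100, 16))   # stable-grasp readings
slip = rng.normal(loc=0.3, scale=0.2, size=(100, 16))   # slipping-object readings
X = np.vstack([firm, slip])
y = np.array([1] * 100 + [0] * 100)                     # 1 = stable, 0 = slip

clf = RandomForestClassifier(n_estimators=50).fit(X, y)

reading = rng.normal(loc=0.75, scale=0.1, size=(1, 16)) # new sensor frame
print("stable grasp" if clf.predict(reading)[0] else "slipping")
```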

 Robot Simulation

Simulation is a powerful tool for teaching robots behavior: correct actions can be reinforced without the safety issues of training a robot in a live, real-world environment. For example, autonomous cars increasingly learn to drive in simulation environments, which is not only safer but also a far more efficient training scheme, since simulations can be scaled up indefinitely to recreate billions of edge-case scenarios. Robot simulation will move into other areas, including industrial robots, consumer robots, drones, and surgical robots, over the next few years as the latest AI and reinforcement learning techniques are integrated into a wide range of simulation software.
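A minimal, self-contained sketch of the underlying idea: tabular Q-learning in a toy one-dimensional "corridor" simulator, where the simulated robot is rewarded for reaching the goal cell. Real systems swap this toy world for a high-fidelity simulator and deep networks, but the reinforcement loop is the same.

```python
import numpy as np

# Toy simulator: states 0..4 along a corridor; reaching state 4 is the goal.
n_states, n_actions = 5, 2          # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s_next == n_states - 1 else -0.01    # small penalty per step
        # Q-learning update: nudge the estimate toward reward plus future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)  # the learned policy prefers "right" from every state
```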

 Navigation and Mapping

Navigation and mapping are critical capabilities for enabling autonomy in robots. When advancements in 3D depth-sensing technology (from companies like Stereolabs and Occipital) are combined with AI-based geometric SLAM (simultaneous localization and mapping) techniques that perform 3D scene understanding using neural networks and reinforcement learning, robots will get much better at navigating unknown and hazardous environments without reference maps. This has big implications for defense, mining, construction, and safety and rescue tasks. Improved navigation and mapping will also give rise to autonomous citywide transport fleets.
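As a sketch of the mapping half of SLAM, the snippet below performs a log-odds occupancy-grid update from a single simulated depth return: cells along the beam are marked more likely free, and the cell at the return more likely occupied. The grid size, sensor-model constants, and ray geometry are assumptions for illustration.

```python
import numpy as np

grid = np.zeros((50, 50))            # log-odds map, 0 = unknown
L_OCC, L_FREE = 0.85, -0.4           # assumed inverse sensor model constants
origin = np.array([25, 25])          # robot's cell in the grid

def integrate_ray(bearing_rad, hit_range, cell_size=0.1, max_range=2.5):
    """Mark cells along the beam free, and the cell at the return occupied."""
    step = np.array([np.cos(bearing_rad), np.sin(bearing_rad)])
    for r in np.arange(0.0, min(hit_range, max_range), cell_size):
        cx, cy = (origin + (r / cell_size) * step).astype(int)
        grid[cx, cy] += L_FREE       # beam passed through: evidence of free space
    if hit_range < max_range:        # the beam actually hit an obstacle
        cx, cy = (origin + (hit_range / cell_size) * step).astype(int)
        grid[cx, cy] += L_OCC

integrate_ray(bearing_rad=0.0, hit_range=1.2)      # one depth reading
prob = 1 - 1 / (1 + np.exp(grid))    # convert log-odds to occupancy probability
```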

 Ingestible Robots

Ingestibles are a type of “smart pill” that uses wireless technology to monitor internal reactions to medications. Pill-shaped video cameras may also supplement invasive procedures like colonoscopies and endoscopies.

 Collaborative Robots

Industrial cobots, collaborative robots that work alongside humans on tasks in a shared workspace, are making major inroads. Almost all industrial robot manufacturers offer collaborative robots today, and cobots should make up 15-20% of total industrial robot sales in 2021. Cobots will also move from the industrial shop floor to picking and handling tasks for e-commerce and warehouses, with companies like Kindred using the latest reinforcement learning techniques, in collaboration with humans, to train robots.

 Cloud Robotics

Rather than relying on local sensing, computation, and memory, a robot can use a shared resource like the cloud to perform its operations. Amazon’s Kiva warehouse robots use cloud robotics to coordinate routing, and so do Google’s Waymo self-driving cars, which upload each mile driven to the cloud to share with the rest of the fleet. There are several challenges to scaling up cloud robotics, including network connectivity, latency, security, and robust frameworks, so it will take time to evolve. However, numerous factors will continue to drive its rapid growth. The evolution of ROS 2 (Robot Operating System 2) is expected to stimulate strong growth in this market, and Amazon’s RoboMaker, Google’s cloud robotics development platform, and Microsoft’s integration of ROS with Windows should drive developer adoption.
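For a flavor of what developers write on these platforms, here is a minimal ROS 2 node in Python (rclpy) that periodically publishes robot status on a topic that a cloud bridge could relay to a shared fleet. The topic name and telemetry payload are placeholders.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class FleetReporter(Node):
    """Publishes periodic status messages for a (hypothetical) cloud bridge."""

    def __init__(self):
        super().__init__('fleet_reporter')
        self.pub = self.create_publisher(String, 'fleet/status', 10)
        self.timer = self.create_timer(1.0, self.tick)   # fire once per second

    def tick(self):
        msg = String()
        msg.data = 'robot_7: battery=82%, pose=(3.1, 4.2)'  # placeholder telemetry
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(FleetReporter())

if __name__ == '__main__':
    main()
```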

 Robotic Manipulation

One of the hardest problems in robotics is manipulation and dexterity. Tactile sensors are one way robots can improve manipulation, but it also requires many other capabilities, including perception, learning, planning, and control. While manipulation and gripper technologies work well in controlled environments like a factory shop floor, having a robot grab a beer from the refrigerator or clean up a room remains a grand challenge. Companies like Boston Dynamics are working hard on these challenges and hope to use AI and reinforcement learning to teach robots like Spot Mini manipulation techniques.

 Powered Exoskeletons & Wearable Robotics

Exoskeletons will advance from bulky, armor-like "Iron Man" contraptions to "second skins" used both by the military and in physiotherapy settings to increase the wearer's strength; they will also help some physically disabled people use their limbs. Heavy electric motors will be replaced by pneumatic muscles, which are extremely lightweight yet forceful, and materials will advance so that exoskeletons become soft and pliable enough to be worn like clothing. Superflex, a spinoff of SRI, is building a smart suit for the aging population that assists with motor activities like sitting and standing.

 Swarm and Self-Assembling Robotics

Robots collaborating with humans is one thing, but robots coordinating behaviors among themselves is an entirely different level of complexity. Swarm robotics is an up-and-coming area of academic research and military funding. Nature is often used as inspiration for advancing swarm capabilities, as in the 1,000-robot Kilobot swarm, which can self-assemble and reconfigure itself. Initially, swarm and self-assembling robots will see increasing applications in safety and rescue, material handling, construction, and agricultural use cases. Interestingly, the MIT Media Lab has suggested that blockchain will be a key technology in helping robot swarms achieve consensus about actions while maintaining decentralized intelligence, authenticity, and security.
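The consensus idea can be sketched in a few lines: below, 1,000 simulated robots repeatedly adopt the majority opinion of three randomly sampled peers, a gossip-style rule that drives the swarm toward agreement with no central coordinator. A physical swarm would sample nearby neighbours over radio rather than uniformly at random; the swarm size and sample size here are illustrative assumptions.

```python
import random

random.seed(1)
opinions = [random.choice([0, 1]) for _ in range(1000)]  # initial binary votes

for _ in range(30):                     # gossip rounds
    for i in range(len(opinions)):
        # Poll three random peers and adopt their majority opinion.
        peers = random.sample(range(len(opinions)), 3)
        votes = sum(opinions[j] for j in peers)
        opinions[i] = 1 if votes >= 2 else 0

print(f"swarm agreement: {sum(opinions) / len(opinions):.0%} choose option 1")
```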

 Soft Robots

Robots have largely been made of hard composite materials, with a mechanical look and movements. However, an emerging area within robotics promotes the use of novel soft materials, which make robots much more natural and lifelike, with soft movements and enhanced abilities to sense and feel their environments. The Octobot, the first autonomous soft robot, behaves and looks like an octopus and is made of 3D-printed materials. Soft robots are ideal for surgical applications, but could also be used in toy and educational robots or in food inspection use cases. Overall, soft robotics is a technology and design challenge that should give us more lifelike robots in the near future.

 Neural Robot Interfaces

Controlling robots with your mind is no longer in the realm of science fiction. MIT’s CSAIL has shown that it is possible to control a robot using EEG signals from the brain. Separately, CTRL-Labs is building one of the first mass-market brain-computer interface solutions, a wristband that reads emotions and intentions. That may seem counterintuitive, but it works by reading electromyography (EMG) activity, the signals the brain sends to your muscles. A much more powerful solution in the works is Elon Musk’s Neuralink, which plans to embed a chipset in the brain to enable brain-to-computer communication; it is not hard to see it connecting with robots in the future. Neuralink plans to implant its chip in the first human by 2020.
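At its simplest, the signal-processing core of a wristband-style EMG interface is windowed energy detection. The sketch below flags muscle activation in a synthetic signal by thresholding the RMS energy of short windows; the signal shape, window length, and threshold are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
rest = rng.normal(0, 0.05, 1000)        # quiet baseline: low-amplitude noise
clench = rng.normal(0, 0.6, 200)        # muscle activation: high-amplitude burst
signal = np.concatenate([rest, clench, rest])

window = 50                             # samples per analysis window
for start in range(0, len(signal) - window, window):
    rms = np.sqrt(np.mean(signal[start:start + window] ** 2))
    if rms > 0.3:                       # assumed activation threshold
        print(f"activation detected at sample {start}")
```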