Researchers at England’s Newcastle University have implanted a camera in an artificial hand and paired it with artificial intelligence so the hand can recognize an object and know how to pick it up the right way.
With current bionic hands, the wearer typically initiates each motion by sending signals from the brain to muscles at the end of the remaining limb; the prosthesis then translates those signals into hand movements. Controlling the grasping pressure on an object can be difficult as well as time-consuming.
In contrast, the software that controls the new “seeing hand” has been shown dozens of objects, each from various angles, and has learned which grasp suits each one. Upon seeing a biscuit or a hammer, the hand knows within a fraction of a second how to pick it up and how much pressure to apply.
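To make the control flow this implies concrete, here is a minimal sketch that pairs an image classifier with a lookup table mapping each recognized object to a grasp posture and grip force. Everything in it is hypothetical for illustration: the object labels, grasp names, pressure values, and the stubbed-out classifier are assumptions, not Newcastle's actual system.

```python
# Hypothetical sketch of a vision-driven grasp selector. Object classes,
# grasp types, and pressure values are illustrative, not the real system.
from dataclasses import dataclass


@dataclass
class Grasp:
    grip: str        # grasp posture, e.g. "pinch" or "power"
    pressure: float  # normalized grip force, 0.0 to 1.0


# Lookup table: once an object is recognized, the grasp is already known.
GRASP_TABLE = {
    "biscuit": Grasp(grip="pinch", pressure=0.2),   # fragile: light pinch
    "hammer":  Grasp(grip="power", pressure=0.9),   # heavy: firm wrap grip
    "cup":     Grasp(grip="palmar", pressure=0.4),
}


def classify(image) -> str:
    """Stand-in for the trained vision model that labels the camera frame."""
    # A real system would run a neural network here; this stub just
    # pretends the camera saw a biscuit.
    return "biscuit"


def select_grasp(image) -> Grasp:
    """Map the camera frame to a grasp in one classify-then-lookup step."""
    label = classify(image)
    # Fall back to a cautious medium grip for unrecognized objects.
    return GRASP_TABLE.get(label, Grasp(grip="power", pressure=0.5))


if __name__ == "__main__":
    grasp = select_grasp(image=None)  # a camera frame would go here
    print(f"grip={grasp.grip}, pressure={grasp.pressure}")
```

The design point the sketch captures is that recognition and grasp selection happen together and up front, so the wearer no longer has to consciously steer the grip or meter the pressure.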
The camera-equipped hand can be retrofitted onto many existing prostheses.
TRENDPOST: The seeing hand is an interim step toward researchers’ ultimate goal: a hand that communicates directly with the brain by connecting electronic neural networks to nerve endings in the residual arm.