
Robots learn to identify objects by feeling


Robot arms on factory floors excel at repetitive assembly-line tasks such as picking up a car door and aligning it to a car frame. These manipulators are typically designed to handle specific objects and perform particular tasks. The vision for robotics in the future, however, goes well beyond these specialized manufacturing robots. We want general-purpose robots in our homes assisting the elderly and performing daily chores such as loading the dishwasher or folding laundry (1). Realizing this vision requires robots with a sense of touch that are adept at handling a wide range of objects (of different materials, shapes, sizes, weights, and textures) in unstructured environments. Such dexterous robots do not exist yet because creating hardware for dense tactile feedback in robots has proven to be a major engineering hurdle (2). Writing in Science Robotics, Li et al. (3) describe a type of tactile sensor that simultaneously measures contact loads and thermal properties of an object in contact. The data obtained from 10 such sensors mounted on a robot hand can be used to identify grasped objects with high accuracy, underscoring the potential of thermal data in tactile sensing for robots.
Technologies for human skin-like electronics (or e-skins) in robotics have seen remarkable progress in recent years (4). Pressure-sensitive e-skins have matured substantially to provide sensitive force feedback (both normal and shear forces), allowing robots to pick up small and delicate objects (5). However, there have been far fewer attempts to create e-skins that sense multiple stimuli together, such as static forces, vibrations, and temperature (6, 7). Traditionally, methods to manufacture multimodal sensors have been cumbersome, or the resulting sensors have been difficult to scale down in size.
Li and colleagues use simple fabrication methods to construct a sensor, called a quadruple tactile sensor, that reports the contact pressure, the temperature and thermal conductivity of an object, and the external temperature at once. The sensor is a stack of four individual layers. A clever feedback circuit is used to maintain a fixed temperature difference between two traces on the top layer and two traces on the bottom layer (see Fig. 1). When an object meets the top layer, the voltage required to maintain the fixed temperature difference in the top layer increases, thereby reflecting the thermal conductivity of the object in contact. Likewise, the pressure is measured by the bottom layer from the changes in the thermal conductivity of the porous material. Importantly, this elegant combination of sensor and circuit architecture reduces cross-talk between these signals.
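To make the constant-temperature-difference principle concrete, here is a minimal numerical sketch, not the authors' circuit: a PI control loop (all parameter values are illustrative assumptions) holds a heated trace a fixed ΔT above ambient, and the steady-state heater power it settles at tracks the thermal conductance of whatever touches the sensor.

```python
# Minimal sketch (not Li et al.'s circuit) of a constant-temperature-difference
# feedback loop: hold a heated trace dT_target above ambient; the steady-state
# heater power then reflects the thermal conductance of the contacting object.
# All numerical values below are illustrative assumptions.

def steady_state_heater_power(contact_conductance_w_per_k,
                              dT_target=5.0,              # K above ambient (assumed)
                              sensor_loss_w_per_k=0.002,  # sensor's own heat loss (assumed)
                              heat_capacity_j_per_k=0.05, # thermal mass of the trace (assumed)
                              k_p=0.5, k_i=2.0,           # PI controller gains (assumed)
                              dt=0.01, steps=20000):
    """Simulate a PI loop holding the trace at dT_target and return the power it settles at."""
    dT = 0.0        # current temperature rise of the trace above ambient
    integral = 0.0  # integral term of the PI controller
    power = 0.0
    total_conductance = sensor_loss_w_per_k + contact_conductance_w_per_k
    for _ in range(steps):
        error = dT_target - dT
        integral += error * dt
        power = max(0.0, k_p * error + k_i * integral)   # heater can only add heat
        # lumped thermal model: C * d(dT)/dt = P - G * dT
        dT += dt * (power - total_conductance * dT) / heat_capacity_j_per_k
    return power

# A more conductive contact (steel-like vs. sponge-like) demands more power
# to hold the same dT, which is what lets the sensor estimate conductivity.
for name, g in [("air only", 0.0), ("sponge-like", 0.001), ("steel-like", 0.02)]:
    print(f"{name:12s} steady-state power ≈ {steady_state_heater_power(g):.4f} W")
```

The point of the sketch is only this: a more conductive contact drains heat faster, so the loop must supply more power to hold the same temperature difference, and that steady-state power (or voltage) is the quantity the sensor reads out.
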
After mounting 10 copies of these quadruple sensors across a robot hand (one at each fingertip and five distributed across the palm), the researchers collected a dataset of outputs (each a 4 × 10 signal map) while grasping 13 different objects. These objects included balls and cubes of two sizes made of plastic, steel, and sponge, as well as a human hand. A part of this dataset was used to train a machine learning (ML) classifier that could subsequently predict object identities from the signal maps. The object classification accuracy was found to be as high as 96% when using the entire signal from all 10 sensors. Li and colleagues' work goes further and offers additional insights into what enables this high classification accuracy by dissecting the roles of the different signals and modalities. The ability to correctly identify an object dropped substantially when using pressure data alone (69.6%) or thermal conductivity estimates alone (68.1%). Quite strikingly, a combination of pressure and thermal conductivity data could account for a classification accuracy of 92.3%, underscoring the usefulness of multiple sensing modalities in object identification. Although the researchers do not directly explore the optimal placement of sensors within the robot hand, the experimental platform presented here could prove to be a particularly good sandbox for these types of studies.
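As a rough illustration of this kind of pipeline, the sketch below flattens each 4 × 10 map into a feature vector, trains a standard classifier, and then repeats the exercise with single modalities. The synthetic data, the random-forest choice, and the assignment of modalities to rows are all assumptions made for illustration; the accuracies it prints are not the paper's numbers.

```python
# Hedged sketch of classifying grasped objects from 4 x 10 signal maps
# (4 modalities x 10 sensors). Synthetic data stand in for real grasps;
# classifier choice and modality row assignments are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_classes, grasps_per_class = 13, 60
n_modalities, n_sensors = 4, 10

# Each object class gets a characteristic 4 x 10 signature plus noise,
# mimicking repeated grasps of the 13 objects.
signatures = rng.normal(size=(n_classes, n_modalities, n_sensors))
X = np.concatenate([sig + 0.5 * rng.normal(size=(grasps_per_class, n_modalities, n_sensors))
                    for sig in signatures])
y = np.repeat(np.arange(n_classes), grasps_per_class)

def evaluate(modalities, label):
    """Train and score a classifier using only the selected modality rows."""
    feats = X[:, modalities, :].reshape(len(X), -1)   # flatten map to a feature vector
    X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.3,
                                              stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"{label:32s} accuracy = {accuracy_score(y_te, clf.predict(X_te)):.3f}")

evaluate([0, 1, 2, 3], "all four modalities")             # full 4 x 10 map
evaluate([0],          "pressure only (assumed row 0)")
evaluate([1],          "thermal conductivity only (row 1)")
```

On real grasp data, the same modality-slicing step is what makes the kind of ablation the authors report straightforward to run.
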
Despite the ease of fabrication, the sensor described here will likely need to be optimized in its form factor prior to wide adoption. The current thickness of ~6 mm may be prohibitive for use in some robot hands. Similarly, scaling down the lateral dimensions by a factor of ~10 will be worthwhile, because it will allow the sensor density to be increased, opening up the sensor to other challenging applications; for example, high spatial densities are required for robots to detect slip quickly (8). A scaled-down version of the sensor is also likely to provide improvements in speed and power consumption.
This work from Li and colleagues also highlights another, broader challenge that the field of robotic tactile sensing grapples with: the dataset collected here is highly specialized to their system. The field would benefit immensely from standards for tactile data, analogous to image formats, that would allow interchangeable use of data generated by different teams. When these sensors were used in a garbage-sorting application, the authors observed that it was more difficult for the ML model to generalize (and to detect new, unseen objects of the trained classes) with small datasets, a well-known issue in the field. Standardization of tactile data would boost the amount of data that any team can access. Overall, this work brings attention to a new dimension in tactile sensing for robotics at a time of exciting progress in the field.



