We just released a very exciting touch sensor that finally simplifies touch sensing for robotics.
Our most exciting result: learned visuotactile policies for precise tasks like inserting USBs and swiping credit cards that work out of the box when you replace skins! To the best of our knowledge, this has never been shown before with any existing tactile sensor.
Why is this important? For the first time, you can now collect data and train models on one sensor and expect them to generalize to new copies of that sensor -- opening the door to the kind of large foundation models that have revolutionized vision and language reasoning.
Would love to hear the community's questions, thoughts and comments!
This makes advanced touch sensors more like machine-cut screws than bespoke hand-forged nails.
With capacitive sensors, it is unclear from the existing literature whether shear detection is possible. Additionally, they generally operate at significantly lower frequencies.
Could it be used to sort trash and recycling? Could it recalibrate if gunk got on it, or as it aged? (I guess silicone is probably pretty resistant to aging.) Can it wash and de-stem a tomato?
I think I want a trackpad made out of this. How much resolution could it get? I suppose I wouldn't want to sacrifice a lot of resolution for the pressure, tilt, etc. that I am assuming this would provide.
(I said "think", because I might find out that it feels like running my finger over skin, and I'm wondering how creepy that might feel. I don't really want my laptop to have a fleshy part.)
Trash sorting and recycling: not many robots here; the majority of sorting takes advantage of object material properties. Some companies tried adding delta robots to keep up with the high rates required to even approach profitability, but they weren't good enough. Maybe some municipalities or universities with lots of funding could justify adding robots, but it's just hard to justify financially.
Recalibration: I'm curious what the developers have planned for handling reduced magnetic fields over time, along with gunk. Silicone is washdown rated, but anything soft running at high throughput with parts will start to wear out and change pickup characteristics.
Washing and destemming a tomato is more of a problem that will need another 10+ years of price reductions in robot and end-effector costs, plus efficiency gains, before solving it beats bulk washing and hand-destemming (or crude machine work). Maybe it'll be a grad student's project for a theoretical future home-bot.
The Lenovo TrackPoint is likely already 95% of what you'd need from a trackpad, and this touch sensor is likely not even aimed at that market.
Things I see useful for this robot touch sensor:
* Simpler version that detects part presence: just a Boolean "part detected" feedback that can stick on existing end effectors (a rough sketch of this follows the list). This is often handled by the robot's load calculations, but it could also detect if a part has substantially "moved" while gripped, sending a signal to the robot to pause.
* Harder to suggest items for food, as soft grippers (inflatable fingers) grip at precisely the pressure they're inflated to, reducing the need for sensitive feedback. The application for this touch sensor would be food that needs a combination of different pressures to be properly secured; can't think of a great example.
* Hard to also suggest places where this sensor would help with fine alignment, as major manufacturers have motor and arm feedback with WAY more sensitivity than the average person would realize (google Fanuc "Touch Sensing"). But this could help when the end effector is longer and it's harder for the joints to detect position.
* Fabric manipulation. Fabric is just a hard problem for robots, and adding more information about the "part" should help. Unlocking more automation for shoe manufacturing at reasonable prices is a big wall to get over.
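For that first bullet, the logic could be as simple as this rough sketch (the 15-value reading layout, class name, and threshold are all made up for illustration, not from the AnySkin release):

```python
# Rough sketch of a Boolean "part detected" signal from a magnetometer skin.
# Assumes a stream of 15-value readings (5 chips x 3 axes); the class name
# and threshold are illustrative.
import numpy as np

class PartPresenceDetector:
    def __init__(self, threshold_ut=15.0):
        self.threshold_ut = threshold_ut  # deviation (in uT) treated as contact
        self.baseline = None

    def calibrate(self, idle_samples):
        # Average a short no-contact window to absorb drift and
        # per-skin magnetization offsets.
        self.baseline = np.mean(idle_samples, axis=0)

    def part_detected(self, reading):
        # Fire when the signal deviates enough from the resting baseline;
        # a large, sustained jump mid-grip would be the "part moved" signal.
        deviation = np.linalg.norm(np.asarray(reading) - self.baseline)
        return deviation > self.threshold_ut
```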
- AnySkin expressly handles wear and gunk by being replaceable. So if a skin wears out and you have a heuristic or learned model for the old skin, it will work pretty well on the new skin! We verify this through an analysis of raw signal consistency across skins, as well as through visuotactile policies learned using behavior cloning. We found swapping skins to work for some pretty precise tasks like inserting USBs and swiping credit cards. (A minimal sketch of that kind of consistency check follows this list.)
- Could definitely be used for part motion detection
- Soft, inflatable grippers are effective, but often passive. AnySkin is not just soft; it also offers contact information from the interaction to actively ensure that a blueberry doesn't get squished!
- This sensor would be key for robots that seek to use learned ML policies in cluttered environments. Robots are very likely to encounter scenarios where they see an object they must interact with, but the object is occluded either by their own end-effector(s) or by other objects. Touch, and an understanding of touch in relation to vision becomes critical to manipulate objects in these settings.
- Industrial robots do have very sensitive motor and arm feedback. However, these systems are bulky and unsafe to integrate into household robotic technologies. Sensors like AnySkin could be used as a powerful, lightweight solution in these scenarios, potentially by integrating with some exciting recent household robotics models like Robot Utility Models.
- ReSkin, the predecessor to AnySkin, has previously been used quite effectively for fabric manipulation! (see work from David Held's group at CMU). AnySkin is more reliable as well as more consistent and could potentially improve the performance seen in prior work.
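For the curious, the raw-signal consistency analysis mentioned in the first point is conceptually along these lines (a simplified sketch, not our actual pipeline; the array shapes and baseline window are assumptions):

```python
# Simplified sketch of a cross-skin signal consistency check.
# Assumes two (T, 15) arrays of magnetometer readings recorded while
# replaying the same contact trajectory on two different skins.
import numpy as np

def rezero(signal, idle_steps=50):
    # Subtract the resting baseline so each skin starts from zero;
    # this removes per-skin offsets from magnetization differences.
    return signal - signal[:idle_steps].mean(axis=0)

def cross_skin_similarity(skin_a, skin_b):
    # Mean cosine similarity between baseline-subtracted responses;
    # values near 1 mean the raw signals line up across skins.
    a, b = rezero(skin_a), rezero(skin_b)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
    return float((num / den).mean())
```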
Heh, fair. I wasn't thinking of this as a practical usage, it was just the first thing to come to mind when imagining a task requiring a lot of pressure sensitivity and a range of forces.
Then again, now that I've said it, I believe the current approach to this is to breed really hard, tasteless tomatoes and then agitate them in a vat. Perhaps we can eventually get tastier produce if robots can handle more fragile things!
Hm... or you could invert things and make a glove, then use it as a controller (VR, or just a richer set of control dimensions for, e.g., photo editing). I guess that needs to generalize across hand shapes and sizes, not just swapping out the glove, but I'd be up for a calibration/training phase.
> * Harder to suggest items for food, as soft grippers (inflatable fingers) grip at precisely the pressure they're inflated to, reducing the need for sensitive feedback. The application for this touch sensor would be food that needs a combination of different pressures to be properly secured; can't think of a great example.
How do you know the right pressure without feedback? A lot of foods vary in firmness over time and ripeness. Lemons, for example. I guess most don't, as long as you're sticking to a single type of food.
[1] https://www.digikey.com/en/products/detail/melexis-technolog...
ReSkin: versatile, replaceable, lasting tactile skins https://arxiv.org/abs/2111.00071
> For an overall sensing area of 20mm x 20mm (Figure 3), we measure magnetic flux changes using 5 magnetometers. Four magnetometers (MLX90393; Melexis) are spaced 7mm apart around a central magnetometer. All 3D-printed molds, circuit board files, bill of materials, and libraries used have been publicly released and opensourced on the website
https://reskin.dev/
A breakout board is available here: https://www.adafruit.com/product/4022
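If you just want to poke at one breakout, a minimal CircuitPython read loop with Adafruit's adafruit_mlx90393 library looks roughly like this (the full board reads five chips, which needs distinct I2C addresses or a multiplexer; that part isn't shown):

```python
# Minimal read loop for a single MLX90393 breakout (CircuitPython).
import time
import board
import adafruit_mlx90393

i2c = board.I2C()
sensor = adafruit_mlx90393.MLX90393(i2c, gain=adafruit_mlx90393.GAIN_1X)

while True:
    mx, my, mz = sensor.magnetic  # flux density per axis, in microteslas
    print(f"X: {mx:.2f} uT  Y: {my:.2f} uT  Z: {mz:.2f} uT")
    time.sleep(0.1)
```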
More than happy to answer questions about it either here or on my email as the corresponding author on the paper!
This might be interesting for musical instruments with more tactile feedback, like hand drums or violins. But an electronic control surface like that only exists because human musicians aren’t already robots.
Other questions: is the primary skin material a molded silicone, or possibly TPU (which can be 3D printed)?
https://www.smooth-on.com/products/dragon-skin-10-slow/
So I don't think you could 3D print it, but you could 3D print a mold.
How tech-independent is the policy learning part? Do the models end up relying on how the board gives you direction vectors, rather than on contact location? (Nothing wrong with that; I'm just wondering if the directional aspect "factors out" certain kinds of change, and thus simplifies the learning process.)
That being said, the exact quantities the policy depends on are hard to interpret, given the use of deep learning. This could potentially be modality agnostic, but there has been no sensor so far that has shown (1) the ability to detect intuitively relevant quantities like contact location and 3-axis forces, and (2) sufficient signal consistency for deep learning models to generalize across instances. This was a key motivating factor for AnySkin, and we found a relatively straightforward fabrication procedure that enables this for magnetic sensing.
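To make "feeding the raw signal to the policy" concrete, the tactile input is just the flattened magnetometer vector fused with vision features, roughly like the sketch below (an illustrative architecture, not our exact one; all dimensions here are assumptions):

```python
# Illustrative visuotactile policy head: the raw 15-D magnetometer vector
# (5 chips x 3 axes) is encoded and fused with vision features, leaving it
# to the network to pick out contact location, shear, etc. on its own.
import torch
import torch.nn as nn

class VisuotactilePolicy(nn.Module):
    def __init__(self, vision_dim=512, tactile_dim=15, action_dim=7):
        super().__init__()
        self.tactile_encoder = nn.Sequential(
            nn.Linear(tactile_dim, 64), nn.ReLU(), nn.Linear(64, 64)
        )
        self.head = nn.Sequential(
            nn.Linear(vision_dim + 64, 256), nn.ReLU(),
            nn.Linear(256, action_dim),  # e.g. end-effector deltas + gripper
        )

    def forward(self, vision_feat, tactile):
        fused = torch.cat([vision_feat, self.tactile_encoder(tactile)], dim=-1)
        return self.head(fused)
```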
One advantage BioTacs have over these is that I can send a guy a (very large) check and buy them. Most academically sourced things like this cannot be gotten for any price. These look cool; I'd love to have a few.
And the board underneath is just a grid of these https://www.adafruit.com/product/4022 ?