
An Open-Source Grasping Robot Based on RFID and Touchscreen Technology

2019-05-15 15:05 | Source: unknown | Submitted by: poster

Roboticist Mark Silliman has launched Liatris (http://liatris.org/), an open-source hardware and software robotics platform that cleverly uses RFID and touchscreen technology to grasp objects without vision.

Keywords: open source, RFID, touchscreen, grasping robot

The traditional approach to robotic grasping usually computes grasp points from the visual data the robot acquires.

The Liatris project instead makes clever use of RFID and touchscreen technology to tackle the grasping of specific, known objects.

The prototype works as follows. For each target object, its CAD model is stored in advance; an RFID tag is attached to the physical object, and an isosceles triangle of conductive points is affixed to its base to serve as touch contacts for the touchscreen. The robot first reads the RFID tag to retrieve the object's CAD model; then, through the triangle's contact with the touchscreen, it can precisely locate the object and determine where to grasp it.
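To make the localization step concrete, here is a minimal Python sketch (not the project's code, and with assumed screen coordinates) of how a planar pose could be recovered from the three conductive contact points. Since the triangle is isosceles, the apex is the vertex equidistant from the other two, and it fixes the object's heading.

    import math

    def pose_from_triangle(p1, p2, p3):
        """Recover (x, y, theta) from three (x, y) touch points of an
        isosceles triangle attached to the object's base."""
        pts = [p1, p2, p3]

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        # The apex is equidistant from the two base vertices, so pick the
        # vertex whose two adjacent side lengths differ the least.
        def asymmetry(i):
            a, b = [pts[j] for j in range(3) if j != i]
            return abs(dist(pts[i], a) - dist(pts[i], b))

        apex = pts[min(range(3), key=asymmetry)]
        base = [p for p in pts if p is not apex]
        base_mid = ((base[0][0] + base[1][0]) / 2,
                    (base[0][1] + base[1][1]) / 2)

        # Position: the triangle's centroid. Orientation: the direction
        # from the base midpoint to the apex.
        x = (p1[0] + p2[0] + p3[0]) / 3
        y = (p1[1] + p2[1] + p3[1]) / 3
        theta = math.atan2(apex[1] - base_mid[1], apex[0] - base_mid[0])
        return x, y, theta

    # Example: base at (-1, 0) and (1, 0), apex at (0, 2) -> facing "up"
    print(pose_from_triangle((-1, 0), (1, 0), (0, 2)))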

This solution is quite low-cost, and for specific, known objects it can achieve high grasping accuracy.

Original English article:
At IROS 2012, Gill Pratt declared that grasping was solved, which was a bit of a surprise for all the people doing grasping research. Grasping, after all, is the easiest thing ever, as long as you know absolutely everything there is to know about the thing that you want to grasp. The tricky bit now is perception: recognizing what the object that you want to grasp is, where it is, and how it’s oriented. This is why robots are festooned with all sorts of sensing things, but if all you care about is manipulating an object that you’re familiar with already, dealing with vision is a lot of work.

Liatris is an open-source hardware and software project (led by roboticist Mark Silliman) that does away with vision completely. Instead, you can determine the identity and pose of slightly modified objects with just a touchscreen and an RFID reader. It’s simple, relatively inexpensive, and as long as you’re not trying to deal with anything new, it works impressively well.

To get around the perception problem, Liatris uses a few clever tricks. First, each object has an RFID tag attached to it with a unique identifier, so that the robot can wirelessly detect what it’s working with. Once the robot has scanned the RFID tag, it looks the identifier up in an open source, global database of objects and downloads a CAD model and a grasp pose that “defines the ideal pose for the gripper prior to grasping the object.”
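As a rough illustration of this lookup step: the sketch below assumes a hypothetical REST endpoint and JSON schema (the global database, as noted further down, does not entirely exist yet); only the flow itself (RFID identifier in, CAD model and grasp pose out) comes from the article.

    import requests

    OBJECT_DB = "https://example.org/liatris/objects"  # hypothetical endpoint

    def lookup_object(rfid_uid: str) -> dict:
        """Fetch the CAD model and predefined grasp pose for a scanned tag."""
        resp = requests.get(f"{OBJECT_DB}/{rfid_uid}", timeout=5)
        resp.raise_for_status()
        record = resp.json()
        # Assumed fields: a mesh for visualization/collision checking, and
        # the ideal gripper pose expressed in the object's own frame.
        return {
            "cad_model_url": record["cad_model_url"],
            "grasp_pose": record["grasp_pose"],  # e.g. {"xyz": [...], "rpy": [...]}
        }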

So now that you know what the object is and how to grasp it, you just need to know exactly where it is and what orientation it’s in. You can’t get that sort of information very easily from an RFID tag, so this is where the touchscreen comes in: Each object is (slightly) modified with an isosceles triangle of conductive points on the base, giving the touchscreen an exact location for the object, as well as the orientation, courtesy of the pointy end of the triangle. With this data, the robot can accurately visualize the CAD model of the object on the touchscreen, and as long as it knows exactly where the touchscreen is, it can then grasp the real object based solely on the model. The robot doesn’t have to “see” anything: you just need the touchscreen and an RFID reader, and a headless robot arm can grasp just about whatever you want it to.
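Here is a minimal sketch of that frame chain, with made-up example numbers: the screen's pose in the robot frame (known from calibration), the object's pose on the screen (from the triangle), and the grasp pose in the object's frame (from the database) compose into a gripper target in the robot frame.

    import numpy as np

    def se2(x, y, theta):
        """Homogeneous 2D transform: rotation by theta, then translation."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, x],
                         [s,  c, y],
                         [0,  0, 1]])

    T_robot_screen = se2(0.40, 0.00, 0.0)        # calibrated screen pose (assumed)
    T_screen_object = se2(0.12, 0.05, 0.8)       # from the triangle contacts
    T_object_grasp = se2(0.00, 0.02, np.pi / 2)  # grasp pose from the database

    # Chain the transforms to express the gripper target in the robot frame.
    T_robot_grasp = T_robot_screen @ T_screen_object @ T_object_grasp
    x, y = T_robot_grasp[0, 2], T_robot_grasp[1, 2]
    theta = np.arctan2(T_robot_grasp[1, 0], T_robot_grasp[0, 0])
    print(f"gripper target in robot frame: x={x:.3f}, y={y:.3f}, theta={theta:.3f}")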

Here’s a video of the Liatris project in action; keep in mind that this is a proof of concept, which is why a lot of it looks like it’s held together with electrical tape:

All of this stuff runs under ROS, using MoveIt! Again, it’s a proof of concept, and things like the open source, global database of objects that it depends on don’t entirely exist yet (although similar things do exist already). In terms of hardware, all you need is a touchscreen and RFID reader: the equipment used in the demo will run you maybe $1,600 in total. It only works with rigid, conductive objects right now because the touchscreen is capacitive, but good multi-touch resistive touchscreens might fix that.
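For a sense of what the execution step might look like, here is a hedged sketch using MoveIt’s Python bindings under ROS 1; the planning-group name ("arm") and the target pose values are assumptions, not the project’s actual configuration.

    import sys
    import rospy
    import moveit_commander
    from geometry_msgs.msg import Pose

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("liatris_grasp_demo")

    arm = moveit_commander.MoveGroupCommander("arm")  # assumed group name

    # Target computed from the touchscreen pose and the stored grasp pose.
    target = Pose()
    target.position.x, target.position.y, target.position.z = 0.52, 0.05, 0.15
    target.orientation.w = 1.0  # placeholder gripper orientation

    arm.set_pose_target(target)
    arm.go(wait=True)  # plan and execute in one call
    arm.stop()
    arm.clear_pose_targets()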

I know we’ve been kind of bashing vision this whole time, but future improvements to Liatris could add cameras to identify object states once the objects are known, for example detecting whether a pot has a lid on it, and whether it’s filled with anything. And with enough touchscreens in enough places, it could make collaborative robots happen without having to rely on vision that we haven’t gotten totally figured out yet, as Mark Silliman and his collaborators explain on the project’s website:

    The workspace could have many capacitive touch screens covering all work spaces. The exact location and identity of each touch screen would be passed to the local network, allowing a mobile robot to navigate to the specific touch screen and interact with objects on it. This means that any robot, potentially one of many in this workspace, would know exactly where everything is in the building. The result would be a true “internet of things” experience with humans and robots working together.



Source:
http://spectrum.ieee.org/automaton/robotics/robotics-software/liatris-vision-free-grasping-with-rfid-and-touchscreens

