What Is The Facebook Robotics Lab?

At first glance, Facebook’s emerging robotics work seems a bit confusing. In a new lab at the company’s Silicon Valley headquarters, a red and black Sawyer robotic arm (made by Rethink Robotics, which recently shut down) sways with a mechanical hum.

The arm is supposed to move its gripper to a point in space just to its right, but it hesitates, drifts off its path, and returns to the starting position. Then it moves right again, coming very close to the target, only to pull back at the last moment. We silently cheer it on, and it suddenly veers off course once more.

But like a rabbit zigzagging back and forth to evade a peregrine falcon, this robot’s apparent madness is the mark of a strange kind of intelligence, one that Facebook sees as the key to developing better artificial intelligence as well as better robots.

This robot, you see, is teaching itself to explore the world. And, according to Facebook, that could one day lead to intelligent machines such as telepresence robots.

Why Facebook is Doing Robotics Research

It may sound strange that the world’s biggest social network is doing robotics research rather than, say, making its search useful, but Facebook is a large organization with many competing priorities. While these robots won’t directly affect the Facebook experience, what the company learns from them could have unexpected payoffs.

Robotics is a new area of research for Facebook, but its reliance on, and pioneering work in, AI are well known. Mechanisms that can be called AI (the definition is notoriously fuzzy) control all sorts of things on the platform, from camera effects to the automatic moderation of restricted content.

Artificial intelligence and robotics naturally overlap, which is why there are events and disciplines that span both. Advances in one field often carry over to the other, or open up new areas of study in it.

So it’s not surprising that Facebook, with its strong interest in using AI to solve real problems in the world of social media, might want to dip into robotics in search of new ideas.

What Can It Be Used For?

Facebook itself is an enormously complex body of data, loosely organized at best. Learning to navigate a network of computers is, of course, very different from learning to navigate an office, but the idea of a system that teaches itself the basics in a short time, given only a few simple rules and goals, is common to both.
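To make that idea concrete, here is a toy sketch, entirely illustrative and not anything Facebook has described, of an agent learning to reach a goal from nothing but a couple of simple rules and a reward:

```python
import numpy as np

# Minimal illustration of "a few simple rules and goals": tabular
# Q-learning on a one-dimensional corridor. The agent only knows its
# position, two actions (left/right), and a reward at the far end.
n_states, goal = 8, 7
Q = np.zeros((n_states, 2))             # action 0 = left, action 1 = right
rng = np.random.default_rng(0)

for _ in range(500):                    # training episodes
    s = 0
    while s != goal:
        # Mostly greedy, with 10% random exploration.
        a = rng.integers(2) if rng.random() < 0.1 else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s_next == goal else 0.0
        # Standard Q-learning update (learning rate 0.5, discount 0.9).
        Q[s, a] += 0.5 * (r + 0.9 * Q[s_next].max() - Q[s, a])
        s = s_next

# After training, the greedy policy walks straight to the goal.
print(Q.argmax(axis=1))                 # mostly 1s (move right)
```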

Understanding how AI systems teach themselves, and how to remove obstacles like faulty priors, conflicting rules, and odd data-handling habits, matters for any agent that has to operate in the real world, or the virtual one.

Perhaps the next time there’s a humanitarian crisis that Facebook needs to monitor on its platform, the AI model helping it do so will draw on efficiencies learned in experiments like these.

Putting “curiosity” to use

This part is harder to visualize, but it’s more interesting. Everyone has a little curiosity, after all, and while we know what it occasionally does to cats, it’s mostly a drive that helps us learn more effectively. Facebook applied the concept of curiosity to a robotic arm asked to perform various everyday tasks.

Now, it may seem odd to speak of imbuing a robot arm with “curiosity,” but in this context the term simply means that the AI in charge of the arm, whether it’s seeing, deciding, or moving, is given an incentive to reduce its uncertainty about its own actions.

That can mean lots of little things. Rotating its camera slightly while identifying an object might give it a slightly better view, improving its confidence in the recognition. It might look at the target area first to double-check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase its confidence can ultimately get the task done faster, even if the “curious” detours slow it down at first.
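One common way to implement this kind of curiosity, sketched here under assumptions rather than as Facebook’s published method, is to reward the agent where an ensemble of learned dynamics models disagrees, i.e., where its uncertainty about an action’s outcome is highest:

```python
import numpy as np

class EnsembleCuriosity:
    """Toy curiosity signal: disagreement among an ensemble of learned
    forward-dynamics models. High disagreement means high uncertainty
    about what an action will do, so trying (and then learning from)
    that action is rewarded."""

    def __init__(self, n_models=4, state_dim=3, action_dim=2, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Each model is a linear predictor: next_state ~ W @ [state, action].
        self.models = [rng.normal(0, 0.1, (state_dim, state_dim + action_dim))
                       for _ in range(n_models)]
        self.lr = lr

    def intrinsic_reward(self, state, action):
        x = np.concatenate([state, action])
        preds = np.stack([W @ x for W in self.models])
        # Variance across ensemble members, averaged over state dimensions.
        return preds.var(axis=0).mean()

    def update(self, state, action, next_state):
        # One gradient step per model toward the observed next state.
        x = np.concatenate([state, action])
        for W in self.models:
            err = (W @ x) - next_state
            W -= self.lr * np.outer(err, x)

# Hypothetical usage: blend the curiosity bonus into the task reward.
cur = EnsembleCuriosity()
s, a, s_next = np.zeros(3), np.ones(2), np.array([0.5, 0.1, -0.2])
total_reward = 1.0 + 0.1 * cur.intrinsic_reward(s, a)  # task + curiosity
cur.update(s, a, s_next)
```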

Seeing by touch

Vision matters, but it isn’t the only way we, or robots, perceive the world. Many robots carry sensors for motion, sound, and other parameters, but an actual sense of touch is relatively rare. Blame the lack of good tactile interfaces (though we’re getting there). Facebook’s researchers wanted to investigate whether tactile data could stand in for visual data.

Think about it and it makes perfect sense. People with visual impairments use touch to navigate their surroundings and to perceive fine details of objects. It’s not that they “see” through touch, but there’s significant overlap between the two ideas. So Facebook’s researchers took an AI model designed to decide what to do based on video, and fed it high-resolution touch data instead of actual video.

It turns out the algorithm doesn’t much care whether it’s looking at the world the way we see it. As long as the data is presented visually, for example as a pressure map from a tactile sensor, it can find patterns there just as it would in a photographic image.
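As a minimal sketch of that idea, assuming PyTorch and an entirely made-up architecture and action set rather than the researchers’ actual model, a tactile pressure map can be fed to an ordinary image network as a one-channel “image”:

```python
import torch
import torch.nn as nn

# A tactile pressure map is just a 2D grid of readings, so it can go
# through a standard convolutional network exactly like a grayscale
# photo. This tiny CNN scores 4 hypothetical next actions.
class TactilePolicy(nn.Module):
    def __init__(self, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # pressure map in
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, n_actions),                    # action scores out
        )

    def forward(self, pressure_map):
        return self.net(pressure_map)

# Fake 32x32 sensor frame standing in for real tactile data.
frame = torch.rand(1, 1, 32, 32)    # (batch, channel, height, width)
logits = TactilePolicy()(frame)
action = logits.argmax(dim=1)       # pick the highest-scoring action
```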

Learning to walk from scratch

Walking is an incredibly complex act, or rather series of acts, especially when you have six legs, as the robot in this experiment does. You could program exactly how the legs should move to walk forward and turn, but doesn’t that feel a bit like cheating? After all, we each had to learn it ourselves, with no instruction manual or imported settings. So the team set out to have the robot teach itself to walk.

This isn’t a new type of study. Plenty of robotics and AI researchers are working on it, and evolutionary algorithms (different, but related) have a long history here; there have already been interesting papers on the subject.

The team gave the robot a few basic priorities, such as a “reward” for moving forward, but no real notion of how to operate its legs. From there it simply experimented, trying one thing after another. The goal, the researchers say, is to get the robot from lying idle to walking stably, and to cut the time that takes from weeks down to hours.
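In spirit, the setup looks something like the toy sketch below: a reward for forward progress, a penalty for falling, and a simple search over gait parameters. Everything here, the environment, the reward weights, the hill-climbing search, is an illustrative assumption, not Facebook’s actual method:

```python
import numpy as np

def reward(x_before, x_after, fell_over):
    # Pay the robot for forward progress; penalize falling over.
    return (x_after - x_before) - (10.0 if fell_over else 0.0)

def rollout(gait_params, steps=50, noise=0.05, seed=0):
    """Score one attempt at walking with the given gait parameters."""
    rng = np.random.default_rng(seed)
    x, total = 0.0, 0.0
    for t in range(steps):
        # Gait as a tiny periodic controller: the parameters weight
        # sine/cosine "leg oscillators" to produce each stride.
        phase = np.array([np.sin(0.5 * t), np.cos(0.5 * t),
                          np.sin(1.0 * t), np.cos(1.0 * t)])
        stride = float(gait_params @ phase) + noise * rng.normal()
        fell = abs(stride) > 1.0         # overly wild motion topples it
        total += reward(x, x + stride, fell)
        x += 0.0 if fell else stride
    return total

# Simple hill climbing: perturb the gait, keep whatever walks farther.
rng = np.random.default_rng(1)
params, best = np.zeros(4), rollout(np.zeros(4))
for i in range(200):
    candidate = params + 0.1 * rng.normal(size=4)
    score = rollout(candidate, seed=i)
    if score > best:
        params, best = candidate, score
print(best)    # total reward of the best gait found
```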


Right now it’s not clear what Facebook will do with the insights it gains, but they could prove fruitful for the company’s AI-based hardware, such as the Facebook Portal. Training like this is slow going in the real world, but it would set robots up for more efficient deployment down the road (on terrain where obstacles come as a surprise, for instance). The company also says it’s confident the data gained from robot training can aid AI learning in other scenarios, such as helping digital assistants gauge user needs.

Self-supervised learning means robots could pick up skills, such as taking steps in snow, without any manual help. As things stand, reinforcement learning doesn’t extend very far, and robots are often called “dumb” because their creators must write a line of code for every action they take. With its robotics lab, Facebook is trying to set them up for future challenges, and doing so effectively would mark significant progress in the AI world. The future looks bright for these friendly-looking yet dependent, dumb machines.

Conclusion

What does a social media giant like Facebook want with robots? For now, Facebook says the research isn’t tied to any product. But remember that Facebook’s business is bringing people together (and, yes, selling ads). The company already builds hardware: the Portal video-conferencing device and Oculus VR headsets. A plausible next step along that line is a machine that can move around a remote space, a telepresence robot, though as recent coverage in WIRED suggests, privacy and security issues would surely arise.

But we’re getting ahead of ourselves. Every home robot except the Roomba has failed so far, partly because the machines aren’t smart or useful enough, and none are particularly agile. Facebook’s wayward robotic arm might help fix that, and ready these machines for the coming robot revolution.