Published: May 10, 2017

Computer scientist envisions a world where robots have that human touch

Just mention the words “drone” or “robot” and some will conjure unsettling visions of a future in which computers threaten to take over the world.

Dan Szafir, a professor in the Department of Computer Science and ATLAS Institute, envisions a day when robots make beds at understaffed nursing homes, drones fly over fields providing precise measurements of crop yields, and flying automatons hover around the International Space Station, handling mundane chores so astronauts can tend to more important tasks.

Rather than seeing such intelligent machines as replacements for people (as is so often the fear), Szafir views them as integral collaborators, able to help DIY-ers with household projects.

“The ultimate goal is to design robots that can better support human activities: to improve usability, efficiency, and how much people enjoy interacting with them,” Szafir says.

With an undergraduate degree in history and a PhD in computer science from the University of Wisconsin-Madison, Szafir arrived at CU Boulder in 2015 with a reputation, at age 27, as a key player in the burgeoning multidisciplinary study of human-robot interaction.

“There are a lot of good technology people and a lot of good social scientists, but individuals who bridge the gap between the two are rare. Dan is one of them,” says Bilge Mutlu, an assistant professor at UW and Szafir’s mentor.

Remotely controlled robots have long been used in factories, bomb disposal and space exploration. But as they transition to more complex, autonomous and intimate work alongside people, vacuuming homes like the iRobot Roomba or assisting shoppers like Lowe’s new robotic greeters, it’s becoming critical that humans and robots understand each other better, Szafir says.

With funding from NASA, the National Science Foundation and Intel, Szafir has rolled out several new research initiatives.

One aims to improve robots’ ability to understand nonverbal cues, like eye gaze, hand gestures and changes in voice intonation. “As people, we are coded to use gestures. It’s something we do naturally, and we are very good at untangling what they mean,” Szafir says. Robots, not so much. For instance, he explains, if you’re working on a car with a friend, you might say, “Hey, can you grab that wrench?” while pointing or glancing at the toolbox across the room. If your co-worker were a robot, you’d have to say: “Next, I need the 7 mm wrench. It is on this particular table in this particular place. Go pick it up and put it in my hand.”

Szafir and his graduate students will first videotape teams of human volunteers building something in the lab, painstakingly documenting their verbal and nonverbal cues. Next, he hopes to develop probabilistic models (if a human gestures like X, there’s a 90 percent likelihood she means Y) that could someday be used to develop software for more intuitive robots.
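As a rough illustration of what the simplest form of such a model might look like (a sketch, not Szafir’s actual software; the gesture and intent labels are invented), the annotated video sessions could be reduced to co-occurrence counts that estimate the probability of each intended meaning given an observed gesture:

```python
from collections import Counter, defaultdict

# Hypothetical annotations from videotaped build sessions: each record
# pairs an observed nonverbal cue with the intent the annotators
# judged it to convey.
observations = [
    ("point_at_toolbox", "fetch_tool"),
    ("point_at_toolbox", "fetch_tool"),
    ("glance_at_toolbox", "fetch_tool"),
    ("point_at_toolbox", "indicate_location"),
    ("open_palm", "hand_over_part"),
]

# Count co-occurrences to estimate P(intent | gesture).
counts = defaultdict(Counter)
for gesture, intent in observations:
    counts[gesture][intent] += 1

def intent_probabilities(gesture):
    """Return the estimated probability of each intent given a gesture."""
    intents = counts[gesture]
    total = sum(intents.values())
    return {intent: n / total for intent, n in intents.items()}

print(intent_probabilities("point_at_toolbox"))
# {'fetch_tool': 0.666..., 'indicate_location': 0.333...}
```

A real system would condition on far richer context, such as speech, gaze targets and task state, but the conditional-probability backbone is the same.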

He鈥檚 also exploring ways to design robots so humans can better predict their actions. 鈥淩ight now, drones are loud, very robotic looking and hard to predict,鈥 he says. 鈥淧eople find that unsettling.鈥

Szafir is also developing ways robots, drones and hand-held consumer devices can interact, sharing information gleaned from their myriad sensors to paint a fuller picture for a remote human user. Can’t make it to that football game? “We could potentially combine footage from drones overhead, ESPN, and pictures and videos from your friends’ cell phones to create a full, reconstructed 3D map of the environment and port it back to you at home using a virtual reality device. You’d get the sense that you were right there,” Szafir says.
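Stitching camera feeds into a single 3D scene is speculative as a product, but its geometric core is a classical technique called triangulation: if the same point is seen by two cameras whose positions are known, its location in space can be computed. A minimal sketch under toy assumptions (the cameras and pixel coordinates below are made up, not drawn from any real drone or broadcast footage):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its pixel coordinates (x1, x2) in two
    views with known 3x4 projection matrices, via the linear DLT method."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The best homogeneous solution of A @ X = 0 is the right singular
    # vector associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras one unit apart, both looking down the z-axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A point at (0, 0, 5) projects to (0, 0) in view 1 and (-0.2, 0) in view 2.
print(triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0)))  # ~[0. 0. 5.]
```

Repeating that step across thousands of matched points, and many cameras, is what turns scattered footage into the kind of navigable 3D map Szafir describes.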

Sound like science fiction? Maybe so. But Szafir, well aware that some are creeped out by his chosen field, believes the potential for good far outweighs the potential for harm.