Telestration: How Helena Mentis Applies Design Thinking to Surgery

Helena Mentis is the director of the Bodies in Motion Lab at the University of Maryland, Baltimore County (UMBC), with research spanning human-computer interaction (HCI), computer supported cooperative work (CSCW), and medical informatics. During a recent visit to the Design Lab at UC San Diego, Mentis talked about her research on surgery in the operating room.

She examines the medical world through surgical instruments and the workflow inside the operating room. Mentis homes in on minimally invasive surgery and its reliance on images. She is particularly interested in how medical professionals see and share visual information collaboratively, a practice that has grown over the past several years. She asks, “What happens if surgeons were given greater control over the image? What would happen to the workflow? Would it change anything?”

In one study at Thomas Hospital in London, surgeons relied heavily on pointing gestures to direct the operation. When confusion arose, the surgeon had to restate his exact intention to the others. This break in the workflow inspired Mentis’ team to ask: what if we were to build a touchless illustration system that responded to the surgeon’s gestures? Her team set out to build what she calls “telestration,” which enables surgeons to use gestures to illustrate their intentions on an interactive display.
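
The article does not describe Mentis’ implementation, but the core idea of gesture-driven telestration can be sketched in a few lines: tracked hand positions are accumulated into strokes that an interactive display draws over the live surgical image. The class and method names below are hypothetical, not taken from her system.

```python
# Hypothetical sketch of a telestration overlay: touchless gesture
# points (e.g., from a depth sensor tracking the surgeon's hand) are
# collected into strokes rendered over the surgical video feed.

class TelestrationOverlay:
    """Collects tracked gesture points into strokes over a video frame."""

    def __init__(self):
        self.strokes = []    # finished annotations to draw each frame
        self.current = None  # stroke currently being gestured

    def begin_stroke(self):
        # A "start drawing" gesture would trigger this.
        self.current = []

    def add_point(self, x, y):
        # Called for each tracked hand position while the surgeon gestures.
        if self.current is not None:
            self.current.append((x, y))

    def end_stroke(self):
        # A "stop drawing" gesture commits the stroke to the overlay.
        if self.current:
            self.strokes.append(self.current)
        self.current = None

    def clear(self):
        # A wipe gesture might map to this, resetting the display.
        self.strokes = []
        self.current = None

overlay = TelestrationOverlay()
overlay.begin_stroke()
for point in [(120, 80), (125, 84), (131, 90)]:
    overlay.add_point(*point)
overlay.end_stroke()
print(len(overlay.strokes))  # one committed stroke
```

In a real system the interesting work lies upstream of this sketch, in reliably segmenting intentional drawing gestures from ordinary hand movement over the sterile field.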

During another operation, the surgeon encountered a soft bone and had to stop the procedure. As a result, the surgeon had to take off their gloves to re-examine the tissue on the visual display. Mentis notes, “There is a tight coupling between images on display and feeling with the instrument in hand.” If the image on display could be more closely integrated with the workflow, would this save time in the operating room?

After publishing her findings, Mentis heard from many who argued that voice narration, rather than gesture, aided imaging and collaboration in surgery. Consequently, she asked, “If given the opportunity, would doctors use voice or gesture?” The ensuing observations revealed that while doctors stated a preference for voice, they used gesture more frequently to shape telestration images. Voice narration and gestures gave surgeons richer interaction with the image, but they also added time to the overall operation. Mentis reasons, “There is more opportunity for collaborative discussion with the information.” The extra time, in other words, yielded greater opportunities to uncover and discuss critical information.

About Helena Mentis, Ph.D.

Assistant Professor, Department of Information Systems
University of Maryland, Baltimore County

Helena Mentis, Ph.D., is an assistant professor in the Department of Information Systems at the University of Maryland, Baltimore County. Her research contributes to the areas of human-computer interaction (HCI), computer supported cooperative work (CSCW), and health informatics. She investigates how new interactive sensors can be integrated into the operating room to support medical collaboration and care. Before UMBC, she was a research fellow at Harvard Medical School, held a joint postdoctoral fellowship at Microsoft Research Cambridge and the University of Cambridge, and was an ERCIM postdoctoral scholar at Mobile Life in Sweden. She received her Ph.D. in Information Sciences and Technology from Pennsylvania State University.

Read Next

New Design Lab Faculty Working to Shape the Future of UC San Diego

As a global pioneer in design thinking, research, and invention, The Design Lab prides itself on recruiting the brightest and most innovative minds in the design field. Today, we would like to extend a warm welcome to brand new faculty members Elizabeth Eikey, Haijun Xia, and Edward Wang!

Elizabeth Eikey
From a first-generation undergraduate student at Penn State, to an inquisitive Best Buy employee, and finally to Assistant Professor in the Department of Family Medicine and Public Health and The Design Lab at UCSD, Dr. Elizabeth Eikey has had an illustrious career. Her research work at The Design Lab focuses on the intersection between technology, mental health, and equity, primarily studying the possible applications for technology in supporting mental health and well-being.

Haijun Xia
After receiving his PhD in Computer Science from the University of Toronto, Xia moved across countries to begin his time as a researcher at UC San Diego. ‘I wanted to work at The Design Lab and UC San Diego because of the diversity of skill here,’ says the Professor. ‘We are all approaching the many challenging research questions from different angles, which is really important to develop comprehensive solutions.’

Edward Wang
When Edward Wang was an undergraduate student at Harvey Mudd, he never expected to become a researcher, let alone a professor. It was only after a professor offered him the chance to help design a course she was planning about biosignal processing that he started down this path. ‘As I was designing the class over summer, I had to read a bunch of papers,’ he says. ‘I couldn’t stop thinking about how cool all of it was. Especially when it branched out into computer science and how it could be involved in biosignal processes.’

Interdisciplinary Powerhouse: Pinar Yoldas is a Perfect Fit for the Design Lab

Pinar Yoldas describes herself as an interdisciplinary designer, artist, and researcher whose current research revolves around speculative biology, in which she designs and creates what could possibly be the next steps of evolution for human tissues, organs, and bodies. Evolution, in the eyes of Yoldas, includes the potential for humans in the future to possess modular bodies, allowing them to interchange or add on additional sexual organs.

She is currently an Assistant Professor in the Visual Arts Department at UC San Diego and a member of The Design Lab. While she earned her PhD in Visual and Media Design from Duke University, her interests and credentials don’t stop there. Yoldas also holds an MFA in Game and Interactive Media Design from UC Los Angeles; an MA in Visual Arts from Bilgi University; an MS in Information Technologies from Istanbul Technical University; and a Bachelor of Architecture with a minor in Sociology from Middle East Technical University. Combining her passions for science, art, and, undoubtedly, education, Yoldas has served throughout her career as a bridge between five different disciplines, and she stands as an inspiration for the pursuit and practical application of interdisciplinary science and art studies.

Lab Focused on Human-Centered Design Moves to Put San Diego on Map

Xconomy Article
For Michèle Morris, the big question hanging over organizers as they laid the groundwork last year for the first Design Forward Summit was whether the innovation community in San Diego understood the value of design.

“We didn’t know who was going to show up—and 600 people showed up,” said Morris, who is associate director of the Design Lab at UC San Diego and a founder of the Design Forward Summit.

Now, with the second Design Forward Summit set to begin Wednesday on San Diego’s downtown waterfront (and Thursday in Liberty Station), Morris said the question to be answered this year is “What’s next?”

San Diego council committee unanimously approves ordinances targeting surveillance technology

Photo courtesy of John Gibbins/The San Diego Union-Tribune

A City Council committee on Wednesday unanimously approved two proposed ordinances geared at governing surveillance technologies in the city, an action sparked by sustained pushback from activists and others who were surprised and upset last year when it was revealed that San Diego had quietly installed cameras on streetlights throughout the city.

Lilly Irani, an associate professor at UC San Diego (and Design Lab faculty) who specializes in the ethics of technology, called the vote “a win for better governance in the long term.”

Irani helped draft the ordinances and assisted the organized opposition dubbed the TRUST San Diego coalition, which focuses on responsible surveillance in the region. The coalition was born out of concerns about one specific technology — so-called smart streetlights — and ultimately landed a seat at the table to draft the proposals.

“Without Councilmember Monica Montgomery championing this... there would be no table,” Irani said.

Design Lab member Benjamin Bergen featured as an expert in “History of Swear Words”

Picture Credit: Netflix

Design Lab member and UC San Diego Cognitive Science professor Benjamin Bergen was featured as an expert in "History of Swear Words," a new Netflix comedy series exploring the usage of and science behind cursing. Bergen is the author of "What the F: What Swearing Reveals About Our Language, Our Brains, and Ourselves" and "Louder Than Words: The New Science of How the Mind Makes Meaning."

Watch the full series now on Netflix!

Ford Gifts $50K to Design Lab People-Centered Automation

Colleen Emmenegger, Head of People-Centered Automation at The Design Lab, was recently the recipient of a $50,000 grant from Ford Motor Company. The grant was awarded for her work regarding how drivers can understand, negotiate, and manage shared autonomy with their vehicles in a way that is accessible and easily translatable.

“We're trying to figure out if you can build a contract with the driver and her automated vehicle co-pilot so the driver knows exactly what they need to do and what the system does," says Emmenegger. "We're trying to build something that explicitly and continuously communicates, and that doesn't act as an invisible ‘controlling entity’ of the car. A system that provides dynamic, yet constant feedback to the driver and not sudden, startling warnings." 