
Telestration: How Helena Mentis Applies Design Thinking to Surgery

Helena Mentis is the director of the Bodies in Motion Lab at the University of Maryland, Baltimore County (UMBC), with research spanning human-computer interaction (HCI), computer supported cooperative work (CSCW), and medical informatics. During a recent visit to the Design Lab at UC San Diego, Mentis talked about her research on surgery in the operating room.

She examines the medical world through surgical instruments and the workflow inside the operating room. Mentis homes in on minimally invasive surgery and its reliance on imaging. She is particularly interested in how medical professionals see and share visual information collaboratively, a practice that has grown over the past several years. She asks, “What happens if surgeons were given greater control over the image? What would happen to the workflow? Would it change anything?”

In one study at Thomas Hospital in London, surgeons relied heavily on pointing gestures to direct the operation. When confusion arose, the surgeon had to stop and restate exactly what was intended. This break in the workflow led Mentis’s team to ask: what if we built a touchless illustration system that responded to the surgeon’s gestures? Her team set out to build what she calls “telestration,” which enables surgeons to use gestures to illustrate their intentions on an interactive display.
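
To make the interaction concrete, here is a minimal sketch of the idea, not Mentis’s actual system: a hypothetical touchless tracker reports a hand position plus whether a “draw” gesture (say, a pinch) is being held, and the display accumulates those positions into annotation strokes over the surgical video. The class names and the simulated input below are assumptions for illustration only.

    # Illustrative sketch only; the tracker is simulated so the example runs standalone.
    from dataclasses import dataclass, field

    @dataclass
    class HandSample:
        x: float        # normalized display coordinates (0..1)
        y: float
        drawing: bool   # True while the surgeon holds the draw gesture

    @dataclass
    class Telestrator:
        strokes: list = field(default_factory=list)   # committed annotation strokes
        _current: list = field(default_factory=list)  # stroke being drawn right now

        def update(self, sample: HandSample) -> None:
            if sample.drawing:
                self._current.append((sample.x, sample.y))  # extend the live stroke
            elif self._current:
                self.strokes.append(self._current)          # gesture released: commit it
                self._current = []

        def clear(self) -> None:
            # e.g., annotations wiped between surgical steps
            self.strokes, self._current = [], []

    # Simulated feed: the surgeon pinches, traces a short line, then releases.
    feed = [HandSample(0.40, 0.50, True), HandSample(0.45, 0.52, True),
            HandSample(0.50, 0.55, True), HandSample(0.50, 0.55, False)]
    t = Telestrator()
    for s in feed:
        t.update(s)
    print(t.strokes)  # [[(0.4, 0.5), (0.45, 0.52), (0.5, 0.55)]]

In a real operating room the samples would come from a depth sensor or similar tracker and the strokes would be rendered over the endoscopic feed, but the core mapping from a held gesture to an on-screen annotation is this simple accumulation.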

During another operation, the surgeon encountered a soft bone and had to stop the procedure. As a result, the surgeon had to take off their gloves to re-examine the tissue on the visual display. Mentis notes, “There is a tight coupling between images on display and feeling with the instrument in hand.” If the image on display were more closely integrated with the workflow, would that save time in the operating room?

After she published her findings, much of the response centered on how voice narration, rather than gesture, aided imaging and collaboration in surgery. Consequently, Mentis asked, “If given the opportunity would doctors use voice or gesture?” The ensuing observations revealed that while doctors said they preferred voice, they more often used gesture to shape telestration images. Voice narration and gestures gave surgeons richer interaction with the image, but they also added time to the operation. Mentis reasons, “There is more opportunity for collaborative discussion with the information.” The longer operations, in other words, yielded greater opportunities to uncover and discuss critical information.

About Helena Mentis, Ph.D.

Assistant Professor, Department of Information Systems
University of Maryland, Baltimore County

Helena Mentis, Ph.D., is an assistant professor in the Department of Information Systems at the University of Maryland, Baltimore County. Her research contributes to the areas of human-computer interaction (HCI), computer supported cooperative work (CSCW), and health informatics. She investigates how new interactive sensors can be integrated into the operating room to support medical collaboration and care. Before UMBC, she was a research fellow at Harvard Medical School, held a joint postdoctoral fellowship at Microsoft Research Cambridge and the University of Cambridge, and was an ERCIM postdoctoral scholar at Mobile Life in Sweden. She received her Ph.D. in Information Sciences and Technology from Pennsylvania State University.
