Software development has long demanded the skills of two types of experts: those interested in how a user interacts with an application, and those who write the code that makes it work. The boundary between the user experience (UX) designer and the software engineer is well established. But the advent of “human-centered artificial intelligence” is challenging traditional design paradigms.
“UX designers use their understanding of human behavior and usability principles to design graphical user interfaces. But AI is changing what interfaces look like and how they operate,” says Hariharan “Hari” Subramonyam, a research professor at the Stanford Graduate School of Education and a faculty fellow of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
In a new preprint paper, Subramonyam and three colleagues from the University of Michigan show how this boundary is shifting and have developed recommendations for ways the two can communicate in the age of AI. They call their recommendations “desirable leaky abstractions.” Leaky abstractions are practical steps and documentation that the two disciplines can use to convey the nitty-gritty “low-level” details of their vision in language the other can understand.
Read the study: Human-AI Guidelines in Practice: The Power of Leaky Abstractions in Cross-Disciplinary Teams
“Using these tools, the disciplines leak key information back and forth across what was once an impermeable boundary,” explains Subramonyam, a former software engineer himself.
Less is not always more
As an example of the challenges presented by AI, Subramonyam points to facial recognition used to unlock phones. Once, the unlock interface was easy to describe. User swipes. Keypad appears. User enters the passcode. Application authenticates. User gains access to the phone.
With AI-powered facial recognition, however, UX design begins to go deeper than the interface, into the AI itself. Designers must think about things they’ve never had to before, like the training data or the way the algorithm is trained. They are finding it hard to understand AI capabilities, to describe how things should work in an ideal world, and to build prototype interfaces. Engineers, in turn, are finding they can no longer build software to exact specifications. Engineers, for instance, often treat training data as a non-technical specification; that is, training data is someone else’s responsibility.
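The contrast can be sketched in a few lines of illustrative code (the function names and threshold here are hypothetical, not from the paper). A passcode check is exact and fully specified in advance; a face match returns a model confidence score, and where to draw the cutoff is now a design decision as much as an engineering one:

```python
def passcode_unlock(entered: str, stored: str) -> bool:
    # Deterministic: the specification fully describes the behavior.
    return entered == stored

def face_unlock(similarity: float, threshold: float = 0.8) -> bool:
    # Probabilistic: `similarity` comes from a trained model, and the
    # threshold trades false accepts against false rejects, a choice
    # that involves both designers and engineers.
    return similarity >= threshold

print(passcode_unlock("1234", "1234"))  # True
print(face_unlock(0.91))                # True: above threshold
print(face_unlock(0.62))                # False: below threshold
```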
“Engineers and designers have different priorities and incentives, which creates a lot of friction between the two fields,” Subramonyam says. “Leaky abstractions are helping to ease that friction.”
In their research, Subramonyam and colleagues interviewed 21 application design professionals — UX researchers, AI engineers, data scientists, and product managers — across 14 organizations to conceptualize how professional collaborations are evolving to meet the challenges of the age of artificial intelligence.
The researchers lay out a number of leaky abstractions that UX professionals and software engineers can use to share information. For UX designers, suggestions range from sharing qualitative codebooks to communicating user needs through annotations on training data. Designers can also storyboard ideal user interactions and desired AI model behavior, or record user testing sessions to provide examples of faulty AI behavior that can guide iterative interface design. The authors also suggest inviting engineers to participate in user testing, a practice uncommon in traditional software development.
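One way such annotations might look in practice is sketched below; the field names and notes are hypothetical examples, not artifacts from the study. The idea is that a designer's qualitative codebook and per-example notes "leak" design intent into the data engineers use:

```python
# A hypothetical training example annotated with a UX note, so design
# intent travels alongside the data the engineers work with.
example = {
    "image": "user_photo_0042.jpg",
    "label": "match",
    "ux_note": "Low-light selfies are common; unlock should still work",
}

# A fragment of a shared qualitative codebook mapping short codes
# to the user situations they describe.
codebook = {
    "low_light": "Captured indoors or at night; expect noisy input",
    "partial_face": "Face partly occluded by a mask, hand, or hair",
}

print(example["ux_note"])
print(codebook["low_light"])
```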
For engineers, the co-authors recommend leaky abstractions such as compiling computational notebooks of data characteristics, providing visual dashboards that set expectations for AI and end-user performance, creating spreadsheets of AI outputs to aid prototyping, and “exposing” the various “knobs” designers can use to fine-tune algorithm parameters.
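"Exposing knobs" might look something like the sketch below; the class and parameter names are illustrative assumptions, not from the paper. The engineer surfaces a small set of tunable parameters so a designer can adjust behavior without touching model code:

```python
from dataclasses import dataclass

@dataclass
class FaceUnlockKnobs:
    # Hypothetical parameters an engineer might expose so a designer
    # can tune unlock behavior without editing the model itself.
    match_threshold: float = 0.8  # higher means fewer false accepts
    max_attempts: int = 5         # failures before passcode fallback
    retry_delay_s: float = 1.0    # pause between failed attempts

# A designer tightens matching for a security-sensitive flow.
knobs = FaceUnlockKnobs(match_threshold=0.9)
print(knobs.match_threshold)
```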
The authors’ main recommendation, however, is for these collaborating parties to postpone committing to design specifications as long as possible. The two disciplines must fit together like pieces of a jigsaw puzzle. Fewer complexities mean an easier fit. It takes time to polish those rough edges.
“In software development, there is sometimes a misalignment of needs,” Subramonyam says. “Instead, if I, the engineer, create an initial version of my puzzle piece and you, the UX designer, create yours, we can work together to address misalignment over multiple iterations, before establishing the specifics of the design. Then, only when the pieces finally fit, do we solidify the application specifications at the last moment.”
In all cases, the historic boundary between engineer and designer is the enemy of good human-centered design, Subramonyam says, and leaky abstractions can penetrate that boundary without rewriting the rules altogether.
Andrew Myers is a contributing writer for the Stanford Institute for Human-Centered AI.
This story originally appeared on Hai.stanford.edu. Copyright 2022