A Humanist Explores AI

By Tad Vezner

How do you know things? How do you make sense of the world?
Or, to be more precise, how do you comfortably inquire about information, and what’s your source of choice? Those were a couple of seemingly simple yet incredibly complex questions that David McGaw (M.Des. ’07) asked when he started working as a designer for Google.
He was fascinated enough to write an internal paper for the ubiquitous tech company after joining its user experience research team nearly a decade ago. He’d initially been working on Google Assistant, a virtual assistant software app able to engage in two-way conversations, trying to figure out how users would use it—or would want to.
The notion of a query is, after all, a very modern one. The way you used to learn things, McGaw notes, was by reading a book or by talking to wiser, more informed people than you.
McGaw wrote his paper, which referenced the philosopher Plato, to help others explore the question of epistemology, the philosophical study of knowledge.
“Do people approach knowledge as a series of questions that they want answers to? Or is there a longer, deeper process, in the context of a question, as part of a community? In the modern era, we’ve turned this into typing words into a screen. But maybe human brains have historically operated in a different way,” McGaw says. “Do you type, or do you ask?”
When trying to learn about the world via computers, yes, people had trained themselves to type things in. McGaw’s team was trying to introduce a new way of interacting, a more natural one.
“It turns out it works better if people just talk. At length,” he says. “You’re trying to get people to unlearn their ‘use a computer’ skill [and] just talk to it like a person.”
“The role of design in a tech-forward innovation world is more about, ‘How do we connect what you’re building with how people behave?’” McGaw adds. “But there’s a larger humanist question of, how do you find meaning? Is it less about mastering a tool and making it jump through hoops, or more about working in a partnership, with technology and humans?”
Which brings McGaw to his newest gig at Google: working as a design strategist on DeepMind, Google’s artificial intelligence apparatus.
Back when McGaw studied ancient history at Yale University in the 1980s, he ran a letterpress print shop, working on a 100-year-old press. Though he wouldn’t formally study design for decades, he considers the work a crucial part of his design development.
“The intersection of design with the mechanical process to execute it was a fascinating junction,” he says. “How do you make information clear and interesting?”

WHAT IS AN AI?

Last fall, Google conducted a global study on how people expect to interact with AI, both now and in the future. It found that people’s “mental models” were struggling to catch up with how rapidly the streams and formats of information dissemination were developing, McGaw says. It came down, again, to how people would end up viewing this highly complex, interactive informational source.
What is an AI—is it a tool? A collaborative partner? An apprentice that one mentors? A relationship one develops?
For AI to be used to its maximum potential, McGaw says, the relationship needs to be more akin to a bond that develops over time.
“You shouldn’t have to master skills or have to prompt an AI; you should train an AI to be a good partner to you,” McGaw says.
That broad idea seems simple, but the implications relating to products can be profound.
“We’re used to using technology in a push-button way. But if AI can help you in more abstract ways, it requires working with it in a more abstract way. So you have to have a better expectation of the rhythms of the interaction,” he says.
“You’ll find a lot of folks who come to the role of [user experience] researcher through the science of human research, the study of human thought or behavior. But you’re not going to find a lot of Davids,” says Julie Anne Seguin, a fellow user experience researcher at Google who works with McGaw. “David comes from a design strategy background, even leaning on the business strategy side of things. He’s kind of the big picture guy; he’s always looking toward the future.”
“David is really good at uncovering, what are people’s natural expectations? What are people’s pain points and needs?” Seguin adds.
One big challenge for AI developers in every company right now, McGaw says, is figuring out how much an AI should explore the context of questions before arriving at its conclusions. AIs that answer questions with questions for additional information will home in on better answers, but may be more annoying for users.
“It’s still about these large systems and how technology and culture and people interact,” McGaw says. “We get to decide, but I think it helps to have people who understand the cultural aspects as well as the technological aspects, and the implications of how they come together.”
McGaw became a designer in a roundabout way. Long after receiving his history degree at Yale, he became a campus minister with the interdenominational Campus Crusade for Christ at Harvard University.
He chose to study design at Illinois Institute of Technology because he thought it would help him with his intermittent graphic work, but discovered the discipline’s larger context of problem solving and fell in love with it.
He worked at McKinsey & Company for years, as well as at several smaller design firms in the San Francisco area, before being approached to work at Google.
Throughout it all, his faith and humanist beliefs have grounded him.
“AIs are an interesting part of the tech landscape, and something will come after that,” McGaw says. “I like having the tool set to address those problems, and design is an interesting space between pure art and pure problem solving. How do you bring your whole self to what you do? Find places where you can really be you. Design is another way of thinking about patterns that humans make and [the] ways we work together to create something.”
