Civic Design

Friday, October 13, 2006

Questions about ballot usability research

Regular readers of this blog know that I have been reviewing a lot of research about ballot usability. Although the results provide many needed answers, I just keep coming up with questions.

The first set is about understanding who voters are.

Who are voters, anyway?

  • Is there a documented mental model of voters' tasks that we all should be using to create a more accurate conceptual map for voting systems? For example, how common is it for voters to enter the booth knowing how they're going to vote? Are there voter personas written up anywhere?
  • Do we know how regular people normally talk about and perform the act of voting?
  • What is the conversation that the voter is having in the voting booth? What are we missing by looking mainly at time on task and error rates in mock voting situations?


Are we really understanding intent when we do research outside a real election?

Oh, and I have many questions about "voter intent" in the research.

  • Most of the research I've read so far is either retrospective and indirect (looking at residual vote rates for past elections; there's a quick sketch of that arithmetic after this list) or based on mock election setups (with made-up parties, candidates, and measures outside a regular election). Just taking the mock election type of study, how can we be sure we're accurately measuring voter intent?
  • If you give the voter a slate to vote from, aren't you just testing whether they can follow the instructions for the study (rather than the usability of the system or ballot)?
  • If you give the voter a guide to choose from, how do you know they didn't change their mind when they got to look at the actual ballot?
  • If you give the voter a guide to choose from, then ask them at the end of the session who they voted for, how reliable is what they remember?
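
As a quick aside for readers who haven't run into the term: "residual vote rate" is usually defined as the gap between ballots cast and valid votes recorded in a given contest, lumping together undervotes, overvotes, and spoiled marks. Here is a minimal sketch of that arithmetic; the function name and the numbers are mine, not from any particular study.

    # Rough sketch of residual vote rate; names and figures are made up.
    def residual_vote_rate(ballots_cast, valid_votes):
        """Share of ballots that record no countable vote in a contest."""
        if ballots_cast == 0:
            return 0.0
        return (ballots_cast - valid_votes) / ballots_cast

    # 10,000 ballots cast and 9,730 countable votes in the contest gives a
    # 2.7% residual vote rate, which by itself can't tell us whether voters
    # skipped the contest on purpose or lost their vote to the design.
    print(f"{residual_vote_rate(10_000, 9_730):.1%}")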

You can videotape the sessions at every station and then review the tapes, but even then, do you really know what the voter intended to do?

Also, setting up a situation where study participants review a guide first isn't realistic. We don't know whether people typically do this, for one thing. But for another, in a real election, voters are inundated for weeks or months with campaign material that, whether they are paying attention or not, must influence how they vote.


Performance versus preference

In usability studies done in the US, no matter what the topic, there is often no relationship between observed performance and how study participants rate the thing they used and their experience with it.

  • When studies use over- and undervotes as errors, why do satisfaction ratings matter?
  • What are we really learning about confidence and satisfaction by using ratings? We don't know why people feel confident that their vote is being cast as they intended.
  • On what do study participants base their confidence ratings? (Are we culturally predisposed to trusting computers?)


I don't think we know very much about who voters are. I don't think we have a good way of measuring intent in usability studies. And if performance and preference don't match, I'm not confident that some elections official won't use that difference to discount findings or to pick the set of data that best supports his position.
