Civic Design

Saturday, May 09, 2009

Testing ballots: Real names or fictional? Direct how to vote or not?

How do you design a study to learn about ballot and voting system usability without doing the research during an election? The ideal situation would be to watch over voters’ shoulders on Election Day. But because we prize voting privately in the United States, observing voters on Election Day simply isn’t an option. What’s a researcher to do?

It’s a challenging research situation.

What ballot should be used in the study? Should you use a real ballot from a recent election? What are the tradeoffs there? Next, you have to set up a situation that resembles Election Day without actually being Election Day. It turns out that making a study exactly like Election Day doesn’t really work for research.


Why researchers use constructed ballots with fictional names

There's a lot of research about whether fictional names are okay in voting studies, particularly by the people at Rice University in the ACCURATE project. They found that fictional names are okay as long as they're realistic. The NIST standard ballot for certification testing uses fictional names. Many researchers have picked up that ballot (or subsets of it) to use in their research.

When you use real names locally, it can be jarring if the design or the format looks different from what participants are expecting, and that mismatch instantly introduces artifacts into the data. If even one thing doesn’t look like the ballot they actually used, voters notice, and it’s an instant distraction. So why not make the whole thing up?

Most researchers have decided not to use a real ballot from a recent election. Why not? Using a constructed ballot, with fictional contests, names, and amendments or questions:
  • avoids asking people to vote in a contest where they might have their own opinions, or where voting would reveal their political preferences

  • levels the playing field across participants with different levels of political interest

  • allows constructing a ballot that can test specific usability issues across different types of contests and tasks

Why researchers tell participants how to vote

Now, why not let study participants vote the way they want to? Why give them a slate to vote or task scenarios to work from?

In usability tests, researchers often ask participants to carry out predetermined scenarios. Sometimes this is done to measure specific behavior, sometimes to make sure certain things are tested, sometimes to make sure that the facilitator is ready for the next expected thing. Part of the art of conducting a usability study is knowing when to let participants do what they want to do and knowing when to go back to the test design. (In the ideal world, what the participant wants to do and what you want them to do are the same thing.) Researchers in the elections space make this decision consciously and deliberately to make sure that they can collect the measures that will support (or refute) a hypothesis.

Instructed voting makes it possible to evaluate error rates without directly observing the participant voting: because the researchers know exactly how each participant was asked to vote, they can compare the cast ballot against the slate afterward and count the discrepancies. People who study how people use other kinds of technology can instrument the system to capture test data or observe directly, but both are difficult to do with voting systems. And instructed voting, compared with "just vote as you might," asks the participants to be thoughtfully accurate rather than marking the ballot at random.
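
To make that concrete, here is a minimal sketch (ours, purely illustrative, not from any particular study) of how a participant's cast ballot might be scored against the slate they were instructed to vote. The contests, names, and data structures are all hypothetical.

```python
# Purely illustrative: score one participant's cast ballot against the
# slate they were instructed to vote. All contests and names are made up.

# The slate the participant was asked to vote (contest -> intended choice).
slate = {
    "Governor": "Ana Ramirez",
    "Treasurer": "Lee Wong",
    "Question 1": "Yes",
}

# What the participant actually marked. A missing contest is an undervote.
cast_ballot = {
    "Governor": "Ana Ramirez",
    "Treasurer": "Chris Oliver",  # wrong-choice error
    # "Question 1" left blank     # undervote error
}

errors = 0
for contest, intended in slate.items():
    marked = cast_ballot.get(contest)  # None means the contest was skipped
    if marked != intended:
        errors += 1

error_rate = errors / len(slate)
print(f"{errors} errors in {len(slate)} contests (error rate {error_rate:.0%})")
```

Because the scoring needs nothing but the slate and the ballot as cast, nobody has to watch the participant vote, and the same tally can be compared across ballot designs.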


The special challenges of voting research

When researching how people interact with most technology, a researcher can go into the field, hang around while people do whatever they are doing, and ask questions. Election Day is not the time for that. Most voting research requires that the number of variables be limited and that those that remain be controlled.
So far, researchers have found that using the NIST standard ballots and directed tasks is the best way to manage that.


-- By Whitney Quesenbery and Dana Chisnell
