
Using Big Data to Make Elections Fairer

Originally published by Kosuke Imai and Ruth Greenwood for Common Wealth Magazine

In 1812 the Boston Gazette first used the term gerrymander in response to a set of voting districts devised by Massachusetts Gov. Elbridge Gerry. The Gazette suggested that one of the districts looked like a salamander, and so created the (admittedly awkward) portmanteau for the map, calling it the “Gerry-mander.”

Today, gerrymandering has roughly the same meaning as when it was coined: manipulating voting districts to gain an advantage for a party or other group. While most Americans don’t know much about the process of drawing districts, one thing is clear: people hate gerrymandering. A 2019 poll shows that 63 percent of Americans have a negative impression of gerrymandering, while a measly 5 percent view it favorably.

Gerrymandering is as harmful as it is unpopular. We run two projects that provide tools to help people, courts, and legislators understand how and why that is so. The Algorithm-Assisted Redistricting Methodology (ALARM) Project and the Election Law Clinic are housed in Harvard University’s Institute for Quantitative Social Science and Harvard Law School, respectively.

The Election Law Clinic partners with PlanScore to offer visualizations of the partisan bias of redistricting plans. The site includes data from 1972 to 2022 for every state and lets users see at a glance the partisan skew of congressional, state house, and state senate plans.
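To make “partisan skew” concrete, here is a minimal Python sketch of two widely used measures of the kind PlanScore reports: the mean-median difference and the efficiency gap. The district vote totals below are invented for illustration, and the code is a simplified rendering of the published formulas, not PlanScore’s actual implementation.

```python
from statistics import mean, median

def mean_median_difference(dem_shares):
    """Median minus mean of one party's district vote shares.

    A negative value means the party runs behind in its typical
    district even though it does better on average, a sign its
    voters are packed into a few lopsided districts and the map
    favors the other party.
    """
    return median(dem_shares) - mean(dem_shares)

def efficiency_gap(dem_votes, rep_votes):
    """Net wasted votes as a share of all votes cast.

    A wasted vote is one cast for a losing candidate, or for a
    winner beyond the majority needed to win. Under the sign
    convention used here, a negative result means more Democratic
    votes were wasted, i.e. the plan tilts Republican.
    """
    wasted_dem = wasted_rep = total = 0.0
    for d, r in zip(dem_votes, rep_votes):
        threshold = (d + r) / 2          # votes needed to win
        if d > r:                        # Democratic win
            wasted_dem += d - threshold  # surplus votes
            wasted_rep += r              # losing votes
        else:                            # Republican win
            wasted_rep += r - threshold
            wasted_dem += d
        total += d + r
    return (wasted_rep - wasted_dem) / total

# Hypothetical five-district plan (invented numbers, not real returns):
# Democrats are packed into one 80% district and lose the other four.
dem = [80_000, 40_000, 40_000, 42_000, 38_000]
rep = [20_000, 60_000, 60_000, 58_000, 62_000]

shares = [d / (d + r) for d, r in zip(dem, rep)]
print(f"mean-median difference: {mean_median_difference(shares):+.3f}")
print(f"efficiency gap:         {efficiency_gap(dem, rep):+.3f}")
```

On this toy plan both measures come out negative (about -0.080 and -0.260), flagging a map that packs Democratic voters and tilts Republican. Real scores, of course, have to be computed from actual election returns like those PlanScore compiles.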
