How to Create a Google Earth Choropleth Map: Chester County 2012 General Election

Summary: In this post I will show you how I created a Google Earth choropleth map of the 2012 Presidential Election results for the precincts in Chester County, Pennsylvania.  Here is the final result:

[Image: the final Google Earth choropleth map of Chester County precincts]

Here are some of the tools and resources we’ll be talking about (for easy later reference):

  • Chester County Election Results Archive (the precinct-level results)
  • 2010 Census TIGER/Line® Shapefiles (the precinct boundaries)
  • GeoDa (a free tool for viewing and classifying shapefiles)
  • shp2kml (converts shapefiles to KML for Google Earth)

Background

In many scenarios it is useful to understand people’s political leanings at a granular level.  For me personally, this interest started because I am currently looking to buy a house in Chester County, but obviously there are many other reasons why such a map might be useful (e.g. for political campaigns that want to make the most of their resources).

Most election result data and maps only go down to the level of individual counties, such as this Washington Post map:

So in a toss-up state, Chester County is a toss-up county: 49.7% Romney and 49.2% Obama.  For real-estate purposes, however, this is still too coarse.  So I wondered how I could get a more detailed breakdown.

Overview

Here’s what we’ll be doing in a nutshell: get the election data, tie it to the shapefiles of the Chester County voting precincts, and then convert those shapefiles to KML files so we can view the map in Google Earth.

Getting the Data

The lowest level of political data I could get my hands on was the Chester County (CC) precinct results, which are conveniently located here: Chester County Election Results Archive.  Inconveniently, the data for each precinct is contained in a separate PDF.  To use the data in our map, we’ll need it all together in one place.  So I went through all 200+ PDFs and transcribed the precinct data into Excel (yes, it sucked).  Here’s the data in CSV format.  Note that I only picked out the votes for Romney and Obama, not all the other information contained in the precinct reports.

Now that we have the voting data, we’ll need the map data (in the form of shapefiles) for the Chester County precincts.  We could create those ourselves, but who has time for that?  What I found was the 2010 Census TIGER/Line® Shapefiles (there’s also the 2008 TIGER/Line® Shapefiles for Pennsylvania).  Click on “Download Shapefiles” on the left, then pick “Voting Districts” from the dropdown, and then select the state and county.  You’ll get a package like this one for Chester County.

Now that we have the data and the map shapefiles we come to the hardest part of this exercise: marrying the data with the map.

Map Meet Data, Data Meet Map

Admittedly, I am no expert on shapefiles; I have plenty more learning to do there.  For a better explanation of what a shapefile is, consult your encyclopedia.  What’s important for us is that there are two files that interact: the actual shapefile (.shp extension), which contains the geometry of the polygons, and a data file (.dbf extension), which contains data connected to the shapes - in our case representing voting precincts.

From my Google searches, the following tasks are best accomplished in ArcGIS, the industry-standard mapping software by a company called Esri.  But it costs money, which I don’t have to spend on this right now.  So we will use three other tools: GeoDa, Excel 2010, and Access 2010.  Theoretically we don’t really need GeoDa, but it helps to see what’s going on.

So when you open the shapefile (.shp extension) in GeoDa (which you can download for free here) you’ll see something like this:

[Screenshot: the Chester County precinct shapefile rendered in GeoDa]

You can also look at the data associated with the shapefile which is contained in the .dbf file.  Just click on that little table icon (third from left) and you’ll see something like this:

There’s nothing there yet that’s really interesting for us, other than the names of the actual voting precincts.  What we need is the voting data.  In the source data file (the .csv file), we have votes for both Romney and Obama.  To combine these into a single column (since the coloring will be based on one variable contained in one column), I just used the percentage of votes for Obama, with Romney representing the flip side.  So a 34% vote for Obama simply means a 66% vote for Romney (never mind the 1% of third-party candidates).  This’ll make sense when you look at the Category Editor screenshot below and compare the breaks to the labels.

At this point we could just add a column in GeoDa and input the voting percentage for Obama for each precinct.  This is what I initially did, but editing this in Excel is much faster, so we’ll do that.  Here’s what you need to do:

  1. Make copies of all the shapefiles and work on those
  2. In Excel, open the .dbf file
  3. Add a column called “Vote” with the percentage of votes for Obama (or Romney, doesn’t matter)
  4. You cannot save back to .dbf format, so save as .xlsx
  5. Open MS Access
  6. Under External Data, import Excel and choose the .xlsx file you just created; be sure to select the first row as column headers
  7. Then export it (under External Data > Export > More) as a dBASE file; pick a name and click OK
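If you’d rather script this join than round-trip through Excel and Access, here’s a minimal sketch using the geopandas library.  To be clear, the file names and CSV column names below are hypothetical (adjust them to your data), and the precinct-name field in the TIGER .dbf may be called something else (it’s often NAME10 in the 2010 files), so check your attribute table first:

```python
# Sketch: join the CSV vote data to the precinct shapefile with geopandas.
# File and column names are hypothetical; adjust to your actual data.
import geopandas as gpd
import pandas as pd

gdf = gpd.read_file("vtd_chester.shp")         # TIGER/Line voting districts
votes = pd.read_csv("chester_2012_votes.csv")  # columns: precinct, obama, romney

# One column drives the coloring: percent of the two-party vote for Obama
votes["VOTE"] = 100 * votes["obama"] / (votes["obama"] + votes["romney"])

# Join on the precinct name (NAME10 in the 2010 TIGER files; check your .dbf)
gdf = gdf.merge(votes[["precinct", "VOTE"]], left_on="NAME10", right_on="precinct")

gdf.to_file("precincts_with_votes.shp")        # writes .shp/.dbf/.shx together
```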

Now when you reopen the shapefile in GeoDa, you’ll see the column you just added:

Color-coding the Map: Choroplethic!

At this point, you can already create a choropleth map in GeoDa.  Right-click on the map and select Change Current Map Type > Create New Custom.  Play around with the breaks and labels and such.  Here’s what I came up with:
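Incidentally, if you’d rather skip GeoDa, you can render a comparable choropleth straight from Python with geopandas and matplotlib.  This is just a sketch assuming the merged shapefile from the earlier sketch (the file and column names are mine, not anything standard):

```python
# Sketch: a quick choropleth with geopandas/matplotlib, as an alternative
# to GeoDa's Category Editor. Assumes the hypothetical merged shapefile.
import geopandas as gpd
import matplotlib.pyplot as plt

gdf = gpd.read_file("precincts_with_votes.shp")
ax = gdf.plot(column="VOTE", cmap="RdBu", legend=True,
              edgecolor="black", linewidth=0.2)  # red = low Obama %, blue = high
ax.set_axis_off()
plt.show()
```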

[Screenshot: the custom choropleth and Category Editor breaks in GeoDa]

Already pretty cool!  But we’d like to see some of the underlying geographic data so we can do some more in-depth analysis of what’s going on.  For example, why is there such a big Obama-supporting precinct surrounded by Republican precincts in the lower left corner?

This last part is thankfully easy.

You’ll need to download a little stand-alone program called shp2kml.  This will convert your .shp file (including the data) into a .kml file, which can be viewed in Google Earth.  Here are the settings I used:

[Screenshots: the shp2kml settings used for the conversion]

On that last screen, click “Create and Open”, pick a file name, and voila, Google Earth opens with your awesome choropleth map: sweet!
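By the way, if you’d rather script this conversion than click through shp2kml, here’s a rough sketch using geopandas plus the simplekml package.  Again, the file and column names are hypothetical, and shp2kml’s classification options aren’t reproduced here - just a simple two-color split:

```python
# Sketch: convert the shapefile to a colored KML with simplekml,
# as a scripted stand-in for shp2kml. KML colors are aabbggrr hex.
import geopandas as gpd
import simplekml

gdf = gpd.read_file("precincts_with_votes.shp").to_crs(epsg=4326)  # KML wants lon/lat

def fill_color(obama_pct):
    # semi-transparent blue for Obama precincts, red for Romney
    return "96ff0000" if obama_pct >= 50 else "960000ff"

kml = simplekml.Kml()
for _, row in gdf.iterrows():
    geom = row.geometry
    parts = geom.geoms if geom.geom_type == "MultiPolygon" else [geom]
    for part in parts:
        poly = kml.newpolygon(name=row["NAME10"],
                              outerboundaryis=list(part.exterior.coords))
        poly.style.polystyle.color = fill_color(row["VOTE"])
        poly.style.linestyle.color = "ff000000"  # black outline

kml.save("precincts.kml")
```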

Now we can see that in an otherwise sparsely populated district, we find Lincoln University: this likely explains why there is such a pro-Obama precinct in otherwise Republican territory:

[Image: Google Earth view of the pro-Obama precinct containing Lincoln University]

Next, I’ll have to figure out how to make these maps using d3.  Enjoy! :)

Post-Election Model Evaluation Against Actual Results: a Victory for State Polls

So, here we are, November 7, and Barack Obama has been re-elected POTUS.  This came as a surprise to no one who’s been watching the polls and following Nate Silver.  Well, maybe to Dean Chambers of Unskewedpolls.com.  Poor guy.

Nate did extremely well, correctly predicting all states and the overall election.  He had Florida at 50/50, and here it is, still undecided because it’s so close.

And how did my humble prediction model do? Equally well! (Read yesterday’s predictions here.)  Here are Nate’s and my predictions for the competitive states (any state where the models were less than 100% certain):

[Table: Nate’s and my win probabilities for the competitive states]

Actual winners highlighted in blue or red.  The sole holdout is Florida, which we had at 50/50.

Taking a look at my projected likely outcomes for an Obama win:

[Table: my projected most likely Electoral College outcomes for an Obama win]

The first two paths to victory are his actual paths to victory.  At the moment, CNN has Obama with 303 Electoral College votes, but if Obama wins Florida, he’ll end up with 332.  The outcomes above reflect the fact that Obama is slightly more likely to win Florida than not, and indeed that’s what it’s looking like right now.

Another way to look at the accuracy of the model is to compare the computed state averages based on state polls to the actual election outcomes:

[Table: projected state poll averages vs. actual results for the competitive states]

What we’re looking at here are the competitive states, sorted by how much the actual spread between D and R (based on election results) varied from the projected spread.  One way to read this is that the undecideds broke more one way than the other.  In Arizona, for example, the polls had Obama at 43%, which is what he actually got.  Romney, however, picked up 55% of the vote instead of the projected 50%.  One interpretation is that the additional 5% Romney received were undecideds in the polls.  The average spread difference in these competitive states was only 2 points - not bad!  Looks like the state polls were pretty good indicators of how the election would pan out.

Overall, my simple model did extremely well, actually surprisingly so, given how simple it is. The way it works is:

  1. For each state, take the average D and R percentages and the average margin of error of the past X state polls (I’ve been using 10)
  2. Simulate an election outcome by randomly picking a vote percentage within the margin of error for each state, determining the winner, and allocating that state’s Electoral College votes to the winner
  3. Do this 1,000,000 times and determine the percentage of runs that each candidate won (or that ended in a tie); this Monte Carlo simulation yields the probability of each outcome
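For the curious, here’s a minimal sketch of that loop in Python.  The poll numbers and the “safe” Electoral College split below are made up purely for illustration (a real run uses the averaged polls for every state):

```python
# Sketch of the Monte Carlo simulation; all numbers below are illustrative.
import random

# state -> (avg Dem %, avg Rep %, avg margin of error, EC votes)
battlegrounds = {
    "OH": (50.0, 48.0, 3.5, 18),
    "FL": (49.5, 49.5, 3.0, 29),
    "VA": (50.5, 48.5, 3.2, 13),
}
SAFE_DEM, SAFE_REP = 237, 241  # EC votes assumed locked up (sums to 538 with the above)

def simulate(trials=1_000_000):
    dem_wins = rep_wins = ties = 0
    for _ in range(trials):
        dem_ev, rep_ev = SAFE_DEM, SAFE_REP
        for dem, rep, moe, ev in battlegrounds.values():
            # pick each candidate's share randomly within the margin of error
            if dem + random.uniform(-moe, moe) > rep + random.uniform(-moe, moe):
                dem_ev += ev
            else:
                rep_ev += ev
        if dem_ev >= 270:
            dem_wins += 1
        elif rep_ev >= 270:
            rep_wins += 1
        else:
            ties += 1  # a 269-269 split
    return dem_wins / trials, rep_wins / trials, ties / trials

print(simulate())
```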

That’s basically it.  How many polls to look back at was pretty arbitrary (10).  Using a lower number gives you a somewhat fresher set of data (the polls were taken more recently) but leaves you a bit more exposed to potential outliers.  For example, if you look at my post from yesterday, you can see that using only the past 5 polls, Florida had a probability of 37% Dem to 63% Rep.  Here are those past 5 polls:

[Table: the five most recent Florida polls]

We can see here that the InsiderAdvantage poll was an outlier and had Romney at +5.  Using the past 10 polls smoothed this outlier out.

What’s remarkable to me is the accuracy of this model given how dumb it is.  Part of Nate’s secret sauce is his weighting of polls; you can see the weights when you hover over the little bar chart.  For example, the weight given to the InsiderAdvantage poll was 0.57976 vs. 1.411115 for the PPP poll:

[Screenshot: FiveThirtyEight’s poll weights shown on hover]

We don’t know exactly how these numbers are determined, but we know that variables such as recency, polling house effect, sample size, etc. go into the weight.  My model did none of this weighting, which is probably the first area where I would start improving it.  Nonetheless, the results are quite similar, so the question is: how much bang for the buck do we get from the additional complexity?  There’s of course also the possibility that this simple model just happened to work out, and that under different conditions a more nuanced approach would be more accurate.  That is probably the more likely case.  But the takeaway is that you don’t need an overly complex model to come up with some pretty decent predictions of your own.

Another area where the model fudges is in the averaging: it averages the poll percentages (e.g. 50% Obama and 48% Obama average out to 49% Obama), but it also averages the margins of error.  The more polls you have and the larger the combined sample of surveyed voters, the smaller the margin of error should become.  I did not take this into consideration and simply averaged the margins of error, yielding undue uncertainty in each average.
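To see why this matters, here’s a back-of-the-envelope check using the standard normal-approximation formula for a proportion’s margin of error (the sample sizes are hypothetical):

```python
# Pooling polls should shrink the margin of error, not leave it flat.
from math import sqrt

def moe(p, n, z=1.96):
    # 95% margin of error for a proportion p with sample size n
    return z * sqrt(p * (1 - p) / n)

print(moe(0.50, 600))       # one poll of 600 voters: ~0.040 (4 points)
print(moe(0.50, 600 * 10))  # ten such polls pooled:  ~0.013 (1.3 points)
```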

My model also did not take other factors into consideration when making the final prediction: no economic factors, no national polls - just the state polls.

In all, I finish with the following observations:

  • In 2012, state polls were excellent predictors of the election outcome when considered in the aggregate.  It is finally time for the media to stop writing articles about single polls without putting them into the context of the larger picture.  If a candidate is up by 2 points in a state, we’ll see individual polls showing that candidate up 4 and others showing a tie; putting out an article talking about a tied race is simply misleading.
  • Nate talked about the possibility of a systemic bias in the polls.  There appeared to be none when considered in the aggregate.
  • A model doesn’t have to be complex to be of value and reveal underlying trends in a set of data.
  • Data trumps gut feelings and intuition in making election predictions.  Be wary of pundits who don’t base their conclusions on what’s happening in the polls or other relevant data.
  • Python rocks!

Competing with Nate Silver in Under 200 Lines of Python Code - Election 2012 Result Predictions

UPDATE: Post-election analysis here

It’s November 6, and over 18 months of grueling, never-ending campaigning is finally coming to an end.  I’m admittedly a bit of a political news junkie and check memeorandum religiously to keep a pulse on what’s being talked about.  Together with the Chrome plugin visualizing political bias, it’s a great tool.

However, the past few years have been especially partisan and the rhetoric in the blogosphere is rancid.  So it was truly a breath of fresh air when I discovered Nate Silver at the beginning of the summer.  No bullshit - just the facts.  What a concept!  So Nate has become the latest staple of my info diet.

He’s been catching a ton of flak in the past few months for his statistical, evidence-based model, which has consistently favored Obama in spite of media hype to the contrary.  None of the arguments against him hold much weight unless they actually address the model itself.

One article in particular caught my attention: "Is Nate Silver’s value at risk?" by Sean Davis over at The Daily Caller.  His argument basically boils down to the question of whether state polls are accurate in predicting election outcomes, and whether Nate Silver’s model has relied too heavily on this data.  After re-creating Nate’s model (in Excel?!), Sean writes:

After running the simulation every day for several weeks, I noticed something odd: the winning probabilities it produced for Obama and Romney were nearly identical to those reported by FiveThirtyEight. Day after day, night after night. For example, based on the polls included in RealClearPolitics’ various state averages as of Tuesday night, the Sean Davis model suggested that Obama had a 73.0% chance of winning the Electoral College. In contrast, Silver’s FiveThirtyEight model as of Tuesday night forecast that Obama had a 77.4% chance of winning the Electoral College.

So what gives? If it’s possible to recreate Silver’s model using just Microsoft Excel, a cheap Monte Carlo plug-in, and poll results that are widely available, then what real predictive value does Silver’s model have?

The answer is: not all that much beyond what is already contained in state polls. Why? Because the FiveThirtyEight model is a complete slave to state polls. When state polls are accurate, FiveThirtyEight looks amazing. But when state polls are incorrect, FiveThirtyEight does quite poorly. That’s why my very simple model and Silver’s very fancy model produce remarkably similar results — they rely on the same data. Garbage in, garbage out.

So what happens if state polls are incorrect?

It’s a good question, although Sean’s answer isn’t particularly satisfactory: he basically says we probably don’t have enough data.

However, this piqued my interest… was it really so easy to emulate his model?  I wanted to find out more… the Monte Carlo plugin is $129: screw that.  A bit of Googling later, and it turns out Monte Carlo simulations are pretty easy to do in Python.

So after creating a few arrays to hold the latest polls for each state (via both RealClearPolitics and 538), I ran the numbers. I’ll perhaps go into the code in a separate post, but for right now, let me just post my predictions along with Nate’s.

Let’s start with state-by-state probabilities.  I’ve listed Nate’s state probabilities, and then two versions of my predictions.  The model I’m using is really super simple: I take the average of the past X number of polls and then run the Monte Carlo simulation on those percentages and margins of error.  That’s how I come up with the state probabilities.  The “Diff” column then lists the difference between my predictions and Nate’s.

[Table: state-by-state win probabilities - Nate’s, mine (10-poll and 5-poll), and the differences]

Everything is in the general ballpark, which for under 200 lines of Python code isn’t bad, I think!  The average difference using the past 10 polls is 5%, while the average difference using the past 5 polls is 4% (looking only at those states where our probabilities differed).  Like I said, not bad, and in only one case do we predict different winners altogether (the 5-poll projection calls for Romney to win Florida).
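For concreteness, here’s a sketch of the averaging step (the poll numbers below are made up).  Each state maps to its polls, most recent first, as (dem %, rep %, margin of error) tuples:

```python
# Sketch: average the most recent n polls for a state; numbers are made up.
def average_recent(polls, n=10):
    recent = polls[:n]
    dems, reps, moes = zip(*recent)
    k = len(recent)
    return sum(dems) / k, sum(reps) / k, sum(moes) / k

florida = [(49.0, 50.0, 3.0), (50.0, 48.0, 3.5), (47.0, 52.0, 4.0)]
print(average_recent(florida))  # -> roughly (48.67, 50.0, 3.5)
```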

So, who’s gonna win it all?  Using the above percentages and simulating an election 1,000,000 times, I get the following, using first the past 10 and then the past 5 polls:

[Table: overall win probabilities based on the past 10 and past 5 polls]

Using the last 10 polls comes somewhat closer to Nate, but overall, again, we’re in the same ballpark and the story being told is the same: Obama is the heavy favorite to win it tonight, with roughly 10:1 odds in his favor.

Now let’s look at the most likely paths to victory for each candidate.  For Obama, the following 5 paths occurred most often in the simulation:

[Table: Obama’s five most likely paths to victory]

M2 here stands for Maine’s 2nd Congressional District.  Maine and Nebraska apportion their Electoral Votes one for each Congressional District (Maine has 2, Nebraska 3) plus 2 for the overall vote winner.

[Table: Romney’s most likely paths to victory]

You can see why Ohio is so pivotal for Romney.  Here are the most likely paths without Ohio:

[Table: Romney’s most likely paths to victory without Ohio]

Certainly puts his late push into PA into some perspective.

Well, if you do happen upon this post and are actually interested in the Python code, let me know, and I might do a follow-up post looking at the code specifically.

Cheers, and enjoy the election!  I’d put my money on Obama!
