Sunday, November 27, 2016

And it gets even worse


or the tyranny of the small states

A follow-on from yesterday's post about the unfairness of the electoral college. I was surprised by how small a minority of the voting population could elect the president of the US. With the electoral college it really is possible for 78.5% of voters to be overruled by as few as 21.5% in electing the president of the US.

Personally, I find the results a bit shocking. But it also made me wonder in which direction we are heading--towards a more or a less fair selection process, given future population changes.

So, I went with published estimates of the 2015 population, then extrapolated these out to the next census year (2020). Obviously, the electoral college will shift based on changing demographics, and this was accounted for in the 2020 projection. The interesting part is that the electoral college is tied to the House and Senate, and the House has been fixed at 435 members since 1913--all of this was taken into account.
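For the curious, the reapportionment itself follows the Huntington-Hill method the census bureau has used since 1941, with each state's electors equal to its House seats plus two. Here's a minimal C sketch of that allocation; the four states, their populations, and the 15-seat house are made-up placeholders standing in for real census data and the actual 435 seats:

#include <stdio.h>
#include <math.h>   /* link with -lm */

#define NSTATES 4
#define HOUSE_SEATS 15   /* tiny stand-in for the real 435 */

int main(void)
{
  /* made-up placeholder populations, not census data */
  const char *name[NSTATES] = { "A", "B", "C", "D" };
  double pop[NSTATES] = { 8000000, 4500000, 2000000, 700000 };
  int seats[NSTATES];
  int i, s;

  /* every state is guaranteed one seat */
  for (i = 0; i < NSTATES; i++)
    seats[i] = 1;

  /* hand out the remaining seats one at a time to the state with the
     highest priority value pop / sqrt(n * (n + 1)), where n is the
     number of seats the state already holds */
  for (s = NSTATES; s < HOUSE_SEATS; s++) {
    int best = 0;
    double best_pri = 0, pri;
    for (i = 0; i < NSTATES; i++) {
      pri = pop[i] / sqrt((double)seats[i] * (seats[i] + 1));
      if (pri > best_pri) {
        best_pri = pri;
        best = i;
      }
    }
    seats[best]++;
  }

  for (i = 0; i < NSTATES; i++)   /* electors = House seats + 2 senators */
    printf("state %s: %d seats, %d electors\n", name[i], seats[i], seats[i] + 2);
  return 0;
}

Run with real census populations and HOUSE_SEATS at 435, this same greedy loop reproduces the actual apportionment.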

What happens is that the skew gets worse (i.e. less fair), largely because the more populous states tend to gain people at a faster rate overall. The effect on the electoral college selection process is that an even smaller portion of the population will be able to elect the president in the future. In other words, the majority becomes even less relevant in selecting the next president.

OK--so the plot:

[Plot: worst-case share of the voting population needed to elect the president, 2010 census vs. the 2020 projection]
Over 10 years, the worst case trends downward by .12%: that much less of the voting population is needed to elect the next president. Before you write this off as inconsequential, that .12% represents about 400,000 people--400,000 additional people in 2020 whose votes could effectively count for nothing. And extrapolating the trend beyond 2020 only makes it worse.

Given that the small states have a disproportionate say in electing the president due to the electoral college, I like to call this the tyranny of the small states.


Saturday, November 26, 2016

The inequity of the electoral college

or potentially how sk(cr)ewed are we?


This presidential election cycle in the US has given me much to think about, especially post-election. Twice in my lifetime now we've elected a president who lost the popular vote. When Gore lost in 2000 the margin was close (.5% of the popular vote); it's less so this time, with Hillary Clinton ahead by roughly 2 million votes and counting (currently a margin of 1.5%).

It got me wondering just how far the electoral college can skew the popular vote. I started doing a little research and a little simple math to see just how skewed (or screwed) we could be.

So, the electoral college has an elector for each member of the House and Senate (plus three for DC, for 538 in total). And that's the reason for the skewed representation: the Senate is not based on population. So, out of the 538 votes, 100 are not based on population (one for each member of the Senate). Forget the fact that the whole approach seems somewhat feudal (harking back to when only land-owners were allowed to vote).

Just how sk(cr)ewed are we? Well, I grabbed data from the 2010 census, which is what determines the current electoral distribution. And it's ugly.

If we take the largest states and assume they all vote for the losing ticket, while the winning ticket carries every other state by the smallest possible margin, then just 21.5% of the population determines the outcome of the election. This is a highly unrealistic scenario, but it is theoretically possible: barely over one-fifth of the population of the United States can select the president of the United States.

The simulation below accumulates the population of the most populous states onto the losing ticket. Then assume that in each of the remaining states a simple majority votes the other way (66,298,350 votes in total).

state: California
state: Texas
state: New York
state: Florida
state: Illinois
state: Pennsylvania
state: Ohio
state: Michigan
state: Georgia
state: North Carolina
state: New Jersey
pop: 175547114, half electoral: 270

That gives the worst-case skew, where a minority of 21.5% of the population selects the president. Surely any system that can so poorly represent the wishes of the majority must be broken, or at least open to manipulation.
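For reference, here's a minimal C sketch of the accumulation described above. The state table is truncated to three entries (real 2010 census figures), so the printed share is only meaningful once all 50 states plus DC are filled in:

#include <stdio.h>

struct state {
  const char *name;
  long pop;
  int electors;
};

int main(void)
{
  /* placeholder table, sorted largest first; a real run fills in
     all 50 states + DC from the 2010 census */
  struct state states[] = {
    { "California", 37253956, 55 },
    { "Texas",      25145561, 38 },
    { "New York",   19378102, 29 },
    /* ... */
  };
  int n = sizeof(states) / sizeof(states[0]);
  int total_electors = 538;
  long total_pop = 0, losing_pop = 0, winning_votes;
  int losing_electors = 0;
  int i;

  for (i = 0; i < n; i++)
    total_pop += states[i].pop;

  /* peel off the most populous states for the losing ticket for as
     long as the leftover states still hold the 270 electors needed */
  for (i = 0; i < n; i++) {
    if (total_electors - (losing_electors + states[i].electors) < 270)
      break;
    losing_electors += states[i].electors;
    losing_pop += states[i].pop;
    printf("state: %s\n", states[i].name);
  }

  /* the winner carries every remaining state by a single vote */
  winning_votes = (total_pop - losing_pop) / 2 + 1;
  printf("pop: %ld, winning share: %.1f%%\n",
         losing_pop, 100.0 * winning_votes / total_pop);
  return 0;
}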

Sunday, December 28, 2014

Random bit slogging notes through some performance issues

OK, so I've been spending time over the last year occasionally tweaking performance on a multi-core application. This can be a huge timesink. What works best for me is to gather data, try some obvious changes, then get away from the computer and stew on the problem for a bit.

Obviously, for the multi-core world, the one goal here is to support scaling as more cores are thrown at a problem. That has meant performance tweaking requires:

  1. Avoid locking of any kind, otherwise performance won't scale as more cores are thrown into the stewpot.
  2. Minimize cache misses and hot cache-line reloads; improve cache locality.
  3. Old-fashioned instruction tweaking (i.e. reducing instruction costs).


The above are listed in their approximate order of importance.

I highly recommend watching the videos linked in this post, as they point out that #2 is often more important than #3 in performance tweaking.

Locking can often be avoided by using userspace RCU, or similar tricks.
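As an example of the RCU approach, here's a minimal sketch using the liburcu library (its default memb flavor, linked with -lurcu); the config structure and its field are just stand-ins. Readers never take a lock; the writer publishes a fresh copy and reclaims the old one once all pre-existing readers have moved on:

#include <urcu.h>   /* userspace RCU, default flavor; link with -lurcu */
#include <stdio.h>
#include <stdlib.h>

struct config {
  int max_conns;
};

static struct config *shared_cfg;  /* only touched via rcu_* primitives */

static void reader(void)
{
  struct config *cfg;

  rcu_read_lock();                  /* cheap, never blocks */
  cfg = rcu_dereference(shared_cfg);
  if (cfg)
    printf("max_conns=%d\n", cfg->max_conns);
  rcu_read_unlock();
}

static void writer(int new_max)
{
  struct config *fresh = malloc(sizeof(*fresh));
  struct config *old = shared_cfg;

  fresh->max_conns = new_max;
  rcu_assign_pointer(shared_cfg, fresh);  /* publish the new version */
  synchronize_rcu();                      /* wait out existing readers */
  free(old);                              /* now safe to reclaim */
}

int main(void)
{
  rcu_register_thread();   /* each thread using RCU must register */
  writer(100);
  reader();
  rcu_unregister_thread();
  return 0;
}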

Other great resources:

  Performance bit twiddling
  Awesome parallel programming reference
  Detailed Assembly/C/C++ x86 Optimizations

Obviously, one of the great tools is just running perf top; a great deal of insight can be gained by looking at the results the command below produces:

 sudo /usr/bin/perf top -p <pid>

Pretty much any hardware- or software-supported event can be profiled, but by default it samples CPU cycles, reported per function.
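For example, to sample cache misses instead of the default cycles event (perf list shows which events your hardware supports):

 sudo /usr/bin/perf top -e cache-misses -p <pid>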

There are a ton of tools out there to help evaluate performance--just make sure that you understand how the data is being captured and presented, otherwise you risk getting sucked down the rabbit-hole of false assumptions...

Saturday, May 31, 2014

Awesome! videos on modern CPU performance optimizations

Wow--I finally ended up watching these after sitting on the links for a while. They are just what the doctor ordered if you have questions about low-level source code performance optimization on modern processors.

Questions on SSE/AVX, pipelining, cache fetch times, memory access, locality of memory access, etc. are addressed in these talks. What is fantastic (and repeatedly driven home) is that performance is not necessarily about reducing the overall instruction count, but about reducing memory/cache access times (and the talks show how to do this).
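To make that point concrete, here's a tiny C experiment (the 4096x4096 array size is just illustrative): both loops execute essentially the same instructions over the same data, but the row-order walk streams through cache lines while the column-order walk misses on nearly every access:

#include <stdio.h>
#include <time.h>

#define N 4096
static int a[N][N];   /* 64MB, far larger than any cache level */

static double walk(int by_rows)
{
  clock_t t0 = clock();
  long sum = 0;
  int i, j;

  for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
      sum += by_rows ? a[i][j] : a[j][i];
  if (sum == 42)   /* keep the compiler from discarding the loops */
    puts("");
  return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
  printf("row-major:    %.3fs\n", walk(1));
  printf("column-major: %.3fs\n", walk(0));
  return 0;
}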


http://channel9.msdn.com/Events/Build/2014/4-587
http://channel9.msdn.com/Events/Build/2013/4-329


By the way--you can disregard the Microsoft provenance--most of the discussion/techniques apply equally to any modern x86 compiler.

Jo bob says watch'em!

Saturday, April 19, 2014

Computing changed bits in a range of values

So, there's a reason to do this other than just wasting CPU cycles: I need to take in a range of 2-byte values and compute a mask of all the bits that change between the lower and upper value of that range.

The underlying reason is that this allows for a quick match against network packet port ranges. So, below I have a little test app that computes the changed bits given an arbitrary start/stop value.

This ends up producing the following output:

slioch@slioch-All-Series:~/vyatta$ ./range 32076 62199
xxxxxxxxxxxxxxxx
slioch@slioch-All-Series:~/vyatta$ ./range 52076 62199
xxxxxxxxxxxxxx--
slioch@slioch-All-Series:~/vyatta$ ./range 62076 62199
xxxxxxxx--------
slioch@slioch-All-Series:~/vyatta$ ./range 1 2
xx--------------
slioch@slioch-All-Series:~/vyatta$ ./range 1 3
xx--------------
slioch@slioch-All-Series:~/vyatta$ ./range 1 4
xxx-------------
slioch@slioch-All-Series:~/vyatta$ ./range 1 1024
xxxxxxxxxxx-----
slioch@slioch-All-Series:~/vyatta$ ./range 1 1023
xxxxxxxxxx------

The "x" represents a bit position that changes, while the "-" represents a position that doesn't change. This ends up allowing for a quick comparison against a start-stop (or range) of values. Note that the representation above has the LSB (least signficant bit) on the left.



#include <stdio.h>
#include <stdlib.h>


#define NUM_BITS_FOUR_BYTES 32

/* Print num bits of x, least significant bit first:
   "x" for a set bit, "-" for a clear bit. */
static void
byte_to_binary(unsigned int *x, int num)
{
  unsigned int mask = 0x01;
  int z;
  for (z = 0; z < num; ++z) {
    printf(((*x & mask) == mask) ? "x" : "-");
    mask = mask << 1;
    if (z % NUM_BITS_FOUR_BYTES == (NUM_BITS_FOUR_BYTES - 1)) {
      mask = 0x01;
      x++;   /* move on to the next 32-bit word */
    }
  }
}


/* 
   Calculate fixed/wildcard bit positions in ranges of numbers
   for binary representations.
*/
int main(int argc, char **argv)
{
  if (argc < 3) {
    printf("need start and end\n");
    return 1;
  }

  unsigned long start = strtoul(argv[1], NULL, 10);
  unsigned long end = strtoul(argv[2], NULL, 10);
  if (end < start) {
    printf("Error in range\n");
    return 1;
  }

  /* XOR exposes the bits that differ between start and end; every
     position below the highest differing bit can also change somewhere
     within the range, so fill the mask downward from there. */
  unsigned int accum = start ^ end;
  if (accum != 0) {   /* __builtin_clz(0) is undefined */
    int loc = 32 - __builtin_clz(accum);
    int i;
    for (i = 0; i < loc; ++i)
      accum |= 0x1 << i;
  }
  byte_to_binary(&accum, 16);
  printf("\n");
  return 0;
}
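And to close the loop on the port-range use: the "-" positions never change across the range, so they make a one-comparison pre-filter. The helper below is a hypothetical sketch, not part of the matching code above; since the mask over-approximates the range, a hit still needs an exact start <= port <= end check:

/* 'mask' is the changed-bits mask computed above; any port inside
   [start, end] must agree with start on every "-" position. A few
   out-of-range values can also pass, so follow a hit with an exact
   range comparison. */
static int maybe_in_range(unsigned int port, unsigned int start,
                          unsigned int mask)
{
  return (port & ~mask) == (start & ~mask);
}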

Wednesday, April 16, 2014

A little fun with L1/L2/L3 cache RAM fetch speed

I know it has been a while since I've posted. And that's pretty much a reflection of being super busy as well as not having a little nugget at the tips of my fingers to post.

So I'm back.

I wrote the little program below to test the cost of fetching values from lookup tables of various sizes. Presumably this would show up as performance differences once a lookup table exceeds the size of the cache level it could otherwise fit within.

So, my little experiment went as follows:
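Here's a minimal sketch of that kind of test (not the original program; table sizes and the iteration count are illustrative). The table is walked with a dependent index so the CPU can't overlap the fetches and hide the miss latency:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERS (1L << 24)

static double time_lookups(int *table, size_t n)
{
  size_t idx = 0;
  long sum = 0, i;
  clock_t t0 = clock();

  for (i = 0; i < ITERS; i++) {
    /* the next index depends on the value just loaded, so each
       fetch must complete before the next one can start */
    idx = (idx + (size_t)table[idx] + 1) & (n - 1);
    sum += (long)idx;
  }
  if (sum == 42)   /* defeat dead-code elimination */
    puts("");
  return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void)
{
  size_t n, i;

  /* power-of-two tables from 4KB to 64MB of ints */
  for (n = 1024; n <= (1UL << 24); n <<= 1) {
    int *table = malloc(n * sizeof(*table));
    if (!table)
      return 1;
    for (i = 0; i < n; i++)
      table[i] = rand() & (int)(n - 1);
    printf("%8zu KB: %.3fs\n",
           n * sizeof(*table) / 1024, time_lookups(table, n));
    free(table);
  }
  return 0;
}

On typical hardware the time per pass steps up noticeably as the table size crosses each of the L1/L2/L3 boundaries.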

Saturday, December 14, 2013

Mad Cyclist on Hwy-101

OK, on 12/10 I saw something I would never expect to see on a freeway.

Leading into San Francisco from Silicon Valley (well the Peninsula actually) there was this crazy dude cycling and passing traffic on 101.

OK--it was stop-and-go kind of speeds, but what the heck! I've got to give this man credit for going for broke where even the most insane of us would say no thanks.

[Photo: the cyclist riding past stop-and-go traffic on 101]
And here's one a little further on--notice that he's actually passing the slow lane:

[Photo: the cyclist passing the slow lane]