How Hunch Built a Data-Crunching Monster

  • "more conventional technologies like Hadoop"

    It's remarkable how quickly distributed computing has come to the forefront of scaling. Three years ago, before Hadoop and horizontal scaling became huge buzz-ideas, this (buying comically large boxes and heavily optimizing your math libs) was the standard practice (a sketch of the math-lib side follows at the end of this comment).

    That said, there is a reason people have slowly but surely been moving towards distributed systems: if Hunch needs boxes this big to handle 500k uniques/month, they are going to need seriously insane hardware if they ever grow a large user base.
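
    For context, "heavily optimizing your math libs" usually meant linking
    against a tuned BLAS (ATLAS, GotoBLAS, later OpenBLAS and MKL) rather than
    writing your own loops. A minimal C sketch, assuming a CBLAS
    implementation is installed; the 2x2 matrices and the -lopenblas link
    flag are illustrative, not from the article:

      /* Delegate a dense matrix multiply to a tuned BLAS.
         Build with something like: cc demo.c -lopenblas */
      #include <stdio.h>
      #include <cblas.h>

      int main(void) {
          /* C = alpha*A*B + beta*C, all 2x2 row-major doubles */
          double a[] = {1.0, 2.0, 3.0, 4.0};
          double b[] = {5.0, 6.0, 7.0, 8.0};
          double c[] = {0.0, 0.0, 0.0, 0.0};

          cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                      2, 2, 2,    /* M, N, K */
                      1.0, a, 2,  /* alpha, A, lda */
                      b, 2,       /* B, ldb */
                      0.0, c, 2); /* beta, C, ldc */

          printf("%g %g\n%g %g\n", c[0], c[1], c[2], c[3]);
          return 0;
      }

    The win comes entirely from the library: a good BLAS uses vectorized
    kernels and cache blocking that naive C loops never approach.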

  • Interesting and very unexpected. Any other examples of companies going the non-COTS/Map-Reduce/Cloud way?

  • That is darn impressive. What sort of network interface and protocol is it using?

  • I hope they are running with swap turned off... once a box with that much RAM starts paging, performance falls off a cliff. (One way to guarantee that at the process level is sketched below.)
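
    Swap can be disabled system-wide (or vm.swappiness tuned down), but a
    long-running number-cruncher can also protect itself. A minimal C sketch
    using POSIX mlockall(2) to pin the process's pages in RAM; the
    surrounding program is hypothetical:

      /* Lock all current and future pages of this process in physical
         memory so the kernel can never page them out. On Linux this
         needs CAP_IPC_LOCK or a generous RLIMIT_MEMLOCK. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <sys/mman.h>

      int main(void) {
          if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
              perror("mlockall");
              return EXIT_FAILURE;
          }
          /* ... the memory-hungry workload would run here ... */
          puts("pages locked; this process cannot be swapped");
          munlockall();
          return EXIT_SUCCESS;
      }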