
SE Radio 201: Martin Thompson on Mechanical Sympathy

Recording Venue: WebEx
Guest: Martin Thompson
Martin Thompson, proprietor of the blog Mechanical Sympathy, founder of the LMAX Disruptor open source project, and a consultant and frequent speaker on high-performance computing, talks with Robert about computer program performance. Martin explains the meaning of the term “mechanical sympathy,” derived from auto racing, and its relevance to program performance: the importance of code that takes the underlying computer architecture into account. The discussion proceeds to cover the basics of hardware architecture, and Martin then discusses the costs of different program characteristics on a multi-core processor. Two key quantitative constraints, Amdahl’s Law and Little’s Law, are covered next. The discussion moves on to issues facing application programmers, such as poorly implemented libraries and frameworks, lock-free algorithms, and the impact of Java garbage collection. The conversation wraps up with some thoughts on a sound methodology for approaching program performance.
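
For reference, the two laws mentioned in the summary can be stated compactly. What follows is a minimal Java sketch (the class and method names, and the example figures, are illustrative, not from the episode):

    public final class QuantitativeLaws {

        // Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
        // parallelisable fraction of the work and n is the number of cores.
        static double amdahlSpeedup(double p, int n) {
            return 1.0 / ((1.0 - p) + p / n);
        }

        // Little's Law: L = lambda * W; the mean number of items in a system
        // equals the arrival rate times the mean time each item spends in it.
        static double littleOccupancy(double arrivalRatePerSec, double meanTimeSec) {
            return arrivalRatePerSec * meanTimeSec;
        }

        public static void main(String[] args) {
            // Even with 95% of the work parallelisable, 32 cores yield only ~12.5x.
            System.out.printf("Amdahl: %.1fx%n", amdahlSpeedup(0.95, 32));
            // 1000 requests/s at 5 ms mean latency implies ~5 requests in flight.
            System.out.printf("Little: %.1f in flight%n", littleOccupancy(1000, 0.005));
        }
    }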


6 comments
  • Thank you! Very interesting and relevant for me as a developer! One of those episodes where I wasn’t able to really catch everything said, either because of audio quality or background noise when listening at the gym :-p but mainly because it was action-packed with good technical stuff.

  • Well done Martin! It really is a shame that the audio quality of this podcast is so bad. I’ve listened to nearly every one of the SE Radio podcasts and Martin Thompson gives a virtuoso performance – one of the best SE Radio podcasts in terms of content density and insight and one of the worst in terms of sound quality.

    Given the typically high quality of the content, I really urge the makers of this podcast to invest in better sound technology.

  • I had a job interview with a well-known search company, where I was asked to write a fast algorithm for testing the parity (odd or even number of ones) of each of a billion 32-bit words.

    I thought of the obvious approach of counting ones while shifting them out, and was struggling to think of a faster one, maybe folding (I knew there was a much faster way).

    The interviewer suggested a 4GB lookup table. I told him it would be too slow: random access to a 4GB table would spend most of its time servicing cache misses. I tried to explain why, but he just kept telling me that I was wrong, that there was enough RAM (no swap). I tried to explain about caches and burst reads, but still got blank looks and the insistence that it was me who was wrong. So I changed tack: I mentioned that we only have to check 1 billion values, but would first have to create a table of 4 billion entries. He told me the values were pre-calculated. So I asked where they were at the start. In a file, he told me. I then went on to tell him that fetching them from a file is the same as fetching from swap…

    I have used lookup tables: e.g. a 256-byte table to look up sin×255 of an angle (measured in 1/256ths of a turn).
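
The “much faster way” alluded to in the comment above is presumably the classic XOR-folding trick, which needs no large table at all. Below is a minimal Java sketch (class and method names are illustrative) contrasting it with a byte-wise lookup that uses a 256-byte table, which, unlike a 4GB one, fits comfortably in L1 cache:

    public final class Parity {

        // 256-entry table: PARITY_TABLE[b] is the parity (0 or 1) of byte b.
        private static final byte[] PARITY_TABLE = new byte[256];
        static {
            for (int b = 0; b < 256; b++) {
                PARITY_TABLE[b] = (byte) (Integer.bitCount(b) & 1);
            }
        }

        // XOR-folding: repeatedly fold the word onto itself until the parity
        // of all 32 bits collapses into the lowest bit. Five shift/XOR pairs,
        // no memory traffic at all.
        static int parityFold(int x) {
            x ^= x >>> 16;
            x ^= x >>> 8;
            x ^= x >>> 4;
            x ^= x >>> 2;
            x ^= x >>> 1;
            return x & 1;
        }

        // Byte-wise lookup: fold the word down to a single byte with three
        // XORs, then make one probe into the 256-byte table.
        static int parityLookup(int x) {
            return PARITY_TABLE[(x ^ (x >>> 8) ^ (x >>> 16) ^ (x >>> 24)) & 0xFF];
        }

        public static void main(String[] args) {
            for (int x : new int[] {0, 1, 3, 0xFFFFFFFF, 0x80000001}) {
                System.out.printf("parity(0x%08X) = %d / %d%n",
                        x, parityFold(x), parityLookup(x));
            }
        }
    }

Either variant works entirely in registers or in a table small enough to live in L1 cache, which is exactly why it sidesteps the cache-miss problem the commenter raised against the 4GB table.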
