Tag: debugging

SE-Radio Episode 337: Ben Sigelman on Distributed Tracing

Filed in Episodes on September 11, 2018

Ben Sigelman, CEO of LightStep and co-author of the OpenTracing standard, discusses distributed tracing, a form of event-driven observability useful in debugging distributed systems, understanding latency outliers, and delivering “white box” analytics. Host Robert Blumen spoke with Sigelman about the basics of tracing, why it is harder in a distributed system, the concept of tracing […]
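
For readers new to the topic, the core idea is that every unit of work in a request is recorded as a span, and spans sharing a trace ID can be reassembled into the full request path. The sketch below is a minimal, hypothetical illustration of that data model in Python; it is not the OpenTracing API or anything from LightStep, and the operation names are made up.

```python
# Minimal, hypothetical sketch of the span data model behind distributed
# tracing -- not the OpenTracing or LightStep API. A span records one unit
# of work; spans sharing a trace_id are stitched back into one request.
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Span:
    operation: str
    trace_id: str                          # shared by every span in a request
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16])
    parent_id: Optional[str] = None        # links a child span to its caller
    start: float = field(default_factory=time.time)
    end: Optional[float] = None
    tags: dict = field(default_factory=dict)

    def finish(self) -> None:
        self.end = time.time()

    def duration_ms(self) -> float:
        return ((self.end or time.time()) - self.start) * 1000.0


# One request: the frontend handler is the root span; an RPC to a
# downstream service becomes a child span carrying the same trace_id.
root = Span(operation="GET /checkout", trace_id=uuid.uuid4().hex)
child = Span(operation="payments.charge", trace_id=root.trace_id,
             parent_id=root.span_id)
child.finish()
root.finish()

# A tracing backend groups spans by trace_id to show where latency
# outliers actually spend their time.
print(root.trace_id, round(root.duration_ms(), 3), round(child.duration_ms(), 3))
```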


SE-Radio Episode 282: Donny Nadolny on Debugging Distributed Systems

Filed in Episodes on February 14, 2017

Donny Nadolny of PagerDuty joins Robert Blumen to tell the story of debugging an issue PagerDuty encountered when they set up a Zookeeper cluster spanning two datacenters in geographically separated regions. The debugging process took them through multiple levels of the stack, starting with their application, the implementation of the Zookeeper […]


Episode 101: Andreas Zeller on Debugging

Filed in Episodes on June 20, 2008

In this episode we’re talking to Andreas Zeller about debugging. We started the discussion with an explanation of what debugging is and how it works in principle. We then briefly discussed the relationship between debugging and testing. Next was the importance of the scientific method for debugging. We then looked at debugging as a search problem, leading to a discussion of delta debugging, the main topic of this episode. We concluded by looking at the practical usability of delta debugging and its relationship to other means of automatically finding problems in software.
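
Delta debugging automates that search for a minimal failure-inducing input: repeatedly remove parts of the input and keep any smaller version that still reproduces the bug. The sketch below is a simplified, illustrative version of the minimizing step (ddmin), assuming a deterministic oracle `fails()`; Zeller’s full algorithm also tests subsets and deals with caching and non-determinism.

```python
# A compact sketch of delta debugging's minimization step (ddmin),
# simplified to the "remove one chunk at a time" variant. Assumes a
# deterministic oracle fails(candidate) that returns True when the
# candidate input still reproduces the bug.
def ddmin(failing_input, fails):
    granularity = 2
    while len(failing_input) >= 2:
        chunk = max(len(failing_input) // granularity, 1)
        reduced = False
        for start in range(0, len(failing_input), chunk):
            # Try the complement: everything except one chunk.
            candidate = failing_input[:start] + failing_input[start + chunk:]
            if candidate and fails(candidate):
                failing_input = candidate           # keep the smaller input
                granularity = max(granularity - 1, 2)
                reduced = True
                break
        if not reduced:
            if granularity >= len(failing_input):
                break                               # cannot split any further
            granularity = min(granularity * 2, len(failing_input))
    return failing_input


# Toy oracle: the "bug" triggers whenever both '<' and '>' appear, so
# ddmin shrinks the long input down to just those two characters.
print(ddmin(list("SELECT <x> FROM table"),
            lambda s: "<" in s and ">" in s))
```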


Episode 59: Static Code Analysis

Filed in Episodes on June 16, 2007

This episode is a discussion with Jonathan Aldrich (Assistant Professor at CMU) about static analysis. The discussion covered theory as well as practice and tools. We started with an explanation of what static analysis actually is, which kinds of errors it can find, and how it differs from testing and reviews. The core challenge of such an analysis tool is to understand the semantics of the program and reduce its possible state space to make it analysable – in effect reconstructing the programmer’s intent from the code. The user can “help” the tool with this challenge by using suitable annotations; also, languages could do a better job of being analysable. The conceptual discussion was concluded by looking at the principles of static analysis (termination, soundness, precision) and how this approach relates to model analysis.
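
To make the “annotations help the tool” point concrete, here is a small, hypothetical example in Python (the episode’s tools target Java and C, but the idea carries over): the `Optional` annotation tells a checker such as mypy which values may be absent, shrinking the state space it has to consider and letting it flag the possible None dereference without ever running the program.

```python
# Hypothetical illustration of how an annotation narrows the state space
# a static analysis tool must consider. The Optional[str] return type lets
# a checker such as mypy report the possible None dereference below.
from typing import Optional


def find_user(user_id: int) -> Optional[str]:
    """Return the user's name, or None if the id is unknown."""
    users = {1: "ada", 2: "grace"}
    return users.get(user_id)


def greeting(user_id: int) -> str:
    name = find_user(user_id)
    # A checker flags this line: `name` may be None, and None has no
    # .title() method. Adding an explicit `if name is None:` branch makes
    # the remaining code provably safe for the analysis.
    return "Hello, " + name.title()
```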

The second, more practical part started out with a discussion of how Microsoft successfully uses static analysis in their Windows development. We then discussed some of the tools available; these include FindBugs, Coverity, CodeSonar, Klocwork, Fortify, PolySpace, and CodeSurfer. To conclude the discussion of tools, we discussed the commonalities and differences with architecture visualization tools as well as metrics and heuristics.

Part three of the discussion briefly looked at how to introduce static analysis tools into an organization’s development process and tool chain. We concluded the discussion by looking at situations where static analysis does not work, as well as at the FLUID research project at CMU.
