
SE Radio 115: Architecture Analysis

Recording Venue:
Guest(s): Bernhard Merkle
Host(s): Markus
During the evolution of a software system, it becomes more and more difficult to understand the originally planned software architecture. Architectural degeneration often happens for various reasons during the development phases. In this session we look at how to avoid such architectural decay and degeneration, and how continuous monitoring can improve the situation and prevent architectural violations. In addition, we look at “refactoring in the large” and how such refactorings can be simulated. A new family of “lint-like tools for software architectures” is currently emerging in the marketplace; I will show some examples and how they scale and support you in real-world projects.


Show Notes

Links:

7 comments
  • Great stuff! Very interesting and lots of good points made.

    But there are a couple of wider issues that I would like to add to the mix. I should declare however that I am one of the guys behind Structure101 (one of the tools mentioned in the interview), so my perspective may not be maximally objective…

    First up, I think the emergence and growing popularity of a number of such tools over the last few years is interesting in its own right. My view here is that this goes hand in hand with the decline of Waterfall and growing popularity of Agile. The general trend is towards emergent design – the notion that design is primarily an aspect of the code-base rather than something that is defined elsewhere (e.g. UML). However, for the most part, we have seen little or no attempt to define precisely what an emergent design really is: how you can see it, define it and measure it. An IDE allows you to look at low-level code but doesn’t help with “the big picture”. To my mind, what these tools are really seeking to do is plug this conceptual gap, with the common premise that design and low-level code are not distinct entities, but rather all part of a continuum within the code-base. Above all, they say, there are (or should be) meaningful levels of abstraction above the class…

    This feeds into architecture rules. One possible view of these – and one that fits very well with Waterfall – is that their primary purpose is to enable the “architect” to define rules that must be followed by the “developers”, and anything that does not conform is marked as a violation and rejected, either in the IDE or the build system. For sure, that can be part of the equation. But it is important to mention the other end of the spectrum, namely that the rules can serve to convey architectural intent and so communicate “the big picture” (higher level abstractions) through the team. This is a different dynamic to pure enforcement, with the emphasis much more on formalizing a shared view of the code-base, with sets of rules (ideally defined visually) each telling their own little design story. In Agile environments, it should be up to the team whether the rules are considered cast-iron constraints or just guidelines (better still if all team members can add new rules or modify existing ones).
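
    As a rough illustration of what such a rule check boils down to (the module names, layer numbers and the rule itself are invented here, not the syntax of any particular tool), a few lines of Python are enough to flag violations once the dependencies have been extracted from the code base:

        # Hypothetical layering rules: a higher-numbered layer may use itself
        # and anything below it; an upward dependency is an architecture violation.
        layers = {"ui": 3, "service": 2, "persistence": 1}

        # Dependencies as a tool would extract them from the code base.
        dependencies = [
            ("ui", "service"),
            ("service", "persistence"),
            ("persistence", "ui"),   # upward dependency -> violation
        ]

        violations = [(s, t) for s, t in dependencies if layers[s] < layers[t]]
        for src, dst in violations:
            print(f"violation: {src} -> {dst} breaks the intended layering")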

    Finally, and surprising myself a little, I’m inclined to take issue with your comment that Lattix does not scale well to larger projects. Surely matrix (DSM) views are eminently scalable, indeed much more so than traditional graph rendering techniques? I’d tend to see the issues here as more about intuitiveness (most people prefer diagram views), and the ideal is to support both. Can you elaborate a little on this?

    Thanks!

    Ian Sutton
    Headway Software

  • Hi Ian,

    thanks for the comments.


    Finally, and surprising myself a little, I’m inclined to take issue with your comment that Lattix does not scale well to larger projects. Surely matrix (DSM) views are eminently scalable, indeed much more so than

    Well, to be fair, Lattix is not bad, and IIRC they first came up with the DSM idea, or at least first productized it in Lattix, so they deserve some credit.
    And DSMs are in fact scalable, but by scalability to larger projects I mean something different. The problem is not the DSM as a mechanism; rather, you often need additional helpers, which I will describe here:

    First, IMO we often need both possibilities for displaying results: graphically as graphs and numerically as a DSM. The two are just different views of the same data, each with different advantages.
    E.g., if I want to see the abstraction levels, or whether lots of modules are coupled together, then using graphs is recommended.
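
    As a tiny illustration of this “two views of the same data” point (module names invented here), the same dependency data that a tool would draw as a graph can just as well be printed as a DSM-style matrix:

        # The same (invented) dependency data, printed as a small DSM:
        # an "X" in row r, column c means "r depends on c".
        modules = ["ui", "service", "persistence"]
        deps = {("ui", "service"), ("service", "persistence"), ("persistence", "ui")}

        width = max(len(m) for m in modules) + 2
        print(" " * width + "".join(m.ljust(width) for m in modules))
        for r in modules:
            cells = "".join(("X" if (r, c) in deps else ".").ljust(width) for c in modules)
            print(r.ljust(width) + cells)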

    The second issue with scalability is that the architecture is also described directly in the DSM, so it is not visible as a diagram. Most people like to talk about architectures using diagrams. To be fair, I think in the meantime Lattix also has diagrams to display the architecture, but the editing (of dependency rules) is still done directly in the DSM. I think your tool (Structure101) uses a different approach, for good reasons :-).

    The third issue is that during navigation within a DSM matrix I often get lost while drilling down or up, because the matrix expands and collapses and my previous position/focus is lost. So I have to remember where I was previously, which is tedious for large, real-world projects.

    The last thing is that, e.g., Lattix has no real C++ parser.
    Relying on MS BSC files and Doxygen export is too weak in my experience. They also have an Understand/C importer, but IMO you need a real parser.


    (DSM) views are eminently scalable, indeed much more so than traditional graph rendering techniques? I’d tend to see the issues here as more about intuitiveness (most people prefer diagram views), and the ideal is to support both. Can you elaborate a little on this?

    The idea is to support both, as you say, and the powerful tools do this.
    As I said above, graphical and numerical displays have their special strengths in certain situations.

    I do not think that graph rendering techniques are less scalable than DSMs. Essentially what you need is good aggregation of the detailed results from the more fine-grained levels (e.g. from layers to components to libraries to modules to packages to classes to methods, etc.).
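
    A minimal sketch of what I mean by aggregation (the class and package names are made up): fine-grained class-level dependencies are rolled up into weighted package-level dependencies, and the same idea repeats at every level above that:

        from collections import Counter

        # Invented class-level dependencies, e.g. as extracted by a parser.
        class_deps = [
            ("com.shop.ui.CartView",         "com.shop.service.CartService"),
            ("com.shop.ui.CartView",         "com.shop.service.PriceService"),
            ("com.shop.service.CartService", "com.shop.db.CartDao"),
        ]

        def package_of(cls):
            return cls.rsplit(".", 1)[0]   # drop the class name, keep the package

        # Roll class-level edges up to weighted package-level edges.
        package_deps = Counter((package_of(a), package_of(b)) for a, b in class_deps)
        for (src, dst), weight in package_deps.items():
            print(f"{src} -> {dst}  ({weight} class-level dependencies)")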

    But with graph rendering techniques you can do even more.
    And Sotograph (as its name implies) is a very strong tool here. 🙂
    Take the inheritance tree or call tree of all submodules/components and lay them out with an appropriate graph algorithm, and you will see the abstraction layers (if they exist in the current code base).
    The same applies to coupling: take a spring-embedder layout and you will see it immediately. If there is high coupling you will see a big ball converging together (which people call the big ball of m..).

    Achieving the same via numbers is possible, but it is much harder to recognize IMO.
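
    For the coupling example, here is a rough sketch of the idea using off-the-shelf libraries (networkx and matplotlib, not any of the tools discussed here; the module names are invented): a force-directed (“spring embedder”) layout pulls strongly coupled nodes together, so a tightly coupled code base literally renders as one dense ball:

        import networkx as nx
        import matplotlib.pyplot as plt

        g = nx.DiGraph()
        # Invented module-level dependencies; in practice they come from a parser.
        g.add_edges_from([("a", "b"), ("b", "c"), ("c", "a"),
                          ("a", "d"), ("d", "b"), ("c", "d")])

        # Force-directed layout: heavily interconnected nodes end up clustered.
        pos = nx.spring_layout(g, seed=42)
        nx.draw(g, pos, with_labels=True, node_color="lightgray")
        plt.show()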

    kind regards,
    Bernhard.

  • Bernhard,

    I agree with almost all of that, but one nit.

    I do not think that graph rendering techniques are less scalable than DSMs.

    Well, they are actually (at least in terms of screen real estate). Though worth noting that other techniques can also help (e.g. auto-partitioning).

    Essentially what you need is good aggregation of the detailed results from the more fine-grained levels (e.g. from layers to components to libraries to modules to packages to classes to methods, etc.).

    Totally agree with this in principle. And I think all the tools in this space are specifically based on the premise of aggregation of smaller meaningful things to bigger meaningful things. Mind-sized chunks at every level. Actually, I have a talk on precisely this subject (“Ideal Code-base Structure”) at http://www.headwaysoftware.com/products/structure101/demos.

    Trouble is that the overwhelming majority of the world’s existing code-bases are not well structured in this regard. And, while it is easy to show a ball of mud as a … errr … ball of mud, that does not mean it is helpful so to do.

    To my mind, the biggest challenge for today’s architectural control tools is the (re-)discovery of structure from existing chaos, and this often involves dealing with big ugly dependency graphs where scalability of views is critical. One example is the ability to bypass the existing hierarchical decomposition altogether (bad hierarchy just gets in the way) and dip straight down to the elemental structures at the bottom….

  • Let me offer a few clarifications from Lattix.

    In my opinion, DSMs scale well not just for display but for transformations as well. I have used Lattix to do architecture analysis for years, including analysis of massive systems, and they have scaled exceptionally well for me. I have not had any difficulty displaying abstraction levels in DSMs; in fact, I find them quite natural for that purpose. Numerous companies are using DSMs today, and they are being taught in many universities because people find them very useful. (The value of DSMs goes beyond just the display aspects, but I don’t want to get into that discussion here.)

    However, let me make some corrections:

    To be fair, I think in the meantime Lattix also has diagrams to display the architecture, but the editing (of dependency rules) is still done directly in the DSM.

    Lattix allows you to create rules from other diagrams as well. In fact, there are 4 different ways to set rules in Lattix, and only one of them is from a DSM. However, there are significant benefits to setting up rules from the DSM: the DSM approach forces completeness in specifying dependency rules and gives you fine-grained control over them.

    The last thing is that, e.g., Lattix has no real C++ parser.
    Relying on MS BSC files and Doxygen export is too weak in my experience. They also have an Understand/C importer, but IMO you need a real parser.

    Lattix takes in the output of Understand, which is a real parser. We use a simple and powerful approach that logically separates the parsing of a system from its analysis. We use both internal and external parsers for the large number of languages and frameworks that we support, including Java, .NET, C/C++, Spring, Hibernate, Oracle and more. See here for a complete list: http://www.lattix.com/products/products.php. This approach also allows us to support multi-module systems. As an example, you can model how an application or a service depends on a database through an object relational mapping.

    Neeraj Sangal
    Lattix, Inc.

  • Hello Bernhard,

    I liked your podcast very much. You have considered a number of issues which I hadn’t thought of yet.

    As a software architect I do a number of software architecture rediscoveries a year. During these sessions I work together with (or without) the application architect. When I use a DSM to do this, I notice every time that the extremely short learning curve for understanding how a DSM works is very helpful for obtaining a usable matrix within a couple of hours.

    My architecture recovery sessions take about a day for medium-sized complex systems. I first start by teaching the basics of the DSM methodology to all stakeholders (and I really mean every stakeholder) in a half-hour theoretical session. After that I work with the application architect to model the desired architecture in the DSM. We do this by creating sub-systems (which might be virtual because they may not have existed in the first place). After structuring the DSM over a couple of hours, I bring in the stakeholders who attended the morning half-hour session and discuss the results.

    Every time it is incredible to see and hear the discussions that arise between the architect and the developers (and project manager) based on the results shown in the DSM via a projector. At five o’clock I close my laptop and go home, but not before receiving the feedback of the group I have been working with: “Incredible, I have learned more about my system today than I did in the last one and a half years.” Although in previous years I have used other approaches as well, the results were not as beneficial as with the DSM approach.

    Concerning scalability, I have been working with DSMs on huge systems, and after the modularisation has been done (which is the key step in creating a valuable DSM), I have always found that it scaled better than any other graphical tool I have been using. However, I consider DSM tools as one thing to be used and use other tools as well. In my opinion you need easy tooling and methods as long as the chaos (others say complexity) of your software application has not yet been reduced to manageable parts. Once you have achieved this, it is much easier to bring in other (“more” sophisticated) tools.

    However, once one or more matrices have been created, they can be extremely worthwhile for monitoring software architecture decay/erosion. This is an important point, because when time-to-market is pressing you now have the ability to allow a certain amount of decay to get a release out, but you can see what is happening and can already plan for architectural refactoring in an upcoming release.
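
    One possible way to make that “allowed amount of decay” explicit in a build (the numbers and the threshold policy are invented for illustration, not taken from any of the tools mentioned):

        # A tiny "decay budget" check that a build script could run against the
        # violation count reported by an architecture-analysis tool.
        baseline_violations = 42   # accepted debt, recorded at the last release
        budget = 5                 # additional decay the team tolerates for now
        current_violations = 45    # reported for the current build (example value)

        if current_violations > baseline_violations + budget:
            raise SystemExit("architectural decay exceeds the agreed budget")
        print(f"{current_violations} violations "
              f"(baseline {baseline_violations}, budget {budget}) - within budget")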

    Whatever method or tool you are using, insight is what is required. Every method or tool that can provide this is helpful.

    I hope my findings have brought you and the listeners a little more insight into the value of DSMs.

    Han van Roosmalen
    Software Architectuur.nl

  • Nice overview on architecture analysis.

    This podcast was made in 2008 and I am writing this comment in 2015: I would like to know what advances have been made in the area of architecture analysis in the last 7-8 years. Which new approaches, tools, or techniques have emerged in this period?

  • I’m currently a student at the HU University of Applied Sciences Utrecht, and we’ve been introduced to architecture (and its analysis) using an in-house-built tool called HUSACCT. It’s open source and built using Swing, so there is no support for 4K screens (I have to use the Windows magnifier app to use the program), but its analytical parts are pretty good as far as I can tell!
