
SE Radio 119: DSLs in Practice with JP Tolvanen

Guest(s): JP Tolvanen
Host(s): Markus
In this episode, Markus talks with Juha-Pekka Tolvanen about using DSLs and code generation in practice. The main part of the episode discusses a number of case studies that show how DSLs and code generation are used in real-world projects.


Show Notes

  • Omega Tau, Markus’ new podcast, mentioned at the beginning of the show

15 comments
  • Hi Markus

    Excellent show. One comment:

    It is actually easy to handle the kind of “emergency stop” cases you talk about in DSL-defined state machines: simply use a hierarchical state machine, and the “emergency stop” case can be handled by drawing a single transition.

    For example, we are using an internal Ruby DSL to generate our state machines and can do your example like this:

    state :BootState
      if_true :Emergency => :EmergencyState
      state :NormalState
      end
      state :OtherState
      end
      ...
      state :EmergencyState
      end
    end
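
A rough illustration of how such an internal DSL could be built – a hypothetical sketch, not the commenter’s actual tool; only the method names “state” and “if_true” are taken from the example above:

    # Hypothetical nested builder based on instance_eval.
    Node = Struct.new(:name, :transitions, :children)

    class StateMachineBuilder
      def initialize
        @stack = [Node.new(:root, {}, [])]
      end

      # Nested calls to `state` create the hierarchy.
      def state(name, &body)
        node = Node.new(name, {}, [])
        @stack.last.children << node
        @stack.push(node)
        instance_eval(&body) if body
        @stack.pop
      end

      # A transition declared on a composite state covers all its substates.
      def if_true(mapping)
        @stack.last.transitions.merge!(mapping)
      end

      def root
        @stack.first
      end
    end

    machine = StateMachineBuilder.new
    machine.instance_eval do
      state :BootState do
        if_true :Emergency => :EmergencyState  # one edge guards every substate
        state :NormalState
        state :OtherState
      end
      state :EmergencyState
    end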

  • Hi,

    yes, I am of course aware of composite states, but I wanted to illustrate the use of model transformations in general. Also, depending on the meta model you have to work with, you might not have the ability to use composite states; I know a couple of systems where people did not include composite states in their meta model “to keep it simple”. In that case a transformation can flatten the hierarchy, as sketched after this comment.

    But I guess I should have made this explicit in the podcast.

    Thanks for your feedback!
    Markus
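
For concreteness, here is what such a flattening transformation could look like on a toy model – the structures and names are invented for illustration:

    # Toy model: a composite state with substates and one outgoing transition.
    Composite = Struct.new(:name, :substates, :transitions)  # transitions: event => target

    boot = Composite.new(:BootState, [:NormalState, :OtherState],
                         { Emergency: :EmergencyState })

    # Flattening M2M: the single transition on the composite becomes one
    # transition per substate in the flat machine.
    flat = boot.substates.flat_map do |sub|
      boot.transitions.map { |event, target| { from: sub, on: event, to: target } }
    end
    # => [{ from: :NormalState, on: :Emergency, to: :EmergencyState },
    #     { from: :OtherState,  on: :Emergency, to: :EmergencyState }]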

  • Hi,
    I found one interesting example of an M2M transformation as optimization, and it applies very well to your microwave tutorial. Suppose your model contained a loop transition. It would produce unnecessary code for your microwave, because e.g. when you are in the stop state nothing needs to happen when the stop button is pushed. So it would be nice to have a transformation that removes all loop transitions. Of course you could express this as a constraint, but a constraint would not be as flexible as a transformation, and it would not really match the metamodel – a state machine can have loops when necessary. So this transformation can be a nice option used when needed, and I think it cannot reasonably be replaced by something else.
    What is your opinion?
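
A sketch of that loop-removing transformation on a toy in-memory model (the names are invented for illustration):

    Transition = Struct.new(:from, :event, :to)

    model = [
      Transition.new(:Stop,    :stop_button,  :Stop),     # loop: nothing should happen
      Transition.new(:Stop,    :start_button, :Running),
      Transition.new(:Running, :stop_button,  :Stop)
    ]

    # The optimizing M2M step: copy the model without self-transitions, so
    # the generator emits no dead handler code for them.
    optimized = model.reject { |t| t.from == t.to }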

  • I am not experienced, so I am asking: what is more effective – creating or modifying a metamodel for something specific, or using a proven metamodel and creating an M2M transformation? When I read MD* books and articles I always thought that a metamodel should be standardized, or at least very well designed to be as universal as possible (in general or in the specific domain), and that transformations should be easy to develop. So how is it in the real world? Why would this approach be worse?

  • I am a computer science student working on my bachelor’s thesis, which should be about MDSD. I am really new to this topic, but it is very interesting to me. I see the power of M2M transformations in the possibility to optimize models, but no real practical example comes to mind. Do you agree? Can you come up with an example?

  • In DSM, the language knows about the domain you are working in – microwave applications in your example. This means that the modeling language would not be a plain state machine: it would not allow loops in the models, or would at least show errors immediately if they are specified (a small sketch of such a check follows after this comment). This is much better than M2M, because developers get feedback immediately at modeling time (cheaper and safer in terms of error prevention), and there is no need to run model transformations that create copies of partly the same “thing”.

    Your question on where M2M has its place is still valid. In my opinion, people often look for M2M simply because the tools in use have limited functionality, e.g. the need to combine data from several individual files.
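
The contrast JP describes fits in a few lines – a hypothetical editor-side check (not any particular tool’s API) that rejects the loop the moment it is drawn, instead of removing it later by transformation:

    # Hypothetical modeling-time constraint for the microwave language.
    def add_transition(model, from, event, to)
      raise ArgumentError, 'loops are not allowed in this language' if from == to
      model << { from: from, event: event, to: to }
    end

    model = []
    add_transition(model, :Stop, :start_button, :Running)  # accepted
    add_transition(model, :Stop, :stop_button,  :Stop)     # rejected immediately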

  • I agree with you, but when we see the state machine as a fixed metamodel that is our domain, and the microwave is just one special case being modelled, then it doesn’t make sense to put such a constraint into the metamodel, because in our domain (finite state machines) we will need loops as well. Consider building a machine that accepts e.g. some source code: when you reach a comment, you want to stay in an “ignore” state, so you need a loop. Of course, if our domain were microwaves, then the constraint would be necessary – but then the metamodel as it stands would not be sufficient.
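
The comment-skipping case indeed needs a loop; a minimal sketch with an invented transition table, where the machine stays in the Comment state until a newline:

    # Any (state, char) pair not listed stays in the current state -- the
    # self-loop the commenter describes.
    transitions = { [:Code, '#'] => :Comment, [:Comment, "\n"] => :Code }

    state = :Code
    "x = 1 # ignored\ny = 2".each_char do |c|
      state = transitions.fetch([state, c], state)
    end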

  • Your comment, Markus, makes me think that “model transformations” are simply DSLs or model generators that generate models instead of code? Then “model transformations” aren’t really a special case.

    For example, a DSL that generates UML models would then be a “model transformation”?

    Cheers
    Morten

  • I don’t see any reason to fix on one metamodel only. There are already several dialects of state machines (that do not focus on any particular problem domain). Taking the DSL view, we can have one version fitting the “microwave” domain that prevents loops, and another language (basically a metamodel with different concepts and constraints) that addresses the needs of another domain. Tools should allow defining these metamodels and related generators cost-effectively (read: hours or days). Actually, one implementation of a state machine I used just recently is available here: http://www.metacase.com/blogs/jpt/blogView?showComments=true&entry=3405685238, covering the metamodel, notation and generators.

  • Things are quite different in M2M. You need to create objects as opposed to text, and you have to care about object identity.

    Of course, you can code-generate the serialization format of a model – hence you can emulate an M2M via a code generator.

    But real M2M, with a suitable language, is more productive.
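
One way to read “you have to care about object identity” – a sketch, not any particular tool’s API – is the trace map that M2M engines keep, so that two references to the same source element resolve to the same target object:

    SourceState = Struct.new(:name)
    TargetState = Struct.new(:name)

    trace = {}  # source object => its already-created target
    to_target = ->(src) { trace[src] ||= TargetState.new(src.name) }

    s  = SourceState.new(:Stop)
    t1 = to_target.call(s)
    t2 = to_target.call(s)
    t1.equal?(t2)  # => true: one target per source element, no duplicates

A text generator, by contrast, just prints the state’s name twice and has no notion that both strings denote one object.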

  • That’s interesting, Markus. So the advantage is that you can operate on objects instead of text, and the disadvantage is that you have to learn and work with the M2M language.

    Having said that, assuming most tools can import and export XML, it would be fairly easy to load XML files into a dynamic language (say Ruby) and generate internal objects for direct manipulation and transformation. Then you would get the benefit of working with objects while staying in a general purpose language. No need to learn a separate M2M language.

    Morten
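
Morten’s suggestion in stdlib Ruby – the XML element and attribute names here are made up, since they depend on the exporting tool:

    require 'rexml/document'

    doc = REXML::Document.new(File.read('machine.xml'))
    states = doc.elements.to_a('//state').map do |el|
      { name:    el.attributes['name'],
        targets: el.elements.to_a('transition').map { |t| t.attributes['target'] } }
    end

    # Plain Ruby objects from here on; e.g. drop self-transitions:
    states.each { |s| s[:targets].reject! { |t| t == s[:name] } }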

  • I wonder if M2M is really only needed when using text-template-based code generation?

    If you are writing a DSL in a general-purpose language without using text templates, you already have the full power of a general-purpose language to do optimizations, transformations, etc. There would be no point in adding an M2M step.

    But I can see that an M2M step would be needed if you are using a text-template-based tool. Text-based templates limit what your generator can do, as the small template example after this comment illustrates.

    Morten
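
For contrast, a text-template generator in stdlib Ruby (ERB; the emitted C declarations are invented). The template can only splice strings, so any restructuring of the model has to happen before this step – which is exactly where an M2M pass would sit:

    require 'erb'

    states = [:Stop, :Running]
    tpl = ERB.new("<% states.each do |s| %>void enter_<%= s.downcase %>(void);\n<% end %>")
    puts tpl.result(binding)
    # => void enter_stop(void);
    #    void enter_running(void);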

  • In my opinion, domain-specific languages should generally be used whenever possible, as they normally produce better results than general-purpose languages. If we apply e.g. UML, we usually don’t raise the level of abstraction, can’t prevent errors (as UML doesn’t know about any particular domain), and the possibilities for code generation are usually limited. Perhaps one reason why people are looking at M2M is that the original source metamodel was already poorly suited for the task.

    Your question on cost efficiency is highly relevant, and we wrote a whole chapter on the economics side in our book on DSM (http://dsmbook.com/toc.html). Cost efficiency usually depends on repetition: how many developers, and how many products/product versions or customization projects will there be? Another view of cost efficiency is the value of domain experts being able to do the “development” work themselves. Both of these things can usually be estimated and calculated well enough to make a decision; a rough break-even sketch follows after this comment.

    Based on our experience, language definition (metamodel, notation, editors) almost always takes less time than making the generator/transformation. So if you start from an existing metamodel (I can’t say UML is a proven metamodel 🙂), you may need several transformations and will still need to develop the generator in the end. Making M2M based on universal metamodels is usually also costly, since the source model contains errors and is incomplete, and those need to be checked as part of the M2M – otherwise we are transforming models that contain errors. By placing these rules directly into the language (a DSL rather than a universal/general-purpose language), we can prevent errors early on and guide developers in making their specifications.

    One part of cost efficiency is the tooling used to create languages, transformations and generators. A few years ago at the OOP conference, a German tool vendor said it had taken 25 man-years to develop a UML tool with Eclipse EMF. Obviously, that kind of figure does not make the creation of modeling editors cost-effective for most organizations. But what if you could develop the same kind of UML tooling in one man-month? The tools used may make the cost-efficiency calculation look quite different.
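
To show the shape of such a repetition-based estimate, here is a back-of-the-envelope calculation with purely illustrative numbers (mine, not from the book or the episode):

    language_days     = 15    # metamodel, notation, editors
    generator_days    = 30    # generator/transformation, usually the larger part
    saved_per_feature = 0.5   # days saved vs. hand-coding one feature
    features_per_year = 200   # across the whole team

    investment  = language_days + generator_days         # 45 days
    yearly_gain = saved_per_feature * features_per_year  # 100 days
    investment / (yearly_gain / 12.0)                    # => 5.4 months to break even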

  • I think the relevant question here is who, and how many developers, need to learn “the new stuff”. With external DSLs (to which M2M is often related), only one person needs to learn the generator/transformation language. With internal/embedded languages, developers can continue to use the familiar language, but then comes the question of scalability: how to prevent people from applying the basic host-language constructs when a better abstraction is available.

    Still, I wonder what problem M2M aims to solve. In the podcast we agreed on at least one: there is already a generator, and we transform models into the structure/format it expects. A generalization of this is the need to integrate tools (usually in one direction only) based on different metamodels.

  • Ron Jeffries always had a good answer to the question “I want to start doing ATDD; what tools should I use?” His answer: Forget the tools. Just write a test. Any test. Now. (I’m paraphrasing.)

    In our course Product Sashimi, Bonnie and I focus on writing out scenarios longhand on paper or whiteboards for two key reasons: (1) to engage more creative parts of the brain (although we have no scientific evidence that that actually happens, we hope it does) and (2) to take the focus away from tools and put it on the scenarios themselves. We don’t care much about syntax or style, but rather focus on the expected result in the scenario, the central action to invoke, and what assumptions we have to make to get there. (Then, When, Given, if you will.) Our attendees seem to like it.

    I think we could all do with a healthy dose of “Assume you got it wrong”.
