
    Are there any reverse engineering features?

    Explanation: Can I take my existing Delphi / C++ / whatever code, and read it in to create graphical models? Maybe if I could, then I could generate documentation, or code in a different language...

    Yes, but not out of the box. Since MetaEdit+ can work with any modelling language, and generate any kind of code, there is no single built-in reverse engineering. This is a metaCASE tool, not a fixed-method CASE tool, so we have a lot of flexibility in general, but some things can no longer be hard-coded.

    MetaEdit+ has an API, XML import, and text file reading and parsing capabilities in the MERL generator language. Users can thus use any of these to write their own reverse engineering. The UML Examples project contains an example graph using a MERL generator for simple Java reverse engineering.
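    As a minimal sketch of the text-parsing half of such a job, here is an illustrative example in Python. The XML element and type names below are invented for the example - they are not MetaEdit+'s real import schema, and this is not MERL - but it shows the shape of a simple "scan source, emit importable data" reverse engineering step.

```python
import re
import xml.etree.ElementTree as ET

# Illustrative only: element and type names below are invented for the
# example; they are NOT MetaEdit+'s real XML import schema.

JAVA_SOURCE = """
public class Customer {
    private String name;
}
class Order {
}
"""

CLASS_RE = re.compile(r'\bclass\s+(\w+)')

def classes_to_xml(source: str) -> str:
    """Scan Java source for class declarations and emit importable XML."""
    root = ET.Element('graph', {'type': 'ClassDiagram'})
    for match in CLASS_RE.finditer(source):
        ET.SubElement(root, 'object', {'type': 'Class', 'name': match.group(1)})
    return ET.tostring(root, encoding='unicode')

print(classes_to_xml(JAVA_SOURCE))
```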

    However, this is not something we envisage our customers doing much, especially not round-trip engineering. The benefit of MetaEdit+ is the ability to build your own modeling language and code / documentation generation, so generated code and documentation are never edited - all changes are made to the models, and the code or documentation is regenerated as needed.

    See also the question on model-to-model transformations.


    Is it possible to generate output in another, probably lower-level, Domain-Specific Modeling Language, also built in MetaEdit+?

    Explanation: We've wondered about the possibility of creating a layered set of DSLs.

    Yes. You could perform such a model-to-model transformation in one of at least three ways:

    1. by generating the desired XML with a report, then reading that in to create the lower-level models;
    2. by exporting XML from the Graph Manager, or some other text format with a report, and manipulating that in an external program to create XML to import; or
    3. by exporting XML or text, and reading that into an external program that makes calls back into MetaEdit+ via the API to create the lower-level models.
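    A toy sketch of the first two options, in Python: take XML exported for a high-level graph and rewrite it into XML for a lower-level graph. All graph, object and type names here are invented for the example, not MetaEdit+'s real schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical model-to-model transformation: high-level StateMachine
# XML rewritten into lower-level FlowChart XML. All names are invented.

HIGH_LEVEL_XML = """
<graph type="StateMachine">
  <object type="State" name="Idle"/>
  <object type="State" name="Running"/>
</graph>
"""

def to_low_level(xml_text: str) -> str:
    high = ET.fromstring(xml_text)
    low = ET.Element('graph', {'type': 'FlowChart'})
    for obj in high.findall("object[@type='State']"):
        name = obj.get('name')
        # Each high-level element expands into two low-level ones -- the
        # 1-to-N expansion whose maintenance problems are discussed next.
        ET.SubElement(low, 'object', {'type': 'Step', 'name': 'enter_' + name})
        ET.SubElement(low, 'object', {'type': 'Step', 'name': 'exit_' + name})
    return ET.tostring(low, encoding='unicode')

print(to_low_level(HIGH_LEVEL_XML))
```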

    However, experience with this approach - whatever tool is involved and whatever export/transform/import method is used - shows that it is normally a Bad Thing. Without knowing your exact plans, I won't say that your idea is bad, but let me describe several ways in which people have "gone wrong" along a path like this.

    1. Normally the idea is that each piece of data in the high-level language gets transformed to >1 piece of data in the low-level language (let's say 2 pieces). This is fine if you never (or rarely) look at the low-level language, and never (or very rarely) edit it. But if you edit it, you are now working with 2 pieces of data, but clearly they are not totally independent, since they could be produced from one piece. The idea of DSM is to come up with a minimal sufficient representation of systems, and this is the main reason for its 5-10x productivity increases: code or UML-based ways of building systems involve lots of duplication - the same information in several places.
    2. Often people say next "and we want to be able to change the high-level models still, and have those changes reflected in the low level models". That means you are working with 1+2 pieces of information, and also that you need to come up with some way to propagate the changes down to the low-level models "correctly". "Correctly" here means: without destroying information manually added there; updating the automatically generated parts; creating new generated parts; updating manually added parts to reflect changes in the top-level models (e.g. if the manually added part refers to the name of an element originally from the high level model, and that element's name has now been changed).
    3. And next people say "and we want to be able to change the low-level models, and have the high-level models update". This is even harder than 2). The only cases in which it can really happen are where you don't really have a high level and a low level, but two different representations at the same level. Even then, it can only really apply to the intersection of the sets of information recorded in the two different models. E.g. UML tools have one class in a model mapping to one class in the code. With sufficiently simple changes, a bit of luck and a following wind, the best UML tools today are capable of maintaining that simple mapping for the names of classes, attributes and operations. The actual code isn't kept in synch (it's not in the UML models), nor are, say, method calls in models kept in synch with actual calls made in the code (it could be done in theory, but in practice UML models only show a tiny fraction of the calls, so it's hard to know which ones to show automatically). The only way the synchronisation can be made "better" is by moving the two languages closer together, e.g. allowing UML operations to contain the method body as text, or the code to show things like "Abstract class" as specially formatted comments (assuming the language generated doesn't have a direct representation of abstractness). Each move towards better synchronisation is thus a move away from having a higher-level language and a lower-level language.

    The DSM solution is to turn the question on its head:

    1. "OK, you showed me a high-level modeling language that doesn't yet capture enough information to build full systems. And you showed me a low-level language and a transformation into it. Now tell me what extra information you want to put in the low-level models." (Note that here we're asking for information, most likely on a (problem) domain level, not for the actual representation of that information.)
    2. "Now let's look at how we can extend or change the high-level modeling language to provide places to capture that information." (Sometimes this step may require a rethink of the modeling language, especially if it's been based on a language that somebody had already made.)
    3. "And finally, let's show how we now have all the information we need, and can generate full code directly from the top model, using the information it originally captured and the new information we would otherwise have entered in the lower-level models."

    (Remember that there was only a certain amount of information to be added to the lower-level models, regardless of its representation or duplication into several places there. Since we would have been able to generate the initial lower-level models from the information in the initial higher-level models, add extra information, and then generate code, we can clearly expect to be able to generate full code. Whilst it's probably not the best way, you can think how the transformation of the high-level models to low-level models worked, and how the transformation of the low-level models to code worked (with the extra information), and thus how you could generate the low-level models, add into them the extra information now recorded in the high-level models, and then generate code from there as you would have done in the original scenario.)

    And you thought you asked a simple question! :-) Seriously though, I hope this helps you understand the issue in general. It's one of the main reasons why MDA is doomed to failure, at least in its current form of PIM *transform* -> PSM *add detail* *generate* -> code (or *transform* -> More Product Specific Model *add more detail* ...). The industry guru vote is already in on MDA, and it's a resounding "no thanks". Successful DSLs simply don't use this approach - think of SQL, for example. Whilst it would be possible to generate C code from SQL (and I'm sure this has been done in some places) and edit it, that just leads to a larger set of artifacts requiring large amounts of expert work to build synchronisation tools, and more work for the users in learning and working in two different languages.

    Summa summarum: if you can specify a transformation from a high-level language to a low-level language, then have modellers add more detail, and then another transformation to code, you can instead look at the information that gets added by the modellers, extend the high-level language to capture it, and merge the transformations into one. That makes life easier for the modellers (one language, one model, no round-trip desynchronisation) and much easier for the metamodellers (one language, a single one-way, non-updating transformation). Additionally, you will normally find a way to record the information in significantly fewer data elements in the higher-level language.
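    The merge can be sketched in a few lines of Python. All model structures and names here are invented for illustration - this is not MetaEdit+/MERL code - but it shows how the detail that modellers used to add by hand in the low-level models moves into the extended high-level language, so one generator goes straight to code.

```python
# Toy illustration of merging two chained transformations into one.
# All structures and names are invented for the example.

def high_to_low(model):
    # Original first transformation: each state becomes a low-level step
    # with a 'timeout' slot that modellers then filled in by hand.
    return [{'step': s, 'timeout': None} for s in model['states']]

def low_to_code(steps):
    # Original second transformation, run after the manual edits.
    return ["run('%s', timeout=%s)" % (s['step'], s['timeout']) for s in steps]

def high_to_code(model):
    # Merged transformation: the timeouts now live in the (extended)
    # high-level model, so code is generated straight from it.
    return ["run('%s', timeout=%s)" % (s, model['timeouts'][s])
            for s in model['states']]

model = {'states': ['Idle', 'Run'], 'timeouts': {'Idle': 5, 'Run': 30}}
print(high_to_code(model))
```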