I know I'm not going to sound very good on the planet with this post, and I expect very little dialog in the comments, but anyway...
The Java development blogosphere is riding a new wave of marketing; yes, everyone is talking about NetBeans. But hold on, it's not just viral marketing: I found reviews saying its Ruby support is the bomb, some finding the profiler better, and some just finding it coming up with a better attitude :). Well, I have to say it: I've tried NetBeans in various forms over the last few weeks, and I can tell you it can do pretty much everything IDEs do these days. NetBeans might look less refreshing with all that Swing rendering (elegant inside!), but offering Eclipse key bindings, for example, really shows how eager it is to compete with your favorite IDE. In this post I'm not trying to say NetBeans is better or anything like that; my love for Eclipse is unparalleled for various reasons. But I am, for sure, eager to tell you about my modeling experience with NetBeans.
Before I do that, I'll jot down my opinions about NetBeans 6. Profiling is really pretty good: I don't need to understand a nefariously complex architecture to use it, and I don't need to open the help. Now you might say TPTP is complex only for lame users. Are we talking about the UI? How many perspectives does TPTP ship? Which one do you use for profiling? And what do you have to do to profile a remote Java application? Can you answer that without looking at the documentation? Don't get me wrong, I'm trying to be as unbiased as I can.
As for Ruby support, it's just about as good as the DLTK Ruby IDE. I'm no Ruby big shot yet, but things work pretty OK with DLTK as well, cheers! NetBeans has a fantastic UI editor though; VE plain sucks. Oh, and is it even around now? Last I heard it was dead.
Now, on to modeling. To my fellow Eclipse committers: let's face it, Eclipse doesn't offer anything on the modeling front that an average user can use out of the box. It does have innovative, wonderful frameworks and UML infrastructure for building tools, which is very well appreciated by the 2% of users who are Eclipse plug-in developers (including me), fine. So for our hacky interests EMF might replace MOF, but Eclipse users get no advantage out of it. I know some will say it's no big deal not to have modeling as part of the IDE, but hey, that's the whole point: everything from a single place is the point. Few people would like to model a class diagram in Visio and then write the code for it. We also have quite a few reluctantly-open-source projects in the community, controlled by arbitrary licenses that are not very useful.
For those who don't agree: download NetBeans and try its modeling. You will, for sure, appreciate the effort put into it. It can do a hell of a lot of stuff (though it can't copy and paste model elements within a single diagram :) ): Java reverse engineering, simple modeling, in-context design pattern application. It's just there and it works! You don't need to find something on the Internet and read its license thrice before using it. And the best part: it doesn't cost an arm and a leg to get these features. OK, it might behave a little in-development, but it's there for you; try it!
Eclipse is a great platform, and I can't even begin to compare it, or any of its innovative aspects, with something as trivial as NetBeans in that context. But guys, shouldn't Eclipse consider its "end users", who expect nice features? Advocacy in plain words doesn't help; it needs the strong backing of a solid product that offers something more. Learn, learn...
Friday, October 05, 2007
Tuesday, February 27, 2007
Executable Models
Imagine...
A workplace where your regular tasks involve analyzing and modeling parts of a system based on stories (or requirements, whatever) and writing a few snippets of instructions. You have a world-class IDE with wonderful perspectives, editors, and views, and a state-of-the-art execution infrastructure. The IDE allows quick execution of your models to test them and visual model debugging, encourages ergonomics that prevent heavy "keying and mousing", supports the finest model refactoring to allow painless design change, and provides real-time collaboration for distributed brainstorming and discussions...
You pair-model it, you test-drive it, you version it at a finer granularity of control. You have a catalogue of blueprints (design + domain) that can be used statically as well as in the context of an existing system, and continuous integration of models that compiles, validates (tests), and builds the system. At the end of the day, you have a system that can be simulated in the development environment as well as deployed as production software...
Imagine an execution infrastructure able to interpret UML directly (relax, take the unambiguous parts of UML, or the Mellor subsets), or maybe one that emits a script that can be plugged into an existing JVM, or maybe a UML VM itself, with attach-on-demand and hot model replacement :) ...
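As a toy illustration of the direct-execution idea (a sketch only; no real UML VM works like this, and the model shape here is invented for illustration): a state machine captured as plain data, the "model", and interpreted at runtime with no generated source code in between.

```java
import java.util.HashMap;
import java.util.Map;

// Toy "model interpreter": a state machine held as data and executed
// directly, instead of being translated to source code first.
public class ModelInterpreter {
    // transitions.get(state).get(event) -> next state
    private final Map<String, Map<String, String>> transitions = new HashMap<>();
    private String current;

    public ModelInterpreter(String initialState) {
        this.current = initialState;
    }

    public void addTransition(String from, String event, String to) {
        transitions.computeIfAbsent(from, k -> new HashMap<>()).put(event, to);
    }

    // Fire an event against the model; unknown events leave the state unchanged.
    public String fire(String event) {
        Map<String, String> outgoing = transitions.get(current);
        if (outgoing != null && outgoing.containsKey(event)) {
            current = outgoing.get(event);
        }
        return current;
    }

    public static void main(String[] args) {
        ModelInterpreter door = new ModelInterpreter("Closed");
        door.addTransition("Closed", "open", "Open");
        door.addTransition("Open", "close", "Closed");
        System.out.println(door.fire("open"));   // Open
        System.out.println(door.fire("close"));  // Closed
    }
}
```

"Hot model replacement" then becomes just mutating the transition table while the interpreter is running, which is exactly what makes the interpreted approach attractive compared to regenerating and redeploying code.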
I dream of working in such an environment, and of building one.
Realize....
These imaginings, although they may sound like philosophy, are not without basis: existing tools and technologies can be leveraged and bridged to provide this envisioned platform. It's hard and takes a lot of resources, but it's not impossible. OK, it won't give quick returns, and it will have adoption problems, but those can be handled.
At first, Executable UML will seem like a buzzword; to me it sounded daunting, as if everyone wanted to change the world. But that change is for the good: how long do you want to work with something as ugly as XML or JSP, and other complexities that in no way add to customer requirements? They deserve to be generated, or they don't deserve to be in our codebase at all (I should say modelbase :) ).
Executability of models is not entirely new (think virtual machines, OOP, etc.); people like Stephen Mellor and Marc Balcer have been working on it for quite some time now.
If you feel that support for such tooling is not there yet, let me tell you: Eclipse has made that feeling a thing of the past. With Eclipse, we can leverage most of the related technologies to achieve it: ECF for collaboration; UML2, EMF, GEF, and GMF for modeling; the Debug framework and GEF for simulation and model debugging (in this case, we may need to change the abstraction semantics of stack frames, etc.). Of course, building a VM or runtime is non-trivial, but it's worth trying alternative approaches, like integrating with an existing runtime.
This is the time to improve software crafting (or engineering?). No one needs costly software, and current fashions make it expensive. It's a different debate whether the productivity levels of existing methods are adequate, but they can surely be improved.
Wednesday, February 21, 2007
When will 'Software Modeling' catch big time attention?
For the last year, I've been working on a modeling tool featuring support for the UML2 and BPMN 1.0 specifications. This tool is not envisioned as just "yet another modeling tool" that lets you draw basic elements (with strange visual manipulations) and generate structural code and documentation (which is not impressively useful!).
It's commonly known that modeling is good for visualizing and documenting a system. I've used UML heavily for documentation, and I found it useful for communicating software blueprints to those who don't understand the technology well.
Contemporary tools can do more than visualization. Typically, a modeling tool doesn't attempt full-blown generation of an application for a target platform, say J2EE; not that such an attempt is insane, it's simply too much work. Instead, tools generate structural (skeletal) code from the model, say, classes from class diagrams; this is commonly known as forward engineering. Tools also support round-trip engineering, regenerating the code from the model while preserving manual customizations.
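To make "forward engineering" concrete, here is a minimal sketch in Java. The in-memory model shape (a class name plus type/name attribute pairs) is invented for illustration; real tools consume UML or XMI models and emit far richer skeletons.

```java
import java.util.List;

// Minimal sketch of forward engineering: emit a skeletal Java class
// from a tiny in-memory class-diagram model.
public class SkeletonGenerator {

    // attributes: each entry is {type, name}, e.g. {"String", "name"}
    public static String generate(String className, List<String[]> attributes) {
        StringBuilder sb = new StringBuilder();
        sb.append("public class ").append(className).append(" {\n");
        for (String[] attr : attributes) {
            sb.append("    private ").append(attr[0]).append(' ')
              .append(attr[1]).append(";\n");
        }
        sb.append("}\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        String code = generate("Customer",
                List.of(new String[]{"String", "name"},
                        new String[]{"int", "id"}));
        System.out.println(code);
    }
}
```

Round-trip engineering is then the hard part: rerunning this generation after the model changes without clobbering whatever the developer typed into the method bodies by hand.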
How redundant: why model and code at the same time, and why maintain both?
The tool I'm working on is supposed to generate a "typical application" end-to-end (heh, really?!). The tool also has a special language (a sort of BASIC dialect) that can be used to write the application's business logic, which might otherwise require tedious micro-modeling (boy, you'd need incredible mouse capabilities for that level of modeling). This language is a customized implementation of the OMG(tm) Action Semantics Language (ASL) and Object Constraint Language (OCL); logic written in it is translated into platform-specific code during transformation and code generation.
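As an illustration of that last translation step, here is a toy sketch in Java that rewrites one statement of an invented BASIC-like action language into Java source. The `set x = expr` syntax is made up for this example; the actual ASL/OCL dialect and its translator are far richer.

```java
// Toy action-language translation: one invented statement form
// ("set x = <expr>") rewritten into a Java assignment statement.
public class ActionTranslator {

    public static String translate(String stmt) {
        String s = stmt.trim();
        if (s.startsWith("set ")) {
            // "set total = price * qty" -> "this.total = price * qty;"
            return "this." + s.substring(4).trim() + ";";
        }
        throw new IllegalArgumentException("unsupported statement: " + stmt);
    }

    public static void main(String[] args) {
        System.out.println(translate("set total = price * qty"));
        // prints: this.total = price * qty;
    }
}
```

A real translator would of course parse the expression, resolve names against the model, and target whichever platform (J2EE, plain Java, etc.) the transformation is configured for; the point is only that action-language statements map mechanically onto platform code.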
Technically, this is enough to generate an entire application. Recently, tool vendors have been trying to avoid the ASL approach altogether and use supplemental UML models (like activity diagrams and state machines) to specify the behavioral aspects of the system; this approach has usability problems, although it is an interesting technical challenge (umm... given enough beer, I'll even admit that I once wrote a state machine compiler for money, to generate behavioral code...).
How bad, again: why would I want to learn UML and a dumb ASL language, and still struggle with the generated code to customize it?
I agree that the tool can generate "student project"-like applications, but I remain skeptical about *serious* applications involving the complexities of security, performance, and integration. It's not impossible to do that either; however, supporting multiple security, persistence, and application frameworks and third-party interfaces for multiple platforms, in their entirety, would be a non-trivially huge undertaking.
Lately, I've been observing generative methods in regular development. These days most applications are some reasonable combination of generated and hand-written code, and people manage both the code and the model. Generative technologies are gaining acceptance gradually, especially thanks to support from Eclipse projects like EMF, GEF, GMF, and the other Modeling subprojects; apart from rising community interest, hundreds of modeling tools are available and major players are investing in these technologies.
We have witnessed transitions in technologies: from hardware-specific assembly code to C, from C to C++/Java/RoR... Those transitions were fine and gradual, but modeling is a paradigm shift in the way we see software development (it's not just about learning a new language, right?); it imposes a steep learning curve and demands understanding mammoth specifications. It's always difficult to adopt a different way of doing things.
With the current state of tooling and specifications, it is difficult to deliver the "generate everything, write nothing" jargon feature, and it's a pain in the a**e to keep model and code in sync (tools are improving though). I'm sure of one thing: generative development is going to be the next generation of software development; I'm just not sure how many more years it will take...