Going Meta, Going Modular

I’m reviewing the design of a small project which I have recently inherited. Despite the project’s small size, the problems are both large and numerous: the clients are fat, the middle tier is thin, there are two databases where there should be only one, those databases aren’t even in 1NF (are there negative normal forms? We may have discovered something here!), and we made some, let’s call them “strange”, choices in technologies. The list of design “should-haves” and their explanations is likely to be longer than the product documentation (so long, in fact, that I’m leaving out a lot of the nitty-gritty in my report). So, to say the least, we’ve got a few corrections to make.

It is this long list of improvements that made me realize the importance of spreading the word about modular design and the liberal use of metadata in any application. As far as I can tell, many programmers are surprisingly ignorant of these concepts or do not fully understand them. Whenever possible, any data (and even any business logic) used by your application should be stored outside of the application itself. By using metadata and creating modular, loosely coupled systems, your applications can literally be assembled as needed; and no, I’m not just talking about component frameworks. Here are a few examples:
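To make that idea concrete, here is a minimal sketch in Python of a business rule living outside the application. The file name `rules.json` and the `max_discount` rule are hypothetical, just stand-ins for whatever metadata your application reads at startup:

```python
import json

# Hypothetical metadata file; the rule lives outside the compiled application.
# rules.json might contain: {"max_discount": 0.15, "currency": "USD"}
with open("rules.json") as f:
    rules = json.load(f)

def apply_discount(price: float, requested: float) -> float:
    """Clamp the requested discount to whatever the metadata currently allows."""
    allowed = min(requested, rules["max_discount"])
    return round(price * (1 - allowed), 2)

print(apply_discount(100.0, 0.30))  # honors the externally configured limit
```

Changing the business rule now means editing a file, not rebuilding and redeploying the application.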

Fat vs. Thin Clients: Sometimes having a fat client is good; it improves speed and lends itself to working “offline”. However, from a maintainability standpoint, n-tier is the way to go: it’s easier to maintain, manage, test, and extend a single middle tier than multiple clients. That’s where modularization comes in. If the “middle tier” refers to a virtual, as opposed to physical, element of your application that can be loaded and used by the physical elements (the clients and servers), then you can have the best of both worlds. Switching from fat to thin clients should simply be a configuration (metadata) change, or at least a change in a compiler directive, not an architectural overhaul! A minimal sketch of this idea follows.
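Here is one way that “virtual middle tier” might look, sketched in Python under assumed names (`OrderService`, `client.json`); the remote case is stubbed because the transport doesn’t matter to the point:

```python
import json
from abc import ABC, abstractmethod

class OrderService(ABC):
    """The 'virtual' middle tier: clients only ever program against this interface."""
    @abstractmethod
    def total_for(self, customer_id: int) -> float: ...

class LocalOrderService(OrderService):
    """Fat client: the business logic runs in-process."""
    def total_for(self, customer_id: int) -> float:
        return 42.0  # placeholder calculation

class RemoteOrderService(OrderService):
    """Thin client: forwards the same calls to an application server."""
    def __init__(self, url: str):
        self.url = url
    def total_for(self, customer_id: int) -> float:
        raise NotImplementedError("a real implementation would call self.url here")

def load_middle_tier(config_path: str = "client.json") -> OrderService:
    """Pick fat or thin at startup from metadata, not from the architecture."""
    with open(config_path) as f:
        cfg = json.load(f)  # e.g. {"mode": "local"} or {"mode": "remote", "url": "http://..."}
    if cfg["mode"] == "local":
        return LocalOrderService()
    return RemoteOrderService(cfg["url"])
```

The client code calls `load_middle_tier()` once and never cares which physical element it is actually talking to.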

High vs. Low Levels of Normalization: High levels of normalization can be good because they are easier to maintain and extend. However, lower levels of normalization can be faster and are sometimes necessary to meet performance requirements. Though normalization should matter a great deal to a database designer, it should not matter to your application (unless your application is used to design relational databases). There are plenty of persistence frameworks out there; there is no excuse for not using one, or at least inventing your own. Storing queries in metadata and using a persistence framework will allow you, or even your customers (via metadata!), to normalize or de-normalize data without requiring a single change to the application. Add on a framework like Data Abstract and they can choose their database too.
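As a rough illustration of queries living in metadata, here is a tiny sketch in Python using sqlite3; the file `queries.json` and the query name `customer_orders` are made up for the example, and a real persistence framework would do far more:

```python
import json
import sqlite3

# Hypothetical metadata file mapping logical query names to SQL, e.g.:
#   {"customer_orders": "SELECT o.id, o.total FROM orders o WHERE o.customer_id = ?"}
# Re-normalizing the schema means editing this file, not the application code.
with open("queries.json") as f:
    QUERIES = json.load(f)

def fetch(conn: sqlite3.Connection, query_name: str, *params):
    """Run a named query; the application never embeds SQL or table layouts."""
    return conn.execute(QUERIES[query_name], params).fetchall()

# Usage: fetch(conn, "customer_orders", 1001) keeps working whether the data
# lives in one denormalized table or five normalized ones.
```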

Technology A vs. Technology B: Sometimes you find the perfect technology to incorporate into your application; sometimes you don’t. Sometimes you just don’t have time to investigate and have to use what’s available. In any case, if you abstract the functionality, you can always switch it out later by updating the metadata (where such information is hopefully stored) or by changing (preferably no more than) one line of code. Look into using a plug-in framework like Hydra.
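A plug-in framework does the heavy lifting for you, but the core idea can be sketched in a few lines of Python. The `plugins.json` file and the `pdf_export` role below are hypothetical; the point is that the application asks for a role and the metadata decides which technology fills it:

```python
import importlib
import json

# Hypothetical metadata, e.g.: {"pdf_export": "myapp.plugins.reportlab_pdf"}
# Swapping Technology A for Technology B means changing this one entry.
with open("plugins.json") as f:
    PLUGINS = json.load(f)

def load_plugin(role: str):
    """Resolve an abstract role (such as 'pdf_export') to whichever module the metadata names."""
    return importlib.import_module(PLUGINS[role])

# pdf = load_plugin("pdf_export"); pdf.render(report)  # same call, any backend
```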

Getting back to the architectural review mentioned in the first paragraph: had we used these basic concepts of modular design, the issues would not seem so daunting, because we could simply exchange one module for another. So, dear reader, please keep in mind the importance of modular design and metadata the next time you get lazy, because it’s much easier to be lazy when you have less work to do.