15 Apps, One API
Legacy systems are more the rule than the exception. With technology changing so rapidly, there are bound to be legacy systems, processes and infrastructure, and development teams can easily fall behind the technology curve. However, by combining age-old sound principles with the latest technologies, legacy systems can be transformed to sit behind an API layer.
Here at Cazton, when we create software we give due importance to qualities like scalability, maintainability, plug-in architectures, the right abstractions, sound principles, relevant design patterns and the right methodology. In this particular case study, we had a client in the health care industry with about 15 standalone apps and a handful of different databases. In fact, precisely because these systems were standalone, data that needed to be shared lived in separate databases. Would you want a single user who works with 5 different systems belonging to your company to exist in 5 different databases? Retrieving one piece of information would mean querying all of those databases and then dealing with duplicates. It is even worse when the total data you accumulate in a year is not substantial enough to worry about a single database outgrowing its server. One of the golden rules of software development is "Keep Things Simple" and another is "DRY: Don't Repeat Yourself".
I hope the problem is clear, but to elaborate a little: imagine a WPF app that monitors an active system. Now imagine a phone app that monitors the same system, then add a web app, a tablet app and a legacy Windows Forms app. Finally, imagine all of these with no common API. I am sure I am preaching to the choir, but to be specific: the moment a new feature needs to be added, it has to be added in every one of these apps. The absence of a common API adds significant development and testing overhead, increases complexity, leaves more room for human error and requires more resources to do work that fewer resources could otherwise handle. That raises cost in both the short term and the long term. Now that the problem is clear, this is how we fixed it:
- REST API: To keep the design lightweight and HTTP-centric without adding a lot of overhead, we chose REST. Its lightweight, stateless, web-inspired approach was the major reason for the choice; the requirements were simple, but planning for the future, REST made the most sense. An HTTP/REST-based approach with an emphasis on statelessness and performance is undoubtedly the current paradigm. The data format that made the most sense in this arrangement was JSON: clients made plain HTTP-based REST calls and the API returned JSON (see the controller sketch after this list).
- Authentication and authorization: When it comes to APIs, one of the most important considerations is authentication and authorization. While it might be tempting to reuse a security mechanism that works great in one environment, it may not be the best approach in another. Take the example of a thick client app used inside a company's internal (secure) network. The app has to be installed on every machine, which might mean the user has access to a web.config file containing database credentials. That works well enough for most purposes, because the assumption is that an attacker would first have to break into the network before reaching the app. However, with the landscape changing and many more devices in the picture, be it a phone, tablet, PC or browser, it becomes risky to put that kind of information on a device such as a phone. The moment a phone is lost, that sensitive information can get us into trouble. So when it comes to authentication as well as authorization, the combined security strategy needs a lot of thought. At times a good strategy is to create a custom STS that works for every scenario. In this case we created a very comprehensive strategy keeping the major threats in mind, thanks to the OWASP project (see the token-validation sketch after this list).
- Source Control: Given that the client was already using TFS, we used the Team Suite with the Scrum template for effective collaboration between the Ria team and the client team. We were able to track our work items, play planning poker, monitor our burndown and make effective use of the collaboration tools for devs and testers in Visual Studio Ultimate Edition. It was fun working with the team; we ran short 2-week sprints, and within a couple of sprints we had a solid sense of team velocity.
- Continuous Integration: A source control strategy is incomplete without continuous integration. We were glad we could use Jenkins to establish a very satisfactory continuous integration pipeline.
- Unit Testing: With the budget we worked with, it was hard to justify 100% code coverage. However, creating an API without unit testing would be self-defeating. The RIA team decided to work extra unbillable hours to add unit tests for the most important components. Good things lead to great results: the client team appreciated the good work and realized a major benefit of unit testing, namely that the tests replaced documentation for them. We love working with smart people, and the team was overjoyed to see the client respond that well. Guess what? The client decided to go fully into unit testing, and we delivered the API with 82% code coverage. Why not 100%? I will leave that discussion for another day, but the short answer is that the ROI beyond 82% was not justifiable, and as consultants it is our primary job to do what is best for the business. It is easy to be tempted by emotional reasons for implementing best practices, and just as easy to end up writing extra code that is of no real use. Don't get me wrong: if we had the budget and the need, we would surely go beyond 82% (a sample test is sketched after this list).
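To make the REST choice concrete, here is a minimal sketch of the kind of HTTP endpoint all 15 apps could share, written as an ASP.NET Web API controller. The Patient model, route and sample data are hypothetical placeholders for illustration, not the client's actual code.

```csharp
using System.Collections.Generic;
using System.Web.Http;

// Hypothetical model shared by every app instead of per-app copies.
public class Patient
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// A single REST endpoint that every client (WPF, phone, web, tablet,
// Windows Forms) calls over plain HTTP; responses are serialized as JSON.
public class PatientsController : ApiController
{
    // GET api/patients/42
    public IHttpActionResult Get(int id)
    {
        var patient = new Patient { Id = id, Name = "Sample Patient" };
        return Ok(patient); // returned to the caller as JSON
    }

    // GET api/patients
    public IEnumerable<Patient> Get()
    {
        return new[] { new Patient { Id = 1, Name = "Sample Patient" } };
    }
}
```

Because every app now talks to the same controller, a new feature is added once behind the API rather than five or more times in separate codebases.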
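The actual security strategy is specific to the client, so the following is only an illustrative sketch of the general idea of keeping credentials off the device: clients present a short-lived bearer token, and the API validates it on every request. The filter name and the ValidateToken placeholder are ours, standing in for whatever the token service or STS would really do.

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Illustrative only: a Web API authorization filter that expects a
// short-lived bearer token instead of credentials stored on the device.
public class RequireTokenAttribute : AuthorizationFilterAttribute
{
    public override void OnAuthorization(HttpActionContext context)
    {
        var header = context.Request.Headers.Authorization;
        if (header == null || header.Scheme != "Bearer" || !ValidateToken(header.Parameter))
        {
            // Reject the request before it ever reaches the controller.
            context.Response = context.Request.CreateResponse(HttpStatusCode.Unauthorized);
        }
    }

    private static bool ValidateToken(string token)
    {
        // Placeholder: a real implementation would verify the token's
        // signature, issuer, audience and expiry against the STS.
        return !string.IsNullOrWhiteSpace(token);
    }
}
```

Decorating a controller with [RequireToken] means a lost phone only exposes a token that expires, not database credentials baked into a config file.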
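As a small illustration of the kind of unit test that ended up doubling as documentation, here is a hypothetical MSTest case against the controller sketched above; the names and assertions are ours, chosen only to show the pattern.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Web.Http.Results;

[TestClass]
public class PatientsControllerTests
{
    [TestMethod]
    public void Get_WithValidId_ReturnsPatientWithThatId()
    {
        // Arrange
        var controller = new PatientsController();

        // Act
        var result = controller.Get(42) as OkNegotiatedContentResult<Patient>;

        // Assert: the action returned 200 OK with the requested patient.
        Assert.IsNotNull(result);
        Assert.AreEqual(42, result.Content.Id);
    }
}
```

A test named this way tells a new developer what the endpoint is supposed to do without opening a single document, which is exactly the benefit the client team came to appreciate.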