This blog is about delivering business excellence and operational efficiency through integration competency center models, and about SOA and integration in the enterprise context.

Tuesday, March 3, 2009

"What Next" for Integration Competency Centers (ICC)? - Part 2

In my last blog post of this series, I talked about the highlights of the next-generation ICC. The first one on the list says: next-generation competency centers will make it a lot easier to execute and manage change than it is today. That will transform the role of 'change' from 'threat' to 'opportunity'.

The reason I see this coming is that today the 'ability to change' is a major roadblock in enterprises, irrespective of the area we talk about (technology, infrastructure, processes, standards, etc.) and irrespective of the nature of the change (consolidation, standardization, migration, optimization, etc.). Unless we unlock the secret of managing change better and make it easier to change, we are going to get poor results in almost every scenario. Let me take a real-life example from my engagements to illustrate this point.

Here is a mega enterprise, XYZ, that respects technology leadership and strongly believes that staying ahead in IT can make it an absolute winner in terms of market differentiation. So this enterprise committed a mega investment to an end-to-end transformation of its integration landscape. It lined up the best technology, the best consultants, and its best leaders to deliver the transformation. As one can see, this involved a technology platform change, a change in the way architecture is implemented, changes to operational and engagement processes, changes to the organizational structure, and more. The organization muddled along for more than six months only to realize it was going nowhere. Why, when funding was in place, the best consultants were racking their brains, and top management sponsorship was there? It was not because the changes being made were inappropriate or invalid. It was because making change was so difficult in this organization that it was simply not prepared to do it at the scale demanded. More specifically:

Requirements change scenario – requirements change here was a nightmare. Every time a requirement changed, there was a huge process to get the change request approved, followed by a huge round of regression testing (to ensure no damage had been done). All of this made every change difficult. Two key things were tried to make life better. First, a stakeholder analysis was carried out, along with a responsibility map, to rationalize the change approval process. It turned out that too many people were involved in decision making, no one actually owned the decision, and everyone simply passed the buck or played it safe. Through this analysis, a few stakeholders were taken out of the loop and decision making was simplified by empowering a few roles. Now, with sufficient and necessary information at hand, a handful of people could quickly decide on approval of a change. Second, a prioritization scheme was developed to make regression testing practical. Business stakeholders now scan the business use cases and identify the most critical ones for regression; the rest are simply reviewed by business analysts and architects, whose sign-off serves as the basis for approving the change into production. Through this process, the cost of regression testing a change was brought down to 40% of the original cost, and needless to say, the whole hassle of test management was reduced as well.

Process change scenario – changing any process in this organization was a strict no-no. Why? Because everyone was comfortable doing what they did and did not believe anything needed to improve. So any process change initiative either died at the recommendation stage or was 'proven' to deliver bad results through deliberate attempts by staff members. That is really scary, because it tells me such an organization can never improve. When we analyzed the problem, we figured out two important things. First, most of the time when a change was introduced in this organization, people had no clue why it was being done. Second, for any change, a large part of the people involved believed that even if they did not accept the change, nothing would happen; things would move on. To make process change easier, these two problems were addressed. It was mandated that for every process change, every stakeholder from top to bottom be made aware of the change drivers and the criticality of that change for the organization and for their specific roles. This also gave the organization an opportunity to address any operational concerns at the individual role level before the change was physically rolled out. The second part of the solution was to build accountability into each role, so that everyone knew the impact on them if they did not implement the change effectively. That can be argued to be a 'threat'-based philosophy, but it actually worked.

And that is not just a one-off case. I have seen several examples where a similar story has been repeated in slightly different colors and shapes. So the moral of the story is that the ICC of tomorrow needs to innovate methods and means that make it easier to deal with change, and beyond that, to develop 'changeability' to the extent that it can be leveraged as an opportunity for continuous improvement toward business objectives. Having said that, two key questions must cross our minds: a) How do we really do it? b) What cost does this come with? We will focus on the "how do we really do it" part here.

How do we really do it?
Change enablement is not easy. It is about developing a deeper and wider understanding of change patterns and then removing the retarding elements from the change mechanics. The philosophy is simple: change is seen as a devil because it comes with unwanted cost, and organizations lack the capability to manage its impact. So if we can make change easier, cheaper, and lower-risk, then why not change? At Infosys, we have a well-defined framework that we offer our clients to inspect the change mechanics in an organization, spot the retarding elements, and look at options to handle change better. Some general strategies that can be used are:

  • Self-service capabilities for the solution development and maintenance process, so that a lean team can easily handle portfolio responsibilities. This self-service capability applies not only to the software engineering process but also to other processes that involve decision making, analysis, and interactions across multiple stakeholders. For software engineering, self-service allows application teams to develop normal integration scenarios by themselves, using a simple self-service-enabled integration workbench. As the name suggests, the capability lets application and business teams configure integration scenarios without needing knowledge of the underlying integration products or platform. Similarly, self-service allows business units to obtain certain information and intelligence about integration on their own, without depending on the integration team. Good examples are EAI feasibility analysis and order-of-magnitude (OOM) cost calculations: business units can be given an online framework that lets them feed in the (non-technical) inputs available to them and get a sense of the outcome (a go/no-go recommendation for EAI, or an OOM figure) for first-level decision making. This not only makes decision making faster but also cuts down the effort the ICC spends managing such interactions.
  • Change consolidation across the enterprise before changes are applied, so that investment in changes is optimized and prioritization takes enterprise-wide value into consideration. My experience with a large number of enterprises tells me that the money spent on various changes (in software, vendors, methods, processes, etc.) is often far from trivial, and most of the time enterprises have no enterprise-wide visibility of those changes. For example, there have been many cases where different units of the same enterprise were evaluating different products for the same purpose (integration, in our case) without any visibility into what the other units were doing. They ended up spending money on consultants, their own staff, and infrastructure, and surprisingly, in many cases, ended up selecting different products (so there goes the possibility of license optimization through economies of scale).
  • Loose coupling of changes to reduce the spread of impact (localization), by means of a metadata layer. Most of the time, the main change is not difficult; what scares the organization are the strings attached to it, which cause a second level of change due to dependencies, then a third level, and so on. This chain of impact sucks up money in a big way and makes the change a really painful process. By carefully analyzing this chain of impact (and hence the dependencies), loose coupling can be built in to localize the impact of a change. In the software engineering space, smart use of a metadata layer can help build loosely coupled parts of the software ecosystem and thus break the long chain of impact. This concept is not new, but to my surprise, the maturity of its adoption and implementation has been really poor as far as I can see. One good example of this decoupling is configuration-driven change: if the design allows an execution-level or solution-behavior change to be made simply by changing a configuration parameter, that eliminates the need for hand coding, retesting, and redeployment, making it easy to change by dealing only with the configuration parameters.
  • Leveraging automatic change propagation, thus reducing manual effort (both the first-time effort and the rework of manual mistakes later on). The previous point suggests that the chain of impact drives the cost of change in many cases. Where we are able to break the chain, fine; but where the chain of impact is unavoidable, the next weapon against it is automation of change propagation. Wherever possible, an automation framework should be built that can create the downstream modifications and alterations from the original change, using automation rules, scenario intelligence, and change algorithms. This again reduces the cost of change and increases its speed. A good example is an automated build and deployment process: as soon as the software baseline changes, a click of a button should package the entire solution and deploy it to its final destination without any manual steps. Similarly, in an advanced development workbench, a change in design can automatically propagate to the creation of metadata and, in some cases, the actual code as well.
  • Building intelligent change analytics to get a full understanding of change dependencies, in order to reduce hidden costs (and hence budget overruns). From the previous two points, it is obvious that the chain of impact and the dependencies are the key elements driving the cost and complexity of a change. If they are not known upfront, they pose a serious risk to the success of the change initiative. Late realization of chain impacts can make the change impossible to sustain, or drag a large part of the program into dealing with newly emerged impacts. Intelligent change analytics provide a clear impact view upfront, allowing stakeholders to make an informed decision about approving the change. At the same time, they give an opportunity to investigate the chain of impacts and dependencies and to identify innovative mechanisms for dealing with the change.
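To make the configuration-driven change idea concrete, here is a minimal sketch in Python. The endpoint names and the `routing_rules` structure are invented for illustration, not taken from any particular product: the point is that an integration flow's behavior lives in externalized configuration, so a behavior change is a config edit rather than hand coding, retesting, and redeployment.

```python
# Sketch of configuration-driven change (all names here are illustrative).
# Routing behavior is held in externalized configuration; in practice this
# would be loaded from a metadata repository or config file, e.g. via json.load.

def route_message(message, config):
    """Pick a destination endpoint from configuration rather than hard-coded logic."""
    rules = config["routing_rules"]
    return rules.get(message["type"], config["default_endpoint"])

config = {
    "routing_rules": {"order": "http://erp.internal/orders"},
    "default_endpoint": "http://esb.internal/fallback",
}

# Redirecting 'order' traffic to a new system is now a one-line config change.
print(route_message({"type": "order"}, config))    # http://erp.internal/orders
print(route_message({"type": "invoice"}, config))  # http://esb.internal/fallback
```

The design choice worth noting is that the code path itself never changes, so the regression scope of a routing change shrinks to validating the configuration values.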
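The "chain of impact" that change analytics must expose can be sketched as a simple traversal over a dependency graph. The component names below are invented; a real ICC would populate the graph from its metadata repository, but the core computation is just a transitive closure over "who depends on whom":

```python
# Illustrative sketch of change-impact analytics (component names are invented):
# given who depends on whom, compute everything transitively affected by one change.
from collections import deque

def impact_set(dependents, changed):
    """Return every component transitively affected by changing `changed`.
    `dependents` maps a component to the components that depend on it."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for d in dependents.get(node, []):
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return seen

# order-service feeds billing and shipping; billing feeds reporting.
dependents = {"order-service": ["billing", "shipping"], "billing": ["reporting"]}
print(sorted(impact_set(dependents, "order-service")))
# ['billing', 'reporting', 'shipping']
```

Surfacing this set before a change request is approved is what turns the hidden second- and third-level impacts described above into a visible, costable list.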

I hope this part of the story has brought some insight into thinking about the next-generation ICC from the 'change' perspective. By the way, some call it agility. It doesn't matter what we call it; end results matter. If you have any take on this perspective, I'll be glad to hear about it and discuss; it might help me evolve the next-generation ICC story for everyone's benefit. Watch out for Part 3 of this blog soon, on "High degree of collaboration of the ICC across stakeholders."

1 comment:

  1. Great topic Rakesh, both on the subject of ICCs and your emphasis on managing change. Your post reminded me of a topic that I've been meaning to write about for some time - so now I've done it. Check out my article about Change Management - or more specifically Change Leadership - at http://blogs.informatica.com/perspectives/index.php/2009/03/16/change-management/
