When technical systems are developed, it is better that they are designed, deployed, and supported in a standard way. Designers going off and doing their own thing can result in poorly performing, badly supported, and difficult-to-upgrade systems. We had the chance to improve an organisation's governance framework.

The Brief

An organisation wanted to get all developments onto a level playing field. Senior management and strategists needed assurance that their systems were of sufficient quality to hang new business services off them. We were asked to look at some sort of certification system that would highlight any system that might be a compliance risk, or could prevent the business moving forward.


The organisation had a loosely defined governance framework: it was applied patchily, largely ignored, tedious to complete, and had no impact on future analysis. Project teams were expected to complete a huge spreadsheet of over 450 questions relating to all kinds of concerns. From the answers, a system was given a score, and if the score fell below a threshold the system was flagged as a possible compliance risk.

Each question could be answered 'compliant', 'not compliant', or 'not applicable'. Teams would mostly answer 'not applicable' just to avoid any awkward questions. The framework was a good idea, but poorly formed, badly implemented, and, worse, the results didn't mean anything.
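The scoring mechanism described above can be sketched as follows. This is a minimal illustration, not the organisation's actual spreadsheet logic; the answer values, the threshold, and the treatment of 'not applicable' are assumptions, but the sketch shows why over-using 'not applicable' undermined the results.

```python
# Minimal sketch of the original scoring model (assumed logic, not the
# organisation's actual spreadsheet formula).
def score_assessment(answers, threshold=0.8):
    """Score a system from its answers and flag a possible compliance risk.

    answers: list of 'compliant', 'not compliant', or 'not applicable'.
    'not applicable' answers are excluded from the score, which is why
    answering 'not applicable' let teams dodge awkward questions.
    """
    applicable = [a for a in answers if a != "not applicable"]
    if not applicable:          # everything marked N/A: nothing was measured
        return 1.0, False
    score = applicable.count("compliant") / len(applicable)
    at_risk = score < threshold
    return score, at_risk
```

A team that marks every question 'not applicable' scores perfectly and is never flagged, illustrating how the original results came to mean nothing.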


We began by looking at the framework. We wanted the assessments being completed by expensive project resources to actually have a purpose. If nobody could act on the risks carried by non-compliant systems, there was no point in continuing. The CTO was the perfect stakeholder to take on this responsibility.

This gave the output of the governance process a purpose, and someone with the capacity to act on it. The next step was to revisit the spreadsheet and analyse every question, to get a sense of the overall trajectory of the questioning. The questions were grouped into what appeared to be half-decent sections covering hosting, security, development language, and so on. However, there didn't appear to be any rhyme or reason for a question to be on the list. With no owner, source, or documentation to back them up, we needed to revisit the way the questions came into being.

Stakeholder workshops were key here. After discovering a group of people who had been involved in creating the questions, we got them into a room, determined to come away with a way of supporting each and every question. Each question was analysed and one person was made responsible for it. Next, that person had to establish which policy, process, or industry standard the question derived from. Any question that couldn't be backed up with solid evidence was struck from the list.

By doing this, it became apparent that the best way to build the question set is not to make questions up on the fly, but to start from a policy or framework and derive the questions from it. Stakeholders were also asked to state the impact of not complying with each question.

With the stakeholders on side and thinking about the framework in a different way, we turned to the benefits of having development teams complete the assessments. It was evident the reasons were to keep systems development on the right track, simplify support, and reduce the amount of exotic software and hardware. The point of the exercise was to show the level of risk the organisation carried through non-compliance. A series of reports was developed to show that risk.

These risks were mapped against the business functions the systems supported, so we could show in a simple way which business functions, and therefore which business capabilities, were at risk. This was a powerful message and gave a strong indication of where intervention was needed to bring systems back in line.
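The roll-up from system-level risk to business functions can be sketched like this. The system names, the mapping, and the risk flags are invented for the example; the point is only that a simple join between systems and the functions they support is enough to surface which parts of the business model carry compliance risk.

```python
# Sketch of the risk reporting idea: each system maps to the business
# functions it supports, so system-level compliance risk rolls up to
# function-level risk. Names and mappings are invented for the example.
from collections import defaultdict

SYSTEM_FUNCTIONS = {
    "billing-engine": ["Invoicing", "Credit control"],
    "crm": ["Sales", "Customer service"],
}

def functions_at_risk(system_risk):
    """Return business functions at risk, with the systems that put them there.

    system_risk: dict mapping system name -> True if flagged non-compliant.
    """
    at_risk = defaultdict(list)
    for system, risky in system_risk.items():
        if risky:
            for fn in SYSTEM_FUNCTIONS.get(system, []):
                at_risk[fn].append(system)
    return dict(at_risk)
```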

The new spreadsheet still had work to do. It was still a huge artefact and appeared to be a mountain to climb. To help get the best out of it, we introduced the idea of system profiling. By answering a dozen questions about the system's size, purpose, location, users, and so on, we were able to shrink the number of relevant questions.
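System profiling amounts to a filter over the question set: each question is tagged with the profiles it applies to, and the short profile questionnaire selects the relevant subset. The tags, profile fields, and question texts below are illustrative assumptions, not the organisation's real content.

```python
# Illustrative sketch of system profiling. Each question carries tags for
# the system profiles it applies to; a system's profile selects the subset.
# All tags and question texts here are invented for the example.
QUESTIONS = [
    {"id": "Q1", "text": "Is the system hosted in an approved data centre?",
     "applies_to": {"on_premise"}},
    {"id": "Q2", "text": "Does the cloud provider hold a security accreditation?",
     "applies_to": {"cloud"}},
    {"id": "Q3", "text": "Is personal data encrypted at rest?",
     "applies_to": {"on_premise", "cloud"}},
]

def relevant_questions(profile):
    """Return only the questions matching the system's profile."""
    return [q for q in QUESTIONS if profile["hosting"] in q["applies_to"]]
```

A cloud-hosted system would see only Q2 and Q3, which is how a dozen profile answers can cut a 450-question spreadsheet down to something manageable.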

The questions were also broken down into mandatory and advisory. Advisory questions didn't need to be answered and had no effect on the resulting risk profile. Some questioned why they were there at all, but as an aide-mémoire they proved useful. Mandatory questions not only had to be answered; each answer had to be supported with a reason. This could be a useful comment or a pointer to a design artefact. Either way, there was now positive confirmation of the answer.
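The mandatory/advisory split can be sketched as a simple validation rule: a mandatory question must be answered and must carry a justification, while an advisory one may be left blank. The field names are assumptions for the sake of the example.

```python
# Sketch of the mandatory/advisory rule: mandatory questions must carry a
# justification (a comment or a pointer to a design artefact); advisory
# questions may be left blank. Field names are assumptions.
def validate_answer(question, answer, justification=""):
    """Return a list of validation problems for one answered question."""
    problems = []
    if question["mandatory"]:
        if answer is None:
            problems.append(f"{question['id']}: mandatory question unanswered")
        elif not justification.strip():
            problems.append(f"{question['id']}: answer needs a reason or "
                            "a pointer to a design artefact")
    return problems
```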

The improved spreadsheet and framework were put into action. Several systems were chosen, covering many of the different profiles. The results were encouraging. Users reported that the much-simplified process meant they could actually provide the evidence, whereas before it had been tedious and laborious. The other benefit was that a named person now owned each question, so developers could make contact and have a meaningful conversation about their situation.

The framework could now be used across any sort of development. It shows where non-compliance affects the business model and indicates where investment is needed.


Better reporting

Senior management and Product Owners can now see the risks they are carrying, and how they affect the business model.

Easier to use

The smaller number of more directed questions made the framework easier to use. Teams found it less error-prone and quicker to complete.

Risk is measured

Risk can be measured across all systems, or examined within particular sections. It can indicate, for example, whether the hosting environments need a serious look, or individual systems do.


Individuals are now responsible for each question, and each is backed by documented best practice, process, or policy, making it easier for developers to challenge a requirement.


The open reporting means anyone can see the condition of any system and the impact of any non-compliance.

By being able to explain and negotiate a non-compliance, an agreement can be struck instead of forcing compliance and wasting money, especially where the non-compliance won't have a major impact.

Systems can be re-assessed as they are developed and more is understood about them. Even after delivery, it is recommended that systems be re-assessed at suitable intervals, or when something significant happens to them, such as a major upgrade.