The 5C Model for Software Design

Learn how to create a sustainable Software Architecture!

If you can't measure it, you can't manage it!

If you are wondering how well you have succeeded in modularizing your software, there are in fact ways to measure it.


CUT: Relational Cohesion

An indication of good cohesion is a high level of interdependence between the subcomponents of a component. The metric Relational Cohesion therefore reflects the ratio of the connections between these subcomponents to the number of subcomponents themselves. According to the JArchitect documentation, healthy values should be >= 1.5. This means the number of internal connections or dependencies should always be slightly higher than the number of internal building blocks. However, it should not be higher than 4, as this would indicate relatively poor internal structure due to a comparatively high number of internal dependencies. According to the following formula, a value of approx. 1.16 results for the example in the picture.

Relational Cohesion = (number of connections + 1) / number of subcomponents
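To make the calculation concrete, here is a minimal sketch in Python. Only the formula itself is taken from the text; the component counts at the end are a hypothetical stand-in, since the picture from the article is not reproduced here.

```python
def relational_cohesion(connections: int, subcomponents: int) -> float:
    """Relational Cohesion = (number of connections + 1) / number of subcomponents."""
    return (connections + 1) / subcomponents

# Hypothetical example: a component with 6 internal classes and 8 dependencies among them.
value = relational_cohesion(connections=8, subcomponents=6)
print(f"Relational Cohesion: {value:.2f}")  # 1.50 -> just inside the healthy range (1.5 to 4)
```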



CONCEAL: Visibility Metrics

The visibility metrics were presented in one of my articles on DZone in 2018, which you can read there for free.



CONTRACT: Weighted Interface Complexity (WIC)

While working on the 5C model, I was disappointed by the existing options for measuring the complexity of interfaces, so I came up with something new. First I asked myself which measurable properties of an interface make it complex for its user, meaning the person who writes the code that consumes the interface.

  • Number of methods, functions, or public variables: The sheer number of possible interaction points increases the effort needed to understand an interface. We use the power of ten of the number of methods (i.e. 1 for 1-9 methods, 2 for 10-99 methods, etc.).
  • Number of fields in parameters: The larger the number of individual fields that make up the input and output parameters, the more complex the interface. For weighting, we take the power of ten of this value for further calculation.
  • Number of data types: The more different data types there are in the parameters of an interface, the more complex it is. Enumeration types are regarded as separate data types. Again, we use the power of ten of the number of data types as we did with the number of methods and data fields.
  • Nesting depth of the parameter structure: The largest nesting depth among the parameter objects is used for this, regardless of whether these are used in the input or output.

The Weighted Interface Complexity is then calculated as follows, whereby factors with a value <= 0 are omitted from the calculation (otherwise they would zero out the product). My recommendation would be to aim for a value <= 6, although I must confess that I still lack empirical data here.

WIC = Power_of_Ten_Methods x Power_of_Ten_Data_Fields x Power_of_Ten_Data_Types x Maximum_Nesting_Depth_Parameter_Objects
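The following Python sketch shows one way the calculation could be implemented. Reading "power of ten" as the number of decimal digits and skipping factors <= 0 follow my interpretation of the description above; the example interface at the end is purely hypothetical.

```python
import math

def power_of_ten(count: int) -> int:
    """Order-of-magnitude weight: 1 for 1-9, 2 for 10-99, etc."""
    return int(math.log10(count)) + 1 if count > 0 else 0

def weighted_interface_complexity(methods: int, parameter_fields: int,
                                  data_types: int, max_nesting_depth: int) -> int:
    """WIC = product of the four factors; factors <= 0 are omitted."""
    factors = [
        power_of_ten(methods),
        power_of_ten(parameter_fields),
        power_of_ten(data_types),
        max_nesting_depth,
    ]
    result = 1
    for factor in factors:
        if factor > 0:  # omit factors <= 0 instead of zeroing the product
            result *= factor
    return result

# Hypothetical interface: 12 methods, 35 parameter fields, 7 data types, nesting depth 2.
print(weighted_interface_complexity(12, 35, 7, 2))  # 2 * 2 * 1 * 2 = 8 -> above the suggested limit of 6
```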



CONNECT: Average Component Dependency

The real problem with bad architectures is usually an excessively high degree of dependency between components. To check whether the amount of dependencies is still within limits, we calculate the average number of dependencies per component.

The metrics Depends Upon and Used From were calculated as examples for each of the six components in the picture. Depends Upon counts all the components on which a component depends, including itself. Used From, on the other hand, counts the number of components that depend on it, again including the component itself. Dependencies are cumulative, which means that indirect dependencies are also included: Depends Upon covers all components used by a component, all components those use in turn, and so on.

Next we calculate the Cumulative Component Dependency (CCD) by simply summing up all Depends-Upon values. In our case:

CCD = 4 (A) + 4 (B) + 2 (C) + 2 (D) + 1 (E) + 1 (F) = 14

To put a number like the CCD into perspective, we calculate the Average Component Dependency (ACD) by dividing the CCD by the number of components. The ACD specifies how many components a component depends on, on average, including itself. In this specific case, an ACD of 2.33 means that a component depends on an average of 2.33 components, itself included.

ACD = 14 / 6 = 2.33
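A small Python sketch of the calculation is shown below. Since the picture is not reproduced here, the dependency graph is a hypothetical one, chosen so that its cumulative Depends-Upon values match the numbers quoted above; the real example may be wired differently.

```python
# Hypothetical dependency graph: component -> components it uses directly.
graph = {
    "A": {"C", "F"},
    "B": {"D", "E"},
    "C": {"E"},
    "D": {"F"},
    "E": set(),
    "F": set(),
}

def depends_upon(component: str) -> int:
    """Cumulative Depends Upon: the component itself plus everything it reaches transitively."""
    seen, stack = set(), [component]
    while stack:
        current = stack.pop()
        if current not in seen:
            seen.add(current)
            stack.extend(graph[current])
    return len(seen)

ccd = sum(depends_upon(c) for c in graph)  # 4 + 4 + 2 + 2 + 1 + 1 = 14
acd = ccd / len(graph)                     # 14 / 6 = 2.33
print(f"CCD = {ccd}, ACD = {acd:.2f}")
```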



CONSTRUCT: Highest ACD

A well-balanced architecture should result in components with low ACD values. To verify this, you can simply calculate the highest ACD value across all the modules of your architecture.
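Building on the ACD sketch above, the highest ACD could be determined as follows; the module names and values are again hypothetical.

```python
# Hypothetical ACD values, one per module of the architecture (computed as shown above).
acd_per_module = {"billing": 2.33, "inventory": 1.80, "reporting": 3.10}

worst_module = max(acd_per_module, key=acd_per_module.get)
print(f"Highest ACD: {acd_per_module[worst_module]:.2f} (module: {worst_module})")
```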



Typical Fallacies

You have to be careful when using metrics. In fact, I think the use of metrics has in most cases done more damage than anything else. This is illustrated by the following anecdote.

A governor in India wanted to reduce the number of cobras in his territory by paying a bounty for every dead cobra. The result, however, was that the existing cobra plague in the area worsened. The bounty suddenly gave dead cobras a value. As a result, the citizens began to breed cobras and kill them later to collect the bounty. Some of these animals escaped and so worsened the problem.

The same thing can happen if you simply set thresholds for certain metrics. If the team cannot build or deploy because a key figure is missed, they will sooner or later find ways to trick the metric, which sometimes makes the code worse than it was before.

What you should do instead is monitor metrics over time. If there are questionable trends, then take action to counter them. I do not recommend automatic checks of thresholds.