Connascence is a quality metric for describing how coupled two systems are, or in the terms of this example, how coupled our implementation class and test class are. Because it describes levels of coupling (in a structured order, see below) we can use it to help prioritise what should be refactored first.
As always there is a trade-off in how far down the chart you work, but from personal experience I usually find the value starts to drop off beyond Connascence of Meaning. I'll go over a simple example (so take it with a pinch of salt) that should show, firstly, how to clean up your code, and secondly, how you could end up with a better solution. Oh, and excuse my Python (if it's not overly Pythonic); I'm not a native Python coder. In the example below I'll tackle two of the types of Connascence.
FizzBuzz is a very typical kata that most of you will have come across at some point. The requirements are very simple:
- You count in numbers, starting at 0, incrementing one at a time
- For numbers divisible by 3, you say "Fizz"
- For numbers divisible by 5, you say "Buzz"
- For numbers divisible by both 3 and 5, you say the rules in their above order, ie "FizzBuzz"
- If the number does not match a rule, you say the number, ie "4"
Easy! Right let's start test driving this...
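The original code isn't shown here, but a first naive step (the class and test names below are my own guesses, not necessarily the author's) might look something like:

```python
# A minimal sketch of a first passing "Fizz" test. The FizzBuzz class
# and its say() method are my assumptions, not the original author's.
class FizzBuzz:
    def say(self, number):
        if number % 3 == 0:
            return "Fizz"
        return str(number)

def test_says_fizz_for_multiples_of_three():
    assert FizzBuzz().say(3) == "Fizz"

test_says_fizz_for_multiples_of_three()
```

Note how both the test and the implementation know about "Fizz" and "3".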
I've not gone overly purist, and I've jumped in at getting "Fizz" to work. A first test that passes as simply as I can make it. Many people may stop here and move on to coding "Buzz", but I'd like to first tackle some coupling. First up is Connascence of Value in that both my implementation and test share knowledge of "Fizz" and "3". I'll fix this first by injecting the values into my implementation so that my test can control the scenario:
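A sketch of what injecting the values could look like (the constructor shape and the randomised test values are my assumptions):

```python
import random

# Sketch: the number and word are now injected, so the test controls
# the scenario and can randomise both values.
class FizzBuzz:
    def __init__(self, number, word):
        self._number = number
        self._word = word

    def say(self, i):
        if i % self._number == 0:
            return self._word
        return str(i)

def test_says_word_when_divisible_by_number():
    number = random.randint(2, 9)
    word = "word" + str(random.randint(0, 1000))
    # any multiple of the injected number should produce the injected word
    assert FizzBuzz(number, word).say(number * 2) == word

test_says_word_when_divisible_by_number()
```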
In the above I've fixed Connascence of Value, and in doing so I'm able to randomise the number and word said (as it doesn't matter what they are). In context: if the input is divisible by the injected number, the injected word gets said.
Luckily I've no Connascence of Timing (the timing of the execution of code doesn't impact me) or Connascence of Execution Order (nor does execution order of the implementation code affect me) and arguably no Connascence of Position.
I do, however, have Connascence of Algorithm, as both my implementation and test know that the check is "is it a multiple of a number". They are coupled by both having to know the same "algorithm", in this case a simple modulo-equals-zero check. So let's fix this, again by injecting the "algorithm" so that the test controls the scenario...
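Sketching that refactoring, with the divisibility check injected as a callable (the exact shape in the original may differ):

```python
# Sketch: the "algorithm" is injected as a predicate, so the test no
# longer needs to know anything about modulo arithmetic.
class FizzBuzz:
    def __init__(self, rule, word):
        self._rule = rule  # any callable taking an int, returning bool
        self._word = word

    def say(self, i):
        if self._rule(i):
            return self._word
        return str(i)

def test_says_word_when_rule_applies():
    # the rule always matching is fine: we only care that a matching
    # rule makes FizzBuzz say the word
    fizzbuzz = FizzBuzz(lambda i: True, "anyword")
    assert fizzbuzz.say(1) == "anyword"

test_says_word_when_rule_applies()
```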
In removing the Connascence of Algorithm I'm able to completely change the context of my test. Instead of passing in a number I just need to pass in a lambda expression (which would be "lambda i: i % 3 == 0"). In the case of my test though, I don't care what the expression is, only that if it returns true, then FizzBuzz will say the word. I can also easily add a negative test case:
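The negative case could then be as simple as the following (repeating my sketch of the class so the example stands alone):

```python
# Repeating the sketched FizzBuzz (my assumption of its shape) so this
# example is self-contained.
class FizzBuzz:
    def __init__(self, rule, word):
        self._rule = rule
        self._word = word

    def say(self, i):
        return self._word if self._rule(i) else str(i)

def test_says_number_when_rule_does_not_apply():
    # a rule that never matches: FizzBuzz should fall back to the number
    fizzbuzz = FizzBuzz(lambda i: False, "anyword")
    assert fizzbuzz.say(4) == "4"

test_says_number_when_rule_does_not_apply()
```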
Next up I hit Connascence of Meaning (an example would be, returning an int to represent a monetary value. Is it pence? Pounds? Dollars? Cents? etc...). This basic kata isn't really affected by it, so I'll halt my refactoring there. In terms of testing "Buzz" I'm already covered by the above tests. The next step would be to test saying two rules, ie "FizzBuzz" at which point I'd fix only injecting a single rule/word, for an array of rules/words.
So where are the actual rules now that I've extracted them out? And how are they tested? Separately of course, and easily testable. Here is a Rules class I eventually ended up making (that gets injected into my FizzBuzz class) and that represents my configuration:
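The real class lives in the linked repo; a plausible shape for it (my guess: an ordered list of predicate/word pairs) would be:

```python
# A guess at the configuration class: an ordered list of
# (predicate, word) pairs. The real Rules class in the repo may differ.
class Rules:
    def all(self):
        return [
            (lambda i: i % 3 == 0, "Fizz"),
            (lambda i: i % 5 == 0, "Buzz"),
        ]

# FizzBuzz applies whichever rules match, in order, and falls back to
# the number itself when none do.
class FizzBuzz:
    def __init__(self, rules):
        self._rules = rules

    def say(self, i):
        words = [word for rule, word in self._rules.all() if rule(i)]
        return "".join(words) or str(i)
```

With this shape, a number divisible by both 3 and 5 matches both rules in order, producing "FizzBuzz".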
In terms of testing the above class, all I need to test is that a rule exists in the array for "Fizz" and for "Buzz". I'm effectively just testing that my configuration is correct, not the logic around how rules/words are applied. Using Connascence I've been able to decouple my tests, and in doing so it has helped drive out a solution where I've decoupled configuration from function. My FizzBuzz class doesn't need to know the details of the rules, it just applies them if necessary.
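Assuming a Rules class that exposes an ordered list of (predicate, word) pairs (a guess on my part; the real class in the repo may differ), such a configuration test could look like:

```python
# Repeating a guessed Rules shape so this example is self-contained;
# the real class in the linked repo may differ.
class Rules:
    def all(self):
        return [
            (lambda i: i % 3 == 0, "Fizz"),
            (lambda i: i % 5 == 0, "Buzz"),
        ]

def test_fizz_rule_is_configured():
    rules = Rules().all()
    # find the rule paired with "Fizz" and check it fires correctly
    fizz_rule = next(rule for rule, word in rules if word == "Fizz")
    assert fizz_rule(3)
    assert not fizz_rule(4)

test_fizz_rule_is_configured()
```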
To take a step back and look at the problem again... FizzBuzz is (as I know it) a drinking game. New rules are constantly added as the game progresses (making it harder and harder so more drink is consumed). If a rule applies, you say a specified word. If not, you say a number. The solution I've ended up with above makes it very easy to add new rules without needing to change a lot of code. It's only when rules override other rules etc... (new features!) that things get more complicated, but the code is in a good, flexible position to adapt now that I've refactored it and reduced coupling.
I've a nearly finished solution (lacking a couple of configuration tests) here: https://github.com/robertbeal/kata-fizzbuzz-py although keep in mind it only shows the destination, not the journey (so to speak). And while the above does test configuration and function, you would still complement it with higher-level tests as well.
This post is just a very simplistic example of using Connascence to decouple your tests. I've not covered all of the levels of Connascence (although please click any links above for code examples) but I hope it shows a rough idea of how effective it can be at improving your code quality. It's obviously much easier to apply to a kata like FizzBuzz than to "real" code, but that's something that comes with practice. All I can say is that once you try it (and see the light) you'll get hooked, as it's quite an eye-opening, measurable tool for improving code quality.