I’m not objecting to the practice, just the naming, and that’s because I’m thinking like an analyst instead of a programmer. Naming steers thought from that point on, and we shouldn’t tell people to “test” something they don’t have yet.
Ask any Denver-bound passenger on a plane to navigate to where they are going. The chances of getting to Denver are about as good as those of a project completing successfully without analysis. The analysis is your plan, your map, and your radar. The destination is your business goal.
Testing in project workflow
There are several concurrent workflows during a project. Each basic workflow takes up a majority of the project’s activity at some point and can occur concurrently with most of the other workflows. Here’s my version of the project workflow, without any particular allegiance to another methodology. It’s what I’ve found to be a cohesive set of activities that require unique skill sets.
- strategy – coming up with a good business idea
- project initiation – committing assets to making the idea real
- needs – talking to folks about what they want based on that idea
- requirements – understanding and communicating those needs in testable terms
- analysis – organizing business data and workflows based on those requirements
- design – figuring out how it's all going to work with what we have, for the technical folks
- implementation – the construction
- testing – the double check of validation and verification
- deployment – getting it launched
- maintenance – keeping it working and updating it
Testing is about two things: validation and verification. Validation is checking what was planned and making sure the plan was carried out. Verification is seeing happy faces on your stakeholders, no matter what they said and what you wrote down in your plan. Valid product + verified happiness = project success.
Testing can start as early as the requirements phase. Since the project is being expressed as testable statements, it follows that testing can start early. The information that is extracted at that point is useful for planning and setup. Then more planning is done throughout analysis and design for the large scale types of tests that match up with the large granular data definitions and workflows.
When the granularity of design comes down to chunks of code and individual statements, you still have testing that goes on. It is this drill-down into the finer details of design that the test-first advocates are talking about. Here the design is put forth as a supposition, and as it gets implemented it is pushed back into the validity loop of testing. Hopefully, there’s enough high-level analysis and design to make this low-level design and testing stress-free for the coder.
Test-Driven Development in isolation
Testing is not what you do before you write code. It doesn't matter how much you like Kent Beck. Kent says that Test-Driven Development (TDD) is a style of development. So it's not a style of testing? What's the first step? It's assert. What do you assert? Now you have to go to your data and workflow models to figure that out. What? No analysis and design models? But if you did have some analysis, then you could do technical design and get a rough sketch of the code.
The TDD coder is supposed to come up with types of tests to run (analysis level testing), rename identifiers (analysis), assert first (analysis), test data (design level testing), and practice this iteratively. If the analysis and design work had been done previously, the coder could use it.
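The assert-first step can be sketched in a few lines. Everything here is my invention for illustration (the `parse_order` function and its comma-separated format are assumptions, not anything from a real project); the point is only the ordering of the work:

```python
# Hypothetical sketch of assert-first: the assertion is written before
# the code it tests exists, which forces a decision about the data and
# workflow up front. parse_order and its input format are my assumptions.

# Step 1 -- assert first. parse_order does not exist yet when this is written.
def test_parse_order():
    assert parse_order("ABC-1, 3") == ("ABC-1", 3)

# Step 2 -- only now write just enough code to satisfy the assertion.
def parse_order(line):
    """Split a 'sku,quantity' line into a (sku, int) pair."""
    sku, qty = line.split(",")
    return sku.strip(), int(qty)

test_parse_order()  # passes
```

Notice that step 1 is really a micro-dose of analysis: deciding what a valid order line looks like is a data-modeling decision, made at the keyboard.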
And if you don’t have any of that, you do the best you can and try to figure it out. Some programmers use pseudocode and activity models called flowcharts for their technical design. They were trained to take exercises in programming class and wrangle them into code without any supporting analysis or design. Management got trained into thinking the real world was the same, and that programmers could pseudocode an entire business project given enough bright students.
So, as an analyst, I was interested to figure out what programmers were being told to do when all they had was a listing of needs from the stakeholders. That list is usually so hard to understand that without proper editing and rewriting, the resulting code is flying without radar. Testing first in that case could be about several things, including:
- testing stuff that you make up on the fly
- doing something else and calling it testing
Make it work first, then make it better.
The mantra of TDD is red/green/refactor. When I use it in practice, it turns out to be guess/make it work/make it better. I was told to do that by senior programmers. I've been doing it as a programmer for too many years as a way to cope when the analysis wasn't done. Guess what the stakeholders want and try to get it into words you can manage. Then turn that best guess into workable code that won't break. And if you have any time left after that, give some thought to reducing the code or making it more reusable.
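That guess/make-it-work/make-it-better cycle can be shown as three stages of one tiny function. The business rule here (10% off for 10 or more items, prices in cents) is entirely my invented example, not anything from a real requirement:

```python
# Red/green/refactor as guess / make it work / make it better.
# The discount rule and all names are hypothetical illustrations.

# Red -- the guess, written as assertions before the code exists:
def test_discount():
    assert discounted_total(cents=1000, qty=12) == 10800  # bulk: 10% off
    assert discounted_total(cents=1000, qty=2) == 2000    # small: full price

# Green -- the simplest code that makes the assertions pass:
def discounted_total(cents, qty):
    total = cents * qty
    if qty >= 10:
        total -= total * 10 // 100
    return total

test_discount()  # green

# Refactor -- make it better: name the magic numbers so the business
# rule reads at a glance. Behavior (and the tests) stay the same.
BULK_THRESHOLD = 10
BULK_DISCOUNT_PERCENT = 10

def discounted_total(cents, qty):
    total = cents * qty
    if qty >= BULK_THRESHOLD:
        total -= total * BULK_DISCOUNT_PERCENT // 100
    return total

test_discount()  # still green
```

The "guess" is the red step: the assertions encode what you believe the stakeholders want, and everything after is just making that guess survive.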
I know that Beck is more interested in controlling the gap between decision (design) and feedback (test), so any thinking that shortcuts the path to better code is good. But analysis isn't good at giving feedback, so to a programmer it's better to have a concrete metric of success than a bunch of papers. Write another automated test. One more green light.
Maybe that's partially the fault of revamping our programming curriculum in school and leaving out structured programming. The dearth of analysis after object-oriented programming took over had programmers clamoring for some sort of control that worked.
What's the most important quality of a requirement? Testability. So if you gather some related needs and can conceive of a test for them, they usually become a good requirement. The remaining tests are easy to extrude from the requirement.
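Extruding tests from a testable requirement can look like this. The requirement itself is my invented example (a password rule), chosen only because each clause of the sentence maps to one test:

```python
# Hypothetical requirement: "A password is accepted only if it has at
# least 8 characters and contains at least one digit." The function and
# the requirement are my illustrations; the tests fall out of the clauses.

def password_is_valid(password):
    return len(password) >= 8 and any(ch.isdigit() for ch in password)

# One test extruded per clause of the requirement:
assert password_is_valid("secret42pw")        # meets both clauses
assert not password_is_valid("no1")           # violates the length clause
assert not password_is_valid("nodigitshere")  # violates the digit clause
```

If you can't write tests like these directly from the sentence, the sentence isn't yet a requirement.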
I think coders flipped this on its head. They say that they should be able to conceive of a test and see if it fits the requirement. From their perspective, it makes sense because they have to see the linkage back into the stakeholders' requirements from where they sit in their 8x8' cube. Without guidance they have to draw the map.
But coders aren't looking at the big picture, so I'm against TDD being a stand-in for analysis. It works for low-level design, though, and that makes sense. As long as the analysis is sufficient, TDD works well as a design-phase validation, connecting the requirements in analysis to working code put to the fire by automated tests. If the activity is anything before the unit tests, the lowest-level tests possible, then it's not about the test, it's about the requirements.