In this post, I will expand on my previous two posts and will extract a working named pipe server program from my feature tests. I will then update the feature tests to perform acceptance testing against the named pipe server.
In my last two posts, I used SpecFlow to build an interprocess communication solution using Windows named pipes. The solution manages work queues that can be used to ship units of work to background processes. In those posts, I built out the SpecFlow feature, the IPC protocol, the named pipe server, and the named pipe client. However, all of this code was embedded inside the feature tests, and the named pipe service did not exist as a separate executable component. In this post, I will extract the named pipe server into an executable program and revise my feature tests to use the new server program.
In my opinion, refactoring works best when there is a working suite of unit or feature tests available. Working tests add pressure to return the suite to green as quickly as possible, which in turn validates that the refactoring was performed correctly.
To summarize where I left off in my last post and to give us a starting point for this refactoring, here’s the feature test and step implementations that I ended up with at the end of the last post:
Here is the source code for the step definitions:
Extracting the Named Pipe Server
The first thing that I am going to do is extract the named pipe server code from the StepDefinitions class. This will break my tests and step definitions, so I will have to go back afterwards and fix that code. I also need to change the named pipe server to allow a single connection to execute multiple requests from a client. This is necessary because the work queue state is moving into the server process, and the step definitions will no longer have access to it. Instead, the step definitions will send commands to the named pipe server to query or configure the state of the work queues.
Because the revised named pipe server will be capable of processing multiple commands from clients, I added a new message called GOODBYE. The client will send the GOODBYE message to terminate the session, and the named pipe server will disconnect the server side of the named pipe after replying with an OK response.
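The session handshake can be sketched as a small command dispatcher. This is an illustrative Python sketch, not the post's actual C# server; the command verbs (CREATE, LIST, DELETE, GOODBYE) come from the protocol built in the earlier posts, while `handle_command` and the in-memory `state` dict are names I am assuming for illustration.

```python
def handle_command(state, line):
    """Process one protocol command against the work queue state.

    Returns (response, keep_session): keep_session is False only for
    GOODBYE, telling the server to disconnect its end of the pipe
    after sending the OK reply.
    """
    parts = line.strip().split()
    if not parts:
        return "ERROR empty command", True
    verb, args = parts[0].upper(), parts[1:]
    if verb == "GOODBYE":
        # Acknowledge, then the caller tears down this session.
        return "OK", False
    if verb == "CREATE" and args:
        state.setdefault(args[0], [])
        return "OK", True
    if verb == "LIST":
        return "OK " + ",".join(sorted(state)), True
    if verb == "DELETE" and args:
        state.pop(args[0], None)
        return "OK", True
    return "ERROR unknown command", True
```

The point of the boolean flag is that every command except GOODBYE leaves the session open, so one connection can serve many requests.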
Here is the revised code for the named pipe server program:
The main changes to this code from the baseline version are:
I added the GOODBYE command
I moved the named pipe server connection logic into the ListenForConnection method
After a connection is established, I immediately call ListenForConnection again to create another server pipe and wait for the next connection. This is safe because the wait runs asynchronously on a background thread, so the call does not block.
The command processing code is in the ProcessCommand method.
After executing a command, I call ProcessCommand again. Like the call back to ListenForConnection, ProcessCommand waits asynchronously on a background thread for the next command, so the call does not block.
The server program runs until the standard input stream is closed. I will use this in the revised test code to terminate the server program at the end of a test scenario.
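The control flow described above can be sketched with a toy stand-in. This is a hedged Python sketch of the C# server's shape, using queues in place of named pipes: accepting a connection immediately re-arms the listener, processing a command immediately re-arms the command handler, and the main loop blocks on standard input until EOF. The class and method names are my own illustrative choices.

```python
import queue
import threading

class MiniServer:
    """Toy model of the server's re-arming listen/process pattern."""

    def __init__(self):
        # Stand-in for incoming named pipe connections.
        self.connections = queue.Queue()

    def listen_for_connection(self):
        # Wait for a connection on a background thread; the caller returns
        # immediately, just like the async wait in the real server.
        def wait():
            conn = self.connections.get()
            self.listen_for_connection()   # re-arm before serving this client
            self.process_command(conn)
        threading.Thread(target=wait, daemon=True).start()

    def process_command(self, conn):
        # Wait for one command, answer it, then re-arm for the next one.
        # (The real server parses CREATE/LIST/DELETE here; this toy just
        # acknowledges everything.)
        def wait():
            cmd = conn["requests"].get()
            conn["responses"].put("OK")
            if cmd == "GOODBYE":
                return                     # disconnect: stop re-arming
            self.process_command(conn)
        threading.Thread(target=wait, daemon=True).start()

def main_loop(server, stdin):
    """The program runs until standard input is exhausted (EOF)."""
    server.listen_for_connection()
    for _ in stdin:    # returns when the parent process closes our stdin
        pass
```

Because every wait happens on a background thread, neither `listen_for_connection` nor `process_command` ever blocks its caller, which is exactly why the server can re-invoke them inline.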
Revising the Step Definitions
With the named pipe server extracted into a separate program, the step definitions no longer work. They must be revised to account for the fact that they no longer have access to the server state, and the named pipe server must now run as a child process. To start and stop the server, I am going to add hooks to the test code that run before each scenario executes and after each scenario finishes:
The StartWorkQueueService method runs before each scenario and launches the work queue service program as a child process, redirecting its standard input, output, and error streams. Anything the service writes to standard output or standard error is forwarded to the test runner's standard error stream. This makes the service easier to debug: I can write trace statements to STDERR in the server and see what happened during each scenario.
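As a language-neutral illustration of that hook, here is a Python sketch of the same idea (the real hook uses .NET's Process/ProcessStartInfo): launch the child with all three standard streams redirected and forward its output to the test runner's stderr. The function name and `args` parameter are assumptions for the sketch.

```python
import subprocess
import sys
import threading

def start_work_queue_service(args):
    """Sketch of a before-scenario hook: launch the service as a child
    process and forward its output to the test runner's stderr."""
    proc = subprocess.Popen(
        args,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge the child's stderr into its stdout
        text=True,
    )

    def forward_output():
        # Anything the service prints becomes visible in the test log.
        for line in proc.stdout:
            print(line, end="", file=sys.stderr)

    threading.Thread(target=forward_output, daemon=True).start()
    return proc
```

Keeping the child's stdin open is deliberate: closing it is how the test will later tell the server to shut down.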
After each scenario completes, the StopWorkQueueService method is called. This method sends the new GOODBYE message to the server and then closes the server's standard input stream. Closing the input stream sends an EOF to the server's main program, which causes it to terminate.
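The teardown sequence can be sketched as follows. Again this is a hedged Python sketch of the C# hook; `send_command` stands in for whatever helper the step definitions use to talk over the client pipe, and the function name is my own.

```python
import subprocess
import sys

def stop_work_queue_service(proc, send_command):
    """Sketch of an after-scenario hook: end the pipe session, then
    close the child's stdin so its main loop sees EOF and exits."""
    reply = send_command("GOODBYE")   # server answers OK, then disconnects
    assert reply == "OK", f"unexpected reply to GOODBYE: {reply!r}"
    proc.stdin.close()                # EOF terminates the server program
    return proc.wait(timeout=15)
```

The GOODBYE round trip cleanly ends the pipe session; the stdin close ends the process itself.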
The other change that I made is that the client end of the named pipe is now an instance method of the StepDefinitions class. The connection is established when the server process is launched and is terminated after the scenario completes. All of the test code needs to be changed to use the new client pipe.
Let us revisit the first scenario for creating a work queue:
The first big change is that since we no longer share state with the named pipe server, we need to query the server to see if a work queue exists or not. Fortunately, we have already defined the LIST command and we can just send that and process the results:
As you can see, instead of querying shared state, we now send commands to the server to retrieve state and to set up preconditions for the scenario under test. This is fairly easy thanks to the work done earlier in this post and in the previous posts: the LIST command already returns the list of defined work queues, and the DELETE command removes a work queue if it exists. We also know exactly how to send commands to our server, so getting this step definition passing is straightforward. Here are the other step definitions in this scenario:
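The shape of those steps can be sketched in Python (the actual step definitions are C# SpecFlow bindings). The LIST and DELETE verbs come from the protocol in the earlier posts; `send_command`, the helper names, and the `"OK name1,name2"` reply format are assumptions made for this sketch.

```python
def work_queue_exists(send_command, name):
    """Query the server with LIST and look for `name` in the reply."""
    reply = send_command("LIST")           # e.g. "OK jobs,reports"
    status, _, names = reply.partition(" ")
    assert status == "OK", reply
    return name in [n for n in names.split(",") if n]

def given_no_work_queue_named(send_command, name):
    """Precondition step: delete the queue if the server already has it."""
    if work_queue_exists(send_command, name):
        assert send_command(f"DELETE {name}") == "OK"
```

The precondition step is idempotent: it leaves the server in the required state whether or not the queue existed beforehand, which is exactly what a Given step needs.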
If we run the create scenario, the scenario should pass. The remaining step definitions are listed below:
Where Are We At?
In the first and second posts, I showed how to use SpecFlow to define a minimum viable product. I created a specification for a protocol between a client and server program and used named pipes to send commands from the client to the server to perform actions and return responses. In this third post, I took my prototype and started to build out the production code. I extracted the working server prototype and turned it into an actual executable service. Along with that, I revised my feature tests to use and test the executable named pipe server for managing work queues.
All of this work has not been a random journey; there is an actual destination that I am trying to reach. In the next post, I will begin to look at the new CIM provider model supported by PowerShell 3.0 and Microsoft's Windows Management Framework, introduced with Windows 8 and Windows Server 2012. Now that we have a working named pipe server, I will look at creating a CIM provider that uses the named pipe protocol to provide a management interface for creating, deleting, starting, and stopping work queues.
Getting the Source Code
I realize that having the code samples in the blog is good, but having actual code to run is better. For the next post in this series, I will move the code into a GitHub repository and provide a link so that you can get the code and follow along. I will tag each revision in the Git repository to match the end product of each blog post, so that you can run the code at each stage.