Architecture & Development Process Audit for Randstad and i-Bridge
What was the issue?
Planning and scheduling software used by the European branches of this global recruitment company had been developed as an in-house solution. It quickly achieved success because it satisfied a genuine need, but as it had to scale out to support more users and a larger pool of employees, it soon became a victim of its own success.
What benefits were delivered?
The development team had worked closely with the customer (we like this, it’s an Agile practice) and together they had developed and added features into the product. Storm was called in at around Version 3 because the performance of the product had degraded to the point of concern. A technical audit had been carried out by a local consultancy, but the client wasn’t convinced that it had got to the real root cause of the problem, and was at a loss to know how the product could be scaled up further.
The product served a single industry locally and needed to scale up to national coverage; the issue was that the more names were added into it, the slower it became. Potentially the product could span vertical industries and national borders, but its performance was struggling to cope with the employee pools of even some of the medium-sized companies for whom it was being used.
How were they delivered?
On investigation, working with the project team, we noticed that nobody on the technical side understood architecture, and the application had therefore simply “emerged” to satisfy the features it needed to run. It’s quite common for software to grow organically; features get bolted on, but there comes a point where the core structure shows its weaknesses because it is supporting more than it was originally intended to support.
Our investigation revealed that there was insufficient experience within the development team to plan and design an architecture. The application architecture was unable to scale out beyond a pool of 100 or so employees, and the redesigns addressed only the symptoms of the problem: mere sticking plasters over the core issue.
In this case study the client and the third-party software provider were working together very effectively, but neither was aware of, or sufficiently experienced to identify, the lack of technical architectural vision.
What were the results?
Our suggestion was to throw everything away and start again. (Shock, horror!) Of course, in the Agile software development world, throwing everything away and starting again really means building on the huge investment that has already been made: the prototypes and early versions, the maturity of the products already released, the processes and business workflows, our understanding of the business problem, the domain/logical/software model, and our knowledge of the limitations and failures of the versions already created. We then use that knowledge, and perhaps some recently acquired wisdom, to develop novel and innovative solutions. In this way software evolves from the knowledge and wisdom gained, and is able not only to deliver but to greatly exceed the customer’s expectations. Throwing everything away and starting again is how we make great leaps forward.
Interestingly, in this case the business was reluctant to take these steps, being determined to keep extracting more return on its investment, on the assumption that the development team would always find a way to squeeze another few users out of the existing system. After all, they had always managed to do so before. This is a tough decision that all technical and business teams must face: when do you stop releasing updates and start writing the next version?