“Testing” – the word that strikes fear in the heart of every deployment team. While building new products and models is exciting, checking whether the system outcomes are correct can be a bit of a drag. It is not unusual to hear from our clients that it may take weeks or even months to successfully complete their full testing and deployment process. For many companies, the platforms that they use to define and arrive at their logic and calculations are not the same as the target environment. There are also multiple levels of separation between the business teams who want the logic implemented and the business analysts and developers responsible for the implementation. Naturally, these elements contribute to a lengthier testing process.
When it comes time for testing, the process usually looks like this:
First, the business teams put together sample test datasets and calculate the expected results. Then, the development testing team is responsible for building a test harness that can run the same dataset to arrive at the calculated results. The expected results from the business teams are then compared against the calculated results from the development testing team.
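In code, that comparison step usually amounts to checking each calculated output against the business team's expected value, case by case, with a small tolerance for floating-point rounding. A minimal sketch of the idea (the field names, case IDs, and tolerance here are illustrative assumptions, not taken from any particular platform):

```python
# Compare expected results (from the business team) against calculated
# results (from the development test harness), case by case.
# All names and the tolerance are illustrative assumptions.

TOLERANCE = 1e-6  # allow for floating-point rounding differences

def compare_results(expected, calculated, tolerance=TOLERANCE):
    """Return a list of (case_id, field, expected, calculated) mismatches."""
    mismatches = []
    for case_id, exp_row in expected.items():
        calc_row = calculated.get(case_id)
        if calc_row is None:
            mismatches.append((case_id, "<missing case>", exp_row, None))
            continue
        for field, exp_val in exp_row.items():
            calc_val = calc_row.get(field)
            if calc_val is None or abs(exp_val - calc_val) > tolerance:
                mismatches.append((case_id, field, exp_val, calc_val))
    return mismatches

expected = {
    "case-001": {"premium": 1250.00, "tax": 62.50},
    "case-002": {"premium": 980.40, "tax": 49.02},
}
calculated = {
    "case-001": {"premium": 1250.00, "tax": 62.50},
    "case-002": {"premium": 975.00, "tax": 49.02},  # discrepancy to investigate
}

for case_id, field, exp, calc in compare_results(expected, calculated):
    print(f"{case_id}: {field} expected {exp}, got {calc}")
```

Simple as this looks, each mismatch it surfaces typically triggers one of the back-and-forth iterations described below, because either the expected value or the harness may be at fault.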
This process usually takes multiple iterations, with plenty of back and forth between teams as issues surface on either the business or development side. It also involves genuinely difficult work: the business team must determine the expected results, and the development team must build a testing harness on top of their implementation to produce the calculated results. Large datasets must be exchanged between teams, and data transformation may be needed if the teams have not agreed on consistent formats. Changes in the logic or calculations that require new inputs and outputs can break previously defined processes. Clearly, there are many potential pitfalls, and the fear of a painful testing process is understandable!
However, we know there’s a better way. At Coherent we live and breathe testing. Our team has decades of experience with this process and its pain points. We are constantly looking to improve the testing process for our customers, making it easier for them to deploy their logic in a confident and auditable manner.
But how? With our core technology, Coherent Spark, which converts logic from Excel files into APIs that can then be integrated as the calculation engine for front ends, back ends, and downstream systems. This makes the process of arriving at calculations, working through requirements, and development more consistent, because business users, business analysts, and developers all work on the same platform. Additionally, every action and every version of a model is automatically logged and hashed on Spark.
When it comes time to test the results, instead of the business teams manually creating them, our Excel add-in, Spark Assistant, empowers business teams to generate expected results with minimal effort. These can be compared against test runs executed in Spark’s Testing Center. Those runs reflect exactly the same calculations that Spark serves from its calculation API endpoints, automating the comparison and saving time and effort.
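Once the logic is exposed as an API, the testing harness essentially collapses into a loop that submits each test case and diffs the response against the expected output. The sketch below stubs out the API call rather than making a real HTTP request; the function name, payload shape, response shape, and the 5% tax rate are all hypothetical illustrations, not Spark's actual contract:

```python
# Driving an API-based calculation engine through a test dataset.
# `call_calculation_api` is a stand-in for an HTTP POST to the engine;
# the payload/response shapes here are hypothetical, not a real API contract.

def call_calculation_api(inputs):
    """Stub for the calculation endpoint: a real harness would POST `inputs`
    to the engine and return the parsed JSON response body."""
    rate = 0.05  # pretend the spreadsheet logic applies a flat 5% tax rate
    premium = inputs["base_premium"] * (1 + inputs["loading"])
    return {"premium": round(premium, 2), "tax": round(premium * rate, 2)}

test_cases = [
    {"inputs": {"base_premium": 1000.0, "loading": 0.25},
     "expected": {"premium": 1250.00, "tax": 62.50}},
    {"inputs": {"base_premium": 900.0, "loading": 0.10},
     "expected": {"premium": 990.00, "tax": 49.50}},
]

failures = []
for i, case in enumerate(test_cases):
    actual = call_calculation_api(case["inputs"])
    if actual != case["expected"]:
        failures.append((i, case["expected"], actual))

print(f"{len(test_cases)} cases run, {len(failures)} failures")
```

Because the same API serves both the test runs and production traffic, a green result from a loop like this carries over directly to the deployed system, rather than validating a separately built harness.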
From models to testing and deployment, everything on Spark is logged, versioned, and hashed, making it possible to execute a fully auditable deployment process. With these controls in place, testing is no longer a painful process, but just another step for deployment teams to check off their project plan.
Coherent Spark Product Director
Simon is the Product Manager for Spark at Coherent, leading a team that develops the platform’s features and capabilities. He is a qualified actuary with 15+ years’ experience in the Property & Casualty / General Insurance space, with an exclusive focus on pricing, data, and analytics. Having held roles in Canada and the UK for a Big 4 consultancy across banking and insurance, and at a multinational insurer based in Hong Kong, Simon has significant experience with the challenges that analytical and business users face in executing and deploying calculations and logic.