Without top-grade data, even the most innovative life sciences company in the world will struggle to realize its potential. That’s because, without robust, reliable and well-maintained information, formatted in a consistent and agreed way that’s acceptable to regulators, it will fail to complete clinical trials or gain market authorization.
If you were to ask me for one simple tip to save three months of time, resources and cost in your new technology rollout plan, I would suggest examining the data quality before you start the implementation. Simple yet effective.
Given the critical role of data as part of high-priority digital transformation programs (something Ian Crone blogged about recently), it’s surprising – alarming, even – how much is assumed about it as companies go into new technology projects. Until they know differently, IT project teams and their business leads tend to operate on the premise that the chosen new Regulatory Information Management (RIM) system will transform the way they work and deliver the desired outcomes. The corporate data it draws on is more an afterthought. No one doubts its quality and complexity – until it’s too late.
It’s this risky assumption that must be challenged, and it should be challenged much earlier in the project timeline – long before any system or platform implementation has been set in motion, and before any deadlines have been agreed. The inherent risk, otherwise, is that the implementation project will have to stop partway through, when it becomes obvious that the data sets involved are not all they could or should be.
Often, it’s the smallest discrepancies that cause the biggest hiccups in data consistency. Simple inconsistencies, such as using different abbreviations for the same compound or misspelling a product name, can result in duplicates appearing in a system. More complex issues occur when the project is linked to IDMP preparations, whether as a primary focus or a secondary benefit: there may be fields yet to be completed, or fields which require content from other sources. Multiply this up by potentially millions of data points and you see the risk.
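To make this concrete, here is a minimal, hypothetical sketch in Python of how trivial naming inconsistencies create duplicate records. All compound names, synonym mappings and record fields here are invented for illustration – a real migration would use a curated dictionary of controlled terms.

```python
# Hypothetical synonym map: different spellings/abbreviations of one compound.
SYNONYMS = {
    "acetylsalicylic acid": "asa",
    "a.s.a.": "asa",
}

def normalize_compound(name: str) -> str:
    """Lower-case, trim, collapse whitespace, then map known synonyms."""
    key = " ".join(name.lower().strip().split())
    return SYNONYMS.get(key, key)

# Three source records; the first two refer to the same compound and market.
records = [
    {"compound": "ASA", "market": "DE"},
    {"compound": "acetylsalicylic acid", "market": "DE"},  # hidden duplicate
    {"compound": "A.S.A.", "market": "FR"},
]

seen, duplicates = set(), []
for rec in records:
    fingerprint = (normalize_compound(rec["compound"]), rec["market"])
    if fingerprint in seen:
        duplicates.append(rec)
    else:
        seen.add(fingerprint)

print(len(duplicates))  # → 1
```

Without the normalization step, all three records would look distinct and the duplicate would slip into the new system unnoticed.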
It could be that multiple RIM systems are being consolidated into one new one. As each system is trawled for its constituent information, consideration needs to be given to differing formatting, data duplication and variability in data quality. A myriad of dependencies and checkpoints between systems must be managed to ensure the success of the content migration project.
Inevitably there will be interdependencies between the data in different systems too, and links between content (between source RIM data and stored market authorization documents, for instance). All of this needs to be assessed, so that project teams understand the work that will be involved in consolidating, cleaning and enriching all of the data before it can be transferred into any new system.
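This kind of assessment can start with simple data profiling. The sketch below, with entirely invented field names and records, shows the sort of pre-migration check that compares two system extracts: which required fields are empty, and which products exist in one source but not the other.

```python
# Hypothetical required fields for a consolidated RIM record.
REQUIRED_FIELDS = ["product_id", "product_name", "authorization_number"]

# Invented extracts from two legacy systems being consolidated.
system_a = [
    {"product_id": "P001", "product_name": "Examplumab",
     "authorization_number": "EU/1/23/001"},
    {"product_id": "P002", "product_name": "Examplumab Forte",
     "authorization_number": None},  # gap to be filled before migration
]
system_b = [
    {"product_id": "P001", "product_name": "Examplumab",
     "authorization_number": "EU/1/23/001"},
    {"product_id": "P003", "product_name": "Samplecillin",
     "authorization_number": "EU/1/23/003"},
]

def missing_fields(records):
    """Count records missing each required field."""
    return {f: sum(1 for r in records if not r.get(f)) for f in REQUIRED_FIELDS}

ids_a = {r["product_id"] for r in system_a}
ids_b = {r["product_id"] for r in system_b}

report = {
    "gaps_in_a": missing_fields(system_a),
    "only_in_a": sorted(ids_a - ids_b),
    "only_in_b": sorted(ids_b - ids_a),
}
print(report)
```

Run across real extracts with millions of records, a report like this quantifies the cleaning and enrichment effort before the implementation schedule is committed, rather than after.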
The sobering costs of project recalibration
If a system implementation is already underway when data issues are identified, project teams must recalculate and recalibrate, which can incur significant cost and effort. Before they know it, a project that was scheduled to take a year needs an additional three months to clean up and enhance the data.
Processing change requests will require key resources that are now committed elsewhere – not to mention additional budget that hasn’t been provided for (a 25%+ hike in costs is not unusual once data quality issues are identified). Meanwhile, there are users waiting for capabilities that now aren’t going to materialize as expected: delays are inevitable. Data is critical to the business, and without the right quality of data, a new system cannot go live.
All of this could be avoided if the data migration implications of a project were analysed, assessed, understood and scoped fully ahead of time. The good news is that this oversight is relatively easy to rectify – for future initiatives at least. It’s just a case of calling in the right experts sufficiently early in the transformation definition and planning process, so they can perform the appropriate analyses.
About the Author
Peter Reynolds brings 20 years of global life sciences and pharmaceutical experience, understanding needs, requirements and processes to deliver business value through mission-critical, enterprise-level software and services projects, both on-premise and SaaS. After an early career in content and document management projects, often using Documentum, within the financial services sector, his focus shifted to life-sciences-specific knowledge across the Regulatory, Clinical, Safety and Medical Information domains. He has worked with everyone from small ‘virtual’ biotechs right up to the Top 10 global pharmaceutical companies, and looks to bring this knowledge to fme’s European clients.
Are you interested in future blog posts from the life sciences area? Please follow us on our social media accounts.