As an information systems consultancy dedicated to successfully delivering lab-based information systems, we help our clients to overcome many different challenges. There are some important questions that we are frequently asked to evaluate.
In part one of this blog series, we’ll summarise the key considerations when answering three common questions about lab informatics systems, all on the theme of ‘is a single system better than multiple similar systems?’
The first of these questions is typically whether an ELN or a LIMS is the better fit, and here the context matters. If one were to generalise, R&D labs tend to be experiment-based, answering questions like ‘What ingredient changes in the product formulation will increase effectiveness and reduce environmental impact?’. QC labs, on the other hand, are more focused on samples taken from production runs, and on questions such as ‘Is the percentage composition of key ingredients in a production batch within specification?’
If we apply lab informatics thinking to this generalisation, then in broad terms ELNs are centred on recording experiments and are therefore better suited to R&D, while LIMS, being sample-, test- and result-oriented, are generally better suited to QC labs.
However, it is not that simple. For example, perhaps one of the R&D labs provides analytical services to various teams executing R&D experiments – this type of ‘service’ lab is often better served by a LIMS than an ELN.
The type of lab involved is not the only factor to consider. CDS systems, for example, are generally applicable to both R&D and QC: the methods and the way the instruments are used may well vary between R&D and QC, but the instrument data systems can be exactly the same.
Finally, regulatory needs, specifically for QC, can also be a driving factor in answering the question. We will consider this further under one of the following questions.
The second question is usually whether a single system should be rolled out across every site, or whether local labs should choose their own. When Scimcon first started nearly three decades ago, the focus within large multi-national companies was on implementing large, monolithic lab systems. This approach still has its place, particularly where the distributed labs are very close in terms of operational and analytical workflows.
Current thinking, however, looks to best support the diversity of lab workflows across global sites. While this should not mean a different system in every single lab, it should ensure some flexibility in selecting systems locally. This has several benefits, including a better informatics fit for each lab, and the increased local user buy-in gained by allowing flexibility.
However, against the background of the drive to increased data aggregation, data science and analytics, and AI/ML, this local approach can be counterproductive. It is therefore important to set standards and guardrails about how these systems are implemented, and how the data is structured and linked via reference data, so that consolidation into centralised reporting tools and data lakes is facilitated.
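To make the idea of standards and guardrails a little more concrete, below is a minimal sketch, in Python, of how result records exported from two locally chosen LIMS might be mapped onto shared reference data before being loaded into a central reporting store or data lake. The field names, site codes and test codes are entirely hypothetical and are not tied to any particular vendor’s export format.

```python
# Minimal sketch: harmonising result records from two locally chosen LIMS
# into a single shared shape, keyed by agreed reference data, ready for a
# central data lake or reporting tool. All names here are illustrative.
from dataclasses import dataclass

# Shared reference data agreed across sites (the "guardrails").
SITE_CODES = {"Boston": "US-BOS-01", "Basel": "CH-BSL-01"}
TEST_CODES = {"Assay %": "ASSAY_PCT", "pH": "PH"}

@dataclass
class HarmonisedResult:
    site_code: str
    sample_id: str
    test_code: str
    value: float
    unit: str

def from_lims_a(rec: dict) -> HarmonisedResult:
    """Hypothetical export format used by the LIMS at site A."""
    return HarmonisedResult(
        site_code=SITE_CODES[rec["lab"]],
        sample_id=rec["sample_no"],
        test_code=TEST_CODES[rec["test_name"]],
        value=float(rec["result"]),
        unit=rec["units"],
    )

def from_lims_b(rec: dict) -> HarmonisedResult:
    """A different system with different field names, same reference data."""
    return HarmonisedResult(
        site_code=SITE_CODES[rec["site"]],
        sample_id=rec["id"],
        test_code=TEST_CODES[rec["analysis"]],
        value=float(rec["numeric_value"]),
        unit=rec["uom"],
    )

# Both feeds end up in one consistent shape, so each lab keeps the system
# that fits it best while central analytics still sees comparable data.
records = [
    from_lims_a({"lab": "Boston", "sample_no": "S-1001",
                 "test_name": "Assay %", "result": "99.2", "units": "%"}),
    from_lims_b({"site": "Basel", "id": "B-2042",
                 "analysis": "pH", "numeric_value": "6.8", "uom": "pH"}),
]
for r in records:
    print(r)
```

The value here is not the code itself but the agreement it represents: the reference data and target schema are fixed centrally, while each site remains free to choose the system that best fits its workflows.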
There is a well-used saying within regulatory-compliant organisations: ‘If a system contains just 1% of GxP data, then the whole system is required to be implemented, managed and maintained in a regulatory compliant manner.’
This statement leaves compliant organisations questioning whether GxP and non-GxP data can safely coexist in a single system, or whether they should be kept apart.
The first step in answering the question is to determine the delta between administering a GxP system and administering a non-GxP system. LIMS, ELN, SDMS, CDS and other lab informatics systems are often classified by labs as mission-critical. Most organisations wouldn’t countenance a lack of system administration rigour or releasing untested changes to mission-critical systems, so this delta may be smaller than it first seems.
The next step is an open conversation with QA teams about the types of data being held, and the control systems that will be put in place. In the past, we have successfully taken a two-tier approach, where the administration procedures for non-GxP are simpler than those for GxP data in the same system. However, for this type of arrangement to be viable, a detailed risk assessment is required, and the ongoing management and control of the administration has to be very well executed.
Finally, before making the decision, it’s worth considering whether there are shared services or functions involved. For example, if the GxP and non-GxP work uses the same inventory management, it can be complex to get the inventory system interfacing with, and updating, two systems simultaneously.
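To illustrate why that interfacing can become complex, here is a simplified, hypothetical sketch of a shared inventory service fanning a single stock change out to a GxP and a non-GxP system. The function names and item identifiers are ours for illustration only; the awkward part in practice is not the happy path but keeping the two targets in step when one of the updates fails.

```python
# Hypothetical sketch: one shared inventory change propagated to a GxP and
# a non-GxP system. Names and payloads are illustrative only.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("inventory-sync")

def update_gxp_system(item: dict) -> None:
    # A GxP target typically also needs an audit trail entry and a
    # controlled, documented failure path.
    log.info("GxP system updated: %s (audit record written)", item["item_id"])

def update_non_gxp_system(item: dict) -> None:
    log.info("Non-GxP system updated: %s", item["item_id"])

def propagate_stock_change(item: dict) -> None:
    """Fan a single stock change out to both systems."""
    try:
        update_gxp_system(item)
    except Exception:
        # Partial failure on the regulated side: hold the change and escalate.
        log.exception("GxP update failed; change held for QA review")
        return
    try:
        update_non_gxp_system(item)
    except Exception:
        # The systems are now out of step; a reconciliation job is needed.
        log.exception("Non-GxP update failed; scheduling reconciliation")

propagate_stock_change({"item_id": "REAGENT-0042", "delta": -2, "unit": "bottle"})
```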
Hopefully, we have illustrated the importance of being clear about what your requirements are before answering these key questions about lab informatics systems. Each case is unique, and your decision will usually be based on a wide range of influencing factors. We help organisations to consider all of the options and roll out their chosen model.
Stay tuned for part 2 of this blog series, where we will look at the key question of how you can prepare your data for AI and machine learning.
My name is Jamie Portnoff, and I am the founder and principal consultant at JMP Consulting. JMP Consulting assists clients in the pharmaceutical industry to achieve and sustain compliance and to improve overall performance in pharmacovigilance (PV) and related functions such as quality, medical information and regulatory affairs. Before founding JMP Consulting, I worked in the pharmaceutical industry. Not many management consultants working in PV have hands-on, real-world PV experience; this experience means I understand the realities of day-to-day work in and around PV, and how challenging it can be to deliver against requirements and expectations. In my earliest days in industry, I especially enjoyed working with people and on projects, and I soon realised that I wanted to marry my problem-solving and analytical skills with my practical industry knowledge. After a few years of working with big consultancy companies, I decided to start JMP Consulting.
Let us look at the last three decades.
In the 1990s, there were basic PV safety database systems, such as ArisG, ArisLite and ClinTrace. Fax machines were a huge part of the technology that enabled PV processes, with a high volume of data coming in and going out by fax. Processes were extremely paper-intensive and were designed to accommodate transactional work, such as processing cases and putting aggregate reports together; everything was very compliance-focused. Consequently, there was demand for full-time roles dedicated to paper management, typing up documents and data entry. Teams were typically regionalized, and everything was done “onshore”.
In the 2000s, PV technology became more sophisticated and more globally oriented. There were advances in what the technology could do, and consolidation of the major tech players through M&A activity. Paper-based processes began to give way to more digitization and electronic workflow management. Analytics tools became more prevalent and more user-friendly. However, a typical PV department was still very paper-intensive. Some of the regionalized models began to consolidate towards one system, one process and one organization, particularly between the US and Europe.
Throughout this decade, more stringent regulatory requirements were continually being introduced, such as the Risk Evaluation and Mitigation Strategy (REMS) and Volume 9A. Consequently, the bar was being raised for the calibre of work, and quality management expectations were increasing. We saw more focused teams dedicated to signal detection and risk management, and specialized teams emerged to manage growing business system needs as the regulatory requirements led to increasingly complex systems. Dedicated vendor oversight teams were also required as companies began to work offshore with vendors.
Over the last decade, good pharmacovigilance practices (GVP) were introduced in the European Union (EU). The Qualified Person for Pharmacovigilance (QPPV) is not a new requirement, but it became clear that this person needs a whole team around them to support them and help shoulder the workload.
Offshore work has grown in magnitude, partnerships between companies have become an integral part of how business is done, and next generation technology is rolling out to improve efficiency and consistency. Safety systems have become truly global, enabling a scalable end-to-end safety process within a single system.
Big changes are coming with PV technology, and they will drive major shifts in the way we think about how PV work gets done. We have seen PV technology evolve before, but this time around it looks set to be more impactful than anything from the past 20 years.
With the advent of next-generation technology, new hard skills will be required, such as an understanding of machine learning, natural language processing and artificial intelligence. Organizations need to be able to manage transformation of the PV business effectively and regularly, and to leverage advanced analytical tools to derive meaningful insights from varied data sets. Additional ‘soft’ skills will also be needed, such as adaptability, flexibility and open-mindedness, as well as the ability to ‘think outside the box’ to drive improvements through innovative thinking.
New roles will emerge within the organisation, dedicated to these new technologies and ways of working. Meanwhile, other roles will fade out, and teams of people (in-house or outsourced) performing transactional activities will become a thing of the past.
From a process perspective – Processes must be highly scalable to accommodate growth in volume and complexity, and a blend of proven and cutting-edge technology is needed to support and enable this. A future-ready process has metrics to enable continuous improvement; it can efficiently evolve and adapt to simultaneously accommodate new regulations, innovative products and evolving stakeholder expectations.
From a technology perspective – Highly agile, flexible and robust, technology needs to be business-led with strong IS support and should be woven into an organisation’s processes, not vice versa.
From a people perspective – People in the organisation must accept increasing automation of processes – you can have the best technology in the world, but if the people in the team reject it, it will not succeed. Well-managed resource models are also hugely important. The organisational structure must be designed around the business’ needs, not vice versa. Employees should offer more than one skillset and, in return, they must have a pathway to develop professionally. It is critical that a team can approach things from different angles and can adapt to change – these days, excelling in just one area is often not enough.