By Geoff Parker and Paul McTurk
Having worked on more than one hundred information system projects and programs over the last 20+ years, for lab-based organisations of all shapes and sizes, we know that people can sometimes confuse the two. It’s an easy mistake to make! However, there are very clear differences between a project and a program and, as we have demonstrated to our clients many times, handling each in the correct way can have a big impact on overall success.
Projects are typically well-defined, as they deliver a well-understood, specific goal or outcome, within a specified timeline: e.g. implementing a new information system or service within a laboratory. There is usually a distinct team and a clear route from start to completion.
A program involves doing things that deliver a strategy or initiative – or a number of strategy points or initiatives – and is less easy to define, compared to a project. For example, a program might be put in place to respond to a challenge such as: ‘We want to make the lab 30% more efficient.’ There might be (and usually are) projects underneath this, which could include ‘Specific enhancements to current information systems’, ‘Good lab practice training’, ‘Lab supply chain improvement’, etc. Programs can span several months, or even years, and therefore require strategic oversight, a lot of iteration and the involvement of many stakeholders.
Projects are managed through project management methodologies such as PRojects IN Controlled Environments (PRINCE2), and Gantt charts are often employed to map out how you will get from A to B and in what timeframe. At a program level, Gantt charts rapidly become overly complicated and you’re more likely to see a roadmap with aims and targets, but without the detail and structure of a project plan.
So why does this matter? It might be tempting to replicate how you plan and lead a project when thinking about a program. But it’s going to be impossible to scale and communicate effectively using the same approaches.
Having helped many lab-based organisations to run informatics projects and programs, we share some of our insights on how to lead, communicate, manage risk and account for human factors, when planning and rolling out both projects and programs.
Program leaders require strategic thinking, flexibility, excellent communication and stakeholder management, strong delegation, and empowerment skills, as well as effective team and resource management, among many other attributes.
While project managers also need many of these skills, they are much more task- and delivery-focused. In short, they prioritise everything related to ‘getting the job done’, on time and within budget.
Program leaders have a much wider remit, from defining the strategic direction and focus, to creating a structure under which the ‘child’ projects will operate, to managing ‘child’ project risks that could impact other ‘child’ projects or the program as a whole. Program leaders focus on achieving benefits and strategic objectives that align with the organisation’s vision.
Project communication is usually to a defined group of people on a regular basis, i.e. daily, weekly or monthly. Most people engaged in a project are involved to a similar degree and are very familiar with the details, so the level of information shared will be both quite granular and consistently consumed by all team members. Good communication within a project tends to be direct, detailed, and unfiltered.
For programs, where there may be hundreds of people involved with varying levels of engagement, cutting through the noise and providing updates that are impactful, relevant and easy to digest is key. Whereas ‘one size fits all’ may be suitable for a project, programs need to be communicated in various levels of detail, and, rather than relying solely on scheduled communication, benefit from participants ‘self-serving’ information.
Program leaders need to enable a shared awareness about what’s happening across the whole program, in an easily digestible format. A simple one-page graphic that shows the key milestones and summarises the roadmap can be effective and might be sufficient for some stakeholders. A program newsletter, outlining progress against key milestones and any major challenges or opportunities is another useful communication method. When sharing updates via tools such as Microsoft Teams, tagging stakeholders is a good way of ensuring your update attracts their attention.
Often Scimcon includes expert communications professionals within programs, who help determine the level of information sharing and recommend the best channels to use, as well as providing guidance on how to navigate organisational culture for the most effective program communication.
Risk management is critical for both projects and programs.
Typically, within projects, risks are identified, investigated, and mitigated as the project progresses. The risks are listed and managed within a regularly updated risk log.
Once again, the scale and complexity of programs dictates a different approach. Rather than identifying risks as they become apparent, a proactive and systematic methodology is required.
A technique we have borrowed from product development methodologies, such as the Lean Startup framework is Riskiest Assumption Testing, often referred to as RAT.
RAT is an effective technique that ensures the program’s most critical assumptions are identified and adequately tested, both at the start of the program, and on an ongoing basis. For example, at the start, one of your riskiest assumptions is whether your team can work well together at all. This needs to be tested early. See “Human Factors” below.
Other riskiest assumptions will vary from program to program, but the principle is the same: RAT emphasises rapid experimentation, learning from failures, and adapting mitigation strategies based on evidence.
If a project team works well together, it might be tempting to think that larger teams can do the same. The difference between leading small teams of 10-20 people and teams that are much larger is significant.
Program delivery success is influenced by a variety of human factors that can impact the effectiveness and efficiency of the program and could easily justify a dedicated blog post.
These factors include team dynamics, motivation and morale, decision-making, conflict resolution, issue escalation and knowledge sharing.
Let’s look at one of these – issue escalation – in a little more detail.
Early escalation of issues is a key success factor in the on-time delivery of projects. When confronted with an issue, well-meaning team members can mistakenly believe it is their job to solve the problem quietly and report when the resolution is complete. Often, however, this results in the potential problem only coming to the wider team’s attention days or possibly weeks later.
The escalation process should be multi-tiered (‘heads up’, ‘warning’ and ‘escalation’) and transparent within teams, so that it becomes second nature for individuals to share any concerns with the right people, at the appropriate time. Regular problem-solving sessions or informal team meetings, where the only agenda point is discussing and brainstorming any concerns, no matter how small, are good practice and something we do ourselves and advocate with clients!
The connected nature of the program and the ‘child’ projects within the program means that the likelihood of human factors affecting delivery increases and requires ongoing monitoring and proactive management.
Projects and programs may appear very similar in nature; however, due to programs’ scale and complexity, we strongly recommend that you don’t attempt to lead them in the same manner as projects.
We hope we have provided some tips and insights on how to take the right approach when planning, leading and implementing projects and programs. To ensure successful outcomes, project and program leaders should include the key aspects of leadership, communication, risk management and human factors in their project or program planning.
If you need help with your upcoming projects or programs, contact us.

Industry leader interviews: Jana Fischer?
In this blog, we speak to Jana about Navignostics’ mission, and how the team plans to revolutionise personalised oncology treatments with the help of data and AI.
Navignostics is a start-up personalised cancer diagnostics business based in Zurich, Switzerland. Our goal is simple – we want to revolutionise cancer treatment by identifying a highly personalised and thus optimal treatment for every patient, to ensure that each patient’s specific cancer is targeted and fought as needed. Our capabilities allow us to do this by analysing tumour material, extracting spatial single-cell proteomics information, and using this data to analyse many proteins simultaneously in individual cells within the tissue.
Single-cell proteomics involves measuring and identifying proteins within a single cell, whereas spatial proteomics focuses on the organisation and visualisation of these proteins within and across cells. Combining these two research tools allows the team at Navignostics to characterise tumours on a cellular level, by identifying the proteins present across cells in a tumour, and also how these proteins and cells are organised. This means that the team can provide a more accurate estimate for how certain tumours will respond to different medications and treatments.
Proteins are typically the target of cancer drugs, and measuring them on a cellular level allows us to identify different types of tumour cells, as well as the immune cells that are present and how the two interact. This data is highly relevant to inform clinicians of the best form of (immuno-) oncology and combinatorial treatment for individual patients. This information is also highly relevant to pharma companies looking to accelerate their oncology drug development, as it provides insight into drug mode of action, and signatures to identify responders to novel drugs.
The kind of data that we are able to extract from different types of tumours is monumentally valuable, so the work doesn’t stop there. All of the data we harness from these tumours is stored centrally, and we plan on utilising this data by building it into a system we refer to as the Digital Tumour, which will continuously allow us to improve the recommendations we can make to our clinical and pharma partners. Our journey has been rapid, though it is built on years of research and preparation: we founded the business in 2022, as a spin-off from the Bodenmiller Lab at the University of Zurich.
The dream became a reality for us in November 2022, when we secured a seed investment of 7.5m CHF. This seed funding will allow us to pursue our initial goals of establishing the company, achieving certification for our first diagnostic product, developing our Digital Tumour and, by extension, collaborating with pharma and biotech partners in oncology drug development. It has also given us the resource we need to move to our own premises. We are due to move off university campus in May 2023. This offers us a great opportunity to push forward with the certification processes for our new lab, and it gives us the chance to grow our team and expand our operation. We will be located in a start-up campus for life science organisations in the region of Zurich, so we’ll be surrounded by companies operating in a similar field and at a similar capacity.
The Digital Tumour will be the accumulation of all the molecular data we have extracted from every tumour that we have analysed to date, and it will keep growing as new tumours are analysed. Connected to that, we store information on the clinical parameters and patient response to treatment. Over time, our aim is to utilise this central data repository to identify new tumour signatures, and build a self-learning system that will provide fully automated treatment suggestions for new patients, based on how their molecular properties compare to previously analysed patients that have been successfully treated.
Our data storage is quite advanced, so volume isn’t really a challenge for us. Our main focus is standardising the input of data itself. The technology is based on years of research and the data analysis requires a great deal of experience and in-depth expertise. In order to extract the full value from this data, it must be completely standardised. Data integrity is therefore vital to our work, and allows us to get the maximum value from past analyses. Our past experience in the Bodenmiller Lab allowed us to develop standardised processes to ensure that all of our data is fully comparable, which means that we can learn more and more from our past data, and apply this to new cases that we analyse.
It is also important to report on our complex data in a comprehensive but easily interpretable manner to the clinician/tumour board who needs to organise a treatment plan. We’re currently working with our clinical collaborators to develop readily understandable and concise reporting outputs. Unlike genomics analysis, our reports focus on proteins in tissue, which is the same information that clinicians are used to working with. So, there is a common language there that offers us the unique opportunity to provide clinicians with data they can easily interpret and work with.
It’s important to note that personalised treatment approaches and precision medicine are not new concepts in the diagnostics space. However, our technology and algorithms allow us to extract novel types of biomarkers which were previously inaccessible or unknown, so we’re helping to level the playing field and give clinicians and drug developers comprehensive information to individualise therapies.
Comprehensive tumour data is truly at the heart of what we do, and one key benefit of our technology is that we’re able to analyse very small amounts of sample – such as fine needle biopsies – to provide therapy suggestions. We can also analyse bio banked tumour material, so if there is any old material that has been stored, we have the ability to analyse those samples retrospectively. Not only does this help us to fuel our Digital Tumour with more data, but it also allows us to examine new fields such as long-term survival rates of patients with these tumours. This is of huge value to fuel our product development pipeline because it allows us to identify different molecular properties between individuals that may not have been considered on a clinical level, but may have played a role in patient responses to treatments and survival outcomes in the long-term.
This kind of retrospective data also plays a key role in the evolution of healthcare and drug development, as having the technologies available to acquire this sort of data and mine it to our advantage will provide enormous benefits. These include improving individual treatment courses for patients, as well as expediting the development of novel cancer drugs so pharma companies can get more effective treatments to market sooner.
For example, one commonly cited statistic is that 90% of clinical drug development fails during phase I, II and III trials and drug approval. Often, this may arise from a lack of available information to identify the subset of patients most likely to benefit from a novel drug. Having access to Navignostics’ technology and algorithms, and a database such as the Digital Tumour, will offer the potential to pre-select the right patients to enrol in clinical trials, and more easily identify the patients that do respond to the novel treatment. This could substantially expedite drug development in the trial stage, and help bring more effective drugs to the market.
Even unsuccessful trials offer valuable opportunities: it is possible to repurpose and reanalyse material from previous failed trials. Such high rates of failure in clinical development mean that a large number of companies have invested millions in developing drugs that have not come to fruition. If these companies want to re-mine their data, our team can reinterpret the existing work to identify more successful strategies, giving those drugs another chance and offering a better return on investment.
A failure no longer needs to be a failure. Navignostics and its offerings can bring value to our pharma and biotech partners, and will also bring direct benefit to patients and clinicians once we launch our diagnostics product. So, data from every facet of the oncology industry, from curing a patient to halting the development of a drug, can offer us valuable insight that both we and the Digital Tumour could learn from when developing treatments.
The next three years will be critical for our work, and we have projected timelines and key milestones for our diagnostics developments that we aim to achieve before our next funding round. Along the way, we are actively speaking to biotech and pharmaceutical organisations to identify projects and build the foundation for long-lasting collaborations. We are looking forward to a successful continuation of the Navignostics development in 2023!
Scimcon is proud to showcase start-up companies like Navignostics, and we’re looking forward to seeing how the company will grow over the coming years.
To contribute to our industry leader blog series, or to find out more about how Scimcon supports organisations with lab informatics and data management solutions, contact us today.
Our team at Scimcon is made up of a talented group of interesting individuals – and our newest recruit Ben Poynter certainly does not disappoint!
Ben joined our Scimcon team in July 2022 as an associate consultant, and has been working with the lab informatics specialists to get up to speed on all things Scimcon. We spoke to Ben about his experience so far, his interests, background, and what he hopes to achieve during his career as an informatics consultant.
So, I studied Biomedical Science at Sheffield Hallam University, which was a four-year course and allowed me to specialise in neuroscience. During my time at university, I created abstracts that were presented at neuroscience conferences in America, which was a great opportunity for me to present what I was working on. My final year dissertation was on bioinformatics in neuroscience, as I was always interested in the informatics side of biomedical science as well.
Once COVID hit, I moved into COVID work: I worked in specimen processing, and then as a supervisor for PerkinElmer, who were undertaking some of the virus research. When things started to die down, I began working for a group called Test and Travel (not the infamous Track and Trace initiative, but a similar idea!). I started there as a lab manager, training new staff on lab protocols for COVID-19, and then a month into that I started working more on the LIMS side – which is where I ended up staying. I wrote the UAT scripts for three different companies, performed validation on the systems, and processed change controls. I then moved to Acacium as LIMS lead there, so over the course of my career I’ve worked with a number of LIMS and bioinformatics systems, including LabWare 7, LIMS X, Labcentre, WinPath Enterprise, and Nautilus (ThermoFisher Scientific).
In the early stages, I would have to say it was when Jon and Dave led my first interview, and Jon asked me a question I hadn’t been asked in an interview setting before. He asked me ‘who is Ben Poynter?’. The first time I answered, I discussed my degree, my professional experience with LIMS and other informatics systems, and how that would apply within Scimcon’s specialism in lab informatics consultancy. Then he asked me again and I realised he was really asking what my hobbies were, and how I enjoyed spending my free time. Since starting at Scimcon, I’ve been introduced to the full team and everyone is happy to sit and talk about your life both inside and outside of work, which makes for a really pleasant environment to work in. Also, it seems as though everyone has been here for decades – some of the team have even been here since Scimcon’s inception back in 2000, which shows that people enjoy their time enough to stay here.
I’ve been given a really warm welcome by everyone on the team, and it’s really nice to see that everyone not only enjoys their time here, but actively engages with every project that’s brought in. It’s all hands on deck!
So, my main hobbies and interests outside of work are game design, as well as gaming in general. I run a YouTube account with friends, and we enjoy gaming together after work and then recording the gameplay and uploading to YouTube. We are also working on a tower defence game at the moment, with the aim to move into more open world games using some of the new engines that are available for game development.
In addition to gaming and development, I also enjoy 3D printing. I have a 3D printer which allows me to design my own pieces and print them. It’s a bit noisy, so I can’t always have it running depending on what meetings I have booked in!
Technology is a real interest of mine, and I’m really fortunate to have a role where my personal interests cross-over into my career. The language I use for game design is similar to what I work with at Scimcon, and the language skills I’ve developed give me a fresh perspective on some of the coding we use.
At the moment, I’m working on configuration for some of the LIMS systems I’ll be working with at customer sites, which I really enjoy as it gives me the chance to work with the code and see what I can bring to the table with it. Other projects include building forms for Sample Manager (ThermoFisher Scientific), making it look more interesting, streamlining movement between systems, and improving overall user experience. It’s really interesting being able to get to grips with the systems and make suggestions as to where improvements can be made.
My first week mainly consisted of shadowing other Scimcon lab informatics consultants to get me up to speed on things. I have been working with the team on the UK-EACL project, which has been going really well, and it’s been great to get that 1-2-1 experience with different members of the team, and I feel like we have a real rapport with each other. I’ve been motoring through my training plan quite quickly, so I’m really looking forward to seeing the different roles and projects I’ll be working on.
I’d really like to get to grips with the project management side of things, and also love to get to grips with the configuration side as well. It’s important to me that I can be an all-round consultant, who’s capable at both managing projects and configuration. No two projects are the same at Scimcon, so having the capability to support clients with all their needs, to be placed with a client and save them time and money, is something I’m keen to work towards.
For more information about Scimcon and how our dedicated teams can support on your lab informatics or other IS projects, contact us today.

Outsourcing a remote LIMS validation?
We’ve recently worked with Scott Stanley, Director of the University of Kentucky Equine Analytical Chemistry Lab (UK-EACL), to support him with the launch of his new lab.
Launching a LIMS implementation and validation can be a complex process at the best of times, but when the COVID-19 pandemic meant that the team at UK-EACL had to perform the validation remotely, Scimcon were on hand to guide the process and successfully finalise the implementation.
We recently spoke to Scott about his experience opening the lab and launching his new IS strategy, and you can catch up on some of our main conversations in our video here:
2020 has been a difficult year for most industries, not least for event and tradeshow providers. Luke Gibson, Founding Director of Open Pharma Research and Lab of the Future, shares his experience of running events in the laboratory industry, and what makes Lab of the Future such a unique event.
My name is Luke Gibson, and I am one of the three founding directors of Open Pharma Research. I have 30 plus years of experience in developing and running events, primarily in the financial and trade and commodity sectors. My colleagues Kirianne Marshall and Zahid Tharia bring a similar level of experience to the company.
Kirianne has had many years of experience in managing the commercial side of large congresses, such as Partnering in Clinical Trials, and research and development congresses. Zahid has 30 years of events experience too, particularly in running life science portfolios and launching congresses and events. Our paths have crossed many times throughout our years working in events, and we eventually hit a point where all three of us had the capacity to try something new – something that was worthwhile, fun, and different to the corporate worlds we had become accustomed to. So that was why we created Lab of the Future – with a view to running events in a different way.
I’m not sure if I would describe it as a gap in the market, more an ambition to do things differently. There was a desire from all of us to build an event with a different approach to the one we would take when working for large organisations, because when you’re working on a large portfolio of global events that cover a variety of topics, you and your team are always looking ahead to the next event, and the focus on the longevity of a single event isn’t always there.
We wanted something that we can nurture and grow, something that we can work on year-round without getting distracted by the next thing on our list. It also allows us to stay within this space and build our community, without having to face pressures such as a year-on-year development strategy or diverse P&L. Our desire was to avoid these constraints, and create an event that we can continue to work on for a long time.
We want to be able to live and breathe Lab of the Future, but one of the interesting things about it is that it’s such a broad concept. On the one hand we deal with informatics, but on the other hand, we deal with equipment, technology, and all the connectivity between them – but even that’s just one part of it. We are not an informatics conference; we are not strictly an instrumentation conference; we also look at the innovation side of things.
I think the best way to describe how we see Lab of the Future is as a proxy for how you do science in the future. Everything pertains to more efficient processes; better results; or ways of creating breakthrough innovation, and these are all part of the picture of science in the future. And that is the lab of the future – where the lab is the proxy for the environment where you do the science that matters.
When we started off, we found we received a lot of queries from industry contacts who wanted to get involved, but certain topics they wanted to discuss didn’t necessarily pertain to the physical laboratory itself. But if it was relevant to science, then it was relevant to us. Things like data clouds and outsourced services may not be directly linked to the lab, but they still relate to how you work. So, within that, the scope for the Lab of the Future gets wider still, looking at areas such as how we can create virtual clinical trials, or use real world-data to feed back into R&D.
People are also keen to learn more from their peers and from other areas of the industry. Lab of the Future allows us to host senior speakers and keynotes who can tell us where we’re heading, and show us how the efforts of one area within life science feed into other areas. It presents us with an almost ever-changing jigsaw image, and it’s this strategic element that I think sets us apart from other events.
We attract a real mix of attendees, and that’s what I love about it. You can run a conference for people in a specific job function, such as a data scientist or an R&D manager, but what people really want to know is what the people around them are doing, to almost give them context of the industry as a whole. So, our conference doesn’t just exist to help you do your own job better, but it helps you to develop a concept of where your department is heading in the future, and what you should think about longer term. We aren’t telling scientists how to do their job today; we’re helping them think about their responsibilities for delivery in the future. Lab of the Future is about the delivery of science of the future.
Our sponsors and solution providers that support the conference are also very much part of our community, as they’re all innovating and making waves in this space as well. They’re in a space that’s always evolving to build the Lab of the Future; and they are part of that solution. So, we don’t merely facilitate a conference of buying and selling between providers and services, we offer a space where everyone is evolving together. It’s a real melting pot, and that’s the fun bit really.
Zahid’s background in life sciences definitely gave us a starting point. Further to that, we’ve found that every time we put something out, our community engages, and as a consequence we’re introduced to people we never expected to be introduced to. The fact we’re always talking to people enriches our content – the people we meet and conversations we have change our way of thinking, and shape what we’re doing.
Although I’m in charge of our marketing operations, I have to say I’m not always sure where some of our contacts come from! One thing I’ve found quite surprising is the lack of reliance on a database – there’s a lot of power in word-of-mouth, especially in this space where everyone is working on something – why not share that? As we’re seen as adding value to the conversation, it allows people to find us through their connections and our supporters.
Scimcon is proud to sponsor Lab of the Future, and we can’t wait to see you at the Autumn virtual congress on 26–27 October 2021. Contact us today to learn more about our participation in the event, and stay tuned on our Opinion page for part 2 of our conversation with Luke.

The role of AI and ML in the future of lab informatics?
A few months ago I read an article on bioprocess 4.0, which discusses how combining AI and ML with extensive sensor data collected during biopharmaceutical manufacturing could deliver constant real-time adjustments, promising better process consistency, quality and safety.
This led to a discussion with some of my colleagues about what the future of Lab Informatics could look like when vendors start to integrate AI and ML into products such as lab information management systems (LIMS), electronic lab notebooks (ELN) and others.
AI: In simple terms, AI (artificial intelligence) makes decisions or suggestions based on datasets with the ultimate aim of creating truly instinctive system interfaces, that appear like you are interacting with a person.
ML: ML (machine learning) is one of the methods used to create and analyse the datasets used by AI and other system modules. Crucially, machine learning does not rely on a programmer to specify the equations used to analyse data. ML looks for patterns and can ‘learn’ how to process data by examining data sets and expected outcomes.
The following example is extremely simple, but it helps to illustrate the basic principles of ML. The traditional approach to adding two values together is to include the exact way the data should be treated within the system’s configuration.
By using ML, the system is given examples, from which it learns how the data should be processed.
Once the system has seen enough datasets, the ML functions learn that A and B should be added together to give the result. The key advantage of ML is its powerful flexibility: if we feed our example system with new datasets, the same configuration could be used to subtract, multiply, divide or calculate sequences, all without the need for specific equations.
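The addition example above can be sketched in a few lines of Python. This is a deliberately minimal illustration of the principle, not a production ML system: the system is never told “add A and B” – it is only shown example inputs and expected outcomes, and learns the weights itself.

```python
# Minimal sketch: the system learns "result = A + B" purely from examples.
# Instead of hard-coding the equation, we fit result = w1*a + w2*b by
# gradient descent on example datasets (inputs paired with expected outcomes).

def train(examples, lr=0.01, epochs=2000):
    """Learn weights w1, w2 so that w1*a + w2*b matches the example results."""
    w1, w2 = 0.0, 0.0
    for _ in range(epochs):
        for (a, b), target in examples:
            pred = w1 * a + w2 * b
            err = pred - target
            # Nudge each weight to reduce the error on this example.
            w1 -= lr * err * a
            w2 -= lr * err * b
    return w1, w2

# Training data: input pairs and their expected outcome (here, the sum).
examples = [((1, 2), 3), ((2, 5), 7), ((4, 1), 5), ((3, 3), 6)]
w1, w2 = train(examples)

# Both weights converge towards 1.0: the system has "discovered" addition
# without the equation ever being specified in its configuration.
print(round(w1, 2), round(w2, 2))
```

Feeding the same training function examples of subtraction or scaling would drive the weights to different values with no change to the code – which is the flexibility the text describes.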
Possibly without realising it, we already see ML in everyday life. When you open Netflix, Amazon Prime Video or Apple TV+ the recommended selections you are presented with are derived using ML. The systems learn the types of content each of us enjoy by interpreting our previous behaviour.
Most of us also have experience of personal assistants such as Amazon’s Alexa and Apple’s Siri. These systems are excellent examples of AI using natural speech both to understand our instructions and to communicate answers or the results of actions. ML not only powers the understanding of language but also provides many of the answers to our questions.
The fact that we all can recognise such an effective and powerful everyday example shows just how far AI and ML have come since their inception in the 1950s.
Voice recognition software has been available for decades; however, it has not made large inroads into the lab. It has been used in areas where extensive notes are taken, such as pathology labs or ELN experiment write-ups. These are the obvious ‘big win’ areas because of the volume of text that is traditionally typed, the narrow scope of AI functionality needed, and the limited need to interface to other systems.
However, companies such as LabTwin and LabVoice are pushing us to consider the widespread use of not just voice recognition, but natural language voice commands across the lab. Logging samples into LIMS, for example, is generally a manual entry, with the exception of barcode scanners and pre-created sample templates, where possible. A command such as “log sample type plasma, seals intact, volume sufficient, from clinic XYZ” is much simpler than typing and selecting from drop-downs. Other functions such as “List CofAs due for approval” or “Show me this morning’s Mass Spec run” would streamline the process of finding the information you need.
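To make the idea concrete, here is a hypothetical sketch of how a transcribed voice command might be mapped to structured LIMS fields. The field names, phrases and parsing rules are illustrative assumptions, not any vendor’s actual API; a real natural-language system would use trained language models rather than fixed patterns.

```python
import re

def parse_log_command(text: str) -> dict:
    """Extract sample attributes from a transcribed 'log sample ...' command.
    Field names here are invented for illustration."""
    record = {}
    m = re.search(r"sample type (\w+)", text)
    if m:
        record["sample_type"] = m.group(1)
    record["seals_intact"] = "seals intact" in text
    record["volume_sufficient"] = "volume sufficient" in text
    m = re.search(r"from clinic (\w+)", text)
    if m:
        record["clinic"] = m.group(1)
    return record

cmd = "log sample type plasma, seals intact, volume sufficient, from clinic XYZ"
print(parse_log_command(cmd))
# {'sample_type': 'plasma', 'seals_intact': True, 'volume_sufficient': True, 'clinic': 'XYZ'}
```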
Take stability studies where samples are stored in various conditions (such as temperature, humidity, and UV light) for several years and ‘pulled’ for analysis at various set points throughout the study.
The samples are analysed for decomposition across a matrix of conditions, time points and potentially product formulations or packaging types. Statistics are produced for each time point and used to predict shelf life using traditional statistics and graphs.
Stability studies are expensive to run and can take several years to reach final conclusions.
AI and ML could, with access to historical data, be used to limit the size of studies so they focus on a ‘sweet spot’ of critical study attributes. Ultimately, this could dramatically reduce study length by detecting issues earlier and predicting when failure will occur.
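As an illustrative sketch of the prediction idea, the snippet below fits a simple degradation trend to invented stability time points and extrapolates when assay potency would cross a 95% specification limit. Real studies use formal statistical models and confidence limits; this only shows the shape of the calculation.

```python
# Invented stability data: potency (% of label claim) at each pull point.
months = [0, 3, 6, 9, 12]
potency = [100.0, 99.1, 98.3, 97.2, 96.4]
spec_limit = 95.0

# Least-squares slope and intercept, computed by hand.
n = len(months)
mx = sum(months) / n
my = sum(potency) / n
slope = sum((x - mx) * (y - my) for x, y in zip(months, potency)) / \
        sum((x - mx) ** 2 for x in months)
intercept = my - slope * mx

# Predicted shelf life: when the fitted trend reaches the spec limit.
shelf_life = (spec_limit - intercept) / slope
print(f"degradation rate: {slope:.3f} %/month")
print(f"predicted shelf life: {shelf_life:.1f} months")
```

With enough historical studies to learn from, a model like this (in far more sophisticated form) could suggest which time points and conditions actually carry information, shrinking the study matrix.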
Instrument downtime, particularly unscheduled, is a significant cost to laboratories. Using ML to review each new run, comparing it with previous runs and correlating with system failures, could predict the need for preventative maintenance.
AI/ML interventions such as these could significantly reduce the cost of downtime. This type of functionality could be built into the instruments themselves, systems such as LIMS, ELN, Scientific Data Management Systems (SDMS) or instrument control software. If this was combined with instrument telemetry data such as oven temperature, pump pressure or detector sensitivity we have the potential to eliminate most unplanned maintenance.
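One simple form such an intervention could take is sketched below, assuming invented pump-pressure telemetry: a rolling baseline of recent readings, with any reading more than three standard deviations away flagged for preventative maintenance. A production system would learn thresholds from correlated failure history rather than hard-coding them.

```python
from statistics import mean, stdev

def flag_drift(readings, window=10, threshold=3.0):
    """Return indices of readings that deviate strongly from their recent baseline."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# A stable pressure trace (in bar) with a developing fault at the end.
pressure = [120.1, 119.8, 120.3, 120.0, 119.9, 120.2, 120.1, 119.7,
            120.0, 120.2, 120.1, 119.9, 126.5, 127.2]
print(flag_drift(pressure))  # flags the late pressure spike
```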
Another major concern with instrumentation in labs today is scheduling and utilisation rates. It is not uncommon for instruments to cost hundreds of thousands of pounds/dollars/euros, and getting the highest utilisation rates without obstructing critical lab workflows is a key objective for labs. However, going beyond the use of instrument booking systems and rudimentary task planning is difficult. It is not hard to imagine AI and ML monitoring systems such as LIMS and ELN, but this functionality could go much further. Tasks such as predicting workload; referring to previous instrument run times; calculating sample/test priority; and even checking scientists’ free diary slots could all be optimised to improve the scheduling of day-to-day laboratory work. The resulting optimisation would not only reduce costs and speed up workflows, but would dramatically reduce scientists’ frustration in finding available instruments.
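The scheduling idea can be sketched as a simple greedy assignment: jobs are taken in priority order, and each is placed on whichever instrument becomes free soonest. The job names, priorities and run times are invented; a real optimiser would also weigh calibration windows, analyst availability and forecast workload.

```python
import heapq

# (priority, job name, expected run time in minutes); lower number = more urgent.
jobs = [(1, "stability pull", 90), (3, "routine QC", 30),
        (2, "method validation", 120), (1, "release assay", 60)]

# A heap of (minutes until free, instrument) lets us always pick
# the instrument that becomes available soonest.
free_at = [(0, "HPLC-1"), (0, "HPLC-2")]
heapq.heapify(free_at)

schedule = []
for priority, job, run_time in sorted(jobs):      # most urgent jobs first
    start, instrument = heapq.heappop(free_at)    # soonest-free instrument
    schedule.append((job, instrument, start, start + run_time))
    heapq.heappush(free_at, (start + run_time, instrument))

for job, instrument, start, end in schedule:
    print(f"{job}: {instrument} from t={start} to t={end} min")
```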
Over the last few years, there has been a massive focus on data integrity within regulated labs. However, many of the control mechanisms that are put in place to improve integrity or mitigate issues are not real-time. For instance, audit trail review is often done monthly at best, and generally quarterly. Not only is this tedious, but it is also all too easy to miss discrepancies when reviewing line upon line of system changes.
ML could be used to monitor the audit trails of informatics systems and instrument systems in real time, and AI could report to managers any out-of-the-ordinary actions or result trends that do not ‘look’ normal. Where appropriate, the system could interact with the corporate training platform and assign specific data integrity training to applicable teams. The potential increase in data integrity, while reducing the headcount needed to achieve it, could be significant.
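One of the simplest checks such a monitor might run is sketched below, over invented audit-trail events: counting result amendments per user and flagging anyone far above the lab-wide norm (here, more than three times the median, a threshold chosen purely for illustration).

```python
from collections import Counter
from statistics import median

# Invented audit-trail events: (user, action).
events = [
    ("alice", "result_amended"), ("bob", "sample_logged"),
    ("carol", "result_amended"), ("alice", "result_amended"),
    ("dave", "result_amended"), ("eve", "result_amended"),
    ("eve", "result_amended"), ("eve", "result_amended"),
    ("eve", "result_amended"), ("eve", "result_amended"),
]

# Count amendments per user and compare against a robust baseline.
amendments = Counter(user for user, action in events if action == "result_amended")
baseline = median(amendments.values())
outliers = [user for user, count in amendments.items() if count > 3 * baseline]
print(outliers)  # ['eve'] — flagged for review, not accusation
```

A real system would look at many signals at once (timing, record types, sequences of actions), but the principle is the same: surface the unusual rows so a human reviews five lines instead of five thousand.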
Lab directors, IT professionals and the Lab Informatics industry are quite rightly focusing on the digital lab and digital lab transformations. Done right, this will form an excellent platform for the next level of informatics development, using AI and ML not just to propel digital science forward, but to revolutionise the everyday life of scientists. Personally, I cannot wait!
To find out more about how Scimcon can support your informatics project, contact us today.