A few months ago I read an article on bioprocess 4.0, which discusses how combining AI and ML with extensive sensor data collected during biopharmaceutical manufacturing could deliver constant real-time adjustments, promising better process consistency, quality and safety.
This led to a discussion with some of my colleagues about what the future of Lab Informatics could look like when vendors start to integrate AI and ML into products such as laboratory information management systems (LIMS), electronic lab notebooks (ELN) and others.
What is AI and ML?
AI: In simple terms, AI (artificial intelligence) makes decisions or suggestions based on datasets, with the ultimate aim of creating truly intuitive system interfaces that feel as though you are interacting with a person.
ML: ML (machine learning) is one of the methods used to create and analyse the datasets used by AI and other system modules. Crucially, machine learning does not rely on a programmer to specify the equations used to analyse data. Instead, ML looks for patterns and can ‘learn’ how to process data by examining datasets and their expected outcomes.
How does ML work?
The following example is extremely simple, but it helps to illustrate the basic principles of ML. The traditional approach to adding two values together is to specify exactly how the data should be treated within the system’s configuration.
By using ML, the system is given examples, from which it learns how the data should be processed.
Once the system has seen enough datasets, the ML functions learn that A and B should be added together to give the result. The key advantage of ML is its flexibility: if we fed our example system different datasets, the same configuration could learn to subtract, multiply, divide or calculate sequences, all without the need for specific equations.
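To make this concrete, here is a minimal sketch of the idea above: a tiny linear model that is shown (A, B) → result examples and works out for itself that the rule is addition. The model, learning rate and training loop are illustrative choices for this article, not any particular product’s implementation.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Training data: input pairs with their expected result.
# Note the system is never told that the rule is "addition".
examples = [((a, b), a + b) for a in range(10) for b in range(10)]

# A tiny linear model: result = w1*A + w2*B, starting from random weights.
w1, w2 = random.random(), random.random()
lr = 0.001  # learning rate

# Gradient descent: repeatedly nudge the weights to shrink the prediction error.
for _ in range(5000):
    (a, b), target = random.choice(examples)
    error = (w1 * a + w2 * b) - target
    w1 -= lr * error * a
    w2 -= lr * error * b

# The weights converge towards 1.0 and 1.0 -- in other words, "add A and B".
print(round(w1, 2), round(w2, 2))
```

Feed the same loop different example data (say, `a * b` as the target) and the same code would learn a different rule, which is exactly the flexibility described above.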
Where can we see examples of how ML and AI are used in everyday life?
Possibly without realising it, we already see ML in everyday life. When you open Netflix, Amazon Prime Video or Apple TV+ the recommended selections you are presented with are derived using ML. The systems learn the types of content each of us enjoy by interpreting our previous behaviour.
Most of us also have experience of personal assistants such as Amazon’s Alexa and Apple’s Siri. These systems are excellent examples of AI: they use natural speech both to understand our instructions and to communicate answers or the results of actions. ML not only powers the understanding of language but also provides many of the answers to our questions.
The fact that we can all recognise such effective and powerful everyday examples shows just how far AI and ML have come since their inception in the 1950s.
How will AI and ML affect the day-to-day operations of the lab?
Voice recognition software has been available for decades; however, it has not made large inroads into the lab. It has mainly been used where extensive notes are taken, such as in pathology labs or for ELN experiment write-ups. These are the obvious ‘big win’ areas because of the volume of text that is traditionally typed, the narrow scope of AI functionality needed, and the limited need to interface with other systems.
However, companies such as LabTwin and LabVoice are pushing us to consider the widespread use of not just voice recognition, but natural-language voice commands across the lab. Logging samples into LIMS, for example, is still largely manual entry, barring barcode scanners and pre-created sample templates where these are possible. A command such as “log sample type plasma, seals intact, volume sufficient, from clinic XYZ” is much simpler than typing and selecting from drop-downs. Other commands such as “List CofAs due for approval” or “Show me this morning’s Mass Spec run” would streamline the process of finding the information you need.
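As a thought experiment, even very simple parsing could map a spoken command like the one above to structured LIMS fields; real products use far more capable natural-language models. Everything in this sketch — the grammar, the field names, the clinic pattern — is a hypothetical illustration, not a LabTwin, LabVoice or LIMS API.

```python
import re

def parse_log_command(utterance: str) -> dict:
    """Map a spoken 'log sample ...' command to structured LIMS-style fields."""
    fields = {}
    m = re.match(r"log sample type (\w+)", utterance, re.IGNORECASE)
    if not m:
        raise ValueError("not a log-sample command")
    fields["sample_type"] = m.group(1).lower()
    lowered = utterance.lower()
    # Presence of these phrases sets the corresponding boolean flags.
    fields["seals_intact"] = "seals intact" in lowered
    fields["volume_sufficient"] = "volume sufficient" in lowered
    clinic = re.search(r"from clinic (\w+)", utterance, re.IGNORECASE)
    if clinic:
        fields["clinic"] = clinic.group(1)
    return fields

print(parse_log_command(
    "log sample type plasma, seals intact, volume sufficient, from clinic XYZ"))
```

The point is not the regular expressions — it is that once speech is transcribed, a structured record can be created without anyone touching a keyboard or a drop-down.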
Opportunities to take advantage of AI and ML within lab systems
Take stability studies where samples are stored in various conditions (such as temperature, humidity, and UV light) for several years and ‘pulled’ for analysis at various set points throughout the study.
The samples are analysed for decomposition across a matrix of conditions, time points and potentially product formulations or packaging types. Statistics are produced for each time point and used to predict shelf life using traditional statistics and graphs.
Stability studies are expensive to run and can take several years to reach final conclusions.
With access to historical data, AI and ML could begin to be used to limit the size of studies, focusing them on a ‘sweet spot’ of critical study attributes. Ultimately, this could dramatically reduce study length by detecting issues earlier and predicting when failure will occur.
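As a highly simplified illustration of the prediction side, the sketch below fits a straight-line degradation trend to a handful of invented early time points and extrapolates to a specification limit. Real stability modelling uses far richer statistics (Arrhenius kinetics, confidence intervals, and — with ML — many studies’ worth of historical data); the figures here are made up purely to show the principle.

```python
# Hypothetical early stability pulls: (months, % potency) at one storage condition.
pulls = [(0, 100.0), (3, 99.1), (6, 98.3), (9, 97.2)]
spec_limit = 95.0  # the product fails specification below this potency

# Ordinary least-squares fit of potency against time (stdlib arithmetic only).
n = len(pulls)
sx = sum(t for t, _ in pulls)
sy = sum(p for _, p in pulls)
sxx = sum(t * t for t, _ in pulls)
sxy = sum(t * p for t, p in pulls)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # % potency per month
intercept = (sy - slope * sx) / n

# Extrapolate: the month at which the trend crosses the spec limit.
shelf_life = (spec_limit - intercept) / slope
print(f"degradation: {slope:.3f} %/month, "
      f"predicted shelf life: {shelf_life:.1f} months")
```

Even this toy version shows why early prediction matters: a trend visible at nine months can flag a likely failure point years before the final pull is analysed.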
Moving on to lab instrumentation
Instrument downtime, particularly unscheduled, is a significant cost to laboratories. Using ML to review each new run, comparing it with previous runs and correlating with system failures, could predict the need for preventative maintenance.
AI/ML interventions such as these could significantly reduce the cost of downtime. This type of functionality could be built into the instruments themselves, or into systems such as LIMS, ELN, scientific data management systems (SDMS) or instrument control software. If this were combined with instrument telemetry data such as oven temperature, pump pressure or detector sensitivity, we would have the potential to eliminate most unplanned maintenance.
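A minimal sketch of the telemetry idea: compare each new run’s readings against a baseline of healthy runs and flag channels that have drifted. The channel names, readings and three-sigma threshold are all invented for illustration; a real system would learn the baseline and thresholds from the instrument’s own history.

```python
import statistics

# Telemetry from previous healthy runs (hypothetical values).
history = {
    "pump_pressure_bar": [201, 199, 200, 202, 198, 200, 201],
    "oven_temp_c": [40.1, 39.9, 40.0, 40.2, 40.0, 39.8, 40.1],
}

def maintenance_flags(new_run: dict, z_threshold: float = 3.0) -> list:
    """Return the telemetry channels drifting beyond z_threshold std devs."""
    flags = []
    for channel, value in new_run.items():
        baseline = history[channel]
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline)
        if abs(value - mean) > z_threshold * sd:
            flags.append(channel)
    return flags

# A new run with abnormally high pump pressure -> preventative maintenance due.
print(maintenance_flags({"pump_pressure_bar": 215, "oven_temp_c": 40.0}))
```

A z-score check is the crudest possible anomaly detector; the value of ML here is learning subtler multi-channel patterns that precede failures, which no fixed threshold can capture.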
Another major concern with instrumentation in labs today is scheduling and utilisation. It is not uncommon for instruments to cost hundreds of thousands of pounds, dollars or euros, so achieving the highest utilisation rates without obstructing critical lab workflows is a key objective. However, going beyond instrument booking systems and rudimentary task planning is difficult. It is not hard to imagine AI and ML, monitoring systems such as LIMS and ELN, taking this functionality much further. Predicting workload, referring to previous instrument run times, calculating sample and test priority, and even checking scientists’ free diary slots are all tasks that could be optimised to improve the scheduling of day-to-day laboratory work. The resulting optimisation would not only reduce costs and speed up workflows, but would also dramatically reduce scientists’ frustration in finding available instruments.
Over the last few years, there has been a massive focus on data integrity within regulated labs. However, many of the control mechanisms put in place to improve integrity or mitigate issues are not real-time. For instance, audit trail review is often done monthly at best, and generally quarterly. Not only is this tedious, but it is also all too easy to miss discrepancies when reviewing line after line of system changes.
ML could be used to monitor the audit trails of informatics and instrument systems in real time, and AI could report to managers any out-of-the-ordinary actions or result trends that do not ‘look’ normal. Where appropriate, the system could interact with the corporate training platform and assign specific data integrity training to the applicable teams. The potential gain in data integrity, achieved while reducing the headcount needed to deliver it, could be significant.
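As a minimal sketch of the audit-trail idea: count each user’s actions per day and flag any day far above that user’s typical volume. The log entries and the ‘twice typical volume’ threshold are invented; a production system would apply far more sophisticated anomaly detection over the real audit trail.

```python
from collections import Counter

# Hypothetical audit-trail entries: (user, day, action).
audit_log = [
    ("alice", 1, "edit"), ("alice", 2, "edit"), ("alice", 3, "edit"),
    ("bob", 1, "edit"), ("bob", 2, "edit"),
] + [("bob", 3, "edit")] * 12  # day 3: an unusual burst of changes by bob

def unusual_activity(log, factor=2.0):
    """Flag (user, day, count) entries far above the user's typical daily volume."""
    per_day = Counter((user, day) for user, day, _ in log)
    totals, days = Counter(), Counter()
    for (user, day), n in per_day.items():
        totals[user] += n
        days[user] += 1
    flags = []
    for (user, day), n in per_day.items():
        typical = totals[user] / days[user]  # mean actions per active day
        if n > factor * typical:
            flags.append((user, day, n))
    return flags

print(unusual_activity(audit_log))
```

The reviewer’s job then shifts from scanning every line to investigating a short list of flagged anomalies — which is exactly where the headcount saving comes from.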
Lab directors, IT professionals and the Lab Informatics industry are quite rightly focusing on the digital lab and digital lab transformations. Done right, this will form an excellent platform for the next level of informatics development, using AI and ML to propel not just digital science forward, but to revolutionise the everyday life of scientists. Personally, I cannot wait!
To find out more about how Scimcon can support your informatics project, contact us today.