Project or Program? Why adapting your approach and working practices makes the difference.

By Geoff Parker and Paul McTurk

Having worked on more than one hundred information system projects and programs over the last 20+ years, for lab-based organisations of all shapes and sizes, we know that people can sometimes confuse the two. It’s an easy mistake to make! However, there are very clear differences between a project and a program and, as we have demonstrated to our clients many times, handling each in the correct way can have a big impact on overall success.

What is a project vs. a program?

Projects are typically well-defined, as they deliver a well-understood, specific goal or outcome within a specified timeline: for example, implementing a new information system or service within a laboratory. There is usually a distinct team and a clear route from start to completion.

A program delivers a strategy or initiative – or a number of strategy points or initiatives – and is less easy to define than a project. For example, a program might be put in place to respond to a challenge such as: ‘We want to make the lab 30% more efficient.’ There might be (and usually are) projects underneath this, which could include ‘Specific enhancements to current information systems’, ‘Good lab practice training’, ‘Lab supply chain improvement’, etc. Programs can span several months, or even years, and therefore require strategic oversight, a lot of iteration and the involvement of many stakeholders.

Projects are managed through project management methodologies such as PRojects IN Controlled Environments (PRINCE2), and Gantt charts are often employed to map out how you will get from A to B and in what timeframe. At a program level, Gantt charts rapidly become overly complicated and you’re more likely to see a roadmap with aims and targets, but without the detail and structure of a project plan.

So why does this matter? It might be tempting to replicate how you plan and lead a project when thinking about a program. But it’s going to be impossible to scale and communicate effectively using the same approaches.

Having helped many lab-based organisations to run informatics projects and programs, we share some of our insights on how to lead, communicate, manage risk and account for human factors, when planning and rolling out both projects and programs.

1. Leadership

Program leaders require strategic thinking, flexibility, excellent communication and stakeholder management, strong delegation, and empowerment skills, as well as effective team and resource management, among many other attributes.

While project managers also need many of these skills, their focus is much more on tasks and delivery. In short, they prioritise everything related to ‘getting the job done’, on time and within budget.

Program leaders have a much wider remit, from defining the strategic direction and focus, to creating a structure under which the ‘child’ projects will operate, to managing ‘child’ project risks that could impact other ‘child’ projects or the program as a whole. Program leaders are focused on achieving benefits and strategic objectives that align with the organization’s vision.

2. Communication

Project communication is usually to a defined group of people on a regular basis, i.e. daily, weekly or monthly. Most people engaged in a project are involved to a similar degree and are very familiar with the details, so the level of information shared will be both quite granular and consistently consumed by all team members. Good communication within a project tends to be direct, detailed, and unfiltered.

For programs, where there may be hundreds of people involved with varying levels of engagement, cutting through the noise and providing updates that are impactful, relevant and easy to digest is key. Whereas ‘one size fits all’ may be suitable for a project, programs need to be communicated in various levels of detail, and, rather than relying solely on scheduled communication, benefit from participants ‘self-serving’ information.

Program leaders need to enable a shared awareness about what’s happening across the whole program, in an easily digestible format. A simple one-page graphic that shows the key milestones and summarises the roadmap can be effective and might be sufficient for some stakeholders. A program newsletter, outlining progress against key milestones and any major challenges or opportunities is another useful communication method. When sharing updates via tools such as Microsoft Teams, tagging stakeholders is a good way of ensuring your update attracts their attention.

Often Scimcon includes expert communications professionals within programs, who help determine the level of information sharing and recommend the best channels to use, as well as providing guidance on how to navigate organisational culture for the most effective program communication.

3. Risk management

Risk management is critical for both projects and programs.

Typically, within projects, risks are identified, investigated, and mitigated as the project progresses. The risks are listed and managed within a regularly updated risk log.

Once again, the scale and complexity of programs dictates a different approach. Rather than identifying risks as they become apparent, a proactive and systematic methodology is required.

A technique we have borrowed from product development methodologies, such as the Lean Startup framework, is Riskiest Assumption Testing, often referred to as RAT.

RAT is an effective technique that ensures the program’s most critical assumptions are identified and adequately tested, both at the start of the program and on an ongoing basis. For example, at the start, one of your riskiest assumptions is that your team can work well together at all. This needs to be tested early. See “Human factors” below.

Other examples of riskiest assumptions:

  1. Program objectives are well-defined, well understood and agreed.
  2. The lab and the wider organisation will accept the business change.
  3. There is sufficient budget for the program and the required ‘child’ projects.

RAT emphasizes rapid experimentation, learning from failures, and adapting mitigation strategies based on evidence.
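As a purely illustrative sketch (the assumption entries and scoring scale are examples, not a prescribed tool), the Python snippet below shows one simple way a program team might keep a RAT register and rank assumptions by impact and uncertainty, so the riskiest are tested first.

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    """A program assumption, scored so the riskiest can be tested first."""
    description: str
    impact: int        # 1 (minor) to 5 (program-threatening) if the assumption turns out to be wrong
    uncertainty: int   # 1 (well evidenced) to 5 (pure guess)
    test: str          # the cheapest experiment that would confirm or refute it
    evidence: list = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        return self.impact * self.uncertainty

# Hypothetical entries based on the example assumptions above
register = [
    Assumption("Program objectives are well-defined, understood and agreed", 5, 3,
               "Ask each sponsor to restate the objectives in one sentence and compare answers"),
    Assumption("The lab and wider organisation will accept the business change", 4, 4,
               "Pilot the new workflow with one lab team for two weeks"),
    Assumption("There is sufficient budget for the program and its 'child' projects", 5, 2,
               "Produce a rough-order-of-magnitude estimate per child project and compare to budget"),
]

# Test the riskiest assumptions first, and revisit the scores as evidence arrives
for a in sorted(register, key=lambda a: a.risk_score, reverse=True):
    print(f"{a.risk_score:>2}  {a.description}\n    Next test: {a.test}")
```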

4. Human factors

If a project team works well together, it might be tempting to think that larger teams can do the same. The difference between leading small teams of 10-20 people and teams that are much larger is significant.

Program delivery success is influenced by a variety of human factors that can impact the effectiveness and efficiency of the program and could easily justify a dedicated blog post.

These factors include team dynamics, motivation and morale, decision-making, conflict resolution, issue escalation and knowledge sharing.

Let’s look at one of these – issue escalation – in a little more detail.

Early escalation of issues is a key success factor in the on-time delivery of projects. When confronted with an issue, well-meaning team members can mistakenly believe it is their job to solve the problem quietly and report back once the resolution is complete. Often, however, this results in the potential problem only coming to the wider team’s attention days or possibly weeks later.

The escalation process should be multi-tiered (‘heads up’, ‘warning’ and ‘escalation’) and transparent within teams, so that it becomes second nature for individuals to share any concerns with the right people, at the appropriate time. Regular problem-solving sessions or informal team meetings where the only agenda point is discussing and brainstorming any concerns, no matter how small, are a good practice and something we do ourselves and advocate with clients!
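For illustration only, the short Python sketch below models the three tiers and some hypothetical routing rules for who hears about an issue at each tier; real escalation paths will depend on the program’s structure and governance.

```python
from enum import Enum

class Tier(Enum):
    HEADS_UP = 1    # early visibility: may never become a problem
    WARNING = 2     # likely to affect scope, timeline or budget if left unaddressed
    ESCALATION = 3  # needs a decision or help from outside the team now

# Hypothetical routing: who is informed at each tier (example audiences only)
ROUTING = {
    Tier.HEADS_UP: ["project team channel"],
    Tier.WARNING: ["project manager", "project team channel"],
    Tier.ESCALATION: ["program leader", "project manager", "affected 'child' project leads"],
}

def raise_issue(summary: str, tier: Tier) -> None:
    """Share a concern with the right people at the appropriate time."""
    for audience in ROUTING[tier]:
        print(f"[{tier.name}] -> {audience}: {summary}")

raise_issue("Vendor interface spec two weeks late; workaround being assessed", Tier.WARNING)
```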

The connected nature of the program and the ‘child’ projects within the program means that the likelihood of human factors affecting delivery increases and requires ongoing monitoring and proactive management.

Summary

Projects and programs may appear very similar in nature; however, due to programs’ scale and complexity, we highly recommend that you don’t attempt to lead them in the same manner as projects.

We have hopefully provided some tips and insight for how to take the right approach when planning, leading and implementing projects and programs. To ensure successful outcomes, project / program leaders should include the key aspects of leadership, communication, risk management and human factors in their project or program planning.

If you need help with your upcoming projects or programs, contact us.

How to work effectively with informatics consultants in life sciences?

As a leader in a pharmaceutical or life sciences organisation, getting the most out of your team and resources is always a top priority. After making the decision to proceed with a critical investment in consulting services, there may even be more pressure to find the optimal use of these time-limited external resources. So, how can you make sure you are using these resources to their full potential? In this blog, our industry expert Micah Rimer will show you how.

During Micah’s 20 years working at big pharma and vaccines corporations, including Bayer, Chiron, Novartis and GSK, he has successfully deployed consultancy groups within lab informatics and clinical projects. Micah has worked with Scimcon to support his teams on high-profile, critical projects.

Frame the problem – what does your implementation project need to achieve?

As with any business situation, it is important that there is a common goal that everyone is aligned around.

It is essential that you do not waste valuable time revisiting the same conversations. Ask yourself: “Is it obvious what problem we are trying to solve?” Often, issues can arise when people are arguing about implementing a solution, whilst losing sight of the challenge at hand. 

Take the example of Remote Clinical Monitoring: You might decide that it would be beneficial to have your Clinical Research Associates (CRAs) track and monitor the progress of a clinical study without traveling to clinical sites. That sounds like it could be very promising, but what is the problem that needs to be solved?

Without clear goals on what you want to accomplish with Remote Clinical Monitoring, it will be difficult to declare an implementation a success. In addition, if you and your organisation do not know what you are trying to achieve with a particular technical solution, it will be impossible to give your informatics consultants a clear set of deliverables.

So, first things first, agree on the problem statement!

One of the first times I hired Scimcon to support me with an informatics project, I had recently joined a pharma company and found myself in the middle of conflicting department objectives, with what seemed to be no clear path out of the mess I had inherited. The organisation had purchased an expensive new software system that had already become a failed implementation. After spending a year continuously configuring and programming it, it was no closer to meeting the business needs than when the project had started. There were two loud criticisms to address on that point:

This also highlighted a far wider range of issues, such as people feeling their skills were not being properly utilised while problems went unsolved, and the possibility that the bioinformatics department did not have the right goals to begin with.

To solve this challenge, we sat down with Scimcon to identify all the different problems associated with the inherited project, and to clarify what we needed to do to turn it into a success. In taking time to review the situation and without too much effort, we were able to come up with four key areas to address: 

  1. Understand how bioinformatics/ IT priorities should map to the organisation’s priorities – before we spent any more time and money, what did the organisation actually need?
  2. Solve the bioinformatics problem that the software had been purchased for (assuming that was indeed a verified need).  
  3. Determine how roles and work could be shifted and changed so that we were utilising the talents and the resources in the department.
  4. If possible, put the purchased software to use! 

With the help of Scimcon, we were able to define these problems and then focus on finding answers to each of the questions. In the end it turned out to be one of our most successful engagements together, award winning even. By just asking senior management what their biggest challenge was, we found their overriding priority was to have an overview of all the R&D projects going on. And while the new software was not particularly well suited for solving the bioinformatics problem that it had been acquired for, it could easily be used to map out the R&D process for portfolio tracking. Then, we turned our attention to the bioinformatics problem, which was easily solved by a bit of custom code from one of the bioinformatics programmers who felt that previously his skills were not being properly utilised.

Once we knew where we were, and where we wanted to get to, all we had to do was get there one challenge at a time.

Manage internal expectations – how will the informatics consultants work with your clinical/analytical teams?

Once you have identified and agreed on the problem that you want to solve, the next step is making sure the organisation is ready to work with your consultants. As with all relationships, business or otherwise, a crucial step is to make sure that everyone has the same expectations, and that all the relevant stakeholders are on the same page.

People have many different perspectives on why consultants are brought in.

As there can be so many different roles and perspectives on the use of consultants, you need to make sure that you address all the different stakeholder perspectives. It is important to establish a positive situation, as you want the consultants to be able to work with your teams without unnecessary tension.

When I was just starting out with my first LIMS (Laboratory Information Management System) implementation, I remember being impressed that you could hire someone with the specific experience and expertise to guide you on something they had done before but that was new to you. I wondered, “Why is that not done all the time? Why do so many implementation projects fail when you can bring in people who have solved that particular problem before?” When I asked Russell Hall, a Scimcon consultant working with us on that first project, he said that not everyone is comfortable admitting they need help. As my career has progressed, I have come to value that feedback more and more. There are many people who are highly competent and effective in their jobs, but are not comfortable with the appearance that they are not sufficient on their own. It is always important to manage for those situations, rather than assuming that everyone will welcome external help.

Lastly, it is also critical to manage expectations regarding the use of consultants. Your boss may need to defend the budget, or be prepared to stand behind recommendations or conclusions delivered by people outside of the organisation. It should also be considered that management might not readily accept something that seems obvious to employees working at a different level. By liaising with senior leaders from the outset, you can make sure both parties are aligned on how the consultants will interact with people in the company, and what their role will be. This is important both to achieve what you want internally and to make sure the consultants have a proper expectation of how their efforts will be utilised.

Communicate and adjust – how is your information managed between your team and consultants?

While it can be very tempting to feel that you can leave the majority of the project to the experts, the reality is things rarely go as smoothly as planned. As the life science business and information management have advanced over the last few decades, the amount of complexity and detail has grown tremendously. It is more and more difficult for a single person to maintain an overview of all the relevant facts. The only way to be successful is to communicate and make sure that the right people have the right information at the right time. Your consultants are no different.

Many organisations find it challenging to make decisions and communicate them effectively. Consultants do not typically have the same access and networks within the organisation that internal staff do, so it is imperative that you keep them up to date. You want to avoid them spending valuable time focusing on areas and deliverables that have become less important. Finding ways to keep consultants informed of the latest developments is absolutely necessary for them to deliver successfully. Figure out what makes sense by considering the organisational culture and the consulting engagement setup. Whether it is through frequent check-ins or online collaboration, be prepared to put in additional effort to make sure that information gets to where it needs to go.

As well as good communication, organisations have to be able to adjust as needed. Occasionally everything does work out according to plan, but that is more the exception than the rule when it comes to complex life science informatics projects. While timelines and commitments are critical, it is important to view any project as a collaboration. There will be unexpected software issues. There will be unplanned organisational changes and problems. People get sick, life happens. By having open and continuous dialogue, you can be best prepared to make the adjustments needed to find solutions together to unexpected problems.

Ensuring success in your informatics projects

Consultants can be hugely valuable to you and your organisation.

But you have to set up the right conditions for everything to work out well.

  1. Know what problem you are trying to solve, and make sure you have as much alignment around the problem statement as possible.
  2. Make sure the organisation is ready for the collaboration by ensuring that your team and management know what to expect out of the engagement, and that your consultants similarly know the scope and what their mission is.
  3. Lastly, you need to keep in constant communication and make sure that you are ready to work together to adjust to the inevitable bumps that will come up on the road.

Working together, you can get to where you need to go.

If you’re interested in working with Scimcon on your upcoming informatics project, contact us today for a no-commitment chat about how we can help you succeed.

The challenges of implementing ePRO – part two

Cell network and WIFI connectivity

Clearly, good cell network or WIFI connectivity is of primary importance when thinking about using ePRO in a clinical study. Sites should conduct a feasibility activity in their local vicinity during the study initiation phase to understand whether cell network strength is good enough for ePRO and, if not, whether there is likely to be a large population of subjects with home WIFI. Sites in remote locations with possible issues with cell network coverage, or even with the availability of electricity, should be reconsidered before a decision is made to use ePRO.

However, even with poor cell network coverage, it is possible to conduct a study with ePRO by ensuring the correct messaging appears to the subjects on the device. All ePRO systems allow subjects to access and complete their diaries on a daily basis without cell or WIFI connectivity; the data entered by the subject is stored on the device until connectivity is established. There are circumstances where the site may need to be made aware of certain responses in real time in order to maintain subject safety. If connectivity is established and the data is sent in real time, the site staff may receive an alert from the system asking them to contact the participant. To ensure participant safety is maintained, it is good practice to prompt the participant to contact the site / study doctor if certain responses are provided in the diary, such as if the participant required emergency medical attention.
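As a purely illustrative sketch of the store-and-forward pattern described above (not any vendor’s actual implementation; the class, safety rule and identifiers are invented for the example), the Python below shows diary entries queuing on the device until connectivity returns, with safety-relevant answers flagged for the site once the data arrives.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class DiaryEntry:
    subject_id: str
    question: str
    answer: str
    recorded_at: datetime
    synced: bool = False

# Hypothetical safety rule: answers that should trigger a site alert once received
SAFETY_ANSWERS = {"required emergency medical attention"}

class EproDevice:
    """Illustrative store-and-forward behaviour: entries queue locally until connectivity returns."""
    def __init__(self, subject_id: str):
        self.subject_id = subject_id
        self.outbox: List[DiaryEntry] = []

    def record(self, question: str, answer: str) -> None:
        self.outbox.append(DiaryEntry(self.subject_id, question, answer, datetime.utcnow()))

    def sync(self, connected: bool) -> List[DiaryEntry]:
        """Send queued entries when a connection is available; return any safety alerts raised."""
        alerts = []
        if not connected:
            return alerts
        for entry in self.outbox:
            entry.synced = True          # stand-in for the upload to the vendor platform
            if entry.answer in SAFETY_ANSWERS:
                alerts.append(entry)     # site staff would be asked to contact the participant
        self.outbox = [e for e in self.outbox if not e.synced]
        return alerts

device = EproDevice("SUBJ-001")
device.record("Since your last entry, did you need medical help?", "required emergency medical attention")
print(device.sync(connected=False))  # [] - data stays on the device while offline
print(device.sync(connected=True))   # safety alert surfaced once the data reaches the site
```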

Translations

As mentioned in an earlier instalment, translation is an aspect not typically associated with clinical systems such as EDC. Additionally, because the ePRO devices are in the hands of the participants, the screens, in their local languages, must be submitted to the local ethics committees within the countries in which the study is to take place.

It should not be forgotten that paper PRO is also subject to a translation process. However, the additional complexity with ePRO is the need to apply the translated text to the software in order to generate the questionnaire screens. This involves multiple review and update rounds between the ePRO vendor and the translation vendor which increases time, effort and therefore cost. Once the screens are generated there may be further review rounds during the in-country review between the local sponsor representatives, ePRO vendor and translation vendor. All of these review rounds increase the timescales for an activity that is already time critical.

Often the date for submission to the ethics committee is set during the planning phase of the study by the sponsor study team. If the submission date is missed, it could result in a delay to the study start. Consequently, it is important to ensure clear timelines are in place between agreeing the ePRO requirements, which will decide what is displayed on each of the screens, and submitting the screen report to the ethics committees. The ePRO vendor will need to be made aware of these timelines at the earliest opportunity. It can take more than 12 weeks to complete the translations and create the finalized screen reports. These timelines are difficult to manage if the process includes an in-country review round, which allows local country representatives from the sponsor to review the translations. It is important to make clear to these reviewers exactly what they are reviewing. Translation is not an exact science: there are many ways of writing the same sentence. As the translator is the trained expert in translations, it should be left to them to choose the most appropriate wording in the local language which most faithfully represents the English version. The in-country review, if indeed one is required, should only be conducted to ensure the correct screens are displayed in the screen reports. Allowing the in-country reviewer to make suggestions on preferred wording risks multiple back-and-forth review rounds, increasing timelines and jeopardizing study start dates.

End User Acceptance

As briefly mentioned in part one of this series, possibly the most crucial aspect when introducing a new system or process to a user base is change management. If you do not bring your users along the journey with you, you are less likely to gain their acceptance.

How does this manifest itself? With ePRO there is plenty of opportunity for issues to arise, from delays in getting devices to sites, to errors in the software, to usability and connectivity issues, all of which can affect the investigator’s ability to get on with their daily work. If you have not put in place a good change management process, you will find the investigators very quickly become disenchanted with the system, and even the smallest of issues will become magnified, resulting in escalations to the sponsor’s senior management team.

It is important from the outset to set the stage with the investigators. Why are we using ePRO? What benefits does it bring to the sponsor? It may require more work on behalf of the investigator, which will need to be compensated for. It is also about setting expectations. Things will go wrong, issues will need to be resolved, and backup processes may need to be utilized, but the investigator must know that support is available when needed and how to access it.

When implementing ePRO, thought should go into understanding how to improve the investigator’s experience. For example, investigators can be working on multiple studies at the same time, for different sponsors, all using ePRO, so adding labels to the packaging of shipped devices so that investigators can quickly store them together is a quick win.

As a sponsor it may also be necessary to implement an additional layer of support for the study team. When issues arise the investigator will contact the vendor helpdesk. Often the helpdesk are unable to provide an immediate remedy so the investigator will then contact the sponsor’s study team representatives. The additional layer of support sits between the study team and the vendor, collating issues, communicating technical information back and forth in a manner that is easily digestible and holding the ePRO vendor to account. This relieves the frustration experienced within the sponsor’s study team and reduces the likelihood of escalation.  

Qualification/Validation

It is important to remember that, in accordance with ICH GCP guidance section 5.2.1 [1], the sponsor may transfer trial-related duties to a vendor; however, the ultimate responsibility for the quality and integrity of the trial data always resides with the sponsor. In order to ensure data is collected, stored and transferred in accordance with ICH-GCP guidance, the sponsor will need a suitable strategy for oversight of the vendor’s processes. This may, at a minimum, include a qualification activity which involves auditing the ePRO vendor on a regular basis. It may also involve some internal validation work to ensure, as the sponsor, your own organisation’s technical and study-specific requirements for clinical systems are met.

It is not recommended for the sponsor to leave the responsibility of vendor oversight to the individual study teams. The study teams may not have the experience or technical know-how necessary to understand the challenges and considerations discussed earlier in this blog.

We would recommend a two-stage process.

Stage one is the qualification of the vendor’s system by a centralized team including stakeholders from the data management and computer system quality departments. Once qualified, the vendor’s system can be classed as a platform that can subsequently be used on multiple studies.

Stage two is to have experienced stakeholders conduct study-specific User Acceptance Testing (UAT) in order to test the many scenarios and nuances associated with ePRO and to ensure the system meets the requirements of the study Protocol. Once the system is implemented and running live on a study, it is far more difficult to update than it is to find and fix issues during the implementation phase.

However, UAT is the very least the Sponsor must do as part of the overall study specific validation activity (Stage two). The Sponsor’s quality management system (QMS) may also mandate that the implementation of any clinical system (which would include ePRO) requires internal validation documentation to supplement the vendor’s validation package. This can include documents such as a sponsor Validation Plan, Requirements Specification, Risk Assessment, Traceability Matrix and Validation Report.

ePRO vendors will often offer a service to create UAT scripts on behalf of the Sponsor; however, this is not recommended, as a misunderstanding of requirements by the vendor may also manifest itself in the scripting. This can result in a script passing when executed even though it does not meet the requirements of the Sponsor.

For these reasons, having experienced validation professionals generating the deliverables and executing the testing is recommended.

Summary

The two parts of this blog have detailed, at a high level, many of the challenges that can be experienced during the implementation and use of ePRO. These challenges are summarized below:

These challenges, and others, often create frustration for the investigators and study teams, and in the worst case scenarios, studies can be put on hold or stopped completely, costing millions of dollars and preventing a product from going to market.

A mechanism to help mitigate many of these challenges, or at the very least support the investigators and study teams through them, is for the sponsor to put in place a centralized ePRO team within their organization. This team, consisting of internal sponsor stakeholders and perhaps supplemented with Scimcon resources, would be responsible for the qualification of the ePRO vendor, the strategic implementation of libraries and standards, and for supporting study-specific ePRO implementations: helping to create validation deliverables, executing UAT, and acting as a second level of support during the conduct phase of a study. ePRO is fast becoming the standard, and it is worthwhile investing early in an internal team to ensure success.

To discuss how Scimcon can support your organization with the implementation of ePRO, please contact one of our consultants for an informal conversation.

References

[1] ICH Harmonised Guideline: Integrated Addendum to ICH E6(R1): Guideline for Good Clinical Practice, ICH E6(R2), ICH Consensus Guideline, https://ichgcp.net

The challenges of implementing ePRO – part one

Benefits of Implementing ePRO 

When it comes to documenting the advantages of using ePRO over paper in clinical trials, the benefits are clear. 

With all the advantages to using ePRO over paper it seems to be a no-brainer to use ePRO whenever possible. However, it’s important to be mindful of certain considerations and challenges that come with the implementation of ePRO within your organization before jumping in.   

Challenges and Considerations 

Historically, the implementation of electronic clinical systems in general has been challenging. In the majority of cases it requires the move from a paper-based process to an electronic system in an environment where the reliance has always been on paper, hindering the adoption of computer systems that are seen as alien. Taking EDC as an example, responses to an international survey indicated that 46% of respondents identified inertia or concern about changing current processes, and 40% identified resistance from investigative sites, as the major causes of adoption delays [2].

ePRO is not immune to these challenges. In fact, it could be argued that ePRO is even more susceptible. While ePRO suffers from the same technical and user acceptance issues that EDC experiences, ePRO is also placed in the hands of potentially thousands of study participants, many of whom may have little technical understanding. Additionally, ePRO relies on hardware (a mobile device or tablet), cell network or WIFI connectivity, translation into the participant’s local language, multiple user bases (study teams, investigators and participants) and local helpdesk support, all of which come with their own sets of challenges and associated costs, and few, if any, of which are encountered with EDC.

ePRO is one of the few electronic systems that directly collects source data and as a result comes under increased scrutiny from a data integrity and quality perspective, especially when used for primary or secondary end point data collection. The system must always be available in order to allow subjects to be activated on ePRO devices. If a participant leaves a clinical site without an active device, this can result in missed data which can be construed as a serious quality issue and perhaps put subject safety at risk.  

Before I go into the more detailed challenges associated with ePRO, let’s first consider the financial costs. 

Cost   

On the surface, it would appear that implementing ePRO is significantly more costly than paper. The expense of the devices, associated logistics and data usage (monthly SIM costs), the licenses, helpdesk and translations all contribute to costs that range from hundreds of thousands of dollars to multi-million-dollar contracts per study.

When making a business case for ePRO it is important to take into account the hidden costs associated with paper in order to compare the two.  

  1. The additional eCRFs that must be created in your EDC to house the data transcribed from the paper PRO.
  2. The time spent by site staff transcribing data from the paper PRO into the EDC, and the associated monitoring required to source data verify the transcribed data. For example, this activity requires on-site visits by the CRA.
  3. The data cleaning at the end of the study by the Data Management team, which takes time and effort with multiple communications back and forth to the investigator. ePRO can significantly reduce the timeline associated with data cleaning because these data are electronic source data.

When conducting a full assessment, the gap between the cost of implementing ePRO vs paper reduces significantly. ePRO vendors have attempted to provide examples which result in paper diaries actually contributing more cost to a study budget than ePRO.  

The business case for implementing ePRO should not be based solely on raw cost; doing so will likely result in failure to get agreement at the leadership level. You will find it easier to gain acceptance if you can show that ePRO costs are comparable to paper, while also concentrating on the intangible benefits, as, in the case of ePRO, these are the real reasons for its consideration. Increasing the quality of your data collection results in more confidence in that data, which in turn reduces the likelihood of rejection when submitted to the regulators (predominantly for primary and secondary end point data). Receiving the data in real time and reducing the need for data cleaning can help get a product to market quicker by shortening the timelines to close the study, which in turn results in cost avoidance.

Many ePRO vendors will provide a cost calculator: a spreadsheet where the sponsor can plug in parameters associated with their study to provide an estimate of costs before engaging with the vendor. Only a small number of parameters are required to calculate a good estimate, the most important being the length of the study in months and the number of participants. The length of the study drives the helpdesk, data usage and PM costs, whereas the number of participants drives the device, logistics and shipping costs. There are other costs associated with the configuration of the system, translation, shipping, number of sites, etc.; however, these are often negligible in comparison for larger studies.
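As a rough, hedged illustration of the arithmetic such a calculator might perform (all rates below are made-up placeholders, not vendor pricing), the Python sketch below separates the duration-driven costs from the participant-driven costs described above.

```python
def estimate_epro_cost(study_months: int, participants: int, sites: int,
                       # All rates below are placeholders for illustration, not real vendor pricing
                       helpdesk_per_month: float = 4_000,
                       data_usage_per_device_month: float = 15,
                       pm_per_month: float = 8_000,
                       device_cost: float = 400,
                       shipping_per_site: float = 250,
                       device_overage: float = 0.15) -> float:
    """Rough study-level ePRO estimate: duration drives helpdesk/data/PM, participants drive devices/logistics."""
    devices = int(participants * (1 + device_overage))   # sensible overage of provisioned devices
    duration_driven = (study_months * (helpdesk_per_month + pm_per_month)
                       + study_months * devices * data_usage_per_device_month)
    participant_driven = devices * device_cost + sites * shipping_per_site
    return duration_driven + participant_driven

print(f"Estimated ePRO cost: ${estimate_epro_cost(study_months=24, participants=500, sites=40):,.0f}")
```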

In summary, it is important to build a business case for ePRO within your organization in order to assist in gaining acceptance at a leadership level. The business case should include areas of efficiency over paper, together with examples of ePRO costs using the cost calculators provided by the vendors, as well as emphasizing the other benefits of ePRO, such as subject safety, compared to paper solutions.

Software 

In the past, ePRO implementations were customized pretty much from the ground up, coding the study specifics into the vendor’s study builder toolkit. This resulted in a huge effort to validate the system and ensure errors and bugs were captured before studies went live. Inevitably, despite all this testing, some issues did make it through to the live study, causing frustration for the participants, investigators and study teams.

Over the past decade the systems have become more sophisticated. Less code is required during the implementation phase, having largely been replaced with configuration. Vendors have also introduced library functionality, which allows sponsors to define questionnaires up front that can be reused across studies. As the questionnaire is not rebuilt every time, there is less opportunity to introduce errors. Additionally, the ability to reuse questionnaires from a library results in less work by the vendor per study, less validation on behalf of the sponsor, and can reduce the time and costs during the implementation phase.
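To make the library idea concrete, here is a hedged sketch (in Python, with made-up identifiers and field names, not any vendor’s actual schema) of how a pre-validated questionnaire definition might be reused across studies through configuration rather than re-coding.

```python
# Hypothetical library entry: a validated questionnaire defined once and reused across studies
PAIN_DIARY_V2 = {
    "library_id": "LIB-PAIN-DIARY-002",   # version-controlled library identifier (invented)
    "items": [
        {"id": "PD01", "text": "Rate your worst pain in the last 24 hours", "type": "nrs_0_10"},
        {"id": "PD02", "text": "Did you take rescue medication today?", "type": "yes_no"},
    ],
    "schedule": {"frequency": "daily", "window_hours": 4},
}

def build_study_questionnaire(library_entry: dict, study_id: str) -> dict:
    """Configure (rather than re-code) a study questionnaire from a pre-validated library entry."""
    return {"study_id": study_id, **library_entry}

study_config = build_study_questionnaire(PAIN_DIARY_V2, study_id="ABC-123")
print(study_config["library_id"], "reused in study", study_config["study_id"])
```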

It may also be possible to standardize other areas of functionality, perhaps the workflow as to when questionnaires are made available to the participants, or the alerting system, or the visit schedule. It may not be possible to standardize across therapeutic areas, but within a therapeutic area where multiple studies collect the same data this approach can result in substantially reduced timelines during the implementation phase, while reducing the risk of software errors on the studies.     

Hardware (mobile device or tablet) 

ePRO can be implemented in a number of different modalities. In this blog, we are concentrating on provisioned devices, which are provided by the ePRO vendor at a cost to the study Sponsor, and “bring your own device” (BYOD), where a subject’s own device is used as the ePRO instrument. It should be noted that all studies require provisioned devices to a certain degree in order to cater for cases where a subject either does not own a compatible mobile device or does not own a mobile device at all.

When provisioning devices, ePRO vendors are responsible for the associated logistics such as software installation and shipping. Vendors are generally very knowledgeable when it comes to the customs regulations in many countries, including the average timelines required to get a shipment to a site. 

In scenarios where competitive recruitment between sites is employed, it is particularly important to plan ahead. As it may not be possible to predict the number of subjects that will be recruited at a specific site, and therefore the number of devices required at that site, it is necessary to purchase a sensible overage of provisioned devices. Although costly, this ensures sites will not run out of devices.

With an increasing number of the world’s population now owning smart phones, BYOD was seen as the natural progression for ePRO. It reduces the costs and burden of acquiring devices and the associated logistics, and also reduces the monthly costs for data usage. These costs do not completely disappear, as a certain level of provisioning is required for those cases where participants don’t own a compatible smart phone. BYOD also reduces the risk of not having enough devices on site, especially, as mentioned above, with competitive recruitment. However, BYOD does come with its own set of unique challenges, mainly associated with data integrity and privacy. Some considerations might be:

  1. How do you ensure equivalence between participants’ devices to guarantee the questionnaire is displayed to each participant without the introduction of bias?
  2. As participants’ devices are not locked down, participants can turn off alarms/alerts, change the date and time, etc. How can this be managed?
  3. Who pays for the data usage on the participant’s SIM card?
  4. Participants are not required to carry multiple devices (a study device and their own smart phone). This is seen as a benefit and can increase compliance further.
  5. As the participant will use their own personal smart phone, which is not dedicated to the study, they may have concerns over the security of their non-study-related private information.

There are clearly a lot of benefits to using BYOD over provisioned devices, and more and more sponsors feel comfortable moving into this space; however, it is important to consider the implications before doing so.

In the next instalment of this blog, we will discuss some other challenges of implementing ePRO in your organisation, such as connectivity, translations, and end user acceptance testing. Keep an eye on our Opinion page for part two of the series, coming soon. 

References 

[1] ‘Guidance for Industry – Electronic Source Data in Clinical Investigations’, FDA, 2013, http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm328691.pdf

[2] Welker JA. ‘Implementation of electronic data capture systems: barriers and solutions.’ Contemp Clin Trials. 2007 May;28(3):329-36. doi: 10.1016/j.cct.2007.01.001. Epub 2007 Jan 11. PMID: 17287151.

Planning for successful User Acceptance Testing in a lab or clinical setting

What is User Acceptance Testing?

User Acceptance Testing (UAT) is one of the latter stages of a software implementation project. UAT fits in the project timeline between the completion of configuration / customisation of the system and go live. Within a regulated lab or clinical setting UAT can be informal testing prior to validation, or more often forms the Performance Qualification (PQ).

Whether UAT is performed in a non-regulated or regulated environment it is important to note that UAT exists to ensure that business processes are correctly reflected within the software. In short, does the new software function correctly for your ways of working?

Identifying and managing your requirements

You would never go into any project without clear objectives, and software implementations are no exception. It is important to understand exactly how you need software workflows and processes to operate.

To clarify your needs, it is essential to have a set of requirements outlining the intended outcomes of the processes. How do you want each workflow to perform? How will you use this system? What functionality do you need and how do you need the results presented? These are all questions that must be considered before going ahead with a software implementation project.

Creating detailed requirements will highlight areas of the business processes that will need to be tested within the software by the team leading the User Acceptance Testing.

Requirements, like the applications they describe, have a lifecycle and they are normally defined early in the purchase phase of a project. These ‘pre-purchase’ requirements will be product independent and will evolve multiple times as the application is selected, and implementation decisions are made.

While it is good practice to constantly revise the requirements list as the project proceeds, it is often the case that they are not well maintained. This can be due to a variety of reasons, but regardless of the reason you should ensure the system requirements are up to date before designing your plan for UAT.

Assessing your requirements

A common mistake for inexperienced testing teams is to test too many items or outcomes. It may seem like a good idea to test as much as possible, but this invariably means all requirements, from the critical to the inconsequential, are tested to the same low level.

Requirements are often prioritised during the product selection and implementation phases according to MoSCoW analysis. This divides requirements into Must-have, Should-have, Could-have and Won’t-have, and is a great tool for assessing requirements in these earlier phases.

During the UAT phase these classifications are less useful. For example, there may be requirements for a complex calculation within a LIMS, ELN or ePRO system. These calculations may be classified as ‘Could-have’ or low priority because there are other options to perform the calculations outside of the system. However, if these calculations are added to the system during implementation, they are most likely, due to their complexity, a high priority for testing.

To avoid this, the requirements, or more precisely their priorities, need to be re-assessed as part of the initial UAT phase.

A simple but effective way to set priority is to assess each requirement against the risk criteria and assign a testing score. The following criteria are often used together to assess risk:

Once the priority of the requirements has been classified the UAT team can then agree how to address the requirements in each category.

A low score could mean the requirement is not tested or included in a simple checklist.

A medium score could mean the requirement is included in a test script with several requirements.

A high score could mean the requirement is the subject of a dedicated test script.
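As a hedged illustration only, the Python sketch below assumes three example risk criteria (business impact, complexity, likelihood of failure) scored 1-3 and placeholder thresholds; each UAT team would agree its own criteria and cut-offs.

```python
def testing_score(business_impact: int, complexity: int, likelihood_of_failure: int) -> int:
    """Combine illustrative 1-3 risk criteria into a single testing score (assumed criteria, not a standard)."""
    return business_impact * complexity * likelihood_of_failure

def testing_approach(score: int) -> str:
    # Thresholds are placeholders; the UAT team agrees how to address each category
    if score >= 18:
        return "dedicated test script"
    if score >= 6:
        return "included in a shared test script"
    return "checklist only (or not tested)"

# e.g. a complex in-system calculation: modest business impact but high complexity and failure likelihood
score = testing_score(business_impact=2, complexity=3, likelihood_of_failure=3)
print(score, "->", testing_approach(score))   # 18 -> dedicated test script
```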

Planning UAT

A key question often asked of our team is how many test scripts will be needed and in what order should they be executed? These questions can be answered by creating a Critical Test Plan (CTP). The CTP approach requires that you first rise above the requirements and identify the key business workflows you are replicating in the system. For a LIMS system these would include:

Sample creation, Sample Receipt, Sample Prep, Testing, Result Review, Approval and Final Reporting.

Next the test titles required for each key workflow are added in a logical order to a CTP diagram, which assists in clarifying the relationship between each test. The CTP is also a great tool to communicate the planned testing and helps to visualise any workflows that may have been overlooked.

Now that the test titles have been decided upon, requirements can be assigned to a test title and we are ready to start authoring the scripts.
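As a purely illustrative sketch (the workflow names, test titles and requirement identifiers are hypothetical), the Python snippet below shows how a simple CTP structure might map key LIMS workflows to ordered test titles, each with its assigned requirements.

```python
# Hypothetical Critical Test Plan: key LIMS workflows -> ordered test titles -> assigned requirements
CRITICAL_TEST_PLAN = {
    "Sample creation":  [("UAT-01 Register samples", ["URS-001", "URS-002"])],
    "Sample Receipt":   [("UAT-02 Receive and label samples", ["URS-003"])],
    "Sample Prep":      [("UAT-03 Prepare samples for testing", ["URS-005"])],
    "Testing":          [("UAT-04 Enter and calculate results", ["URS-010", "URS-011"])],
    "Result Review":    [("UAT-05 Review and reject results", ["URS-012"])],
    "Approval":         [("UAT-06 Approve results", ["URS-013"])],
    "Final Reporting":  [("UAT-07 Generate final report", ["URS-020"])],
}

# Print the plan in execution order, clarifying which requirements each test covers
for workflow, tests in CRITICAL_TEST_PLAN.items():
    for title, requirements in tests:
        print(f"{workflow}: {title} covers {', '.join(requirements)}")
```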

Choosing the right test script format

There are several different approaches to test script formats. These range from simple checklists, to ‘objective-based’ scripts, where an overview of the areas to test is given but not the specifics of how to test, to very prescriptive step by step instruction-based scripts.

When testing a system within the regulated space you generally have little choice but to use the step by step approach.

Test scripts containing step by step instruction should have a number of elements for each step:

A typical example is given below.


However, when using the step by step format for test scripts, there are still pragmatic steps that can be taken to ensure efficient testing.

Data Setup – Often it is necessary to create system objects to test within a script. In an ELN this could be an experiment, reagent or instrument; in ePRO, a subject or site. If you are not directly testing the creation of these system objects in the test script, their creation should be detailed in a separate data setup section outside of the ‘step by step’ instructions. This saves time during script writing, and any mistakes made in the data setup will not be classified as script errors and can be quickly corrected without impacting test execution.

Low Risk Requirements – If you have decided to test low risk requirements, then consider the most appropriate way to demonstrate that they are functioning correctly. A method we have used successfully is to add low risk requirements to a table outside of the step by step instructions. The table acts as a checklist, with script executors marking off each requirement that they see working correctly while executing the main body of step by step instructions. This avoids adding the low risk requirements to the main body of the test script but still ensures they are tested.

Test Script Length – A common mistake made during script writing is to make the scripts too long. If a step fails while executing a script, one of the resulting actions could be to re-run the script. This is onerous enough when you are on page 14 of a 15-page script; it is significantly more time-consuming if you are on page 99 of 100. While there is no hard and fast rule on the number of steps or pages within a script, it is best to keep them to a reasonable length. An alternative way to deal with longer scripts is to separate them into sections, which allows the option of restarting the current block of instructions within a script, instead of the whole script.

Are all the requirements covered?

An important task when co-ordinating UAT is to be fully transparent about which requirements are to be tested and in which scripts. We recommend adding this detail against each requirement in the User Requirements Specification (URS). This appended URS is often referred to as a Requirements Trace Matrix. For additional clarity, we normally add a section to each test script that details all the requirements tested in the script, as well as adding the individual requirement identifiers to the steps in the scripts that test them.
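As an illustration of the idea (with hypothetical requirement and script identifiers), the short Python sketch below builds a minimal Requirements Trace Matrix view and flags any requirement that no script covers before execution starts.

```python
# Hypothetical trace data: which scripts (and steps) test each requirement
requirements = ["URS-001", "URS-002", "URS-003", "URS-010"]
tested_in = {
    "URS-001": [("UAT-01", "step 4")],
    "URS-002": [("UAT-01", "step 7"), ("UAT-03", "step 2")],
    "URS-010": [("UAT-03", "step 5")],
}

# Build a simple trace matrix view and flag coverage gaps
for req in requirements:
    coverage = tested_in.get(req, [])
    where = ", ".join(f"{script} ({step})" for script, step in coverage) or "NOT COVERED"
    print(f"{req}: {where}")
```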

What comes next?

UAT is an essential phase in implementing new software, and for inexperienced users it can become time-consuming and difficult to progress. However, following the above steps from our team of experts will assist in authoring appropriate test scripts and lead to the overall success of a UAT project. In a future blog we will look at dry running scripts and formal test execution, so keep an eye on our Opinion page for further updates.
