Industry leader interviews: Jana Fischer

We’re kicking off 2023 with a new industry leader interview, and shining a spotlight on Jana Fischer, Co-Founder and CEO of Navignostics.

In this blog, we speak to Jana about Navignostics’ mission, and how the team plans to revolutionise personalised oncology treatments with the help of data and AI.

Tell us about Navignostics

Navignostics is a personalised cancer diagnostics start-up based in Zurich, Switzerland. Our goal is simple – we want to revolutionise cancer treatment by identifying a highly personalised, and thus optimal, treatment for every patient, to ensure that each patient’s specific cancer is targeted and fought as needed. We do this by analysing tumour material: we extract spatial single-cell proteomics information and use this data to analyse many proteins simultaneously in individual cells within the tissue.

What is spatial single-cell proteomics?

Single-cell proteomics involves measuring and identifying proteins within a single cell, whereas spatial proteomics focuses on the organisation and visualisation of these proteins within and across cells. Combining these two approaches allows the team at Navignostics to characterise tumours at a cellular level, by identifying the proteins present across cells in a tumour, as well as how these proteins and cells are organised. This means the team can provide a more accurate estimate of how a given tumour will respond to different medications and treatments.

Proteins are typically the targets of cancer drugs, and measuring them at a cellular level allows us to identify different types of tumour cells, as well as the immune cells that are present and how the two interact. This data is highly relevant for informing clinicians of the best form of (immuno-)oncology and combinatorial treatment for individual patients. It is also highly valuable to pharma companies looking to accelerate their oncology drug development, by providing insight into a drug’s mode of action, and signatures to identify responders to novel drugs.

The kind of data that we are able to extract from different types of tumours is monumentally valuable, so the work doesn’t stop there. All of the data we harness from these tumours is stored centrally, and we plan to build it into a system we refer to as the Digital Tumour, which will continuously improve the recommendations we can make to our clinical and pharma partners. Our journey has been rapid, though it is built on years of research and preparation: we founded the business in 2022 as a spin-off from the Bodenmiller Lab at the University of Zurich.

The dream became a reality for us in November 2022, when we secured a seed investment of CHF 7.5m. This seed funding will allow us to pursue our initial goals: establishing the company, achieving certification for our first diagnostic product, developing our Digital Tumour and, by extension, collaborating with pharma and biotech partners in oncology drug development. It has also given us the resources we need to move to our own premises: we are due to move off the university campus in May 2023. This offers us a great opportunity to push forward with the certification processes for our new lab, and it gives us the chance to grow our team and expand our operation. We will be located in a start-up campus for life science organisations in the Zurich region, so we’ll be surrounded by companies operating in a similar field and at a similar capacity.

Tell us more about the Digital Tumour – how does it work?

The Digital Tumour will be the accumulation of all the molecular data we have extracted from every tumour we have analysed to date, and all those we analyse in future. Connected to that, we store information on clinical parameters and patient response to treatment. Over time, our aim is to use this central data repository to identify new tumour signatures, and to build a self-learning system that provides fully automated treatment suggestions for new patients, based on how their molecular properties compare to those of previously analysed patients who were successfully treated.
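As a loose illustration only – Navignostics’ actual system is far richer, and all names and numbers below are invented – the core idea of matching a new patient’s molecular profile against previously treated, well-characterised patients can be sketched as a nearest-neighbour lookup over protein-abundance vectors:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Similarity between two protein-abundance profiles (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def suggest_treatment(new_profile, reference_cases):
    """Return the treatment of the most similar previously analysed case.

    reference_cases: list of (protein_profile, successful_treatment) pairs.
    """
    best_case = max(reference_cases,
                    key=lambda case: cosine_similarity(new_profile, case[0]))
    return best_case[1]

# Toy reference data: protein-marker abundances (entirely made up).
reference = [
    ([0.9, 0.1, 0.4], "immunotherapy A"),
    ([0.2, 0.8, 0.7], "targeted therapy B"),
]
print(suggest_treatment([0.85, 0.15, 0.5], reference))  # closest to the first case
```

A production system would of course use far higher-dimensional data, learned similarity measures and clinical validation; the sketch only conveys the compare-to-past-cases principle described above.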

Sounds interesting – are there any challenges to working with a database of this size?

Our data storage is quite advanced, so volume isn’t really a challenge for us. Our main focus is standardising the input data itself. The technology is based on years of research, and the data analysis requires a great deal of experience and in-depth expertise. To extract the full value from this data, it must be completely standardised; data integrity is therefore vital to our work, and allows us to get the maximum value from past analyses. Our experience in the Bodenmiller Lab allowed us to develop standardised processes that ensure all of our data is fully comparable, which means we can learn more and more from our past data and apply this to new cases that we analyse.

It is also important to report on our complex data in a comprehensive but easily interpretable manner to the clinician/tumour board who needs to organise a treatment plan. We’re currently working with our clinical collaborators to develop readily understandable and concise reporting outputs. Unlike genomics analysis, our reports focus on proteins in tissue, which is the same information that clinicians are used to working with. So, there is a common language there that offers us the unique opportunity to provide clinicians with data they can easily interpret and work with.

What does this kind of research and data mean for oncology, in terms of pharmaceuticals, biologics, and healthcare?

It’s important to note that personalised treatment approaches and precision medicine are not new concepts in the diagnostics space. However, our technology and algorithms allow us to extract novel types of biomarkers which were previously inaccessible or unknown, so we’re helping to level the playing field and give clinicians and drug developers comprehensive information to individualise therapies.

Comprehensive tumour data is truly at the heart of what we do, and one key benefit of our technology is that we’re able to analyse very small amounts of sample – such as fine-needle biopsies – to provide therapy suggestions. We can also analyse biobanked tumour material, so if there is any old material that has been stored, we have the ability to analyse those samples retrospectively. Not only does this help us to fuel our Digital Tumour with more data, but it also allows us to examine new fields such as the long-term survival rates of patients with these tumours. This is of huge value in fuelling our product development pipeline, because it allows us to identify differences in molecular properties between individuals that may not have been considered at a clinical level, but may have played a role in patients’ responses to treatments and long-term survival outcomes.

This kind of retrospective data also plays a key role in the evolution of healthcare and drug development, as having the technologies available to acquire this sort of data and mine it to our advantage will provide enormous benefits. These include improving individual treatment courses for patients, as well as expediting the development of novel cancer drugs so pharma companies can get more effective treatments to market sooner.

For example, one commonly cited statistic is that 90% of clinical drug development fails during phase I, II and III trials and drug approval. Often, this arises from a lack of information available to identify the subset of patients most likely to benefit from a novel drug. Access to Navignostics’ technology and algorithms, and to a database such as the Digital Tumour, offers the potential to pre-select the right patients to enrol in clinical trials and to more easily identify the patients who do respond to the novel treatment – which could substantially expedite drug development in the trial stage and help bring more effective drugs to market.

Even unsuccessful trials offer valuable opportunities: it is possible to repurpose and reanalyse material from previous failed trials. Such high rates of failure in clinical development mean that a large number of companies have invested millions of dollars in developing drugs that have not come to fruition. If those companies want to re-mine their data, our team can reinterpret the existing work to identify more successful strategies, giving those drugs another chance and offering a better return on investment.

A failure no longer needs to be a failure. Navignostics and its offerings can bring value to our pharma and biotech partners, and will also bring direct benefit to patients and clinicians once we launch our diagnostics product. So, data from every facet of the oncology industry, from curing a patient to halting the development of a drug, can offer us valuable insight that both we and the Digital Tumour could learn from when developing treatments.

What does 2023 and beyond have in store for Navignostics?

The next three years will be critical for our work, and we have projected timelines and key milestones for our diagnostics developments that we aim to achieve before our next funding round. Along the way, we are actively speaking to biotech and pharmaceutical organisations to identify projects and build the foundation for long-lasting collaborations. We are looking forward to a successful continuation of the Navignostics development in 2023!

Scimcon is proud to showcase start-up companies like Navignostics, and we’re looking forward to seeing how the company will grow over the coming years.

To contribute to our industry leader blog series, or to find out more about how Scimcon supports organisations with lab informatics and data management solutions, contact us today.

Industry leader interviews – Marilyn Matz (Paradigm4)

Marilyn, can you give us a quick insight into Paradigm4, how long the business has been operating and what it does?

Turing Award laureate Mike Stonebraker and I co-founded Paradigm4 in 2010 to bring technology from Mike’s MIT lab to the commercial science community, and to transform the way researchers interrogate and analyse large-scale multidimensional scientific data. The aim was to create a software platform that allows scientists to focus on their science without getting bogged down in data management and computer science details – enabling more efficient hypothesis generation and validation, and delivering insights to advance drug discovery and precision medicine.

What was the motivation behind setting up Paradigm4?

Throughout his 40 years working with database management systems, Mike heard from scientists across disciplines from astrophysics, climatology and computational biology that traditional approaches for storing, analysing and computing on heterogeneous and highly dimensional data using tables, files and data lakes were inefficient and limiting. Valuable scientific data—along with its metadata—must be curated, versioned, interpretable and accessible so that researchers can do collaborative and reproducible research.

We created a technology (REVEAL™) that is purpose-built to handle large-scale heterogeneous scientific data. Storage is organised around arrays and vectors to enable sophisticated data modelling as well as advanced computation and machine-learning. This enables scientists to ask and answer more questions, and get more meaningful answers, more quickly.

As one of the areas you are working with is translational research, how would you explain the process?

Translational research is the process of applying ideas, insights and discoveries generated through basic scientific inquiry to the treatment or prevention of human disease. The philosophy of “bench to bedside” underpins the concept of translational medicine, from basic research to patient care.

There are a number of benefits to streamlining translational research: it gives scientists the ability to integrate ’OMICS, clinical, EMR, biomedical imaging, wearables and environmental data to build a rich, systems-level understanding of human biology, disease and health.

Can you give us any examples of Translational Research projects you are currently working on?

We are actively working with leading biopharma companies globally, as well as research institutes. One of our current projects is working with Alnylam Pharmaceuticals to expedite their research leveraging one of the biggest genetic projects ever undertaken – the UK Biobank. Over 500,000 people have donated their genotypes, phenotypes and medical records. With so much data available on such a large scale, Alnylam’s scientists faced a challenge when it came to extracting meaningful information and making valuable connections that could unlock breakthroughs in scientific research.

The UK Biobank captures genomics, longitudinal medical information and images, so having all that data in one place allows researchers to correlate someone’s traits and presence/absence of a disease, or even susceptibility to diseases like COVID-19, with their genetic make-up. Alnylam has used our technology to help use these correlations to investigate causes of disease and identify potential treatments.

What new areas of life science research promise to uncover new insights into human health?

The idea of precision medicine – delivering the right drug treatment to the right patient at the right time and at the right dose – underpins current thinking in healthcare practice, and in pharma R&D. However, until single-cell ’OMICS came along, researchers were looking at an aggregated picture – the ’OMICS of a tissue system, rather than that of a single cell type. Now, single-cell analysis has become a major focus of interest and is widely seen as the ‘game changer’ – with the potential to take precision medicine to the next level by adding ‘right cell’ into the mix.

We offer biopharmaceutical developers the ability to break through the data wrangling, distributed computing and machine-learning challenges associated with the analysis of large-scale, single-cell datasets. Users can then build a multidimensional understanding of disease biology, scale to handle more samples from patients with more cells, more features, broader coverage and readily assess key biological hypotheses for target evaluation, disease progression and precision medicine. 

How does Paradigm4 help scientists resolve and even advance challenges with data analysis and interpretation?

By using our platform, data are natively organised into arrays that can easily be queried with scientific languages such as R and Python. The old way of working – opening many files and transforming them into matrices and data frames for use with scientific computing software – is no longer necessary, because the data are natively “science-ready”. For companies that have tens of thousands of data sets, aggregating that data in a usable format is tremendously empowering.
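This sketch does not show REVEAL’s actual interface; it simply illustrates, with an invented toy dataset, what “science-ready”, array-native data means in practice – values indexed by gene and sample that can be sliced and reduced directly, rather than parsed out of many flat files:

```python
import numpy as np

# Toy expression matrix (values invented for illustration):
# rows are genes, columns are tissue samples.
genes = ["ACE2", "TMPRSS2", "FURIN"]
samples = ["lung", "gut", "heart"]
expression = np.array([
    [5.1, 7.3, 2.2],   # ACE2 across samples
    [3.0, 6.8, 0.9],   # TMPRSS2
    [4.4, 4.1, 3.7],   # FURIN
])

# One-line query: mean ACE2 expression across all tissues.
ace2_row = expression[genes.index("ACE2")]
ace2_mean = ace2_row.mean()

# Another query: which tissues express ACE2 above a threshold?
high = [s for s, v in zip(samples, ace2_row) if v > 4.0]
print(ace2_mean, high)
```

The point is that the array itself carries the structure of the question, so no per-file parsing or manual joining is needed before analysis can begin.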

Our “Burst Mode” automated elastic computing capability makes it possible for individual scientists to run their own algorithms at any scale without requiring the help of IT or a computer scientist. The software automatically fires up and shuts down hundreds of transient compute workers to execute their task. Any researcher can access the power of hundreds of computers from a laptop.
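As a loose analogy at laptop scale – this is not Paradigm4’s implementation, and `score_sample` is an invented stand-in for a real analysis – the pattern of transient workers that exist only for the duration of a task can be shown with Python’s standard executor pools:

```python
from concurrent.futures import ThreadPoolExecutor

def score_sample(sample_id):
    # Stand-in for a real per-sample analysis step.
    return sample_id * sample_id

# Workers exist only for the lifetime of the `with` block, loosely echoing
# elastic compute workers that are fired up and shut down per task.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(score_sample, range(10)))

print(results)
```

Burst Mode applies the same fan-out/tear-down idea to hundreds of cloud workers rather than a handful of local threads, without the researcher managing any of the infrastructure.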

Has Paradigm4 been deployed in the fight against the Covid-19 pandemic?

When Covid-19 hit last year, we partnered with a leading pharma company to identify tissues expressing the key SARS-CoV-2 entry-associated genes. We found they were expressed in multiple tissue types, which explains the multi-organ involvement observed in infected patients worldwide during the ongoing pandemic.

The first data sets were from the Human Cell Atlas (HCA) and the COVID-19 Cell Atlas. Questions such as “Where is the receptor for SARS-CoV-2?” or “What are the tissue distributions and cell types that contain COVID-19 receptors?” can be answered in 30 seconds or less, with responses drawn from 30 or more data sets (since expanded to ~100). More advanced questions can now be investigated, such as the causes of the complications and sequelae seen in some patients. Rather than organising all of those data, researchers can focus their attention on unlocking answers.

It has allowed us to support scientists in breaking through the complexities of working with massive single cell, multi-patient datasets. Accelerating drug and biomarker discovery is a key driver for our customers.

What does the future hold for Paradigm4?

The life science community, as well as more commercially oriented research and development groups in pharma and biotech, understand that they need leading-edge algorithms and cost-effective, scalable computational platforms to give them the ability to ask and answer questions in seconds instead of weeks, pushing discovery forward. Paradigm4 gives them the confidence to make earlier, adaptive change decisions that will shorten development, and provides earlier access to complex, real-time data that can detect efficacy and safety signals sooner. Importantly, working in partnership with these users, we will further improve and develop our capabilities in analysing datasets, benefitting researchers as they continue to strive for better results.
