Welcome to the Ozette Blog: Generating Quality Cytometry Data at Scale

Greg Finak, CTO, Ozette Technologies

Cytometry, encompassing conventional, spectral, and mass cytometry, is unquestionably the premier single-cell profiling assay for measuring and gaining insight into the immune system. It is conceptually simple, high throughput (capable of measuring millions of cells per sample at low cost), and, with the advent of mass and spectral instruments, high-dimensional. It is widely used across academia and industry to characterize the human immune system in research, pre-clinical, and clinical trials, and it drives modern drug development forward.

Yet, for a technology that has been in practice for over half a century, it continues to face technical challenges: panel development, analytics, experimental practice and standardization, among others. This blog series will discuss the challenges inherent in flow cytometry and how Ozette has overcome them, and we hope it will provide readers with information and insights that help them generate higher-quality data and more reliable biological insights, along with an understanding of the state-of-the-art tooling and workflows to do so.

For decades now, flow cytometry has promised immunological insights and cellular biomarkers via deep immunological profiling. A typical clinical trial can span many months or even years of study. Yet, outside of a few highly specialized labs, generating high-quality flow cytometry data reliably over time is surprisingly challenging. So much so that one of the key mandates of ISAC (the International Society for Advancement of Cytometry) is education and training, to help core labs and practitioners produce reliable, high-quality data. Many practitioners of cytometry struggle with all aspects of the assay: panel design, instrument calibration and standardization, generation of appropriate controls, unmixing or compensation, and downstream analysis.

The apparent simplicity of the technology masks layered complexity. Entire specialties focus on designing and building individual layers of the proverbial flow cytometry cake. Physicists and engineers design the instruments, chemists and biochemists design the reagents, immunologists design the panels, and computational biologists and bioinformaticians develop methods to process the data. The field is exceedingly interdisciplinary, which is what makes it exciting and a joy to work in. With all this layered complexity, it is unlikely that any one individual will be an expert in every area of the technology.

Yet, when it comes time to generate the data, there is often a single individual who stains the cells, operates the instrument, acquires the data, compensates or unmixes it, processes and gates it, and analyzes the results. If something in the multi-layered cytometry cake is off, if some part of the recipe is not quite right, what happens? If the person responsible for data generation is neither sufficiently expert nor equipped with the tools and information needed to detect, diagnose, and correct the defect, what is the outcome? In our experience, data quality suffers and experiments fail, the reasons “unknown or unexplained”, and flow cytometry consequently earns a reputation as a fickle technology.

This Ozette blog will provide a venue to discuss what happens when that individual is limited by their tooling or technical support and unable to detect, diagnose, and troubleshoot experimental and computational issues. Cytometry is, of course, not a layer cake: diagnosing problems requires complex and subtle tooling and benefits greatly from multidisciplinary expertise. In contrast to software solutions that push data quality monitoring onto that ill-equipped individual, Ozette proactively takes ownership of data quality, recognizing that robust biology begins and ends with high-quality data. To that end, we have built tools to diagnose and remediate data quality issues, and we have developed significant internal expertise along the way while generating high-quality data in our GCLP lab. Through this blog we will point readers to some of the lesser-known pitfalls, and we hope to convey some of the expertise of our multidisciplinary team so that readers are better prepared to address their own experimental problems when they arise.

As a company, Ozette strives to help our customers bridge the gap in tooling and expertise by generating the highest-quality spectral cytometry data in our lab and providing rapid, transparent, state-of-the-art analytics, coupled with a team of experts who help them derive meaningful and reproducible biological insights from their data in a timely manner.

To do this successfully we have had to take a careful look at the entire process of cytometry data generation and analytics, including often-overlooked technical aspects such as compensation and unmixing, which are typically not perceived as exciting research areas and thus have received little attention from the relatively small computational community working on flow data. Later posts will cover new developments we have made in this area.
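To make the term concrete for readers who have not worked with spectral data: unmixing recovers per-cell fluorophore abundances from the raw detector readout, given reference spectra measured from single-stain controls. The sketch below is a minimal textbook formulation using ordinary least squares on simulated data; it is illustrative only (the matrix sizes and the use of NumPy here are our assumptions for this post, not a description of Ozette’s methods).

```python
import numpy as np

# Illustrative only: ordinary least-squares spectral unmixing on
# simulated data. All sizes are hypothetical.
rng = np.random.default_rng(0)

n_detectors, n_fluors, n_cells = 64, 30, 1000

# Reference spectra (detectors x fluorophores), as would be estimated
# from single-stain controls, and true abundances (fluorophores x cells).
S = np.abs(rng.normal(size=(n_detectors, n_fluors)))
abundance = np.abs(rng.normal(size=(n_fluors, n_cells)))

# Observed detector readout: mixing model Y = S @ A plus detector noise.
Y = S @ abundance + rng.normal(scale=0.1, size=(n_detectors, n_cells))

# Unmix all cells at once by solving the over-determined system
# S @ A ~= Y in the least-squares sense.
A_hat, *_ = np.linalg.lstsq(S, Y, rcond=None)

print("max abs recovery error:", np.abs(A_hat - abundance).max())
```

This basic model ignores, among other things, detector noise characteristics and the physical non-negativity of abundances, which is where standard refinements such as weighted or non-negative least squares come in; details like these are part of why this step deserves more attention than it typically gets.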

We hope to share with readers the information and insights we have gathered about the process and practice of single-cell cytometry at Ozette as we have moved from R&D to a vertically integrated cytometry solution capable of processing data at scale, so that practitioners and decision makers can generate better data and produce more reliable biological inferences and insights from it.

This blog will present Ozette’s opinionated view of the practice of flow cytometry. We will cover technical and non-technical aspects of flow data generation and analysis, including control generation, unmixing and compensation, processing, gating, and downstream analysis and inference, coupled with case studies. We will cover tooling and methods, and their appropriate and (in our opinion) inappropriate uses. We will discuss organizational efficiency, collaboration and communication in multidisciplinary teams, and the importance of transparency, observability, interpretability, and “explorability” of complex data. And we will show how these problems can be solved using the Ozette platform.

To date, we have worked with customers spanning pharma, CROs, academic labs, and core labs. We have seen a lot of data, and we have identified the core areas that underlie the most common technical failings and challenges we observe. Along the way we have developed new tools and methods to remedy them. The Ozette platform is designed to help not only the singular flow user who must manage an entire workflow, but also the labs and organizations that produce and rely on flow data, to avoid the common (known and unknown) pitfalls that lead to poor-quality data, failed studies, lost time, and squandered resources, by detecting, diagnosing, and remedying them early in the data generation process, thereby ensuring reliable, trustworthy, and reproducible biological insights.

If you struggle with or worry about flow data quality, timeliness of results, biological reproducibility, or obtaining reliable insights from your data for decision making, then this blog will be of interest to you, whether you generate, process, or consume the data (or all of the above).
