This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
7/31/2025
Good day, ladies and gentlemen, and thank you for standing by. Welcome to the Nautilus Biotechnology Second Quarter 2025 earnings conference call. At this time, all participants are in listen-only mode. After the speakers' presentation, there will be a question and answer session. To ask a question at that time, you will need to press star 1-1 on your telephone keypad. At this time, I would like to turn the conference over to Ms. Jean-Yong Yi, Investor Relations. Ma'am, please begin.
Earlier today, Nautilus released financial results for the quarter ended June 30, 2025. If you haven't received this news release or if you'd like to be added to the company's distribution list, please send an email to investorrelations@nautilus.bio. Joining me today from Nautilus are Sujal Patel, co-founder and CEO, Parag Mallick, co-founder and chief scientist, and Anna Mowry, chief financial officer. Before we begin, I'd like to remind you that management will make statements during this call that are forward-looking within the meaning of the federal securities laws. These statements involve material risks and uncertainties that could cause actual results or events to materially differ from those anticipated. Additional information regarding these risks and uncertainties appears in the section entitled Forward-Looking Statements in the press release Nautilus issued today. Except as required by law, Nautilus disclaims any intention or obligation to update or revise any financial or product pipeline projections or other forward-looking statements, whether because of new information, future events, or otherwise. This conference call contains time-sensitive information and is accurate only as of the live broadcast on July 30, 2025. With that, I'll turn the call over to Sujal.
Thanks, Jian, and thank you all for joining us. Q2 was a milestone quarter for Nautilus. Not only did we continue our momentum across both targeted and broad-scale proteomic development efforts, but we also publicly shared the first preprint to feature novel data generated using the Nautilus platform. We believe that this first-of-its-kind proteoform data across a range of important biological systems can only be generated on our platform. No other analysis method comes close. The manuscript, now live on bioRxiv, represents nearly a decade of pioneering work by our team and collaborators. In it, we introduce and validate the application of our iterative mapping method, showing that it can measure proteoforms at a resolution and breadth never before possible. The results speak for themselves. Our approach demonstrated unprecedented dynamic range and industry-leading reproducibility. Even more exciting, early biological insights from this work suggest that iterative mapping may illuminate new mechanisms of tau biology, potentially opening the door to a new generation of neurodegenerative disease diagnostics and therapeutics. Parag will elaborate on these results presented in the manuscript shortly. We believe that this manuscript is both scientific validation and a significant external milestone for Nautilus, and that it sets a new bar in the field of proteomics. I want to recognize the Herculean efforts of our team, as well as our partners at Genentech, the Neural Stem Cell Institute, and Mount Sinai Health System. Zooming out, this quarter our main focus was on building engagement around our tau proteoform assay and laying the foundation for external collaborations. Feedback from researchers has been highly enthusiastic. Many now see Nautilus as a forerunner in decoding the complexity of post-translational modifications with the resolution necessary to drive meaningful biological and therapeutic breakthroughs. This enthusiasm is translating into real momentum.
We've continued to deepen our conversations with academic, pharma, and non-profit partners, and the ability to now reference our manuscript gives us a new level of credibility and visibility. The discussions we're having today are more strategic, focused not only on tau, but also broader use cases for targeted proteoform analysis across neurology, oncology, and immunology. To provide more detail on our R&D efforts, let me now turn the call over to Parag. Parag?
Thanks, Sujal. Good morning, everyone. Q2 marked a major inflection point in our scientific journey. As Sujal mentioned, this quarter, we shared a preprint of a manuscript illustrating how our iterative mapping methods enabled a unique capability on the Nautilus platform: resolution of proteoforms at the single-molecule level, at scale. In traditional proteomic techniques, a protein is often treated as a single entity, but the reality is that proteins exist in many modified forms, each with their own distinct structure and function. These different variants are called proteoforms. Just like a single gene may have thousands of variants defined by mutations, a single protein may have thousands of different proteoforms defined by a combination of alternative splicing and multiple post-translational modifications. The prevalence of different proteoforms may ultimately influence the role a protein plays in disease and how best to therapeutically target it. Before I dive in, I'd like to clarify one important point. You'll hear from others that they are measuring proteoforms. However, the reality is that only platforms that look at intact protein molecules and are able to interrogate multiple positions on those molecules are capable of examining proteoforms with the necessary resolution. Existing affinity-based methods, such as Olink, Somascan, or Alamar, are able to report the relative amount of a protein, but they typically do not measure modifications of those proteins, and certainly not the co-occurrence of those modifications on individual protein molecules. Likewise, peptide-based methods, such as those employed by shotgun mass spectrometry or even single-molecule peptide sequencing methods, entirely lose the contextual information required to know that multiple modifications are co-occurring on a single protein molecule. No peptide-based measurement method can measure proteoforms.
Consequently, we believe that the Nautilus platform is the only platform that has been designed to readily quantify the thousands of distinct proteoforms of key proteins at scale. With the release of our manuscript, we publicly demonstrated the remarkable real-world capabilities of our iterative mapping method. This is notable for two distinct reasons. First, the manuscript represents an end-to-end validation of our core platform, which is shared between our targeted proteoform assays and our forthcoming broad-scale assay. Second, the manuscript demonstrates that the tau assay built upon our platform has the ability to drive powerful biological insight into Alzheimer's disease and related disorders. Diving in, the first part of the manuscript shows the platform is able to go end-to-end from sample to answer by taking individual protein molecules from complex samples, attaching them to DNA origami nanoparticles, depositing those nanoparticles on nanofabricated arrays, iteratively probing them, and then applying our machine learning-based engine to quantify the proteoforms in the sample. When we began the company eight years ago, each of these challenges represented its own complex scientific frontier. Consequently, demonstrating them fully integrated is an important proof point regarding the scientific foundations of our approach. After introducing the method and how that method is applied to assay the Alzheimer's disease-associated protein tau, we performed extensive assay characterization that serves as external confirmation of the platform's scientific rigor and technical maturity. I'd like to call out two specific aspects of the platform characterization data that will have concrete impacts for our customers. First is the reproducibility data. It is extremely uncommon for first introductions of a new method to perform such a rigorous and extensive characterization of reproducibility.
However, we've heard from our future customers that reproducibility is top of mind for them. This is natural, as high reproducibility allows researchers to trust their results and know that they are more likely to be replicated by researchers in other labs. We measured the within-experiment reproducibility of our platform as having a median CV of 1.5%. Even across multiple instruments, reagent lots, operators, sample preparations and runs, our median CVs were approximately 5%. To put that in perspective, studies of the reproducibility of existing, mature, affinity-based and mass spectrometry-based proteomics platforms that look solely at total protein abundances, not proteoforms, have found median coefficients of variation of nearly 40% from run to run and up to 80% across labs and operators. The reproducibility of our platform, even at this earliest stage, is the direct consequence of our single-molecule methodology, which determines protein abundance not from a single measurement, but instead from the aggregate of independent measurements of many, many individual molecules. Our reproducibility is also a consequence of our incredible team's steadfast commitment to quality. As I mentioned, reproducibility this tight would be considered exceptional for a mature platform. To have demonstrated such world-leading reproducibility at the first introduction of a novel method is astonishing. We additionally demonstrated that the assay is extremely sensitive and able to accurately measure changes to a proteoform's abundance across a wide range of physiologically relevant concentrations. For reference, mass spectrometric methods, such as tandem mass tagging, lose quantitative accuracy when comparing samples in which a protein's abundance changes by more than a factor of 10. Our analysis revealed that our assay could reliably measure how much a proteoform changes, even for changes of over a factor of 1,000.
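[Editor's note: for readers unfamiliar with the statistic quoted above, the coefficient of variation (CV) of a set of replicate measurements is the standard deviation divided by the mean, and a platform-level "median CV" is the median of those per-analyte CVs. A minimal sketch of that calculation, using made-up replicate values rather than any data from the manuscript:]

```python
import statistics

def cv(values):
    """Coefficient of variation: sample stdev / mean, as a percentage."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate abundance measurements for three proteoforms
# (illustrative numbers only -- not data from the Nautilus manuscript).
replicates = {
    "proteoform_A": [100.2, 99.5, 101.1, 100.4],
    "proteoform_B": [52.0, 51.2, 52.9, 51.6],
    "proteoform_C": [10.1, 10.4, 9.9, 10.2],
}

# Per-analyte CVs, then the platform-style summary: the median across analytes.
per_analyte_cv = {name: cv(vals) for name, vals in replicates.items()}
median_cv = statistics.median(per_analyte_cv.values())
print(f"median CV = {median_cv:.2f}%")
```

A "median CV of 1.5%" in the sense used on the call would mean that half of all measured analytes show replicate scatter tighter than 1.5% of their mean.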
Furthermore, the assay is able to accurately quantify extremely low abundances of proteoforms. Forms of tau present in samples at levels of approximately 0.1% of total tau can be reliably quantified. This is critical, as we know that low-abundance forms of proteins like tau can still be tremendously impactful in disease progression. Beyond demonstrating the technical capabilities of our platform, the studies we presented are already providing unique biological insight into Alzheimer's disease and related disorders. Before discussing the paper specifically, I'd like to give a bit of context as to why the findings are potentially so significant. The link between tau and Alzheimer's disease has been established for nearly 40 years. In that time, a huge number of potential biomarker tests and therapeutics targeting tau were developed with the goal of diagnosing AD early and stopping or reversing its progression. Unfortunately, these assays and therapeutics have failed in clinical trials. Retrospective analysis suggests these failures may stem from targeting the wrong proteoform of tau. Unfortunately, prior to the introduction of the Nautilus platform, measuring these proteoforms was out of reach. The proteoform resolution offered by the Nautilus platform gives researchers actionable biological insights that aren't otherwise attainable. With the Nautilus platform, researchers will be able to observe not just how much of a protein is present, but which forms are increasing, decreasing, or appearing uniquely in specific states, knowledge that is critical for understanding mechanisms of disease and for identifying precise therapeutic targets. In our study, we examined a diversity of model systems that are used by researchers around the world to develop the next generation of therapeutics and biomarkers. For the first time, we were able to measure more than 130 different forms of tau that were present, some of which had as many as six co-occurring phosphorylation events.
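[Editor's note: a toy illustration of why single-molecule counting naturally yields fractional abundances like the "0.1% of total tau" figure above: each individual molecule is classified, and a proteoform's abundance is simply its share of the counted molecules. The proteoform labels below are hypothetical placeholders, not actual assay outputs:]

```python
from collections import Counter

# Hypothetical per-molecule classifications from a single-molecule assay
# (illustrative only). Each entry is the proteoform label assigned to one
# individual tau molecule on the array.
molecule_calls = (
    ["tau_unmodified"] * 98_500
    + ["tau_pS202"] * 1_400
    + ["tau_pS202_pT205"] * 100
)

counts = Counter(molecule_calls)
total = sum(counts.values())

# Fractional abundance of each proteoform = its molecule count / total count.
fractions = {form: n / total for form, n in counts.items()}
print(fractions["tau_pS202_pT205"])  # a proteoform at roughly 0.1% of total tau
```

Because the estimate aggregates many independent molecule-level calls, rare forms remain quantifiable as long as enough total molecules are counted, which is the intuition behind the sensitivity claim in the passage above.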
Existing platforms would have mushed all those forms together, providing a low-resolution readout of total tau that obscures the critical proteoform information. In addition to looking at model systems, we applied our method to a small human cohort. Within that cohort was a patient with aggressive AD. This patient was clearly delineated from healthy controls and even other patients with less advanced AD by a form of tau that was quadruply phosphorylated. Moreover, the pattern of forms of doubly and triply phosphorylated tau strongly suggests an order and a timing to how the proteoform came to be formed, an observation that previously had not been possible. This combination of technical rigor and biological insight is why the reaction from researchers with whom we've shared the manuscript has been so strong. They recognize that this isn't just a new measurement method. It's a fundamentally different way of understanding biology. They see that iterative mapping represents an entirely new class of measurement modalities, distinct from either mass spectrometry-based approaches or affinity reagent profiling methods. The scientists are already asking how they can start integrating the method into their workflows. As we continue to expand our reagent panels and data analysis capabilities, we're confident this core capability will remain a major driver of scientific adoption. For an easier-to-understand synopsis of the manuscript, I encourage you to check out our blog, where we've tried to distill the manuscript into a form that is more broadly accessible. We believe that these findings validate the full Nautilus platform, not just for tau, but as a generalizable engine for proteomic insight. The core platform and iterative mapping method used in the tau studies is also used for broad-scale analysis, and we anticipate the exceptional performance we've observed will translate across applications.
Looking ahead, our roadmap for the remainder of 2025 includes continuing to refine and scale our broad-scale assay configuration, advancing multiple external collaborations for tau and non-tau targets, and publishing additional datasets and technical white papers to support adoption. With the release of this manuscript and the strong momentum across our platform development and collaborations, we're confident that Nautilus is on track to deliver on its mission to transform how biology is measured and understood. Back to you, Sujal.
Thanks, Parag. This quarter marks a shift from what could be to what is. We've now shown publicly, rigorously, and reproducibly that the Nautilus platform can do what others cannot, and this is only the beginning. Earlier this week, we presented our data from our tau manuscript at the Alzheimer's Association International Conference, AAIC, in Toronto. The conference served as an excellent venue to highlight the capabilities of our platform and get feedback on the potential impacts that high-resolution proteoform analysis of proteins like tau might have on neurodegenerative disease research. Throughout the event, we spoke with a broad range of researchers and potential customers spanning the academic, non-profit, and pharma sectors. Conversations consistently centered around the gaps that exist in understanding of disease progression in Alzheimer's. We heard from several researchers about the conundrum of single-PTM measurements of p-tau-217. Despite being FDA approved for Alzheimer's diagnosis, it's been observed that p-tau-217 is abundant not only in patients with AD, but also in young children. Furthermore, p-tau-217 tests have high false-positive and false-negative rates and do not predict disease trajectory. Researchers are excited about the implications of Nautilus' work for generating more precise ways to stage and prognose disease. They're also excited about how understanding the proteoform landscape might inform therapeutic development by helping identify which model systems are most reflective of human disease and which tau species to target. One other surprising area of interest was in using a patient's proteoform landscape to distinguish among various tauopathies such as frontotemporal dementia and progressive supranuclear palsy. The scientific community showed clear enthusiasm for the specificity and resolution our platform offers in analyzing proteoforms that have historically been impossible to quantify at this level of detail.
These interactions reinforced our belief that the ability to measure tau at proteoform-level resolution could be transformative for neurodegenerative disease research and, more broadly, for understanding complex protein biology. Building on this excitement, we're currently in active dialogue with several organizations to formalize early collaborations across pharma, academic, and nonprofit research settings. As we shared last quarter, we expected to sign an initial collaboration in the first half of the year, and I'm pleased to report that we've now signed two collaborations with major US research institutes. These collaborations provide the opportunity to demonstrate our platform's capabilities and performance with customer samples, as well as enabling new biological insights in Alzheimer's disease. Though they are not intended to generate revenue initially, these collaborations are a great opportunity to lay a foundation for driving revenue in the future. While there is strong interest in additional collaborations, we're carefully balancing our resources between our targeted proteoform and broad-scale development programs. This balance will guide the total number of collaborations we can engage in at any given time without delaying key development milestones, particularly for our broad-scale platform. Turning to our broad-scale platform efforts, we continue to make steady progress on the new assay configuration which we introduced earlier this year. This work is aimed at better aligning our assay design with the characteristics of our expanding probe library, improving probe yield and performance across the platform. Among the most important advances are improvements both in the assay configuration itself and in our methods to determine which probes are suitable for the new configuration.
These improvements are intended to reduce technical risk and enable higher performance, particularly as we scale towards comprehensively decoding the proteome. In Q2, our broad-scale progress was in line with expectations. We began early-stage experiments with the evolved configuration, and initial data continues to be promising. This marks a critical step towards achieving robust quantification of a significant number of proteins from complex biological samples such as cell lysate. We've also begun working with our key suppliers to develop updated formats for our consumables, ensuring that they can meet the demands of our platform and future scale. While we're deferring specific updates on probe performance for now, our focus continues to be on maximizing the yield and functionality of both our existing and in-development probe candidates so that we can deliver on the high quality and proteome coverage our platform is designed to achieve. We expect this optimization cycle to continue over the next two quarters, and we'll keep you updated as milestones are reached. Following the technical progress we've just outlined, it's important to put that work in the context of how we're engaging the market and building customer demand. It's worth noting that the two primary applications we're targeting, proteoform analysis and broad-scale proteomics, are at very different stages of market maturity. On the proteoform side, we're introducing a fundamentally new measurement capability that hasn't existed before. As a result, we'll need to invest in market development and work closely with academic researchers and pharma partners to validate the impact of this data. Additionally, we need to develop exemplars of how this new measurement modality can be integrated into existing research and drug development workflows and into modern AI-based development workflows. In contrast, broad-scale proteomics is already a well-established need.
Our target customers fully understand the value of this type of data, have budget allocated for it, and are actively seeking more effective platforms to generate it. However, even within the broad-scale landscape, the data generated by existing affinity-based methods and mass spectrometry-based methods is fundamentally different than the data generated by the Nautilus platform. We believe our platform is uniquely positioned to become a cornerstone technology because our iterative mapping method will provide a resolution and scale exceeding that of existing methods. We specifically anticipate our platform will be unique for its high reproducibility, extreme sensitivity, and wide dynamic range. Furthermore, because we get multiple measurements of each protein molecule, we anticipate being able to provide higher-resolution views of each protein rather than simply quantifying total protein or peptide abundance. As we continue to examine the unique and important role that Nautilus will play in the proteomics landscape, we recognize that building deep, trusted relationships with biopharma organizations will be critical to Nautilus' long-term success. That's why Parag and I are personally leading those discussions, ensuring that we're not only showcasing the technology, but also deeply understanding how to align with the real-world needs of potential customers to unlock a new era of advances in biological insight and therapeutics. I'd now like to turn the call over to Anna to walk through our financials. Anna? Thanks, Sujal.
Total operating expenses for the second quarter of 2025 were $17.1 million, an 18% decrease from $20.8 million in the same quarter of 2024. This result is attributable to a reduction in personnel costs from the headcount reduction we implemented in Q1, as well as normal variability in the timing of R&D activities and ongoing cost optimization efforts. We also saw a meaningful decrease in stock compensation expense year over year. Research and development expenses were $10.4 million, down from $12.4 million a year ago, while general and administrative expenses were $6.7 million, down from $8.4 million in Q2 2024. Net loss for the quarter was $15.0 million, compared to $18.0 million in the prior year period. We ended the quarter with approximately $179.5 million in cash, cash equivalents and investments, and continued to project a cash runway that extends through 2027. While we're planning for a pickup in research and development spending in the second half of the year, we anticipate that total operating expenses for the full year of 2025 will remain below 2024 levels, while still supporting critical platform development and early-stage partnership activities. Back to you, Sujal.
Thanks, Anna. We're incredibly proud of the Nautilus team for the science we're advancing, the platform we're building, and now the data we're publishing. With the public release of our first manuscript, we've reached an exciting new phase. The world can now see and evaluate our technology on its own merits. Our foundation is solid, our belief in the mission has never been stronger, and we're excited about our path forward. Thanks for joining us today. And with that, we'll open the call for questions. Operator?
Ladies and gentlemen, if you have a question or comment at this time, please press star one one on your telephone keypad. If your question has been answered or you wish to remove yourself from the queue, simply press star one one again. Again, if you have a question or comment, please press star one one on your telephone keypad. Please stand by while we compile the Q&A roster. Our first question or comment comes from the line of Dan Brennan from TD Cowen. Mr. Brennan, your line is open.
Great, thank you. And obviously congrats on the presentations at the event and the manuscript. Maybe just kind of starting off on the reaction so far from the field. You discussed how the reaction has been so strong, and you walked through just how unique this targeted approach is. But at the same time, you talked about the need to kind of build awareness and educate because it is so different. I'm just trying to kind of reconcile both of those. Maybe could you speak a little bit to the kind of early collaborators who you signed up and kind of what the pipeline of demand looks like, and kind of how you think this might manifest in actual projects and/or revenues as we look out over the next 18 months.
Hi Dan, this is Parag. I'll take the first part. I was at AAIC talking to researchers in the field, and what was so striking was that there's a tremendous interest on the part of folks in the AD community for better ways to understand the disease. We have new markers like p-tau-217 and MTBR-tau243 that are showing greater specificity, but they also aren't predicting the course of the disease. They aren't able to be used effectively as surrogate endpoints in clinical trials. They aren't able to predict which therapeutics might work in different populations, and they aren't able to stratify amongst the variety of tauopathies. And we had one researcher that we sat down with who essentially said, I mean verbatim, that proteoforms are critical for understanding the disease, understanding how tau itself forms the fibrils that are the underlying root of many of the symptoms of the disease. And so those statements about the criticality of proteoforms in understanding and making progress in Alzheimer's disease were in every conversation that we had. And so it was pretty exciting, even though it is a new measurement. There has been a recognition that the current class of biomarkers for tau and mechanisms for understanding tau are focused not on total tau. They're focused either on PTMs of tau or truncations of tau, and so I think there's a recognition that it is not just the whole protein but all the different flavors of it, all the different proteoforms of it, that are critical to understand.
So next, I'll take the second half of the question. I think your question really was around pipeline and how we see the opportunity for the tau proteoforms to develop. I think the first thing I wanna highlight was in the prepared remarks, but I just wanna put an exclamation point on it. The go-to-market activities on the proteoform side and the proteome side will be a bit different. On the proteome side, just to address that first, our customers know what to do with complete proteomic data. They are looking for better tools to analyze the proteome. They're looking for ways to do proteomic analysis in a more effective and easier-to-use way, and they have existing budget pools. And so for that technology, when we release our proteome product at the end of 2026, we expect to have a more significant, faster ramp-up in terms of revenue and instrument adoption. In contrast, the proteoform opportunity, which I view as just as exciting as proteomes, will take longer to develop, and when I say develop, I mean market development types of activities to show the world what's possible with proteoforms, and then as well for us to develop assays for each individual biomarker one at a time, starting in neurodegeneration and starting with tau. And so what we're doing in terms of our activities with tau and our early partners, I think, is an exemplar of what we'll do as we continue to roll out more biomarkers. Today, the majority of our focus has been on early collaborations with academic and nonprofit partners, focused on trying to show the power of this data and have the types of biological insights that enable us to go to a conference like AAIC and have meaningful conversations with biologists, which is something that's a bit different than what you typically do at a conference like HUPO on the proteome side.
From that point, I also expect that we'll be signing early types of pilot agreements with pharma organizations who are looking to start to incorporate this proteoform data into their workflows as we start to show what the potential is, and then those pilot agreements in the longer term will lead to revenue-generating agreements. So right now, our focus is not on revenue; our focus is on market development and showing the world what the power of this technology is. We're interested in continuing to advance our tau assay to the point where we can more broadly allow customers access to it, and we'll talk about that more in the coming quarters. And then we're interested in developing our proteoform pipeline and the next biomarkers, but as well, as I said in prepared remarks, we are very careful to balance our resources between broad scale and proteoforms, considering that we continue to believe that while both are incredibly exciting, broad scale represents a faster revenue top-line ramp type of opportunity.
Great, thank you for that. Maybe I'll just ask one more, even though the proteome is really the focus, I'm sure others will get to that. Maybe just on this manuscript, when do you think it'll be published, and what type of journal would you hope to get this published in? And then on revenue, knowing that the proteome is really the bigger and easier opportunity, if you can achieve kind of what you hope to achieve on this proteoform product, how would you size the opportunity for this product? Thank you. I'll pass it to Parag to give you the
first part, and then I'll tackle the second. Sure, Dan. We do look at this as a seminal manuscript, as something that's very important for us and our future. We've sent it out for peer review. We think that's an important aspect, and the manuscript will further improve through that feedback. The places one would hope a seminal manuscript would land are, of course, the big three. And so the timeline for that, of course, is incredibly variable, and it depends in part on the journal and the reviewer process. So it's probably best not to speculate on when publication in a journal will come out. But our hope is certainly for it to be in a high-impact journal.
So, Dan, on the second part of your question, I think we're in the pretty early stages of trying to size what an opportunity like this looks like. You know, if I look out at the opportunity over the course of the next five, six, seven years, it's certainly an opportunity that has the potential, just in proteoforms alone, to be a multi-hundred-million-dollar business for us. But the question is still open on the proteoform side. Will we just let the world have access to all this data without any sort of upside for Nautilus depending on what we find? Will we form partnerships that are focused on therapeutic development or Dx development and share in the upside? The proteoform work, because the data is absolutely not generatable by any other method, gives us more optionality on the types of business models we'll pursue. And so I don't have an exact answer of how that will roll out, but I do know from just the early conversations around neurodegeneration, and then what we've heard in wider oncology and autoimmune and other areas, that proteoforms are incredibly exciting. And while we think it's a really exciting opportunity, it's gonna take some market development. So when I think about revenue, I don't think about any revenue this year from these types of activities. And I think about it starting, proteoforms starting small next year. And I think about proteomes starting at the end of next year and ramping very quickly in 2027. So that's how I think about them differently.
Great, okay, thank you guys.
Thank you. Our next question or comment comes from the line of Sabu Nambi from Guggenheim. Mr. Nambi, your line is open.
Hey guys, good morning. Thank you for taking my question, and congratulations on the publication. From the Tau manuscript, where did you get the most inbound interest in terms of customer profile, and can you share any other information on the funnel?
Yeah, so Parag, do you wanna take the first half? I'll take the funnel question.
Sure. I think we've actually seen a tremendous amount of interest from academic groups reaching out, from pharma reaching out, and from nonprofit research institutes reaching out. So it's actually been across the board.
Yeah, and Sabu, in terms of the funnel, I think it's really early to talk about what's in the funnel and how it's developing, especially considering that our biggest development activity just occurred in the last 72 or 96 hours at AAIC. AAIC, I think, has 8,000 in-person attendees. Compared to a conference like HUPO, which is our big proteomics conference, that's more than four times the number of researchers. And so this was a really big opportunity to get in front of customers. I would say, in terms of net new conversations, we probably had three, four, five times more net new conversations this week than we did in the entire year before that. So I think it's really early. I will tell you the types of early interest match sort of what I said in the earnings call script: it's academic institutions, academic PIs, nonprofit research organizations, nonprofit foundations focused on neurodegeneration. We're finding some early groups inside of pharma who are interested in doing pilots to see how this proteoform data adds a new dimension to their therapeutic development programs. So those are the types of things that we're seeing, but it's really still pretty early. I think over the course of the next two quarters, we'll see some more development on that front and we'll be able to say more.
And, Sujal, just a follow-up to that. Does having these two collaborations now actually help you show some form of proof of concept, or do you think these are independent?
So I think that the two collaborations we've recently signed here are focused on a few different things, right? First and foremost, both of them are focused on reproducing the data that we have on our samples that we talked about in our manuscript. They're focused on increasing the number of biological samples that have gone through our platform, and on starting to take some of those insights and showing the world what the power of proteoforms is. And then in addition to that, another key goal for us is that, because proteoforms and proteomes share the same platform, each of these collaborations is serving to harden our core platform, to harden our consumables, and to work through some of the kinks that you'd have to work through during an early access program on proteomes, doing those earlier with proteoforms today. So for all of those reasons, I think this work that we're doing with these collaborations, and others to come, is really quite strategic for us.
Got it. And one last one. I know you guys have decided not to give any additional specific updates on probe performance, but is there any reason you decided not to do that moving forward?
Yeah, so just to clarify, I didn't say we wouldn't do it going forward. We will certainly do that in the coming two quarters. We didn't do it today because, if you recall our comments for the year, at the beginning of the year we introduced this assay configuration change, and the assay configuration change was focused on rebuilding a piece of our assay so that a greater number of the probes that we've already developed, and that are in development, would be able to run properly on our platform and in the assay. At the beginning of the year, too many of our thousands of probe candidates did not function properly on the platform and did not have the specifications needed to hit their performance targets in our assay. So instead of going and building thousands and thousands more candidates, we took a pause to change the assay configuration and get to the point where a greater percentage of those would be working on the platform. The assay configuration work is a multi-quarter exercise. It's proceeding on target and at the pace of our expectations. Just in the last few weeks, we hit a point where all of the pieces of our assay configuration change came together so that we can start to test the entire probe library in the new configuration. And so the next couple of quarters will be critical for us in making that transition fully into the new assay configuration. Once we have a yield percentage, or some sense of how many of those probes are functioning well in the new configuration, we certainly will give you some guidance.
Thank you for clarifying. Thank you, guys.
Thank you. Once again, ladies and gentlemen, if you have a question or comment at this time, please press star one-one on your telephone keypad. I'm showing no additional questions in the queue at this time. Ladies and gentlemen, thank you for participating in today's conference. This concludes the program. You may now disconnect. Everyone have a wonderful day.