This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
7/30/2024
Good day, and thank you for standing by, and welcome to the Nautilus Q2 2024 earnings conference call. At this time, all participants are in listen-only mode. After the speakers' presentation, there will be a question and answer session. To ask a question during the session, you'll need to press star 1-1 on your telephone. You'll then hear an automated message advising your hand is raised. To withdraw your question, please press star 1-1 again. Please be advised that today's conference is being recorded. I would now like to hand the conference over to your speaker today, Yeonhee of Investor Relations. Please go ahead.
Thank you. Earlier today, Nautilus released financial results for the quarter ended June 30, 2024. If you haven't received this news release, or if you'd like to be added to the company's distribution list, please send an email to investorrelations at nautilus.bio. Joining me today from Nautilus are Sujal Patel, co-founder and CEO, Parag Mallick, co-founder and chief scientist, and Anna Mowry, chief financial officer. Before we begin, I'd like to remind you that management will make statements during this call that are forward-looking within the meaning of the federal securities laws. These statements involve material risks and uncertainties that could cause actual results or events to materially differ from those anticipated. Additional information regarding these risks and uncertainties appears in the section entitled Forward-Looking Statements in the press release Nautilus issued today. Except as required by law, Nautilus disclaims any intention or obligation to update or revise any financial or product pipeline projections or other forward-looking statements, whether because of new information, future events, or otherwise. This conference call contains time-sensitive information and is accurate only as of the live broadcast, July 30, 2024. With that, I'll turn the call over to Sujal.
Thanks, Yeonhee, and welcome to everyone joining our Q2 2024 earnings call. The update that Parag, Anna, and I will be sharing with you today is due to the great work of our teams in the Bay Area, Seattle, and San Diego. My thanks go out to them for their continued progress against our key scientific and business objectives. I look forward to the day that their work is in the hands of researchers who leverage our platform to explore important biological questions once thought unanswerable. The team remains focused on both our development milestones and commercialization goals. Through these and other efforts, we remain motivated by our purpose to revolutionize proteomics in the name of improving the lives and health of millions of people around the world. Our entire team is motivated by this goal, fully aligned, and committed to doing what it takes to make this a reality. As you've heard me say on previous calls, to deliver a range of long-discussed and long-desired improvements in human health, we believe biomedical research needs a dramatic acceleration in target identification and therapeutic development. We believe we are pioneering a fundamentally new approach that holds the potential to overcome the limitations of traditional and peptide-based analysis methods and to unlock the value of the proteome, both in targeted proteoform analysis and broad-scale discovery, something we continue to view as one of the most significant untapped opportunities in biology today. As you'll hear from Parag in a few moments, KOLs are enthusiastic about the data that we shared at this year's US HUPO conference. In fact, several have begun to discuss with us the specific initiatives against which they plan to apply our platform. In Q2, we saw continued progress against core development goals for each of the components of our platform, and I look forward to additional progress toward commercial launch in 2025. For a more detailed R&D update, let me now turn the call
over to Parag. Parag? Thanks, Sujal. You may remember that during our last call, I reported on the promising data we released at US HUPO late in Q1. Among other things, we shared how, by exploiting the core capability of our platform to iteratively probe individual protein molecules, we were able to measure 32 distinct tau proteoforms from control samples. We also demonstrated the ability to perform measurements of enriched cell lysates. These results are the foundation of future assays, which will be accessible to the broader biological community, enabling more detailed investigation into the molecular mechanisms of diseases like Alzheimer's and other tauopathies. In addition, they suggest new frontiers in diagnostics. As proteoforms are not yet widely discussed outside of the proteomics research community, let me take just a moment to define what they are and why they're important. The term proteoform was introduced by Lloyd Smith and Neil Kelleher in 2013 to, quote, be used to designate all of the different molecular forms in which the protein product of a single gene can be found, including changes due to genetic variations, alternatively spliced RNA transcripts, and post-translational modifications, end quote. We know from examples like signaling molecules, cyclin-dependent kinases, oncogenes, histones, et cetera, that what makes proteoforms an important driver of biological outcomes is not just that a protein has a mutation, a splice variant, or a post-translational modification. What matters from the perspective of biological relevance is the combination and pattern of those modifications. The result is an exponentiation in the complexity of proteins' actions that has tremendous potential to alter the behavior of a biological system. Researchers seeking insight into the role that proteoforms may play in, for example, the progression of Alzheimer's or other neurologic diseases have been limited by a lack of robust and accessible technologies to measure proteoforms at scale. Approaches such as Western blots and digital ELISA assays can only measure one post-translational modification at a time, typically as a bulk measurement averaging across collections of molecules. When there are potentially millions of patterns of modifications across billions of protein molecules in a sample, being able to measure one modification yields very limited biological insight. Other technologies, such as bottom-up shotgun mass spectrometry, or frankly any peptide-centric technology, are simply unable to measure proteoforms, as they cannot know that multiple alterations were present on a given protein molecule. These methods also cannot measure modifications at low concentrations. In general, measuring protein presence at low concentration is hard. Measuring particular variants of proteins that are at even lower concentrations is exponentially harder. But it may be that these low abundance proteins and proteoforms hold the keys to unlocking new, more effective drugs across a range of indications. To date, the majority of proteoform studies have been performed using top-down mass spectrometry. This technology is the basis of ongoing efforts to build a proteoform atlas. However, though powerful, this technology is extremely complex and unlikely to become broadly accessible to the wider biological community. The limitations in existing technologies have prevented meaningful analysis of what is believed to be an extraordinarily complex interplay of diverse proteoforms.
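To make the combinatorics behind that "exponentiation in complexity" concrete, here is a minimal back-of-the-envelope sketch (an illustration only, not Nautilus' math): if a protein has n modifiable sites and each site can independently take one of k states, the number of distinct proteoforms is k to the power n, so complexity grows exponentially with the number of sites, while a bulk one-site assay collapses all of those states into just two bins.

```python
# Back-of-the-envelope illustration of proteoform combinatorics (not
# Nautilus-specific): if a protein has n modifiable sites and each site
# can independently carry one of k states (e.g., unmodified vs.
# phosphorylated), the number of distinct proteoforms is k**n.

def proteoform_count(n_sites: int, states_per_site: int = 2) -> int:
    """Upper bound on distinct proteoforms from combinatorial PTM states."""
    return states_per_site ** n_sites

# Tau has dozens of reported phosphorylation sites; even 20 binary sites
# already yield over a million combinations.
for n in (5, 10, 20):
    print(f"{n} binary sites -> {proteoform_count(n):,} possible proteoforms")
```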
That measurement gap has inhibited meaningful understanding of disease mechanisms and drug actions. In addition, examples like troponin and prostate-specific antigen, PSA, have shown how proteoforms can serve as powerful biomarkers. Creating a technology to see these proteoform patterns and measure their relationship to one another has the potential to hugely advance biomarker identification, drug discovery and development, and precision medicine. We believe that the Nautilus platform holds precisely that potential. The single molecule capabilities of the Nautilus platform, combined with the system's dynamic range, sensitivity, and ease of use, enable researchers to reveal and leverage extraordinarily valuable proteoform data that has never been available. In concert with our team's continued focus on the platform's broad-scale discovery capabilities, we're concurrently creating proteoform assays that quantify, at scale, the functional proteoforms present in a sample (tissue and cell lysates initially, with blood and CSF to follow) in a way that has not been possible with the bulk analysis of the past. Since we announced our preliminary proteoform data at US HUPO, we have heightened our focus on proteoform development activities, primarily in response to, as you'll hear from Sujal in just a moment, an enthusiastic reaction to the data from the research community. Specifically, based on the experimental work done in Q2, we have been able to reproducibly quantify mixtures of proteoforms, improve our assay performance, and successfully extract, enrich, and detect proteoforms from humanized mouse brain. We have also demonstrated that those patterns of proteoform abundances can be shifted with biochemical perturbations, such as by kinases and phosphatases. This latest data demonstrates that the platform can be applied to important biological questions in relevant biological samples. We are very excited about our progress on this front and look forward to updating the community further at the HUPO World Congress in late October. As I wrap up, I want to emphasize that any advances made to our core platform accrue value to both our targeted proteoform detection capabilities and our broad-scale discovery capabilities. Both modes of the platform rely upon a single molecule library preparation, nanopatterned chips supporting super-Poisson deposition of that library, iterative probing of individual molecules with fluorescently labeled affinity reagents, and machine learning software to infer molecule identities and quantities. In Q2, in addition to meaningful advances in our proteoform assay capabilities, we continued to make progress against our core and broad-scale development goals. We remained focused on increasing scale, stability, and reproducibility across our consumables, assay, and platform, and continue to see meaningful gains along those and related areas. In particular, this quarter saw the successful execution of the largest-scale experiments we've performed to date. This progress is in lockstep with advancing the reliability, quality, and customer readiness of our instrument and software. With that, I'll turn the call back to Sujal.
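For background on one term above: "super-Poisson deposition" refers, as we understand it, to depositing the single-molecule library so that the fraction of chip sites occupied by exactly one molecule exceeds what purely random loading allows. A minimal sketch of that random-loading ceiling, assuming simple Poisson statistics (general single-molecule array math, not Nautilus' proprietary chemistry):

```python
import math

# With purely random "Poisson" loading of molecules onto landing sites at
# mean occupancy lam, the fraction of sites holding exactly one molecule
# is lam * exp(-lam), which peaks at ~36.8% when lam = 1. Super-Poisson
# deposition means engineering the loading to beat this single-occupancy
# ceiling, so more of the chip carries usable single molecules.

def poisson_single_occupancy(lam: float) -> float:
    return lam * math.exp(-lam)

best = max((poisson_single_occupancy(l / 100), l / 100) for l in range(1, 300))
print(f"Poisson ceiling: {best[0]:.3f} single-occupancy at lambda={best[1]:.2f}")
```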
Thanks for the update, Parag. I could not agree more with Parag's enthusiasm for our progress in detecting proteoforms and the substantial impact we could have, initially with tau, on the efficiency and cost effectiveness of biomarker discovery and drug development in Alzheimer's and other neurodegenerative diseases. This progress represents a perfect example of our platform's unique ability to enable, and our continued focus on enabling, both targeted proteoform analysis and broad-scale discovery proteomics. That understanding of the platform's dual value is shared by others. Extensive voice-of-the-customer work done since US HUPO, during which we previewed our latest data, shows enthusiasm for targeted proteoform analysis from both academic researchers and pharma, for use in drug targeting and drug discovery efforts. In fact, one high-profile KOL said as part of our VOC interviews that he believes building a reference database containing millions of proteoforms will transform biological research and healthcare. We share his and others' enthusiasm about the potential here and are energized to generate and share additional high-value data. As Parag mentioned, our next significant opportunity to educate the community about the platform and our progression towards commercial availability will occur when Nautilus participates as a top sponsor of this year's HUPO World Congress, October 20th through 24th in Dresden, Germany. As we've previously discussed, my management team and I, in fact the entire Nautilus team, continue to proactively manage our resources to maximize our cash runway while balancing that with investments to drive our scientific progress forward. As of the end of last quarter, we still hold on our books over half of the cash that we've raised in our seven and a half years as a business, and at our anticipated 2024 run rate, we expect to be resourced through commercial launch. For more on that and other financials, let me hand the call over to Anna. Anna?
Thanks, Sujal. Total operating expenses for the second quarter of 2024 were $20.8 million, up $1.8 million compared to the second quarter of 2023 and $0.8 million below last quarter. This 9% increase in operating expenses year over year was driven primarily by continued investment in personnel and their activities toward the development of our platform, as well as investment in personnel and services engaged in maturing our business operations. Research and development expenses in the second quarter of 2024 were $12.4 million, compared to $11.9 million in the prior year period. General and administrative expenses were $8.4 million in the second quarter of 2024, compared to $7.1 million in the prior year period. Overall net loss for the second quarter of 2024 was $18.0 million, compared to $15.8 million in the prior year period. Turning to our balance sheet, we ended the quarter with approximately $233 million in cash, cash equivalents, and investments, compared to $248 million at the end of last quarter. As our Q2 results show, we continue to tightly manage our spend. Given our operating expenses in the first half of 2024, combined with our spend expectations in the second half, we anticipate our total operating expense growth for the full year to be between 15 and 20%, well below our previous guidance of 25%. Importantly, we remain committed to disciplined cash management and running an efficient organization as we execute our strategy to launch our revolutionary proteome analysis platform. With that, I'll turn it back to Sujal.
Thanks, Anna. We're excited about what lies ahead for Nautilus and the difference we believe our platform can make. I'm grateful to our team, our investors, our strategic partners, and our research collaborators for joining us on this journey to revolutionize proteomics and empower the scientific community in ways never thought possible. We made good progress in Q2 and look forward to building on those successes as we move through the remainder of 2024 on our way to our expected commercial launch in 2025 and beyond. With that, I'm happy to open the call up
for questions. Operator? And thank you. As a reminder, to ask a question, please press star 1-1 on your telephone and wait for your name to be announced. To withdraw your question, please press star 1-1 again. Please stand by while we compile the Q&A roster. And we ask that you limit yourself to one question and one follow-up. Again, that's one question, one follow-up. And one moment for our first question. And our first question comes from Sabu Nambi from Guggenheim Securities. Your line is now open.
Good morning. This is Ricky Levittas on for Sabu Nambi at Guggenheim. Thanks for taking our question. Are you able to provide any further specificity on the launch timeline other than calendar year 2025? And if not, when would you be able to provide that insight? Is there a specific milestone that you might be looking to achieve? And then I have a follow-up.
Thanks, Ricky. This is Sujal. I'll take this one first. So I think that when you look at the core things that are necessary for the launch that we've been describing for 2025, it's a continued set of development activities related to bringing all of our platform components together and building the 300 or so reagents that we need, which are the multi-affinity probes and the labels that provide the information from each of the molecules on our chip so we can detect exactly what protein they are, which gene-encoded protein they are. We continue to believe that 2025 is still an appropriate time for the launch. We've still got a number of development activities that we're working through. We still have a significant amount of effort that continues on reagent development, qualification, and movement to our platform, and we are still in the process of putting all those things together. And so that's what I would say in terms of guidance for launch. You asked the question around what the milestone would be where we'll provide more specificity. One of the things that I've mentioned for a number of quarters now is that there will be a milestone coming up where, at one of the HUPO conferences or perhaps at another venue, we will bring data that shows the ability to measure, call it, anywhere from 1,000 proteins or more from cell lysate, meaning from a complex sample. And by the time that we're able to do that, the vast majority of our technology development has been completed. On the reagents, we have more than half of what we need to get to that goal. And our system is not a system where five reagents detect five things and 10 detect 10; it's an exponential curve because of the data science behind how we detect what molecules are on the chip. That's a point where we'll have much more specificity in terms of launch timing and in terms of the specifications of our product and so forth. Now, we do anticipate that that milestone will come before our early access period, and then, following an early access period of, call it, six months or so, you'll have a product launched. So if you look at all of those things, you can back into the fact that, when you look at what a launch timing for 2025 is, it's probably not the first half, but it still looks good for us in terms of where we're at and where we need to get to.
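A hedged sketch of the "exponential curve" described above, framed information-theoretically (our illustration, not Nautilus' actual decoder): if each multi-affinity probe contributes a roughly independent bind / no-bind observation per molecule, the number of distinguishable identities grows exponentially with probe count, so the probes required grow only logarithmically with proteome size. The `bits_per_probe` parameter below is hypothetical.

```python
import math

# Illustrative decoding-capacity math: k probes yielding roughly
# independent binary observations can in principle distinguish up to
# 2**k signatures, so the probe count needed scales with log2 of the
# number of proteins rather than one or two probes per protein.

def probes_needed(n_proteins: int, bits_per_probe: float = 1.0) -> int:
    """Minimum probes if each contributes `bits_per_probe` bits of identity."""
    return math.ceil(math.log2(n_proteins) / bits_per_probe)

# ~20,000 gene-encoded human proteins need only ~15 ideal binary probes;
# real probes are noisy and non-ideal, hence a few hundred in practice.
print(probes_needed(20_000))        # 15 with ideal 1-bit probes
print(probes_needed(20_000, 0.05))  # 286 with a hypothetical 0.05 bits/probe
```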
Great, thanks. And then a follow-up on the early access launch. Are there any updates or additional color you could provide on what strategy you're looking at for that beta testing and what customers you'd be targeting? Thank you.
Yeah, so this is Sujal. When you look at our early access program, its goals are first and foremost to give customers who have notoriety in the proteomics world, and customers who are proteomics savvy, early access to our platform so that they can generate unique and meaningful biological insight. And that biological insight has two goals. One is the value that we get out of it: publishing, bringing into conferences, papers, abstracts, posters. That value really is important to us because it provides the customer evidence that we need for the next stage of the business and for landing the first instrument deals and so forth. The second major activity that we want out of that early access program is really related to signing pre-orders for the instrument. And so when you think about those two goals, the types of customers that we will have in our early access program are very similar to the types of companies we're working with today on our collaborations. So it'll be pharmaceutical organizations like Genentech, who we've been working with as a collaborator for quite some time, and it will be academic and nonprofit research organizations, particularly those that are proteomics savvy and the key opinion leaders in the proteomics world. And then, as well, in there you'll have some diagnostic types of applications.
And thank you. And one moment for our next question. And our next question comes from Matt Sykes from Goldman Sachs. Your line is now open.
Good morning. Thanks for taking my questions. Maybe just first on something that I don't think has been discussed in the last couple of quarters, just the bioinformatics platform that is going to be attached to the Nautilus instrument. Just curious, given how unique and novel the data sets that you're providing, the proteoforms, are for scientists, could you maybe dig a little bit more into the bioinformatics platform and what that looks like? And have you worked with customers to make sure that they get reports and data that are easily understandable, again given how novel the information is to them and the capabilities of the instrument?
Hi, this is Parag. I'll take the first crack. It's a great question. When you think about our bioinformatics platform, we think about it as a couple of layers. The first layer is just at the level of primary data: quality, confidence, whether the experiment ran well. And the metrics on our data are very different than the metrics on a standard mass spectrometry data set, so we've had to build a series of metrics to say, yes, this looks great. So that's layer one, at the level of primary data. Then there's the next layer up from that, the protein identification and quantification layer. And again, there you want to provide both primary data access, so that people can download simple spreadsheets of protein identities, quantities, and false discovery rates, and the ability to visualize the data and look at it within the context of their own data sets. So that's layer two. Layer three is really comparative analysis. This is where you analyze between different cohorts, case-control studies, responders and non-responders. And that's really where you start getting into the biology. And then there's a fourth layer ultimately, which is incredibly powerful, which is the integration of our data with other data. So when we think of the bioinformatics portal, we think across that span. And we've done a tremendous amount of voice-of-customer work in understanding what the gaps are for people, depending upon the type of customer, whether they're extremely sophisticated and have an existing bioinformatics process, or whether they're at an earlier stage in their bioinformatics development, or are biological researchers. And there is a set of fairly common analyses that comes out of that, things like the ability to do principal components analysis, generate volcano plots, or do pathway analysis. We've heard all of that feedback and are incorporating it. On the proteoform side, this is an entirely new modality, a level of detail that people haven't ever seen before. And so for those, we've been developing custom visualizations as well, to enable people to look at that incredible detail on individual protein molecules that they've never been able to see before. Anytime we develop these things, we spend effort going and talking to the customers and saying, hey, how do you feel about this? What else are you looking for? And the feedback has been very consistent and positive.
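For readers less familiar with the "layer three" comparative analyses mentioned above, here is a generic sketch of a standard volcano-plot computation on a protein quantity table (synthetic data and conventional thresholds; not code from the Nautilus portal):

```python
import numpy as np
from scipy import stats

# Given per-protein quantities for two cohorts, compute a log2 fold
# change and a p-value per protein; a volcano plot is fold change on the
# x-axis vs. -log10(p) on the y-axis.

rng = np.random.default_rng(0)
n_proteins, n_samples = 1_000, 8
case = rng.lognormal(mean=0.0, sigma=1.0, size=(n_proteins, n_samples))
control = rng.lognormal(mean=0.0, sigma=1.0, size=(n_proteins, n_samples))

log2_fc = np.log2(case.mean(axis=1) / control.mean(axis=1))
p_vals = stats.ttest_ind(case, control, axis=1).pvalue

# Flag proteins passing conventional thresholds.
hits = (np.abs(log2_fc) > 1) & (p_vals < 0.05)
print(f"{hits.sum()} candidate differential proteins")
```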
Got it. Thank you very much, Parag. And then, Sujal, just, you know, as we look into '25 and the launch, it's obviously unclear what the NIH and NSF budgets may or may not be, but there's clearly some concern that those budgets could be somewhat compromised next year. I think you've stated in the past that you feel like the novelty of the instrument will likely penetrate through different types of budget environments. But just curious how you're thinking about the potential level of spend and budget for the academic end market next year, the various scenarios under which, you know, the NIH budget or the NSF budget could land, and what your go-to-market would be, if that would change it at all?
Yeah, it's a good question. I think that my comments from previous calls are still relevant today, which is that we are building a technology that is extremely novel and produces a breadth and scale of biological insight that no other instrument is capable of producing, and it's extremely valuable data to our customers. So with that as a backdrop, we think that we will still be able to push through, even if some of the government funding moves down through 2025. That being said, whenever there's some downward pressure on government funding, what you'll find is that there could be some elongation in sales cycles, or it could be a little more complicated to acquire funds in those organizations that rely on the government. So that's typically academic and nonprofit research. And I think that the way to overcome that is that, you know, in diagnostics and tools it's quite common to provide lots of different on-ramps onto a new technology. One is an instrument purchase. Another is that a customer may choose to run in a service model for a longer period of time and then switch to an instrument purchase. There are other models, such as instrument rentals and instrument leases and, you know, consumable prepays and those types of things. We're not committed to any of those models, but they're all possible, and if we need some of those on-ramps to help customers get onto our platform, we're open to those types of things. One of the things that is particularly beneficial to us when we think about those alternatives is that, you know, typically you don't want to put capital out there and not be able to recoup at least the cost of it very, very quickly. Because all of our revenue streams, including the instrument, are high gross margin, it gives us a little bit more flexibility, and should we need it in '25 or '26, based on where government funding trends, I think we'll be able to react quickly.
Got it. And if I could squeeze one more in for Anna, just on the total OPEX growth of 15 to 20 percent versus the previous guide of 25, what areas are you kind of achieving some level of savings in to modify that guide that you had previously?
Matt, I can definitely speak to that. In our original operating expense plan, we had anticipated investments in a targeted way across all areas of the business. On the R&D side, we've had a few years of growth there, and so we've realized that we have the resources we need and we can limit further growth and just work with what we have. We've been repurposing or reallocating resources from other areas of the business to the areas of highest need. We've also brought down our cost of reagents in a way that offsets the growth in consumption of those reagents. That's really what has driven our ability to hold R&D expense growth a little bit lower. On the G&A side, we've found savings there as well, and, as you know, we're holding off on hiring the commercial team until we hit those product milestones. So the combination of those is really behind the reduced OPEX guidance.
Got it. Thanks. Very helpful.
And thank you. And one moment for our next question. And our next question comes from Tejas Savant from Morgan Stanley. Your line is now open.
Good morning. This is Yuko. Thank you for taking our questions. Would you talk about where you are in development progress for the instrument with respect to the launch target in 2025? Would you say that development of the affinity reagents is the gating factor at this point?
Thanks for the question. Parag, you want to take this? Yeah, this is Parag. I'll take this first. One of the things that I'm very excited about, and that I mentioned earlier, was our continued effort to improve the scale and quality of our large-scale experiments. As you know, those experiments bring together a mix of large numbers of affinity reagents, large numbers of cycles, the newest chips, and the newest instruments. The advances that we really focus on are, one, the ability to execute those experiments; two, as Anna mentioned, the costing of those; and three, the reliability of each of the components of that system, from the consumables, which include the nanoparticles for protein deposition, the affinity reagents, as well as all the buffers in the system, and ultimately the bioinformatics. And over the last quarter, we've seen a tremendous increase in both the scale of those experiments and in their stability: facets like, cycle after cycle, is the chip remaining clean? Are the data just beautiful, frankly, in terms of the nonspecific binding backgrounds and the removal efficiencies? All of those continue to improve. And so, very exciting progress in development.
Great. Thank you for that color. And then a second question for me, regarding the development cadence to reach a milestone where you're able to measure, let's say, the 1,000 proteins reproducibly, to unlock greater visibility towards that specific development timeline. Is this something that would happen fairly quickly once you hit a certain point, like 100 proteins measured, or is it something where development would move in a fairly linear fashion?
Yeah, I can take this one. Parag, do you want to start, and then I'll take it from there?
Well, I'll just mention that one of the most exciting aspects of the platform really is this exponential non-linearity in how the number of proteins decoded scales with the number of cycles. And so my expectation would be that there would be a very strongly non-linear aspect to that. But Sujal, please add some color there.
Yeah, that's right. I mean, I think that, you know, we've talked about this for a number of years. With any traditional platform that uses antibodies to measure proteins, you need one or two of those antibodies or affinity reagents to be able to detect each of the different proteins in the gene-encoded human proteome. Our technology is very different. With only 300 or so multi-affinity probes, as we call them, you're able to gather all of the information that you need from a particular molecule to, you know, almost with 100% certainty differentiate it from every other molecule in the human proteome, and therefore identify it accurately. In order to make an accurate identification of just about anything from a complex sample, we have to have most of the probes, and we have to gather quite a bit of information. And so once we cross over that point, you'll cross through 100, 500, 1,000, 2,000 proteins pretty rapidly, because it really just has to do with getting more of those reagents on the platform. In terms of cycle count, you know, Parag mentioned earlier that our large-cycle experiments are performing quite well, and so we feel like the assay stability and reliability are there. And so now it's really focused on getting the reagents that have the right characteristics onto our platform, putting the entire set together, to get first to those early milestones that I talked about, and then ultimately to comprehensive proteomic coverage.
Great. Thank you.
Yeah. And thank you. And one moment for our next question. And our next question comes from Tycho Peterson from Jefferies. Your line is now open.
Okay. Good morning. Thanks for taking the questions. Maybe you could touch on the publication roadmap. You know, how important is it that early access generates good publications? You know, what should we be focused on there?
Hi, this is Parag. That's a great question. And we definitely view publications as critical for sharing information with the community and getting them excited. In general, there are a couple of different types of publications that we focus on. The first are ones like our PrISM manuscript, which get at how the platform works. We really view those as both core demonstrations of the capabilities of the platform as well as exposition, helping the scientific community understand the core components and how they work. That's the first layer of publication. The second are applications. These are really things that we would do with our partners to say, hey, look, you can learn this kind of biology with our platform, demonstrating not just the components but the integrated system, how it can be used together and how it performs. And then the next layer beyond that whole integrated system is even more application studies of, hey, I'm asking this biological question and using the Nautilus platform to see something that I wasn't able to see any other way; here's what we learned. And we see that layering, from components to integrated system to biology, as a multi-layer stack that brings in different communities, from the early adopters to the middle adopters, and helps drive a cycle of excitement about the platform and what it can provide.
Right. And then you've had a number of questions on the tech development. You know, I guess Matt obviously asked about informatics, maybe flipping it around, you know, upstream. Is there anything on the sample prep front we should be paying attention to in terms of kind of improvements there?
Absolutely. On the sample prep side, the first aspect is really just the simplicity of the workflow, the amount of input material that's required, and the extent of demonstration that we have around whether the sample prep biases the output in any way. I think we've had really great progress in the last quarter on that last one, which has been a question that comes up: hey, there's a chemical process, there's functionalization, there's attachment to nanoparticles. Does this substantially bias you towards this class of proteins or that class of proteins? And the data coming back very strongly indicate no; that chemistry is very general, which is very exciting. So that'll be data that we'll share as well at HUPO. And then the other aspect of sample prep is the amount of input material for proteoforms. Again, there is a question of, if we're doing an enrichment, how much enrichment are we achieving? Are there biases in that enrichment towards or away from particular proteoforms? And so that's other data that looks really exciting, and we're excited to share that at HUPO as well.
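One plausible form of the bias check described above (our illustration on hypothetical numbers, not Nautilus' published protocol) is a goodness-of-fit test comparing the protein-class composition observed after sample prep against the composition expected from the input:

```python
import numpy as np
from scipy import stats

# Hypothetical counts by protein class: [membrane, nuclear, cytosolic,
# secreted]. A large p-value means no evidence that prep skews classes.
expected_fractions = np.array([0.25, 0.20, 0.40, 0.15])  # input composition
observed_counts = np.array([245, 210, 395, 150])          # after prep, n = 1000

chi2, p = stats.chisquare(observed_counts,
                          f_exp=expected_fractions * observed_counts.sum())
print(f"chi2={chi2:.2f}, p={p:.3f}")
```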
Okay. And then I want to follow up on the question earlier on funding. You mentioned maybe, you know, entertaining leasing, reagent rental, other kinds of business models. I'm just curious how seriously you're thinking about that, and how we should think about your willingness to carry the cost of the capital equipment on your balance sheet if you do move to a kind of reagent rental model?
So I would characterize our thinking in that regard as still relatively early. And by relatively early, I mean, for example, we haven't really floated that with potential customers as a model. I think that with all of those types of strategies, I view them as a bridge to instrument purchase. And so with that, we're not going to be carrying the cost of the instrumentation on our balance sheet for a whole lot of time. You know, that being said, sometimes there are some special cases, where there's, you know, a particular researcher that you want to do business with, and you want to use that model for longer. And given that the cost for us to manufacture the instrument is relatively low compared to the sales price of an instrument deal, which is roughly a million dollars, I think that that sort of model intuitively is more doable. But we haven't done the detailed work yet. And I think that as we get closer to launch and we're closer to conversations with customers about what the capital acquisition cycle looks like, we'll be able to think through that in more detail.
Okay. And then one last one on the diagnostic front, kind of a couple angles here. Roche is obviously entering the clinical mass spec market, and they're talking about blood-based tests for amyloid pathology in Alzheimer's. I'm just curious how you think about them in the context of the space, and how you think about what needs to happen for the diagnostic market to open up more broadly. Will you guys go down the regulatory path with the box at some point down the road? Can you maybe just talk a little bit about how you think about the diagnostic opportunity overall?
Yeah, why don't I start, and then Parag can add any detail in here. I think that, first and foremost, we think the Nautilus platform is really important for the diagnostic world in two categories. Number one, on the broad-scale proteome side, our platform provides a dynamic range and a sensitivity which enable you to reach much rarer biomarkers in blood. If you think about biomarkers that are present in blood at low concentration, these are things that are shed from tissue, potentially from tumors. And so you really have to have a huge dynamic range and single-molecule sensitivity to see all the way down to the lowest concentration proteins. That, I think, is going to unlock new biomarkers that are going to be really interesting. On the proteoform side, as Parag talked about in his prepared remarks, the ability to detect an entire proteoform is a whole new level of biological insight. Today, the standard analysis that can be done with assays and with mass spec can detect a protein modification, meaning a single modification. For example, there's a modification on tau at site 217 with phosphorylation. But what you can't tell is that there are three phosphorylations, and they're at three different sites in this population and two different sites in that population. That proteoform information, we believe, and many KOLs in the proteomics world believe, will unlock a new class of biomarkers that will be really important to diagnostics. And so we think that from a discovery perspective, finding those biomarkers enables DX companies to do some really exciting things over the course of the next, you know, two, three, four, five, ten years. On the question of whether we will enter the clinical space with this product: not initially, for sure. The first use cases of this product for a number of years will all be RUO use cases. And when a customer makes a discovery, we'll say, hey, great, you know, customer, you found this great biomarker, now go build a high throughput assay, go get it cleared through the FDA, and, you know, we're going to go on to the next research discovery. But there will quickly come a point where either the dynamic range and sensitivity of our platform or its unique ability to measure proteoforms will become a necessity, and the customer won't have a way to build an assay that replicates the finding that they made with the Nautilus platform. And I think that's probably the right point for us to start pushing the product through the FDA and moving towards clinical. I don't think that's in the first four to five years of shipping. I might be surprised, but I don't think it is. I think that those RUO use cases are going to be more than enough to fuel our growth for a number of years.
Great. That's very helpful. Thank you.
And thank you. And if you'd like to ask a question, that is star 1-1. Again, if you'd like to ask a question, star 1-1. One moment for our next question. And our next question comes from Dan Brennan from TD Cowen. Your line is now open.
Great. Thanks. Thanks for the questions here. Maybe just back to the timing of the launch. It appears to have slipped a little bit here from mid-'25 to, I guess, back half '25. Obviously, you guys are tackling a very ambitious goal with very novel single molecule protein detection, and it's not surprising things can slip. But just given the series of slips that you've seen, from end of '23 to end of '24 to mid-'25 and now back half '25, I'm just wondering, can you address kind of the key factors for the latest delay? And how should investors gain confidence that this won't continue to slip, let's say, beyond '25 into '26 or even later?
Yeah. Thanks for the question. This is Sujal. Maybe I'll take this one first. First and foremost, I wouldn't characterize my comments here today as being another slip. Our previous guidance from our last earnings call was that we intend to launch in 2025. We didn't provide any further specificity on that timeline. And I think our official guidance continues to be a launch in 2025. But certainly, it is fair to say that it has taken us longer than we would have liked or thought it would. And certainly, if you went back to when we went public, we thought we would be commercial by now. I think that the nature of bringing something that is truly revolutionary and truly groundbreaking to market is characterized often by a lengthy development period and a lot of hard work and blood, sweat, and tears that goes into building that first product. As we get close to the end of the year here, it'll be eight years since we got together to get this company off the ground and started development of the product. It has been a long journey, but we are building something that is truly groundbreaking in a lot of different areas: the ability to immobilize billions of molecules on a flow cell and a chip, the ability to build this very new, novel class of reagents, and the ability to build an instrument and an assay that can cycle those reagents one after the other and build up data points on single molecules. Together, that is a massive amount of work, and we continue to make really good progress on that front. Today, there's an instrument that is able to perform all the cycles that it needs to reach our launch targets. It's able to perform an assay reliably. The data quality is already sufficient to be able to measure proteoforms. For example, we believe that we should be able to measure in the near term a thousand different proteoforms of tau. And with that capability coming, we do expect, ahead of the proteome launch, to do more engagements on the proteoform side of the fence. All of those are indications that the platform is coming together and working. Now, you asked a very pointed question about how you know that it's not going to slip to '26 or '27. I think the answer to that question is no one knows. Myself, Parag, the head of our R&D organization, and my management team spend a lot of time looking at what our progress is towards our goals in R&D and what the data looks like. We spend a lot of time pattern matching against the experiences that we've had in the past. And all of those things continue to tell us that we are heading in the right direction and we are doing the things that we need to do to get the product out. And on the other side, I will use this as an opportunity to throw in there that Anna, myself, and the entire management team, frankly every person in our company, are very focused on making sure that we run incredibly capital efficiently, so that as the timeline has elongated, we've also been able, in 2023 and 2024, to significantly stretch how long our cash lasts versus those original projections. And so I think that's a long way of saying I think we're operating the business well. I think we're on the right track. And I continue to have a ton of confidence that we're making the progress that we need to get this product in the hands of biologists all over the world, where it can do good.
Terrific. Thanks, Parag. And I know you also mentioned during the Q&A and the prepared remarks that you're making meaningful advances in Q2 on scale and stability. So could you just quantify that a little bit? I know in the past you've talked about probes per cycle. You've talked about the number of reagents to get to full coverage and kind of where you're at. Can you just provide updates maybe on those metrics, or whatever metrics you think are relevant, to distill the scale and stability advances that you saw?
Sure. This is Parag. So I think we haven't specifically disclosed the number of probes run, but it can be inferred from the number of cycles. If we go back a couple of quarters, we showed data on stability, chip stability, and removal performance and efficiency that was on the order of about 25 cycles. Then at the HUPO after that, we showed about 70 cycles. At the most recent US HUPO, we showed about 100 cycles. And now we're at the 125 to 150 that we're expecting to show shortly. That shows, again, the stability of the platform: removal efficiency, so that after you introduce and probe a reagent, are you kicking it off effectively and getting rid of it so that it's not hanging around? Same thing with that baseline of non-specific binding. We showed that up to about 100 cycles previously, and we're stretching that by another 25 to 50 cycles. So those are key metrics that get to the quality of the data. One of the other aspects we look at is degradation of signal over the course of N cycles. Several years ago, we would be able to get to about five cycles and then the signal would have decayed. Now, when we do these experiments where we introduce defined positive-control cassettes at 15-cycle intervals, we're able to carry those out throughout the entire run. And so those are all really key metrics that show the improvement in stability of the platform over increasing numbers of cycles.
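As an illustration of how the signal-retention metric described above might be quantified (synthetic numbers, not Nautilus data): fit the log of positive-control signal against cycle number for controls introduced at 15-cycle intervals; the slope gives a per-cycle retention rate.

```python
import numpy as np

# Fit log(signal) vs. cycle for positive controls sampled every 15
# cycles; exp(slope) is the per-cycle signal retention rate.
cycles = np.array([0, 15, 30, 45, 60, 75, 90, 105, 120])
signal = np.array([1.00, 0.97, 0.95, 0.92, 0.90, 0.88, 0.85, 0.83, 0.81])

slope, intercept = np.polyfit(cycles, np.log(signal), 1)
retention_per_cycle = np.exp(slope)
print(f"retention ~ {retention_per_cycle:.4f} per cycle "
      f"(~{retention_per_cycle**100:.1%} of signal left after 100 cycles)")
```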
Got it. And then maybe a final one for Anna. So Anna, with the reduced burn, or the reduced OPEX, I know you mentioned in the prepared remarks something about where the cash gets you through, but can you just provide an update there in terms of how far your cash runway is now with the reduced burn?
Thanks, Dan, for the question. I can speak to that. In the previous guidance, which you're referring to, we said we had cash runway into the second half of 2026. Our reduced OPEX certainly helps us in achieving that target. With that being said, second half of 2026 is still two years away, and our commercial build-out is yet to come. So I think while we have the ability to extend cash runway further if necessary, we're not ready to commit to that at this point.
That cash forecast that Anna gave you includes pitching the product, building out a commercial team, launching, and starting to get into the revenue ramp before cash out. And those activities of commercialization are expensive. And so, on the question before about any further slip, hypothetically, the runway on cash would elongate in that case, because the commercialization build would get pushed out. I just wanted to make sure that I connected all those dots.
Got it. Now that makes sense. Thank you.
And thank you. And I'm showing no further questions. This concludes today's conference call. Thank you for participating. You may now disconnect.