5/7/2026

speaker
Operator
Conference Operator

Good day and thank you for standing by. Welcome to the Rackspace first quarter 2026 earnings webcast. At this time, all participants are in a listen-only mode. After the speaker's presentation, there will be a question and answer session. To ask a question during the session, you'll need to press star 1-1 on your telephone. You will then hear an automated message advising your hand is raised. To withdraw your question, please press star 1-1 again. Please be advised that today's conference is being recorded. I'd now like to hand the conference over to Sagar Hebar, head of investor relations. Please go ahead.

speaker
Sagar Hebar
Head of Investor Relations

Thank you, and welcome to Rackspace Technology's first quarter 2026 earnings conference call. I'm Sagar Hebar, head of investor relations. Joining me today are Gajan Pandaya, our chief executive officer, and Mark Marino, our chief financial officer. As a reminder, certain comments we make on this call will be forward-looking. These statements involve risks and uncertainties which could cause actual results to differ. A discussion of these risks and uncertainties is included in our SEC filings. Rackspace Technology assumes no obligation to update the information presented on the call except as required by law. In particular, our discussion today will include forward-looking statements regarding our recently announced memorandum of understanding with AMD, including statements regarding the anticipated scope, benefits, and commercial potential of the collaboration, deployment timelines or financial projections, the expected execution of definitive agreements, and the anticipated impact of the partnership on our business, financial results, and capital structure. The MOU represents a non-binding framework only and does not constitute a binding commitment by either party to complete any specific transaction, financing, or other commercial arrangement. No definitive agreements with AMD have been reached, discussions remain preliminary, and there can be no assurance that any such arrangements will be entered into, that the parties will reach agreement on terms, or that the anticipated benefits of the collaboration will be realized. Any third-party financing required to implement the transactions contemplated by the MOU is subject to the availability of financing on acceptable terms, and there can be no assurance that any such financing will be obtained. Our presentation includes certain non-GAAP financial measures and adjustments to these measures, which we believe provide useful information to our investors.
In accordance with SEC rules, we have provided a reconciliation of these measures to their most directly comparable GAAP measures in the earnings press release and presentation, both of which are available on our investor relations website. I will now turn the call over to Gajan for an update on the business.

speaker
Gajan Pandaya
Chief Executive Officer

Thank you, Sagar. Last quarter, I said Rackspace is moving rapidly from being the provider to becoming the orchestrator and operator of enterprise AI in regulated environments. We laid out three specifics: a partnership with Palantir anchored by a core build-out of forward-deployed engineers; a technology stack with VMware as the control plane, Rubrik for cyber resilience, and Palantir as the data and AI platform layer, spanning infrastructure, resilience, and AI; and accelerating demand for private cloud in regulated environments. The results this quarter reinforce the strategy we've been executing against, what we call where enterprise AI goes to production: governed infrastructure as the foundation, an integrated technology stack of curated partners on top of it, and one accountable operator running it end to end. Every win this quarter sits inside that frame. We secured regulated and sovereign private cloud deals across healthcare, telecoms, and financial services. We also closed our first joint Palantir deal with a US-based solar tracking manufacturer, where the problem was costly and quantifiable: 16 and a half days to move from a customer inquiry to a signed quote, burdened by manual intake and fragmented handoffs. Our FDEs deployed AI-enabled workflows on Palantir Foundry directly inside the customer's environment, reducing the quoting cycle by 94% and earning an expanded engagement to extend the FDE model into EMEA. We are also deploying Palantir inside Rackspace, running end-to-end business workflows on Foundry natively. We are not just recommending Palantir to customers, we are operating our own business on it. We continue to expand our partner ecosystem. Today, I am pleased to announce the signing of a memorandum of understanding with AMD that establishes a new category of governed enterprise AI infrastructure. We are integrating AMD Instinct GPU accelerators, AMD EPYC CPUs, and the ROCm software ecosystem into a fully managed, governed technology stack.
Purpose-built for regulated enterprises, including healthcare, financial services, and sovereign environments, where security, compliance, and accountability are non-negotiable. The MOU establishes AMD as the launch silicon across our four integrated capabilities: Enterprise AI Cloud, our fully managed private, public, and sovereign AI environment with one operator accountable across the stack; Enterprise Inference Engine, a context-aware inference runtime that retains domain knowledge, session history, and enterprise-specific data context across queries, with Rackspace owning the SLA; Inference as a Service, dedicated accelerated compute as a governed alternative to commodity GPU rental, launching with AMD Instinct; and Bare Metal Accelerated Compute, launching with AMD Instinct for training and inference workloads requiring deterministic performance. Production inference is heterogeneous. Frontier models run on GPU; small language models, classical ML embeddings, and many domain-specific workloads run more efficiently on CPU. AMD is the partner that brings both Instinct GPUs and EPYC CPUs inside one integrated architecture, which lets us route each workload to the right compute. That is what production economics requires. This puts Rackspace in a unique category. The market today is dominated by commodity GPU rental, where capacity is sold by the hour and the customer carries the burden of integration, security, and accountability. We are building the opposite. AMD's leadership in open, high-performance AI acceleration, combined with our operator-grade outcomes-as-a-service model, delivers governed AI infrastructure that is accountable from silicon to outcomes. We expect the definitive agreement with AMD to be executed in the near term. Governed infrastructure is where enterprise AI either succeeds or stalls. When AI works with patient records, financial data, or sovereign information, where that data sits and how access is governed determines compliance or exposure.
That is why Rackspace's over 25-year history managing data centers and infrastructure is more important than ever. And this is why one of the largest Epic environments runs on Rackspace. The second reason enterprises choose us is how we handle technical complexity. Enterprise AI Cloud is not a single-component problem. It takes data, compute, models, and small language models working together in real time. If even one element in the technology stack is off, cost per token skyrockets and operational risk increases. We solve this by integrating each vendor's IP, making technologies fit together and operate as one. The third reason is accountability. In a fragmented enterprise AI cloud vendor ecosystem, nobody owns the outcome or takes responsibility when something breaks down. We solve that by being one accountable partner in the eyes of the customer, responsible for how the system performs and the outcome it delivers. That is why we are seeing momentum across the business. At our core, Rackspace is a data center and infrastructure company. We own and operate the physical infrastructure that enterprise AI runs on. That foundation, combined with our ability to take end-to-end accountability for AI in production, from governed private cloud to AI inference and agents in production, is exactly what our enterprise customers are looking for. And with that, let me get into our business performance, starting with private cloud. First quarter private cloud revenue was $235 million, with first half revenue on track given the timing of a large deal onboarding within our healthcare vertical, consistent with the dynamics we outlined last quarter. Segment operating margin came in at 24.7%, up 30 basis points year over year, driven by continued cost discipline. Our customer wins this quarter tell a consistent story.
Enterprises in regulated industries are choosing Rackspace to modernize and operate environments where governance, reliability, and compliance are non-negotiable, and where those environments increasingly serve as the foundation for AI adoption. For example, in financial services, we secured a long-term recommitment from a leading global online trading platform modernizing core infrastructure through software-defined private cloud, improving resilience and user experience in a latency-sensitive, highly regulated environment. In healthcare, we signed a multi-year agreement with a major UK NHS foundation trust to migrate and operate workloads in a sovereign healthcare cloud with full outcome as a service and security embedded from the outset. And this quarter we expanded our relationship with AdventHealth, a long-standing customer. We already host and manage the infrastructure of their Epic EHR, one of the top five Epic systems in the world, and this quarter we expanded our relationship to host and manage over 400 additional workloads on Rackspace Private Cloud. Healthcare is one of the clearest expressions of our strategy. Epic managed services is proprietary Rackspace IP, purpose-built for the governance, performance, and uptime that clinical environments demand. As regulated healthcare organizations move from AI experimentation to AI in production, where data sits and how it's governed becomes the defining question. That is exactly the environment we are built to operate. This extends into sovereign markets. In Saudi Arabia, our partnership with SDAIA places us inside one of the world's most advanced national AI programs, built on in-country infrastructure, jurisdictional accountability, and managed operations. In the UK, BT recently selected Rackspace as the infrastructure foundation for BT Sovereign Cloud.
positioned as the UK's first full suite of sovereign services hosted and operated entirely within the UK, with security-cleared operations teams and managed services covering migration, operations, and ongoing compliance. That is the kind of public anchor that validates our sovereign thesis. These are environments where AI cannot be deployed without full control over data and infrastructure, and they are increasingly central to how sovereign and enterprise AI is deployed. What makes these environments possible at scale is VMware Cloud Foundation 9, the control plane at the center of our governed AI strategy. It unifies compute, storage, networking, and security into one operating substrate with native AI workload support, data residency controls, and policy enforcement that meets regulated and sovereign requirements out of the box. Our deepening partnership with Broadcom around VCF 9 is one of the most strategic commitments we are making this year, because it gives our customers a consistent control plane for the workload, with elasticity to public cloud where it makes sense. Running on top of that foundation is where our AI platform partnerships come to life. This quarter, we expanded our relationship with Uniphore, adding agent-based workflows to our governed AI technology stack. Together, we are building context-aware inference, a capability that retains domain knowledge, session history, and enterprise-specific data context across queries, so AI agents and large language models perform with the consistency and institutional memory that production environments require. As with Palantir, our engineers are trained on the Uniphore platform and embedded directly inside customer environments. We are not just orchestrating infrastructure. We are orchestrating outcomes. VCF 9 as the control plane, Dell for core infrastructure, Palantir and Uniphore for governed AI and agent workflows, Rubrik for data resilience, AMD for enterprise-ready compute.
Each partner is best in class, but the value Rackspace delivers is making them operate as one integrated system with full accountability for how the system performs and the outcomes it delivers. Looking ahead, the next phase is already emerging. As enterprise AI evolves towards agentic workflows, where machines interact with machines and processes run end-to-end without human intervention, the demands on governed infrastructure become even more acute. Training will largely sit with specialized providers, but inference, particularly context-aware inference on regulated data, is where production enterprise AI lives. That is the workload we are built to operate. And as customers develop a clearer picture of their data residency requirements, more of those workloads will move into governed private cloud deployed across our global data center footprint in the jurisdictions and sovereignty zones our customers require. That is why we are doubling down on VCF 9 and Broadcom this year. Our full-year private cloud growth outlook remains on track. We have signed engagements with AdventHealth, Seattle Children's, and a strategic database as a service partner, onboarding through the rest of the year. We are also seeing encouraging pipeline momentum on our Palantir and Uniphore partnerships, where context-aware inference and governed agent workflows are gaining traction at deal sizes that we have not historically seen. The AMD partnership announced today adds a further layer of future optionality as governed AI compute becomes more central to how regulated enterprises operate. Together, these give us confidence in the full-year private cloud growth profile we are reaffirming today. Now, for our public cloud update. First quarter public cloud revenue was $443 million. Services revenue grew 10%, reflecting our continued shift towards higher value engagements. Our customer wins this quarter highlight the breadth of our platform capabilities and our deepening presence in the AI space.
First, we are powering a large-scale enterprise-wide multi-cloud transformation for a leading healthcare technology organization. Through a governance model, we are delivering program-managed migrations, modern architecture, intelligent automation, and measurable cost optimization, ensuring each workload is placed on the right platform for the right reasons. Second, Rackspace is serving as the growth partner for an AI-native database-as-a-service provider operating across both public and private cloud environments. Our execution capabilities are a direct accelerant to our partner's client acquisition and market expansion, reflecting a high-value compounding partnership driving differentiated multi-cloud database as a service outcomes. Our service portfolio is built for where enterprise AI is headed: production, not experimentation. We are embedding engineers directly into customer environments, moving from strategy to live deployment in weeks, with governance and accountability built in from day one. New partnerships expand our ability to deploy context-aware inference, governed agent workflows, and forward-deployed engineering, giving enterprises a governed path from strategy to inference workloads in production. We are complementing this with purpose-built capabilities in AIOps, identity security, and data resilience, addressing the operational and security demands that become non-negotiable once AI moves into production environments. In summary, public cloud is executing. As inference workloads move into production, we are increasingly positioned as the partner enterprises rely on to operate, secure, and optimize their cloud environments with full accountability to match. The results this quarter confirm the thesis: governed AI infrastructure as the foundation, an integrated technology stack of curated partners running on top of it, and one accountable operator responsible for the outcomes. That is what today's Rackspace delivers. With that, I will turn it over to Mark for our financial results.

speaker
Mark Marino
Chief Financial Officer

Thank you, Gajan. In the first quarter, total company GAAP revenue was $678 million, up 2% year-over-year, driven by solid public cloud performance. Non-GAAP gross profit margin was 18.3% of GAAP revenue, down 160 basis points year-over-year, reflecting the private cloud revenue timing dynamics we discussed. Non-GAAP operating profit was $31 million, up 20% year-over-year, driven by continued operating expense discipline. Non-GAAP loss per share was 6 cents, flat year-over-year. Cash flow from operations was $5 million and free cash flow was negative $9 million. We ended the quarter with $94 million in cash and $295 million in total liquidity, inclusive of the undrawn portion of our revolving credit facility. During the quarter, we repurchased approximately $96 million of debt, reflecting our continued commitment to disciplined capital allocation and active deleveraging. This reduces our interest burden and strengthens our overall capital structure. We are making deliberate progress on leverage reduction while continuing to invest in strategic growth. Turning to our segment results, private cloud GAAP revenue for the first quarter was $235 million, down 6% year-over-year, reflecting the timing of large-deal onboarding within our healthcare vertical, consistent with the dynamics we outlined last quarter. Non-GAAP gross margin was 36%, down 110 basis points year-over-year, driven by lower fixed cost absorption on reduced revenue. Non-GAAP segment operating margin was 24.7%, an improvement of 30 basis points year-over-year, reflecting continued operating expense discipline. In our public cloud segment, GAAP revenue was $443 million, up 7% year-over-year, with services revenue growing 10% year-over-year. Non-GAAP gross margin was 8.9%, down 60 basis points year-over-year, reflecting higher infrastructure costs. Non-GAAP segment operating margin was 4.7%, up 50 basis points year-over-year, driven by improved operating expense efficiency. Now on to our guidance.
We are reaffirming our full year 2026 guidance in its entirety. Revenue, EBITDA, and cash flow outlook all remain unchanged. The Q1 private cloud timing we described is fully reflected in our annual plan, and our confidence in the full-year outlook is unchanged. We continue to win larger, complex engagements that carry longer deployment cycles but deliver greater revenue visibility, higher lifetime value, and more durable recurring revenue streams. As they come online throughout the year, we expect private cloud to reflect the growth profile we committed to for 2026. With that, I'll turn it back over to Gajan.

speaker
Gajan Pandaya
Chief Executive Officer

The market is trending in line with our expectations, and this quarter we delivered proof across every layer of that thesis. Regulated enterprises are making a deliberate decision about where their AI runs, who operates it, and who is accountable for outcomes. Healthcare is now a pillar. One of the top five Epic workloads in the world runs on Rackspace-governed AI infrastructure. Epic Managed Services is proprietary Rackspace IP, decades in the making, and increasingly the foundation our healthcare customers are choosing as AI moves into production. Sovereign is validated. BT Sovereign Cloud runs on Rackspace-governed AI infrastructure. SDAIA in Saudi Arabia places us inside one of the world's most advanced national AI programs. These are anchor commitments, not pilots. The technology stack is complete, and this quarter we extended it further. VMware Cloud Foundation 9 as the control plane running across private, public, edge, and sovereign environments. Palantir for governed data and AI operations, with our first joint deal closing and a growing pipeline. Uniphore enabling agent-based workflows with context-aware inference. Rubrik for data resilience. And AMD, where we are establishing a new category of governed enterprise AI infrastructure, delivering four integrated capabilities from silicon to outcomes: Enterprise AI Cloud, Enterprise Inference Engine, Inference as a Service, and Bare Metal AMD Instinct. One integrated system with an investment-grade counterparty co-invested in our success, and Rackspace accountable for how it performs end to end. We are the operator of the full enterprise AI technology stack, one accountable partner where enterprise AI goes to production. That is Rackspace. Thank you to our customers, partners, and every Racker. And with that, back to Sagar.

speaker
Sagar Hebar
Head of Investor Relations

Thank you, Gajan. Let us begin the question and answer session. Please go ahead.

speaker
Operator
Conference Operator

As a reminder, to ask a question, please press star 1-1 on your telephone and wait for your name to be announced. To withdraw your question, please press star 1-1 again. Our first question comes from Kevin McVeigh with UBS.

speaker
Kevin McVeigh
Analyst, UBS

Great. Thanks so much. Good morning, and let me start just by congratulating you folks, because obviously there's been a lot of work done to get you folks to this level, and a lot of patience, and, you know, that needs to be recognized. And I just wanted to kind of highlight that because there's a lot that's going into the results that are here today. I guess, and there was an incredible amount of detail again, but maybe talk to how AMD dovetails into Palantir. And it sounds like the MOU is pretty far along. What else needs to be done to, I guess, get it across the goal line? It sounds like it is, but is there anything we should look for just as that officially gets signed, or is it officially signed? It's just, again, it seems like it's pretty far along, but just if you could help us with that a little bit.

speaker
Gajan Pandaya
Chief Executive Officer

Hey, Kevin. Thank you, and appreciate your comments. Now, look, I think when we look at this, you know, I would try to think about Palantir and AMD as somewhat distinct from each other. Now, starting with the Palantir relationship, that's really all about deploying and running customer workflows for the customer with forward-deployed engineers, somewhat independent of what compute platform it runs on, right? Really think about compute more as what's the most efficient place to run any given workload. And then the AMD piece really fits into how, you know, first and foremost, it gives us CPU and GPU, which I think as we move further into inference and production workflows, you know, being able to deliver that in an efficient manner allows us to now do it across the CPU and GPU stack. And then in terms of the partnership itself, I think we are, you know, we are certainly well along the way there. You know, I think we still need to get the financing locked down and sort of, you know, tighten it up. But we feel pretty confident that we are on our way to getting that done and hopefully get it announced here in the near future. We feel pretty good about it.

speaker
Kevin McVeigh
Analyst, UBS

That's super helpful. And then just, again, if you could remind us, the capacity in the private cloud versus, you know, the public in, you know, as these initiatives kind of scale, particularly AMD and Palantir, Is that primarily across the private cloud as opposed to the public or, you know, just maybe help us understand that a little bit because obviously there's a lot to digest and just a really, really nice outcome.

speaker
Gajan Pandaya
Chief Executive Officer

Now, great question, Kevin. You know, this is a source of confusion I think we should stay away from, right, because customer workloads are going to run across private and public, depending on where that workload needs to land, right? And that's why part of our VCF 9 partnership, the Broadcom VMware partnership, matters: it gives us the control plane across which we could dynamically drive the workload, whether it be in private or public cloud. So capacity-wise, you know, we have the partnerships on the public side, and now we have the partnership and, you know, hopefully here soon, the compute side up and running from a GPU perspective as well, which allows us then to really be somewhat agnostic with the customer, really focus on what specific outcome they want and then how do we deliver that in the most efficient way for them across either a CPU or a GPU landscape, and that could be private or public, right? So, like you said at the beginning, Kevin, there's a ton of work that goes into sort of figuring all this stuff out. And, you know, part of the challenge our customers have, right, is to think all of that stuff through, right, in terms of, you know, are we building a small language model, or are you running on a large language model, where do you run the inference, how do you orchestrate that, how do you ensure that it's running as efficiently as possible, as secure as possible, data residency is thought through, right? All of those. And our ambition is, you know, how do you take that complexity off the table for them, and with our forward-deployed engineers, really enable, support, and accelerate their journey to become, you know, more AI-enabled or operate on a fully AI stack, right? So that's the opportunity we saw, and that's what we are pursuing, and our customers are really guiding us through this. So pretty excited about it.

speaker
Kevin McVeigh
Analyst, UBS

No, it's amazing. And then just one more. I don't want to be – I want to be respectful of your time. But, you know, it sounds like, you know, any sense of how this starts to kind of fan in, it sounds like maybe the back half of 26, and then is there any way to think about kind of just what type of margin this work would be coming in at? I know it's probably relatively, maybe a tougher question, but just any way to think about that, and then what potential capital needs you could have as you're standing some of this stuff up?

speaker
Gajan Pandaya
Chief Executive Officer

I think we think of it this way, Kevin. There are four distinct capability sets, if you will, for lack of a better way to put it, that we are bringing to market, right? First, it's governed private cloud on AMD silicon. Think of that as: we own the entire outcome for our customer, in partnership with our customers, so they don't think about anything that sits in between, right? So if you think of it through the lens of margin, that's probably our most profitable business. Then there's context-aware inference, which is really the next level of business, where you're driving domain-specific data through inference and maintaining that domain data throughout the entire process. That's probably your next tier when you think about margin coming down, if you will, right? Then there is inference as a service, where purely we are providing the tokens, or the intelligence, and customers are using it through an API. And then lastly, a lot of what the neoclouds do, which is bare metal, right, which is probably your lowest end on the margin, right? So, yeah, I think that as we ramp up, we will see our business sort of fluctuate across these four areas. Obviously, our intent is to end up with, you know, fully managed, governed outcomes, but there's a journey to get there, and I think that's something we need to work our way through before we can give clear guidance around how that plays out.

speaker
Mark Marino
Chief Financial Officer

Yeah, Kevin, this is Mark. I would agree with that. I also think that, you know, it's going to be largely on par with, if not accretive to, existing gross margin rates across our private cloud business. And just in terms of timing, you know, this is not something that we've got materially factored into our 2026 guidance, right, just in terms of supply chain and delivery timing.

speaker
Kevin McVeigh
Analyst, UBS

Well, it sounds like you're well on your way. And, again, congratulations. Thank you.

speaker
Operator
Conference Operator

Our next question comes from David Page with RBC Capital Markets. Hi.

speaker
David Page
Analyst, RBC Capital Markets

Good morning. Thank you for taking my question, and congrats on the great results here. I guess at a higher level, it seems like Rackspace is moving in the right direction, not only, you know, internally as a company, but where the industry is going in terms of, you know, CPU, GPU, running SLMs and LLMs, et cetera. So I'd be curious, you know, you seem like you're the first, you know, you're the leader, but I guess how's the competitive environment looking? And I guess as a follow-up, you mentioned the pipeline is strong, so should we expect more deals in the future, or maybe just flesh that out a little bit?

speaker
Gajan Pandaya
Chief Executive Officer

Thank you. Sure. Good to meet you, David, and thank you for your comments as well. So, yeah, when I think about where we are, the orientation of the business right now is very much along the lines of helping customers really understand how they want to run AI workloads, right? Because if you think about where we sit today in our private cloud business especially, a lot of the customer workloads that are regulated run on our environment. And so, you know, the ability for us to sort of guide them from there on to running AI-based workloads is sort of where we are seeing the most opportunity. And when you look at the partnerships, either on the application stack, with Palantir and Uniphore, or on the compute stack, they just give us a much more integrated view of trying to tie all of this together, or not trying, but tying all of this together and delivering it. So, when you think of kind of your first question in terms of the competitive environment, you know, I haven't seen anyone yet that is able to put all of this together in one place and then own the outcome, right? I think that sort of makes a distinct difference, especially in a regulated or sovereign environment, because I think it becomes significantly unique. To give you an example, when I say governed in healthcare, it means HIPAA compliance, PHI security, clinical SLAs; all of that has to be put onto the same platform, integrated, and then delivered, right? So I'm sure there will be competitors that show up, but having the consulting, the forward-deployed engineers, the infrastructure, the compute, and the partnerships all stitched together, I hope it gives us a little bit of a lead and an edge in terms of where we sit. Sorry for the long answer, but hope that makes sense, Dave.

speaker
David Page
Analyst, RBC Capital Markets

No, that was very helpful. Thank you. And I would agree. It does seem like you have that leadership position, which is great. So, I guess, yeah, no, thank you. That's helpful.

speaker
Gajan Pandaya
Chief Executive Officer

Thank you.

speaker
David Page
Analyst, RBC Capital Markets

Yeah, maybe one more. There were some comments about the capital structure. It looks like it's getting into a better place. Just how should we think about the capital structure over the next, like, 12 to 24 months? Thank you.

speaker
Mark Marino
Chief Financial Officer

Yeah. Yeah, hey, David. This is Mark. Hey, Mark. Our motivation, or our intent, is to be deleveraging. That's our top priority as we think about some of the deals we've announced, some of our capital requirements for this year, the debt stack that's going to be due in the middle of 2028, and getting the leverage down through, you know, an increase in operating leverage, EBITDA, as well as additional cash flow. So, as we structure some of these deals, right, the intent isn't to go, you know, take on more expenses and add to our existing debt maturities, but to decrease our leverage, which went, I think, from 8.6 to 8.3 quarter over quarter, right? And we continue to stay focused on, you know, the out-quarters and finding ways to delever, right? You'll notice in the quarter, we actually repurchased some of our debt, roughly $96 million notional, at a pretty significant discount, right? So, we're looking for ways to deploy capital such that, you know, we can reduce that, similar to what you've seen this quarter.

speaker
David Page
Analyst, RBC Capital Markets

Great. Thank you. That's very helpful. Great to see the momentum, and looking forward to working together.

speaker
Operator
Conference Operator

Thank you. That concludes today's question and answer session. I'd like to turn the call back to Sagar Hebar for closing remarks.

speaker
Sagar Hebar
Head of Investor Relations

Thank you, everyone, for joining us. If you have any questions, please email us at ir@rackspace.com. Have a great rest of your day. Thanks, guys.

speaker
Operator
Conference Operator

This concludes today's conference call. Thank you for participating. You may now disconnect.

Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
