This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
spk01: Good afternoon. My name is Regina, and I will be your conference operator today. At this time, I would like to welcome everyone to the Cadence First Quarter 2024 Earnings Conference Call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question and answer session. If you would like to ask a question during this time, simply press star, then the number one on your telephone keypad. Thank you. I will now turn the call over to Richard Gu, Vice President of Investor Relations for Cadence. Please go ahead.
spk12: Thank you, operator. I'd like to welcome everyone to our first quarter of 2024 earnings conference call. I'm joined today by Anirudh Devgan, President and Chief Executive Officer, and John Wall, Senior Vice President and Chief Financial Officer. The webcast of this call and a copy of today's prepared remarks will be available on our website, cadence.com. Today's discussion will contain forward-looking statements, including our outlook on future business and operating results. Due to risks and uncertainties, actual results may differ materially from those projected or implied in today's discussion. For information on factors that could cause actual results to differ, please refer to our SEC filings, including our most recent Forms 10-K and 10-Q, CFO Commentary, and today's earnings release. All forward-looking statements during this call are based on estimates and information available to us as of today, and we disclaim any obligation to update them. In addition, we'll present certain non-GAAP measures, which should not be considered in isolation from, or as a substitute for, GAAP results. Reconciliations of GAAP to non-GAAP measures are included in today's earnings release. For the Q&A session today, we would ask that you observe a limit of one question and one follow-up. Now I'll turn the call over to Anirudh.
spk11: Thank you, Richard. Good afternoon, everyone, and thank you for joining us today. I'm pleased to report that Cadence had a strong start to the year, delivering solid results for the first quarter of 2024. We came in at the upper end of our guidance range on all key financial metrics and are raising our financial outlook for the year. We exited Q1 with a better-than-expected record backlog of $6 billion, which sets us up nicely for the year and beyond. John will provide more details in a moment. Long-term trends of hyperscale computing, autonomous driving, and 5G, all turbocharged by the AI supercycle, are fueling strong, broad-based design activity. We continue to execute our long-standing Intelligent System Design strategy as we systematically build out our portfolio to deliver differentiated end-to-end solutions to our growing customer base. Technology leadership is foundational to Cadence, and we are excited by the momentum of our product advancements over the last few years and the promise of our newly unveiled products. Generative AI is reshaping the entire chip and system development process, and our Cadence.AI portfolio provides customers with the most comprehensive and impactful solutions for chip-to-systems intelligent design acceleration. Built upon AI-enhanced core design engines, our GenAI solutions, boosted by foundational LLM copilots, are delivering unparalleled productivity, quality of results, and time-to-market benefits for our customers. Last week at CadenceLIVE Silicon Valley, several customers, including Intel, Broadcom, Qualcomm, Juniper, and Arm, shared their remarkable successes with solutions in our Cadence.AI portfolio. Last week, we also launched our third-generation dynamic duo, the Palladium Z3 emulation and Protium X3 prototyping platforms, to address the insatiable demand for higher-performance and increased-capacity hardware-accelerated verification solutions. Building upon the successes of the industry-leading Z2 and X2 systems, these new platforms set a new standard of excellence, delivering more than twice the capacity and 50% higher performance per rack than the previous generation. Palladium Z3 is powered by our next-generation custom processor and was designed with Cadence AI tools and IP. The Z3 system is future-proof with its massive 48 billion gate capacity, enabling emulation of the industry's largest designs for the next several generations. The Z3 and X3 systems have been deployed at select customers and were endorsed by NVIDIA, Arm, and AMD at launch. We also introduced the Cadence Reality digital twin platform, which virtualizes the entire data center and uses AI, high-performance computing, and physics-based simulation to significantly improve data center energy efficiency by up to 30%. Additionally, Cadence's cloud-native molecular design platform, Orion, will be supercharged with NVIDIA's BioNeMo and NVIDIA microservices for drug discovery to broaden therapeutic design capabilities and shorten time to trusted results. In Q1, we expanded our footprint at several top-tier customers and furthered our relationships with key ecosystem partners. We deepened our partnership with IBM across our core EDA and systems portfolio, including a broad proliferation of our digital, analog, and verification software and an expansion of our 3D-IC packaging and system analysis solutions.
We strengthened our collaboration with GlobalFoundries through a significant expansion of our EDA and system solutions that will enable GF to develop key digital, analog, RF, mmWave, and silicon photonics designs for the aerospace and defense, IoT, and automotive end markets. We announced a collaboration with Arm to develop a chiplet-based reference design and software development platform to accelerate software-defined vehicle innovation. We also further extended our strategic partnership with Dassault Systèmes, integrating our AI-driven PCB solution with Dassault's 3DEXPERIENCE Works portfolio, enabling up to a 5x reduction in design turnaround time for SOLIDWORKS customers. Now let's talk about our key highlights for Q1. Increasing system complexity and growing hyperconvergence between the electrical, mechanical, and physical domains are driving the need for tightly integrated core design and analysis solutions. Our system design and analysis business delivered steady growth as our AI-driven design optimization platforms, integrated with our physics-based analysis solutions, continue delivering superior results across multiple end markets. Over the past six years, we have methodically built out our system analysis portfolio, and with the signing of the definitive agreement to acquire BETA CAE, we are now extending it to structural analysis, thereby unlocking a multi-billion-dollar TAM opportunity. BETA CAE's leading solutions have a particularly strong footprint in the automotive and aerospace verticals, including at customers such as Stellantis, General Motors, Renault, and Lockheed Martin. Our Millennium supercomputing platform, delivering phenomenal performance and scalability for high-fidelity simulation, is ramping up nicely. In Q1, a leading automaker expanded its production deployment of Millennium to multiple groups after a successful early access program in which it realized tremendous performance benefits. Allegro X continued its momentum and is now deployed at well over 300 customers, while Allegro X AI, the industry's first fully automated PCB design engine, is enabling customers to realize significant 4 to 10x productivity gains. Samsung used Celsius Studio to uncover early design and analysis insights through precise and rapid thermal simulations for 2.5D and 3D packages, attaining up to a 30% improvement in product development time. And a leading Asian mobile chip company used Optimality Intelligent System Explorer AI technology and the Clarity 3D Solver, obtaining a more than 20x design productivity improvement. Ever-increasing complexity in system verification and software bring-up continues to propel demand for our functional verification products, with hardware-accelerated verification now a must-have part of the customer design flow. On the heels of a record year, our hardware products continued to proliferate at existing customers while also gaining some notable competitive wins, including at a leading networking company and at a major automotive semiconductor supplier. Demand for hardware was broad-based, with particular strength seen at hyperscalers, and over 85% of the orders during the quarter included both platforms. Our Verisium platform, which leverages big data and AI to optimize verification workloads, boost coverage, and accelerate root cause analysis of bugs, saw accelerating customer adoption.
At CadenceLIVE Silicon Valley, Qualcomm said that they used Verisium SimAI to increase total design coverage automatically while getting up to a 20x reduction in verification workload runtime. Our digital IC business had another solid quarter as our digital full flow continued to proliferate at the most advanced nodes. We had strong growth at hyperscalers, and over 50 customers have deployed our digital solutions on three nanometer and below designs. Cadence Cerebrus, which leverages GenAI to intelligently optimize the digital full flow in a fully automatic manner, has now been used in well over 350 tapeouts. Delivering best-in-class PPA and productivity benefits, it's fast becoming an integral part of the design flow at marquee customers as well as in DTCO flows for new process nodes at multiple foundries. In our custom IC business, Virtuoso Studio, delivering AI-powered layout automation and optimization, continued ramping strongly, and 18 of the top 20 semis have migrated to this new release in its first year. Our IP business continued to benefit from market opportunities offered by AI and multi-chiplet-based architectures. We are seeing strong momentum in interface IPs that are essential to AI use cases, especially HBM, DDR, UCIe, and PCIe at leading-edge nodes. In Q1, we partnered with Intel Foundry to provide design software and leading IP solutions at multiple Intel advanced nodes. Our Tensilica business reached a major milestone of 200 software partners in the HiFi ecosystem, the de facto standard for automotive infotainment and home entertainment. And we extended our partnership with one of the top hyperscalers on its custom silicon SoC design with our Xtensa NX controller. In summary, I'm pleased with our Q1 results and the continuing momentum of our business. Spiraling chip and system design complexity and the tremendous potential of AI-driven automation offer massive opportunities for our computational software to help customers realize these benefits. In addition to our strong business results, I am proud of our high-performance inclusive culture and thrilled that Cadence was named by Fortune and Great Place to Work as one of the 2024 100 Best Companies to Work For, ranking number nine. Now I will turn it over to John to provide more details on the Q1 results and our updated 2024 outlook.
spk02: Thanks, Anirudh, and good afternoon, everyone. I am pleased to report that Cadence delivered strong results for the first quarter of 2024. First quarter bookings were a record for Q1, and we achieved record Q1 backlog of approximately $6 billion. A good start to the year, coupled with some impressive new product launches, sets us up for strong growth momentum in the second half of 2024. Here are some of the financial highlights from the first quarter, starting with the P&L. Total revenue was $1.009 billion. GAAP operating margin was 24.8% and non-GAAP operating margin was 37.8%. GAAP EPS was $0.91 and non-GAAP EPS was $1.17. Next, turning to the balance sheet and cash flow, cash balance at quarter end was $1.12 billion, while the principal value of debt outstanding was $650 million. Operating cash flow was $253 million. DSOs were 36 days, and we used $125 million to repurchase Cadence shares in Q1. Before I provide our updated outlook, I'd like to share some assumptions that are embedded in our outlook. Given the recent launch of our new hardware systems, we expect the shape of hardware revenue in 2024 to weigh more toward the second half, as our team works to build inventory of the new systems. Our updated outlook does not include the impact of our pending BETA CAE acquisition, and it contains the usual assumption that export control regulations that exist today remain substantially similar for the remainder of the year. Our updated outlook for fiscal 2024 is: revenue in the range of $4.56 to $4.62 billion, GAAP operating margin in the range of 31% to 32%, non-GAAP operating margin in the range of 42% to 43%, GAAP EPS in the range of $4.04 to $4.14, non-GAAP EPS in the range of $5.88 to $5.98, and operating cash flow in the range of $1.35 to $1.45 billion. And we expect to use at least 50% of our annual free cash flow to repurchase Cadence shares. With that in mind, for Q2 we expect revenue in the range of $1.03 billion to $1.05 billion, GAAP operating margin in the range of 26.5% to 27.5%, non-GAAP operating margin in the range of 38.5% to 39.5%, GAAP EPS in the range of $0.73 to $0.77, and non-GAAP EPS in the range of $1.20 to $1.24. And as usual, we've published a CFO Commentary document on our investor relations website, which includes our outlook for additional items, as well as further analysis and GAAP to non-GAAP reconciliations. In summary, Cadence continues to lead with innovation and is on track for a strong 2024 as we execute to our Intelligent System Design strategy. I'd like to close by thanking our customers, partners, and our employees for their continued support. And with that, operator, we will now take questions.
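For readers working through the capital-return framing above (at least 50% of annual free cash flow used for buybacks), a minimal back-of-the-envelope sketch follows. The operating cash flow range is from the call; the capital-expenditure figure is purely an illustrative assumption, since capex was not quantified on the call.

```python
# Back-of-the-envelope check of the buyback framing in the fiscal 2024 outlook.
# Operating cash flow range is from the call; capex is a hypothetical placeholder.

ocf_low, ocf_high = 1.35e9, 1.45e9   # operating cash flow guide, USD
assumed_capex = 0.10e9               # illustrative assumption only, not disclosed on the call

fcf_mid = (ocf_low + ocf_high) / 2 - assumed_capex  # implied free cash flow midpoint
min_buyback = 0.5 * fcf_mid                         # "at least 50% of free cash flow"

print(f"Implied FCF midpoint: ${fcf_mid / 1e9:.2f}B")
print(f"Minimum implied buyback: ${min_buyback / 1e9:.2f}B")
```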
spk01: At this time, I would like to remind everyone who wants to ask a question to please press star, then the number one on your telephone keypad now. We ask that you please limit yourself to one question and one follow-up. We'll pause for a moment to compile the Q&A roster. Your first question comes from the line of Joe Vruwink with Baird. Please go ahead.
spk00: Great. Hi, everyone. Thanks for taking my questions. Maybe just to start with your outlook for the year, can you perhaps provide your second-half assumption before this quarter versus where it stands today, in terms of just recalibrating around delivery schedules? And maybe a good way to frame it: I think in the past you gave a share of this year's revenue that was going to come from upfront products. Is that still the right range? If it is the right range, you can obviously see more is going to end up landing in the second half, and so does that fit with your original views, or how is that skewed relative to what might have been the expectation a quarter ago?
spk02: That's a great question, Joe, and I think you've hit on the main point there. Upfront revenue is driving a lot of the quarter-over-quarter trends this year. When I look at last year, you'll recall that we had a large backlog of hardware orders, and we dedicated 100% of hardware production in Q1 to deliver that hardware in Q1 2023. As a result, 20% of our Q1 2023 revenue was from upfront revenue sources. In contrast, in Q1 this year, only 10% of the total revenue is coming from upfront revenue. But to reflect on where we thought we were this time last quarter, we still expect that upfront revenue will probably be 15% to 20%. Around the midpoint, that's a 17.5% expectation for upfront revenue this year and a midpoint of, say, 82.5% for recurring revenue. That's still the same as what we thought this time last quarter. That contrasts with last year, when I think 16% of our revenue was upfront. To put dollar terms on it, last year $650 million of our revenue was upfront; this year we're expecting roughly $800 million to be upfront. But first half versus first half, last year we had $350 million in the first half and $300 million in the second half, because we had prioritized all those shipments in hardware, and it skewed the numbers toward the first half last year. So $350 million and $300 million, ending with the $650 million of upfront revenue last year. This year, it looks more like $250 million and $550 million at the back end. That's largely a result of the record backlog, or record bookings quarter, we had in Q1. We've got substantial backlog in IP that we're scaling up to deliver, and a lot of that revenue falls into the second half. And also, we launched these new hardware systems last week. Hardware revenue is expected to be more second-half weighted now because, based on what we've heard, and Anirudh can chime in here on the technical aspects of the new hardware systems, we expect them to be so popular that a lot of demand will shift to those new hardware systems, and we'll have to ramp up production to be able to deliver that demand. So it shifts some of the upfront revenue to the second half. I think upfront revenue is really driving a lot of the skewed metrics. Anirudh, do you want to talk about Z3?
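For readers tracking the mix math, here is a minimal sketch of the upfront-versus-recurring split John walks through, using only the figures stated on the call plus FY2023 total revenue of roughly $4.09 billion, which comes from Cadence's reported results rather than this call.

```python
# Upfront vs. recurring revenue mix, using figures stated on the call (all in $M).

# Last year (FY2023): ~$650M upfront, split ~$350M H1 / ~$300M H2.
upfront_2023 = 350 + 300                 # $650M
revenue_2023 = 4090                      # ~$4.09B reported FY2023 revenue (approximate)
print(f"FY2023 upfront share: {upfront_2023 / revenue_2023:.0%}")          # ~16%

# This year (FY2024): ~$800M upfront, split ~$250M H1 / ~$550M H2.
upfront_2024 = 250 + 550                 # $800M, second-half weighted
revenue_2024_mid = (4560 + 4620) / 2     # midpoint of the $4.56B-$4.62B guide
print(f"FY2024E upfront share: {upfront_2024 / revenue_2024_mid:.1%}")     # ~17.5%
print(f"FY2024E recurring share: {1 - upfront_2024 / revenue_2024_mid:.1%}")  # ~82.5%
```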
spk11: Yeah, absolutely. So we are very proud of the new systems we launched. As you know, we are a leader in hardware-based emulation with Z2 and X2, and the last time we launched new systems was in 2021. Z1 and X1 were 2015, and then Z2 and X2 were 2021, so that was like a six-year cycle. What I'm particularly pleased about is that we have a major, major refresh; it's a game-changing product, but it was also developed in only three years. So in 2024 we have a new refresh, and it's a significant leap in terms of capacity. Even last week at our CadenceLIVE conference, NVIDIA and Jensen talked about how they use Z2 to design their latest chips like Blackwell, and it's also used by all the major silicon companies and system companies to design their chips. But what is truly exciting about Z3 and X3 is there's a big leap: Z3 has four or five times more capacity than Z2, and it's much higher performance. So it sets us up nicely for the next several years to be able to design several generations of the world's largest chips. And the reason we can do it in three years versus six years is that this is designed internally in Cadence for TSMC advanced nodes. We're using all our latest tools, our latest AI tools, and we are using all our IP. So it's a very good validation of our own capabilities that we can accelerate our design process, but it really sets up hardware verification and the overall verification flow for using the new systems. Now, as a result, normally there is a transition period when you have a new system, and we went through that twice already in the last ten years. Customers naturally will go to the new systems, and then we build them over the next one or two quarters. But that is the right thing to do for the business long-term. It's good to accelerate, because these AI chips are getting bigger and bigger, right? So the demand for emulation is getting bigger and bigger, and I can give you more stats later. We felt that it was important to accelerate the development of the next-generation systems to get ready for this coming AI wave, so for the next several years we are very well positioned. As a result, it does have some impact quarter to quarter, but that's well worth it in the long run.
spk00: That's all very helpful. Thank you. Second question, I wanted to ask how some of the things you just spoke of, but also AI, start to change the frequency of customers engaging with you and how they approach renewals. You just brought up how, for the hardware platforms, the velocity there has improved: from the first generation to the next was six years, and now we're down to a three-year new product cycle. When I listened to your customers last week talk about AI, they're not just generating ML models that can be reused; each run becomes better if you're incorporating prior feedback. So it would just seem like AI itself not only creates stickiness, but there would be an incentive to deploy it maybe more broadly than a customer traditionally would think about deploying new products. Does that mean the average run rate of a renewal ends up becoming much bigger, and we'll start to see that flow into the backlog?
spk11: Yeah, that's the correct observation. You know, as we have said before, AI has a lot of profound impact on Cadence and a lot of benefit to our customers. So there are three main areas. One is the build-out of the AI infrastructure, whether it's NVIDIA or AMD or all the hyperscalers, and we are fortunate to be working with all the leading AI companies. So that's the first part. And in that part, as they design bigger and bigger chips, because the big thing in AI systems is they're parallel, so they need bigger and bigger chips, the tools have to be more efficient and the hardware platforms have to support that, and that's why the new systems. Now, the second part of AI is applying AI to our own products, which is the Cadence.AI portfolio. And like you mentioned, last week we had several customers talking about success with that portfolio, including, like I mentioned, Intel, Broadcom, Qualcomm, Juniper, and Arm, and the results are significant. So we are no longer in kind of a trial phase of whether these things will work; now we're getting pretty significant improvements. Like we mentioned, MediaTek got like a 6% power improvement, and one of the hyperscale companies got an 8% to 10% power improvement. These are significant numbers. So it is leading to deployment of our AI portfolio, and I think we mentioned the AI run rate on a trailing 12-month basis is up 3x. The design process already was well automated; EDA has a history of automating design over the last 30 years. AI is in a unique position because you need the base process to be somewhat automated to apply AI. We were already well automated, and now AI can take it to the next level of automation. So that's the second part of AI, which I'm pretty pleased about, applying it to our own products. And then the third part of AI proliferation is new markets that open up, like data center design with Cadence Reality, which we announced, or Millennium, which is designing systems with acceleration, or digital biology. Those take a little longer to ramp up. But we have these three kinds of impact of AI: the first being direct design of AI chips and systems, the second applying AI to our own products, and the third being new applications of AI. That's great. Thank you very much.
spk01: Your next question will come from the line of Charles Shi with Needham & Company. Please go ahead.
spk10: Hi, thanks. Good afternoon. I just want to ask about the China revenue in Q1. It looks pretty light. I just wonder whether that's part of the reason that's weighing on your Q2. I understand you mentioned that you're going through that second-gen to third-gen hardware transition right now; maybe that's another factor. But from your perspective, from a geographical standpoint, what's the outlook for China for the rest of the year and specifically Q2? Thanks.
spk02: Hi, Charles. That's a great observation. If you recall, this time last year we were talking about a very strong Q1 for China, for functional verification, and for upfront revenue. I think those three things are often linked. But you contrast that with this year: China is down at 12%, upfront revenue is lower at 10% compared to 20%, and functional verification, of course, is lapping those really tough comps from when we dedicated 100% of hardware production to deliveries. I think when you look at China, we're blessed that we have the geographical diversification that we have across our business. What we're seeing in China is strong design activity. And while the percentage of revenue dropped to 12%, it pretty much goes in line: a lower hardware, lower functional verification, lower upfront revenue quarter would generally lead to a lower China percentage quarter. But we have good diversification, and while China is coming down, we could see other Asia increasing, and our customer base is really mobile. That geographical mix of revenue is based on consumption and where the products are used. But as we do more upfront revenue in the second half, we'd expect the China percentage to increase.
spk10: Thanks. I want to ask another question about the upcoming ramp of the third-generation hardware. What exactly is the nature of the demand? Is it replacement demand, like your customers replacing your Z2 and X2 with the Z3 and X3, or do you expect a lot more greenfield customers adopting Z3 and X3? And more importantly, I think you mentioned about a four to five times capacity increase; they can design much larger chips with a lot more transistors. How much of an ASP uplift are you expecting from the Z3/X3 versus Z2/X2?
spk11: Charles, all good observations, so let me try to answer them one by one. In terms of your last point, normally if a system has more capacity, like this one has, it can do more. So it gives more value to our customers, and we are able to get more value back. Typically, newer systems are better that way for us and better for the customer. And to give you an example, I mean, these things are pretty complicated, so let's just take Z3. For Z3, we designed this advanced TSMC chip ourselves, and this is one of the biggest chips that TSMC makes. One rack will have more than 100 of these chips, and then we can connect up to 16 racks together. So if you do that, you have thousands of full-reticle chips emulating, and these are all liquid-cooled, connected by optical and InfiniBand interconnect. This is truly a multi-rack supercomputer, and what it can do is emulate very, very large systems very, very efficiently. Even on Z2, like NVIDIA talked about last week, even Blackwell, which is the biggest chip in the world right now with 200 billion transistors, was emulated on a few racks of Z2. So now, with 16 racks of Z3, we can emulate chips which are like five times bigger than Blackwell, which is already the biggest chip in the world, right? That gives a lot of runway for our customers, because with AI, the key thing is that the capacity of the chip needs to keep going up. And not just a single chip; look at Blackwell, they have two full-reticle chips on a package. So you will see more and more, not just big chips on a single node, but multiple chips in a package for AI, and also 3D stacking of those chips. What this allows is emulating not just a single large chip, but multiple chips, which is super critical for AI. So I feel this puts us in a very good position for all this AI boom that is happening, not just with our partners like NVIDIA and AMD, but also all the hyperscaler companies. That will be the primary demand: higher-capacity chips require more hardware. And then X3 will go along with that for software prototyping, which is done on FPGAs. We also have some unique workload capabilities apart from the size of these big systems and the capacity and performance being much better. There are new features for low power and for analog emulation that help in the mobile market. We talked about Samsung working with us, especially on this four-state emulation, which is a new capability in emulation over the last 10 years. So I think it's a combination of new customers, a combination of competitive wins, but also continuing to lead in terms of the biggest chips in the world, which are required for AI processing now and years from now. The size of these chips, as you know, is only going to get bigger in the next few years, and we feel that Z3 and X3 are already set up for that. Thanks.
spk01: Your next question will come from the line of Lee Simpson with Morgan Stanley. Please go ahead.
spk04: Great. Thanks. And thanks very much for squeezing me on. I just wanted to go back to what you'd said last quarter, if I could. It did seem as though you were saying that there was an element of exclusivity around your partnership with Arm, your EDA partnership around Arm Total Design. I wondered how that was developing, if indeed you're collaborating to accelerate the development of custom SoCs using Neoverse. It looks as though it's pulled in quite a lot of work, or continues to pull in quite a lot of work, around functional verification. And I guess as we look now at third-generation toolsets for Palladium and Protium, leaving aside some of the rack-scale development that we're seeing out there, is Arm Total Design development work pulling in, or likely to pull in, some of that second-half business? And that means not just hyperscalers, but perhaps in AI PCs and beyond. Thanks.
spk11: Yeah, thank you for the question. I mean, we are proud to have a very strong partnership with Arm and with our joint customers, the Arm and Cadence customers. I think we have had a very strong partnership over the last 10 years, I would like to say, and it's getting better and better. And yes, we talked about our new partnership on Total Compute. Also, I think this quarter we talked about our partnership with Arm on automotive. What is interesting to see, and of course you know this already, is that Arm continues to do well in mobile, but also now in HPC, server, and automotive markets. So we are pleased with that partnership. They are also doing more subsystems and higher-order development, and that requires more partnership with Cadence in terms of the back end and digital flow, and also verification with hardware platforms and other verification tools.
spk04: Great. Maybe just a quick follow-up. You know, we've seen quite a bit of M&A activity from yourselves of late, including the IP house acquisition of Invecas. You've bought the Rambus IP assets. You've now acquired BETA in the computer-aided engineering space for the car. And there's been quite a lot of speculation in the market about the possibility of a transformative deal being done. And I guess, given that we have you on the mic here, maybe we could get a sense from yourself: what would be the sort of thing that a business like Cadence could look for? Would you look for high value in a vertical contiguous to what you've already addressed, let's say, in automotive? Or would it be something more waterfront, a business that spans several verticals, maybe being more relevant across the industrial software space? Could that be the sort of ambition that Cadence would have, given the silicon-to-systems opportunities that are emerging? Thanks.
spk11: Well, thank you for the question. You know, a lot of times there are a lot of reports, and we normally don't comment on these reports, and people get very creative in this reporting. But what I would like to say is that our strategy hasn't changed. It's the same strategy from 2018. First of all, I want to make sure, you know, that we are focused on our core business, which is EDA and IP. And yes, I launched this whole initiative on systems, and it's super critical, you know, chips, silicon to systems. But one thing that I even mentioned last time, what is different from 2018 to now, is that EDA and IP are much more valuable to the industry. Our core business itself has become much more valuable because of AI. So our first focus is on our core business; we are leading in our core business. Our first focus is on organic development. That's what we like; we always say that's the best way forward. And along with that, we have done, like you mentioned, some opportunistic M&A, which I would like to say has usually been tuck-in M&A, and that adds to our portfolio. It helped us in system analysis. We also did it in IP, because I'm very optimistic about IP growth this year, and, you know, we talked about our new partnership with Intel Foundry in Q1. Also, we acquired the Rambus IP assets, which are HBM, and HBM is of course a critical technology in AI, and we are seeing a lot of growth in HBM this year. Now, we have booked that business; the deliveries will happen toward the second half of the year, as John was saying earlier. In terms of BETA, it made sense because it is a very good technology, and it's the right size for us. We are focused on finishing that acquisition and also integrating it, and that will take some time. So that's our primary focus in terms of M&A. It's a very good technology, and they have a very good footprint in the automotive and aerospace verticals. So just to clarify, we have the same strategy from 2018, and that's working well: it's primarily organic, with very synergistic computational software, mostly tuck-in acquisitions. That's great. Thank you.
spk01: Your next question comes from the line of Ruben Roy with Stifel. Please go ahead.
spk08: Thank you. Anirudh, I had a follow-up on the Z3/X3 commentary that you had. One of the things I was thinking about, especially as you talked about the InfiniBand low-latency network across the multiple racks of Z3: you had mentioned that you're up to an 85% attach rate of both systems with Z2/X2, and I would imagine that would continue to go up. If you can comment on whether the new systems incorporate InfiniBand across Z3 and X3, and if so, do you expect that to be sort of a selling point for your customers that are designing these big chips, which in many cases these days have software development attached to the design process? Do you think that the attach rates continue to move higher for both systems?
spk11: Yes, absolutely. You know, I think I started this in, I forget now, '16, I think, this Dynamic Duo, or '15 and '16: we have a custom processor for Palladium, and we use FPGAs for Protium. This is what we call the Dynamic Duo, because Palladium is best in class for chip verification and RTL design, and Protium is best in class for software bring-up, with a common front end. As a result, over the years this has become the right approach, and our customers are fully embracing both these systems, as they invariably do both chip development and software development. A perfect example is, of course, our long-term development partner NVIDIA. NVIDIA is no longer doing just chip development; they have a massive software stack, and that's true for all the hyperscalers. So we see that trend continuing. And to your question, we do use NVIDIA products like InfiniBand in our systems on Z3, because Z3 is a very unique architecture. It requires very, very high-speed interconnect; it's almost like a supercomputer. So it requires optical and InfiniBand in Z3. Now, in X3, we are using AMD FPGAs, which are fabulous, but it does not require that tight interconnect speed. So InfiniBand is used more in Z3 versus X3. But X3 is a great system too; we're using the latest AMD FPGAs, it has 8x higher capacity than X2, and there is all kinds of innovation on the software side as well. So we are very pleased. I'm very confident that we have true leadership in these hardware platforms, both Palladium and Protium. We are also pleased, like I said earlier, that we were able to refresh them much sooner than the market expected, given our track record. And we are seeing a lot of demand for both of these systems together going forward. Yeah.
spk08: That's helpful. Thank you, Anirudh. And then a follow-up for John. Anirudh mentioned the HBM IP business booked and shipping in the second half. I was wondering if you can give us a bigger-picture update on how you're viewing IP in general, in terms of bookings relative to the ramps of those IP sales. Is it sort of the entire segment that is second-half weighted? Should we think about the second half ramping at a heavier weight than the first half? Any update there would be helpful.
spk02: And everything remains on track there for a very strong growth year for 2024 for the IP business. Of course, the timing of revenue recognition depends on the timing of deliveries. But we had a tremendous bookings quarter in Q1 and we're preparing to scale for a number of deliveries of IP in the second half. But we expect the IP to have a very strong year this year. We're pleased with the overall business momentum. But we need to scale up some headcount to prepare to deliver on some of the larger backlog orders.
spk11: Yeah, one thing I want to highlight, and I think you may have seen this, is our partnership with Intel and IFS that was concluded in Q1. It's really good to see Pat and Intel investing more in the foundry business and also working more closely with us. So that's also a key contributor to IP. But like John said, we have to hire the people and port our portfolio to the Intel process, and that takes some time. So more of that will come toward the end of the year and next year. But we are pleased with that new partnership on IP.
spk08: Very helpful. Thanks, guys.
spk01: Your next question comes from the line of Jay Vleeschhouwer with Griffin Securities. Please go ahead.
spk13: Thank you. For you, John, first, and then Anirudh. So for John, thinking back to a recent conversation we had, could you comment, as a measure of EDA market health or dynamics, on what you're seeing or expecting in terms of intra-contract new or expansion business? You know, this is an ongoing phenomenon in EDA. Maybe talk about what you're seeing in that kind of business beyond the customary renewals schedule? And then, relatedly, how are you thinking about pricing for this year, given that EDA generally has substantially better pricing capacity than you might have had in years past? And then my follow-up for Anirudh.
spk02: Sure. Thanks, Jay. Great question. Yeah, I think what you're getting at there is what we would call add-ons. Typically, we have the very predictable software renewal business, and you'll see in the recurring revenue part of our business, I think we're at double-digit revenue growth, but over the past few years I think that's been at low teens. We're seeing that a number of customers that have adopted AI tools are maybe not coming back and purchasing add-ons as frequently. But right now, we're focused on proliferating those AI tools into accounts. I think there's an opportunity to increase pricing there, but maybe now is not the right time. We have such strong momentum on the upfront revenue business, and we're preparing for scale into the second half there, but we'll have plenty of revenue growth in the second half of the year. We can continue to focus on proliferating our AI tools and technology into accounts. And pricing is certainly something we can focus on more intently in future years, but right now the focus is on proliferation. I don't know, Anirudh, do you have anything to add to that?
spk13: Okay, so Anirudh, thinking back to your conference last week, particularly the GenAI track, it was interesting, of course, to hear the adoption presentations by Renesas, Intel, and so forth. But what seemed to be taking place is a heavy focus on Cerebrus, which makes sense; it's the one longest in market. So perhaps you could talk about how you're thinking about the adoption curve for the other brands aside from Cerebrus. And are there any critical parts of the design flow that might not necessarily be amenable to AI enablement? We hear a lot about implementation, analog, verification, but we don't hear a lot about AI being applicable to synthesis, for example. So maybe talk about those areas where it makes a lot of sense and those where perhaps it will remain more or less conventional technology.
spk11: Yeah, thanks, Jay, for the question. So, as you know, we have five major AI platforms, with Cerebrus in digital implementation being the one that has been out the longest. And Cerebrus is doing quite well, like you noted; we also commented on more than 350 tapeouts and a lot of PPA improvement. But all the other ones are doing well too. Sometimes we have so many products that we don't talk enough about the others, but verification, with Verisium, is doing quite well, and I mentioned Qualcomm last week talked about pretty impressive results. Verification, as you know, is the exponential problem, because as the chips get bigger, the verification task gets exponentially bigger, so the benefit of AI can be significant in verification. I think you will see in the next few quarters and years that verification will be as important as implementation in terms of the benefits of AI. And then the other area I would like to highlight is PCB, Allegro, and packaging, because that area hasn't seen that much automation. Allegro is the leading platform for packaging and PCB, and I'm really proud of Allegro X AI. We talked about several customers, including Intel last week, seeing a 4 to 10x improvement using X AI in PCB. So apart from digital, I think the next two are verification and Allegro and PCB. And then the area that hasn't done as well is not so much design optimization as design generation, and I think there, these LLM-based models do provide a lot of promise. Historically, we haven't done as much design generation, which is almost pre-RTL, right, going from spec to RTL. That's truly the creative part of the design process, and then once you have RTL, it's more the optimization part in digital and verification. So I think that's where we have to see. But there are some initial results, which I think we mentioned last week. We are still in early phases, but we worked with one or two customers in which we took like a 40-, 50-page spec document, this English document, and were able to automatically generate RTL from it. And the RTL quality is pretty good. So again, we have to see how that goes, but that requires these really advanced LLM capabilities. That's something to be seen, but if that works well, that could be another very interesting application of GenAI. Very good. Thank you.
spk01: Your next question comes from the line of Gary Mobley with Wells Fargo Securities. Please go ahead.
spk03: Hey, guys. Thanks so much for taking my questions. John, I appreciate the fact that China revenue in the first quarter was down against a tough year-ago comp on the hardware verification side as you worked on backlog. And I assume that you still expect China to be dilutive to overall company growth in the fiscal year. Could you speak to whether or not you're starting to see U.S. export controls begin to impact your ability to do business there, whether that be a function of restrictions around gate-all-around or certain China customers added to the entity list?
spk02: Hi, Gary. Thanks for the question. And just to clarify, I think last quarter I said I expected China revenue to be flat to down this year, and I think we still expect that. That's because last year was such a strong year, and there was kind of an oversized portion of that hardware catch-up that was delivered to China. I think it skewed the China number higher last year, so we're lapping pretty tough comps. But design activity in China remains very strong, and we have a lot of diversification; there's strength in other parts of the world. We're very comfortable with the 2024 outlook, and we've factored all the impact of geopolitical risk in there as best we can and tried to de-risk China as much as we can in our guide.
spk03: Okay. To follow up, I wanted to ask about bookings trends for the balance of the year. You obviously highlighted better than seasonal Q1 booking trends. How would you expect the bookings to play out for the balance of the year and to what extent will Z3 and X3 factor into that for the balance of the year? Thank you.
spk02: Yeah, I mean, it's hard to predict in terms of Z3 and X3; we definitely need another quarter to see that. We expect strong demand, and we expect strong revenue growth, and we're preparing for scale into the second half on the hardware side, but we need to at least see another quarter of demand. And normally with hardware, I don't like taking up the year for hardware until I see the pipeline in the summer. So we're trying to be conservative there. But generally on the hardware side, yeah, we're basically preparing for scale. We'll build those systems as quickly as possible, and we expect strong demand there.
spk01: Your next question comes from the line of Jason Celino with KeyBanc Capital Markets. Please go ahead.
spk07: Hey, thanks for fitting me in. And Anirudh, congrats to your R&D team. You know, it is impressive that they reduced the cycle there, all while designing that M1 box too, right? So maybe first, on the Z3 and X3, do they become available in Q3? I guess, when can customers start putting orders in for them?
spk11: Yeah, first of all, thanks. And yes, they become available now. You know, they'll ramp in Q3 and then Q4, but we already have them running at several early customers. Normally when we announce something, as you know, at one of our lead partners it will have been running for three months already and be very stable. But in general, it will be more Q3 and then Q4, because normally with any system there's like a three- to six-month kind of overlap. So we will still sell Z2 and X2, and then move to Z3 and X3. That's a natural part of it. That's also contributing to this quarter-by-quarter variation a little bit, but it will ramp, and Q3 will be bigger, and then Q4 should be bigger than that.
spk02: that's also you know contributing to this quarter by quarter variation a little bit but but it will ramp and q3 will be bigger and then q4 should be bigger than that yeah we've kind of we tried to de-risk the guide for with the assumption that there's going to be strong demand for the newer systems but uh it'll give us the opportunity to put some of the older systems into the cloud because we have a we have a large underserved community that uh that want to use uh our emulation capacity but we've We haven't had a lot of capacity to share with them through our cloud offering. To the extent we do that, that will lead to rateable revenue, though, because I think when it's used in the cloud, you get revenue over time, whereas when we deliver and they use it on-prem, we take revenue up front. Yeah.
spk11: But the demand is good; it just takes like one to two quarters to ramp. Yes. Yeah.
spk07: Okay. Yeah, because that's kind of what I was going to ask next is I think last time in 2021, you had like a six-month period where you were selling both, and I think you were trying to clear inventory for the Z1 and X1. It doesn't sound like you'll be trying to do that again, because when I think about this Q2 air pocket, is it a function of customers waiting for Z3X3, or is it a function of, you know, they might not want to buy the older versions?
spk02: Well, we've based the guide on the assumption that many customers might wait. We intend to sell them side by side, but to the extent customers wait, it will shift some hardware revenue into the second half of the year. We've anticipated that, so that's within the guide. To the extent customers continue to buy Z2, and we're not putting those into the cloud but selling those outright as well, then that will change the profile or the shape of revenue. But we expect that this new system will trigger a lot of demand.
spk07: Okay. Perfect. Thank you both.
spk01: Your next question comes from the line of Vivek Arya with Bank of America Securities. Please go ahead.
spk06: Thank you for taking my question. I think you mentioned second-half growth will be driven a lot more by hardware. Do you think you will see all the benefit of the hardware refresh within this year? Will it be done, or will it continue into '25? I guess my bigger question really is that if I exclude the upfront benefits from last year and this year, your recurring business is expected to grow about 10%. And I'm curious, Anirudh, is that in line with the kind of recurring revenue growth you are expecting, or we should be expecting, going forward, along with periodic hardware refreshes, or is that not the right way to interpret the core recurring part of your business?
spk11: A very good question. First of all, in non-recurring it's not just hardware, but also IP in terms of the second half, because like we mentioned, we have new IP business driven by HBM and AI and also Intel IFS, and that is also back-end loaded along with hardware. And then hardware: normally when we launch a new system, it takes one or two years for it to fully ramp. So even though we are not commenting about next year, I'd be surprised if this time it's only a six-month impact. I expect these systems to be used in design for the next five to seven years, so the impact will be not just this year but the following years as well. And in terms of recurring revenue, I think the best way, like we have said, is to look at it on a three-year CAGR basis, because there could be some fluctuation and all. Overall, we are pleased with the recurring revenue growth, and we go from there.
spk02: Yeah, and Vivek, if I could, I'd like to carry in some of Gary's question earlier that I don't think I addressed, because he was asking about the bookings profile for the year. Q2, I think, is our largest software renewals quarter for the year, but I think we explained last quarter that we expect the weighting of bookings, first half to second half, to be about 40/60 this year. The recurring revenue right now in the guide is about double digits, about 10%, and in the past it's been about 13%. We're not really anticipating a huge number of add-ons to grow that above 10%. To the extent that that comes through, it'll be upside to the guide. But what we try to do when we do the guide is de-risk for the risks that we can see.
spk06: Thank you. For my follow-up question on incremental EBIT margin: do you think this greater mix of hardware is impacting the incremental EBIT margin? I think, if I calculated correctly, the new guidance is still below the 50% incremental, right? Or right about there, which is lower than what you have had the last two, three years. Is that the right interpretation, and what can change that?
spk02: Yeah, Vivek, I think what you're referring to really is that, for what, seven years in a row now, we think we've been achieving over 50% incremental margins. It's a matter of pride here; we try to achieve that every year, and we'll certainly be trying to achieve it this year. I think we're in the high 40s, probably about 47%, when you look at this guide right now. I think one of the biggest challenges with something like that is, you know, we do small tuck-in M&A, and I don't want to go over the answer Anirudh gave to Lee Simpson, but organic is delicious here at Cadence. We focus on innovation and growing with organically driven products and then with small tuck-in M&A. But to the extent that we do some larger M&A, and of course we have BETA CAE, which apparently is the gold standard in structural simulation, that's a big acquisition for us, though I think the size of that probably still qualifies as a small tuck-in. When you do something like that, those M&A transactions typically are headwinds to that incremental margin calculation. They'll be beneficial in the long term, but in the short term, M&A can be dilutive, pretty much in the first year, and then becomes accretive later. When we look at our incremental margin, that's a headwind, but we try to overcome that headwind, because normally all we do is these small tuck-in M&As. So I haven't given up on 50% incremental margin for this year. It's a challenge, but we'll do our best to achieve it. Thank you.
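For context, the incremental margin discussed here is the year-over-year change in operating income divided by the year-over-year change in revenue. A minimal sketch of that calculation follows; the numbers are purely illustrative placeholders, not Cadence's reported or guided figures.

```python
# Incremental operating margin: delta operating income / delta revenue.
# All figures below are illustrative placeholders, not Cadence's actual results.

def incremental_margin(rev_prev, margin_prev, rev_curr, margin_curr):
    """Return (change in operating income) / (change in revenue) between two periods."""
    op_prev = rev_prev * margin_prev
    op_curr = rev_curr * margin_curr
    return (op_curr - op_prev) / (rev_curr - rev_prev)

# Example: revenue grows from $4.0B at a 41% margin to $4.6B at a 42.5% margin.
print(f"{incremental_margin(4.0, 0.41, 4.6, 0.425):.0%}")  # ~53% incremental margin
```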
spk01: Your final question will come from the line of Harlan Sur with JPMorgan. Please go ahead.
spk09: Good afternoon. Thanks for taking my question. You know, after a strong 2023, SD&A is starting the year relatively flattish and down about 5% to 6% sequentially. It feels like an unusual starting point for SD&A, especially given all of the drivers that you guys have articulated. Is SD&A expected to also be more second-half loaded? And, this is ex-BETA CAE, but do you expect SD&A to grow in line with or faster than your overall corporate growth target for the full year?
spk02: Yeah, Harlan, that's a great question, and thanks for highlighting that, because I had that on my list of things to say. I think there's something funny going on with the rounding when you apply the growth rate for SD&A for Q1 over Q1. The actual growth rate is probably high single digits, Q1 over Q1. Now, I know that's lapping tough comps against Q1 '23. If you look on a two-year CAGR basis, I think it's up about 17% per annum for SD&A. But we're expecting strong SD&A growth again this year, and it'll be higher than the Cadence average. That's our expectation.
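A quick sketch of the two-year CAGR arithmetic John references; the quarterly figures below are illustrative indexed values, not Cadence's actual SD&A revenue.

```python
# Two-year compound annual growth rate: (end / start) ** (1 / years) - 1.
# Quarterly figures are illustrative, not Cadence's reported SD&A numbers.

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

q1_2022, q1_2024 = 100.0, 137.0   # hypothetical Q1 SD&A revenue, indexed to 100
print(f"Two-year CAGR: {cagr(q1_2022, q1_2024, 2):.1%}")  # ~17.0% per annum
```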
spk09: Great. Thanks for that. And then, Anirudh, you know, there have been lots of new accelerated compute AI SoC announcements just over the past few weeks, right? We saw the Blackwell GPU announcement by one of your big customers, NVIDIA. But we've actually seen even more announcements by your cloud and hyperscale customers bringing their own custom AI ASICs to the market, like Google with TPU v5 and Google with their Arm-based CPU ASIC. Meta unveiled their Gen 2 AI processor chips as well. And in addition to that, their roadmaps seem to be accelerating. So can you give us an update on your systems and hyperscale customers? I mean, are you seeing the design activity accelerating within this customer base? And is the contribution mix from these customers rising above that sort of roughly 45% level going forward?
spk11: Yeah, Harlan, that's a very good observation. The pace of AI innovation is increasing, and not just at the big semi companies, but of course at these system companies. Several announcements did come out, right, including, I think, now Meta is public that Meta is designing a lot of silicon for AI, and of course Google, Microsoft, Amazon. So all the big hyperscaler companies, along with NVIDIA, AMD, Qualcomm, Samsung, and all the other kind of semi companies. There is a lot of acceleration, both on the semi side and on the system side. And we are involved with all the major players there, and we're glad to provide our solutions. I do think, and this is the other thesis we have talked about for years now, right, five, seven years, that the system companies will do silicon, for a lot of reasons: for customization, for schedule and supply chain control, for cost benefits if there is enough scale. And with the workload of AI, if you look at some of the big hyperscaler and social media companies, they're talking about using like 24,000 GPUs to train these new models. I mean, this is an immense amount of compute, and then the size of the models and the number of models increase. So that could go to a much, much higher number than what is required to train these models right now, and of course to do inference on these models. So I think we are still in the early innings in terms of system companies developing their own chips and at the same time working with the semi companies. I expect that to grow, and our business with those system companies doing silicon, I would like to say, is growing faster than the Cadence average. But the good thing is the semi guys are also doing a lot of business. So I don't know about that 45%, because that's a combination of a lot of companies, but overall the AI and hyperscaler companies are doing a lot more, and so are the big semi companies.
spk09: Perfect, thank you. Thanks.
spk01: I'll now turn it back over to Anirudh Devgan for closing remarks.
spk11: Thank you all for joining us this afternoon. It's an exciting time for Cadence, as our broad portfolio and product leadership position us well to maximize the growing opportunities in the semiconductor and systems industry. And on behalf of our employees and our board of directors, we thank our customers, partners, and investors for their continued trust and confidence in Cadence.
spk01: Thank you for participating in today's Cadence First Quarter 2024 Earnings Conference Call. This concludes today's call and you may now disconnect.