Arista Networks, Inc.

Q4 2023 Earnings Conference Call

2/12/2024

spk08: As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section of the Arista website following this call. Ms. Liz Stein, Arista's Director of Investor Relations, you may begin.
spk10: Thank you, Operator. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer; Ita Brennan, Arista's outgoing Chief Financial Officer; and Chantelle Breithaupt, Arista's incoming Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal fourth quarter ending December 31st, 2023. If you would like a copy of this release, you can access it online from our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the first quarter of the 2024 fiscal year, longer-term financial outlooks for 2024 and beyond, our total addressable market and strategy for addressing these market opportunities, including AI, customer demand trends, supply chain constraints, component costs, manufacturing output, inventory management and inflationary pressures on our business, lead times, product innovation, working capital optimization, and the benefits of acquisitions, which are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, and which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Also, please note that certain financial measures we use on this call are expressed on a non-GAAP basis and have been adjusted to exclude certain charges. We have provided reconciliations of these non-GAAP financial measures to GAAP financial measures in our earnings press release. With that, I will turn the call over to Jayshree.
spk12: Thank you, Liz. Thank you, everyone, for joining us this afternoon for our fourth quarter 2023 earnings call. 2023 has been another memorable year for Arista. We gave initial guidance of 25% year-over-year revenue growth and instead achieved well beyond that at 33.8%, driving revenue to $5.86 billion, coupled with record non-GAAP earnings per share for the year of $6.94, up in excess of 50% annually. Back to some Q4 specifics, we delivered revenues of $1.54 billion for the quarter, with record non-GAAP earnings per share of $2.08 due to a one-time favorable tax rate. Services and software support renewals contributed approximately 17% of revenue. Our non-GAAP gross margin of 65.4% was influenced by improving supply chain and greater enterprise mix. International contributions for the quarter registered at 22.3%, with the Americas at 77.7%. This was one of our strongest performing international quarters in recent history. Shifting to annual sector revenue for 2023, cloud titans contributed significantly at approximately 43%. Enterprises, including financials, were strong at approximately 36%, while the providers were at 21%. Both Meta and Microsoft are greater than 10% customer concentrations, at 21% and 18% respectively. Despite multiple capex reductions last year and the normal volatility of Cloud Titan and AI pivots, we cherish our privileged status with both M and M. Speaking of AI, in fall of 2023, Andy and I attended the 50th Golden Anniversary of Ethernet at the Computer History Museum. It truly is a reminder of how familiar and widely deployed Ethernet is, with its speed increasing by orders of magnitude from shared-collision 2.94 megabits for file and print sharing to terabit Ethernet switching in the AI and ML era. AI workloads are placing greater demands on Ethernet as they are both data and compute intensive across thousands of processors today. Basically, AI at scale needs Ethernet at scale.
AI workloads cannot tolerate delays in the network because a job can only be completed after all flows are successfully delivered to the GPU clusters. All it takes is one culprit or worst-case link to throttle an entire AI workload. Three improvements are being pioneered by Arista and the founding members of the Ultra Ethernet Consortium to improve job completion time. Number one, packet spraying. AI network topology needs packet spraying to allow every flow to simultaneously access all paths to the destination. Arista is developing multiple forms of dynamic load balancing with our customers. Two is flexible ordering. Key to AI job completion is rapid and reliable bulk transfer with flexible ordering, using Ethernet links to optimally balance AI-intensive operations, unlike the rigid ordering of InfiniBand. Arista is working closely with leading NIC vendors to achieve this. Finally, network congestion. In AI networks, there's a common incast congestion problem whereby multiple uncoordinated senders can send traffic to the receiver simultaneously. Arista's platforms are purpose-built and designed to avoid these kinds of hotspots, evenly spreading the load across multiple paths in a virtual output queuing (VOQ) lossless fabric. In terms of annual 2023 product lines, our core, which consists of cloud, AI, and data center products, is built upon a highly differentiated Arista Extensible Operating System (EOS) software stack. It is successfully deployed across 10, 25, 100, 200, and 400 gig speeds. Our cloud networking products deliver power-efficient, high-availability zones without doubling the cost of redundancy as data centers demand insatiable bandwidth capacity and network speeds for both the front end and back end of storage and compute clusters. The core drove approximately 65% of our revenue.
We continue to gain share in our highest performance switching of 100, 200, and 400 gig ports, attaining the number one position at approximately 40-plus percent, according to industry analysts. We have increased our 400 gig customer base from 600 customers in 2022 to approximately 800 customers in 2023. We expect both 400 and 800 gigabit Ethernet will emerge as important pilots for AI back-end GPU clusters. We are cautiously optimistic about achieving our AI revenue goal of at least $750 million in AI networking in 2025. Our second market is network adjacencies, comprised of routing, replacing routers, and our cognitive campus workspaces. We continue to make progress in campus, aiming for the $750 million revenue by 2025 that we have shared at many analyst days. Our investments in cognitive wired and wireless, zero-touch provisioning, and the introduction of AGNI, Arista Guardian for Network Identities, as well as AVA sensors for threat mitigation, are resonating well with our campus customers. The post-pandemic campus is seeking network-as-a-service overlays and embedded zero-trust networking, with high availability, observability, and consistency across our OS and management domains. We are also successfully deployed in many routing edge and peering use cases. In 2023 alone, we introduced six EOS software releases with 600 new features across 50 platforms. In fall of 2023, we introduced our WAN routing system with a focus on scale, encryption, and WAN transit routing capabilities. It has positioned us well, giving our customers a seamless enterprise LAN and WAN portfolio. The campus and routing adjacencies together contribute approximately 19% of revenue. Our third category is network software and services, based on subscription models such as Arista A-Care, CloudVision, DANZ Monitoring Fabric (DMF) observability, and advanced threat sensors for network detection and response.
Arista's subscription-based network services and software contributed approximately 16% of total revenue. We surpassed 2,400 cumulative customers with CloudVision, pivotal to building a modern operating model for the enterprise. Please note that perpetual software licenses are not included here and are counted inside the core or adjacent markets. While 2023's headline has been mostly about AI, we are pleased with the momentum of enterprise and provider customers as well. Arista continues to diversify its business globally with multiple use cases and verticals. We have more than doubled our enterprise revenue in the last three years, and we are becoming the gold standard for client-to-cloud-to-AI networking with one EOS and one CloudVision foundation. Our million-dollar customer logos increased steadily in 2023, up approximately 35%, as a direct result of our campus and enterprise momentum. Three principles continue to differentiate us as we are poised to be a market share gainer in the enterprise. One, best-in-class, highly available, proactive products with resilience and hitless upgrades at multiple levels. Two, zero-touch automation for predictive client-to-cloud one-click operations that relies less on human staff or manual operations and is instead software-driven. And finally, prescriptive insights based on AI/ML autonomous virtual assist (AVA) algorithms for increased security, observability, and root cause analysis. Our foundational network data lake architecture and the ability to gather, store, and process multiple modalities of network data is the only way to reconcile all the incongruent silos for network operators. While legacy vendors that are 30 to 40 years old are aiming for consolidation, Arista remains the only pure-play networking innovator, earning top spots in the Forrester Wave for programmable switching and customer validation in Gartner's Voice of the Customer for campus in 2023.
In December 2023, we conducted one of our largest customer events, called Innovate, in Vegas. While not my most favorite location, our customers and prospects found it very exciting and compelling for their network transformation initiatives. They resonate deeply with our Arista 2.0 vision of building a best-of-breed, data-driven networking platform. In summary, as we wrap up another fantastic year in 2023, I am so proud of the team's execution across multiple dimensions. They have all worked tirelessly to improve our operational metrics, such as lead times, gross margin, and on-time shipments. Simply put, we outpace the industry in quality, support, and innovation. We set the direction for the future of networking, working intimately with our strategic customers. Despite limited visibility at this time, we reiterate our double-digit growth of 10% to 12% from analyst day, aiming for approximately $6.5 billion in 2024. With that, I'd like to turn it over one last time to review our financial metrics with Ita Brennan. Ita?
spk09: Thanks, Jayshree, and good afternoon. This analysis of our Q4 and full year 2023 results, and our guidance for Q1 2024, is based on non-GAAP and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges, and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. Total revenues in Q4 were $1.54 billion, up 20.8% year-over-year, and towards the upper end of our guidance of $1.5 to $1.55 billion. Services and subscription software contributed approximately 17% of revenue in the fourth quarter, up from 16.8% in Q3. International revenues for the quarter came in at $343.5 million, or 22.3% of total revenue, up from 21.5% last quarter. This quarter-over-quarter increase largely reflected a healthy contribution from our in-region EMEA customers. Overall gross margin in Q4 was 65.4%, well above our guidance of approximately 63%, and up from 63.1% last quarter. As a recap for the year, we continued to see incremental improvements in gross margin quarter over quarter, with higher enterprise shipments and better supply chain costs somewhat offset by the need for additional inventory reserves as customers refined their forecasted product mix. Operating expenses for the quarter were $262.7 million, or 17.1% of revenue, up from last quarter at $255.6 million. R&D spending came in at $165 million, or 10.7% of revenue, consistent with last quarter, but reflecting lower levels of new product introduction costs versus what we experienced in the first half of 2023 and what we expect for the first half of 2024. This reflects the timing of prototype and other costs associated with the development of next-generation products. Sales and marketing expenses were $83.4 million, or 5.4% of revenue, up from $79 million last quarter, with increased sales compensation and travel costs.
Our G&A costs came in at $14.3 million, or 0.9% of revenue, up from $12.1 million last quarter, reflecting some seasonal fourth quarter spending. Our operating income for the quarter was $744 million, or 48.3% of revenue. Other income and expense for the quarter was a favorable $54.5 million, and our effective tax rate was 16.8%. This lower-than-normal quarterly tax rate reflected the release of tax reserves due to the expiration of the statute of limitations and some true-up of jurisdictional earnings mix. This resulted in net income for the quarter of $664.3 million, or 43.1% of revenue. Our diluted share number was 318.85 million shares, resulting in a diluted earnings per share number for the quarter of $2.08, up 47.5% from the prior year. Now turning to the balance sheet. Cash, cash equivalents, and investments ended the quarter at approximately $5 billion. We did not repurchase shares of our common stock in the quarter. To recap our repurchase program to date, we have repurchased $855.5 million, or 8 million shares, at an average price of $107 per share, under our current $1 billion board authorization. This leaves $144.5 million available for repurchase in future quarters. The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price, and other factors. Now turning to operating cash performance for the fourth quarter. We generated approximately $526.5 million of cash from operations in the period, reflecting strong earnings performance combined with some increase in deferred revenue, offset by reductions in taxes payable. DSOs came in at 61 days, up from 51 days in Q3, reflecting the timing of shipments and seasonal strength in service renewal billings. Inventory turns were 1.07 times, down slightly from 1.1 last quarter. Inventory increased slightly to $1.95 billion, reflecting the ongoing receipt and consumption of components from our purchase commitments and an increase in switch-related finished goods.
Our purchase commitments at the end of the quarter were $1.59 billion, down from $2 billion at the end of Q3. We expect to continue to reduce our overall purchase commitment number; however, we will maintain a healthy position related to key components, especially as we focus on new products. Our total deferred revenue balance was $1.51 billion, up from $1.195 billion in Q3. The majority of the deferred revenue balance is services-related and directly linked to the timing and term of service contracts, which can vary on a quarter-by-quarter basis. Our product deferred revenue balance increased approximately $153 million over last quarter. This was ahead of our expectations for the quarter, and yet again shows that this balance can move significantly on a quarterly basis. As of now, we expect this balance to decline somewhat in Q1 '24, but still be up significantly from Q3 '23 levels. Accounts payable days were 72 days, up from 44 days in Q3, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $6 million. I will now turn the call back to Jayshree. Jayshree?
spk12: Thank you, Ita. First of all, thank you for an incredible eight and a half years as our Chief Financial Officer. We're going to miss you a lot and wish you all the best in your next innings. And if you ever miss an earnings call, please come, we'll invite you for one. Now, to describe our Q1 2024 guidance, it's my pleasure to introduce our incoming Chief Financial Officer, Chantelle Breithaupt, for her very first earnings call at Arista. Welcome, Chantelle.
spk11: Thank you, Jayshree. Ita, congratulations on all that you have achieved during your tenure with Arista. Your partnership during our transition is greatly appreciated. Since joining Arista, I have been impressed by both the outstanding leadership team and the highly innovative engineering team, who both serve a set of marquee customers that are redefining the future of networking. Arista began shipping products in 2008, and in 15 years, the annual bandwidth of the data centers has grown 350-fold. In just the past two years, the annual bandwidth has doubled, with Arista shipping a cumulative 75 million ports in that timeframe. Our acceleration of the data center switching market in recent quarters is evidenced by our market share gains in the 20-plus percent range of both ports and dollars. I am thrilled to be joining Arista at such an exciting time. Now turning to our outlook for the first quarter of 2024 and the remainder of the fiscal year. We remain confident in our analyst day view, which called for fiscal year 2024 revenue growth of 10 to 12%. This reflects our outlook for moderated cloud spending after multiple years of accelerated growth, combined with a continued growth trajectory in the enterprise business. For gross margin, we reiterate the range for the fiscal year of 62 to 64%, with Q1 '24 expected to be at the lower end due to a heavier cloud mix, including some expected release of deferred revenue. In terms of spending, we expect to grow spending faster than revenue, in line with our analyst day view, with an operating margin of approximately 42% in 2024. This incremental investment may include go-to-market resourcing and increased new product introduction costs to support our product roadmap. This latter trend is already evident in Q1 '24, as R&D is expected to rebound from the unusually low levels in the second half of 2023.
On the cash front, we will continue to work to reduce our working capital investments and drive some further reduction in inventory as we move through the year. Our structural tax rate is expected to remain at 21.5%, back to the usual historical rate, up from the unusually low one-time rate of 16.8% experienced last quarter in Q4 FY23. With all of this as a backdrop, our guidance for the first quarter, which is based on our non-GAAP results and excludes any non-cash stock-based compensation impacts and other non-recurring items, is as follows: revenues of approximately $1.52 to $1.56 billion, gross margin of approximately 62%, and operating margin at approximately 42%. Our effective tax rate is expected to be approximately 21.5%, with approximately 319.5 million diluted shares. In summary, I am excited to lead the Arista 2.0 journey as CFO. We will migrate our best-of-breed products to best-of-breed data-driven platforms, enabling our impressive TAM of $60 billion. With that, I now turn the call back to Liz for Q&A. Liz?
spk10: Thank you, Chantelle. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Operator, take it away.
spk08: Thank you. We will now begin the question and answer portion of the Arista earnings call. In order to ask a question during this time, simply press star, then the number one, on your telephone keypad. If you would like to withdraw your question, press star and the number one again. We ask that you pick up your handset before asking questions in order to ensure optimal sound quality. Your first question comes from the line of Aaron Rakers from Wells Fargo. Please go ahead. Your line is open.
spk07: Yeah, thanks for taking the question. And, Ita, it's been great working with you. I wish you the best in retirement. I guess my question is, Jayshree, obviously the focus is on AI and the build-out of back-end networks based on 400 and 800 gig Ethernet. I'm just curious, as we've progressed through these last three months, how have your views evolved? And just remind us of the cadence of product cycles that really set the table for Arista and this AI opportunity as we move through '24 and particularly into '25. Thank you.
spk12: Thank you, Aaron. And yes, we will all miss Ita. So our AI performance continues to track well toward the $750 million revenue goal that we set last November at Analyst Day. To give you some color on the last three months, I would say it's difficult to project anything in three months. But if I look at the last year, which may be a better indication, we have participated in a large number of AI bids. And when I say large, I should say they're large AI bids, but across a small number of customers, to be more clear. And in four out of the last five AI networking clusters where we have participated on Ethernet versus InfiniBand, Arista has won all four of them for Ethernet. One of them still stays on InfiniBand. So these are very high-profile customers. We are pleased with this progress. But as I said before, last year was the year of trials. This is the year of pilots, and true production truly sets in only in 2025.
spk08: Great. Your next question comes from the line of Tal Liani from Bank of America. Please go ahead.
spk14: Hi. Because we don't have the backlog contribution of last year, I'm trying to dissect the numbers and see what's the correlation with the core data center business and traditional compute. So if the server sales cycle is low and we see some declines in servers, does it mean that, at least in the short run, excluding the backlog contribution, there is also a decline in the orders? How does it work between server demand and switching demand? Thanks.
spk12: Yeah. So, Tal, first of all, as you know, Ita and I, or Chantelle and I, would never really comment on bookings or orders. We find these all to be kind of useless metrics because ultimately what matters is what we ship, which is revenue. But just to answer your question on the ratio of CPUs, or for that matter GPUs in the future, to the network: typically we have to have the CPUs or GPUs come in before we can outfit the network. They kind of go hand in hand. But as you know, in AI, we've been waiting for the GPUs, and in the last couple of years, they've been waiting for everything with a long lead time. But I would say generally in the leaf architecture, they go hand in hand, where you have to create a rack of, you know, 1,000 servers, whether they're CPUs or GPUs. And generally, they look to rack and stack the cables, the CPUs, and the network together. On the spine, which connects all of our leafs, that decision can be made independently, even if the processors are not available. So on the leaf, it's more correlated. On the spine, it's not. Great. Thank you.
spk08: Thank you. Your next question comes from the line of Sebastian Nagy from William Blair. Please go ahead. Your line is open.
spk00: Great. Thank you. I just wanted to start and echo everyone's commentary and wish you the best, Ita. It's been a pleasure. My question has to do with white box. People have been talking about the threat of white box since Arista has been around, and it hasn't really impacted Arista's ability to grow. Can you maybe articulate why you believe, in the world of AI networks, more of the market would not move to white box, or vice versa, maybe why more of the market would move away from white box?
spk12: It's a good question, Sebastian. Thank you. Look, I think white box is here to stay for a very long time if somebody just wants a throwaway commodity product. But how many people want throwaway commodity in the data center? They're so mission-critical, and they're even more mission-critical for AI. If I'm going to spend, you know, multi-million dollars on a GPU cluster, the last thing I'm going to do is put a toy network in, right? So to put this in perspective, we will continue to coexist with white box. There will be use cases where Arista's blue box or a standalone white box can run either SONiC or FBOSS. But many times the EOS software stack is really, really something they depend on for availability, analytics, and automation. And, you know, you can get your network for zero cost, but the cost of downtime is millions and millions of dollars. So we have always embraced white box. We coexist with it, but it continues to be a relatively small use case in the larger mission-critical data centers for enterprise companies.
spk08: Thanks, Sebastian.
spk03: Thank you, Jayshree.
spk08: Your next question comes from the line of Matt Nicknam from Deutsche Bank. Please go ahead. Your line is open.
spk15: Hey, thanks so much for taking the question. Maybe a higher-level strategy question. We've seen two of your key networking peers scale up through sizable M&A over the last several months. So can you talk a little bit about how you view the value of such scale in order to maybe better serve and target both the cloud and AI titans as well as enterprise verticals? Thanks.
spk12: Yeah, Matt, that's a good question. I think on the cloud and AI, we feel pretty bulked up to deal with those customers, because they don't look for size and bulk. They look for, as you know, networking innovation capabilities. And this has been Arista's heritage for 10 years and will continue to be with the AI cycle for the foreseeable next 10 years. On the enterprise, there are multiple markets, and size helps. I think if you are targeting the, you know, early adopters, Arista has traditionally done very, very well there. And the last few years are a good example of how well we've done there, both in the data center and in the campus. If you look at the next category, not necessarily the screaming early adopters, but maybe the fast followers, I think Arista will continue to do well there in the large enterprise. We are so underserved and under-penetrated in both the Fortune 500 and the Global 2000. We've got a long, long ways to go. You know, we probably have 20% of those customers. We've got 80% of them left to go. And I'm not even talking about the mid-market and the SMB, which is a whole other market that we are underserved in. So absolutely, we need to make more investments in enterprise there. When I look at what Anshul, Chris, and Ashwin are doing, this is exactly where we're doubling down. This is exactly where we doubled down in the last three years post-pandemic. And we have more than doubled our revenue and increased our logo presence because of this investment in the enterprise. I can't comment on consolidation of vendors, but when vendors don't grow, 5 plus 5 sometimes is 10. But if you're not careful on integration, 5 plus 5 can sometimes be 7, too. So that's somebody else's responsibility, not mine. I think we can get a lot of organic growth.
spk08: Your next question comes from the line of Meta Marshall from Morgan Stanley. Please go ahead. Your line is open.
spk01: Great. Thanks. Jayshree, maybe just a question. You noted limited visibility, and I understand that it's early in the year, but would you say that it's the timing of when some of these back-end pilots scale into production? Is it the level of front-end spending? Is it enterprise projects? Just looking for more color on the visibility comments. And then a second question: you noted on gross margins that it's a portion mix and supply chain costs coming down, but was there any one driver of the gross margin upside in the quarter? Thanks.
spk09: Yeah, I mean, maybe I'll take that last one first. I mean, a lot of the upside in the fourth quarter was really just customer mix, right? I mean, we were weighted heavily towards enterprise in Q4, not for any particular reason. It just happened to be that way, and that kind of drove the margins higher.
spk12: And Meta, to answer your question on enterprise and AI activity, I think Arista continues to drive the concept of EOS, multi-domain routing, campus, and high-availability, mission-critical enterprises for multiple verticals. We're making good progress there, and this is going to be the heart of our mainstream innovation and go-to-market. On the AI side, we continue to track well. I think we're moving from what I call trials, which is connecting hundreds of GPUs, to pilots, which is connecting thousands of GPUs, this year. And then we expect larger production clusters. I think one of the questions that we will be asking ourselves and our customers is how these production clusters evolve. Is it going to be 400, 800, or a combination thereof? The role of the Ultra Ethernet Consortium and standards and the ecosystem all coming together, very similar to how we had these discussions at 400 gig, will also play a large part. But we're feeling pretty good about the activity, and I think moving from trials to pilots this year will give us considerable confidence on next year's number. Great.
spk08: Thank you. Thank you. Your next question comes from the line of James Fish from Piper Sandler. Please go ahead. Your line is open.
spk05: Hey, thanks for the question. Maybe, Ita, for you, and I'll miss having you on here, by the way. Congrats on retirement. But what's causing the delay in being able to ship, given that product deferred revenue jumped as much as it did? Or should we think about this level of jump as normal in Q4s? Based on what you've disclosed in the past, it doesn't seem like this is a normal jump. I guess, what's the hang-up? And with supply chains starting to go the other way, with product probably more readily available, could we actually see the price increases you guys have enacted in the past now have to be given back at some point in '24, '25?
spk09: Yeah, Jim, I think on the deferred, if you think back to how this works, obviously it's been shipped for it to actually be in deferred, right? So it's just timing. And we've talked about this a lot in the past, and I'm sure Chantelle is going to talk about it again in the future. It really is just purely timing of shipments, and where we have some new types of projects, new capabilities that we're trialing with a customer, that's causing it to get caught in deferred. But it's not a fundamental underlying driver of the business. On pricing, very little is happening that's out of the ordinary. It's just a normal pricing environment where we continue to compete for business. I don't think there's anything particularly different there that we've seen.
spk05: Thanks.
spk08: Your next question comes from the line of Itai Kidron from Oppenheimer. Please go ahead. Your line is open.
spk02: Thanks, and congrats to you as well, Ita. I'll miss you. And Chantelle, good luck, of course, to you in your new role. A couple of questions from me. First of all, on the cloud mix, it declined a little bit on the year. Maybe you can tell us what your underlying working assumptions are for '24. And then more broadly on the '24 guide, Chantelle, it feels light. You're talking about a roughly $600 million increase year over year in revenue. It feels like half of it can already come from AI networking, given your '25 targets. And you seem very comfortable about your '25 targets, so I would think you should be comfortable about '24 as well. So if I assume that $200 million to $300 million comes from AI networking this year, why should the rest of the business generate only $300 million to get to your annual targets? Why such aggressive conservatism here on the guide?
spk12: Okay, Itai, let me take the first question, and then I'll pass it over to Ita and Chantelle for what you call conservatism. So first of all, our cloud mix is very strong, very good. But I think what you should take away from this is not that our cloud mix came down, but that enterprise did really, really well. And since 100% is the total pie, when something does really well, the others look less so. So we're doing well in all three sectors, and we're very proud of the enterprise momentum. AI is going to come. It is yet to come. Certainly in 2023, as I've said to you many, many times, it was a very small part of our number, but it will gradually increase. Okay, Chantelle, one of my fantastic CFOs, wants to take the conservatism question.
spk11: I'll start with that. Thank you, Itai. Nice to meet you, and thank you for the well wishes. You know, I think coming into 2024, it's a balanced view in the sense that we want to have multiple options to get to our year. And so we'll work through what those mixes are and how to get to the performance that we've laid out in our guidance. I think Jayshree very eloquently put it, in the sense of '23, '24, '25, what we expect from AI, going from trials to pilots to production. It's a bit of work to sort out what that means in 2024, but I don't think we'd change anything in Q1 at this time. We're just going to go a quarter at a time, especially with me coming in, and we'll see how the year progresses.
spk02: Very good. Good luck.
spk08: Thank you. Thank you. Your next question comes from the line of Alex Henderson from Needham & Company. Please go ahead. Your line is open.
spk04: Ita, I can't believe you're leaving us. I'm going to miss you. Go ahead.
spk12: No, she said she will miss you.
spk04: I'm sorry. Go ahead.
spk12: No, go ahead, Alex.
spk04: Take your question. Yeah. So the question I have really is what are you hearing from the field, particularly in the enterprise segment? There's been a lot of noise about indigestion of large amounts of volume that have been shipped to various companies. And clearly there's some concern that there's some oversupply over the last year into the enterprise market. And I think you've talked to a lot of CEOs. What are they telling you in terms of where their IT spending intentions are for 24? Where are they saying the spending is going relative to the networking gear versus alternative spending priorities? Thanks.
spk12: That's a good question, Alex. I certainly talk to a lot of CIOs and CEOs. And if I rewind the clock to January last year, I think the crisis was a lot spookier then. We were going through this whole financial crisis, Silicon Valley Bank, this, that, and the other. If I now fast forward to a year later, our momentum in the enterprise is actually stronger now than it was a year ago. So despite all this decision-making scrutiny, customers are looking for that innovation: a modern network model, CI/CD principles, bringing DevOps, NetOps, and SecOps together. And so Arista continues, in my view, with the large TAM we have in the enterprise of at least $30 billion out of that $60 billion, to have the opportunity to really deliver that vision of client to cloud and break down the operational silos. And I would say today the CIOs recognize us as the pure-play innovator more than any other company.
spk08: Thanks, Alex. Your next question comes from the line of Atif Malik from Citi. Please go ahead. Your line is open.
spk16: Thank you for taking my question. Jayshree, thanks for providing that comment on the four wins against InfiniBand. Now, your networking competitor announced a collaboration with NVIDIA on Ethernet AI enterprise solutions last week. Can you talk about what this means for your Ethernet backend business, if anything?
spk12: Yeah, I don't understand the announcement as well as probably my competitor does. I think it has more to do with UCS and Cisco validated designs. Specific to our partnership, you know, you can be assured that we'll be working with the leading GPU vendors. And as you know, NVIDIA has 90 or 95% of the market. So Jensen and I are going to partner closely. It is vital to get a complete AI, you know, network design going. We will also be working with our partners in AMD and Intel. So we will be the Switzerland of XPUs, whatever the GPU might be, and we look to supply the best network ever.
spk02: Thank you.
spk08: Your next question comes from the line of Tim Long from Barclays. Please go ahead. Your line is open.
spk03: Thank you. Yeah, Ita, going to miss you as well. Good luck. So I wanted to follow up a little bit more on that AI, Jayshree. You talked about those wins. Could you give a little more color there? Do you think these deployments are going to be more sole-sourced, or will there be multiple vendors? Did you face a different competitive landscape than normal in these? And what are you thinking about the breadth of this business? I'm sure it's a lot of the really large customers, as you said, right now. But can you talk a little bit about how you see this moving into other service providers or the enterprise vertical? Thank you.
spk12: Yeah, thanks, Tim. Okay, so let me just step back and say the first real consultative approach from Arista is to provide our expertise on how to build a robust back-end AI network. And so the whole discussion of Ethernet versus InfiniBand becomes really important, because if you may recall, a year ago I told you we were outside looking in. Everybody had an InfiniBand HPC cluster that was kind of getting bundled into AI. But a lot has changed in a year. The popular product we are seeing right now as the back-end cluster for AI is the Arista 7800 AI spine, which in a single chassis, with north of 500 terabits of capacity, can give you a substantial number of ports at 400 or 800 gig. So you can connect up to 1,000 GPUs just doing that. And that kind of data-parallel scale-out can improve the training time for large LLMs and the massive ingestion of training data. Of course, as we shared with you at the analyst day, we can expand that to a two-tier AI leaf-and-spine with a 16-way ECMP to support close to 10,000 GPUs, non-blocking, in a lossless architecture for Ethernet. And then the overlay we will have on that with the Ultra Ethernet Consortium, in terms of congestion control, packet spraying, and working with a suite of UEC NICs, is what I think will make Ethernet the default standard for AI networking going forward. Now, will it be sole-sourced? Gosh, I would be remiss if I didn't tell you that our cloud networking isn't sole-sourced, so probably our AI won't be either. But today's models are moving very rapidly, relying on high bandwidth and predictable latency, and the focus on application performance requires you to be sole-sourced initially. Over time, I'm sure it'll move to multiple sources. But I think Arista is very well positioned for the first innings of AI networking, just like we were for the cloud networking decade.
And one other thing I want to say is that although a lot of these customers are doing AI pivots, these AI pivots will result in revisiting the front-end cloud network too. So this AI anatomy is becoming really well understood, and if you take a deep look at the centerpiece of it, which is all the GPUs, they have to connect to something very reliable, and this is really where we come in. Being actively involved is going to pay a lot of dividends, but we're still very much in our first innings of AI.
spk03: Great. Thank you.
spk08: Thanks, Tim. Thank you. Your next question comes from the line of Ben Reitzes from Melius Research. Please go ahead.
spk13: Hey, thanks for the question. And obviously, Ita, it's been great working with you. Thanks for all you've done for us. I wanted to ask about your guidance and the conservatism from another lens here. With regard to 2024, since your November 9th analyst day, some things have changed. Microsoft, Meta, and Google have all raised their CapEx forecasts for 2024. Obviously, your guidance for 2024 stays the same, and I know you're usually conservative. And then for 2025, AMD upped their TAM for AI very significantly, by a multiple, and I guess they're seeing something that many of us are seeing with regard to future demand. And you've kept your guidance at 750. With that backdrop, and the changes since November 9th, and you guys keeping your guidance, and I understand you're conservative, do you mind addressing your conservatism, or your guidance, from those lenses, both with regard to '24 and '25, Jayshree?
spk12: So, Ben, I'm going to let my two CFOs speak to the conservatism, and then I'll add more color. How about that? Who wants to go first?
spk16: Okay, great.
spk11: Ben, nice to meet you. It's Chantelle. You know, I don't think a change from the November to January-February timeframe would change our guidance on the year, similar to the question before. I think that our guide right now reflects where we think we're at, in the sense of what will materialize in '24. We'll take it one quarter at a time. As for the changes you're mentioning, the timing of that we have to wait and see. There's no guarantee it's within our 12-month guidance timeframe, and we'll watch, wait, and see. But Jayshree?
spk09: Yeah, I think that says it all. I mean, all the drivers that you mentioned are great drivers. It's the timing of everything that's always complex, right? So we'll take it a quarter at a time and see how things play out.
spk12: And look, if our conservatism changes to more optimism in the second half or more likely in 2025, we'll keep you posted.
spk13: All right. Thanks a lot. Take care.
spk10: Operator, we have time for one last question.
spk08: Thank you. Your final question comes from the line of Carl Ackerman from BNP Paribas. Please go ahead. Your line is open.
spk06: Yes, thank you for squeezing me in, and good evening from Paris. So there have been several companies within the optical supply chain that indicate the market for 800 gig, and early deployments of 1.6T ports, will begin to inflect later this year, for front-end networks, actually. And so I guess, why would I be wrong to conclude that your hardware sales would be a leading indicator of that? And as a result, shouldn't cloud titan revenue grow at least in line with your outlook for 2024 double-digit growth? Thank you.
spk12: Yeah. Thank you, Carl. Again, I'll step back; history is a good indicator of the future. If you look at our 400 gig, everybody asked me the same question. They said, how come 400 gig isn't taking off in 2019 or '20? And it turned out it took several years for our ecosystem, whether it was optics or NICs or the whole entire thing, to come together, and of course the pandemic didn't help. I don't doubt we will have trials for 800 gig this year, but I think real production 800 gig will happen in 2025. I'd like to be proven wrong, and maybe it'll come in sooner, in which case, like I said, we'll let you know. But at the moment, this is our best-case prediction.
spk10: Thanks, Carl. This concludes the Arista Networks fourth quarter 2023 earnings call. We have posted a presentation which provides additional information on our results, which you can access on the investor section of our website. Thank you for joining us today, and thank you for your interest in Arista.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.