2/12/2026

speaker
Regina
Conference Operator

Welcome to the fourth quarter 2025 Arista Networks financial results earnings conference call. During the call, all participants will be in a listen-only mode. After the presentation, we will conduct a question and answer session. Instructions will be provided at that time. If you need to reach an operator at any time during the conference, please press the star key followed by zero. As a reminder, this conference is being recorded and will be available for replay from the Investor Relations section on the Arista website following this call. Mr. Rudolf Araujo, Arista's VP of Investor Advocacy, you may begin.

speaker
Rudolf Araujo
VP of Investor Advocacy

Thank you, Regina. Good afternoon, everyone, and thank you for joining us. With me on today's call are Jayshree Ullal, Arista Networks' Chairperson and Chief Executive Officer, and Chantel Bright, Arista's Chief Financial Officer. This afternoon, Arista Networks issued a press release announcing the results for its fiscal fourth quarter ended December 31st, 2025. If you want a copy of the release, you can access it online on our website. During the course of this conference call, Arista Networks management will make forward-looking statements, including those relating to our financial outlook for the first quarter of the 2026 fiscal year; our longer-term business model and financial outlook for 2026 and beyond; our total addressable market and strategy for addressing these market opportunities, including AI; customer demand trends; tariffs and trade restrictions; supply chain constraints; component costs; manufacturing output; inventory management and inflationary pressures on our business; lead times; product innovation; working capital optimization; and the benefits of acquisitions. These statements are subject to the risks and uncertainties that we discuss in detail in our documents filed with the SEC, specifically in our most recent Form 10-Q and Form 10-K, which could cause actual results to differ materially from those anticipated by these statements. These forward-looking statements apply as of today, and you should not rely on them as representing our views in the future. We undertake no obligation to update these statements after this call. Our analysis of our Q4 results and our guidance for Q1 2026 is based on non-GAAP and excludes all non-cash stock-based compensation impacts, certain acquisition-related charges, and other non-recurring items. A full reconciliation of our selected GAAP to non-GAAP results is provided in our earnings release. With that, I will turn the call over to Jayshree.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Thank you, Rudy, and thank you, everyone, for joining us this afternoon for our fourth quarter and full 2025 earnings call. Well, 2025 has been another defining year for Arista. With the momentum of generative AI and cloud and enterprise, we have achieved well beyond our goal, with 28.6% growth driving a record revenue of $9 billion, coupled with a non-GAAP gross margin of 64.6% for the year and a non-GAAP operating margin of 48.2%. The Arista 2.0 momentum is clear as we surpassed 150 million cumulative ports shipped in Q4 2025. International growth was a good milestone, with both Asia and Europe growing north of 40% annually. As expected, we have exceeded our strategic goals of $800 million in campus and branch expansion, as well as $1.5 billion in AI center networking. Shifting to annual customer sector revenue for 2025, cloud and AI titans contributed significantly at 48%. Enterprise and financials recorded at 32%, while AI and specialty providers, which now includes Apple, Oracle, and their initiatives, as well as emerging neoclouds, performed strongly at 20%. We had two greater-than-10% customer concentrations in 2025. Customers A and B drove 16% and 26% of our overall business, respectively. We cherish our privileged partnerships that have spanned 10 to 15 years of collaborative engineering. With our ever-increasing AI momentum, we anticipate a diversified customer base in 2026, including one, maybe even two, additional 10% customers. In terms of annual 2025 product lines, our core cloud, AI, and data center products, built upon a highly differentiated Arista EOS stack, are successfully deployed across 10-gigabit to 800-gigabit Ethernet speeds, with 1.6-terabit migration imminent. This includes our portfolio of EtherLink AI and our 7000 series platforms for best-in-class performance, power efficiency, high availability, automation, and agility for both the front-end and back-end compute, storage, and all of the interconnect zones. Of course, we interoperate with NVIDIA, the recognized worldwide market leader in GPUs, but also realize our responsibility to broaden the open AI ecosystem, including leading companies such as AMD, Anthropic, ARM, Broadcom, OpenAI, Pure Storage, and VAST Data, to name a few, that create the modern AI stack of the 21st century. Arista is clearly emerging as the gold standard terabit network to run these intense training and inference models processing tokens at teraflops. Arista's core sector drove approximately 65% of revenue. We are confident of our number one position in market share in high-performance switching, according to most major industry analysts. We launched our Blue Box initiative, offering enriched diagnostics of our hardware platforms, dubbed NetDI, that can run across both our flagship EOS and open NOS platforms. We saw excellent uptake in 800-gig adoption in 2025, gaining greater than 100 customers cumulatively for our EtherLink products, and we are co-designing several AI rack systems with 1.6T switching emerging this year. With our increased visibility, we are now doubling our AI networking revenue from 2025 to 2026, to $3.25 billion. Our network adjacencies market comprises routing, replacing traditional routers, and our cognitive AI-driven AVA campus. Our investments in cognitive wired and wireless, zero-touch operations, network identity, scale, and segmentation have earned several accolades in the industry. Our open modern stacking with SWAG, Switched Aggregation Group, and our recent VESPA for Layer 2 and Layer 3 wired and wireless scale are compelling campus differentiators.
Together with our recent VeloCloud acquisition in July 2025, we are driving that homogeneous, secure, client-to-branch-to-campus solution with unified management domains. Looking ahead, we are committed to our aggressive goal of $1.25 billion for 2026 for the cognitive campus and branch. We have also successfully deployed in many routing edge, core spine, and peering use cases. In Q4 2025, Arista launched our flagship 7800R4 spine for many routing use cases, including DCI and AI spines, with a massive 460 terabits of capacity to meet the demanding needs of multi-service routing, AI workloads, and switching use cases. The combined campus and routing adjacencies together contribute approximately 18% of revenue. Our third and final category is the network software and services based on subscription models, such as A-Care, CloudVision, observability, advanced security, and even some branch edge services. We added another 350 CloudVision customers in 2025, almost one new customer a day, and have deployed an aggregate of 3,000 CloudVision customers over the past decade. Arista's subscription-based network services and software revenue contributed approximately 17%, and please note that it does not include perpetual software licenses that are otherwise included in core or adjacent markets. Arista 2.0 momentum is clear. We find ourselves at the epicenter of mission-critical network transactions. We are becoming the preferred network innovator of choice for client-to-cloud and AI networking with a highly differentiated software stack and a uniform CloudVision software foundation. We are proud to power Warner Bros. Discovery's distribution network, streaming in 47 markets and 21 languages for the pan-European Winter Olympics that is happening as I speak. We are now north of 10,000 cumulative customers, and I'm particularly impressed with our traction in the $5 to $10 million customer category, as well as the $1 million customer category, in 2025. Arista's 2.0 vision resonates with our customers, who value us for leading the transformation from incongruent silos to reliable centers of data. The data can reside in campus centers, data centers, WAN centers, or AI centers, regardless of location. Networking for AI has achieved production scale with an all-Ethernet-based Arista AI center. In 2025, we were a founding member of the Ethernet-based standards for both scale-up, with ESUN, as well as completing the Ultra Ethernet Consortium 1.0 specification for scale-out AI networking. These AI centers seamlessly connect the back-end AI accelerators to the front end of compute, storage, WAN, and classic cloud networking. Our AI-accelerated networking portfolio, consisting of three families of EtherLink spine-leaf fabrics, is successfully deployed in scale-up, scale-out, and scale-across networks. Network architectures must handle both training and inference of frontier models while mitigating congestion. For training, the key metric is obviously job completion time, the amount of time taken between submitting a training job to an AI accelerator cluster and the end of the training run. For inference, the key metric is slightly different: it's the time to first token, basically the amount of latency between a user submitting a query and receiving the first response. Arista has clearly developed a full AI suite of features to uniquely handle the fidelity of AI and cloud workloads in terms of diversity, duration, size of traffic flows, and all the patterns associated with them.
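
To make the two metrics mentioned here concrete, below is a minimal illustrative sketch, not Arista code; the timestamps and values are hypothetical, and the definitions simply follow the descriptions given on the call:

```python
from datetime import datetime, timedelta

# Hypothetical timestamps for illustration only (not customer data).
# Training: job completion time (JCT) = end of the training run minus job admission.
job_admitted = datetime(2026, 2, 1, 8, 0, 0)
job_finished = datetime(2026, 2, 3, 20, 30, 0)
jct = job_finished - job_admitted
print(f"Job completion time: {jct}")  # 2 days, 12:30:00

# Inference: time to first token (TTFT) = first response token minus query submission.
query_submitted = datetime(2026, 2, 12, 14, 0, 0)
first_token_at = query_submitted + timedelta(milliseconds=185)
ttft_ms = (first_token_at - query_submitted).total_seconds() * 1000
print(f"Time to first token: {ttft_ms:.0f} ms")  # 185 ms
```
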
Our AI-for-networking strategy, based on AVA, Autonomous Virtual Assist, curates the data for higher-level functions. Together with our publish-subscribe state foundation in EOS and NetDL, our Network Data Lake, we instrument our customers' networks to deliver proactive, predictive, and prescriptive features for enhanced security, observability, and agentic AI operations. Coupled with the Arista validated designs for network simulation, digital twin, and validation functionality, Arista platforms are perfectly optimized and suited for network as a service. Our global relevance with customers and channels is increasing. In 2025 alone, we conducted three large customer events across three continents, Asia, Europe, and the United States, and many other smaller ones, of course. We touched 4,000 to 5,000 strategic customers and partners in the enterprise. While many customers are struggling with their legacy incumbents, Arista is deeply appreciated for redefining the future of networking. Customers have long appreciated our network innovation and quality, demonstrated by our highest net promoter score of 93 and the lowest security vulnerabilities in the industry. We now see the pace of acceptance and adoption accelerating in the enterprise customer base. Our leadership team, including our newly appointed co-presidents, Ken Duda and Todd Nightingale, has driven strategic and cohesive execution. Tyson Lamoureux, our newest senior vice president, who joined us with deep cloud operator experience, has ignited our hypergrowth across our AI and cloud titan customers. Exiting 2025, we are now at approximately 5,200 employees, which also includes the recent VeloCloud acquisition. I am incredibly proud of the entire Arista A-team, and thank you, all employees, for your dedication and hard work. Of course, our top-notch engineering and leadership team has always steadfastly prioritized our core Arista Way principles of innovation, culture, and customer intimacy. Well, I think you would agree that 2025 has indeed been a memorable year, and we expect 2026 to be a fantastic one as well. We are amid unprecedented networking demand, with a massive and growing TAM of $100-plus billion. And so, despite all the news on mounting supply chain allocation, the rising cost of memory, and silicon fabrication, we increased our 2026 guidance to 25% annual growth, accelerating now to $11.25 billion. And with that happy news, I turn it over to Chantel, our CFO.

speaker
Chantel Bright
Chief Financial Officer

Thank you, Jayshree, and congratulations to you and our employees on a terrific 2025. As you outlined, this was an outstanding year for the company, and that strength is clearly reflected in our financial results. Let me walk through the details. To start off, total revenues in Q4 were $2.49 billion, up 28.9% year-over-year and above the upper end of our guidance of $2.3 to $2.4 billion. It was great to see that all geographies achieved strong growth within the quarter. Services and subscription software contributed approximately 17.1% of revenue in the fourth quarter, down from 18.7% in Q3, which reflects the normalization following some non-recurring VeloCloud service renewals in the prior quarter. International revenues for the quarter came in at $528.3 million, or 21.2% of total revenue, up from 20.2% last quarter. This quarter-over-quarter increase was driven by a stronger contribution from our large global customers across our international markets. The overall gross margin in Q4 was 63.4%, slightly above the guidance of 62% to 63% and down from 64.2% in the prior year. This year-over-year decrease is due to the higher mix of sales to our cloud and AI titan customers in the quarter. Operating expenses for the quarter were $397.1 million, or 16% of revenue, up from $383.3 million last quarter. R&D spending came in at $272.6 million, or 11% of revenue, up from 10.9% last quarter. Arista continued to demonstrate its commitment and focus on networking innovation, with fiscal year 2025 R&D spend at approximately 11% of revenue. Sales and marketing expense was $98.3 million, or 4% of revenue, down from $109.5 million last quarter. FY25 closed the year with sales and marketing at 4.5% of revenue, representative of the highly efficient Arista go-to-market model. Our G&A costs came in at $26.3 million, or 1.1% of revenue, up from $22.4 million last quarter, reflecting continued investment in systems and processes to scale Arista 2.0. For fiscal year 2025, G&A expense held at 1% of revenue. Our operating income for the quarter was $1.2 billion, or 47.5% of revenue. This strong Q4 finish contributed to an operating income result for fiscal year 2025 of $4.3 billion, or 48.2% of revenue. Other income and expense for the quarter was a favorable $102 million, and our effective tax rate was 18.4%. This lower-than-normal quarterly tax rate reflected the release of tax reserves due to the expiration of the statute of limitations. Overall, this resulted in net income for the quarter of $1.05 billion, or 42% of revenue. It is exciting to see Arista delivering over $1 billion in net income for the first time. Congratulations to the Arista team on this impressive achievement. Our diluted share number was 1.276 billion shares, resulting in diluted earnings per share for the quarter of 82 cents, up 24.2% from the prior year. For fiscal year 2025, we are pleased to have delivered diluted earnings per share of $2.98, a 28.4% increase year over year. Now turning to the balance sheet. Cash, cash equivalents, and marketable securities ended the quarter at approximately $10.74 billion. In the quarter, we repurchased $620.1 million of our common stock at an average price of $127.84 per share. Within fiscal 2025, we repurchased $1.6 billion of our common stock at an average price of $100.63 per share. Of the $1.5 billion repurchase program approved in May 2025, $817.9 million remains available for repurchase in future quarters.
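
As a quick arithmetic check of the per-share figures quoted above, here is an illustrative calculation using only the rounded numbers stated on the call; diluted EPS is simply net income divided by diluted shares:

```python
# Illustrative check of the quoted Q4 figures (rounded inputs from the call).
net_income = 1.05e9          # ~$1.05 billion non-GAAP net income
diluted_shares = 1.276e9     # ~1.276 billion diluted shares
eps = net_income / diluted_shares
print(f"Diluted EPS ~ ${eps:.2f}")   # ~$0.82, matching the 82 cents quoted

revenue = 2.49e9             # Q4 revenue as reported
print(f"Net income margin ~ {net_income / revenue:.0%}")  # ~42% of revenue
```
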
The actual timing and amount of future repurchases will be dependent on market and business conditions, stock price, and other factors. Now turning to operating cash performance for the fourth quarter, we generated approximately $1.26 billion of cash from operations in the period. This result was an outcome of strong earnings performance and an increase in deferred revenue, offset by an increase in accounts receivable driven by higher shipments and end-of-quarter service renewals. DSOs came in at 70 days, up from 59 days in Q3, driven by renewals and the timing of shipments in the quarter. Inventory turns were 1.5 times, up from 1.4 last quarter. Inventory increased marginally to $2.25 billion, reflecting diligent inventory management across raw and finished goods. Our purchase commitments at the end of the quarter were $6.8 billion, up from $4.8 billion at the end of Q3. As mentioned in prior quarters, this expected activity mostly represents purchases for chips related to new products and AI deployments. We will continue to have some variability in future quarters due to the combination of demand for our new products, component pricing, such as the supply constraint on DDR4 memory, and the lead times from our key suppliers. Our total deferred revenue balance was $5.4 billion, up from $4.7 billion in the prior quarter. In Q4, the majority of the deferred revenue balance is product related. Our product deferred revenue increased approximately $469 million versus last quarter. We remain in a period of ramping our new products, winning new customers, and expanding new use cases, including AI. These trends have resulted in increased customer-specific acceptance clauses and an increase in the volatility of our product deferred revenue balances. As mentioned in prior quarters, the deferred balance can move significantly on a quarterly basis, independent of underlying business drivers. Accounts payable days were 66 days, up from 55 days in Q3, reflecting the timing of inventory receipts and payments. Capital expenditures for the quarter were $37 million. In October 2024, we began our initial construction work to build expanded facilities in Santa Clara, and we incurred approximately $100 million in capex during fiscal year 2025 for this project. As we moved through 2025, we gained visibility and confidence for fiscal year 2026. As Jayshree mentioned, we are now pleased to raise our 2026 fiscal year outlook to 25% revenue growth, delivering approximately $11.25 billion. We maintain our 2026 campus revenue goal of $1.25 billion and raise our AI center goal from $2.75 billion to $3.25 billion. For gross margin, we reiterate the range for the fiscal year of 62% to 64%, inclusive of mix and anticipated supply chain cost increases for memory and silicon. In terms of spending, we expect to continue to invest in innovation, sales, and scaling the business to ensure our status as a leading pure-play networking company. With our increased revenue guidance, we are now confident to raise the operating margin outlook to approximately 46% in 2026. On the cash front, we will continue to work to optimize our working capital investments, with some expected variability in inventory due to the timing of component receipts on purchase commitments. Our structural tax rate is expected at 21.5%, back to the usual historical rate, up from the lower rate of 18.4% experienced last quarter, Q4 2025. With all of this as a backdrop, our guidance for the first quarter is as follows.
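
For readers unfamiliar with the working-capital ratios cited above, here is a small illustrative sketch of how they are conventionally computed. The accounts-receivable figure is a hypothetical placeholder (only the resulting DSO was disclosed), and the inventory-turns line uses a simplified cost-of-goods estimate implied by the quarter's gross margin, so it will not exactly match the company's basis:

```python
# Conventional definitions of the working-capital metrics cited above.
days_in_quarter = 91
revenue = 2.49e9                          # Q4 revenue as reported

accounts_receivable = 1.92e9              # hypothetical placeholder, not disclosed
dso = accounts_receivable / revenue * days_in_quarter
print(f"DSO ~ {dso:.0f} days")            # ~70 days, as reported

inventory = 2.25e9                        # as reported
quarterly_cogs = revenue * (1 - 0.634)    # simplified: implied by the 63.4% gross margin
turns = (quarterly_cogs * 4) / inventory
print(f"Inventory turns ~ {turns:.1f}x")  # ~1.6x on this simplified basis vs. the 1.5x reported
```
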
Revenues of approximately $2.6 billion, gross margin between 62% and 63%, and operating margin at approximately 46%. Our effective tax rate is expected to be approximately 21.5%, with approximately 1.275 billion diluted shares. In closing, at our September Analyst Day, we had the theme of building momentum, and we are doing just that. In the campus, WAN, data, and AI centers, we are uniquely positioned to deliver what customers need. We will continue to deliver both our world-class customer experience and innovation. I am enthusiastic about our fiscal year ahead. Now, back to you, Rudy, for Q&A.

speaker
Rudolf Araujo
VP of Investor Advocacy

Thank you, Chantel. We will now move to the Q&A portion of the Arista earnings call. To allow for greater participation, I'd like to request that everyone please limit themselves to a single question. Thank you for your understanding. Regina, please take it away.

speaker
Regina
Conference Operator

We will now begin the Q&A portion of the Arista earnings call. To ask a question during this time, simply press star and then the number one on your telephone keypad. If you would like to withdraw your question, press star and the number one again. Please pick up your handset before asking questions to ensure optimal sound quality. Our first question will come from the line of Amita Marshall with Morgan Stanley. Please go ahead.

speaker
Amita Marshall
Analyst, Morgan Stanley

Great, and congratulations on the quarter. I guess in terms of the commentary you had, Jayshree, on the one or two additional 10% customers, just digging more into that, what are the puts and takes? You know, is it bottlenecks in terms of their build-outs? What would make or break whether those become two new additional 10% customers? Thank you.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Thank you, Amita, for the good wishes. So obviously, if I didn't have confidence, I wouldn't dare to say that, would I? But there's always variables. Some of it may be sitting in deferred, so there's an acceptance criteria that we have to meet. And there's also timing associated with meeting the acceptance criteria. Some of it is demand that is still underway. And, you know, in this age of all this supply chain allocation and inflation, we've got to be sure we can ship. So we don't know if it's exactly 10%, or high single digits, or low double digits, but a lot of variables will decide that final number. But certainly the demand is there.

speaker
Amita Marshall
Analyst, Morgan Stanley

Great. Thank you.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Thank you.

speaker
Regina
Conference Operator

Our next question will come from the line of Samit Chatterjee with JP Morgan. Please go ahead.

speaker
Samit Chatterjee
Analyst, JP Morgan

Hi. Thanks for taking my question, and Jayshree, congrats on the quarter and the outlook. I don't want to sort of say that the 25% growth is not impressive, since you're doing 30% is what the guidance is for Q1, but maybe if I could understand what's leading to somewhat of a caution in terms of visibility for the rest of the year. Is it these sort of one to two new customers and their ramps that you're sort of more cautious about? Or is it availability of supply in terms of the components or memory that's giving you maybe a bit more cautiousness about the visibility for the remainder of the year, if you can help us understand the drivers there?

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Yeah, thank you. Thank you, Samit. First, I don't think I'm being cautious. I think I went all out to give you a high dose of reality, but I understand your views on caution given all the CapEx numbers you see from customers. That's an important thing to understand, that we don't track the CapEx. The first thing that happens in the CapEx is they've got to build the data centers and get the power and get all of the GPUs and accelerators, and the network lags a little. So demand is going to be very good, but whether the shipments exactly fall into '26 or '27, Todd, you can clarify when they really fall in, but there's a lot of variables there. That's one issue. The second, as I said, is a large amount of these are new products, new use cases, highly tied to AI, where customers are still in their first innings. So, again, you know, I'm giving you the greatest visibility I can, fairly early in the year, on the reality of what we can ship, not what the demand might be. It might be a multi-year demand that ships over multiple years. So let's hope it continues. But, of course, you must understand that we're also facing a law of large numbers. So 25% on a base of now $9 billion, when we started last year at $8.25 billion, is a really, really early and good start.
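
As an aside for readers following the arithmetic in this answer, the figures quoted work out as follows; this is a simple illustrative check using only the numbers stated on the call:

```python
# Quick check of the growth math referenced in the answer (call figures only).
fy2025_revenue = 9.0          # $B, actual 2025 revenue
initial_2025_guide = 8.25     # $B, where 2025 guidance started a year ago
fy2026_growth_guide = 0.25    # 25% growth now guided for 2026

print(f"2025 finish vs. initial guide: {fy2025_revenue / initial_2025_guide - 1:.1%}")   # ~9.1% above
print(f"Implied 2026 revenue guide: ${fy2025_revenue * (1 + fy2026_growth_guide):.2f}B") # $11.25B
```
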

speaker
David Vogt
Analyst, UBS

Thank you.

speaker
Regina
Conference Operator

Our next question will come from the line of David Vogt with UBS. Please go ahead.

speaker
David Vogt
Analyst, UBS

Great. Thanks, guys, for taking my question. Maybe, Chantel and Jayshree, can you help quantify sort of both the revenue impact and the potential gross margin impact embedded in your guide from the memory dynamics and the constraints? I know last quarter you mentioned, and even in this quarter, obviously the supply chain does have some constraints. When you think about, I think, Jayshree, you just said, kind of the real outlook that you see, maybe you can help parameterize, you know, what you think could hold you back, if that's the way to phrase it, and just give us a sense for what the upside could be, you know, in a perfect world effectively, if you could share that.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

I'm going to give some general commentary, and Chantel, if you don't mind adding to it. You know, our peers in the industry have been facing this probably longer than we have, because I think the server industry probably saw it first, because they're more memory intensive. Add to that that we're expecting increases from the silicon fabrication where all the chips are made, as you know, centrally with one company, Taiwan Semiconductor. So, Arista has taken a very thoughtful approach, being aware of this since 2025, and frankly we absorbed a lot of the costs in 2025 that we were incurring. However, in 2026, the situation has worsened significantly. We're having to smile and take it just about at any price we can get, and the prices are horrendous. They're an order of magnitude higher. So clearly, with the situation worsening and also expected to last multiple years, we are experiencing shortages in memory. Thankfully, as you can see reflected in our purchase commitments, we are planning for this. And I know that memory is now the new gold for the AI and automotive sectors. But clearly it's not going to be easy, and it's going to favor those who plan and those who can spend the money for it.

speaker
Chantel Bright
Chief Financial Officer

Yeah, and I think the only other thing I'd add to your question, David, and thank you for that, is that we're comfortable in the guide, and that's why we have the guide and why we raised the numbers that we did. So we're comfortable we have a path to get there within the numbers we provided. The range of 62% to 64%, I think we were pleased to hold despite this kind of pressure coming into it. You know, this has been our guide since September at our analyst day, so we're pleased to hold that guide and find ways to mitigate this, you know, on this journey. Now, whether it ends up being, you know, 62.5% versus 63.5% within that range, that's where we'll continue to update you, but the range we're comfortable with.

speaker
David Vogt
Analyst, UBS

Understood. Thanks, guys.

speaker
Regina
Conference Operator

Thank you, David. Our next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.

speaker
Aaron Rakers
Analyst, Wells Fargo

Yeah, thanks for taking the question, and congrats as well on the quarter and the guide. I guess when we think about the $3.25 billion guide for the AI contribution this year, I'm curious, Jayshree, how much you're factoring in, if any, from the scale-up networking opportunity. How do you see that? Is that still more of a '27 contribution? And also, can you unpack, like, ex the AI and ex the campus contribution? It appears that you're still guiding pretty muted, low single-digit growth on non-AI. Just curious how you see the non-AI, non-campus growth.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Okay, yeah. Well, you know, a rising tide raises all boats, but some go higher and some go lower. But to answer your specific question, Aaron: we have consistently described that today's configurations are mostly a combination of scale-out and scale-up, largely based on 800 gig and smaller radix. Now, the ESUN specification is well underway, and Ken Duda, I think the spec will be done in a year, or this year for sure. So Ken and Hugh Holbrook are actively involved in that. We need a good solid spec, otherwise we'll be shipping proprietary products like some people in the world do today. And so we will tie our scale-up commitment greatly to the availability of new products and a new ESUN spec, which we expect at the earliest to be Q4 this year. And therefore, for the majority of it, we'll be in some trials; Andy and the team are working on a lot of active AI racks with scale-up in mind. But the real production level will be in 2027, primarily centered around not just 800 gig, but 1.6T.

speaker
Regina
Conference Operator

And I think that.

speaker
Adith Malik
Analyst, Citi

Thank you. Oh, okay.

speaker
Regina
Conference Operator

Thank you, Aaron. Our next question will come from the line of Amit Daryanani with Evercore ISI. Please go ahead.

speaker
Amit Daryanani
Analyst, Evercore ISI

Thanks a lot, and congrats from my end as well for some really good numbers here. Jayshree, some of these model builders like Anthropic that I think you folks have talked about, they're starting to build these multi-billion-dollar clusters on their own now. Can you just talk about your ability to participate in some of these build-outs as they happen, be that on the DCI side or maybe even beyond that? And by extension, does this give you an opportunity to ramp up with some of the larger cloud companies that these model builders are partnering with over time as well, as they build out TPU or training clusters? I'd love to just understand how that kind of business scales with you folks. Thank you.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Yeah, no, Amit, that's a very thoughtful question. And I think you're absolutely right. The network infrastructure is playing a critical role with these model builders in a number of ways. If you look at us initially, we were largely working with, you know, one or two model builders and one or two accelerators, NVIDIA and AMD, and OpenAI was the primary dominant one. But today we see that there's really, you know, multiple layers in a cake, where you've got the GPU accelerators. Of course, you've got power as the most difficult thing to get. But Arista needs to deal with multiple domains and model builders, and appropriately, whether it is Gemini or, you know, xAI or Anthropic Claude or OpenAI and many more coming, these models and the multi-protocol, algorithmic nature of these models are something we have to make sure we build a network correctly for. So that's one. And then to your second point, you're absolutely right. I think the biggest issue is not only the model builders, but that they're no longer in silos in one data center, and you're going to see them across multiple colos and multiple locations and multiple partnerships with our cloud titan customers that we've historically not seen. So I think you'll see more co-pilot versions of it, if you will, with a number of our cloud titans. So we expect to work with them as AI specialty providers, but we also expect to work with our cloud titans in bringing the cloud and AI together.

speaker
Amit Daryanani
Analyst, Evercore ISI

Thank you.

speaker
Regina
Conference Operator

Thank you, Amit. Our next question comes from the line of George Nodder with Wolfe Research. Please go ahead.

speaker
George Nodder
Analyst, Wolfe Research

Hi, guys. Thanks very much. I was just curious about the product deferred revenue and how you see that, you know, coming off the balance sheet ultimately. Obviously, it's just been stacking up here quarter after quarter after quarter. So, a few questions here. Does that come off in big chunks that we'll see, you know, at different quarters in the future? Does it come off more gradually? Does it continue to build? Like, what does the profile look like for that product deferred coming off the balance sheet and flowing into the P&L? And then also, I'm curious about how much product deferred you have in the full-year revenue guidance, the 25%. Thanks a lot.

speaker
Chantel Bright
Chief Financial Officer

Yeah. Hey, George. Thanks for the questions. Not much has changed in the sense of how we have this conversation. What goes into deferred is new products, new customers, new use cases. The great new use case is AI. The acceptance criteria for the larger deployments are 12 to 18 months. Some can be as short as six months, so there's a wide variety that goes in. Deferred has balances coming in and out every quarter. We don't guide deferred, and we don't get product-specific. What I can tell you on your questions is that there will be times where there are larger deployments that will feel a little lumpier as we go through. But again, it's a net release of a balance, so it depends what comes in in that same quarter's timing.
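
To illustrate the "net release" mechanics described here, a simplified roll-forward looks like the following. The beginning and ending balances match the totals disclosed earlier on the call, but the split between new deferrals and recognitions is hypothetical, since only the net change is disclosed:

```python
# Simplified deferred-revenue roll-forward (amounts in $M).
beginning_balance = 4700   # prior-quarter total deferred revenue, as disclosed
new_deferrals = 1130       # hypothetical: shipments awaiting customer acceptance this quarter
recognized = 430           # hypothetical: prior deferrals released as acceptances were met
ending_balance = beginning_balance + new_deferrals - recognized
print(ending_balance)      # 5400 -> matches the ~$5.4B disclosed; the net change depends
                           # on both flows, which is why the balance can swing each quarter
```
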

speaker
George Nodder
Analyst, Wolfe Research

Got it. Okay. Any sense for what's in the full-year guide then? I assume not much. Is that fair to say?

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Yes. It's super hard, George, to know when the acceptance criteria happen. You know, if it happens December 31st, it's a different situation. If it all happens in, you know, Q2, Q3, Q4, that's a different situation. So, that's something we really have to work with the customer on. So, sorry that we're not able to be clairvoyant on that.

speaker
George Nodder
Analyst, Wolfe Research

Makes sense. Thank you.

speaker
Regina
Conference Operator

Thank you. Our next question comes from the line of Ben Reitzes with Melius Research. Please go ahead.

speaker
Ben Reitzes
Analyst, Melius Research

Hey, thanks a lot. And I guess my congrats to you guys. You know, this execution and guide is really something. So I wanted to ask. You're welcome. I wanted to ask about two things that I just was wondering if you could talk a little bit more about your neocloud momentum and what that is looking like in terms of materiality. And then also, if you don't mind touching on AMD with the launch. We're kind of hearing about you getting a lot of networking attached to the 450-type product or their new chips. Wondering if that is a catalyst or not as you go throughout the year. Thanks so much.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Yeah. So, Ben, as you can imagine, the specialty cloud providers have historically been a cacophony of many types of providers. We are definitely seeing AI as one of the clear emphases; it used to be content providers, tier-two cloud providers, but AI is clearly driving that section. And it's a suite of customers, some of whom have real financial strength and are looking now to invest and increase and pivot to AI. So the rate at which they pivot to AI will greatly define how well we do there. And, you know, they're not yet titans, but they'll want to be, or could be, titans is the way to look at it. And we're going to invest with them, and these are healthy customers. It's nothing like the dot-com era, so we feel good about that. There are a set of neoclouds that we watch more carefully, because some of them are, you know, oil money converted into AI or crypto money converted into AI. And over there, we're going to be much more careful, because some of those neoclouds are, you know, looking at Arista as the preferred partner, but we would also be looking at the health of the customer, or they may just be a one-time buyer. We don't know the exact nature of their business, and those will be smaller, and they don't contribute in large dollars, but they are becoming increasingly plentiful in quantity, even if they're not yet meaningful in dollars. So I think you're seeing this dichotomy of two types in that category, or really three types: the classic CDN and security specialty providers and tier-two clouds; the AI specialty providers who are going to lean in and invest; and then the neoclouds in different geographies.

speaker
Ben Reitzes
Analyst, Melius Research

And the AMD?

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Ah, yes, the AMD question. You know, a year ago, I think I said this to you, but I'll repeat it. A year ago, it was pretty much 99% NVIDIA, right? Today, when we look at our deployments, we see about 20%, maybe a little more, 20% to 25%, where AMD is becoming the preferred accelerator of choice. And in those scenarios, Arista is clearly preferred, because they're building best-of-breed building blocks for the NIC, for the network, for the I/O, and they want open standards as opposed to a full-on vertical stack from one vendor. So you're right to point out that AMD, and in particular, it's a joy to work with Lisa and Forrest and the whole team, and we do very well in that multi-vendor open configuration.

speaker
Regina
Conference Operator

Our next question will come from the line of Tim Long with Barclays. Please go ahead.

speaker
Tim Long
Analyst, Barclays

Thank you. Yeah, appreciate all the color. Maybe we could touch a little bit on scale-across. It's obviously gotten a lot of attention, particularly on the optics layer, from some others in the industry. Obviously, you guys have been in DCI, which is kind of a similar type of technology. I'm curious what you think as far as Arista's participation in more of these next-gen scale-across networks, and is this something that would be good for, like, a blue box type of product, or would that more be in the scale-up? So, if you could give a little color there, that would be great.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Right. Okay. So, you know, most of our participation today we thought would be scale-out. But what we are finding, due to the distributed nature of where and how they can get the power and the bisectional bandwidth growth, is that essentially the throughput of scale-out or scale-across is all about how much data you can move, right? As the workloads become more and more complex, you have to make them more and more distributed, because you just can't sit them in one data center, both from a power and a bandwidth-throughput capacity standpoint. Also, these GPUs are trying to minimize the collective degradation. So as you scale up or out, the communication patterns become very much of a bottleneck. And one way to solve it is to extend this across data centers, both through fiber and, as you rightly pointed out, very high injection-bandwidth DCI routing. And then there's a sustained real-world utilization you need across all of these. So for all these reasons, we are pleasantly surprised with the role of coherent long-haul optics, which we don't build, but we have worked very closely in the past with companies that do, and they're seeing the lift, and the 7800 spine chassis as the flagship platform and preferred choice that has been designed by our engineering team now for several years for this robust configuration. So less blue box there, and much, much more of a full-on Arista flagship box with EOS and all of the virtual output queuing and buffering to interconnect regional data centers, with extremely high levels of routing and high availability too. So this really lends into everything Arista stands for, coming all together in a universal AI spine.

speaker
Tim Long
Analyst, Barclays

Okay, excellent. Thank you, Jayshree.

speaker
Regina
Conference Operator

Thank you. Our next question will come from the line of Carl Ackerman with BNP Paribas. Please go ahead.

speaker
Carl Ackerman
Analyst, BNP Paribas

Yes, thank you. Agentic AI should support an uptake in conventional server CPUs, where your switches have high share within data centers. And so, given your upwardly revised outlook of 25% growth for this year, could you speak to the demand you are seeing for front-end high-speed switching products that address agentic AI workloads? Thank you.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Yeah, exactly, Carl. I think in the beginning, let's just go back in time in history. It's not that long ago. Three years ago, we had no AI. We were staring at InfiniBand being deployed everywhere in the back end. And we pretty much characterized our AI as only back end, just to be pure about it, right? Three years later, I'm actually telling you we might do north of $3 billion this year and growing, right? That number definitely includes the front end as it's tied to the back-end GPU clusters, and it's an all-Ethernet, all-AI system for agentic AI applications. A lot of the agentic AI applications are mostly running with some of our largest cloud, AI, and specialty providers. But I don't rule out the possibility, and you can see this in our numbers, with north of eighty 800-gig customers, that much of that is going to feed into the enterprise as well, as agentic AI applications come for genomic sequencing, science, you know, automation of software. I don't know. I don't think, Ken, any of us believe that AI is eating software, but AI is definitely enabling better software, right? And we're certainly seeing that in Ken's team as well in our adoption of that. So the rise of agentic AI will only increase not just the GPU, but all gradations of XPU that can be used in the back end and front end.

speaker
Regina
Conference Operator

Thank you, Carl. Our next question comes from the line of Simon Leopold with Raymond James. Please go ahead.

speaker
Simon Leopold
Analyst, Raymond James

Thank you very much for taking the question. I wanted to come back to the issue around sort of what's going on with the memory market. So, two aspects to this. One, I'm wondering how much of a tool price hikes have been, you raising your prices to customers; and two, whether or not, within the substantial amount of purchase commitments you have, there's a significant aspect of memory in there, so you've effectively pre-purchased memory at much lower prices than the spot market today. Thank you.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Thank you. Okay. I wish I could tell you we did purchase all that memory that we needed. No, we didn't. But while our peers in the industry have done multiple price hikes already, especially those in the server market or with memory-intensive switches, we have clearly been absorbing it, and memory is in our purchase commitments, but so is everything else. The entire silicon portfolio is in our purchase commitments. Due to some of the supply chain reactions, Todd and I have been reviewing this, and we do believe there will be a one-time price increase on selected, especially memory-intensive, SKUs to deal with it, and we cannot absorb it if the prices keep going up the way they have in January and February. And I would tell you that all the purchase commitments I have, in Chantel's current commitments, are not enough. We need more memory.

speaker
Simon Leopold
Analyst, Raymond James

Thank you.

speaker
Regina
Conference Operator

Our next question will come from the line of James Fish with Piper Sandler. Please go ahead.

speaker
James Fish
Analyst, Piper Sandler

Hey, ladies. Great quarter, great end to the year. Jayshree, are hyperscalers getting nervous now at all and ordering ahead? What's your sense of pull-in of demand potentially here, including for your own Blue Box initiative? And Chantel, for you, just going back to George's question, I know it's difficult to answer, but are you anticipating that that product deferred revenue is going to continue to grow through the year? Or is it just way too difficult to predict, and you've got customers that could just say, you know, we accept, and ship them all now, and so we end up with a big quarter but product deferred down?

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

I'm going to let Chantel answer the difficult questions over and over again.

speaker
Chantel Bright
Chief Financial Officer

Sure. Happy to. Thank you, James. I appreciate it. So, for deferred generally, we don't guide deferred, but to try to give you more insight, back to George's question, there will be certain deployments that get accepted and released. But the part that's difficult is what comes into the balance, right, James? So I can't guide; that would be a wild guess on what's going to go in, which is not prudent, I think, from my perspective. So we'll continue to mention what's in it. We'll continue to show you through the balances. We'll talk about it in the script in the sense of the movement. But that's probably as much as I can tell you with a responsible answer looking forward.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

James, this is one of those times, no matter how many times you answer this question in several different ways, the answer doesn't change.

speaker
James Fish
Analyst, Piper Sandler

I mean, you know, insanity is doing the same thing over and over again.

speaker
Adith Malik
Analyst, Citi

Yeah, I know, I know.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

So, on the hyperscalers, are they getting nervous? I don't think they're getting nervous. You know, you've seen what a strong business they have, how much cash they put out, and how successful they are. But I do think they're working more closely with us. Typically, we had a three- to six-month visibility. We're getting greater visibility.

speaker
Regina
Conference Operator

Our next question will come from the line of Tal Liani with Bank of America. Please go ahead.

speaker
Tal Liani
Analyst, Bank of America

Hi, guys. I have almost the same question I asked you last quarter, because you increased the guidance. Yeah, no, I'll explain. You increased the guidance, but the entire increase in the guidance is basically the cloud. And if I look at it, it's very simple to dissect your numbers. If I remove campus and I remove cloud, and you provide these two numbers for both '25 and '26, the rest of the business, which is 60% of the business, you guide to grow zero. And in previous years, I can make estimates, it was anywhere from 10% to 30% growth. So the question is, why are you guiding this way, that 60% of the business is not going to grow? Is it because the... No, can I pause you there?

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Because I know you like to dissect our math several different ways and come up with conclusions. We're not guiding that our business is going to be flat or that we're not going to grow here or grow there. But generally, when something is very fast-paced and growing, then other things grow less. And exactly whether it will be flat or grow double digits or single digits, it's February. I don't know what the rest of the year will be, okay?

speaker
Tal Liani
Analyst, Bank of America

No, but that's the question. The question is, is there allocation here? Meaning, let's say you have only a set number of, you know, memory slots, so you allocate it to cloud and then the rest of the business doesn't get it; or is it just conservatism and lack of visibility?

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

It's neither of the above. We don't allocate to our customers. It's first come, first served. And, in fact, the enterprise customers get a very high sense of priority, as do our cloud customers. Customers come first. But allocation of memory may leave us in a situation where the demand is greater than our ability to supply. We don't know. It's too early in the year. We're confident that we could guide, you know, six months after analyst day to a higher number, but we don't know what the next four quarters will look like to the precision you're asking for.

speaker
Tal Liani
Analyst, Bank of America

Got it. Thank you.

speaker
Regina
Conference Operator

Thank you. Our next question comes from the line of Adith Malik with Citi. Please go ahead.

speaker
Adith Malik
Analyst, Citi

Hi. It's Adrienne Colby for Adith. Thank you for taking my question. I was hoping to ask for an update on the four large AI customers. I know that fourth customer you talked about was a bit slower to ramp to 100,000 GPUs. Just wondering if you can update us on their progress there, and perhaps what's next for the other few customers that have already crossed that threshold. And lastly, is there any indication that the fifth customer that ran into funding challenges might come back to you?

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Okay. Adrienne, I'll give you some updates. I'm not sure I have precise updates, but we are in all four customers deploying AI with Ethernet. So that's the good news. Three of them have already deployed a cumulative 100,000 GPUs and are now growing from there. And, you know, they are clearly migrating now beyond pilots and production to other centers, power being the biggest constraint. Our fourth customer is migrating from InfiniBand, so it's still below 100,000 GPUs at this time. But I fully expect them to get there this year, and then we shall see how they get beyond that.

speaker
Regina
Conference Operator

Our next question will come from the line of Michael Ng with Goldman Sachs. Please go ahead.

speaker
Michael Ng
Analyst, Goldman Sachs

Hey, good afternoon. Thank you for the question. I just have one and one follow-up. First, I was wondering if you could talk a little bit about the new customer segmentations that you guys unveiled with cloud and AI and AI and specialty. You know, what's the philosophy around that? And, you know, does that kind of signal more opportunity in places like Oracle? And with cloud and AI at 48% of revenue and A and B at a combined 36%, you have 12% left over. Is that a hyperscale customer? Does it kind of imply that you have a new hyperscaler that is approaching 10%? Because obviously, you know, we thought that the next biggest one would have been Oracle, but that's moved out of cloud now. So any thoughts there would be great. Thank you.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Yeah. Yeah, sure, Michael. So, well, first of all, my math is 26 plus 16, so it's 42. So I don't have 12% unless you had 58; it's really only 6%. So on the cloud and AI titans, the way we classify that is it's significantly large-scale customers with greater than a million servers, greater than 100,000 GPUs, and an R&D focus on models and sometimes even their own XPUs. And this can, of course, change. Some others may come into it. But it's a very select few set of customers, you know, less than five or about five. That's the way to think of it. On the change on the specialty cloud, as I said, we're noticing that some customers are really, really focused solely on AI with some cloud, as opposed to cloud with some AI. So when it's heavily AI-centric, especially with Oracle's AI Acceleron and the multi-tenant partnerships that they've created, they have naturally got a dual personality, some of which is OCI, the Oracle Cloud, but some of it is really AI, fully AI-based. So the shift in their strategy made us shift the category and bifurcate the two.
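
For readers tracking the arithmetic in this exchange, the correction uses the customer-concentration figures from the prepared remarks; this is an illustrative check only:

```python
# Customer concentration math from the prepared remarks and this exchange.
customer_a, customer_b = 0.16, 0.26   # Customers A and B as shares of 2025 revenue
cloud_ai_titans = 0.48                # cloud and AI titans' share of 2025 revenue

combined_a_b = customer_a + customer_b        # 42%, not the 36% cited in the question
remainder = cloud_ai_titans - combined_a_b    # ~6% of revenue left over, not 12%
print(f"A + B = {combined_a_b:.0%}; titans minus A and B = {remainder:.0%}")
```
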

speaker
Rudolf Araujo
VP of Investor Advocacy

Thank you, Jayshree.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Thank you.

speaker
Rudolf Araujo
VP of Investor Advocacy

Regina, we have time for one last question.

speaker
Regina
Conference Operator

Our final question will come from the line of Ryan Koontz with Needham & Company. Please go ahead.

speaker
Ryan Koontz
Analyst, Needham & Company

Great. Thanks for squeezing me in. Jayshree, in your prepared remarks, you talked about your telemetry capabilities, and I wonder if you could expand on that and discuss where you're seeing that key differentiation, what sorts of use cases you're able to really seize the upper hand competitively with your telemetry capabilities. Thank you.

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Yeah. I'm going to say some, and I think Ken, who's been designing this and working on it, will say even more. Ken Duda is our co-president and CTO. So telemetry is at the heart of our technology, both our EOS software stack as well as our CloudVision for enterprise customers. We have real-time streaming telemetry that has been with us since the beginning of time, and it's constantly keeping track of all our switches. It isn't just a pretty management tool. And at the same time, our cloud customers and AI customers are seeking some of that visibility, too. And so we have developed some deeper AI capabilities for telemetry as well. Over to you, Ken, for some more detail.

speaker
Ken Duda
Co-President and CTO

Yeah, no, thanks for that question. That's great. Look, the EOS architecture is based on state orientation. This is the idea that we capture the state of the network and then stream that state out from the system database on the switches into CloudVision or whatever system can then receive it. And we're extending that capability for AI with a combination of in-network data sources related to flow control, RDMA counters, buffering and congestion counters, and also host-level information, including what's going on in the RDMA stack on the host, what's going on with collectives, latencies, any flow control problems or buffering problems in the host NIC. Then we pull that information all together in CloudVision and give the operator a unified view of what's happening in the network and what's happening in the host. And this greatly aids our customers in building an overall working solution, because the interactions between the network and the host can be complicated and difficult to debug when it's different systems collecting them.
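
To make the unified-view idea concrete, here is a minimal, hypothetical sketch of correlating switch-side congestion counters with host-side NIC/RDMA counters for the same link. This is not Arista's CloudVision or EOS API; all names, fields, and sample values are illustrative assumptions:

```python
# Illustrative only: joining switch telemetry with host NIC/RDMA telemetry
# to produce one combined view per link, as described in the answer above.
from dataclasses import dataclass

@dataclass
class SwitchSample:
    ts: float              # epoch seconds
    port: str              # switch interface
    pfc_pause_frames: int  # flow-control events seen on the port
    buffer_util_pct: float # egress buffer utilization

@dataclass
class HostSample:
    ts: float
    nic: str                      # host NIC attached to that switch port
    rdma_retransmits: int         # RDMA-level retransmissions on the host
    collective_latency_ms: float  # measured collective latency on the host

def correlate(switch_samples, host_samples, port_to_nic, window_s=1.0):
    """Join switch and host samples taken at roughly the same time on the same link."""
    for s in switch_samples:
        for h in host_samples:
            if port_to_nic.get(s.port) == h.nic and abs(s.ts - h.ts) <= window_s:
                yield {
                    "ts": s.ts, "port": s.port, "nic": h.nic,
                    "pfc_pause_frames": s.pfc_pause_frames,
                    "buffer_util_pct": s.buffer_util_pct,
                    "rdma_retransmits": h.rdma_retransmits,
                    "collective_latency_ms": h.collective_latency_ms,
                    # Flag the link if either side shows signs of congestion.
                    "congested": s.buffer_util_pct > 90 or h.rdma_retransmits > 0,
                }

# Hypothetical samples for one link during a congestion event.
switch = [SwitchSample(100.0, "Ethernet49/1", 240, 92.5)]
host = [HostSample(100.3, "mlx5_0", 17, 38.2)]
for row in correlate(switch, host, {"Ethernet49/1": "mlx5_0"}):
    print(row)
```
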

speaker
Jayshree Ullal
Chairperson and Chief Executive Officer

Great job, Ken. I can't wait for that product.

speaker
Rudolf Araujo
VP of Investor Advocacy

Thank you. This concludes Arista Networks' fourth quarter 2025 earnings call. We have posted a presentation that provides additional information on our results, which you can access in the investor section of our website. Thank you for joining us today and for your interest in Arista.

speaker
Regina
Conference Operator

Thank you for joining, ladies and gentlemen. This concludes today's call. You may now disconnect.

Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
