Intel Corporation

Q2 2023 Earnings Conference Call

7/27/2023

spk10: Thank you for standing by, and welcome to Intel Corporation's second quarter 2023 earnings conference call. At this time, all participants are in listen-only mode. After the speakers' presentation, there will be a question and answer session. To ask a question during the session, you'll need to press star 1-1 on your telephone. To remove yourself from the queue, simply press star 1-1 again. As a reminder, today's program is being recorded. And now I'd like to introduce your host for today's program, Mr. John Pitzer, Corporate Vice President of Investor Relations.
spk03: Thank you, Jonathan. By now, you should have received a copy of the Q2 earnings release and earnings presentation, both of which are available on our investor website, intc.com. For those joining us online, the earnings presentation is also available in our webcast window. I am joined today by our CEO, Pat Gelsinger, and our CFO, David Zinsner. In a moment, we will hear brief comments from both, followed by a Q&A session. Before we begin, please note that today's discussion does contain forward-looking statements based on the environment as we currently see it, and as such, are subject to various risks and uncertainties. Our discussion also contains references to non-GAAP financial measures that we believe provide useful information to our investors. Our earnings release, most recent annual report on Form 10-K, and other filings with the SEC provide more information on specific risk factors that could cause actual results to differ materially from our expectations. They also provide additional information on our non-GAAP financial measures, including reconciliations where appropriate to our corresponding GAAP financial measures. With that, let me turn things over to Pat.
spk06: Thank you, John, and good afternoon, everyone. Our strong second quarter results exceeded expectations on both the top and bottom line, demonstrating continued financial improvement and confirmation of our strategy in the marketplace. Effective execution across our process and product roadmaps is rebuilding customer confidence in Intel. Strength in client and data center and our efforts to drive efficiencies and cost savings across the organization all contributed to the upside in the quarter and a return to profitability. We remain committed to delivering on our strategic roadmap, achieving our long-term goals, and maximizing shareholder value. In Q2, we began to see real benefits from our accelerating AI opportunity. We believe we are in a unique position to drive the best possible TCO for our customers at every node on the AI continuum. Our strategy is to democratize AI, scaling it and making it ubiquitous across the full continuum of workloads and usage models. We are championing an open ecosystem with a full suite of silicon and software IP to drive AI from cloud to enterprise, network, edge, and client across data prep, training, and inference in both discrete and integrated solutions. As we have previously outlined, AI is one of our five superpowers, along with pervasive connectivity, ubiquitous compute, cloud-to-edge infrastructure, and sensing, underpinning a $1 trillion semiconductor industry by 2030. Intel Foundry Services, or IFS, positions us to further capitalize on the AI market opportunity, as well as the growing need for a secure, diversified, and resilient global supply chain. IFS is a significant accelerant to our IDM 2.0 strategy, and every day of geopolitical tension reinforces the correctness of our strategy. IFS expands our scale, accelerates our ramps at the leading edge, and creates long tails at the trailing edge. More importantly for our customers, it provides choice, leading-edge capacity outside of Asia, and, at 18A and beyond, what we believe will deliver leadership performance. We are executing well on Intel 18A as a key foundry offering and continue to make substantial progress against our strategy. In addition, in July, we announced that Boeing and Northrop Grumman will join the RAMP-C program along with IBM, Microsoft, and Nvidia. Rapid Assured Microelectronics Prototypes Commercial, or RAMP-C, is a program created by the U.S. Department of Defense in 2021 to assure domestic access to next generation semiconductors, specifically by establishing and demonstrating a U.S.-based foundry ecosystem to develop and fabricate chips on Intel 18A. RAMP-C continues to build on recent customer and partner announcements by IFS, including MediaTek, Arm, and a leading cloud, edge, and data center solutions provider. We also made good progress on two significant 18A opportunities this quarter. We are strategically investing in manufacturing capacity to further advance our IDM 2.0 strategy and overarching foundry ambitions while adhering to our smart capital strategy. In Q2, we announced an expanded investment to build two leading-edge semiconductor facilities in Germany, as well as plans for a new assembly and test facility in Poland. The building out of Silicon Junction in Magdeburg is an important part of our go-forward strategy, and with our investment in Poland and the Ireland sites, we already operate at scale in the region. We are encouraged to see the passage of the EU Chips Act supporting our building out of an unrivaled capacity corridor in Europe.
In addition, a year after being signed into law, we submitted our first application for U.S. CHIPS funding for the on-track construction of our fab expansion in Arizona, working closely with the U.S. Department of Commerce. It all starts with our process and product roadmaps, and I am pleased to report that all our programs are on or ahead of schedule. We remain on track to five nodes in four years and to regain transistor performance and power performance leadership by 2025. Looking specifically at each node, Intel 7 is done, and with the second half launch of Meteor Lake, Intel 4, our first EUV node, is essentially complete with production ramping. For the remaining three nodes, I would highlight that Intel 3 met defect density and performance milestones in Q2, released PDK 1.1, and is on track for overall yield and performance targets. We will launch Sierra Forest in the first half of '24, with Granite Rapids following shortly thereafter, our lead vehicles for Intel 3. On Intel 20A, our first node using both RibbonFET and PowerVia, Arrow Lake, a volume client product, is currently running its first stepping in the fab. In Q2, we announced that we will be the first to implement backside power delivery in silicon, two-plus years ahead of the industry, enabling power savings, area efficiency, and performance gains for increased compute demands, ideal for use cases like AI, CPUs, and graphics. In addition, backside power improves ease of design, a major benefit not only for our own products, but even more so for our foundry customers. On Intel 18A, we continue to run internal and external test chips and remain on track to be manufacturing ready in the second half of 2024. Just this week, we were pleased to have announced an agreement with Ericsson to partner broadly on their next generation optimized 5G infrastructure. Reinforcing customer confidence in our roadmap, Ericsson will be utilizing Intel's 18A process technology for its future custom 5G SoC offerings. Moving to products, our client business exceeded expectations and gained share yet again in Q2 as the group executed well, seeing a modest recovery in the consumer and education segments, as well as strength in premium segments where we have leadership performance. We have worked closely with our customers to manage client CPU inventory down to healthy levels. As we continue to execute against our strategic initiatives, we see a sustained recovery in the second half of the year as inventory has normalized. Importantly, we see the AI PC as a critical inflection point for the PC market over the coming years that will rival the importance of Centrino and Wi-Fi in the early 2000s, and we believe that Intel is very well positioned to capitalize on the emerging growth opportunity. In addition, we remain positive on the long-term outlook for PCs as household density is stable to increasing across most regions and usage remains above pre-pandemic levels. Building on strong demand for our 13th Gen Intel processor family, Meteor Lake is ramping well in anticipation of a Q3 PRQ and will maintain and extend our performance leadership and share gains over the last four quarters. Meteor Lake will be a key inflection point in our client processor roadmap as the first PC platform built on Intel 4, our first EUV node, and the first client chiplet design enabled by Foveros advanced 3D packaging technology, delivering improved power efficiency and graphics performance. Meteor Lake will also feature a dedicated AI engine, Intel AI Boost.
With AI Boost, our integrated neural VPU enabling dedicated low-power compute for AI workloads, we will bring AI use cases to life through key experiences people will want and need for hybrid work, productivity, sensing, security, and creator capabilities, many of which were previewed at Microsoft's Build 2023 conference. Finally, while we made the decision to end direct investment in our Next Unit of Computing, or NUC, business, this well-regarded brand will continue to scale effectively through our recently announced ASUS partnership. In the data center, our 4th Gen Xeon Scalable processor is showing strong customer demand despite the mixed overall market environment. I am pleased to say that we are poised to ship our one millionth 4th Gen Xeon unit in the coming days. This quarter, we also announced the general availability of 4th Gen Xeon cloud instances by Google Cloud. We also saw great progress with 4th Gen's AI acceleration capabilities, and we now estimate more than 25% of Xeon data center shipments are targeted for AI workloads. Also in Q2, we saw third-party validation from MLCommons when they published MLPerf training performance benchmark data showing that 4th Gen Xeon and Habana Gaudi 2 are two strong, open alternatives in the AI market that compete on both performance and price versus the competition. End-to-end AI-infused applications like DeepMind's AlphaFold and algorithm areas such as graph neural networks show our 4th Gen Xeon outperforming other alternatives, including the best published GPU results. Our strengthening position within the AI market was reinforced by our recent announcement of our collaboration with Boston Consulting Group to deliver enterprise-grade, secure, and responsible generative AI, leveraging our Gaudi and 4th Gen Xeon offerings to unlock business value while maintaining high levels of security and data privacy. Our data center CPU roadmap continues to get stronger and remains on or incrementally ahead of schedule, with Emerald Rapids, our 5th Gen Xeon Scalable, set to launch in Q4 of '23. Sierra Forest, our lead vehicle for Intel 3, will launch in the first half of '24. Granite Rapids will follow shortly thereafter. For both Sierra Forest and Granite Rapids, volume validation with customers is progressing ahead of schedule. Multiple Sierra Forest customers have powered on their boards, and silicon is hitting all power and performance targets. Clearwater Forest, the follow-on to Sierra Forest, will come to market in 2025 and be manufactured on Intel 18A. While we performed ahead of expectations, the Q2 consumption TAM for servers remained soft on persistent weakness across all segments, but particularly in the enterprise and rest of the world, where the recovery is taking longer than expected across the entire industry. We see server CPU inventory digestion persisting in the second half, additionally impacted by the near-term wallet share focus on AI accelerators rather than general purpose compute in the cloud. We expect Q3 server CPUs to modestly decline sequentially before recovering in Q4. Longer term, we see AI as TAM expansive for server CPUs, and more importantly, we see our accelerator product portfolio as well positioned to gain share in 2024 and beyond. The surging demand for AI products and services is expanding the pipeline of business engagements for our accelerator products, which includes our Gaudi, Flex, and Max product lines.
Our pipeline of opportunities through 2024 is rapidly increasing and is now over $1 billion and continuing to expand, with Gaudi driving the lion's share. The value of our AI products is demonstrated by the public instances of Gaudi at AWS and the new commitments to our Gaudi product line from leading AI companies, such as Hugging Face and Stability AI, in addition to emerging AI leaders including Indian Institute of Technology Madras Pravartak and Genesis Cloud. In addition to building near-term momentum with our family of accelerators, we continue to make key advancements in next-generation technologies, which present significant opportunities for Intel. In Q2, we shipped our test chip, Tunnel Falls, a 12-qubit silicon-based quantum chip, which uniquely leverages decades of transistor design and manufacturing investments and expertise. Tunnel Falls fabrication achieved a 95% yield rate with voltage uniformity similar to chips manufactured in a standard CMOS process, with a single 300-millimeter wafer providing 24,000 quantum dot test chips. We strongly believe our silicon approach is the only path to true cost-effective commercialization of quantum computing. A silicon-based qubit approach is a million times smaller than alternative approaches. Turning to PSG, NEX, and Mobileye, demand trends are relatively stronger across our broad-based markets like industrial, auto, and infrastructure, although, as anticipated, NEX did see a Q2 inventory correction, which we expect to continue into Q3. In contrast, PSG, IFS, and Mobileye continue on a solid growth trajectory, and we see the collection of these businesses in total growing year-on-year in calendar year '23, much better than third-party expectations for a mid-single-digit decline in the semiconductor market, excluding memory. Looking specifically at our programmable solutions group, we delivered record results for a third consecutive quarter. In Q2, we announced that the Intel Agilex 7 with the R-Tile chiplet is shipping production-qualified devices in volume to help customers accelerate workloads with seamless integration and the highest bandwidth processor interfaces. We have now PRQed 11 of the 15 new products we expected to bring to market in calendar year '23. For NEX, during Q2, Intel, Ericsson, and HPE successfully demonstrated the industry's first vRAN solution running on the 4th Gen Intel Xeon Scalable processor with Intel vRAN Boost. In addition, we will enhance the collaboration we announced at Mobile World Congress to accelerate industry-scale Open RAN utilizing standard Intel Xeon based platforms as telcos transform to a foundation of programmable, software-defined infrastructure. Mobileye continued to generate strong profitability in Q2 and demonstrated impressive traction with their advanced product portfolio by announcing a SuperVision, eyes-on, hands-off design win with Porsche and a mobility-as-a-service collaboration with Volkswagen Group that will soon begin testing in Austin, Texas. We continue to drive technical and commercial engagement with them, co-developing leading FMCW LiDAR products based on Intel silicon photonics technology, and partnering to drive the software-defined automobile vision that integrates Mobileye's ADAS technology with Intel's cockpit offerings. Additionally, in the second quarter, we executed the secondary offering that generated meaningful proceeds as we continue to optimize our value creation efforts.
In addition to executing on our process and product roadmaps during the quarter, we remain on track to achieve our goal of reducing costs by $3 billion in 2023 and $8 to $10 billion exiting 2025. As mentioned during our internal foundry webinar, our new operating model establishes a separate P&L for our manufacturing group, inclusive of IFS and TD, which enables us to facilitate and accelerate our efforts to drive a best-in-class cost structure, de-risk our technology for external foundry customers, and fundamentally change incentives to drive incremental efficiencies. We have already identified numerous gains in efficiency, including factory loading, test and sort time reduction, packaging cost improvements, litho field utilization improvements, reductions in steppings, expedites, and many more. It is important to underscore the inherent sustained value creation due to the tight connection between our business units and TD, manufacturing, and IFS. Finally, as we continue to optimize our portfolio, we agreed to sell a minority stake in our IMS Nanofabrication business to Bain Capital, which brings a long history of partnering with companies to drive growth and value creation. IMS has created a significant market position with multi-beam mask writing tools that are critical to the semiconductor ecosystem for enabling EUV technology and is already providing benefit to our five nodes in four years efforts. Further, this capability becomes even more critical with the adoption of high-NA EUV in the second half of the decade. As we continue to keep Moore's Law alive and very well, IMS is a hidden gem within Intel, and the business's growth will be exposed and accelerated through this transaction. While we still have work to do, we continue to advance our IDM 2.0 strategy. Five nodes in four years remains well on track. Our product execution and roadmap are progressing well. We continue to build out our foundry business, and we are seeing early signs of success as we work to truly democratize AI from cloud to enterprise, network, edge, and client. We also saw strong momentum on our financial discipline and cost savings as we return to profitability, are executing toward our internal foundry model by 2024, and are leveraging our smart capital strategy to effectively and efficiently position us for the future. With that, I will turn it over to Dave.
spk15: Thank you, Pat, and good afternoon, everyone. We drove stronger than expected business results in the second quarter, comfortably beating guidance on both the top and bottom line. While we expect continued improvement in global macroeconomic conditions, the pace of recovery remains moderate. We will continue to focus on what we can control, prioritizing investments critical to our IDM 2.0 transformation, prudently and aggressively managing expenses near term, and driving fundamental improvements to our cost structure longer term. Second quarter revenue was $12.9 billion, more than $900 million above the midpoint of our guidance. Revenue exceeded our expectations in CCG, DCAI, IFS, and Mobileye, partially offset by continued demand softness and elevated inventory levels in the network and edge markets, which impacted NEX results. Gross margin was 39.8%, 230 basis points better than guidance on stronger revenue. EPS for the quarter was 13 cents, beating guidance by 17 cents as our revenue strength, better gross margin, and disciplined OpEx management resulted in a return to profitability. Q2 operating cash flow was $2.8 billion, up $4.6 billion sequentially. Net inventory was reduced by $1 billion, or 18 days, in the quarter, and accounts receivable declined by $850 million, or seven days, as we continue to focus on disciplined cash management. Net capex was $5.5 billion, resulting in adjusted free cash flow of negative $2.7 billion, and we paid dividends of a half billion dollars in the quarter. Our actions in the last few weeks, the completed secondary offering of Mobileye shares and the upcoming investment in our IMS Nanofabrication business by Bain Capital, will generate more than $2.4 billion of cash and help to unlock roughly $35 billion of shareholder value. These actions further bolster our strong balance sheet and investment grade profile, with cash and short-term investments of more than $24 billion exiting Q2. We'll continue to focus on avenues to generate shareholder value from our broad portfolio of assets in support of our IDM 2.0 strategy. Moving to second quarter business unit results, CCG delivered revenue of $6.8 billion, up 18% sequentially and ahead of our expectations for the quarter as the pace of customer inventory burn slowed. As anticipated, we see the market moving toward equilibrium and expect shipments to more closely align to consumption in the second half. ASPs declined modestly in the quarter due to higher education shipments and sell-through of older inventory. CCG showed outstanding execution in Q2, generating operating profit of $1 billion, an improvement of more than $500 million sequentially on higher revenue, improved unit costs, and reduced operating expenses, offsetting the impact of pre-PRQ inventory reserves in preparation for the second half launch of Meteor Lake. DCAI revenue was $4 billion, ahead of expectations and up 8% sequentially, with the Xeon business up double digits sequentially. Data center CPU TAM contracted meaningfully in the first half of '23, and while we expect the magnitude of year-over-year declines to diminish in the second half, a slower-than-anticipated TAM recovery in China and across enterprise markets has delayed a return to CPU TAM growth. CPU market share remained relatively stable in Q2, and the continued ramp of Sapphire Rapids contributed to CPU ASP improvement of 3% sequentially and 17% year over year.
DCAI had an operating loss of $161 million, improving sequentially on higher revenue and ASPs and reduced operating expenses. Within DCAI, our FPGA products delivered a third consecutive quarter of record revenue, up 35% year over year, along with another record quarterly operating margin. We expect this business to return to a more natural demand profile in the second half of the year as we work down customer backlog to normalized levels. NEX revenue was $1.4 billion, below our expectations in the quarter and down significantly in comparison to a record Q2 '22. Network and edge markets are slowly working through elevated inventory levels, elongated by a sluggish China recovery, and telcos have delayed infrastructure investments due to macro uncertainty. We see demand remaining weak through at least the third quarter. Q2 NEX operating loss of $187 million improved sequentially on lower inventory reserves and reduced operating expenses. Mobileye continued to perform well in Q2. Revenue was $454 million, roughly flat sequentially and year over year, with operating profit improving sequentially to $129 million. This morning, Mobileye increased its fiscal year 2023 outlook for adjusted operating income by 9% at the midpoint. Intel Foundry Services revenue was $232 million, up 4x year-over-year and nearly doubling sequentially on increased packaging revenue and higher sales of IMS Nanofabrication tools. Operating loss was $143 million, with higher factory startup costs offsetting stronger revenue. Q2 was another strong quarter of cross-company spending discipline, with operating expenses down 14% year-over-year. We're on track to achieve $3 billion of spending reductions in 2023. With the decision to stop direct investment in our client NUC business earlier this month, we have now exited nine lines of business since Pat rejoined the company, with combined annual savings of more than $1.7 billion. Through focused investment prioritization and austerity measures in the first half of the year, some of which are temporary in nature, OpEx is tracking a couple hundred million dollars better than our $19.6 billion 2023 committed goal. Now turning to Q3 guidance. We expect third quarter revenue of $12.9 to $13.9 billion, with a midpoint of $13.4 billion. We expect client CPU shipments to more closely match sell-through. Data center, network, and edge markets continue to face mixed macro signals and elevated inventory levels in the third quarter, while IFS and Mobileye are well positioned to generate strong sequential and year-over-year growth. We're forecasting gross margin of 43%, a tax rate of 13%, and EPS of 20 cents at the midpoint of revenue guidance. We expect sequential margin improvement on higher sales and lower pre-PRQ inventory reserves. While we're starting to see some improvement in factory underload charges, most of the benefit will take some time to run through inventory and positively impact cost of sales. Investment in manufacturing capacity continues to be guided by our smart capital framework, creating flexibility through proactive investment in shells and aligning equipment purchases to customer demand. In the last few weeks, we have closed agreements with governments in Poland and Germany, which include significant capital incentives, and we're well positioned to meet the requirements for funding laid out by the U.S. CHIPS Act.
Looking at capital requirements and offsets made possible by our smart capital strategy, we expect net capital intensity in the mid-30s as a percentage of revenue across 2023 and 2024 in aggregate. While our expectations for gross capex have not changed, the timing of some capital offsets is uncertain and could land in either 2023 or 2024, depending on a number of factors. Having said that, we're confident in the level of capital offsets we will receive over the next 18 months and expect offsets to track to the high end of our previous range of 20 to 30%. Our financial results in Q2 reflect improved execution and improving macro conditions. Despite a slower than expected recovery in key consumption markets like China and the enterprise, we maintain our forecast of sequential revenue growth throughout the year. Accelerating AI use cases will drive increased demand for compute across the AI continuum, and Intel is well positioned to capitalize on the opportunity in each of our business units. We remain focused on the execution of our near- and long-term product, process, and financial commitments, and the prioritization of our owners' capital to generate free cash flow and create value for our stakeholders. With that, let me turn the call back over to John.
spk03: Thank you, Dave. We will now transition to the Q&A portion of our earnings presentation. As a reminder, we would ask each of you to ask one question with a brief follow-up question where appropriate. With that, Jonathan, can we have the first caller, please?
spk10: Certainly. And our first question comes from the line of Ross Seymour from Deutsche Bank. Your question, please.
spk14: Hi, guys. Thanks for letting me ask the question. Congrats on the strong results. I wanted to focus, Pat, on the data center, the DCAI side of things. Strong upside in the quarter, but it sounds like there's still some mixed trends going forward. So I guess a two-part question. Can you talk about what drove the upside and where the concern is going forward? And part of that concern, that crowding out potential that you discussed with accelerators versus CPUs, how is that playing out and when do you expect it to end?
spk06: Yeah, thanks, Ross, and thanks for the congrats on the quarter as well. I'm super proud of my team for the great execution this quarter: top and bottom line beats, and a raise. Just great execution across every aspect of the business, both financially as well as roadmap execution. With regard to the data center, obviously, the good execution, I'll just say we executed well. Winning designs, fighting hard in the market, regaining our momentum, good execution. As you said, we'll see Sapphire Rapids hit the millionth unit in the next couple of days, our 4th Gen Xeon. So overall, it's feeling good. Roadmaps are in very good shape. So we're feeling very good about the future outlook of the business as well, as we look to 5th Gen, E-core, and P-core with Sierra Forest and Granite Rapids. So all of those, I'll just say we're performing well. That said, we do think that the next quarter at least will show some softness. There's some inventory burn that we're still working through. We do see that big cloud customers in particular have put a lot of energy into building out their high-end AI training environments, and that is putting more of their budgets focused or prioritized into the AI portion of their build-out. That said, we do think this is a near-term surge, right, that we expect will balance over time. We see AI as a workload, not as a market, right, which will affect every aspect of the business, whether it's client, whether it's edge, whether it's standard data center, on-premise enterprise, or cloud. We're also seeing that 4th Gen Xeon, and then we'll be enhancing that in the future roadmap, has significant AI capabilities. And as you heard in the prepared remarks, we expect about 25% today, and growing, of our 4th Gen is being driven by AI use cases. And obviously, we're going to be participating more in the accelerator portion of the market with our Gaudi, Flex, and Max product lines; particularly Gaudi is gaining a lot of momentum. In my formal remarks, we said we now have over a billion dollars of pipeline, 6x in the last quarter. So we're going to participate in the accelerator portion of it. We're seeing a real opportunity for the CPU as that workload balances over time between CPU and accelerator. And obviously, we have a strong position to democratize AI across our entire portfolio of products.
spk04: Ross, do you have a quick follow-up?
spk14: I do. I just wanted to pivot to Dave on a question on the gross margin side. Nice beat in the quarter and the sequential increase for the third quarter as well. Beyond the revenue increase side, which I know is important, can you just walk us through some of the pluses and minuses sequentially into the third quarter and even into the back half, some of the pre-PRQ reversals, underutilization, any of those kind of idiosyncratic blocks that we should be aware of as we think about the gross margin in the second half of the year?
spk15: Yeah, good question, Ross. So in the second quarter, just to repeat what I said in the prepared remarks, you know, that was largely a function of revenue. We obviously beat revenue significantly, and we got a good fall-through given the fixed cost nature of our business. And so that really was what helped us outperform significantly on the gross margin side in the second quarter. In the third quarter, we do obviously at the midpoint see revenue growth sequentially, and so that will be helpful in terms of gross margin improvement. We expect, again, pretty good fall-through as we get that incremental revenue. We're also going to see underloadings come down, I would say modestly come down, for two reasons. One, we get that period charge for some of our underloading, but some of our underloading is actually just a function of the cost of the inventory, and so that will take some time to flow through. So it'll be a modest decline, but nevertheless helpful on the gross margin front. And then, as you point out, we will have pre-PRQ reserves in the third quarter, but they're meaningfully down from the second quarter. Meteor Lake will not be a pre-PRQ reserve in the third quarter because we expect to launch that this quarter, but we have Emerald Rapids; that will certainly have some impact. And then some of the other SKUs will also impact it. So coming down, but not to zero. So we have an opportunity actually to perform better in the fourth quarter, obviously dependent on the revenue and so forth, given that pre-PRQ reserves are likely to come off again in the fourth quarter. We should improve on the loading front in the fourth quarter as well. And so there's, I think, some good tailwinds on the gross margin front. I'll just take an opportunity to talk longer term. We will continue to be weighed down for some quarters on underload charges because of the nature of just having them cycle through inventory and then come out through cost of sales. So for multiple quarters, we'll have some underloading charges that we'll see. And then, as we talked about, since really Pat joined and we kind of launched into the five nodes in four years, we're going to have a significant amount of startup costs that will hit gross margins and affect us for a couple of years. But we're really optimistic about where gross margins are going over the long term. Ultimately, we will get back to process parity and leadership, and that will enable us to not have these startup costs be a headwind. And of course, as you bring out products that perform at a high level in terms of process and in terms of product, that shows up in terms of our margins. And then, as Pat mentioned, he went through a laundry list in the prepared remarks of areas of benefit that the internal foundry model will give us. You know, we expect a pretty meaningful amount of that to come out by the time we hit 2026, but we won't be done there. I mean, I think there'll be multiple opportunities over the course of multiple years to improve the gross margin. So, you know, Pat has, you know, talked about a pretty significant improvement in gross margins over time. And I think, you know, what we're seeing today is the beginnings of seeing that improvement show up in the P&L.
spk03: Perfect. Ross, thanks for the question. Jonathan, can we have the next one, please?
spk10: Certainly. One moment for our next question. And our next question comes from the line of Joe Moore from Morgan Stanley. Your question, please.
spk09: Great. Thank you. Dave, I think you said in your prepared remarks that data center pricing was up 17% year-on-year and that Sapphire Rapids was a factor there. Can you just talk to that? And obviously, as Sapphire Rapids gets bigger, can you talk about what you expect to see with platform costs in DCAI?
spk15: Platform costs. Okay. Well, first of all, you know, ASPs are obviously improving as we increase core count. And, you know, as we get more competitive on the product offerings, that enables us to, you know, have more confidence in the market in terms of our pricing. So that's certainly helpful. Obviously, with the increase in core count, that affects the cost as well, so cost obviously goes up. But, you know, longer term, the larger drivers of our cost structure will be around what we do in terms of the internal foundry model as we get up in terms of scale and get away from these underloading charges, and as we get past the startup costs on five nodes in four years, which data center is certainly getting hit with. And so those things, I think, longer term will be the bigger drivers of gross margin improvement. And as we launch Sierra Forest in the first half of next year and Granite Rapids thereafter, and start to produce products on the data center side that are really competitive, that enables us to be even stronger in terms of our margin outlook and should help improve the overall P&L of data center. Joe, do you have a follow-up question?
spk09: Sure. Just also on servers, as you look to Q3, I think you talked about some of the cautious trends there. Can you talk to enterprise versus cloud? Is it different between the two? And also, are you seeing anything different in China for data center versus what you're seeing in North America?
spk06: Yeah, and as we said, Joe, and thanks for the question, as we said in the prepared remarks, we do expect to see the TAM down in Q3, somewhat driven by all of it. It's a little bit of data center digestion for the cloud guys, a bit of enterprise weakness, and some of that is more inventory related. And the China market, I think this has been well reported, hasn't come back as strongly as people would have expected overall. And then the last factor was, as in the first question from Ross, around the pressure from accelerator spend being stronger. So I think those four somewhat together are leading to a bit of weakness, at least through Q3. That said, our overall position is strengthening, and we're seeing our products improve. We're seeing the benefits of the AI capabilities in our Gen 4 and beyond products improving. We're also starting to see some of the use cases like graph neural networks and Google's AlphaFold showing best results on CPUs as well, which is increasingly gaining momentum in the industry as people look for different aspects of data preparation, data processing, and different innovations in AI. So all of that taken together, we feel optimistic about the long-term opportunities that we have in data center. And of course, the strengthening accelerator roadmap with Gaudi 2, Gaudi 3, and Falcon Shores is now being well executed. Also, our first wafers are in hand for Gaudi 3. So we see a lot of long-term optimism, even as near term we're working through some of the challenging environments of the market not being as strong as we would have hoped.
spk03: Joe, thanks for the question. Jonathan, can we have the next question, please?
spk10: Certainly. And our next question comes from the line of CJ Muse from Evercore ISI. Your question, please.
spk02: Yeah, good afternoon. Thank you for taking the question. I guess first question in your prepared remarks, you talked about AI being a TAM expander for servers. And I guess I was hoping you could elaborate on that, given the productivity gains through acceleration. Would love to hear why you think that will grow units, and particularly if you could bifurcate your commentary across both training and inference.
spk06: Yeah. And thanks, CJ. And generally, there are great analogies here from history. We point to, you know, cases like virtualization, which was going to destroy, you know, the CPU TAM and then ended up driving new workloads, right? You know, if you think about a DGX platform, the leading edge AI platform, it includes CPUs, right? Why? Right? Head nodes, data processing, data prep, you know, dominate certain portions of the workload. You know, we also see, as we said, AI as a workload where, you know, you might spend, you know, 10 megawatts and months training a model, you know, but then you're going to use it very broadly for inferencing. We do see Meteor Lake ushering in the AI PC generation, where you have tens of watts, you know, responding in a second or two, and then AI is going to be in every hearing aid in the future, including mine, where it's, you know, 10 microwatts and instantaneous. So, you know, we do see AI driving workloads across the full spectrum of applications. And for that, we're going to build AI into every product that we build, you know, whether it's a client, whether it's an edge platform, you know, for retail and manufacturing and industrial use cases, whether it's an enterprise data center where they're not going to stand up a dedicated 10-megawatt farm, but they're also not going to move their private data off premises, and will use foundational models that are available in open source, as well as in the big cloud and training environments as well. We firmly believe in this idea of democratizing AI, opening the software stack, and creating and participating in this broad industry ecosystem that's emerging. It's a great opportunity and one that Intel is well positioned to participate in. We see the AI TAM as part of the semiconductor TAM. We've always described this trillion-dollar semiconductor opportunity, and AI being one of those superpowers, as I call it, driving it. But it's not the only one, and it's one that we're going to participate in broadly across our portfolio.
spk03: CJ, do you have a follow-up question? Yeah, please.
spk02: You talked a little bit about 18A and backside power. We'd love to hear, you know, what you're seeing today in terms of both scaling and power benefits and how your potential foundry customers, you know, are looking at that technology in particular.
spk06: Yeah, thank you. And, you know, we continue to make good progress on our five nodes in four years, and with that, you know, that culminates in 18A. And 18A is proceeding well, and we got a particularly good response this quarter to PowerVia, the backside power delivery that we believe is a couple of years ahead, as the industry measures it, of any other alternative in the industry. We're very affirmed by the Ericsson announcement, which is, you know, reinforcing the strong belief they have in 18A. But over and above that, you know, I mentioned in the prepared remarks the two significant opportunities that we made very good progress on as big 18A foundry customers this quarter, and an overall growing pipeline of potential foundry customers with test chips in process as well. So, you know, we feel five nodes in four years is on track, 18A is the culmination of that, and there's good interest from the industry across the board. You know, I'd also say, as part of the overall strength in the foundry business as well, and maybe tying the first part and the second part of your question together, that our packaging technologies are particularly interesting in the marketplace, an area where Intel never stumbled. This is an area of sustained leadership that we've had. And today, many of the big AI machines are packaging limited. And because of that, we're finding a lot of interest for our advanced packaging, and this is an area of immediate strength for the foundry business. We set up a specific packaging business unit within our foundry business and are finding a lot of great opportunities for us to pursue there as well.
spk03: CJ, thanks for the questions. Jonathan, can we have the next caller, please?
spk10: Certainly. And our next question comes from the line of Timothy Arcuri from UBS. Your question, please.
spk13: Thanks a lot. First, Dave, I have one for you. If I look at the third-party contributions, they were down a little bit, which was a little bit of a surprise, but you did say that the Arizona fab is on track. Can you sort of talk about that? And I know last quarter you said gross capex would be first half weighted and the offsets would be back half weighted. Is that still the case?
spk15: Yeah. So we did, you know, manage CapEx a bit better than I was hoping. We thought it would be more front-end loaded. It's looking like it's going to be a lot more evenly distributed, first half versus second half. And we managed CapEx in particular this quarter really well, which I think, you know, obviously helped on the free cash flow side. You know, when you manage the CapEx, you get less offsets, and so, you know, that kind of drove the lower capital offsets for the quarter. But for the year, we're still on track to get the same amount of capital offsets through SCIP that we had anticipated, and that's really where most of the capital offsets have come so far. Now, obviously, you know, as we get into, you know, CHIPS incentives that should be coming here in the not too distant future, you know, that will add to the offsets that we get. As we go into next year, we start getting the investment tax credit, which will help on the capital offsets. So there'll be more things that come, you know, in the future. But right now it's largely SCIP, and it's SCIP 1, and that's a function of, you know, where the spending lands quarter to quarter.
spk06: Yeah, and just maybe to pile on to that a bit. Obviously, getting the EU Chips Act approved, we're excited about that for the Germany and Poland projects, which will go for formal DG Comp approval. We're also very happy we submitted our first proposal for the on-track Arizona facility. But we'll have three more proposals going in for U.S. CHIPS Act funding this quarter, and so we're now at pace for those. So everything there is feeling exactly as we said it would, and we're super happy with the great engagement both in Europe as well as with the U.S. Department of Commerce as we're working on those application processes.
spk03: Tim, do you have a follow-up question?
spk13: I do. Yeah, Pat. So you talked about an accelerated pipeline of more than a billion dollars. And I think Sandra has been recently implying that you could do over a billion dollars in Gaudi next year. So the question is, is that the commitment? And then also at the data center day, you had talked about merging the GPU and the Gaudi roadmaps into Falcon Shores, but that's not going to come out until 2025. So the question really there is, wondering where that leaves customers in terms of their commitment to your roadmap, given those changes.
spk06: Yeah, let me take that and Dave can add. Overall, you know, as we said, the accelerator pipeline is now well over a billion dollars and growing rapidly, about 6x this past quarter. That's led by, but not exclusively, Gaudi; you know, that also includes the Max and Flex product lines as well. But the lion's share of that is Gaudi. Gaudi 2 is the shipping volume product today. Gaudi 3 will be the volume product for next year, and then Falcon Shores in '25, and we're already working on Falcon Shores 2 for '26. So we have a simplified roadmap as we bring together our GPU and our accelerators into a single offering. But the progress that we're making with Gaudi 2, it becomes more generalized with Gaudi 3. The software stack, the oneAPI approach that we're taking, will give customers confidence that they have forward compatibility into Gaudi 3 and Falcon Shores. And we'll just be broadening the flexibility of that software stack. We're adding FP8. We just added PyTorch 2 support. So every step along the way, it gets better, and broader use cases, more language models, and more programmability are being supported in the software stack. And we're building that full solution set as we deliver on the best of GPU and the best of matrix acceleration in the Falcon Shores timeline. But every step along the way, it just gets better. Every software release gets better. Every hardware release gets better, along the way to cover more of the overall accelerator marketplace. And as I said, we now have Gaudi 3 wafers. First ones are in hand, so that program is looking very good. And with this rapidly accelerating pipeline of opportunity, we expect that we'll be giving you very positive updates there in the future with both customers as well as expanded business opportunities.
spk03: Tim, thanks for the question. Jonathan, can we have the next caller, please?
spk10: Certainly. And our next question comes from the line of Ben Reitzes from Melius Research. Your question, please.
spk08: Yeah, thanks a lot. Appreciate the question. Pat, you caught my attention with your comment about PCs next year, with AI having a Centrino moment. Do you mind just talking about that? When Centrino took place, you know, it was very clear we unplugged from the wires, and investors really grasped that. What is the aha moment with AI that's going to accelerate the client business and benefit Intel?
spk06: Yeah. And, you know, I think the real question is what applications are going to become AI enabled. And today you're starting to see that, you know, people are going to the cloud and goofing around with ChatGPT, writing a research paper, and, you know, that's like super cool, right? And kids are, of course, simplifying their homework assignments that way. But you're not going to do that for every client becoming AI enabled; it must be done on the client for that to occur, right? You can't go to the cloud. You can't round trip to the cloud. All of the new effects: real-time language translation in your Zoom calls, real-time transcription, automation, inferencing, relevance portrayal, generated content in gaming environments, real-time creator environments being done, you know, through Adobe and others that are doing those as part of the client, new productivity tools, being able to do local, you know, legal brief generation on clients, one after the other, right, across every aspect of consumer, developer, and enterprise efficiency use cases. We see that there's going to be a raft of AI enablement, and those will be client-centered. Those will also be at the edge. You can't round trip to the cloud. You don't have the latency, the bandwidth, or the cost structure to round trip, let's say, inferencing in a local convenience store to the cloud. It will all happen at the edge and at the client. So with that in mind, we do see this idea of bringing AI directly into the client, and Meteor Lake, which we're bringing to the market in the second half of the year, is the first major client product that includes native AI capabilities, the neural engine that we've talked about. And this will be a volume delivery that we will have. And we expect that Intel, as the volume leader for the client footprint, is the one that's going to truly democratize AI at the client and at the edge. And we do believe that this will become a driver of the TAM, because people will say, oh, I want those new use cases. They make me more efficient and more capable, just like Centrino made me more efficient because I didn't have to plug into the wire, right? Now I don't have to go to the cloud to get these use cases. I'm going to have them locally on my PC, in real time and cost effective. We see this as a true AI PC moment that begins with Meteor Lake in the fall of this year.
spk03: Ben, do you have a follow-up question, please?
spk08: Yeah, thanks, John. I wanted to double-click on your sequential guidance in the client business. There's been, you know, there's some concerns out there with investors that there was some demand pull-in in the second quarter, given some comments from some others, and just wanted to talk about your confidence for sequential growth in that business based on what you're seeing and if there was any more color there. Thanks.
spk06: Yeah, let me start on that and Dave can jump in. You know, the biggest change quarter over quarter that we see is that we're now at healthy inventory levels. You know, we worked through inventory in Q4, Q1, and some in Q2, and we now see the OEMs and the channel at healthy inventory levels. We continue to see solid demand signals, you know, for the client business from our OEMs, and even some of the end-of-quarter and early-quarter sell-through are clear indicators of, you know, good strength in that business. And obviously we combine that with gaining share again in Q2. So we come into the second half of the year with good momentum and a very strong product line. So we feel quite good about the client business outlook.
spk15: I'd just add, normally over the last few quarters, you've seen us identify in the 10-Q strategic sales that we've made, where we've negotiated kind of attractive deals which have accelerated demand, let's call it. When you look at our 10-Q, which will either be filed late tonight or early tomorrow, you'll see that we don't have a number in there for this quarter, which is an indication of how little we did in terms of strategic purchases. So to your question of did we pull in demand, I think that will probably give you a pretty good assessment of that.
spk03: Ben, thanks for the questions. Jonathan, can we have the next caller, please?
spk10: Certainly. And our next question comes from the line of Srini Pajjuri from Raymond James. Your question, please.
spk01: Thank you. Pat, I have a question on AI as it relates to custom silicon. It's great to see that you announced a customer for 18A on custom silicon, but there's a huge demand, it seems like, for custom silicon on the AI front. I think some of your hyperscale customers are already successfully using custom silicon as an AI accelerator. So, I'm just curious what your strategy for that market is. Is that a focus area for you? If so, do you have any engagements with customers right now?
spk06: Yeah. Yeah. Thank you, Srini. And the simple answer is yes. And we have, you know, multiple ways to play in this market. Obviously, one of those is foundry customers. You know, we have a good pipeline of foundry customers for 18A foundry opportunities, and several of those opportunities that we're investigating are exactly what you described: you know, people looking to do their own unique versions of their AI accelerator components, and we're engaging with a number of those. But some of those are going to be variations of Intel standard products, and this is where the IDM 2.0 strength really comes into play, where they could be using some of our silicon, combining it with some of their silicon designs. And given our advanced packaging strength, that gives us another way to be participating in those areas. And of course, that reinforces that some of the near-term opportunities will just be packaging, right, where they already have designs with one or the other foundry, but we're going to be able to augment their capacity opportunities by immediately being able to engage with packaging opportunities, and we're seeing a pipeline of those opportunities. So overall, we agree that this is clearly going to be a market. We also see that some of the ones that you've seen most in the press are about particularly high-end training environments. But as you said, we see AI being infused in everything. And there are going to be AI chips for the edge, AI chips for the communications infrastructure, AI chips for sensing devices, for automotive devices. And, you know, we see opportunities for us both as a product provider and as a foundry and technology provider across that spectrum. And that's part of the unique positioning that IDM 2.0 gives us for the future.
spk03: Srini, do you have a follow up question?
spk01: Yeah, it's for Dave. Dave, it's good to see the progress on the working capital front. I think previously you said your expectation is that, you know, free cash flow would turn positive sometime in second half. Just curious if that's still the expectation. Also, on the gross margin front, is there any, I guess, you know, PRQ charges that we should be aware of as we go into fourth quarter? Thank you.
spk15: Okay, so let me just take a moment to give the team credit on the second quarter in terms of working capital, because we brought inventory down by a billion dollars. Our days sales outstanding on the AR front is down to 24 days, which is exceptional. So a lot of what you saw in terms of the improving free cash flow from Q1 to Q2 was working capital. So I think the team's done an outstanding job just really focusing on all the elements that drive free cash flow. Our expectation is still by the end of the year to get to break-even free cash flow. There's no reason why we shouldn't achieve that. Obviously, the net capex might be a little different this year than we thought coming into the year. But as we talked about, it's just the focus on free cash flow, the improved outlook in terms of the business. We think we can get to break even by the end of the year. As it relates to pre-PRQ reserves in the fourth quarter, we're likely to have some, but it should be a pretty good quarter-over-quarter improvement from the third quarter, which was obviously a good quarter-over-quarter improvement from the second quarter.
spk03: Srini, thanks for the questions. Jonathan, I think we have time for one last caller, please.
spk10: Certainly. And our final question for today then comes from the line of Aaron Rakers from Wells Fargo. Your question, please.
spk12: Yeah, thanks for taking the question. And I do have a quick follow-up as well. Just kind of going back to the gross margin a little bit, you know, Dave, when you guided this quarter, you talked about, just looking backwards, the pre-PRQ impact being about 250 basis points. I think there was also an underload impact that I think you guided to around 300 basis points. So I'm just curious, what were those numbers in this most recent quarter, relative to how we should frame the expectation going forward?
spk15: Yeah, they were largely as expected, although it was off of a lower revenue number. So the absolute dollars were as expected. They had a little bit of a less of an impact given the revenue was higher. And both of those numbers, like I said, will be lower in the third quarter. Aaron, do you have a quick follow-up?
spk12: I do just real quickly on just kind of the AI narrative. You know, we talk about Gaudi a lot in the pipeline build out. I'm curious as you look forward, you know, as part of that pipeline, you know, Pat, do you expect to see a deployment in some of the hyperscale cloud guys and competing against, you know, directly, you know, some of the large competitors on the GPU front with Gaudi in cloud?
spk06: Simple answer: yes. And everyone is looking for alternatives. Clearly, the MLPerf numbers that we posted recently with Gaudi 2 show very competitive results and significant TCO benefits for customers. They're looking for alternatives. They're also looking for more capacity, and so we're definitely engaged. We already have Gaudi instances available on AWS today, and some of the names that we described in our earnings calls, Stability AI, Genesis Cloud, so some of these are the proven, I'll say at-scale, tier-one cloud providers, but some of the next-generation ones are also engaging. Overall, absolutely, we expect that to be the case. We're also making it easier, on our own dev cloud, for customers to test Gaudi more quickly, and with that, we now have 1,000 customers taking advantage of the Intel Developer Cloud. We're building 1,000-node Gaudi clusters so that they can test at scale in very large training environments. So overall, the simple answer is yes, very much so, and we're seeing a good pipeline of those opportunities. So with that, let me just wrap up our time together today. Thank you. We're grateful that you would join us today, and we're thankful that we have the opportunity to update you on our business. Simply put, it was a very good quarter. We exceeded expectations on the top line and the bottom line, we raised guidance, and we look forward to the continued opportunities we have to accelerate our business and see the margin improvement that comes in the second half of the year. But even more important to me were the operational improvements that we saw: good fiscal discipline, cost-saving discipline, and best of all, the progress that we've made on our execution, our process execution, our product execution, and the transformational journey that we're in. I just want to say a big thank you to my team for having a very good quarter that we could tell you about today. We look forward to talking to you more, particularly at our Innovation event in September. We'll be hosting an investor Q&A track, and we hope to see many, if not all, of you there. It'll be a great time. Thank you.
spk10: Thank you, ladies and gentlemen, for your participation in today's conference. This does conclude the program. You may now disconnect. Good day.
spk06: Yeah. Yeah. Thank you, Srini. And the simple answer is yes. And I have, you know, multiple ways to play in this market. Obviously, one of those is founder customers. You know, we have a good pipeline of foundry customers for 18A foundry opportunities. And several of those opportunities that we're investigating are exactly what you described. You know, people looking to do their own unique versions of their AI accelerator components. And we're engaging with a number of those. But some of those are going to be variations of Intel standard products. And this is where the IDM 2.0 strength really comes to play where they could be using some of our silicon, combining it with some of their silicon designs. And given our advanced packaging strength, that gives us another way to be participating in those areas. And of course, that reinforces some of the near-term opportunities will just be packaging, right? Where they already have designed with one of the other foundry, but we're going to be able to augment their capacity opportunities with immediately being able to engage with packaging opportunities, and we're seeing pipeline of those opportunities. So overall, we agree that this is clearly going to be a market. We also see that some of the ones that you've seen most in the press are about particularly high-end training environments. But as you said, we see AI being infused in everything. And there's going to be AI trips for the edge, AI trips chips for the communications infrastructure, AI chips for sensing devices, for automotive devices. And, you know, we see opportunities for us both as a product provider and as a foundry and technology provider across that spectrum. And that's part of the unique positioning that IDM 2.0 gives us for the future.
spk03: Srini, do you have a follow up question?
spk01: Yeah, it's for Dave. Dave, it's good to see the progress on the working capital front. I think previously you said your expectation is that, you know, free cash flow would turn positive sometime in second half. Just curious if that's still the expectation. Also, on the gross margin front, is there any, I guess, you know, PRQ charges that we should be aware of as we go into fourth quarter? Thank you.
spk15: Okay, so let me just take a moment to give the team credit on the second quarter in terms of working capital, because we brought inventory down by a billion dollars. Our days sales outstanding on the AR front is down to 24 days, which is exceptional. So a lot of what you saw in terms of the improving free cash flow from Q1 to Q2 was working capital. So I think the team's done an outstanding job just really focusing on all the elements that drive free cash flow. Our expectation is still to get to break-even free cash flow by the end of the year. There's no reason why we shouldn't achieve that. Obviously, the net capex might be a little different this year than we thought coming into the year. But as we talked about, it's the focus on free cash flow and the improved outlook in terms of the business; we think we can get to break-even by the end of the year. As it relates to pre-PRQ reserves in the fourth quarter, we're likely to have some, but it should be a pretty good quarter-over-quarter improvement from the third quarter, which was obviously a good quarter-over-quarter improvement from the second quarter.
spk03: Srini, thanks for the questions. Jonathan, I think we have time for one last caller, please.
spk10: Certainly. And our final question for today then comes from the line of Aaron Rakers from Wells Fargo. Your question, please.
spk12: Yeah, thanks for taking the question, and I do have a quick follow-up as well. Just kind of going back to the gross margin a little bit, you know, Dave, when you guided this quarter, you talked about, just looking backwards, the PRQ impact being about 250 basis points. I think there was also an underload impact that you guided to around 300 basis points. So I'm just curious, what were those numbers in this most recent quarter, relative to how we should frame the expectation going forward?
spk15: Yeah, they were largely as expected in absolute dollar terms, although, you know, the revenue came in higher, so they had a little bit less of an impact as a percentage. And both of those numbers, like I said, will be lower in the third quarter. Aaron, do you have a quick follow-up?
spk12: I do, just real quickly, on kind of the AI narrative. You know, we talk about Gaudi a lot in the pipeline build-out. I'm curious, as you look forward, as part of that pipeline, Pat, do you expect to see a deployment with some of the hyperscale cloud guys, competing directly against, you know, some of the large competitors on the GPU front with Gaudi in the cloud?
spk06: Simple answer: yes. Right? And, you know, everyone is looking for alternatives. You know, clearly the MLPerf numbers that we posted recently with Gaudi 2 show very competitive numbers, you know, significant TCO benefits for customers. They're looking for alternatives. They're also looking for more capacity, and so we're definitely engaged. You know, we already have Gaudi instances available on AWS today. And some of the names that we described in our earnings calls, Stability AI, Genesis Cloud, you know, so some of these are the proven, I'll say at-scale, tier-one cloud providers, but some of the next-generation ones are also engaging. Overall, absolutely, we expect that to be the case. We're also making it easier for customers to test Gaudi more quickly on our own dev cloud. And with that, we now have 1,000 customers who are taking advantage of the Intel Developer Cloud. We're building 1,000-node Gaudi clusters so that they can test at scale in a very large training environment. So overall, you know, the simple answer is yes, very much so. And we're seeing a good pipeline of those opportunities. So with that, let me just wrap up our time together today. Thank you. We're grateful that you would join us today, and we're thankful that we have the opportunity to update you on our business. And simply put, it was a very good quarter. We exceeded expectations on the top line and the bottom line. We raised guidance, and we look forward to the continued opportunities that we have to accelerate our business and to the margin improvement that comes in the second half of the year. But even more important to me were the operational improvements that we saw: good fiscal discipline, cost-saving discipline, and best of all, the progress that we've made, right, on our execution, our process execution, product execution, the transformational journey, you know, that we're in. And, you know, I just want to say a big thank you to my team for having a very good quarter that we could tell you about today. We look forward to talking to you more, you know, particularly at our Innovation event in September. You know, we'll be hosting an investor Q&A track, and we hope to see many, if not all, of you there. It'll be a great time. Thank you.
spk10: Thank you, ladies and gentlemen, for your participation in today's conference. This does conclude the program. You may now disconnect. Good day.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.