This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
spk04: My name is Martin Viecha, VP of Investor Relations, and I'm joined today by Elon Musk, Vaibhav Taneja, and a number of other executives. Our Q1 results were announced at about 3 p.m. Central Time in the update deck we published at the same link as this webcast. During this call, we will discuss our business outlook and make forward-looking statements. These comments are based on our predictions and expectations as of today. Actual events and results could differ materially due to a number of risks and uncertainties, including those mentioned in our most recent filings with the SEC. During the question and answer portion of today's call, please limit yourself to one question and one follow-up. Please use the raise hand button to join the question queue. But before we jump into Q&A, Elon has some opening remarks.
spk03: Elon? Thanks, Martin. So to recap, in Q1 we navigated several unforeseen challenges as well as the ramp of the updated Model 3 in Fremont. As people have seen, the EV adoption rate globally is under pressure, and a lot of other auto manufacturers are pulling back on EVs and pursuing plug-in hybrids instead. We believe this is not the right strategy and electric vehicles will ultimately dominate the market. Despite these challenges, the Tesla team did a great job executing in a tough environment, and energy storage deployments, the Megapack in particular, reached an all-time high in Q1, leading to record profitability for the energy business. And that looks likely to continue to increase in the quarters and years ahead. It will increase. We actually know that it will, significantly faster than the car business, as we expected. We also continued to expand our AI training capacity in Q1, more than doubling our training compute sequentially. In terms of the new product roadmap, there's been a lot of talk about our upcoming vehicle line in the past several weeks. We've updated our future vehicle lineup to accelerate the launch of new models ahead of the previously mentioned start of production in the second half of 2025. So we expect it to be more like early 2025, if not late this year. These new vehicles, including more affordable models, will use aspects of the next-generation platform as well as aspects of our current platforms and will be able to be produced on the same manufacturing lines as our current vehicle lineup. So it's not contingent on any new factory or massive new production line. It'll be made on our current production lines much more efficiently. And we think this should allow us to get to over 3 million vehicles of capacity when realized to the full extent. Regarding FSD version 12, which is the pure AI-based self-driving: if you haven't experienced this, I strongly urge you to try it out. It's profound. And the rate of improvement is rapid. We've now turned that on for all cars with the cameras and inference computer, everything from Hardware 3 on, in North America. So it's been pushed out to around 1.8 million vehicles, and we're seeing about half of people use it so far. And that percentage is increasing with each passing week. We now have over 300 million miles that have been driven with FSD V12. Since the launch of supervised full self-driving, it's become very clear that the vision-based approach with end-to-end neural networks is the right solution for scalable autonomy. It's really how humans drive. Our entire road network is designed for biological neural nets and eyes. So naturally, cameras and digital neural nets are the solution to our current road system. To make it more accessible, we've reduced the subscription price to $99 a month, so it's easy to try out. And as we've announced, we'll be showcasing our purpose-built robotaxi, or Cybercab, in August. Regarding AI compute, over the past few months we've been actively working on expanding Tesla's core AI infrastructure. For a while there, we were training-constrained in our progress. We are, at this point, no longer training-constrained, and so we're making rapid progress. We've installed and commissioned, meaning they're actually working, 35,000 H100 computers, or GPUs. GPU is the wrong word; they need a new word. I always kind of wince when I say GPU, because GPU stands for graphics, and it doesn't do graphics.
But anyway, roughly 35,000 H100s are active, and we expect that to be probably 85,000 or thereabouts by the end of this year, just for training. We are making sure that we're being as efficient as possible in our training. It's not just about the number of H100s, but how efficiently they're used. So in conclusion, we're super excited about our autonomy roadmap. I think it should be obvious to anyone who is driving version 12 in a Tesla that it is only a matter of time before we exceed the reliability of humans, and not much time at that. And we're really headed for an electric vehicle, autonomous future. And I'll go back to something I said several years ago: in the future, gasoline cars that are not autonomous will be like riding a horse and using a flip phone. And that will become very obvious in hindsight. We continue to make the necessary investments that will drive growth and profits for Tesla in the future. And I wanted to thank the Tesla team for incredible execution during this period, and I look forward to everything that we have planned ahead. Thanks.
spk04: Thank you very much. And Vaibhav has some comments as well.
spk01: Thanks. You know, it's important to acknowledge what Elon said. From our auto business perspective, we did see a decline in revenues quarter over quarter, and that was primarily because of seasonality, an uncertain macroeconomic environment, and the other reasons which Elon mentioned earlier. Auto margins declined from 18.9% to 18.5%, excluding the impact of Cybertruck. The impact of pricing actions was largely offset by reductions in per-unit cost and the recognition of revenue from the Autopark feature for certain vehicles in the US that previously did not have that functionality. Additionally, while we did experience higher costs due to the ramp of Model 3 in Fremont and disruptions in Berlin, these costs were largely offset by cost reduction initiatives. In fact, if we exclude Cybertruck and Fremont Model 3 ramp costs, our auto margins improved slightly. Currently, normalized Model Y cost per vehicle in Austin and Berlin is already very close to that of Fremont. Our ability to reduce costs without sacrificing quality was due to the amazing efforts of the team in executing Tesla's relentless pursuit of efficiency across the business. We've also witnessed that as other OEMs pull back on their investments in EVs, there is increasing appetite for credits, and that means a steady stream of revenue for us. Obviously, seeing others pull back from EVs is not the future we want. We would prefer if the whole industry went all in. On the demand front, we have undertaken a variety of initiatives, including lowering the price of both the purchase and subscription options for FSD, launching extremely attractive leasing specials for the Model 3 in the US at $299 a month, and offering attractive financing options in certain markets. We believe that our awareness activities paired with attractive financing will go a long way in expanding our reach and driving demand for our products. Our energy business continues to make meaningful progress, with margins reaching a record 24.6%. We expect energy storage deployments for 2024 to grow at least 75% higher than 2023, and accordingly this business will begin contributing significantly to our overall profitability. Note that there is a bit of lumpiness in our storage deployments due to a variety of factors that are outside of our control, so deployments may fluctuate quarter over quarter. On the operating expense front, we saw a sequential increase from our AI initiatives, continued investment in future projects, marketing, and other activities. We had negative free cash flow of $2.5 billion in the first quarter. The primary drivers were an increase in inventory from a mismatch between builds and deliveries, as discussed before, and our elevated spend on CapEx across various initiatives, including AI compute. We expect the inventory build to reverse in the second quarter and free cash flow to return to positive again. As we prepare the company for the next phase of growth, we had to make the hard but necessary decision to reduce our headcount by over 10%. The savings generated are expected to be well in excess of $1 billion on an annual basis. We are also getting hyper-focused on CapEx efficiency and utilizing our installed capacity in a more efficient manner. The savings from these initiatives, including our cost reductions, will help improve our overall profitability and ultimately enable us to increase the scale of our investments in AI.
In conclusion, the future is extremely bright and the journey to get there while challenging will be extremely rewarding. Once again, I would like to thank the whole Tesla team for delivering great results, and we can open it up to Q&A. Thank you.
spk04: Okay, let's start with investor Q&A. The first question is, what is the status of 4680? What is the current output? Lars?
spk02: Sure. 4680 production increased about 18% to 20% from Q4, reaching greater than 1K a week for Cybertruck, which is about 7 gigawatt hours per year, as we posted on X. We expect to stay ahead of the Cybertruck ramp with cell production throughout Q2 as we ramp the third of four lines in phase one, while maintaining multiple weeks of cell inventory to make sure we're ahead of the ramp. Because we're ramping, COGS continues to drop rapidly week over week, driven by yield improvements throughout the lines and production volume increases. So our goal, and we expect to do this, is to beat supplier costs of nickel-based cells by the end of the year.
spk04: Thank you. The second question is on Optimus. So what is the current status of Optimus? Are they currently performing any factory tasks? When do you expect to start mass production?
spk03: We are able to do simple factory tasks, or at least I should say factory tasks in the lab. We do think we will have Optimus in limited production in the factory, in the actual factory itself, doing useful tasks before the end of this year. And then I think we may be able to sell it externally by the end of next year. These are just guesses. As I've said before, I think Optimus will be more valuable than everything else combined, because if you've got a sentient humanoid robot that is able to navigate reality and do tasks on request, there is no meaningful limit to the size of the economy. So that's what's going to happen. And I think Tesla is best positioned of any humanoid robot maker to be able to reach volume production with efficient inference on the robot itself. I mean, this perhaps is a point that is worth emphasizing: Tesla's AI inference efficiency is vastly better than any other company's. There's no company even close to the inference efficiency of Tesla. We've had to do that because we were constrained by the inference hardware in the car. We didn't have a choice. But that will pay dividends in many ways.
spk04: Thank you. The third question is, what is Tesla's current assessment of the pathway towards regulatory approval for unsupervised FSD in the US? And how should we think about the appropriate safety threshold compared to human drivers? Sure.
spk02: There are a handful of states that already have adopted autonomous vehicle laws. These states are paving the way for operations, while the data for such operations guides a broader adoption of driverless vehicles. I think Ashok can talk a little bit about our safety methodology, but we expect that these states and the work ongoing, as well as the data that we're providing, will pave the way for a broad-based regulatory approval in the U.S. at least, and then other countries as well.
spk03: Yeah, it's actually been pretty helpful that other autonomous car companies have been cutting a path through the regulatory jungle. That's actually quite helpful. They have obviously been operating in San Francisco for a while, and I think they got approval for the City of LA. So these approvals are happening rapidly. I think if you've got, at scale, a statistically significant amount of data that shows conclusively that the autonomous car has, let's say, half the accident rate of a human-driven car, that's difficult to ignore, because at that point stopping autonomy means killing people. So I actually do not think that there will be significant regulatory barriers, provided there is conclusive data that the autonomous car is safer than a human-driven car. And in my view this will be much like elevators. Elevators used to be operated by a guy with a relay switch, but sometimes that guy would get tired or drunk or just make a mistake and shear somebody in half between floors. Now we just get in an elevator and press a button, and we don't think about it. In fact, it's kind of weird if somebody's standing there with a relay switch. That'll be how cars work: you just summon a car using your phone, you get in, it takes you to your destination, you get out.
spk08: You don't even think about it.
spk03: You don't even think about it. Just like an elevator. It takes you to your floor. That's it. You don't think about how the elevator is working or anything like that. And something I should clarify is that Tesla will be operating the fleet. So you can think of Tesla like some combination of Airbnb and Uber, meaning that there'll be some number of cars that Tesla owns itself and operates in the fleet, and then there'll be a bunch of cars that are owned by the end user. But that end user can add or subtract their car to the fleet whenever they want, and they can decide if they want to only let the car be used by friends and family, or only by five-star users, or by anyone, at any time. They could have the car come back to them and be exclusively theirs, like an Airbnb where you could rent out your guest room or not, anytime you want. So as our fleet grows, we have 7 million cars, going to 9 million cars, going to, eventually, tens of millions of cars worldwide, with a constant feedback loop. Every time something goes wrong, that gets added to the training data, and you get this training flywheel happening, in the same way that Google Search has that sort of flywheel. It's very difficult to compete with Google because people are constantly doing searches and clicking, and Google's getting that feedback loop. It's the same with Tesla, but at a scale that is maybe difficult to comprehend, but ultimately it'd be tens of millions. I think there's also some potential here for an AWS element down the road, where we've got very powerful inference, because we've got Hardware 3 in the cars, but now all cars are being made with Hardware 4, and Hardware 5 is pretty much designed and should be in cars hopefully towards the end of next year. And I think there's a potential, when the car is not moving, to actually run distributed inference. So kind of like AWS, but with distributed inference. It takes a lot of computers to train an AI model, but many orders of magnitude less compute to run it. So you can imagine a future where there's a fleet of 100 million Teslas, and on average they've got maybe a kilowatt of inference compute. That's 100 gigawatts of inference compute distributed all around the world. It's pretty hard to put together 100 gigawatts of AI compute. And even in an autonomous future where the car is used, instead of 10 hours a week, 50 hours a week, that still leaves over 100 hours a week where the car's inference computer could be doing something else, and it seems like it would be a waste not to use it.
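To make the fleet-compute arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python using the figures mentioned on the call (roughly 1 kW of inference compute per car, a fleet eventually reaching 100 million vehicles, and a guessed 50 hours a week of robotaxi use). It is illustrative only, not a projection.

```python
# Back-of-the-envelope estimate of idle fleet inference capacity, using
# the figures mentioned on the call. Illustrative only, not a projection.

FLEET_SIZE = 100_000_000         # cars, the long-term figure mentioned
INFERENCE_POWER_KW = 1.0         # ~1 kW of inference compute per car
HOURS_PER_WEEK = 168
DRIVING_HOURS_PER_WEEK = 50      # robotaxi utilization guess from the call

idle_hours = HOURS_PER_WEEK - DRIVING_HOURS_PER_WEEK            # ~118 h/week
total_compute_gw = FLEET_SIZE * INFERENCE_POWER_KW / 1_000_000  # kW -> GW

print(f"Aggregate inference compute: {total_compute_gw:.0f} GW")
print(f"Idle hours per car per week: {idle_hours}")
```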
spk04: Ashok, do you want to chime in on the process and safety?
spk08: Yeah, we have spent a couple of years now evaluating the safety. In any given week, we train hundreds of neural networks that can produce different trajectories for how to drive the car. We replay them through the millions of clips that we have already collected from our users and our own QA. Those are critical events, like someone jumping out in front, or other critical events that we have gathered into a database over many, many years, and we replay through all of them to make sure that we are net improving safety. On top of that, we have simulation systems that also try to recreate this and test it in closed-loop fashion. Once all of this is evaluated, we give it to our own QA drivers. We have hundreds of them in different cities: San Francisco, Los Angeles, Austin, New York, a lot of different locations. They are also driving this and collecting real-world miles, and we have an estimate of what the critical events are and whether they are a net improvement compared to the previous week's builds. Once we have confidence that the build is a net improvement, then we start shipping to early users, like 2,000 employees initially, who get the build. They give feedback on whether it's an improvement or whether they are noticing new issues that we did not capture in our own QA process. Only after all of this is validated do we go to external customers. Even when we go external, we have live dashboards monitoring every critical event that's happening in the fleet, sorted by criticality. So we have a constant pulse on the build's quality and the safety improvement along the way. And then any failures, like Elon alluded to, we get the data back, add it to the training, and that improves the model in the next cycle. So we have this constant feedback loop of issues, fixes, evaluations, and then rinse and repeat. And especially with the new end-to-end architecture, all of this is automatically improving without requiring much engineering intervention, in the sense that engineers don't have to be creative in how they code the algorithms. It's mostly learning on its own based on data. When we see a failure, it's like, this is how a person drives, this is how you drive this intersection, or something like that. We get the data back, we add it to the neural network, and it learns from that training data automatically, instead of some engineer saying that here you must rotate the steering wheel by this much, or something like that. There are no hard-coded conditions; everything is neural network. It's very soft, it's probabilistic, so it will adapt its probability distribution based on the new data that it's getting.
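As a rough illustration of the staged evaluation-and-rollout loop described above, the following Python sketch scores a candidate build against a library of recorded critical-event clips and only widens the rollout if it is a net improvement over the previous build. All class names, functions, thresholds, and stage names are hypothetical; this is not Tesla's actual pipeline.

```python
# Hypothetical sketch of the staged build-evaluation loop described above:
# score each candidate build against a library of recorded critical-event
# clips, and only widen the rollout while it remains a net improvement over
# the previous week's build. All names and data here are illustrative.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Clip:
    miles: float
    is_handled: Callable[[str], bool]  # does the named build handle this critical event?

def critical_events_per_million_miles(build: str, clips: List[Clip]) -> float:
    failures = sum(1 for c in clips if not c.is_handled(build))
    miles = sum(c.miles for c in clips)
    return failures / miles * 1e6

def staged_rollout(candidate: str, baseline: str, clips: List[Clip],
                   stages: List[str]) -> str:
    # Gate 1: replay through the critical-event library (plus, in practice,
    # closed-loop simulation) before any wider audience sees the build.
    if critical_events_per_million_miles(candidate, clips) >= \
       critical_events_per_million_miles(baseline, clips):
        return "rejected: not a net safety improvement"
    # Gate 2: progressively larger groups (e.g. QA drivers, ~2,000 employees,
    # external customers), with live dashboards monitored at each step
    # (monitoring is omitted in this sketch).
    for stage in stages:
        print(f"shipping {candidate} to {stage}")
    return "released"

# Toy usage with synthetic clips.
clips = [Clip(miles=1.0, is_handled=lambda b, i=i: b == "v12.4" or i % 3 != 0)
         for i in range(9)]
print(staged_rollout("v12.4", "v12.3", clips,
                     ["QA drivers", "employees", "external customers"]))
```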
spk03: Yeah, and we do have some insight into how good things will be in, let's say, three or four months, because we have advanced models that are far more capable than what is in the car, but that have some issues we need to fix. So there'll be a step-change improvement in the capabilities of the car, but it'll have some quirks that need to be addressed in order to release it. As Ashok was saying, we have to be very careful in what we release to the fleet or to customers in general. So if we look at, say, 12.4 and 12.5, those could arguably even be called version 13 and version 14, because it's pretty close to a total retrain of the neural nets in each case, or substantially different. So we have good insight into how well the car will perform in, say, three or four months.
spk08: Scaling laws: people in the AI community generally talk about model scaling laws, where they increase the model size a lot and then they get corresponding gains in performance. But we have also figured out scaling laws along other axes. In addition to model size scaling, we can also do data scaling: you can increase the amount of data you use to train the network, and that also gives similar gains. And you can also scale up training compute: you can train it for much longer and use more GPUs or more Dojo nodes, and that also gives better performance. You can also have architecture scaling, where you come up with better architectures that, for the same amount of compute, produce better results. With a combination of model size scaling, data scaling, training compute scaling, and architecture scaling, if you continue scaling along these axes, you can sort of predict future performance. Obviously, it takes time to do the experiments, because it takes a few weeks to train and a few weeks to collect tens of millions of video clips and process all of them. But you can estimate what the future progress is going to be based on the trends that we have seen in the past. And that has generally held true based on the past data.
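For readers unfamiliar with the kind of extrapolation described here, the following is a minimal Python illustration of a scaling-law fit: fit a power law, error roughly a * compute^(-b), to a handful of observed (compute, error) points and project performance at larger compute. The numbers are synthetic, purely to show the mechanics; they are not Tesla data.

```python
# Illustrative scaling-law extrapolation of the kind described above:
# fit a power law, error ~= a * compute**(-b), to synthetic data points
# and project performance at larger training compute. Numbers are made
# up purely to show the mechanics; they are not Tesla data.

import numpy as np

# Synthetic (training compute, error-rate) observations.
compute = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # arbitrary units
error   = np.array([0.100, 0.074, 0.055, 0.041, 0.030])

# A power law is a straight line in log-log space:
# log(error) = log(a) - b * log(compute)
slope, log_a = np.polyfit(np.log(compute), np.log(error), 1)
a, b = np.exp(log_a), -slope

for future_compute in (32.0, 64.0, 128.0):
    predicted = a * future_compute ** (-b)
    print(f"compute {future_compute:6.1f} -> predicted error {predicted:.3f}")
```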
spk04: Okay, thank you very much. Let's go to the next question, which is, can we get an official announcement of the timeline for the $25,000 vehicle?
spk02: I think Elon mentioned it in the opening remarks, but as you mentioned, we're updating our future vehicle lineup to accelerate the launch of our low-cost vehicles in a more CapEx-efficient way. That's our mission: to get the most affordable cars to customers as fast as possible. These new vehicles will be built on our existing lines, using open capacity, and that's a major shift, to utilize all our capacity with marginal CapEx before we go spend high CapEx to do anything.
spk03: Yeah, we'll talk about this more on August 8th. But really, the way to think of Tesla is almost entirely in terms of solving autonomy and being able to turn on that autonomy for a gigantic fleet. And I think it might be the biggest asset value appreciation in history when that happens, when you can do unsupervised full self-driving. Five million cars? Yeah, a little less, some of those have more. You know, it'll be seven million cars in a year or so, and then 10 million, and then eventually we're talking about tens of millions of cars. Not even eventually; it's like, before the end of this decade, several tens of millions of cars, I think.
spk04: Thank you. The next question is, what is the progress of the Cybertruck ramp?
spk02: I can take that one too. Cybertruck hit 1K a week just a couple of weeks ago. This happened in the first four to five months since start of production late last year. Of course, volume production is what matters; that's what drives costs. And so our costs are dropping, but the ramp still faces a lot of challenges, with so many new technologies, some supplier limitations, et cetera. We'll continue to ramp this year, focusing on cost efficiency and quality.
spk04: Okay, thank you. The next question, have any of the legacy automakers contacted Tesla about possibly licensing FSD in the future?
spk03: We're in conversations with one major automaker regarding licensing FSD. Thank you.
spk04: The next question is about the robotaxi unveil. Elon already talked about that, so we'll have to wait until August. The following question is about the next-generation vehicle; we already talked about that. Let's go to the Semi. What is the timeline for scaling the Semi?
spk02: We're finalizing the engineering of the Semi to enable super cost-effective, high-volume production with our learnings from our pilot fleet and Pepsi's fleet, which we're expanding this year marginally. In parallel, as we showed in the shareholder deck, we have started construction on the factory in Reno. Our first vehicles are planned for late 2025, with external customers starting in 2026.
spk04: Okay, a couple more questions. Our favorite: can we make FSD transfer permanent until FSD is fully delivered with Level 5 autonomy? No. Okay, next question. What is gating the production ramp at Lathrop? Where do you see the Megapack run rate at the end of the year? Mike?
spk14: Yeah, Lathrop is ramping as planned. We have our second GA line, allowing us to increase our exit rate from 20 gigawatt hours per year at the start of this year to 40 gigawatt hours per year by the end of the year. That line is commissioned, and there's really nothing limiting the ramp. Given the longer sales cycles for these large projects, we typically have order visibility 12 to 24 months prior to ship date, so we're able to build plans several quarters in advance. This allows us to ramp the factory to align with the business and order growth. Lastly, we'd like to thank our customers globally for their trust in Tesla as a partner for these incredible projects.
spk04: Okay, thank you very much. Let's go to analyst questions. The first question comes from Toni Sacconaghi from Bernstein. Toni, please go ahead and unmute.
spk12: Thank you for taking the question. I was just wondering if you could elaborate a little bit more on the new vehicles that you talked about today. Are these tweaks on existing models, given that they're going to be running on the same lines, or are these new models? And how should we think about them in the context of the Model 3 Highland update? What will these models be like relative to that? And given the quick timeframe, you know, Model 3 Highland required a lot of work and a lot of retooling. Maybe you can help put that all in context. Thank you. And I have a follow-up, please.
spk03: I think we've said all we will on that front. So what's your follow up?
spk12: It's a more personal one for you, Elon, which is that you're leading many important companies right now. Maybe you can just talk about where your heart is at in terms of your interests, and do you expect to lessen your involvement with Tesla at any point over the next three years?
spk03: Well, Tesla constitutes the majority of my work time, and I work pretty much every day of the week. It's rare for me to take a Sunday afternoon off. So, I've got to make sure Tesla is very prosperous, and it is. I think it is prosperous, and it will be very much so in the future.
spk04: Okay, thank you. Let's go to Adam Jonas from Morgan Stanley. Adam, please go ahead and unmute.
spk00: Okay, great. Hey, Elon. So, on volume, you and your team expect a 2024 growth rate notably lower than that achieved in 2023. But what's your team's degree of confidence on growth above 0%? Or in other words, does that statement leave room for potentially lower sales year on year?
spk03: No, I think we'll have higher sales this year than last year.
spk00: Okay, my follow-up, Elon, on future product. If you had nailed execution, assuming that you nail execution on your next-gen, cheaper vehicles, more aggressive gigacastings, I don't want to say one piece, but getting closer to one piece, structural pack, unboxed, 300-mile range, $25,000 price point, putting aside RoboTaxi, those features, unique to you, how long would it take your best Chinese competitors to copy a cheaper and better vehicle that you could offer a couple years from now? How long would it take your best Chinese competitors to copy that? Thanks.
spk03: I mean, I don't know what our competitors could do, except that we've done relatively better than they have. If you look at the drop in our competitors' sales in China versus our drop in sales, our drop was less than theirs. So we're doing well. But, you know, I think Cathie said it best: really, we should be thought of as an AI robotics company. If you value Tesla as just an auto company, it's fundamentally just the wrong framework, and if you ask the wrong question, then the right answer is impossible. I mean, if somebody doesn't believe Tesla is going to solve autonomy, I think they should not be an investor in the company. That is what we will do, and we are doing it. And then you have a car that goes from 10 hours of use a week, like an hour and a half a day, to probably 50. But it costs the same.
spk01: I think that's the key thing to remember, especially if you look at FSD supervised. If you don't believe in autonomy, it should give you a preview that this is coming. It's actually getting better day by day.
spk03: Yeah, if you've not tried FSD 12.3, and like I said, 12.4 is going to be significantly better and 12.5 even better than that, and we have visibility into those things, then you really don't understand what's going on. It's not possible.
spk01: Yeah, and that's why we can't look at it just as a car company, because a car company would just have a car. But here we have more than a car company, because the cars can be autonomous. And like I said, it's happening.
spk08: Yeah, and this is all in addition to Tesla's work. The overall AI community is just improving rapidly.
spk03: Yeah, I mean, we're putting the actual auto in automobile. So it's sort of like being asked, tell us about the future horse carriages you're making, and I'm like, well, actually, it doesn't need a horse. That's the whole point. That's really the whole point.
spk04: Okay, thank you. The next question comes from Alex Potter from Piper Sandler. Alex, please go ahead and unmute.
spk11: Great, thanks. Yeah, so couldn't agree more. The thesis hinges completely on AI, the future of AI, full self-driving, neural net training, all of these things. In that context, Elon, you've spoken about your desire to obtain 25% voting control of the company. And I understand completely why that would be. So I'm not necessarily asking about that. I'm asking if you've come up with any mechanism by which you can ensure that you'll obtain that level of voting control. Because if not, then the core part of the thesis could potentially be at risk. So any additional commentary you might have on that topic?
spk03: Well, I think no matter what, even if I got kidnapped by aliens tomorrow, Tesla will solve autonomy. Maybe a little slower, but it would solve autonomy, for vehicles at least. I don't know if it would win with respect to Optimus or with respect to future products, but there's enough momentum for Tesla to solve autonomy for vehicles even if I disappeared. Now, there's a whole range of things we can do in the future beyond that. I'd be more reticent with respect to Optimus. You know, if we have a super sentient humanoid robot that can follow you indoors and that you can't escape, we're talking Terminator-level risk. Then, yeah, I'd be uncomfortable if there's not some meaningful level of influence over how that is deployed. And, you know, the shareholders have an opportunity to ratify or re-ratify the sort of compensation. I guess I can't say that. But that is a fact, they have an opportunity. Okay. Very good. And, yeah, we'll see. If the company generates a lot of positive cash flow, we can obviously buy back shares.
spk11: All right, that's actually all very helpful context. Thank you. Maybe one final question, then I'll pass it on. OPEX reductions, thank you for quantifying the impact there. I'd be interested also in potentially more qualitative discussion of what the implications are for these headcount reductions. What are the types of activities that you're presumably sacrificing as a result of parting ways with these folks? Thanks very much.
spk01: So, you know, like we said, we've done these headcount reductions across the board, and as companies grow over time, there are certain redundancies, some duplication of efforts, which happens in certain areas. So you need to go back and look at where all these pockets are and get rid of them. We're basically going through that exercise of, hey, how do we set this company right for the next phase of growth? And the way to think about it is, any tree which grows needs pruning. This is the pruning exercise which we went through, and at the end of it we will be much stronger and much more resilient to deal with the future, because the future is really bright, like I said in my opening remarks. We just have to get through this period and get there.
spk03: Yeah, we're not giving up anything significant that I'm aware of. We've had a long period of prosperity from 2019 to now. And if a company organizationally is, say, 5% wrong per year, that accumulates to 25% or 30% inefficiency. We've made some corrections along the way, but it is time to reorganize the company for the next phase of growth. And you really need to reorganize it, just like a human: we start off with one cell, a kind of zygote and blastocyst, and you start growing arms and legs, and briefly you have a tail.
spk02: But you shed the tail.
spk03: You shed the tail, hopefully. And then you're a baby. Basically, you have to be different. A company is kind of like a creature growing, and if you don't reorganize it for different phases of growth, it will fail. You can't have the same organizational structure if you're 10 cells versus 100 versus a million versus a billion versus a trillion. Humans are around 35 trillion cells. It doesn't feel like it, I feel like one person, but you're basically a walking cell colony of roughly 35 trillion cells, depending on your body mass, and about three times that number in bacteria. So anyway, you've got to reorganize the company for a new phase of growth, or it will fail to achieve that growth.
spk04: Thank you. Let's go to Mark Delaney from Goldman Sachs. Mark, please go ahead and unmute.
spk05: Yes, good afternoon. Thanks very much for taking the question. The company had previously characterized potential FSD licensing discussions as in the early phase, and some OEMs had not really been believing in it. Can you elaborate on how much the licensing business opportunity you mentioned today has progressed? And is there anything Tesla needs to achieve with the technology in terms of product milestones in order to be successful at reaching a licensing agreement in your view?
spk03: Well, I think it just needs to be obvious that our approach is the right approach. And I think we're there now with 12.3. If you just have the car drive you around, it is obvious that our solution, with a relatively low-cost inference computer and standard cameras, can achieve self-driving. No LIDARs, no radars, no ultrasonics, nothing.
spk02: Just no heavy integration work for vehicle manufacturers.
spk03: Yeah, so it would really just be a case of having them use the same cameras and inference computer and licensing our software. At some point it becomes obvious that if you don't have this in a car, nobody wants your car; it's not a smart car. I mean, I remember back when Nokia was king of the hill, crushing it, and then suddenly the smartphone came out, which was basically a brick with limited functionality, and then the iPhone and Android. But people still did not understand that all phones were going to be that way. There aren't going to be any flip phones, or they'll be a niche product. Or home phones. Yeah, not even. Exactly, because when was the last time you saw one?
spk02: I have no idea. In a hotel, sometimes in a hotel.
spk03: Yeah, the hotels have them. So people don't understand that all cars will need to be smart cars, or you will not sell the car; nobody will buy it. Once that becomes obvious, I think licensing becomes not optional.
spk02: It becomes a method of survival.
spk03: Yeah, it's license it or nobody will buy your car.
spk01: I mean, one other thing which I'll add is in the conversations which we've had with some of these OEMs, I just want to also point out that they take a lot of time in their product life cycle.
spk06: Yeah.
spk01: They're talking about years before they will put it in their product. We might have a licensing deal earlier than that, but it takes a while. So this is where the big difference between us and them is.
spk03: Yeah, I mean, really, a deal signed now would result in it being in a car probably three years from now.
spk02: That would be early.
spk03: Yeah, that's like lightning, basically.
spk02: That's an eager OEM.
spk03: Yeah. So I wouldn't be surprised if we do sign a deal. I think we have a good chance of signing a deal this year, maybe more than one. But yeah, it would probably be three years before it's integrated into a car, even though all you need is cameras and our inference computer. So it's not a massive design change.
spk01: Yeah, and again, just to clarify, it's not the work which we have to do, it's the work which they have to do that takes the time.
spk02: Mark, do you have a follow-up?
spk05: Yeah, very helpful. Thank you. My follow-up was to better understand Tesla's approach to pricing going forward. Previously, the company had said that the price reductions were driving incremental demand with how affordable the cars have become, especially for vehicles that have access to IRA credits and some of the leasing offers that Tesla has in place. Do you still see meaningful incremental price reductions as making sense from here for the existing products? And can the company meaningfully lower prices from here and also stay free cash flow positive on an annual basis with the current product set? Thanks.
spk03: Yeah, I think we can be very cash flow positive, meaningfully. Yeah.
spk02: I think Vaibhav said it in his opening remarks, like our cost down efforts, we basically were offsetting the price cut. That's not our goal. We're trying to give it back to the customers.
spk03: Yeah. I mean, at the end of the day, for any given company, if you have a great product at a great price, the sales will be excellent. That's true of any arena. Over time, we do need to keep making sure that it's a great product at a great price, and moreover, that that price is accessible to people. You have to solve both the value-for-money and the fundamental affordability question. The fundamental affordability question is sometimes overlooked. If somebody is earning several hundred thousand dollars a year, they don't think of a car from a fundamental affordability standpoint. But the vast majority of people are living paycheck to paycheck. So it actually makes a difference if the cost per month for lease or financing is $10 one way or the other. So it is important to keep improving affordability, to make the price more accessible, the value for money better, and to keep improving that over time.
spk02: But also to make kick-ass cars that people want to buy.
spk03: Yeah, it's got to be a great product at a great price. And the standards for what constitutes a great product at a great price keep increasing. So you can't just be static. You have to keep making the car better, improving the price, improving the cost of production. And that's what we're doing.
spk01: Yeah, and in fact, like I said in my opening remarks, the revised, updated Model 3 is a fantastic car. I don't think people fully understand the amount of engineering effort which has gone into it. Lars and team have actually put out videos explaining how much the car is different and how it looks and feels different. Not only does it look and feel different, we've added so much value to it, but you can lease it for as low as $299 a month. Yeah.
spk04: Without gas. Yeah. All right. The next question comes from George from Canaccord. George, please go ahead and unmute.
spk09: Hi. Thank you for taking my question. First, can you please help us understand some of the timing of launching FSD in additional geographies, including maybe clarifying your recent comment about China? Thank you.
spk03: I mean, like new markets, yeah. There are a bunch of markets where we don't currently sell cars that we should be selling cars in. We'll see some acceleration of that.
spk04: And FSD in new markets?
spk03: Yeah, so the thing about end-to-end neural net-based autonomy is that, just like a human, it actually works pretty well without modification in almost any market. So we plan, with the approval of regulators, on releasing it as a supervised autonomy system in any market where we can get regulatory approval for that, which we think includes China. So yeah, just like a human, you can go rent a car in a foreign country and you can drive pretty well. Obviously, if you live in that country, you'll drive better, and so we'll make the car drive better in these other countries with country-specific training. But it can drive quite well almost everywhere.
spk08: The basics of driving are basically the same everywhere. A car is a car, a traffic light is a traffic light, roads are roads. Yeah.
spk03: It understands that it shouldn't hit things no matter where it is.
spk08: Exactly. There are some road rules that you need to follow. In China, you shouldn't cross over a solid line to do a lane change. In the U.S. it's a recommendation; in China, you get fined heavily if you do that. We have to make some adjustments, but they're mostly smaller adjustments, not an entire change of stack or something like that.
spk04: Yeah.
spk04: Hey, George, do you have a follow-up?
spk09: So my follow-up has to do with the first quarter. I'm curious as to whether or not you feel that the supply constraints you mentioned throughout the release impacted the results. And maybe can you help us quantify that? And is that why you have some confidence in unit growth in 2024?
spk01: Yeah, I think we did cover this a little bit in the opening remarks too. Q1 had a lot of different things happening. Seasonality was a big one, plus continued pressure from the macroeconomic environment. We had the arson attack at our factory in Berlin. We had the Red Sea attacks. We were ramping Model 3. We were ramping Cybertruck. All these things were happening. I mean, it almost feels like a culmination of all those activities in a constrained period. And that gives us the confidence that, hey, we don't expect these things to recur.
spk03: Yeah, we think Q2 will be a lot better. It's just one thing after another. Yeah, exactly. If you've got cars that are sitting on ships, they obviously cannot be delivered to people. And if you've got excess demand for Model 3 or Model Y in one market, but you don't have the cars there, it's an extremely complex logistics situation. And I'd say also that we did overcomplicate the sales process, which we've just, in the past week or so, greatly simplified. It just became far too complex to buy a Tesla, whereas it should be that you can buy the car in under a minute. So we're getting back to the you-can-buy-a-Tesla-in-under-a-minute interface from what was quite complex.
spk04: Okay, thank you. Let's go to Colin Rush from Oppenheimer. Colin, go ahead and unmute, please.
spk07: Thanks so much, guys. Given the pursuit of Tesla really as a leader in AI for the physical world, and your comments around distributed inference, can you talk about what that approach is unlocking beyond what's happening in the vehicle right now?
spk08: Do you want to say something? Yeah, you know, as mentioned, the car, even when it's a full robotaxi, is probably going to be used for 50 hours a week. That's a guess, like a third of the hours of the week. It could be more or less, but then there are certainly going to be some hours left for charging and cleaning and maintenance. In that world, you can do a lot of other workloads. Even right now, we are seeing, for example, that these LLM companies have batch workloads where they send a bunch of documents, and those are run through pretty large neural networks. It takes a lot of compute to churn through those workloads. And now that we have already paid for this compute in these cars, it might be wise to use them and not let them be idle. It would be like buying a lot of expensive machinery and letting it sit idle. We don't want that. We want to use the compute as much as possible, close to basically 100% of the time, and make effective use of it.
spk03: I think it's analogous to Amazon Web Services, where people didn't expect that AWS would be the most valuable part of Amazon when it started out as a bookstore. That was on nobody's radar. But they found that they had excess compute, because their compute needs would spike to extreme levels for brief periods of the year, and then they had idle compute for the rest of the year. So then the question was what to do with all that excess compute for the rest of the year, and that's what they ended up monetizing. So it seems like kind of a no-brainer to say, okay, if we've got millions, and then tens of millions, of vehicles out there where the computers are idle most of the time, we might as well have them do something useful.
spk08: Exactly.
spk03: And then, I mean, if you get to the 100 million vehicle level, which I think we will at some point, and you've got a kilowatt of usable compute, and maybe you're on Hardware 6 or 7 by that time, then I think you could have on the order of 100 gigawatts of usable compute, which might be more than anyone, probably more than any company.
spk08: Probably, because it takes a lot of intelligence to drive the car anyway. And when it's not driving the car, you just put this intelligence to other uses, just solving scientific problems. Just like a human. Or answering dumb questions, of course.
spk04: We've already learned a lot about deploying workloads to these compute nodes.
spk08: Yeah. And unlike laptops and cell phones, it is totally under Tesla's control. So it's easier to distribute the workload across different nodes, as opposed to asking users for permission on their own cell phones, which would be very tedious. Plus you would just drain the battery. Yeah, exactly, the batteries also.
spk03: So technically, I suppose Apple would have the most distributed compute, but you can't use it, because you can't just run the phone at full power and drain the battery. Whereas for the car, even if you've got a kilowatt-level inference computer, which is crazy power compared to a phone, if you've got a 50 or 60 kilowatt-hour pack, it's still not a big deal to run, whether you're plugged in or not. You could run for 10 hours and use 10 kilowatt-hours with a kilowatt of compute. And there's already built-in liquid-cooled thermal management, which is exactly what you need for data centers, and it's sort of already there in the car. So it's distributed power generation, distributed access to power, and distributed cooling. And it's already paid for.
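The per-car energy budget implied by this exchange is easy to sanity-check; the short Python sketch below uses the figures mentioned above (a roughly 1 kW inference computer, 10 idle hours, and a 50 to 60 kWh pack) and is illustrative arithmetic only.

```python
# Quick check of the per-car energy budget discussed above: running a
# ~1 kW inference computer for 10 idle hours uses 10 kWh, a modest
# fraction of a 50-60 kWh pack. Figures are those mentioned on the call.

inference_power_kw = 1.0
idle_hours = 10
pack_kwh_low, pack_kwh_high = 50, 60

energy_used_kwh = inference_power_kw * idle_hours
print(f"Energy used: {energy_used_kwh:.0f} kWh")
print(f"Share of pack: {energy_used_kwh / pack_kwh_high:.0%} to "
      f"{energy_used_kwh / pack_kwh_low:.0%}")
```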
spk01: Yeah, I mean, that distributed power and cooling, people underestimate that costs a lot of money.
spk08: Yeah, and the CapEx is shared by the entire world. Everyone owns a small chunk, and they get a small profit out of it.
spk06: Yeah.
spk07: Thanks so much, guys. And just my follow-up is a little bit more mundane. Looking at the 4680 ramp, can you talk about how close you are to target yields and when you might start to accelerate incremental capacity expansions on that technology?
spk03: You know, we're making good progress on that, but I don't think it's super important for at least the near term. As Lars said, we think it will exceed the competitiveness of suppliers by the end of this year. And then we'll continue to improve.
spk02: I think it's important to note also that the ramp right now is tied to the Cybertruck ramp. We're not going to just randomly build 4680s unless we have a place to put them, so we're going to make sure we're prudent about that. But we also have a lot of investments with all our cell suppliers and vendors. They're great partners, and they've done great development work with us. And a lot of the advancements in technology and chemistry we found in 4680, they're also putting into their cells.
spk03: Yeah, a big part of Tesla doing internal 4680 cells was a hedge against what would happen with our suppliers, because for a while there it was very difficult: every big carmaker put in massive battery orders, and so the price per kilowatt hour of lithium-ion batteries went to crazy numbers, crazy levels.
spk02: Bonkers.
spk03: Yeah, just bonkers. So it was like, okay, we've got to have some hedge here to deal with, you know, cost-per-kilowatt-hour numbers that were double what we anticipated. If we have internal cell production, then we have that hedge against demand shocks, where there was too much demand. That's really the way to think about it. It's not like we wanted to take on a whole bunch of problems just for the hell of it. We did this whole program in order to address the crazy increase in cost per kilowatt hour from our suppliers due to gigantic orders placed by every carmaker on earth.
spk04: Okay, thank you. And the last question comes from Ben Calo from Baird. Ben, go ahead and unmute.
spk03: You're still muted. Well, I once again would just like to strongly recommend that anyone who is, I guess, thinking about the Tesla stock should really drive FSD 12.3. It's impossible to understand the company if you do not do this.
spk04: All right, so since Ben is not unmuting, let's try Shreyas Patel from Wolfe Research. Final question.
spk10: Oh, hey. Thanks so much. Elon, during the investor day last year, you mentioned that auto COGS per unit for the next-gen vehicle would decline by 50% versus the current 3 and Y. I think that was implying something around $20,000 of COGS. About a third of that was coming from the unboxed manufacturing process. But I'm curious whether you see an opportunity with some of the other drivers, around powertrain cost reduction or material cost savings: would those be largely transferable to some of the new products that you're now talking about introducing?
spk02: Yeah, yeah, sure. I mean, in short, yes. The unboxed manufacturing method is certainly great and revolutionary, but with it comes some risk, because it's a new production line that's not yet proven. But all the subsystems we developed, whether it was powertrains, drive units, battery improvements in manufacturing and automation, thermal systems, seating, integration of interior components, and reduction of LV controllers, all of that is transferable. And that's what we're doing, trying to get it into the products as fast as possible. So, yeah, that engineering work, we're not trying to just throw it away and put it in a coffin. We're going to take it and utilize it to the best advantage of the cars we make and the future cars we make.
spk10: Okay, great. And then just on that topic of 4680 cells, I know you mentioned it. You really thought of it more as like a hedge against rising battery costs from other OEMs. It seems, you know, even today, you know, it seems like you would have a cost advantage against some of those other automakers. And I'm wondering, you know, given the rationalizing of your vehicle manufacturing plans that you're talking about now, if there's an opportunity to maybe, you know, convert the 4680 cells and maybe sell those to other automakers and really generate an additional revenue stream. I'm just curious if you have any thoughts about that.
spk03: Right. What seems to be happening, unless I'm missing something, is that the orders for batteries from other automakers have declined dramatically. So we're seeing much more competitive prices for cells from suppliers, dramatically more competitive than in the past. It is clear that a lot of our suppliers have excess capacity.
spk13: Yeah, in addition to what Elon said about 4680: what 4680 did for us from a supply chain perspective was help us understand the supply chain that's upstream of our cell suppliers. So a lot of the deals that we had struck for 4680, we can also use to supply those materials to our partners, reducing the overall cost back to Tesla. We're basically inserting ourselves into the upstream supply chain by doing that. So that has also been beneficial in reducing overall pricing, in addition to the excess capacity that these suppliers have.
spk03: Yeah, I mean, this is going to wax and wane, obviously. So there's going to be a boom and bust in battery cell production, where demand exceeds supply, and then supply exceeds demand, and back and forth, kind of like DRAM or something. What is true today will not be true in the future. There's going to be somewhat of a boom and bust cycle here. And then there are additional complications with government incentives, like the Inflation Reduction Act, the IRA, which I always found a funny name for it.
spk02: A comical name.
spk03: Yeah. Is it the Irish Republican Army? The Internet Research Agency from Russia? An individual retirement account? Yeah, exactly, a Roth IRA. It's like the four Spider-Men pointing at each other: which IRA wins? But it does complicate the incentive structure, so that there is stronger demand for cells that are produced in the U.S. than outside the U.S. But then how long does the IRA last? I don't know.
spk02: Which is why it's important that we have both internal cells and vendor cells, to hedge against all of this. Yeah.
spk04: Okay, thank you very much. That's all the time we have today. But at the same time, I would like to make a short announcement. I wanted to let the investment community know that about a month ago, I met up with Elon and Vaibhav and announced that I'll be moving on from the world of investor relations. I'll be hanging around for another couple of months or so, so feel free to reach out anytime. But after this seven-year sprint, I'm going to be taking a break and spending some good quality time with my family. And I wanted to say that these seven years have been the greatest privilege of my professional life. I'll never forget the memories: I started literally at the beginning of production hell, and I've watched the company from the inside to see what it's become today. I'm especially thankful to the people in this room and the dozens of people outside of this room that I've worked with over the years. I think the team strength and teamwork at Tesla is unlike anything else I've seen in my career. Elon, thank you very much for this opportunity that I got back in 2017. Thank you for seeking investor feedback regularly and debating it with me.
spk03: Yeah, well, I mean, the reason I reached out to you was because I thought your analysis of Tesla was the best that I'd seen. Thank you. So, yeah, thank you for... helping Tesla get to where it is today over seven years. It's been a pleasure working with you.
spk04: Thank you so much. And yeah, thank you to all the thousands of shareholders that we've met over the years and walked around factories with. I loved all the interactions, even the tough ones. And yeah, looking forward to the call in the next three months, but I'll be on the other side, listening in. Thank you very much. Thanks.