GSI Technology, Inc.

Q1 2024 Earnings Conference Call

7/27/2023

spk04: Ladies and gentlemen, thank you for standing by. Welcome to GSI Technology's first quarter fiscal 2024 results conference call. At this time, all participants are in a listen-only mode. Later, we will conduct a question and answer session. At that time, we will provide instructions for those interested in entering the queue for the Q&A. Before we begin today's call, the company has requested that I read the following safe harbor statement. The matters discussed in this conference call may include forward-looking statements regarding future events and the future performance of GSI Technology that involve risks and uncertainties that could cause actual results to differ materially from those anticipated. These risks and uncertainties are described in the company's Form 10-K filed with the Securities and Exchange Commission. Additionally, I have also been asked to advise you that this conference call is being recorded today, July 27, 2023, at the request of GSI Technology. Hosting the call today is Lee-Lean Shu, the company's Chairman, President and Chief Executive Officer. With him are Douglas Schirle, Chief Financial Officer, and Didier Lasserre, Vice President of Sales. I would now like to turn the conference over to Mr. Shu. Please go ahead, sir.
spk05: Good day, everyone, and welcome to our first quarter fiscal year 2024 earnings call. We are happy to update you on the milestones we have achieved on our journey toward innovation and growth. Our dedication and focus allowed us to make good progress during our first quarter of fiscal 2024. Let's start with our progress on advancing our growth and innovation objectives. In line with our commitment to land Gemini 1 customers, we have moved forward to the demo stage with two of our SAR targets. Additionally, we added new resources to address the fast vector search market and hone our product for this application. Didier will provide more color on this in his comments. I am also pleased to share that version 2 of our LPython compiler stack is on track for release to beta customers by the end of this summer. This marks a significant step forward in our product roadmap, enabling us to deliver cutting-edge solutions and drive customer satisfaction. LPython is designed to make it easy for other developers to contribute to and improve the software. Part of its appeal is that it can be used on different operating systems, like Windows, Linux, and macOS. LPython is fast because it performs optimization at both a high level and a low level, meaning it makes the code more efficient before running it. Additionally, LPython allows easy customization of the different ways it can convert code, which can be useful for specific needs or preferences. Not only is LPython fast and flexible, but the stack is also usable for other applications, and we believe we could readily create an ecosystem beyond the APU. We are closing in on successfully completing the tape-out of Gemini 2, which is expected to be finalized and sent off to TSMC in the next few weeks. This tape-out is a major achievement and showcases our commitment to pushing the boundaries of AI chip technology. Gemini 2 is an extremely complex chip, and the successful completion of this milestone is a testament to our talented team's hard work and expertise. We anticipate sampling the solution during the second half of calendar year 2024. We remain focused on driving innovation, delivering exceptional products, and leveraging those strengths to foster strategic partnerships that will help propel our company forward. The strategic additions to our team reinforce our commitment to driving growth, fostering partnerships, and delivering innovative solutions to our customers. We are excited about the opportunities and the value these individuals will bring to our organization as they work with our dedicated team to position us for success. I want to thank our employees, customers, and shareholders for their unwavering support and commitment. Together, we will continue to build a bright future for our company. Now I will hand the call over to Didier, who will discuss our business development and sales activities. Please go ahead, Didier.
spk03: Thank you, Lee-Lean. I want to start by addressing a point Lee-Lean mentioned earlier. We have strengthened our team with the addition of two highly skilled professionals who will play pivotal roles in developing strategic partnerships with hyperscalers and establishing our presence in the fast vector search market. These individuals bring a wealth of knowledge and extensive experience in their respective fields. One of our new team members, who will assume the senior data scientist role, will lead our team on various projects and offload some of the workload from our division in Israel. With this team, we will transfer some functions to the U.S., including developing software applications and functions, and undertaking government-related projects that require collaboration with U.S.-based employees. Our U.S. data science team will play a crucial role in assisting customers with the compiler and conducting benchmarks across different platforms. Our new data scientist will collaborate with this team to optimize our plugin for fast vector search, paving the way for the successful deployment of this business line for our company. Our second new resource brings a wealth of experience from the semiconductor sector, having worked for leading FPGA companies. This background has afforded him extensive industry connections, which will be invaluable as we strive to engage and form partnerships with our top hyperscalers. He will lead the building of our platform to explore strategic partners for our APU technology and to develop service and licensing revenue sources to fund future APU development. On the last call, we mentioned we were working with a major hyperscaler on the Gemini architecture for inference of large language models. This relationship holds great potential for our growth, and we recently added additional resources to this team. We have conducted a feasibility study exploring the Gemini architecture, and I am delighted to say that we are making great progress with this prospect. The study specifically focuses on GPT inference utilizing a future APU. We found that the APU, when compared to existing technologies, can achieve significantly enhanced performance levels while utilizing the same process technology. GPT is a memory-intensive application. It requires a very large and very fast memory hierarchy, from external storage memory all the way to the internal processor's working memory. In the GPT-3 175-billion-parameter model, 175 gigabytes of fast memory are required to store the model's parameters. This can be accomplished by incorporating a processor die and several HBMs, which are high-bandwidth memories, on a 2.5D substrate. It also requires large and very fast internal memory next to the processor core as working memory to support the large matrix multiplications performed by the processor core. The APU architecture has inherently large built-in memory and large memory bandwidth that not only provide memory throughput but also support very high performance computation. Gemini can achieve similar peak TOPS per watt as state-of-the-art GPUs on the same process technology node. However, with our massive L1 size and large bandwidth, the APU can sustain average TOPS nearly the same as peak TOPS, unlike a GPU.
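The parameter-memory arithmetic in the preceding remarks can be checked with a short back-of-the-envelope sketch. This is not GSI's sizing tool; the precision choices are illustrative assumptions, but they show why a 175-billion-parameter model needs roughly 175 GB at one byte per parameter, and proportionally more at wider formats.

```python
# Back-of-the-envelope sizing: memory needed just to hold the
# parameters of a large language model at different precisions.
# Illustrative assumptions only, not company figures.

def model_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Gigabytes required to store n_params parameters."""
    return n_params * bytes_per_param / 1e9

N_PARAMS = 175e9  # GPT-3-scale model, as cited on the call

for label, nbytes in [("FP32", 4), ("FP16", 2), ("8-bit", 1)]:
    print(f"{label:>5}: {model_memory_gb(N_PARAMS, nbytes):,.0f} GB")
```

At one byte per parameter this reproduces the 175 GB figure cited on the call; 16-bit or 32-bit formats double or quadruple it, which is why the memory hierarchy, rather than the arithmetic units, tends to be the bottleneck.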
In a single module composed of a 5-nanometer Gemini die plus six HBM3 die, we have calculated that we could achieve more than 0.6 tokens per second per watt with an input size of 32 tokens to generate a context of 64 tokens in the GPT-3 175-billion-parameter model. This output is more than 60 times the performance that could be delivered by a state-of-the-art GPU on a slightly better technology node. This study was done in conjunction with laying out the development roadmap for Gemini 3 to move further into generative AI territory. The APU holds a distinctive advantage in delivering low power consumption at peak performance levels, given its in-memory processing capability. As we have seen, generative AI applications like ChatGPT are becoming more capable with each generation. The driving force behind this improved capability is the number of parameters used by the large language models that power them. More parameters require more computation, leading to higher energy usage and a much larger carbon footprint. To help combat the carbon footprint growth, researchers are exploring new ways to compress data to reduce memory requirements. There are tradeoffs between the formats that researchers are investigating, and to navigate these tradeoffs they need a flexible solution. Unfortunately, GPUs and CPUs lack this flexibility and are limited to a small fixed set of data formats. GSI Technology's APU provides the flexibility to explore new methods. By allowing computation to be performed at the bit level, computation can be performed on any size data element, with a resolution as fine as a single bit. This will allow innovative solutions to be developed and reduce energy use by optimizing the number of usable bits for each data transfer. As we work with potential strategic licensing partners, we can increase awareness of our capabilities to solve some of AI's biggest challenges. Regarding our work on the Gemini 1 solution, we have made notable progress with two of our SAR targets, underscoring our commitment to expanding our presence in this market. We have set a goal of closing a sale in FY 2024 with one of these customers. As I mentioned, we recently added resources to support our beta fast vector search customers. With additional resources in place, we anticipate building a SaaS revenue source with customized solutions for fast vector search customers before the end of the fiscal year. Let me switch now to the customer and product breakdown for the first quarter. In the first quarter of fiscal 2024, sales to Nokia were $1.9 million, or 33% of net revenues, compared to $1.3 million, or 14% of net revenues, in the same period a year ago, and $1.2 million, or 21.8% of net revenues, in the prior quarter. Military/defense sales were 33.8% of first quarter shipments, compared to 22.3% of shipments in the comparable period a year ago and 44.2% of shipments in the prior quarter. SigmaQuad sales were 58.6% of first quarter shipments, compared to 44.8% in the first quarter of fiscal 2023 and 46.3% in the prior quarter. And now I'd like to hand the call over to Doug. Please go ahead, Doug.
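As a rough illustration of the bit-level flexibility point in the remarks above, the sketch below contrasts hardware limited to fixed 8/16/32-bit formats with bit-level processing that can use exactly the precision a researcher wants. The widths and savings figures are hypothetical, chosen only to show the shape of the tradeoff.

```python
# Hypothetical comparison: a fixed-format device must round a wanted
# precision up to the next supported width; bit-level processing can
# use the exact number of bits, shrinking each data transfer.

FIXED_WIDTHS = [8, 16, 32]  # typical fixed integer formats, in bits

def fixed_format_bits(wanted_bits: int) -> int:
    """Smallest supported fixed width that fits the wanted precision."""
    return min(w for w in FIXED_WIDTHS if w >= wanted_bits)

for wanted in (3, 5, 9, 12):
    fixed = fixed_format_bits(wanted)
    saving = 1 - wanted / fixed
    print(f"{wanted:>2}-bit values: fixed format spends {fixed} bits, "
          f"bit-level saves {saving:.0%} of the transfer")
```

For a 3-bit quantized value, for example, a fixed 8-bit format wastes well over half of every transfer; that wasted movement is the energy overhead a bit-level approach avoids.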
spk01: Thank you, Didier. GSI reported a net loss of $5.1 million, or 21 cents per diluted share, on net revenues of $5.6 million for the first quarter of fiscal 2024, compared to a net loss of $4 million, or 16 cents per diluted share, on net revenues of $8.9 million for the first quarter of fiscal 2023, and a net loss of $4 million, or 16 cents per diluted share, on net revenues of $5.4 million for the fourth quarter of fiscal 2023. Gross margin was 54.9% in the first quarter of fiscal 2024, compared to 60.2% in the prior year period and 55.9% in the preceding fourth quarter. The year-over-year decrease in gross margin in the first quarter of fiscal 2024 was primarily due to the impact of fixed manufacturing costs in our cost of goods on lower net revenues. Total operating expenses in the first quarter of fiscal 2024 were $8.2 million, compared to $9.3 million in the first quarter of fiscal 2023 and $6.9 million in the prior quarter. Research and development expenses were $5.2 million, compared to $6.6 million in the prior year period and $5 million in the prior quarter. Selling, general and administrative expenses were $3 million in the quarter ended June 30, 2023, compared to $2.7 million in the prior year quarter and $1.9 million in the previous quarter. We estimate that through June 30, 2023, we have incurred research and development spending in excess of $140 million on our APU product offering. First quarter fiscal 2024 operating loss was $5.1 million, compared to an operating loss of $3.9 million in the prior year period and an operating loss of $3.9 million in the prior quarter. First quarter fiscal 2024 net loss included interest and other income of $80,000 and a tax provision of $51,000, compared to $26,000 in interest and other expense and a tax provision of $60,000 for the same period a year ago. In the preceding fourth quarter, net loss included interest and other income of $101,000 and a tax provision of $191,000. Total first quarter pre-tax stock-based compensation expense was $820,000, compared to $638,000 in the comparable quarter a year ago and $515,000 in the prior quarter. At June 30, 2023, the company had $27.7 million in cash, cash equivalents, and short-term investments, compared to $30.6 million in cash, cash equivalents, and short-term investments at March 31, 2023. Working capital was $32.1 million as of June 30, 2023, compared to $34.7 million at March 31, 2023, with no debt. Stockholders' equity as of June 30, 2023, was $48.6 million, compared to $51.4 million as of the fiscal year ended March 31, 2023. During the June quarter, the company filed a registration statement on Form S-3 so that the company would be in a position to quickly access the markets and raise capital if the opportunity arises. Operator, at this point, we'll open the call to Q&A.
spk04: Thank you. We will now be conducting a question and answer session. If you would like to ask a question, please press star 1 on your telephone keypad. A confirmation tone will indicate your line is in the question queue. You may press star 2 if you would like to remove your question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. One moment, please, while we poll for questions. Our first question comes from Nick Doyle of Needham and Company. Please, sir, go ahead.
spk08: Nick Doyle from Needham. Thanks for taking my questions. Just first, could you expand on the drivers behind the gross margin this quarter and next quarter? So we see a little bit of a decline this quarter, and you expect it to increase next quarter. Could you just expand on why that's happening?
spk01: Yeah, it's really related to product mix. You know, we do our best effort at forecasting what we believe the revenues are going to be during a quarter. But obviously, with only about a third or so of the quarter booked at the beginning of the quarter, we have to estimate where the revenues are going to come from.
spk00: And it's strictly tied into product mix, nothing more.
spk09: Okay.
spk08: Could you just tell us, you know, what part of the mix was higher this quarter that's driving the lower margin?
spk01: Yeah, the biggest thing that impacts the margin is that we have quite a bit of military business, and that has the highest margin. Alcatel-Lucent revenues, I'm sorry, Nokia revenues are generally at a reasonable level, and that also is good margin. So it really is dependent on the mix, and probably the biggest factor is military sales at this point.
spk08: Okay, great. Makes sense. So you talked about how you tested your APU, which can basically sustain higher TOPS and drive better performance per watt with this specific GPT application. Can you just expand on how that's done, how your APU differentiates from the CPUs and GPUs on the market? Is it entirely to do with the ability to do computations at the bit level? That was my understanding. Yeah, any additional color there would be great.
spk05: Yeah, first of all, a GPU has a very, very small cache, and I think that's good for graphics processing. But when you talk about the huge parameter counts in a large language model, they can only do a fraction of what they can do from a TOPS point of view. In the APU, we have a huge memory inside the chip, and we calculate the TOPS strictly from how we can support the processing with our memory. That's how we come up with the TOPS, and that's why our average TOPS is just about the same as our peak TOPS. Okay, so I hope that answers your question.
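The peak-versus-average TOPS point above can be pictured with a simple roofline-style calculation: sustained throughput is the lesser of the compute peak and what the memory system can feed. In the sketch below, every number (peak TOPS, bandwidth, arithmetic intensity) is a made-up assumption for illustration, not a GSI or GPU specification.

```python
# Minimal roofline-style sketch: sustained TOPS is capped by memory
# bandwidth when a workload's arithmetic intensity is low, which is
# why a small-cache device sees average TOPS far below its peak.
# All figures are illustrative assumptions.

def sustained_tops(peak_tops: float, bandwidth_tbps: float,
                   ops_per_byte: float) -> float:
    """Achievable TOPS = min(compute roof, bandwidth * intensity)."""
    return min(peak_tops, bandwidth_tbps * ops_per_byte)

OPS_PER_BYTE = 2.0  # low arithmetic intensity, typical of LLM inference

# (label, peak TOPS, effective memory bandwidth to the compute, TB/s)
devices = [
    ("Small cache, off-chip memory traffic", 100.0, 3.0),
    ("Large in-chip memory next to compute", 100.0, 50.0),
]

for label, peak, bw in devices:
    s = sustained_tops(peak, bw, OPS_PER_BYTE)
    print(f"{label}: sustains {s:.0f} of {peak:.0f} peak TOPS")
```

Under these assumptions the bandwidth-starved device sustains only 6 of its 100 peak TOPS, while the device whose working memory sits next to the compute sustains the full 100, which is the shape of the average-TOPS-near-peak-TOPS claim.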
spk08: Okay, if I could just sneak one more in. I think in the past you talked about the cost of Gemini 2 being about $2.5 million. Is that still the case, and is that entire tape-out cost behind us, or is it still ongoing?
spk01: Just the tape-out cost. Yeah, the $2.5 million is the tape-out cost, so we will have a tape-out expense that could hit later this quarter or the early part of the October quarter. But yeah, that's just the tape-out cost. We've incurred, as we said in our comments, probably in excess of $140 million developing this product line, and
spk00: That's for G1 and G2.
spk05: Great, thanks. Just one comment. We published a white paper on our website with a further discussion of why the APU is good for large language models. So if you're interested, take a look at www.gsitechnology.com.
spk04: Just to remind you, if you would like to ask a question, please press star 1 on your telephone keypad. One moment, please, while we poll for questions. Our next question comes from Luke Bohemond, private investor. Please, sir, go ahead.
spk02: Thanks. So in terms of that study, did you mention that it was projecting a 5-nanometer architecture for the comparison with GPUs on peak performance?
spk05: Correct.
spk02: I'm supposing, based on your understanding of the engineering and the physics of your APU architecture, that you projected that to be feasible. Is that the case, and can you project even further to say whether there is a lower limit when shrinking to an even more dense architecture?
spk05: Yeah, we picked 5 nanometer because at this moment the state-of-the-art processors are at either 5 or 4 nanometer. We wanted an apples-to-apples comparison, so we picked 5 nanometer as the basis of the study. Of course, if we want to implement a real chip, I think we would want to do it with an even more advanced technology, you know, just like everybody else.
spk02: Okay. Yeah, so the intended plan is to make the leap basically from your current node, I think you said 16 nanometer with Gemini 2, all the way to 5 nanometer for Gemini 3. Yes?
spk05: Well, no, no. Gemini, I'm sorry, Gemini 3 is to be determined. We picked 5 nanometer just because everybody else is on 5 nanometer, so it's a fair comparison.
spk03: Right. So, yeah, that 5 nanometer was picked just for the comparison in the study because, as Lee-Lean just said, that's what the GPUs are on, 5 nanometer. We wanted to do a straight comparison on technology. That does not mean Gemini 3 would be on that technology. It could be something more aggressive.
spk02: Ah, okay. Yeah, so not a limit point.
spk03: Correct.
spk02: Excellent. In terms of you all having a larger memory cache and all the other advantages of flexibility in the memory that I read about in the white paper, how does that apply when comparing the APU to GPUs in machine vision? Both for real-world vision, talking about EVs and autonomous vehicles, and referencing the Tesla earnings call saying that they're buying as many NVIDIA GPUs as they can get their hands on, plus your earlier references to being able to apply the APU to that market, as well as more abstract machine vision like drug discovery and genetic medicine, things like that. Are you still seeing similar advantages? Yes.
spk03: Yeah. So, I mean, the advantage, yes. The answer is, you know, our Gemini 1, we understood, was not a fit for what you talked about, ADAS. Gemini 2, we anticipate, will be a better fit, just because of the lack of an FPGA on the board with Gemini 2. But the fundamental unique architecture is going to be the same, which is the fact that we're doing the computation, or the search, on the memory bit lines in place. So we're not going off-chip to fetch the data and then going back and rewriting the data. That fundamental unique architecture of ours is there regardless of the market, with both Gemini 1 and Gemini 2. Awesome.
spk02: Yeah, I just wanted to get that clarification, since we talked about GPU performance being apt for visual processing. So I wanted that clarification about the broader machine vision and visual processing markets. Yeah, that's great. I think I have one more question. I definitely applaud you all for moving forward with the SaaS and the vector search, because there have been so many announcements recently about the value of large vector search, NLP, and neural networks broadly, and seeing how much of that TAM you all can address. It's definitely good to hear that you're putting some more traction on that pathway. And one kind of funny curiosity: I've noticed the name Gemini associated with accelerated computing, most recently and most prominently with Google. It always made sense to me in terms of parallel processing, with the name Gemini and the historical reference. But SpinQ and Google have now also adopted Gemini, and I'm wondering if that is at all an encroachment on your intellectual property, or if you find it to be kind of a humorous affirmation, since you're the first Gemini.
spk01: No, we definitely looked into it, and the issue we have is that our trademark is for a hardware device, a semiconductor device, and Google's is software-related, so there's no overlap.
spk02: Okay, that makes sense. Okay, so has anything shifted? I'm not sure if you've actually crunched the numbers, but in terms of your TAM and SAM and these new focuses on large language models, how do you see the concrete addressable market projections being updated at this point, in terms of timeline and size?
spk03: Yeah, so we're still working on those TAMs. And there are different segments, right? There's the retrieval and there's the generative, and those are two different areas. We can certainly address retrieval now with Gemini 1 and Gemini 2, and we certainly feel the generative side is going to be more with Gemini 3. But yeah, we're working on those TAMs and SAMs now. They're just not available yet.
spk02: Yeah. Yeah. I know it's a hard thing to value, which is reflected, yeah, all over the analyst side of things.
spk09: I think that's all I've got. Thank you. Thank you, Luke.
spk04: Our next question comes from Jeff Bernstein of TD Cowen. Please, sir, go ahead.
spk07: Yeah. Hi, guys. A couple of questions for you. One, just on the last answer: you were talking about Gemini 1 and Gemini 2 addressing retrieval. Do you mean queries there? And when you say addressing generative, are you talking about training? Just clarify that a little bit.
spk03: The response, right? So, yeah, you're retrieving the data, and that's something we do very well now, but generative is really generating the response. And that requires very, very high memory bandwidth, which we have, and a very, very large memory in general. That's why we talked about pairing up with HBM3 for that. So that's more on the generative side.
spk07: Okay, so training, Gemini 3.
spk03: No, no, no, no, no. It's inference. It's still inference, yeah. It's not training.
spk07: Okay, and then since you were talking about the potential for a 5-nanometer or more aggressive line width for Gemini 3, what would the tape-out cost be? I know that you're not a processor, you're more like a memory, so it might be less expensive, but what do you think a 5-nanometer tape-out would cost now?
spk05: Well, at 5 nanometer, the mask set itself costs about $15 million for one set. And to do a design at 5 nanometer, we would probably need $100 million for the design. So what we are doing right now is really looking for a partner. We are not planning to do it ourselves. So, a partnership. Okay.
spk07: And then I just want to talk about the capital situation. You've now got a registration statement in place. Unfortunately, you missed the big run-up in the stock. Why wouldn't you preferentially sell and lease back the headquarters for funds, and then have some more tangible progress to show, before we start talking about raising equity?
spk01: Well, we have looked into the sale of the building, and we haven't decided to do that yet, but that still is an option. You know, property values are significantly higher than when we purchased the building many years ago, and it is an opportunity that we have considered and we've discussed it with the board, but no decision as of yet has been made to sell the building.
spk07: Gotcha. Okay. And then just on the Nokia business: if I remember correctly, you guys are in, at this point, the pretty old Nokia 7750 and 7950 routers. I don't even see any reference anymore to the 50. What's going on there? How much lead time would you get if they were end-of-lifing that? Would there be some kind of lucrative end-of-life revenue that you might get out of that, et cetera? Just give us a little feeling for your understanding of where you are with the Nokia business.
spk03: Sure. Yeah, so as you said, it's in the 7750 and 7950 platforms, and they have extremely long life cycles, as we've been seeing. We get a 12-month rolling forecast from Nokia, and that's as far as they go, and the 12 months still looks healthy. What they did do a while back is what's called a midlife kicker, to try and give a little bit more performance to those existing systems. What that meant for us is that we went from a 72-megabit-density part to a 144-megabit-density part for that midlife kicker, and the ASPs are obviously higher on the larger density part. So what we saw is that even though some of the volumes have come down over time, revenue has been fairly flat, just because the increase in ASPs offset the decrease in quantity. So at this point, it's still going. We still have the 12-month forecast that looks healthy, and that's as much visibility as we get.
spk07: Gotcha. And then obviously there has been some movement around the chip shortages and packaging shortages and that kind of thing. Are we now at a more normalized rate here going forward?
spk03: So the lead times have become more normalized. The pricing, or the costs, have not. So the price increases that were imposed on us, which in turn forced us to raise prices to our customers, are still there. And so, you know, we've kept our ASPs up, and we'll keep them there until there's any kind of movement from TSMC or, you know, any of the substrate folks that raised their prices. But at this point, the real change is the lead times. Lead times have come down to a more normalized area.
spk07: Gotcha. But just in terms of inventories, we should be in a more normal kind of inventory situation going forward here?
spk01: Yes, that's what we fully believe. Our inventories dropped last quarter, too, and we expect them to drop over the next couple of quarters or so.
spk09: Great. Thank you. One moment, please, while we poll for questions.
spk04: Our next question comes from George Gasper, private investor. Please, sir, go ahead.
spk06: Thank you. It's George Gasper. Again, I'd like to dig into the financing situation. Based on your current cash position, and looking at your current development progress profile, what is your forward view on the need to exercise a financing requirement?
spk01: Well, at this point, given the materials we've discussed with the board, this fiscal year we'll certainly burn some cash, maybe $12 to $13 million if the revenue numbers hold up. And if the revenue numbers hold up next year, we could start turning the corner and actually having more cash at the end of fiscal 2025 than at the end of fiscal 2024.
spk06: I see. So what you're saying is that, based on the way you're moving along, your present cash position is sufficient for your targets and the development that you see over the next year?
spk01: Currently, that's true. That's the situation.
spk00: It is.
spk06: Okay. All right.
spk09: Thank you. Thank you.
spk04: Thank you. There are no further questions at this time. I would now like to turn the floor back over to Mr. Shu for closing comments. Please go ahead, sir.
spk05: Thank you all for joining us. We look forward to speaking with you again when we report our second quarter fiscal 2024 results. Thank you.
spk04: This concludes today's teleconference. You may disconnect your lines at this time. Thank you for your participation.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
