Innodata Inc.

Q2 2024 Earnings Conference Call

8/8/2024

spk03: In a minute, I will describe some of the business we have won and some of the new business opportunities we are pursuing. Suffice it to say, we are seeing an increase in both the number and the magnitude of potential customer requirements, which we reflect in our increased guidance. Therefore, we're taking steps to ensure that we have sufficient liquidity to accommodate working capital as our already substantial growth potentially accelerates. First, we have increased our receivables-based credit facility with Wells Fargo from $10 million to $30 million, subject to a borrowing-base limitation, with an accordion feature that enables it to expand up to $50 million, subject to the approval of Wells Fargo. Maryse, in her remarks, will give more color on the terms of this facility. I believe the Wells Fargo facility, as now extended, will be sufficient to fund our working capital requirements for our anticipated growth. That said, we want to be prepared to react quickly to customer demand that could result in us significantly exceeding our anticipated rate of growth and therefore having additional working capital needs. Toward that end, this afternoon we filed a universal shelf registration statement on Form S-3 with the SEC. Once the registration statement is declared effective by the SEC, we will have the flexibility to sell up to an aggregate of $50 million worth of our securities in registered offerings pursuant to the effective registration statement. We believe it is prudent and good corporate governance to have an effective shelf registration statement on file with the SEC to preserve the flexibility to raise capital from time to time if needed. As disclosed in the registration statement, we have no specific plans to raise money at this time. The intended uses for the net proceeds from any such offering would be set forth in a prospectus supplement.
Now I'll give you an overview of the success we're experiencing in the marketplace with both existing and new customers. On June 3rd, 2024, we announced that one of our existing Magnificent Seven big tech customers had awarded us two significant new LLM development programs. These programs are expected to deliver approximately $44 million of annualized run rate revenue and represent the single largest customer win in InnoData's history. These awards are in addition to the new programs and program expansions with this customer announced on April 24th, 2024 and May 7th, 2024. In the one year that InnoData has been working with this customer, InnoData has landed new programs and program expansions that bring the total value of the account to approximately $110.5 million of expected annual run rate revenue. InnoData aspires to replicate this success across the six other big tech customers already contracted for generative AI development and to land additional big tech accounts. We won several other new assignments in the quarter as well, and we expect to land several others in the near future. Some notables include a big tech company that would be a new customer for us. It is one of the most valuable companies in the world and one of the companies most often talked about in connection with generative AI. Another is an existing big tech customer. In connection with this opportunity, we would aim to become certified to work on their premises. We believe being co-located with their engineering and operations teams may potentially enable us to access new, attractive opportunities. We also expect to shortly sign a prominent social media platform that is building its own generative AI models and would be a new customer for InnoData. Another noteworthy win was with a clinical provider in the healthcare market. Up until now, we've been focused on the use of the Synodex platform as a tool for supporting insurance underwriting.
This new engagement is the first time that we will be applying the platform in a clinical use case. We believe that the Synodex technology roadmap may enable us to expand to support additional clinical use cases in the future. We have also been awarded a deal to provide news briefs and media monitoring to a federal government agency that will be leveraging the new generative AI capabilities built into our Agility platform. We are seeking to expand into the public sector, so we consider this a strategic win. We have started to integrate Agility with what we call PR Copilot, our purpose-built generative AI layer that enables PR professionals to get more done in less time and at lower cost. While we're only about 30% into our roadmap for PR Copilot, it is already delivering tremendous business value. This quarter, Agility revenue crossed the $5 million mark for the first time. Our Agility demo-to-deal win rate in the quarter was 36%, significantly higher than the sub-20% win rates we were achieving prior to starting this integration. And we doubled our new business bookings in Q2 compared to the prior year period, even though we're operating with a leaner sales force. Now, before I turn the call over to Maryse, I want to share our perspectives on the generative AI market opportunity and how we have shaped our strategy to capitalize on where we see the market going. In our view, the big tech companies are clearly bullish about how generative AI technology will support their core products and services and enable exciting new opportunities. For the MAG7, capital expenditures in the latest quarter were up 63% year over year. With the bulk of these expenditures tied to generative AI spending, it is clear that the market sees under-investing as a greater risk than over-investing. In the not-distant future, we believe the technology will enable computers to reason and plan, to solve hard problems, and to self-organize in complex ways that help people accomplish their goals.
Our belief is that generative AI technologies will soon sit deeply and ubiquitously in every tech stack. That's why none of the big tech companies can sit this one out. The shift in experience is destined to be too significant, making the risk of being left behind untenable. Just as the California gold rush began on January 24th, 1848, the GenAI gold rush began on November 30th, 2022, when OpenAI demonstrated to the world the power of training a deep neural net on enormous quantities of data and utilizing massive compute for inferencing. As a result, the world's largest tech companies went on the offense, committing to massive GenAI programs, solving for the next big market opportunity while simultaneously defending their hegemony. One analyst has forecast $1 trillion of GenAI CapEx over the next several years. We subscribe wholeheartedly to the notion that in a gold rush, you want to be the person selling the shovels. The shovels required by the big tech companies in the GenAI gold rush take the form of compute and data. Compute is expensive and hard to come by, which is why we believe NVIDIA's market cap has skyrocketed over 7x to $2.6 trillion since the beginning of 2023. Data is also expensive and hard to come by. What's more, we believe the data that is likely to be required to train tomorrow's GenAI is going to become even more expensive and even harder to come by. And we believe that is InnoData's opportunity. The next generation of LLMs will be trained to handle more complex tasks and to be more agent-like. The complexity will take the form of models that handle difficult multi-turn tasks, for example, asking an LLM to find out how much vacation I have left and book me a trip. Complexity will also take the form of deep domain-specific tasks, like helping doctors diagnose disease or helping banks sort out complex regulation.
And complexity will also take the form of models that enable users to work with audio, video, and text interchangeably. You'll hear this referred to as multimodal capabilities. Training data will be required to build models that can handle this complexity. Unlike the web data that got today's LLMs halfway there, these more complex LLMs are going to require a high quantity of high-quality data to be specifically developed to show the models how they're supposed to function. Right now, this data does not exist anywhere. It isn't on the web. It isn't in the cloud. It isn't on the premises of enterprises. Because it is neither input nor output, it exists only in a transitory, unpreserved state, its impermanence perhaps justified by its nature as a byproduct. In other words, when we solve hard problems, we don't save our work. When we began building our own AI models and applying them to our managed services work in legal data and medical data, we had to build new workflow platforms to capture and preserve this interim knowledge in an organized way to be used to train our models. This was our eureka moment, when we realized that our breakout opportunity would be in creating this byproduct of human thought in order to train other people's models. Doing this as a science, in a way that is repeatable and scalable, is a huge opportunity, and we are still in the early days. We intend to be the preferred provider of complex demonstration data at scale, required to train models for complex reasoning, multimodal use cases, agentic retrieval augmented generation, or RAG, and for domain specificity across all languages. Our competitive advantage is that for decades we've been providing high-quality data across domains such as medical, law, regulatory, science, and finance. We're encouraged by the feedback from our customers, who already recognize that no single factor has as much influence on LLM performance as the quality of customized data for supervised fine-tuning.
We will always be looking for ways to drive continuous improvement in how we operate, ensuring that our training data is both the best quality and the most economical. Now, on the enterprise side, we believe that 18 to 24 months from now, enterprises will dramatically accelerate their generative AI adoption. We believe the catalyst for this will be generative AI that can tackle multi-phase tasks without losing its way, now often referred to as agentic RAG, in combination with advanced open source models, which significantly lower the bar for experimentation. These smaller but highly trained language models will likely prove ideal for enterprise applications that require high accuracy for specific tasks. Like the big techs, we believe enterprises will drive both offensive and defensive strategies to support their investments. The offensive play will be defining new product experiences, while the defensive play will be keeping pace with competitors, who we anticipate will work to enable their current products and reengineer their operations to be AI-first. Just as with the big techs, we believe enterprises will come to recognize that you've got to be all in, even with uncertain near-term ROI. A few years from now, we envision enterprises will face a shortage of experienced talent and may struggle to manage their internal data. Thus, the shovels for enterprises will be the people with the experience to help them choose the right architectures, the right approaches, and the right models, and to help them manage and deploy their internal data. InnoData's enterprise strategy is focused on this. Specifically, we see the opportunity to respond to these anticipated emerging needs in three ways. First, for enterprises building their own capabilities, we will be ready to assist across the entire continuum of integration types and levels, from fine-tuning custom models to building agentic RAG applications.
As enterprises move their AI services from development to production, they will need to know: how are the models working? Are they performing as intended? Are they, as intended, helpful, harmless, and honest? We see a big opportunity in helping them monitor their LLMs for alignment and safety. We are developing both services and platforms to respond to this need, powered by high-quality custom data. Second, for enterprises that prefer to outsource, we will make available managed services that are engineered to leverage these technologies. And third, for enterprises that prefer generative AI encapsulated in industry platforms, we will provide platforms specifically designed for industry-specific, knowledge-intensive workflows. In this way, we intend to serve enterprises at their highest point of value. I'll now turn the call over to Maryse to go over the numbers, and then Maryse, Anish, and I will be available to take your questions.
spk00: Thank you, Jack, and good afternoon, everyone. Let me briefly share with you our 2024 second quarter financial results. Revenue for Q2 2024 reached $32.6 million, reflecting a year-over-year increase of 66%. On a sequential basis, we observed a 23% increase of $6.1 million from Q1 2024 revenue of $26.5 million. Adjusted gross margin for Q2 2024 was 32%, reflecting a sequential decrease from the 41% we achieved in Q1 2024. This reduction is attributable to the $3.6 million of recruiting costs we incurred in the second quarter to support a substantial expansion of our organization, to prepare for a significantly larger revenue base. When you exclude these unusually high recruiting costs, adjusted gross margin in the quarter would have been approximately 44%. Similarly, adjusted EBITDA for the quarter was $2.8 million, a reduction from $3.8 million in Q1 2024. But without the $3.6 million of recruiting costs, adjusted EBITDA would have been $6.4 million, or 20% of revenues. There are three things worth noting. First, as Jack mentioned, we expect our adjusted EBITDA next quarter to be approximately triple the $2.8 million of adjusted EBITDA reported this quarter. Second, we have since enhanced our captive recruiting engine to enable us to reduce the cost of future larger-scale recruiting. And third, we expect quick payback on recruiting costs, typically within just a few months, and strong ROI. Our cash position at the end of Q2 was approximately $16.5 million, up from $13.8 million at year-end 2023. Let me also elaborate a bit on the credit facility extension that Jack mentioned earlier. We are indeed very pleased to announce that Wells Fargo has increased our receivables-based credit facility from $10 million to $30 million, with an accordion feature that enables it to scale up to $50 million, subject to Wells Fargo's approval. The amount drawable under the facility at any point in time is determined based on a borrowing base formula.
The facility has an attractive cost of capital for amounts drawn under the line, set at SOFR plus 2.25%. We greatly appreciate Wells Fargo's confidence in our business and believe the extended facility will be sufficient to fund our working capital requirements for our anticipated growth. That said, we want to be prepared to react quickly to customer demand, which could result in InnoData significantly exceeding the rate of growth we have guided to today. With this in mind, this afternoon we filed a universal shelf registration statement on Form S-3 with the SEC. Once the registration statement is declared effective by the SEC, we will have the flexibility to sell up to an aggregate of $50 million worth of our securities in registered offerings pursuant to an effective registration statement. We believe it is prudent and good corporate governance to have an effective shelf registration statement on file with the Securities and Exchange Commission to preserve the flexibility to raise capital from time to time if needed. We have no specific plans to raise money at this time. The intended uses for the net proceeds from any such offering would be set forth in a prospectus supplement. In terms of preparing for accelerated growth, our expanded Wells Fargo line of credit and the flexibility provided by the shelf registration statement are expected to allow us to finance our short-cycle, growth-driven working capital needs for a revenue base significantly higher than our current projections. Thank you, everyone, for joining us today. Mike, we're open for questions.
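[Editor's note: the quarterly figures quoted in the prepared remarks above are internally consistent, and the relationships can be verified with a quick sketch. All inputs below are the rounded numbers as stated on the call; this is an illustrative check, not company-published math.]

```python
# Cross-check of the Q2 2024 figures quoted in the prepared remarks
# (all inputs are the rounded $ millions as stated on the call).
q1_revenue = 26.5        # Q1 2024 revenue
q2_revenue = 32.6        # Q2 2024 revenue
recruiting = 3.6         # one-time Q2 recruiting costs
adj_ebitda = 2.8         # reported Q2 adjusted EBITDA

sequential_growth = (q2_revenue - q1_revenue) / q1_revenue
ebitda_ex_recruiting = adj_ebitda + recruiting
ebitda_margin_ex = ebitda_ex_recruiting / q2_revenue

print(round(sequential_growth * 100))   # ~23 (% sequential growth)
print(round(ebitda_ex_recruiting, 1))   # 6.4 ($M, EBITDA ex-recruiting)
print(round(ebitda_margin_ex * 100))    # ~20 (% of revenue)
```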
spk02: At this time, we will be conducting a question and answer session. If you would like to ask a question, please press star 1 on your telephone keypad. A confirmation tone will indicate that your line is in the question queue. You may press star 2 if you would like to remove your question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star key. One moment, please, while we poll for questions. Okay, we do have our first questioner: Allen Klee from Maxim Group.
spk06: Yes, hi. Great job. This is just a clarification question. In the press release, one of the first things you say is that you won large language model development programs and expansions with a big tech customer valued at approximately $87.5 million of annualized run rate revenue. Is that a new contract, or is that, as you talk about below, the customer that you expanded with, that you say has a total of $110.5 million?
spk03: Thank you. Correct, Allen. Those were the contracts that were announced during Q2. So that was a recap of what was won and announced in Q2.
spk06: Okay, great. And then on some of these other new contracts, could you maybe talk a little bit about which of the contracts you've announced are fully ramped up, or what percent you expect to see a greater contribution from in the future?
spk04: In Q2 or generally speaking going forward in the year?
spk06: Generally speaking, going forward. You know, of the announcements you've made, are there certain ones that... could you give us a sense of, like... yeah, yeah.
spk03: Yeah, so there are seven big tech customers that we're contracted with now to perform generative AI work, and that includes the one that scaled very nicely. Of the seven, I don't believe we're fully ramped up with any of them. I believe that they all hold tremendous opportunity for us to expand into, and I think we're going to be making significant progress along that path over the next several quarters. The other thing I would add is we also anticipate, as I mentioned in my remarks, landing another couple of big tech customers, which similarly will offer us the opportunity for significant potential expansion.
spk06: So just following up on that, the guidance you give today, does that incorporate any contracts that haven't been announced yet that you may expect to win?
spk03: So, I think the revenue from contracts that we haven't yet announced but expect to win can certainly accrue to Q3 and Q4. When we provision our guidance, there are a lot of puts and takes; we're factoring all sorts of things into that, including new contracts. But if you take that in the aggregate, we're, you know, comfortable that our guidance is conservative, and we think there's opportunity to exceed it.

spk06: Okay, thank you. And then you mentioned that you're bringing the recruiting in-house and that's going to save money. So do you feel that you still have to recruit a lot more, or is it that you can just do it more efficiently? Now that you're saving the money, do you feel comfortable about getting enough people to ramp up?
spk03: Yeah, we're very comfortable in our ability to recruit. It's not that we won't use external agencies anymore. I think we still will, especially for particular kinds of recruiting. But we're very excited to have built an internal recruiting engine. Had we had that in place, we probably could have avoided the several millions of dollars that we had to spend in this quarter. But recruiting cost is, you know, a good problem to have. We recruit primarily reactively, pursuant to demand from our customers, and we get a very fast payback on those investments with a very high ROI. So it's good that we've now got a strategy in place to lower those costs prospectively, but even without lowering them, the ROI and the payback are very fast and very compelling.
spk06: That's great. You talk about a bunch of contracts that you've won recently. In terms of what you're doing, is there anything different among them? Of the annotation and training and monitoring work, are they kind of everything, or is there a certain focus that customers are looking for?
spk03: So there's a little bit of everything, but our strategy is very much focused on what I think of as three tiers, all of which are growth factors. At the foundational layer, kind of the bottom tier, you've got the big techs and the ISVs who are developing generative AI foundation models. In the middle tier, you have enterprises whom we're helping leverage generative AI. And then at the top tier, you have us building generative AI-enabled platforms for kind of niche industry use cases. So in the things that I mentioned, there's a bit of a sampling of all three tiers. The tremendous growth that we're seizing on today is at that bottom layer. It's the enablement layer, working with the big techs. But we're aggressively planting seeds and earning referenceability in the other tiers as well. And especially long-term, if we do things right and we're planting the seeds properly today, we see those as things that enable our growth three, four, five years out from now.

spk06: Got it. Thank you.
spk06: You talked about Agility and adding Copilot and the benefits from that. I don't know if I caught everything, but could you expand a little on what the value-add of Copilot is, and the opportunity you see for it to grow Agility?
spk03: Sure. So in Agility, just for a little bit of additional context, we have about 1,500 customers, $20 million of ARR, about 17% to 18% growth year over year, and 70% adjusted gross margin. So, you know, lots of operating leverage. We've been performing very, very well. I mentioned that we doubled our bookings in the quarter with a sales force that I believe is about 15% smaller than it was last year. So very high performing, and operated very efficiently. The idea behind PR Copilot was that you could disassemble the workflow of PR professionals and enable them to use generative AI at multiple points in that workflow, to enable them to do more with fewer resources for their customers, to be able to do more for less money. And as I mentioned, we're only 30% into the integration, meaning there are, we'll call it, eight different points within the PR workflow where we feel we can creatively leverage these technologies, and we've only gotten through a couple of them. We're planning on making an important announcement, probably within the next few weeks, about another element of that PR Copilot integration. We're excited about that. And we've got every reason to believe that the improvement in the results that we're now seeing will be further accelerated as we further integrate that roadmap.
spk06: Thank you. For Synodex, you mentioned that this is the first time you have a clinical application. Could you go into a little bit of what that means?
spk03: Sure. So what we're doing in Synodex is extracting, at a very granular, very detailed level, medical information from patient healthcare records. And the use case that we've been working with up until now has been primarily life insurance underwriting and related insurance underwriting, property and casualty, and things like this. What we haven't had is the technology layer that's sufficient to support clinical use cases. So, for example, analyzing patient medical records in order to make determinations about treatments and decisions that would need to be taken in a clinical setting, meaning hospitals and doctors and live patients. With this new win, and in conjunction with the development we've been doing in our technology, we now see opportunities, both now and down the road, to increasingly target clinical use cases. And we're very excited about that, obviously, because that would represent an extended market.
spk06: Got it. Last question. So you mentioned that there's around $300,000 of recruiting costs that will be less in the second quarter, I'm sorry, the third quarter. If I was just thinking about expenses overall, is there any way to think about the operating leverage, and to what degree operating expenses will grow at a lower rate than the top line?
spk03: Yeah, so I think the way to think about it is, even if you just look at this quarter, our sequential revenue is up about $6 million, and our adjusted EBITDA, net of that $3.6 million of recruiting costs, was up about $2.6 million. So 2.6 over 6 is about 43% flow-through to contribution. Now, obviously that won't hold up every quarter; there are puts and takes in any quarter, but I think it's indicative. Now, if you look at operating costs, one of the benefits of executing as aggressively as we are in the big tech market is that it's very concentrated. You don't need a lot of sales and marketing in order to work these accounts. What you need primarily is great execution, and that's what we've been bringing to the table. So as you think about that contribution margin, which in this quarter would have been 43% or so absent the recruiting costs, you're not going to consume a lot of that in SG&A. Now, it's not all going to show up as operating profit, but a lot of it will.
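[Editor's note: the flow-through arithmetic described in the answer above can be reproduced as a quick sketch. The inputs are the approximate figures as quoted on the call; this is an illustrative check, not company-published math.]

```python
# Quick check of the flow-through calculation quoted above
# (inputs are the approximate figures stated on the call, $ millions).
seq_revenue_increase = 32.6 - 26.5       # Q2 vs Q1 2024 revenue, ~$6M
recruiting_costs = 3.6                   # one-time Q2 recruiting spend
ebitda_increase_net = (2.8 + recruiting_costs) - 3.8  # Q2 ex-recruiting vs Q1, ~$2.6M

flow_through = ebitda_increase_net / seq_revenue_increase
print(round(flow_through * 100))         # roughly the ~43% cited
```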
spk06: That's great. Thank you, and really fantastic quarter. Thank you very much.

spk03: Thank you, Allen. Appreciate it.
spk02: We now hear from Hamed Khorsand with BWS Financial.
spk01: Hello, everyone. Thank you so much for taking my questions. This is Sarah calling in for Hamed at BWS. My first question is regarding the addition of the large tech company win that you announced today. Does this indicate that you're in all seven Magnificent Seven companies?
spk04: So, Sarah, first, thank you for coming on the call.
spk03: Welcome. Say hello to Hamed for me. So what we've talked about is that we're in seven big tech companies, and of those seven, five of them are MAG7 companies.
spk01: Okay, thank you. And then yes, yes, very clear.
spk03: Thank you. Under NDAs, we can't use customer names, but it's, you know, five of the seven MAG7, plus two others who are very important and notable customers in the GenAI market, but they're not MAG7 companies.
spk01: Right, got it. Thank you so much. My next question is, other than the one large tech company that has given you the $110 million worth of work, where are you in the revenue recognition process with the other six large tech companies?
spk03: I don't have a number for you handy right now, but what I would say is we're at an early stage. We're seeing that accelerate, especially with a few of them pretty rapidly now. And I think we're going to see that acceleration through the end of the year on several. We believe, frankly, that all of the seven are going to grow with us this year. So, you know, very early days, but very exciting. Our goal, of course, is to replicate the success that we've had with this big one, which was earliest out of the gate, with as many of the others as we possibly can. No guarantees, obviously, but, you know, we're competing against the same people, and we're bringing the same execution that has so differentiated us in this large account.
spk01: Thank you for that. My next question is regarding the previously mentioned, I guess, increase in recruiting costs. Do you find that you're still needing to hire people or are you at a good headcount figure? And I guess what is the timing of revenue to help offset the hiring and recruiting OPEX?
spk03: Sure, great question. So the timing is pretty fast. It's really within a matter of a couple of months. There's training that needs to go on. There's other things that need to take place. But generally speaking, it comes on very quickly. And that's why, as an investment, it's as compelling as it is. I think you should think about the recruiting spend, you know, over time as, you know, one that will be reactive. So there'll be a baseline spend that's, you know, kind of always going on and in a normalized growth mode. And then when we get big lands like we did in the quarter, you know, $44 million win, our largest ever, you know, single win, then there's going to be concentrated spend. Now, I'm hopeful that that concentrated spend will be lower than it was this quarter by virtue of our captive recruiting capability that we've now put in place. But frankly, even if it weren't, you're not going to ever find a better investment opportunity than that.
spk01: Right. Thank you so much. And this will be my last question. So you are raising liquidity. Are you seeing extended net terms from customers?
spk03: No, we're not. We're paid very quickly. So nobody's stretching us out. All is good there. Now, when we do our modeling to determine our needs, we take a very conservative approach. If someone's paying us in 30 days, we'll model it at 60 days. If someone's paying us in 60 days, we'll model it at 120 days, just to provide that conservatism in our forecasts. But no, we're paid well, and Maryse makes sure of that. Thank you, Sarah.
spk01: Thank you so much.
spk02: Our next questioner is Tim Clarkson with Van Clemens.
spk05: Hey, Jack. Obviously, I'm thrilled with the results. Great work. I'm sure you and Mrs. Abuhoff are very happy with all the hard work over the many years, and maybe I'm the third happiest person in the world about InnoData. But anyhow, getting into the business side, in terms of these big contracts, what's the magic elixir that's allowing you to get a contract of that magnitude versus the competition?
spk03: So I think there are several things. I think if there's one thought that I'd ask you to kind of hold on to, it's that data and AI are inextricably linked. And that's true with the big techs when you're training the models. It's true in the enterprises when you're fine-tuning or customizing or implementing RAG-based solutions. Now, as you know, we've been in the data business for a long time. We've worked for years with the most demanding customers, who are the most error-intolerant in the world, and we learned how to keep them happy, and we were working across domains: tax, regulatory, legal, medical, healthcare, technical, financial. All of that is repurposable into this opportunity. All of that is what makes us, we've been told, the number one provider for this largest account that we now have. Now, the good thing is that what we've learned and what we've developed as a platform is transferable. We're now engaged by these other accounts. We're bringing those same capabilities to the field in these other competitions, and we have every reason to believe we'll be successful.
spk05: Right. Now, I suppose a corollary of this is that when your large customers work with companies whose annotation isn't as accurate, they don't get the successful results. So from their point of view, it's very risky to do business with someone whose data isn't as good as it should be.
spk03: Yes, that's absolutely right. Garbage in, garbage out: bad data trains poor-performing models, of course. I think there are two aspects to it. One is the data quality. And as I mentioned, the challenge of creating high-quality data is only going to get more significant. As we move into agentic RAG and these other applications, and think about domain-specific models and more complex models, the challenge of finding training data is going to be more significant, and then the challenge of getting it right is going to be harder still. So that's one of the differentiators that we bring. The other thing that we bring is just very reliable, very tuned execution. So when someone's training a model, they're depending on getting that data payload on time, when expected, because they've reserved the data center and they've reserved the GPUs for those training cycles. If the data is late, or if the data needs to be reworked, those GPUs are sitting not being used, costing a ton of money. So we think of that performance on essentially those two vectors: data quality and data timeliness. And if we can execute well on both of those, as we have now, we become a very important partner to very important customers.
spk05: Sure. Hey, just kind of give a profile. What would be the typical background of an employee in the Philippines or India that's doing this kind of work for you in terms of, you know, college degree, speaking English, how many years of experience? What would that look like?
spk03: Yeah, so a couple of things there, Tim. Firstly, as you well know, our legacy has been in hiring people offshore, and the Philippines and India have historically been locations of choice for us. Now, for this set of opportunities, we are hiring people in those locations, but we're also hiring a ton of people in other locations. We're hiring a lot of people here in the U.S., many hundreds of people here in the U.S. So our footprint and our profile, relative to who we're hiring, is changing dramatically from what you recall. The other thing I'd say is that when we're building these teams, we're hiring a pyramid of different skills and capabilities. At the base of the pyramid are people with very fine-tuned language capabilities, people who are linguists. We have PhDs, master's, and bachelor's degrees in linguistics, computational linguistics, journalism, and English, the language that we're working with being English. People need to pass significant batteries of tests in order to be qualified. We then measure their aptitude for this kind of work. And then, of course, we put them through pretty extensive training, and we design, in partnership with our customers, the kinds of workflows that we can parse out to people and have them be effective very quickly after being trained. So the good thing about that is we believe we can keep on scaling and not face an impediment in terms of being able to recruit and staff. We believe the platform is extensible and can keep growing to support the growing customer base and the expansions that we're anticipating.
spk05: Great, great. Well, look, I'm speechless with how well things are going. Appreciate it. Let someone else ask some questions. Thank you.
spk04: Thank you, Tim.
spk02: We have reached the end of our question and answer session. We now turn the call back to Jack for any closing remarks.
spk03: Thank you, Operator. And thank you, everybody who joined the call. We've really never felt more bullish about our business or more enthusiastic about our market opportunity. Creating large-scale, near-perfect data is a hard technical problem. It took us years of development and a ton of trial and error. But that's why we've emerged as the preferred data engineering partner at our biggest big tech customer, and why we're getting solid traction with other big tech customers. We believe large language models will not be built without training data; it's a critical must-have. We also believe that our differentiating capabilities will be even more highly valued as models become more complex and require more complex training data. If you remember, back in the middle of 2023, we said we were executing on a transformational strategy. We also said that our training data would be the foundation of making LLMs valuable. You're now seeing that with your own eyes in our results. Today we are focused on growing InnoData to be a larger and more valuable company. We believe there are exciting days ahead, and we're really thrilled that you've chosen to be part of our journey. Thank you.
spk02: This concludes today's conference and you may disconnect your lines at this time. Thank you for your participation.
Disclaimer

This conference call transcript was computer generated and almost certainly contains errors. This transcript is provided for information purposes only. EarningsCall, LLC makes no representation about the accuracy of the aforementioned transcript, and you are cautioned not to place undue reliance on the information provided by the transcript.
