Video: How AI Is Reshaping Software Pricing & Contracts (and How To Stay In Control) | Duration: 3316s | Summary: How AI Is Reshaping Software Pricing & Contracts (and How To Stay In Control) | Chapters: AI's Evolving Role (1.36s), AI's Dual Future (112.68s), AI Budgeting Challenges (212.885s), AI Value Perception (337.28s), AI Pricing Models (499.165s), AI Value Perception (736.69s), AI Pricing Challenges (932.62s), AI Disrupts Pricing (1057.535s), AI Credit Models (1137.23s), AI Cost Transparency (1346.3s), Outcome-Based Pricing Challenges (1527.41s), Measuring AI Success (1729.235s), AI Accountability Strategies (1895.815s), AI Budgeting Challenges (2043.2s), Measuring AI Impact (2287.96s)
Transcript for "How AI Is Reshaping Software Pricing & Contracts (and How To Stay In Control)": sure to do that. So, first of all, guys, I asked AI to describe the journey it's been on over the past year in finance and procurement, in its own words. You ready for this? Oh, man. Let's hear it. This is AI speaking in the first person. So it says: over the past year, I've gone from being a clever parrot perched on your shoulder, good at repeating patterns and sounding confident, to something more like a mycelium network under a forest, connecting signals, sensing what's off, and moving the right nutrients to the right places. For finance and procurement, that means I'm less about flashy outputs and more about invisible leverage, linking contracts to spend risk and surfacing the weird supplier moments. And so, it's super fascinating that AI has described itself that way. Thoughts on that? Now do it in a Shakespearean sonnet. Wow. I mean, that was amazing. Mycelium network, where did that come from? I'm still baffled. I don't even know what a mycelium network is. So, hey, at least AI outsmarted me already. So AI is delivering more value faster, and the buying market is expecting all of this to cost less. So we're in a weird moment, because the selling market feels like we should be paying more, and we've got buyers and sellers kind of competing. There's tension, right? So I'm gonna kick it off for both of you and just ask, how did we get here, and what does it really mean? When you think about this strange moment that we're in, where delivery is getting better, in my opinion, but pricing is going down and not up. Thoughts? I think this, to me, is the ultimate tension with AI. Right? Is AI a utility, kind of like electricity, or is AI something that's going to n-x technology spend, because we're gonna have artificial general intelligence and AI is gonna be doing the work of people? Mhmm.
Like, those are very different visions for the future. I can tell you where the VC dollars are betting, which is the latter vision, but we see signs of both sides. On the one hand, you know, my mom expects to be able to go to ChatGPT and get an answer to her question, and we're seeing even kind of generic LLM tools offer an insane amount of value for free, and value that's honestly hard for proprietary tools to even replicate or get better at. At the same time, I am, you know, working with and talking to vertical software companies in spaces like legal. And in their industries, AI can actually do the work of maybe a junior associate who would normally be charging hundreds of dollars per hour. And so when you think about the pricing power, the monetization potential of AI tools that are essentially delivering really great work output at a fraction of the cost, that's a pretty exciting future. But, you know, I don't know where we're gonna go in the next two years, and there's a lot at stake between which direction we go. Yeah, for sure. Lots of eyes are on it, and it really has become mainstream when our grandparents, or people in our family that are not highly technical, are using it every day as the new Google. CJ, what are your thoughts on this question? Kyle covered the trends exactly how I'm seeing them as well. Just from my CFO lens, something that I know we'll get into later, Russell, but I just wanna get it off my chest here at the beginning: most software pricing still sits inside pre-AI budget buckets. So Kyle hit on the B2C motion there and how we have grandparents that are using AI now. I wanna zoom in on what we're doing as businesses. So you've got the IT budget. You've got the finance budget, the operations budget, and you have G&A efficiency tools. It's color-inside-the-lines type of stuff, and those budgets were designed to augment workflows.
They were not meant to replace labor or decision making. So we're at a time where AI is delivering labor-like outcomes, but it's being priced like software and being compared to historical budget envelopes. I'm guilty of it myself. And I think that mismatch, Russell, is the core tension. And so until budgets move from tools to labor outcomes, I think the pricing is gonna stay under some pressure. Yeah. I mean, budgeting in general, I like to say that budgeting follows strategy. But at so many companies, budgeting is used to drive strategy, which I think is completely lopsided. And it's because finance kicks off the budget season. The company hasn't thought about strategy, and so suddenly it's the opportunistic time to do that. But I think we need to invert that. So there's this other thing going on that I call the microwave effect. And to me, the microwave effect is when something comes so fast and easy, do we begin to appreciate it less? Like, how many times have I seen people, my teenagers, stand by the microwave and be like, oh, come on. And they're literally heating up an entire dinner in a minute and a half. And so if you think about the corollary to AI: is AI becoming so smart so fast that it's commoditizing itself before we even have a full understanding of what it's worth? Maybe we'll get started with you, CJ. We are so spoiled. We're completely spoiled. So there's this Picasso story. I can see Kyle rolling his eyes right now. He's gonna kill me because I've told this on our podcast a million times. But, Kyle, it actually fits this scenario. So Picasso's sitting in a cafe. He's sketching on a napkin. Right? And someone recognizes him and asks to buy the napkin. And he finishes it up in a minute, and he says, okay, that'll be a million francs. And the person freaks out. They say, a million? You took five minutes to draw that. And Picasso says, no, it took me a lifetime. And I think that's what AI is.
We've packed basically all of human knowledge into something that answers in a second, and now we're anchoring on the second. We're not anchoring on the lifetime of information that went into it. We're not anchoring on the infrastructure, or the fact that it's replacing real work. And from a user standpoint, speed messes with our brains. I know it completely scrambles my brain. When something gets easy, I think we stop respecting it, like the microwave. We price the interaction, that moment, not the spend. And so it's scary to actually think about. And I remember when we first had that moment asking ChatGPT to write us a poem, and I was joking with you, Russell, when you read that description. Our jaws hit the floor when we could say, write it as a Shakespearean sonnet. And now we get frustrated when we're asking it to do deep research comparing, like, public company bonds or securities, and we're like, this takes two seconds longer than I thought it would. So we're definitely spoiled. We're spoiled, Kyle. We are spoiled. But AI doesn't necessarily need to cost less or be free, and that's where I'm not sure where the future's gonna go. In a lot of spaces, we're used to getting a fast and pretty generic response from AI. So if AI is doing something like summarizing things that already exist on the Internet, we expect that to be commoditized. If AI is driving a car, well, actually, did you know Waymos cost more than an Uber or a taxi? People are spending more money on a Waymo because it is safer. You don't have the awkwardness of having a driver with you. You can control the temperature and the music how you want. Like, the experience of AI transportation is more valuable than other forms of transportation.
And so that's where I think we're starting to get into some nuance with AI, where, from a company standpoint, if you can measure the outcomes associated with AI or the work that AI is doing for your company, and there's a value attached to that outcome, like, why shouldn't it be valuable to you? Yeah. It reminds me of, like, the early days of the Internet, when you had to pay for faster speed. And now we just take for granted that it's lightning fast. But if the Internet ever goes out in my household, it's like the entire world stops. Armageddon. I'm getting, you know, texted by my children: the Internet's out. And then we all just sit there. Like, we don't know what to do. Like, go outside. Enjoy the sun. But, really, AI is happening all day, every day, in so many applications. We may even be interacting with it and not know, or we're using it constantly. And so it's running the risk of creating invisible value that's hard for the consumer to even quantify. And so we call this the AI tax, as you know, and the AI tax is the drive for suppliers to kind of absorb and pass on some of the cost of investing in all of these capabilities. But here's my observation. So many companies that have become agentic still provide everything they used to provide before they were agentic. They're just now providing it faster and more performant, always on, never sleeping, faster iterative cycles, proactively pushing insights to you. And at the same time, the buyers seem to be expecting the pricing to come down because of that. So you've got those that are investing in AI wanting to get more, and those that are buying the AI expecting that this is cheaper. Right? There's less human labor. So what's broken here with this model?
I think, to me, one of the broken things is that if pricing stays in this standard way that we've been used to buying software, we're going to expect, hey, if we're paying per seat, we're spending $15 per seat, we now expect that $15 to stay the same, or pretty close, and now have AI in addition to just software. But that was the model of software as a workflow tool that maybe replaced spreadsheets or, you know, pen and paper, email-based workflows. That is a paradigm that, like, I think a lot of companies are going down. Right? They're just keeping per-seat pricing, now with AI. But there's another world in which we actually try to say, hey, AI is delivering work products for us. And if we can measure what is the work product being done by AI, and how valuable is that to the company, we can take more of a share of that. And so as an example, in customer support, many of the tools in that space have said, we're actually not gonna charge you per customer support person that uses AI. We're gonna charge you based on the number of tickets that AI automatically resolves for you. And Intercom, for example, came out really early on with the model that said it's gonna be 99¢ per resolution. When they started, their resolution rate, I think, was 25%. Now their resolution rate is 70% of cases. And so they've essentially tripled their monetization within their customer base as the product gets better. And for their customers, there's a great ROI, because the alternative to having AI resolve these tickets is having people. Maybe that's an outsourced, you know, set of people or an in-house team doing it, but at a cost much more than a dollar per resolution. And as we think about models like that, it really puts a lot of onus on teams to measure what is the job being done by technology and how good is it at doing it. So we can't just say, does a vendor have AI or not? Like, check yes or check no.
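The per-resolution economics described here can be roughed out with the numbers from the conversation (99¢ per resolution, a resolution rate improving from 25% to 70%); the ticket volume and the cost of a human-handled ticket below are assumptions for illustration, not figures from the show:

```python
# Rough per-resolution economics. The price and resolution rates come from
# the conversation; MONTHLY_TICKETS and HUMAN_COST_PER_TICKET are assumed.

PRICE_PER_RESOLUTION = 0.99   # vendor's per-resolution charge
HUMAN_COST_PER_TICKET = 8.00  # assumed fully loaded cost of a human-resolved ticket
MONTHLY_TICKETS = 10_000      # assumed ticket volume

def monthly_economics(resolution_rate: float) -> dict:
    """Vendor revenue and buyer savings at a given AI resolution rate."""
    ai_resolved = MONTHLY_TICKETS * resolution_rate
    vendor_revenue = ai_resolved * PRICE_PER_RESOLUTION
    buyer_savings = ai_resolved * (HUMAN_COST_PER_TICKET - PRICE_PER_RESOLUTION)
    return {"ai_resolved": ai_resolved,
            "vendor_revenue": round(vendor_revenue, 2),
            "buyer_savings": round(buyer_savings, 2)}

early = monthly_economics(0.25)  # at launch
now = monthly_economics(0.70)    # as the product improved

# Vendor monetization scales with product quality, not seat count:
print(now["vendor_revenue"] / early["vendor_revenue"])  # ~2.8x, close to the 'tripled' figure
```

The point of the sketch is the incentive alignment: the vendor's revenue only grows when the resolution rate grows, and the buyer saves more at the same time.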
It becomes, how good is the AI? What's the effectiveness rate? Right? What's the speed around it? Like, we have to get actually much deeper around evaluating vendors, and I think that's really hard to do. People throw around the AI tax term a lot, and I've joked with Kyle about this before. I think at a more micro level, Russell, do you remember having to pay extra for single sign-on? Absolutely. But now it's bundled in. Right? Like, it's table stakes. Everybody has to have it. Yeah. And now the idea of paying extra for it sounds crazy. And I think we're going the same direction with bundling in note taking with any sort of app. Right? I have five different apps that can do note taking for me now. That used to be something that you'd buy on its own. So that's my analog to it, but it's much larger with some of the technology that they need to put in to keep up. So I don't think anything is broken in the technology necessarily, but what's broken, to Kyle's point, is the incentive system around it. Because vendors are afraid to churn existing customers. Right? Everyone's like, oh, okay, let's protect what we have here, despite adding a ton more value. And they're choosing the retention, Russell, over the incremental monetization. So from a CFO perspective, what's actually happening is kinda simple. It's that vendors are delivering these labor-level outcomes, like the customer support example that Kyle gave, but they're still selling them into software budgets with annual uplift caps. So they end up over-delivering value to protect the logos that they have instead of resetting the price. And buyers aren't irrational here, because the procurement team, they're trained to protect downside. I've never met a procurement professional who gets comped on underwriting upside. Right? They're like the goalie. And buyers still get that upside, even if they complain about getting AI features that they didn't ask for.
So that's kind of what I'm seeing, and I've seen it go down in multiple deals, where it's clear that someone's getting a ton more value than they got prior, but their expectations are just so heightened of what they should get for it. See, I look at this a different way, CJ. So in the past, a procurement team might say, hey, we're evaluating going with Slack or Microsoft Teams. Well, Teams is bundled in with what we already pay Microsoft, and it's free. So even if Slack is 10% better, 20% better, like, it's just not enough value to beat free. Let's go with the tool we already have access to; we can save a lot of money. I think with AI, there's a temptation to have that same evaluation and say, these tools both have AI. That AI might be a little bit better, but, really, AI is AI. It's a commodity. It's like a microwave. Right? But I think the reality is that AI can work extremely well if it has the right context and if you have access to the advanced models. Like, it can deliver work products that are similar to what people can do, or even beat benchmarks in things like coding or other industries. But you can also have, like, generic slop AI. And I think that's what's so hard: AI is not a check-the-box feature, and it can't be evaluated that way. Something positive could come out of this. It just made me think, Kyle, about the context piece of it. I'm curious how retention rates will trend, say, three years from now. If you've been training something on your company, similar to an employee, right, I'm not sure a new AI will be able to ramp just as fast. Like, the switching costs, are they higher? Are they lower? It's something that I've had on my mind a lot. Yeah. I think a lot of people freaked out when Gemini 3 came out and people started to say, hey, it's better than ChatGPT. I think for many of us, myself included, we're like, ChatGPT has a history of my stuff, years of conversations. Yeah.
If I asked ChatGPT what it knows about me, it would scare the hell out of me. In fact, they did that with their, essentially, year-end review feature. It was like, you know too much. But it knows so much that the outputs are already highly personalized to what I'm looking for, and I just have it as a shorthand back and forth, almost as if it's been operating as my chief of staff or assistant. I'm not gonna turn off models just because something might be a little bit better. Mhmm. I feel like AI monetization and pricing is exposing a broader theme: evaluating how well the software we're hiring is doing at the job we're hiring it to do, that rigor already needed bolstering at many companies. We already are not very good at evaluating that in general as business stakeholders. We just buy another tool, and another tool, and that's what causes the SaaS sprawl. And I like the point that Miriam made, which is, the gap between we need this and we can build this is approaching zero because AI is accelerating. Also, as you were both talking, it occurs to me, we used to be at a point where go-to-market was maybe moving faster than technology could build, and you would sell towards things that you maybe didn't have yet. Now the capabilities are accelerating so fast that go-to-market, sales, you know, the conversations we're having, how we're pricing it, is now trying to catch up to the technology, and it's completely inverted the cycle, in my opinion. And so on that pricing front, do you feel like we're moving to where invoicing is going to have more radical transparency required? You know, people get surprised suddenly by this big invoice, and it's like, oh my gosh. Yeah. I think, to the points we made earlier, AI breaks flat pricing. Right? Full stop.
So once software cost is directly tied to compute, inference, or activity, you can't hide behind just an annual contract or this hand-wavy usage anymore. The economics will force transparency. And so from a CFO standpoint, I think live metering becomes the new de facto standard, because if spend can move materially month to month, I need to see it in real time. You have to remember that a lot of people aren't just buying an outcome. They're buying predictability of the cost behind the outcome. So I can't wait till renewal. I can't wait till a QBR deck. Like, I need it now. I wanna be able to log in and see it today. And I wanna pause for a second to say, though, that this doesn't mean we have to make it more complicated. It sounds like I was just teeing up that this all has to be more complicated to be able to see it. No. I think the vendors that win, Russell, are not the ones with the fanciest models. They'll be the ones who make the invoicing boring, predictable, transparent, and extremely explainable to a CFO or finance professional. Well, Kyle, you look at pricing all day; you are the czar of pricing. What's gonna happen with invoicing? Yeah. So I look at this, you know, in some ways similar to CJ. For a lot of AI companies, the initial temptation was to give AI use for free, and then they realized they had a lot of underlying costs of delivering AI, especially as they wanted to give customers access to the latest and greatest models. So their costs started to explode through a combination of, like, expensive LLMs, larger context windows, more reasoning, more agentic capabilities. And so just about everyone said, we need some sort of AI credit model. This year, I've been tracking the top 500 software and AI companies. At the end of 2024, only 35 of those 500 had any sort of AI credit model. At the end of 2025, that number went from 35 to about 80. So a big increase year on year, and the trend seems to be continuing.
And I think that is great from a vendor perspective, because it helps them cover their own AI cost, especially as you have some power users that are really heavy adopters of AI. The challenge for a company buying AI on a credit-based model is that credits look different for every single company. Some say a credit is a token. Some say it's an API call. Some say it's an action in the product. One action might be worth five or 10 credits. Another action might be worth one credit. It's really hard to compare across vendors when all of them say they have credit pricing, but they use different logic under the hood. And you're also looking at, hey, you've maybe already had relationships with OpenAI or Anthropic. You're paying for tokens with them, and then you might have unused token credits with another vendor. It just gets really, really difficult to manage. And I think the token- or credit-based model was easy for vendors to adopt, but now procurement and buyers are demanding much more transparency around it. How am I consuming a credit? How do I have predictability of how many credits I've consumed over the last month, the last week? Who on my team is consuming credits, you know, at the user level? Can I put caps at an individual level, so maybe my intern doesn't spend a thousand dollars in credits next week? Like, we're gonna demand much more transparency. And I think we need both live metering and live control over the bill, so we don't have these bills that come due, and then we look at them and we're like, this is so much more than we had ever planned, and it doesn't line up with the value we're seeing from this product. Yeah. We've got some good comments coming in from the audience. One is from Michael Shields, talking about how the LLMs, if they're not profitable, are going to push to be profitable and pass those costs on, sometimes at two x. And so we know this is going to create pricing pressure, because they're not doing all this for free.
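The controls being asked for here, per-user visibility and hard caps so an intern can't rack up a thousand dollars in credits next week, could look something like this minimal buyer-side sketch; the class and method names are hypothetical, not any vendor's actual API:

```python
# A minimal sketch of buyer-side credit metering with per-user caps, in the
# spirit of the controls discussed: live visibility and hard limits.
# All names here are hypothetical illustrations.

from collections import defaultdict

class CreditMeter:
    def __init__(self, default_cap: int):
        self.default_cap = default_cap        # monthly credit cap per user
        self.caps: dict[str, int] = {}        # per-user overrides
        self.used: dict[str, int] = defaultdict(int)

    def set_cap(self, user: str, cap: int) -> None:
        self.caps[user] = cap

    def remaining(self, user: str) -> int:
        return self.caps.get(user, self.default_cap) - self.used[user]

    def spend(self, user: str, credits: int) -> bool:
        """Record usage; refuse (rather than overdraft) when the cap is hit."""
        if credits > self.remaining(user):
            return False                      # blocked: would exceed the user's cap
        self.used[user] += credits
        return True

meter = CreditMeter(default_cap=500)
meter.set_cap("intern", 50)                   # tighter cap for the intern
assert meter.spend("intern", 40)              # allowed
assert not meter.spend("intern", 20)          # blocked at the cap
```

The design choice worth noting is the refusal semantics: blocking at the cap gives finance live control over the bill, instead of discovering an overage when the invoice comes due.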
Then Eddie made a comment about, the same way we see finance stepping in with AWS on the savings side, I wonder if we see the CFO more involved with engineering to determine committed usage. Yeah. I mean, quickly on that, it's funny because, CJ, when I was doing the multiyear discount analysis with you and for you, I found myself using our own instance of AI. And I know we're on a credit-based model, and I'm, like, churning and churning and churning, iterating and iterating. And as CFO, I paused, and I was like, what is this costing me? Guess what? No clue. I'm not the admin of our instance, so there's no transparency to even know, as you're working, how much it's costing you. You gave me a link so I could do my own stuff in Sigma with the data I had. And in the back of my head, I'm like, I really hope I don't get a nastygram from Russell saying I just racked up a $5,000 bill, because you don't know how much data you're spinning up and what you're putting in motion immediately. Right? Yeah. And, you know, Justin points out that this flexibility of, like, just consuming is maybe greater for sales, marketing, and R&D. But we're talking to procurement and finance people today. We sit in G&A. We often don't have the discretionary spend. So if we're talking about accelerating our own use of AI, we're not sitting on a variable marketing budget that I can just dip into to pay for that. So it feels like getting visibility around users and credits, it's all like, how do we begin to even understand and make heads or tails of all this? OpenAI had their dev day, I think they call it, and they put on the screen, Russell, the top 50 credit users, who'd used the most tokens, by company. And I think Duolingo was number one, but it went through all the different companies. And it made me think of the line from The Big Short. It's like, why are they confessing? It's like, they're not confessing. They're bragging.
They're bragging that they used all of this. Is that a wall of fame or a wall of shame? I'm unclear. That's what I couldn't decide. Is this a good thing or a bad thing? Like, I hope you did something with all those credits. You know what this all reminds me of? My grandma grew up in the Depression. And if you ever left a light on when you exited a room, you would not hear the end of it. And so I'm like, who left the AI on is gonna be the new who left the light on and who left the water running. But all of this is because, for many companies, they're pricing their AI as if it's a utility that they're essentially passing through, like electricity. And in that model, like, these are the fundamental questions, right, that we're gonna have to answer. If instead your AI vendor said, we're gonna charge you nothing for tokens or credits, we're only gonna charge you when we deliver tangible business outcomes, you actually wouldn't need to worry about that. You could use it as much as you want. You're only gonna pay when there's a business outcome that is positive for your company. And I'd argue, by the way, if a company does do things like, say, manage your chargebacks so that they can get fraudulent chargeback revenue recovered for you, and you have historically not had a function that can recover that chargeback revenue, this is, like, found revenue. Why shouldn't they take 20% of that? Right? If more of your vendors have this outcome orientation with their model, I think it's actually very win-win for both sides. I agree, Kyle. I think we're also, though, going to need to link the outcome of the credit usage on a departmental basis. And so I liken it to Snowflake. Right? It starts with the engineering team. And then the account manager at Snowflake, their job is to go out and hunt for other workflows within your organization.
So, Russell, you have a lot of financial data. I would love to move that into Snowflake's cloud so we can move all of that around. Then, oh, the HR department has data too. What you end up with is all different workflows in all different departments consuming the credits, but I don't think success is the exact same thing depending on which department you sit in. Yeah. I mean, in many ways, I like the thought of outcome-based pricing. Right? Because it aligns incentives. The challenge with outcome-based pricing is, how do you avoid argument over whether the outcome was achieved? Because the vendor decides, oh, outcome achieved. Yeah. But the consumer's like, that didn't answer my question. I guess the concern would be that it begins to lose track of the whole goal, which is digging in, using the capability, benefiting from the capability. So I'm very torn, I'm on the fence, about outcome-based pricing. Where do you two fall? I think it is the most aligned model that I can think of. But, yeah, to your point, the challenge is measuring outcomes and having attribution over the outcomes. The last thing you'd want is for a vendor to be charging you based on all these things they claim credit for, and you're like, hey, you didn't do those things. I did those, with maybe a small assist from your product, but barely any. But that's where, from a vendor perspective, the onus needs to be on them to prove it. Right? Like, it should be their responsibility. And I think that's something that many companies are gonna need to get much better at, and their sales processes are gonna need to prove that to buyers and procurement teams. And I think we're gonna see a lot of innovation in that space over the next year. But some of the forward-thinking companies, so, like I said, Intercom was an early adopter in doing this with their FinAI product.
I think they do things like, if someone closes out of the support chat and they don't return for twenty-four hours, that counts as a resolution. Or, at the end of an interaction, there's a checkbox for them to say your question was resolved; if they click that, then that indicates it was resolved. Right? And if Intercom has a back and forth with a customer for even fifteen, twenty minutes, but then it gets escalated to a human support representative, they can't take credit for that, even though they were very involved in that interaction or might have resolved part of it. And so there's a lot of nuance here around how you're defining an outcome and getting attribution for it. But I think thoughtful companies can work through these nuances. They can make it transparent. They can explain it. They can have documentation to prove, you know, when their product has actually delivered. And for the vendors that can do that, I would personally be willing to pay a lot more for those products, because there's essentially no risk. You're only paying when there's upside, and there's a lot of trust, because that vendor has really thought through the mechanics of the motion. Yeah. Russell, when people come to you internally and they want funding for some tool or project, do you have them fill out, like, a slide or a memo or something? They use Tropic. Oh, they use Tropic. Historically, I've asked people how they would measure success on something. Like, we're gonna buy this new HRIS system; write down for me what success looks like, because in six months we're gonna be back on the phone to measure it. By the way, you almost never end up back on the phone to measure it, and that's a strategy and cultural thing that's broken at a lot of companies. But I think it goes to figuring out how you measure before you sign the deal.
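The resolution rules described above (close the chat and don't return within twenty-four hours, or explicitly confirm resolution, with escalation to a human voiding the vendor's credit) can be written down as a small decision function. This encoding is a sketch of what was said on the show, not Intercom's actual logic or API:

```python
# Hypothetical encoding of the "billable resolution" rules described in the
# conversation. Field names and the rule ordering are illustrative.

from dataclasses import dataclass

HOURS_WITHOUT_RETURN = 24

@dataclass
class Interaction:
    escalated_to_human: bool   # handed off to a support representative
    confirmed_resolved: bool   # customer clicked the "resolved" checkbox
    hours_since_close: float   # time since the customer closed the chat
    customer_returned: bool    # customer came back with the same issue

def billable_resolution(i: Interaction) -> bool:
    """Can the vendor claim (and bill for) this interaction as a resolution?"""
    if i.escalated_to_human:
        return False           # no credit, even if AI did most of the work
    if i.confirmed_resolved:
        return True            # explicit customer confirmation
    return i.hours_since_close >= HOURS_WITHOUT_RETURN and not i.customer_returned

assert billable_resolution(Interaction(False, True, 1.0, False))    # checkbox clicked
assert not billable_resolution(Interaction(True, True, 48.0, False))  # escalated: void
assert billable_resolution(Interaction(False, False, 30.0, False))  # silent for 24h+
assert not billable_resolution(Interaction(False, False, 30.0, True))  # came back
```

Writing the definition down this explicitly is the point: if both sides can read the same function before the deal is signed, there's much less to argue about at the invoice.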
And if the seller has a different idea of what success looks like than the buyer, then it's gonna be broken, because no one ever agrees on math that you scramble to figure out after the fact, trying to calculate, like, this looks good or this looks bad. So I think with AI now, and especially the outcome-based pricing that Kyle had mentioned, you have to both be speaking the same language before the deal is signed. It makes it really hard if you're trying to back into ROI using some crazy math. Yeah. I'm just worried about the paper trail. If we agree that transparency will need to rise and that the providers will begin to provide more visibility around the consumption of AI, I'm thinking through how we avoid an invoice that is, like, divulging private information about the different prompts that are run, or just being overly loquacious or verbose about all of the different uses of AI. What's the middle ground here? How do we balance transparency without it being a complete dissertation of an invoice that makes AP's heads spin? I think it reminds me of court, where one side just buries the other side in paper, and they're like, you're never gonna be able to get through that, right, in the discovery process. That can happen to a finance leader, where you ask for, like, the details behind it, and you get every single prompt that somebody ran, every single query on the data with timestamps, and that's too much. And I don't have an answer on, like, how much is enough, but I do think you have to have a summary that they understand, that is simple. Right? And you have to give them the ability to go on their own, choose-your-own-adventure, to dive down into the data and filter it on an as-needed basis. Like, you have to do both. You have to give them the front door that's very easy to understand. Yeah. I've gotten invoices before.
I'm like, I don't even know where to go from here. I don't get it. And I have to dive deeper. You have to give them the self-service tool, similar to, like, on GCP, where you can check the different environments that you ran with it. And you may need to pull in somebody else who's actually doing that, like, from the infrastructure team, but it is there, and you can go down that rabbit hole. So Alfonso has a question. He says, it seems like vendors will have more participation in developing business strategy, but no accountability. What is the best approach to bring accountability into mutually beneficial relationships? Will businesses be required to create partnership agreements and share revenue models, cost models? And it feels like that's what you were both talking about: agreeing ahead of time on the measure of success, so that we mutually agree that the outcome has been achieved. But what's your take on Alfonso's question? Well, I think things are gonna bifurcate. Right? So we talked towards the beginning about technology budgets versus headcount budgets, or function-specific budgets. So we'll have some technology, AI being part of this, that is more of a, hey, it makes our team more productive. It digitizes our workflows. It's maybe core infrastructure that our team needs to be successful. That's technology spend. We want that spend to be predictable. We don't wanna have some sort of, like, cost overage that we didn't plan for. You know, per-seat pricing or fixed-fee pricing makes a lot of sense, and we don't need to think too hard about, like, the partnership agreements or get too cute with outcome attribution and all that, because it's just essential spend for our team to keep doing what they're doing. And then we have technology that's going to essentially take things that we would normally outsource to maybe a BPO or a vendor or contractor, or things that we maybe hire incremental headcount to do.
We're now hiring an AI to do that. In those models, when we're hiring people, we often have a performance target for them. A sales rep has a quota they're supposed to hit. There's maybe a cliff they need to get past before they receive any bonus upside, and there are performance measurements around whether they're hitting their quota and how they're tracking against it. There's maybe even an accelerator if they do really well and blow through it. We're going to need those same kinds of hiring plans for our AI products. The KPIs will look different depending on whether this is an AI that helps us with sales, marketing, finance, engineering, or operations. But I think we're going to need to start treating these agreements as more similar to what we do when we hire someone. Kyle, you've adeptly moved us into the next topic, which is budgets. I feel like we've crossed the adoption chasm very quickly here, but the budgets and business models haven't caught up to mirror the purchasing consideration cycle you're talking about. Two questions in one. First, react to the thought that budgets and business models haven't caught up. Second, if AI is eating low-end human work, are we at a point where we need to be clear that when someone says, "well, we didn't allocate budget for that," and they're just looking at their software budget, they need to go look at their headcount budget and allocate some of that over to this agentic spend? I mean, we've got a former CFO here. I'm curious, CJ, what you would do or encourage us to do. For me, I see nuances: vendor spend sits somewhere in between. If we've already outsourced something to a vendor and an AI can do it, it kind of skirts this question.
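The "hiring plan for AI" structure described above, a quota with a cliff and an accelerator, can be sketched as a simple payout formula. This is an illustrative model only, not any vendor's actual contract; the thresholds, rates, and the ticket-resolution example are all hypothetical.

```python
def outcome_payout(units_delivered, quota, base_rate,
                   cliff_pct=0.5, accel_rate=None):
    """Hypothetical outcome-based fee, modeled on a sales-comp plan.

    - Below the cliff (a fraction of quota), no performance fee is paid.
    - From the cliff up to quota, pay base_rate per unit delivered.
    - Above quota, pay an accelerated rate on the overage.
    """
    if accel_rate is None:
        accel_rate = base_rate * 1.5   # assumed 1.5x accelerator
    if units_delivered < quota * cliff_pct:
        return 0.0                     # under the cliff: nothing earned
    in_plan = min(units_delivered, quota)
    overage = max(units_delivered - quota, 0)
    return in_plan * base_rate + overage * accel_rate

# An AI support agent with a quota of 1,000 resolved tickets at $2 each:
print(outcome_payout(400, 1000, 2.0))    # below the 50% cliff -> 0.0
print(outcome_payout(800, 1000, 2.0))    # within plan -> 1600.0
print(outcome_payout(1200, 1000, 2.0))   # 1000*2 + 200*3 -> 2600.0
```

The point of the cliff and accelerator is the same as in a sales plan: the buyer pays nothing for an AI that underdelivers, and the vendor earns upside only past the agreed target.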
And so I think a lot of the AI products that were early adopters in this space have displaced vendor spend, where we can say, hey, there was already a budget assigned to it, and we're swapping vendor spend for AI spend. Or they're selling to fast-growing companies where the company might have needed to hire ten people and now only needs to hire five, so it's seen as headcount avoidance rather than replacement. I've seen the most success there, but things get much harder if we're actually talking about headcount replacement on an existing team. But curious to hear your thoughts, CJ. The budgets don't align with how AI is replacing labor, because historically we would have a people budget. If you're at a tech company, 70% of your costs walk on two legs, and then you have a budget for travel, software, rent, and everything else. Those heuristics are out the window if you're replacing part of that 70% budget, but the problem is we're still packaging it the old way. What I struggle with the most is how to quantify it. Avoided hires, I kind of get. Reduced contractor spend, sure. But a lot of this ROI has to triangulate into the budget as just time saved. And, Russell, we've kicked this idea around: what are you doing with that saved time? Are you making more sales with it? Can we prove that, or is it hand-waving? Is it just that you don't have to work as hard, so now you can take your dog for a walk for an extra hour today? I mean, there's something to be said for overly burdened employees having better quality of life; there's a benefit there for your employees. But at some point these investments need to pay dividends in efficiencies. They help both sides of the P&L: they accelerate growth, and they improve efficiency.
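The triangulation just described, hard savings from avoided hires and contractor reductions versus softer "time saved" claims, is simple arithmetic once the assumptions are written down. A minimal sketch, with entirely hypothetical numbers; the key design choice is separating hard savings (defensible in a budget conversation) from soft ones (which only count if the hours are valued and redeployed):

```python
def ai_roi(tool_cost, avoided_hires=0, loaded_cost_per_hire=0.0,
           contractor_savings=0.0, hours_saved=0.0, hourly_value=0.0):
    """Illustrative annual ROI triangulation for an AI tool.

    Hard savings: avoided hires and reduced contractor spend.
    Soft savings: time saved, valued at a loaded hourly rate.
    """
    hard = avoided_hires * loaded_cost_per_hire + contractor_savings
    soft = hours_saved * hourly_value
    benefit = hard + soft
    return {
        "hard_savings": hard,
        "soft_savings": soft,
        "net_benefit": benefit - tool_cost,
        "roi_pct": (benefit - tool_cost) / tool_cost * 100,
    }

# A $120k/yr tool that avoids one $150k loaded hire and $30k of contractor
# spend, plus a claimed 500 hours saved valued at $60/hr:
r = ai_roi(120_000, avoided_hires=1, loaded_cost_per_hire=150_000,
           contractor_savings=30_000, hours_saved=500, hourly_value=60)
print(r["net_benefit"])   # 90000.0
print(r["roi_pct"])       # 75.0
```

Note that in this example the deal clears on hard savings alone ($180k against $120k); the soft time-saved figure is upside, not the justification, which is exactly the discipline the discussion is arguing for.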
The problem is we just don't always hold the business accountable for achieving those efficiencies. We talk about how this is going to save our, fill in the blank, marketing team time. But then when the marketing team goes and asks for another headcount, we forget that we hired this tool to avoid that in the first place. So I think it's about keeping a good track record of the benefits we set out to achieve, and closing the loop: wait a minute, is this tool delivering the benefit we thought it would? Right. If the ROI shows up in a measurable P&L line item within a quarter or two, being reasonable, finance is happy to fund it. But if the payoff is vague, and I hated when people came to me with these terms, "it enables better decisions," "happier teams," "more insight," that may all be true, but it won't move the budget. It pains me, but it ends up getting treated like overhead if you can't put it into something more concrete. Yeah. I think procurement teams are maybe an interesting test case for this. If you're a procurement person looking at a tool that could save you, let's say, twenty hours a week, especially on tedious work you don't want to be doing in the first place, there's a big potential ROI for the business, but realistically you're still going to have the same size procurement team as before. So it's actually hard to justify that spend without some harder ROI saving somewhere. If you were going to hire two people and now you only have to hire one, that's a great justification. But the other question to ask is: if it's just our current team, and now we've got all this time back every week, what are we going to do differently as a team? How does our job change?
And I think, based on this discussion, the job is actually going to change. Instead of just managing vendors at the later stages of a deal, where our buying teams have already decided which products and which vendors they want and we're handling the negotiations and the back end, we can get much more involved proactively: helping identify where technology could improve teams' lives, where we have duplicate spend or opportunities to invest more, and how we're measuring vendors' accountability against what they promised. There's a lot more proactive, strategic work that teams can do with that saved time, but it really changes the nature of what a team is responsible for. If they're not doing that manual, tedious work anymore, what are they doing? I think that's a really uncomfortable question to ask ourselves, but it's exciting too, because it can be much more impactful for the overall company. Yeah. Before we dig into our next topic, ROI, I want to unpack something your comments brought to mind. I think about PowerPoint and Google Slides, and how the creation of those tools was in many ways a boon to communication. But how often have you found that the existence of these tools creates work? We start to think strategy equals creating massive amounts of decks, iterating on them a gazillion times, beautifying every slide to perfection, normalizing the formatting. It's a huge time suck that really only happened because a new tool got created that lets you build presentations. So pivoting to AI: to what extent is it now doing work that's net new versus replacing work, or is it augmenting? Take compliance, for example.
Proper compliance would mean every document goes through a compliance protocol. But how many teams can actually do that? With AI, you can. So in some cases AI is always on, making sure nothing falls through the cracks, and there's value to that. In other cases, there could be more novel AI that's creating brand-new things that may or may not even be driving value for the business. And maybe that's what you call AI slop, Kyle. Or, see, one of you called it AI slop. So how do buyers weed through that? How do they determine where AI is helping? Kyle, I'll let you tackle that one first. Well, that's where I'd ask every team: okay, if we adopt this tool, what changes around your goals, your OKRs, your revenue goal, or any performance goal you have as a team? If they have a good answer, that's great. That means the tool is really adding value or allowing the team to be more productive, and it's something you can track to see performance against those objectives. If it's more like, "oh, this is helping us stay a little more compliant," or "hey, this is a process we weren't doing before, maybe we should consider doing it now," well, there are always a lot of nice-to-have things and tool sprawl in a company, and it can be really uncomfortable to say no to some of that, or to stop it when you see it happening on another team. But the reality is, if there's no different performance expectation for the team, then it's really hard to justify that the technology is having an impact. Maybe to put it in slightly different terms: I think it's about which unit of work changed. That's how you have to measure it. Is it invoices processed per FTE? Tickets resolved per agent? Deals reviewed per lawyer?
Forecast iterations per analyst, days to close, errors per transaction. You're trying to tie it back to a unit of work for the person using the tool, because if you can't measure the unit of work, I don't have a good way to measure the improvement. And are there even systems in place to measure all those things at most companies? Remind me, is it telemetry? I always mix it up with the word for talking to aliens. Trying to figure out how much product usage there was. So I worked at a backup and recovery company. It was on a per-license basis, and we had no way, telepathy, good thing I didn't say that this time, we had no way of knowing how many licenses were actually used on the other end. That's been solved by a lot of SaaS companies since then. But to your point, Russell, I'd love to measure this thing, but do we even have a system to do that? So now you're buying something else; you're creating an admin system. Everybody out there with a complicated billing system knows this: oh, now we need an admin system to charge the way we want to charge. Do you need some sort of admin construct in order to measure AI productivity? Now you're in a circular reference. Yeah. I mean, if it matters, if it's a metric, a unit of work that matters to the organization, you should be measuring it, or there should at least be a way to measure it, even if it's a proxy. And I'd argue AI is actually pretty good at measuring things, even taking unstructured data and structuring or analyzing it without a whole lot of extra admin workload. So if it's a unit that's really important to the business, you should do the work to measure where you are today and how you're trending on that metric over time.
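The unit-of-work framing above reduces to two numbers: cost per unit before the tool, and cost per unit after (including the tool's own cost). A minimal sketch with hypothetical figures; the invoice-processing example is made up for illustration:

```python
def cost_per_unit(total_cost, units):
    """Cost per unit of work, e.g. dollars per invoice processed."""
    return total_cost / units

def unit_cost_change(before_cost, before_units, after_cost, after_units):
    """Percent change in cost per unit of work after adopting a tool.

    Negative means the unit economics improved. The 'after' cost should
    include the tool itself, so the tool has to earn its keep.
    """
    before = cost_per_unit(before_cost, before_units)
    after = cost_per_unit(after_cost, after_units)
    return (after - before) / before * 100

# An AP team spending $40k/month to process 2,000 invoices adds a
# $5k/month tool; the same team now processes 3,000 invoices for $45k:
print(round(unit_cost_change(40_000, 2_000, 45_000, 3_000), 1))  # -25.0
```

Either lever counts: the same spend producing more units, or the same units costing less. What doesn't count, per the discussion, is a tool that raises total cost while the units per FTE stay flat.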
And who do you think owns the measurement of these things we've been talking about? How would a company embark on standing up that measurement? Who's the referee, calling balls and strikes? The umpire? Yeah. Well, CJ, you had a post recently arguing that finance should own analytics in the company, so it sounds like you already have a stake in the ground on this. I feel like I'm being signed up for additional jobs, Kyle. You signed yourself up. I do think that data integrity should be housed in one place. People may argue with me, and it's a bit of a tangent, but having sales ops sit under rev ops and bringing different numbers to the same meeting, that's like having a fox in the henhouse, like whistling past the graveyard: you don't know when something's going bad. But I do think there's an onus here: if you're going to ask for the budget, you have to be able to tell the rest of the team that resources aren't infinite. If we agree to do this, it means we can't do something else. So how are we going to measure that this is successful? If you want this tool, you should have an idea in mind. Finance may help you instrument it, and someone else in the company may help with the analytics behind it, but you should have an idea of how you're going to stand it up. Otherwise, it's not a great risk-adjusted bet to just give somebody money with no way to measure the result. And I think we need to embolden finance and procurement leaders out there to ask tougher questions, to be willing to lean in and not just take at face value that the tool being requested is the right tool. You don't have to be a technical expert in the function itself; just ask the questions. What are you hiring this tool to do? How do you know this is the right one to do it? What have you tried before?
What alternatives did you consider? Is this a fair price? How will you measure whether this tool is a success? All of that is fair game for the modern finance and procurement team. I want to move on. I want to make a statement, and I want you to react: AI is driving more value than it's getting credit for. Not on social media. That's a good question. What I'd argue is that AI has the potential to offer a lot of value, but I have not seen it consistently show up in terms of higher quality, better productivity, real tangible business results. Otherwise we'd be seeing it: businesses would be growing faster, they would be more profitable, and publicly traded companies would be accelerating their revenue growth and their profitability growth. I just don't see AI showing up with that type of impact yet. It has a lot of potential for value, but it's in conversations like these that we're going to figure out how to translate that potential into realized value, which is the gap we're seeing today. I think it is driving more value than it gets credit for, in the B2B sense. The problem is, I don't remember a time when the technology was this far ahead of where the people were in the use cases they'd come up with for it. It's like they keep building the railroads further and further, but we're still trying to figure out how to run the steam engine out of the first city we're in. So yes, the value is there, but some of it is on us to figure out the application. Kyle and I discussed this before on the podcast. That's also why you see a lot of AI large language model companies saying, oh, we have to get into advertising. Why? Because it's going to take ten years for people to catch up to the B2B scenarios they've created with all this other cool stuff.
So we have to figure out other ways to monetize it in the short to medium term to make up the gap and fund all these data centers. So for AI-forward companies going to market with these capabilities, how do they prove their tools are actually better than the old way, if the value is a bit diffuse or even hard to measure? Yeah, that is going to be the fundamental question. But for the folks listening today, the exciting thing for us all as an industry right now is that AI has an insane amount of potential across all functions. Only a handful of functions are really seeing value from it right now, and I'd say software engineering and customer support are two of the main ones. The way it's going to see value is that we apply it to hard problems, assign a unit of work to those problems, and see in real time whether it's actually improving either the cost per unit of work or the overall number of units of work we're able to get done. Most companies haven't seen that level of output yet; they're going on vibes. But we can get there. We'll have to both set those measures and hold vendors accountable to them. Yeah. A lot of it goes back to measurement, as we discussed before. And some of that measurement is going to be the avoidance stuff we often talk about in procurement: capacity avoided, where we didn't need to hire; cost removed, so fewer contractors, less rework, fewer credits having to be paid out, accelerated cash conversion cycles. Because if it doesn't land in one of those buckets, it's hard to prove there was ROI. Yeah. Mariah says it's more of a quantity versus quality debate: AI is quicker and produces more quantity, making it more valuable, but she's doubtful from a quality standpoint.
You definitely have to review the quality. Absolutely. And it's often garbage in, garbage out. That's why AI is all about prompts, guardrails, parameters, data ingestion. What you feed it is what you're going to get out. Now, how it came up with the mycelium network, I have no idea, but I think it's all about that: the data quality driving all of this AI and all of these tools is of the utmost importance. Yeah. Absolutely. Well, the big thing is that it's now an expectation that every vendor will have AI in their product; everyone will have some level of AI capabilities. The real question is how good the AI is at solving what you needed to solve, and that can be wildly different across companies. And I think that's also the exciting opportunity, because if you find a way to apply AI, it frees you up to have people conversations that feed back into better context and data. In my experience, I've worked with a lot of procurement leaders. Some were great, some were not, but the great ones had higher EQ than IQ. What I mean by that is they used the time they had outside of just doing requisitions to go talk to the budget leader about why they needed something, and then ask why again. So, okay, you're asking for a new CRM, but there's really a deeper cause behind it, and there are often emotions behind that purchase too. That's the exciting part to me: if you have more time back, you can lean into the EQ side, which gives you better data to feed into the AI for a better outcome. So we're at our final two minutes, give or take, and I want some closing thoughts from both of you for our audience. Where is all this headed? In light of everything we've talked about, what are your closing thoughts? CJ, I'll let you go first. I mean, we're doomed. It's all over. No.
I'm just kidding. I think there are downside scenarios, but it's not the AI winter everybody's talking about. It's just normalization, complacency, and underappreciation. We start complaining about a twenty-minute layover on the flight and stop appreciating that we're sitting in a chair in the sky going 500 miles per hour. So if we don't fix the value perception problem, I think AI will get priced like infrastructure before it's priced like the impact it deserves, and before the vendors on the other side get what they deserve for solving these problems. If that happens, innovation will shift toward features and optics instead of real improvement, and that would be the downside. But I'm still optimistic that this frees people up to work on the things they're more uniquely qualified to do rather than the mundane, and by working on the stuff they're better at, they'll actually make the AI better. Alright, Kyle, your thoughts. Yeah. Last year was the year of AI experimentation. Everyone wanted to talk about the AI stuff their teams were doing. They found new budget to fund AI initiatives. They might have had five coding tools in use at their company; they were just experimenting with a lot of stuff. This next year is, I think, the more exciting one, because it's going to be about AI ROI, and it's with teams like the ones here that we're actually going to be able to measure that ROI. Alright. Thank you both, CJ and Kyle, for the great discussion. Thank you. We're on to our next session: AI versus FTE, the new workforce equation. Thank you both. Thanks.