Is Software Entering a New AI-Driven Commoditization Cycle?
Another week and software continues to grind lower. However, despite all of the carnage, there was another big winner this week! Fastly is up ~100% over the last week. The week prior, 8x8 had the big week (they were up ~70% in a week). Always an opportunity somewhere…

I thought I was done talking about "is software dead" after the last couple of weeks' Clouded Judgement posts, but I just had more thoughts I wanted to share. I think two things are true: people are simultaneously underestimating and overestimating the impact AI will have on the existing software complex. The difference is the timing. Overestimating in the short term, and underestimating in the long term.

I see a lot of arguments claiming software is dead because everyone will just vibe code their own software. I don't buy this at…
Software Is Dead...Again...For Real This Time...Maybe?
Last week I wrote a post titled "Software is Dead…Again." Since then, the iShares Expanded Tech-Software Sector ETF (IGV) is down ~20% (in just 1 week!). If software was dead a week ago, what is it now, down an incremental 20%?!

First, some fun stats. The median NTM revenue multiple (cue all the comments: "he's still talking about revenue multiples?!?") is 3.6x. This is the lowest it's been in the last 10+ years. For the revenue multiple haters, the median FCF multiple is 16x NTM FCF, for a median growth rate of ~20% (alas, once again, cue another set of haters saying none of the FCF is real and it's all sitting in SBC). Can't escape it, maybe software is a zero with no valuation support. Was good while it lasted. 39% of my software index is trading <…
Well, software is dead again! At least investor confidence is dead… The median NTM revenue multiple for the cloud software universe is 4.1x. That's the lowest it's been in 10 years (it was about the same very briefly in 2016, when the Fed started hiking rates for the first time after the GFC ZIRP period). The current median FCF multiple is 18.9x. The previous low in the last 10 years was ~26x!

However, the "narrative violation" metric here is that the median growth-adjusted revenue multiple is still 0.35x vs the pre-COVID average of 0.28x (you can see the graph below, I post it every week). So while multiples are at historical lows, so are growth rates. The FCF multiple is the most telling, however, with the current median ~30% lower than the prior low point in 2016. So what's going on?! I think…
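As a quick illustration of the math behind that "growth adjusted" metric, here is a minimal sketch, assuming the growth-adjusted multiple is simply the NTM revenue multiple divided by the NTM growth rate expressed in percentage points (a 4.1x median and a 0.35x growth-adjusted median would imply roughly 12% median growth under that convention). The universe and numbers below are made up, not the actual index:

```python
from statistics import median

# Hypothetical universe of (ticker, NTM revenue multiple, NTM growth %).
# These names and numbers are illustrative only.
universe = [
    ("A", 4.1, 12.0),
    ("B", 6.0, 25.0),
    ("C", 2.8, 8.0),
]

rev_multiples = [mult for _, mult, _ in universe]
# Growth-adjusted multiple: revenue multiple / growth rate (in points).
growth_adjusted = [mult / growth for _, mult, growth in universe]

print(f"Median NTM revenue multiple: {median(rev_multiples):.1f}x")
print(f"Median growth-adjusted multiple: {median(growth_adjusted):.2f}x")
```

On those made-up inputs, the medians land at 4.1x and ~0.34x, in the same neighborhood as the figures quoted above; the actual index methodology may well differ.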
I have a growing conviction that 2026 will be the year of multi-modal AI. There are a handful of trends all set to converge at the same time: multi-modal models get good enough, inference gets cheaper and faster (the cost curve is important), and the real world starts showing up as first-class input. I really believe AI will stop living predominantly in text boxes and start living in the places humans actually are.

For the last few years, AI has been overwhelmingly text-first, and for good reason. Text was the fastest path to usefulness. It was easy to collect, easy to tokenize, relatively cheap to serve, and generally didn't have the same latency requirements. If you were building an AI product in 2023 or 2024, starting with text was the rational choice. But text always…
As always, these posts are more of a brain dump of "what I'm thinking" about… And lately I have been thinking a lot about "legacy SaaS," systems of record, etc. I wanted to write another post today in a similar vein.

When I think about how people and companies interact with software today (and for this post, when I say software I'm talking about legacy SaaS systems of record), the pattern is generally pretty simple. The system of record is a single, organized place where a human goes to look something up, understand the state of the world, and then take some sort of action based on the information they gathered from the system of record. Like opening Salesforce to check pipeline before updating a forecast, or pulling up NetSuite to reconcile numbers before approving a close. And more often than not…
I have been thinking a lot lately about education in AI. More often than not, I am seeing education as a core go-to-market problem (but sometimes an advantage) in AI. The companies that are best able to educate the world and their prospects are winning. Part of the reason for this is that everything in enterprise AI is still so new. Many people, teams, and organizations are facing what I think of as a blank canvas problem. They know AI is powerful, but they don't even know where to start: where to use it, where to augment existing workflows, where to create net new workflows, etc.

What surprises me the most: even when teams think they know what functionality they want to build, it's not obvious to them how to build it. Build versus buy is just where it starts. Even inside "build," there are…
Every AI conversation eventually drifts toward the same set of questions. Which model should we use? Build or buy? How much will inference cost? What happens to accuracy at scale? These are all real questions, but they are also strangely orthogonal to whether AI actually changes anything meaningful inside a company. You can answer every one of them well and still end up with an AI strategy that feels impressive in demos and immaterial in practice.

There is a more important decision sitting underneath all of this, and most companies are not talking about it directly. They are making it implicitly, often without realizing it. That decision is whether AI is allowed to be authoritative or merely assistive.

Most enterprise AI today is firmly in the assistive bucket. AI drafts the email, but a human…
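To make the distinction concrete, here is a toy sketch (all names hypothetical, not from any real product) of the assistive-versus-authoritative choice expressed as an explicit policy rather than something a company backs into by accident:

```python
from enum import Enum
from typing import Callable, Optional

class Authority(Enum):
    ASSISTIVE = "assistive"          # AI drafts; a human must approve
    AUTHORITATIVE = "authoritative"  # AI output is committed directly

def handle(draft: str, policy: Authority,
           human_approves: Callable[[str], bool]) -> Optional[str]:
    """Return the text that actually ships, or None if it is blocked."""
    if policy is Authority.AUTHORITATIVE:
        return draft  # the AI's answer is the answer
    return draft if human_approves(draft) else None

# Assistive mode: nothing ships without a human in the loop.
print(handle("Refund approved.", Authority.ASSISTIVE, lambda d: False))      # None
print(handle("Refund approved.", Authority.AUTHORITATIVE, lambda d: False))  # ships
```

The interesting part is not the code, of course, but that the policy is a deliberate, inspectable choice rather than an emergent property of however the rollout happened.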
Every few weeks I see some version of the same claim: "The new system of record is the agent," or "Workflows swallow systems of record," or "Data is the system of record, apps are just thin views." There is a grain of truth in each of those, but I think it is also easy to over-rotate and accidentally throw out the thing enterprises still need most, which is not a "system of record" as much as a reliable source of truth.

When I think about systems of record, I do not actually think in product categories. I think in the much more boring lens of "where does the truth live." In other words, if an enterprise workflow needs to know something at a specific step, where is the one place that answer is considered canonical? Because as workflows get more automated and more agent driven, the fragility…
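As a toy illustration of that "one canonical place" idea, here is a minimal sketch (the store names and facts are hypothetical, not any real architecture) where each workflow step resolves a fact through a single designated home instead of trusting whichever copy is handy:

```python
# Each fact has exactly one designated system of record.
CANONICAL_SOURCES = {
    "customer_arr": "billing_db",
    "open_pipeline": "crm",
}

def resolve(fact: str, stores: dict) -> float:
    """Read a fact only from its designated canonical source."""
    return stores[CANONICAL_SOURCES[fact]][fact]

# Two stores disagree about ARR; the workflow still gets one answer.
stores = {
    "billing_db": {"customer_arr": 1_200_000.0},
    "crm": {"customer_arr": 1_150_000.0, "open_pipeline": 400_000.0},
}
print(resolve("customer_arr", stores))  # 1200000.0, from billing_db
```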
I want to write this week about a topic that's been bugging me recently. It's an irrational (in my opinion…) behavior that's starting to emerge more and more in earlier stage startup land. The challenge is, I can see how we get to an irrational end state from a rational starting point… And it all comes down to how employees view and value startup equity. So this post is meant for all of the employees evaluating different offers from startups. Let me set the stage with a hypothetical conversation.

Founder: "I need to raise at the highest valuation possible. And after I do, I need to raise again in 6 months at an even higher valuation."
Me: "Why?"
Founder: "Because that's how I will attract and hire the best talent."
Me: "Why? Wouldn't new hires want a lower valuation for more upside?"
Founder: …
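The upside point in that exchange is simple arithmetic. Here is a hypothetical back-of-envelope sketch (made-up numbers, ignoring dilution, taxes, and preference stacks) of why the same dollar grant is worth far more to a hire who joins at a lower valuation:

```python
grant_value = 100_000            # dollar value of the equity grant at hire
exit_valuation = 5_000_000_000   # assumed eventual outcome, for illustration

for entry_valuation in (100_000_000, 1_000_000_000):
    multiple = exit_valuation / entry_valuation
    print(f"Join at ${entry_valuation / 1e6:,.0f}M valuation: "
          f"grant worth ~${grant_value * multiple:,.0f} at exit "
          f"({multiple:.0f}x)")
```

Same $100k grant, same outcome: the hire who joined at $100M ends up with ~$5M (50x), while the hire who joined at $1B ends up with ~$500k (5x).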
Over the last year we have all been trained (at least I have…) to look at model launches through one lens: what is the benchmark score, and who sits at the top of the leaderboard. It felt like every product announcement was immediately reduced to a scatter plot and a few social media hot takes about who beat whom by a few points on MMLU or GPQA. But something subtle started to shift with the Gemini 2.0 family and became much clearer with this week's Gemini 3.0 launch. The story is no longer only about raw model quality. The frontier is getting crowded, the performance gaps are narrowing, and the real competition is moving up the stack into platforms. Said another way, the next decade in AI will be defined not only by model breakthroughs, but also by distribution, integrations, and the shape…