Microsoft (MSFT.US) FY26Q2 Earnings Call: Cloud Revenue Surpasses $50 Billion for First Time; Q3 Capital Expenditure Forecast to Decline Sequentially

Stock News · 01-30

Microsoft (MSFT.US) held its FY26Q2 earnings call. The company reported quarterly revenue of $81.3 billion, up 17% year-over-year (15% in constant currency). Operating profit grew 21% year-over-year (19% in constant currency). Earnings per share were $4.14, up 24% year-over-year on an adjusted basis (21% in constant currency). Microsoft Cloud revenue exceeded $50 billion for the first time, reaching $51.5 billion, up 26% year-over-year (24% in constant currency), with a gross margin of 67%. Commercial bookings in the second quarter surged 230% year-over-year (228% in constant currency), primarily driven by large, multi-year commitments, including from OpenAI. Commercial remaining performance obligation (RPO) climbed to $625 billion, up 110% year-over-year, with approximately 25% expected to be recognized as revenue within the next 12 months (that portion up 39% year-over-year). About 45% of the commercial RPO balance is attributable to OpenAI.

Looking ahead to the third quarter, the company guided revenue of $80.65 billion to $81.75 billion (15%-17% year-over-year growth), cost of revenue of $26.65 billion to $26.85 billion (up 22% year-over-year), and operating expenses of $17.8 billion to $17.9 billion (up 10%-11% year-over-year). Capital expenditure is projected to decline sequentially, reflecting normal quarter-to-quarter variability in cloud infrastructure construction and the delivery timing of finance leases. The proportion of short-lived assets in CapEx is expected to be similar to Q2.
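As a quick consistency check on the guidance above (a reader's back-of-the-envelope, not company-provided figures), the guided revenue range and the stated 15%-17% growth both imply a prior-year Q3 base of roughly $70 billion:

```python
# Illustrative sanity check of the Q3 guidance: dividing each end of the
# guided revenue range by its implied growth rate recovers the prior-year base.
# Figures in $ billions; this is an approximation, not a company disclosure.
low, high = 80.65, 81.75

base_from_low = low / 1.15    # low end of the range at 15% growth
base_from_high = high / 1.17  # high end of the range at 17% growth

print(round(base_from_low, 1))   # ~70.1
print(round(base_from_high, 1))  # ~69.9
```

Both ends of the range imply a consistent prior-year base, which is what one would expect if the dollar range and the percentage range describe the same forecast.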

Company executives said the overall strategy focuses on three layers of the technology stack: the cloud and the token factory, the agent platform, and differentiated agent experiences. The GDP impact and total addressable market (TAM) growth driven by AI adoption are just beginning. The agent platform is the next-generation application platform, with agents as the new type of application. To build, deploy, and manage agents, customers need a model catalog, fine-tuning services, orchestration tools, context-engineering services, AI safety, management, observability, and security capabilities.

Q: With CapEx growing faster than expected while Azure growth was slightly slower, investors are concerned about ROI. Help us understand how the expansion of computing capacity influences Azure growth and how you assess the ROI of these investments. A: Azure growth guidance should be viewed more as a guide to how capacity is allocated. Our CapEx decisions, especially for GPUs/CPUs, are based on long-term demand. We first meet the growth and accelerating sales of first-party applications like M365 Copilot and GitHub Copilot. Second, we invest in R&D and product innovation, allocating GPUs to AI talent to accelerate product development. The remaining capacity is then used to fulfill Azure demand. If all newly deployed GPUs were allocated to Azure, its revenue growth rate would exceed 40%. The key point is that the investment is made so that every layer of the technology stack can benefit customers; this is reflected in revenue growth across the business, and also in OpEx growth from talent investments.

Q: Servers are depreciated over six years, while the average RPO duration is only 2.5 years (2 years last quarter). How can investors be confident that AI-centric CapEx will generate sufficient revenue over the hardware's six-year lifespan to achieve robust revenue and gross margin growth? A: The average duration is a result of our contract portfolio mix. Businesses like M365, for instance, have shorter contract terms (e.g., three years), which lowers the overall average. The majority of the remaining balance consists of longer-term Azure contracts, whose average duration extended from about 2 years to 2.5 years this quarter. A significant portion of the capital currently being spent, and of the GPUs being purchased, already has its useful life secured by contracts, so the perceived risk does not exist. Looking at Azure alone, its RPO duration is longer. The GPU contracts we're discussing, including those with some of our largest customers, cover the entire useful life of the GPUs, so this risk is absent. Furthermore, we continuously optimize the entire hardware fleet, including older models, through software, and leverage Moore's Law by refreshing equipment annually, achieving global optimization via software. Additionally, our delivery efficiency improves over the hardware's lifespan, meaning margins actually improve over time. We have consistently observed this in our CPU fleet.

Q: Regarding the fact that 45% of the Remaining Performance Obligation (RPO) is related to OpenAI, can you comment on its sustainability? Given external concerns about associated risks, what is your perspective? A: We mention this number precisely because the remaining 55% (approximately $350 billion) is tied to our broad business portfolio, encompassing a wide range of solutions, Azure, industries, and a diverse geographic customer base. This represents a massive RPO balance, larger and more diversified than most peers. We have extremely high confidence in it. This segment alone grew 28%, demonstrating the breadth of our business and the continued growth of the adoption curve across customer segments, industries, and geographies. As for the OpenAI partnership, it's a great relationship. We continue to be their scale provider and are excited about it. We are supporting one of the most successful ventures and remain bullish. This keeps us at the forefront of building technological and application innovation.
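The split described in the answer above can be checked with simple arithmetic (a reader's approximation; the transcript's percentages are themselves rounded):

```python
# Back-of-the-envelope check of the commercial RPO split cited on the call.
# All figures in $ billions; percentages are approximations from the transcript.
total_rpo = 625

openai_portion = total_rpo * 0.45      # ~45% attributed to OpenAI
non_openai_portion = total_rpo * 0.55  # remainder, tied to the broad portfolio
near_term = total_rpo * 0.25           # ~25% expected as revenue within 12 months

print(round(non_openai_portion))  # ~344, consistent with "approximately $350 billion"
print(round(near_term))           # ~156
```

The 55% non-OpenAI share of a $625 billion balance works out to roughly $344 billion, which matches the "approximately $350 billion" figure cited in the answer once rounding is accounted for.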

Q: Can you qualitatively comment on the scale of capacity increases? Last quarter's addition of 1 gigawatt was substantial, and capacity additions are accelerating. Investors are particularly interested in projects like Atlanta and the Fairwater project in Wisconsin, hoping to understand the magnitude of capacity increases in the coming quarters, regardless of allocation. A: We are working as fast as possible to increase capacity. The specific locations you mentioned, like Atlanta or Wisconsin, are multi-year delivery projects, so the focus shouldn't be solely on individual sites. Our core task is to add capacity globally, with a significant portion in the US (including the two locations you mentioned) and in other regions worldwide, to meet customer demand and growing usage. We will continuously build long-lead infrastructure, securing power, land, and facilities, and deploy GPUs and CPUs as quickly as possible once these are ready. Simultaneously, we strive to improve construction and operational efficiency to achieve the highest possible utilization. The key is to complete this work as rapidly as possible across all locations currently under construction or about to start.

Q: The performance achievements of the Maia 200 accelerator for inference appear very impressive, especially compared to existing TPUs, Trainium, and Blackwell. How do you view this achievement, and to what extent are chips becoming a core competency for Microsoft? Additionally, what are the implications for future gross margin prospects regarding inference costs? A: We have a long history of developing our own chips, and the performance achieved when running models like GPT-5.2 is particularly noteworthy. It demonstrates that with new workloads, you can achieve end-to-end innovation across the model, the chip, and the entire system: this isn't just about the silicon itself but also about rack-level networking and memory working together, optimized for specific workloads. We work closely with our superintelligence team, and all models we build will be optimized for Maia. Overall, this is still at a very early stage, with continuous innovation. Currently, everyone is talking about low-latency inference. We ensure we are not locked into any single technology; we have strong partnerships with NVIDIA and AMD, and both they and we are innovating. We want our fleet to have the best total cost of ownership at any given time. This isn't a one-generation game; you need to stay ahead continuously, which requires integrating significant external innovation into the fleet to maintain a fundamental advantage in total cost of ownership. Thus, we are excited about Maia, Cobalt, our DPUs, and our network cards; we possess strong systems capabilities for vertical integration. However, having the ability to integrate vertically doesn't mean we do so exclusively; we aim to maintain flexibility, which is what you observe.

Q: Can you elaborate on the momentum of enterprises embarking on frontier transformation? We also see customers achieving breakthrough benefits after adopting Microsoft's AI technology stack. As they become "frontier firms," how might their spending with Microsoft potentially scale? A: We observe customer adoption across our three major suites (M365, Security, GitHub). They create a compounding effect. For example, Work IQ is crucial because, for any company using Microsoft services, the most critical database is the underlying data in Microsoft 365, which contains all implicit information—people, relationships, projects, outcomes, communication. This is a super-important asset for any business process and workflow. Now, the Agent platform is genuinely transforming companies. Deploying these agents helps businesses coordinate work, leading to greater impact. Furthermore, enterprises are leveraging services in Fabric and Foundry, along with GitHub tools or low-code tools, for their own transformation in areas like customer service, marketing, and finance, building their own agents. The most exciting aspect is that new agent systems like M365 Copilot, GitHub Copilot, and Security Copilot are converging, compounding the benefits of all data and deployments—this is likely the most transformative impact currently.

Q: How is Azure's CPU business performing (considering operational changes)? More broadly, are customers realizing that proper AI implementation requires moving to the cloud, and how is this driving cloud-migration momentum? A: First, AI workloads shouldn't be viewed as running solely on AI chips. Any agent will call other tools or containers through tool use, and those containers require general-purpose computing resources. When planning our fleet, we consider the ratio of AI compute to general-purpose compute. Even training requires substantial general-purpose compute and closely located storage. The same applies to inference: inference in agent mode inherently requires general-purpose compute configured for the agent; it doesn't necessarily need GPUs, but it does need compute and storage resources. This is what's happening in the new world. Additionally, cloud migration continues; for instance, the latest SQL Server continues to grow as an IaaS service on Azure. This is precisely why we must keep our commercial cloud balanced with the AI cloud: when customers migrate workloads or build new ones, they need access to all these infrastructure elements in the regions where they deploy.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial product, and any associated discussions, comments, or posts by the author or other users should not be considered as such either. It is provided for general information purposes only and does not take into account your investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy and completeness of the information; investors should do their own research and may seek professional advice before investing.
