Don’t disrupt the Software!
How new-age AI-native companies are tackling the enterprise landscape
"Disrupting software incumbents" gets mentioned in a lot of pitches I hear - yet in the last decade, fewer than a dozen startups have meaningfully displaced one. It's one of the hardest things to do in enterprise tech, precisely because most of us often understate the importance of distribution, which makes taking on these companies almost insurmountable. The system absorbs change rather than yields to it (Microsoft bundled Teams, Adobe almost acquired Figma). I would go to the extent of saying - if you are positioning your product as "AI-native Figma, Autodesk, Hubspot or any other scaled software company" - go and rethink your strategy and "right to win." Autodesk has remained clunky, as have ServiceNow, Epic Systems, and countless other companies - yet we don't see them getting disrupted.
Disruption (according to Christensen's seminal paper) occurs when incumbent companies ignore new technologies that don't serve the needs of their customers or fit within their existing business models. As the new technology, which excels on completely different attributes than the incumbent's product, continues to mature, it eventually takes over the market. Another core feature of disruption is that initial adoption is driven by cost-led advantages and a "good enough" value proposition that incumbents don't care about. In (enterprise) software, however, the calculus of cost and experience changes. How do you undercut something (i.e., software) that is already getting cheaper (and better)? Even more so with AI, which has caused an exponential drop in the complexity of launching new (and high-end) products - tools such as Cursor, Codeium, and GitHub Copilot have greatly democratized engineering. Most of the needle-moving disruptions in the last 5-10 years have been in hardware (arguably in sectors riddled with bureaucracy).
SpaceX vs NASA: SpaceX made orbital delivery ~10x cheaper by vertically integrating everything - from rocket engines to software-defined control systems - and prioritizing reusability from day one. Meanwhile, NASA was operating with bloated supply chains, cost-plus contracts, and slow bureaucratic processes.
BYD vs Legacy Car Companies: BYD bet early and aggressively on battery innovation, vertically integrating both cell production and EV assembly while legacy automakers outsourced both. This gave them control over cost, quality, and supply chain resilience - especially during lithium price surges. While GM and BMW hesitated or leaned on hybrids, BYD went all-in on electric and built products tailored for domestic demand in China.
Anduril vs Defense Primes: Current theaters of war are living proof that large numbers of cheaper autonomous systems are better suited than big military platforms (battle tanks, howitzers, and the like). Anduril had this foresight and built a software-first autonomy stack for surveillance, targeting, and decision-making - with rapid iteration and off-the-shelf hardware. Traditional primes like Lockheed and Raytheon were optimized for multi-decade programs and cost-plus contracts, not speed or autonomy. Anduril's edge came from productizing AI at the tactical edge - in drones, towers, and battlefield systems - while incumbents were stuck integrating subcontractors. It created a full-stack autonomy company in a market built around fragmentation.
The few software disruptions that did happen share a common theme - most of these companies 1) fought the battle on their own terms, 2) took advantage of platform shifts, and 3) when urgency hit (data scale, the AI wave, COVID), were already ready with the right architecture and user experience.
Snowflake took advantage of cloud adoption and built a multi-cluster shared data architecture optimized for AWS, Azure, and GCP. While Oracle and others assumed deep tuning and control were must-haves, they didn't see simplicity and elasticity (separation of compute and storage) as serious value props for real enterprises. Snowflake proved that usability, elasticity, and consumption-based pricing were not just "nice-to-haves" - they became the new standard, and the rest is history. In this case: platform shift → cloud adoption; new vector → the elasticity offered by Snowflake.
Databricks unified AI + analytics + data engineering in a single "lakehouse" architecture - combining the reliability of data warehouses with the flexibility of data lakes. Hadoop vendors focused on storage and batch ETL, underestimating the rise of machine learning pipelines and real-time analytics. Traditional warehouses (Teradata, Oracle) were focused on BI/reporting use cases, not predictive workloads. Databricks positioned itself at the intersection of ML workflows and cloud-scale data processing, years before incumbents saw ML as a separate workload, and became the default platform for AI-native data teams, riding the ML and generative AI wave with a real architectural edge. In this case: platform shift → ML/AI adoption; new vector → the lakehouse architecture offered by Databricks.
Zoom was built from scratch with a video-first architecture, optimized for low latency and high reliability even in low-bandwidth environments. It focused obsessively on ease of use - one-click join, no installs, clean UI - while incumbents had bloated desktop apps and clunky integrations. Cisco and Microsoft assumed video was a secondary feature behind email or voice. Video was bundled, not prioritized. They focused on selling to IT departments, not to end users, and ignored usability complaints. COVID hit, and Zoom's 10x better experience made it the de facto choice - not just for companies, but for schools, courts, and families. What was dismissed as "nice UX" became mission-critical - and a $100B+ market cap company was built. In this case: platform shift → COVID; new vector → an end-user-focused application that was easy to scale from day 0.
One core thing to note about the AI wave is that application builders need to think deeply about "fighting on their own terms." Founders should assume that incumbents will take advantage of platform shifts and are likely to move faster than we've given them credit for. There are some interesting approaches founders have taken to insert themselves into the enterprise stack.
AI-Native Companies are going after IT budgets (and not software budgets)
AI is redrawing the boundaries of IT budgets. If you break it down, ~50% of IT budgets today go to headcount, ~25% to software, ~15% to hardware, and ~10% to consultants. More importantly, only ~15% of that is truly "up for grabs" on an annual basis.
Best-in-class AI companies aren't just targeting the ~25% software slice. They're going after the entire 60%+ tied to human labor and services, embedding themselves into workflows that previously required headcount, consultants, or internal dev teams - the "shadow services" budget. This is a shift from selling software to selling outcomes. The ratio of software to services spend is roughly 1:11.
AI enables companies to replace full-service teams with scalable automation, driving down the unit cost of outcomes. This unlocks both a massive budget shift and a valuation shift. Services companies typically trade at 3–6x revenue, while software companies trade at 10–20x. Reclassifying services into software doesn't just expand the pie - it re-rates the revenue itself, compounding equity value creation.
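To make the budget and re-rating math concrete, here is a minimal back-of-the-envelope sketch. The percentage split mirrors the figures above, but the dollar amounts (a $100M budget, $10M of captured revenue) and the mid-range multiples (4.5x services, 15x software) are illustrative assumptions, not data from any specific company.

```python
# Back-of-the-envelope sketch of the addressable budget pool and the valuation re-rating.
# All dollar figures and multiples below are illustrative assumptions.

it_budget = 100.0  # hypothetical $100M annual IT budget

# Rough split cited above: ~50% headcount, ~25% software, ~15% hardware, ~10% consultants
headcount, software, hardware, consultants = 0.50, 0.25, 0.15, 0.10

software_slice = software * it_budget                         # the classic ~25% software budget
labor_services_pool = (headcount + consultants) * it_budget   # the 60%+ labor-and-services pool
print(f"Software slice: ${software_slice:.0f}M vs labor + services pool: ${labor_services_pool:.0f}M")

# Valuation re-rating: the same $10M of captured spend, valued at a
# services multiple vs a software multiple (mid-range assumptions).
captured_revenue = 10.0
services_multiple, software_multiple = 4.5, 15.0
print(f"Booked as services revenue: ~${captured_revenue * services_multiple:.0f}M of enterprise value")
print(f"Booked as software revenue: ~${captured_revenue * software_multiple:.0f}M of enterprise value")
```

Under these assumptions, the addressable pool is more than twice the software slice, and the same $10M of revenue is worth roughly 3x more when the market treats it as software rather than services - which is the re-rating point above.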
Conversely, if you're building a pure software tool without touching labor-intensive workflows, you risk selling point AI tools - like summarization or UI assistants - which are often perceived as "nice-to-have" and rapidly commoditize as LLMs evolve. On top of this, there is the added complexity of CIOs preferring to wait rather than commit early in a market moving this fast - especially when enterprise platforms are great at bundling.
A good case study here is Norm AI, which is building compliance AI agents for highly regulated industries such as finance and healthcare. In regulation-sensitive organizations such as banks, internal compliance teams regularly assess new regulations and their impact on internal processes. Internal tooling/software monitors whether all processes are followed per mandate. Additionally, a huge budget is allocated to audit firms such as the Big 4 to partially audit these processes. Even smaller banks, which lack large compliance teams, particularly feel the pressure - compliance costs can reach ~2% of their total assets, and the majority of these costs (~75%) stem from labor. Norm essentially automates these workflows, targeting the overall compliance spend - i.e., going after 1) internal compliance teams, 2) software tooling, and 3) audit firms. Net-net: the real action is where AI collapses headcount + software into a single system. This isn't about replacing software. It's about replacing workflows. And often, teams.
Unlocking Net New Use Cases
Too many companies constrain agentic AI use cases to those currently served by existing software applications - just faster and cheaper. But in practice, the biggest agentic opportunities are likely to be in areas that are currently unexplored - where agents get deployed to solve net-new problems. We treat the modern enterprise as an optimized engine, yet in reality there are significant inefficiencies: projects stall, headcount freezes, skill gaps are very much the norm, budgets hit their ceilings, and customer pipelines can be erratic. At a lot of these choke points, talent, resources, and capital run thin. Aaron Levie at Box articulates this really well.
One of the key features of the agentic approach is that it provides an elastic cost model for knowledge work that wasn't possible before. AI agents give organizations the ability to augment and automate work in areas they previously couldn't spend on, or where they were severely resource-constrained. Many of these tasks are characterized by a "cold start" problem that requires some form of focused investment, which tends to get deprioritized. Some of the most basic manifestations of this I have heard across enterprises are:
Ability to A/B test at scale in ways that weren't possible before: e.g., a performance marketing team at a portco is churning out thousands of short-form AI-generated videos to A/B test quickly, identify needle-moving creatives, and significantly improve its targeting.
Automate loss-leading services at scale to win larger contracts: Harvey is enabling law firms to take on low-margin private equity projects more efficiently (such as side letter compliance work). Traditionally, law firms would do these lower-end tasks at a loss to build relationships with private equity firms, hoping to eventually land larger M&A deals. Harvey helps by building software that allows firms to a) automate side letter compliance work, b) charge a flat fee for these previously unprofitable tasks, c) reduce the time and cost of completing these projects, and d) create a more attractive entry point for securing future high-value work from private equity clients. By making these low-margin projects economically viable, Harvey helps law firms expand their market share and build relationships with potential high-value clients more effectively and profitably.
Going direct to the End User if possible, and then tackling the Enterprise
One of the most interesting shifts in the AI-native era is how companies are bypassing traditional enterprise sales cycles and going straight to end users - developers, nurses, operations staff - through self-serve, freemium, or viral entry points. This GTM motion isn't new, but its speed and scalability in the AI context are unprecedented. The most obvious example is vibe coding: companies like Cursor, which now does ~$200M in ARR, have quickly become the go-to IDEs for developers across enterprises.
A more unusual vertical where this is playing out is healthcare. A very interesting case study is Open Evidence - an AI-powered medical platform designed to help doctors make better clinical decisions by providing quick access to the latest peer-reviewed medical research (medical research doubles every 73 days). It lets doctors quickly find specialized medical information for complex or rare patient cases, helping them make more informed treatment decisions by surfacing the latest research that might not be easily found through traditional search. Open Evidence scaled from 0 users about a year ago to now serving ~25% of active physicians in the United States. It rejected the traditional top-down healthcare enterprise sales approach and adopted a consumer-centric, direct-to-user approach. Instead of pursuing institutional approvals, which are mired in bureaucratic complexity and can take 2-3 years, the company released a free, high-value app on the App Store. The strategy focused on creating an exceptionally useful product that would naturally spread through word-of-mouth recommendations.
The direct-to-doctor approach also created a powerful technology flywheel for Open Evidence. The company had trained its models on publicly available medical literature, but many top peer-reviewed medical journals aren't available publicly (such as The New England Journal of Medicine - NEJM). As doctors began using the app, senior editorial board members of these journals (including NEJM) became users themselves. These high-profile medical professionals experienced the value of the platform firsthand, which was crucial to the subsequent partnership. Unlike traditional approaches where companies pitch partnerships through lengthy negotiations, NEJM's senior leadership reached out to Open Evidence because they were impressed by the product - these journals wanted their content to be shown to users. This organic, bottom-up approach led to a strategic content partnership that would have been very difficult to secure through traditional means. The benefits were mutual: Open Evidence gained access to (exclusive) full-text NEJM content, NEJM received millions of visits to its journal pages through the Open Evidence platform, doctors got access to cutting-edge medical research, and the platform drove traffic to specific, deep methodological sections of medical journals that might otherwise go unread.
Additionally, many AI-native healthcare companies, such as AI scribes (Freed, Nabla), are inserting themselves through a PLG-led motion. In many cases, these companies are executing unique PLG-led enterprise sales motions (going after entire health systems), which has not been the norm in this industry. Clinicians start using these scribes to tackle burnout, even though adoption at the health-system level is slow (note: clinicians spend 1–2 hours/day writing notes during and after patient consults instead of spending time with patients). Once a critical mass of clinicians starts using these AI products, the companies engage in enterprise sales motions, leveraging their power users to push the product into the system.
While incumbents focus on compliance, governance, and integration, AI natives are building utility-first products that earn adoption in weeks - not quarters. It’s a fundamentally different motion - and it’s working.
If there are other interesting case studies - would love to hear!