The Azure Functions v4 runtime, which reached general availability alongside .NET 6, stabilised the programming model for .NET serverless functions. The cold start problem that affected .NET Functions on earlier runtime versions has been significantly reduced, though not eliminated.
The v4 isolated process model
Azure Functions v4 made the isolated worker process model the recommended approach for .NET (it was first introduced for .NET 5 on the v3 runtime; v4 supports both models for .NET 6, with the in-process model now on a retirement path). The function code runs in its own process, separate from the Functions host. This eliminates the dependency version conflicts that plagued the in-process model, where your code and the Functions runtime shared the same .NET process and could require incompatible package versions. The isolated model also lets Functions support new .NET versions independently of the Functions runtime.
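A minimal isolated-worker entry point can be sketched as follows. The function name and handler are illustrative; the host builder and attributes come from the `Microsoft.Azure.Functions.Worker` packages:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.Extensions.Hosting;

// Program.cs: the worker is an ordinary console app that hosts your functions
// in a separate process from the Functions host.
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()   // connects the worker to the Functions host
    .ConfigureServices(services =>
    {
        // Register your own dependencies with whatever package versions you need;
        // they no longer have to match what the Functions runtime itself ships.
    })
    .Build();

host.Run();

// An illustrative HTTP-triggered function running in the isolated worker.
public class Ping
{
    [Function("Ping")]
    public HttpResponseData Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequestData req)
    {
        var response = req.CreateResponse(HttpStatusCode.OK);
        response.WriteString("pong");
        return response;
    }
}
```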
Cold start reduction strategies
There are three main cold start reduction strategies for .NET Azure Functions. The Premium plan keeps pre-warmed instances running, which all but eliminates cold starts for latency-sensitive functions. Ahead-of-time compilation (ReadyToRun, and Native AOT on .NET 8 where supported) shortens cold start substantially by removing JIT work at startup; published figures vary, but reductions from seconds to a few hundred milliseconds or less are typical. For functions where Premium plan cost is not justified, the Flex Consumption plan (previewed in 2024) provides per-instance scaling with faster warm-up than the classic Consumption plan.
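As a sketch, ahead-of-time compilation is enabled through MSBuild properties in the function project file. This is a hedged example; `PublishReadyToRun` is the broadly supported option for the isolated worker model:

```xml
<PropertyGroup>
  <TargetFramework>net8.0</TargetFramework>
  <OutputType>Exe</OutputType>
  <!-- Precompile IL to native code at publish time so the JIT does less work
       on the first invocation, shortening cold start. -->
  <PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
```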
Durable Functions for orchestration
Durable Functions provides stateful orchestrations on top of Azure Functions. Long-running workflows (human approval steps, external system polling, fan-out/fan-in parallel processing) are expressed as orchestrator functions that are automatically checkpointed. The programming model makes complex workflows readable. The storage backend (Azure Storage or the newer Microsoft SQL Server backend) serialises and replays orchestration state.
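A fan-out/fan-in orchestration can be sketched as follows. The activity names and payloads are illustrative; the types come from the `Microsoft.DurableTask` worker packages used with the isolated model:

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.DurableTask;

public static class FanOutFanIn
{
    // Orchestrator functions must be deterministic: the framework checkpoints
    // progress to the storage backend and replays the function to rebuild state.
    [Function(nameof(RunOrchestrator))]
    public static async Task<int> RunOrchestrator(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        string[] items = await context.CallActivityAsync<string[]>(nameof(GetWorkItems));

        // Fan out: schedule every activity without awaiting each one individually.
        var tasks = items.Select(
            item => context.CallActivityAsync<int>(nameof(ProcessItem), item));

        // Fan in: resume only once all parallel activities have completed.
        int[] results = await Task.WhenAll(tasks);
        return results.Sum();
    }

    // Activities perform the actual work and may be non-deterministic.
    [Function(nameof(GetWorkItems))]
    public static string[] GetWorkItems([ActivityTrigger] string? input) =>
        new[] { "a", "bb", "ccc" };   // illustrative payloads

    [Function(nameof(ProcessItem))]
    public static int ProcessItem([ActivityTrigger] string item) => item.Length;
}
```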
When serverless is not the answer
Azure Functions is not the right choice for every workload: latency-sensitive APIs where cold starts are unacceptable on the Consumption plan, long-running CPU-intensive tasks (the Consumption plan caps execution at 10 minutes), or consistently high-traffic workloads where per-invocation pricing exceeds the cost of always-on container instances. Serverless fits best with event-driven workloads whose traffic is irregular or bursty.
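The pricing break-even can be illustrated with a back-of-the-envelope calculation. All rates below are hypothetical placeholders, not actual Azure prices; substitute current figures from the pricing page before drawing conclusions:

```csharp
using System;

// Hypothetical rates, for illustration only.
const double perMillionExecutions = 0.20;  // $/1M executions (placeholder)
const double perGbSecond = 0.000016;       // $/GB-second (placeholder)
const double alwaysOnPerMonth = 60.0;      // always-on container $/month (placeholder)

// Consumption-style billing: pay per invocation plus per resource-time consumed.
double ConsumptionCost(double executions, double gbSecondsEach) =>
    executions / 1_000_000 * perMillionExecutions
    + executions * gbSecondsEach * perGbSecond;

// Bursty workload: 100k executions/month at 0.5 GB-s each stays cheap.
Console.WriteLine($"bursty:    {ConsumptionCost(100_000, 0.5):F2}");

// Sustained workload: 200M executions/month; per-invocation billing
// far exceeds the always-on alternative.
Console.WriteLine($"sustained: {ConsumptionCost(200_000_000, 0.5):F2} vs {alwaysOnPerMonth:F2} always-on");
```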