LangChain became the dominant framework for building LLM applications in 2023. Its GitHub stars went from near zero at the start of the year to over 50K by June. The pace of adoption was unmatched. So were the growing pains.

What LangChain provided

LangChain abstracts the common patterns in LLM applications: prompt templates, output parsers, chains of model calls, tool integrations, memory management, and agent loops. Building any of these from scratch requires understanding both the LLM API semantics and the application architecture patterns. LangChain gives you abstractions that let you combine these patterns with less boilerplate.
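The core pattern those abstractions package up is a pipeline of template, model call, and parser. A minimal stdlib-only sketch of that pattern (this is an illustrative toy, not LangChain's actual API; `fake_llm` stands in for a real provider call):

```python
from dataclasses import dataclass

@dataclass
class PromptTemplate:
    """Fills named slots in a prompt string (toy stand-in for a framework template)."""
    template: str

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

class CommaParser:
    """Output parser: turns a comma-separated model reply into a Python list."""
    def parse(self, text: str) -> list[str]:
        return [item.strip() for item in text.split(",") if item.strip()]

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; a framework wraps the provider API here.
    return "red, green, blue"

def run_chain(template: PromptTemplate, llm, parser, **inputs):
    """Chain = template -> model -> parser: the pattern the framework generalises."""
    return parser.parse(llm(template.format(**inputs)))

colors = run_chain(
    PromptTemplate("List three {thing}, comma-separated."),
    fake_llm,
    CommaParser(),
    thing="colors",
)
print(colors)  # ['red', 'green', 'blue']
```

Each piece here is a few lines, but multiply it across tool calls, memory, and agent loops and the boilerplate argument for a framework becomes clear.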

The criticism

As LangChain grew rapidly, criticism grew alongside it. The API surface area expanded faster than the documentation. Abstractions that were convenient for simple cases became obstacles for complex ones: the same magic that hid complexity also hid what was actually happening at the API level, so when things went wrong, the layers of abstraction made debugging difficult. Teams that built production applications on early LangChain versions had to refactor as the API changed between releases.

LlamaIndex as an alternative

LlamaIndex (originally GPT Index) focused more narrowly on the data layer: indexing, retrieval, and data ingestion pipelines for LLM applications. Where LangChain tried to cover everything, LlamaIndex kept its scope tight. Teams doing sophisticated document retrieval, knowledge graph construction, or multi-step query planning over structured data found LlamaIndex's abstractions more natural.
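The data-layer job reduces to: ingest documents into an index, then retrieve the top-k most relevant ones for a query. A stdlib-only sketch of that shape, using crude term overlap where a framework like LlamaIndex would use embeddings (the class and scoring here are illustrative inventions, not LlamaIndex's API):

```python
from collections import Counter

def tokenize(text: str) -> Counter:
    # Naive whitespace tokenizer; real pipelines use embeddings instead.
    return Counter(text.lower().split())

class SimpleIndex:
    """Toy document index: term-overlap retrieval standing in for the
    embedding-based indexes a data framework provides."""
    def __init__(self):
        self.docs = []

    def add(self, doc: str):
        self.docs.append((doc, tokenize(doc)))

    def query(self, question: str, k: int = 2):
        q = tokenize(question)
        # Score each doc by how many query terms it shares, highest first.
        scored = sorted(
            self.docs,
            key=lambda item: sum((q & item[1]).values()),
            reverse=True,
        )
        return [doc for doc, _ in scored[:k]]

idx = SimpleIndex()
idx.add("LlamaIndex focuses on the data layer for LLM apps.")
idx.add("Bananas are a good source of potassium.")
print(idx.query("what does LlamaIndex do", k=1))
```

Everything hard about the data layer lives in what this sketch waves away: chunking strategy, embedding choice, and metadata filtering, which is exactly the surface LlamaIndex specialised in.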

The framework consolidation

By the end of 2023, the LangChain team had stabilised the API and invested heavily in documentation and testing. LangChain Expression Language (LCEL), introduced in August, provided a cleaner composition model. Teams that had been skeptical of early LangChain found the more mature version genuinely useful. The lesson for adopting rapidly evolving open source frameworks is to version-pin aggressively and expect significant refactoring in the first year.
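LCEL's composition model builds chains with the pipe operator, in the style of `prompt | model | parser`. The spirit of that design can be sketched with Python's `__or__` overloading; this is a minimal stand-in, not LangChain's actual `Runnable` implementation:

```python
class Runnable:
    """Minimal pipe-composable step, in the spirit of LCEL composition
    (illustrative sketch, not LangChain's real class)."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Runnable") -> "Runnable":
        # a | b runs a, then feeds its output into b.
        return Runnable(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Runnable(lambda topic: f"Say something about {topic}.")
model = Runnable(lambda p: f"ECHO: {p}")           # stand-in for an LLM call
parser = Runnable(lambda text: text.removeprefix("ECHO: "))

chain = prompt | model | parser
print(chain.invoke("chains"))  # Say something about chains.
```

The appeal over the earlier class-per-chain approach is that composition is explicit: each step is a plain function of its input, so what runs, and in what order, is visible at the call site.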