The meteoric rise in LLM usage – and the growing pains that have come with it – has left many technologists searching for a panacea. Since it launched in late 2022, LangChain has been positioned by many as the go-to solution for LLM orchestration; its much-hyped flexibility promises an experience where you can readily “plug in” all the componentry you need to build an AI platform. But as LangChain’s users become more sophisticated – and their use cases more complex – is the framework keeping pace?

Unfortunately, for those with hands-on experience, LangChain’s promise has so far fallen flat.

LangChain: A Cautionary Tale

As an AI consulting firm specializing in developing mission-critical, intelligent platforms for our clients, we’ve been immersed in AI for some time. Nearly our entire staff is working on AI-centered platform development or helping clients shape their AI strategy. Collectively, we’ve amassed a wealth of experience using LangChain. To illustrate the common threads in that experience, I’ve put together a fictitious, “based on reality” developer story that goes something like this:

You’re an AI developer, tasked with building a state-of-the-art Level 3 cooperative-style application using the latest Large Language Models (LLMs). The buzz in the AI community points you toward a framework named LangChain, lauded for its ability to easily “chain” together different components to power advanced use cases. You eagerly dive into the framework hoping for ease and simplicity: a way to efficiently translate your innovative ideas into functional applications.

You kickstart your project smoothly, thanks to the easy-to-use pre-assembled chains and modular components that LangChain offers. You appreciate its versatility and how it accommodates a variety of LLMs, and you’re impressed with how quickly you can get your proof of concept up and running. LangChain’s potential seems to live up to the hype.

But then you encounter your first hurdle: a bug that grinds your momentum to a halt. Time to debug. You activate verbose mode, expecting a detailed account of the error, only to be handed clues that are vague at best and unhelpful at worst. Debugging becomes an uphill battle. Your enthusiasm starts to fade; while LangChain cleared the way for a rapid launch, you now have real doubts about your ability to triage production issues in the future.
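For readers who haven’t lived this yet, “activating verbose mode” amounts to little more than a flag. Here’s a minimal sketch, assuming the classic LLMChain-plus-OpenAI setup from LangChain’s documentation of this era; the prompt and input are purely illustrative:

    # A minimal sketch of "activating verbose mode", assuming the classic
    # LLMChain + OpenAI setup from LangChain's documentation of this era.
    # The prompt and input below are illustrative, not from a real project.
    from langchain.chains import LLMChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate

    prompt = PromptTemplate(
        input_variables=["topic"],
        template="Write a one-sentence summary of {topic}.",
    )

    # verbose=True echoes the formatted prompts as the chain runs, but an
    # exception raised deep inside a chain still surfaces with little context.
    chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt, verbose=True)
    print(chain.run(topic="LLM orchestration frameworks"))

Even with the flag on, what you typically see is the rendered prompt and the raw completion – not much to go on when something deeper in the chain breaks.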

You decide to push through the difficulty, seeking guidance in the LangChain documentation. However, you find that key details are missing, and many of the advanced concepts are poorly explained. Now you’re left to comb through LangChain’s codebase, looking for the missing links and spending a tremendous amount of time and energy that you’d initially hoped to save by using LangChain.

Gradually, your use case demands more complexity, requiring features that aren’t covered by LangChain’s existing workflows. You turn to ‘Custom Agents,’ hoping to tailor the framework to your needs. This process turns out to be far more laborious than you expected. What was positioned as a flexible, easy-to-use framework now feels like a rigid, ill-suited construct.
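To make “laborious” concrete, here’s roughly what the smallest possible custom agent looks like, modeled on the custom-agent example in LangChain’s documentation of this period. The FakeAgent class and the stubbed fake_search tool are ours, purely for illustration:

    # A bare-bones custom agent, modeled on LangChain's custom-agent docs of
    # this period. FakeAgent and fake_search are illustrative stand-ins; a real
    # agent would also wire in an LLM, a prompt template, and an output parser.
    from typing import Any, List, Tuple, Union

    from langchain.agents import AgentExecutor, BaseSingleActionAgent, Tool
    from langchain.schema import AgentAction, AgentFinish


    class FakeAgent(BaseSingleActionAgent):
        """Calls the Search tool once with the raw input, then finishes."""

        @property
        def input_keys(self) -> List[str]:
            return ["input"]

        def plan(
            self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
        ) -> Union[AgentAction, AgentFinish]:
            if intermediate_steps:
                # The tool already ran; hand back its observation and stop.
                last_observation = intermediate_steps[-1][1]
                return AgentFinish(return_values={"output": last_observation}, log="")
            return AgentAction(tool="Search", tool_input=kwargs["input"], log="")

        async def aplan(
            self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
        ) -> Union[AgentAction, AgentFinish]:
            return self.plan(intermediate_steps, **kwargs)


    def fake_search(query: str) -> str:
        return f"(pretend search results for: {query})"


    tools = [Tool(name="Search", func=fake_search, description="stubbed search tool")]
    executor = AgentExecutor.from_agent_and_tools(agent=FakeAgent(), tools=tools, verbose=True)
    print(executor.run("How many people live in Canada?"))

All of that ceremony and the agent doesn’t even involve a model yet; add a real LLM, prompt template, and output parser, and the boilerplate grows quickly.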

What We’ve Learned about LangChain

The intention of the story isn’t to minimize the value LangChain has delivered, or the attention it has drawn, to the orchestration layer of the broader LLM stack. LangChain does:

  • Enable developers to quickly stand up proofs of concept with its pre-assembled chains and modular components.

  • Provide the ability to chain together different components for more advanced LLM use cases, leaving room for creativity in the initial stages of software development.

But there’s a tremendous imbalance between the pros and the cons, identifiable via a simple litmus test: would it have been faster and easier to build what we were building without it? So far, our experience has been “yes, it would have been faster NOT to use LangChain.”

Obviously, as is the case with most litmus tests, it doesn’t account for the full picture. For example, maybe using a framework took longer, but gave a team some intrinsic benefits. Unfortunately, LangChain is lacking in the “intrinsic benefits” arena as well. We’ve identified four fundamental flaws:

  1. Terrible Debugging: LangChain’s support for debugging is downright bad. Errors are elusive, even with verbosity turned on, making troubleshooting a struggle. To improve, there needs to be a comprehensive and straightforward way to track errors and audit component behavior within the LangChain framework.

  2. Difficult Customization: LangChain is rigid (which is unexpected given that it’s a framework) and difficult to customize. Although it boasts ‘Custom Agents’ and ‘chains’ to accommodate diverse use cases, if developers’ requirements extend beyond the given examples in the documentation, customizing LangChain becomes onerous at best, and outright blocking at worst. This inflexibility could significantly inhibit the development of more complex or innovative applications.

  3. Documentation Gaps: The documentation accompanying LangChain leaves much to be desired. Several integral pieces of information, like the distinction between Agent types, are missing. This omission forces developers to sift through LangChain’s codebase, consuming time that could be better spent elsewhere.

  4. Outdated Foundations: LangChain’s current workflows and prompt engineering techniques were designed around early LLMs like OpenAI’s InstructGPT. With far more capable models like GPT-4 now available, LangChain’s core capabilities lag behind what state-of-the-art LLMs make possible.

The Moral of the Story

The current LangChain experience isn’t too different from the early days of Kubernetes: clunky, rigid, and rife with “gotchas.” One difference, though, is that the Kubernetes community ended up establishing clear leadership that drove development to close the feature gaps and smooth out much of the clunkiness (trust me, I know – my last company, Apprenda, led the Windows SIG).

LangChain needs good stewardship if it’s going to keep pace with the terrain it’s trying to map; while we’re hopeful that it will receive the necessary community leadership, that doesn’t happen overnight. Until that leadership materializes, we won’t be using it for anything more than proofs of concept – and we don’t recommend that anyone use it in a production-scale initiative.

Looking to jumpstart your Generative AI journey? Our workshop offers the knowledge, tools, and strategic insight that business leaders need.
