Mar 17, 2024
3 min read

How ChatGPT/LLM Wrappers Are the Future of Most Startups

A quick look at my thoughts on the controversial topic of ChatGPT wrappers: what they are, how they work, and why many people believe they are not a viable long-term business model.

What Are Wrappers?

ChatGPT wrappers, or LLM wrappers more broadly, are applications built around a language model like ChatGPT to make it easier to use in real-world scenarios. These wrappers can offer a range of functionalities, such as:

  1. Simplified Interface: They provide a more straightforward way to interact with the ChatGPT API, abstracting away technical details such as prompt crafting so users can focus on the results the AI provides rather than on how to phrase their input.
  2. Integration with Other Tools: They enable ChatGPT to integrate with other platforms or programming languages, making it easier to implement in all kinds of projects.
  3. Additional Features: Some wrappers offer extras such as response caching, session management, or options for customizing how the model is prompted.

In summary, wrappers are useful tools that let people apply ChatGPT’s capabilities more efficiently and effectively in their daily lives; the short sketch below illustrates the idea.
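Here is a minimal sketch of such a wrapper, assuming the official `openai` Python package and an API key in the `OPENAI_API_KEY` environment variable. The class name, prompt, and caching strategy are illustrative choices, not a prescribed design.

```python
# Minimal sketch of a wrapper: a simplified interface plus naive response caching.
from openai import OpenAI


class SummaryWrapper:
    """Hides prompt crafting behind a single summarize() call."""

    def __init__(self, model: str = "gpt-3.5-turbo"):
        self.client = OpenAI()            # reads OPENAI_API_KEY from the environment
        self.model = model
        self._cache: dict[str, str] = {}  # in-memory cache of previous responses

    def summarize(self, text: str) -> str:
        if text in self._cache:           # skip the API call for repeated inputs
            return self._cache[text]

        response = self.client.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system", "content": "Summarize the user's text in two sentences."},
                {"role": "user", "content": text},
            ],
        )
        result = response.choices[0].message.content
        self._cache[text] = result
        return result


if __name__ == "__main__":
    wrapper = SummaryWrapper()
    print(wrapper.summarize("Wrappers are applications built around an LLM..."))
```

The user of this wrapper never sees a prompt or an API parameter; they call `summarize()` and get a result, which is exactly the kind of simplification described above.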

How Do They Work?

ChatGPT wrappers are essentially applications that call the ChatGPT API: based on a user’s action, they send a specific prompt to ChatGPT, then display the result in various ways or execute different functions depending on the output. Some even filter the response through other models until they get the “best” result. The possibilities for building applications this way are practically endless. In my opinion, we are at the same stage as when the first CRUD applications started appearing, and in the future most applications will use some form of language model to interact with their users.
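A hedged sketch of that basic loop, again using the `openai` package: a user action becomes a prompt, the draft answer is optionally passed through a second “reviewer” call, and only then is it shown. The model names, prompts, and two-step pipeline are assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()


def ask(prompt: str, system: str) -> str:
    """Single call to the chat completions API."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content


def handle_user_action(user_input: str) -> str:
    # 1. Turn the user's action into a concrete prompt and get a draft answer.
    draft = ask(user_input, system="Answer the user's question concisely.")

    # 2. Filter/refine the draft with a second pass before displaying it.
    reviewed = ask(
        f"Question: {user_input}\n\nDraft answer: {draft}\n\n"
        "Improve this answer if needed and return only the final version.",
        system="You are a careful editor of AI-generated answers.",
    )
    return reviewed


if __name__ == "__main__":
    print(handle_user_action("What does a ChatGPT wrapper do?"))
```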

Why Are There Detractors of This Way of Creating Applications?

Many argue that ChatGPT (or other model) wrappers are merely extensions of ChatGPT that OpenAI could build better itself. There is some truth to this: it is likely to happen if your application isn’t niche enough or doesn’t offer the user anything beyond being a wrapper. What I mean is that these wrappers are genuinely useful when applied to very specific use cases. For example, within the field of psychology, a wrapper specialized in ADHD could assist both patients and psychologists in detecting the condition, establishing initial contact, and so on.

Regarding long-term viability, it’s true that you depend on an external company for your application to function. This is a point many detractors lean on, but I don’t entirely understand it: many of the applications we use today depend on the proper functioning of numerous external services, such as AWS, SWIFT, or the internet itself. Moreover, if your company’s main service is a ChatGPT wrapper and it builds a solid customer base, you can migrate to in-house hosted language models, as sketched below. This is feasible today, and it will surely become even more so in the future.
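A brief sketch of that migration path: because several self-hosted inference servers (for example vLLM or Ollama) expose an OpenAI-compatible API, the wrapper code can stay largely the same and only the endpoint and model name change. The local URL, placeholder key, and model name below are assumptions for illustration, not a specific recommendation.

```python
from openai import OpenAI

# Hosted model via OpenAI:
cloud_client = OpenAI()  # uses api.openai.com and OPENAI_API_KEY

# Same SDK pointed at a hypothetical in-house server serving an open-weights model:
local_client = OpenAI(
    base_url="http://localhost:11434/v1",  # example local endpoint (Ollama-style)
    api_key="not-needed-locally",          # placeholder; local servers often ignore it
)

response = local_client.chat.completions.create(
    model="mistral",  # whatever model the in-house server is serving
    messages=[{"role": "user", "content": "Hello from the self-hosted wrapper"}],
)
print(response.choices[0].message.content)
```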