The cost-effectiveness and power of open-source LLMs

Open-source large language models are challenging proprietary AI with cost-effective performance, better privacy, and flexible deployment.

Open large language models (LLMs) have emerged as a compelling and cost-effective alternative to proprietary models like OpenAI’s GPT model family. For anyone making products with AI, open models provide strong enough performance and better data privacy at a lower price point. They can also serve as viable replacements for tools and chatbots like ChatGPT.

The challenges of proprietary models

OpenAI’s ChatGPT chatbot, GPT suite of models (GPT-4o and GPT-4o-mini), and o1 family of models (o1-preview and o1-mini) have dominated the conversation around chatbots and LLMs in recent years. While these proprietary models deliver excellent performance, they come with two major limitations.

First, data privacy. OpenAI discloses very little about how its AI models operate: it has not published the model weights, training data, or even the number of parameters for any model since GPT-3. When using OpenAI’s services, users rely on a black-box model on external servers to process potentially sensitive data. With open models, not only can you select a model you understand better, but you also control where you deploy it.

Second, cost. Deploying LLMs is an incredibly resource-intensive computing task. While proprietary models like the GPT family typically perform well on benchmarks, they aren’t necessarily optimized for cost-effectiveness. Not every application requires maximum performance, and having a wider range of models to select from allows you to choose the most effective one for the job.

Proprietary models may still be a good choice, particularly during prototyping stages. However, you should also weigh open alternatives before making a selection.

Key considerations in selecting a model

Selecting the right AI model involves assessing several key factors:

What modalities does it need to support? LLMs proper handle only text, though multimodal models are now available that can also process images, audio, and video. If you only need a text model, remember that LLMs operate on text fragments called tokens rather than words or sentences. This determines how they are priced and how their performance is measured.
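Because pricing and context limits are denominated in tokens, it helps to estimate token counts before comparing models. The exact count depends on each model’s tokenizer, but a common rule of thumb (used in the pricing figures below, where one million tokens is roughly 750,000 words) is about 0.75 words per token. A minimal sketch of that estimate:

```python
def estimate_tokens(text: str) -> int:
    # Rough rule of thumb for English text: 1 token ~= 0.75 words,
    # so tokens ~= words / 0.75. Real counts depend on the model's tokenizer.
    words = len(text.split())
    return round(words / 0.75)

print(estimate_tokens("Open models are a cost-effective alternative."))  # 6 words -> ~8 tokens
```

For production cost estimates, use the actual tokenizer published for your chosen model rather than this heuristic.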

What level of performance does it need, and what size of model is most appropriate? Larger models typically achieve higher benchmark scores but are more costly to run. Depending on the model, the price can vary from around $0.06 per million tokens (approximately 750,000 words) to $5 per million tokens. The price-performance trade-off can make or break your profit margins. Look at benchmarks to shortlist a few models that could meet your needs, then test them against a sample dataset to find the best fit.
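The spread between those two price points compounds quickly at scale. A quick sketch, using a hypothetical monthly volume of 500 million tokens and the per-million-token prices quoted above:

```python
def monthly_cost(tokens_per_month: int, price_per_million_usd: float) -> float:
    """Total monthly spend given token volume and price per million tokens."""
    return tokens_per_month / 1_000_000 * price_per_million_usd

volume = 500_000_000  # hypothetical: 500M tokens per month

print(monthly_cost(volume, 0.06))  # low-cost open model -> $30.00
print(monthly_cost(volume, 5.00))  # premium model       -> $2500.00
```

At this volume, the cheaper model saves nearly $2,500 a month for the same workload, which is why benchmarking candidates on your own data before committing is worth the effort.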

What size context window does the model need? The context window is the number of tokens the model can operate on at once. Models with larger context windows support larger inputs, letting you process longer documents. While 128k tokens is now a rough standard, you can find models with both smaller and much larger context windows. For applications like document summarization or search, a large context window may be important, but for a simple chatbot, a more cost-effective model with a smaller window may suffice.

How fast does the model need to be? Speed is measured in several ways, including time to first token (TTFT), user throughput in tokens per second (TPS), and system throughput. For interactive systems, you may need a model that responds quickly to a user query (low TTFT), whereas for agent systems, you might care more about TPS so you can run more inference before responding to an input. For other tools, speed may not be a major priority at all.
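Both metrics are easy to measure yourself against a streaming API. The sketch below assumes only that the client exposes the response as an iterable of tokens (most streaming LLM APIs do, though the exact interface varies by provider):

```python
import time

def measure_stream(token_stream):
    """Measure TTFT (seconds) and user throughput (tokens/sec)
    over any iterable of streamed tokens."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in token_stream:
        if ttft is None:
            ttft = time.perf_counter() - start  # time to first token
        count += 1
    elapsed = time.perf_counter() - start
    tps = count / elapsed if elapsed > 0 else 0.0
    return ttft, tps
```

Run it against each candidate model with your real prompts: a model that wins on TPS for batch workloads can still feel sluggish in a chat UI if its TTFT is high.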

How much is the model’s cost per token, and does it differ between input and output tokens? With some providers, input and output tokens cost the same; with others, output tokens cost more. Check the input-to-output token ratio of your use case and use it to compare the prices of any models under consideration. If you aren’t sure, at Nebius we’ve found that the average is approximately 10 input tokens for every output token.
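Given that ratio, you can collapse separate input and output prices into one blended price per million tokens, making models with different pricing structures directly comparable. A sketch with hypothetical prices:

```python
def blended_price(input_price: float, output_price: float,
                  input_ratio: float = 10.0, output_ratio: float = 1.0) -> float:
    # Usage-weighted average price per million tokens, defaulting to the
    # ~10:1 input-to-output ratio mentioned above.
    total = input_ratio + output_ratio
    return (input_price * input_ratio + output_price * output_ratio) / total

# Hypothetical model: $0.50/M input tokens, $1.50/M output tokens
print(round(blended_price(0.50, 1.50), 2))  # -> 0.59
```

A model with cheap input but expensive output tokens can beat a flat-priced competitor for input-heavy workloads like summarization, and lose for output-heavy ones like long-form generation; the blended price makes that trade-off explicit.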

Balancing these competing priorities is the key to selecting the right model for your application. While a proprietary model may meet your needs, the range of open models available, such as Meta Llama 8B, 70B, and 405B, Mistral NeMo, Mixtral 8x22B, and Microsoft Phi-3, often offers all the performance you need at a much more attractive price.

The evolution of LLM hardware

LLM hardware continues to advance. Some of the smallest current models can be run on edge computing devices like smartphones, while state-of-the-art models require specialized hardware in high-performance data centres. As both models and hardware evolve, we can expect performance improvements across both small, consumer-grade devices and the most powerful, specialized infrastructure.

Deployment software is also shifting. While previously you would need to rent time on a GPU to run inference with an LLM, you can now find providers like Nebius AI Studio that charge per token for open LLMs. This approach is great for consumers: the compute provider handles the model-GPU optimization, leaving you free to focus on building your applications.

Nikita Vdovushkin

In his capacity as Product Lead at Nebius AI, Nikita Vdovushkin oversees Machine Learning products, including Nebius AI Studio, Managed Service for MLflow, and Soperator, Nebius’ open-source solution for Slurm. His expertise and ability to execute empower companies to bring industry-leading products to market, drawing on his extensive experience at BI.ZONE and Wallarm, as well as his involvement in volunteer projects such as organizing the OFFZONE conference and overseeing CTFZone competitions.
