The AI Pricing Problem: When One Size Doesn't Fit All
Software companies are rushing to add AI features to their products, and a pattern has emerged: slap on AI capabilities, double the price. I think this approach is lazy and misses the mark, especially for products where AI serves as a complement rather than the core functionality. It looks like either the product team hasn’t been given the opportunity to think this through, or someone else has pushed this pricing model without considering its impact on the user experience.
The Current Landscape
The default seems to be a $20 monthly premium for AI features. This works for AI-first products where the technology fundamentally transforms the user experience. Think of products that eliminate significant manual work or create entirely new workflows.
Let’s look at two real-world examples of how this is being implemented poorly:
- JIRA charges 2x per user to enable AI features, even though their AI capabilities mainly benefit managers creating tickets and planning sprints. Individual contributors, who make up the majority of users, get minimal value from these features yet are forced into the same pricing tier.
- Microsoft Office 365 automatically bumps users to premium tiers to access AI features across all applications. A writer using Word might not want AI interfering with their creative process, while an Excel user might benefit from AI-powered data analysis. Yet both are forced to pay for the full AI suite.
Understanding Usage Patterns
Different roles interact with AI differently:
- A product manager might use AI to draft specifications and summarize meetings
- A developer might only need occasional help with code documentation
- A designer might actively avoid AI to maintain creative control
Current pricing models ignore these distinctions. In JIRA’s case, a developer who just needs to update tickets pays the same AI premium as a product owner who uses AI for roadmap planning and ticket refinement.
A Better Approach
Instead of forcing users into expensive AI tiers, products should consider a more flexible model:
- Keep the base product pricing unchanged
- Offer AI capabilities as a $5/month per-user add-on
- Allow users to connect their own LLM provider (OpenAI, Anthropic, etc.)
For example, JIRA could let managers enable AI features just for their accounts, while keeping costs lower for developers. Office 365 could allow users to enable AI selectively per application, connecting their preferred LLM provider.
This approach solves several problems:
- Users only pay for AI when they need it
- Companies with enterprise LLM contracts can use their existing providers
- Different teams can use different models based on their needs
- Data privacy concerns are addressed by letting companies choose providers they trust
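To make the model concrete, here is a minimal sketch of what per-user, opt-in AI pricing could look like in code. All names (`AISettings`, `monthly_cost`, the $10 base price) are hypothetical, chosen only to illustrate the idea:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical per-user AI entitlement: the base product price stays
# unchanged, and AI is an opt-in add-on with a user-chosen provider.
@dataclass
class AISettings:
    enabled: bool = False            # has this user opted into the AI add-on?
    provider: Optional[str] = None   # e.g. "openai", "anthropic"
    api_key: Optional[str] = None    # bring-your-own key or enterprise contract

def monthly_cost(base: float, ai: AISettings, addon_price: float = 5.0) -> float:
    """Per-user price: base product, plus the AI add-on only if enabled."""
    return base + (addon_price if ai.enabled else 0.0)

# A manager opts in; a developer keeps the base price.
manager = AISettings(enabled=True, provider="openai")
developer = AISettings()
print(monthly_cost(10.0, manager))    # 15.0
print(monthly_cost(10.0, developer))  # 10.0
```

The point of the sketch is that the entitlement lives on the user, not the tenant: a team can mix opted-in and opted-out accounts on the same plan.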
The Technical Implementation
Building a model-agnostic AI integration requires:
- Abstraction Layer
  - Create a unified interface for different LLM providers
  - Handle provider-specific quirks behind the scenes
- Prompt Engineering
  - Design prompts that work consistently across different models
  - Build fallbacks for when certain capabilities aren’t available
- Error Handling
  - Graceful degradation when AI features fail
  - Clear feedback to users about limitations
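The three pieces above can be sketched together in a few lines. This is a toy illustration, not any vendor's actual SDK: the provider classes are stubs (one deliberately fails to exercise the fallback path), and the real versions would wrap the respective HTTP APIs:

```python
from abc import ABC, abstractmethod
from typing import List, Optional

class LLMProvider(ABC):
    """Unified interface; provider-specific quirks live inside each subclass."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API here.
        # Stubbed to fail so the sketch demonstrates graceful degradation.
        raise ConnectionError("upstream unavailable")

class AnthropicProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call the Anthropic API here.
        return f"[anthropic] {prompt}"

def complete_with_fallback(providers: List[LLMProvider], prompt: str) -> Optional[str]:
    """Try providers in order; degrade gracefully if all of them fail."""
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception:
            continue  # log and surface clear feedback to the user here
    return None  # caller shows "AI unavailable" instead of crashing the feature

result = complete_with_fallback([OpenAIProvider(), AnthropicProvider()],
                                "Summarize this sprint")
print(result)  # [anthropic] Summarize this sprint
```

Because callers only see `LLMProvider`, swapping a company's preferred model in or out is a configuration change rather than a rewrite, which is exactly what the bring-your-own-provider model depends on.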
The Counter-argument
Some might argue this weakens the AI integration or reduces revenue. But consider this: users are already copying and pasting between ChatGPT and their work tools. A flexible integration just makes this workflow more efficient.
Additionally, by making AI features optional and reasonably priced, you’re likely to see higher adoption rates than with forced, expensive upgrades.
Building for Reality
When building AI features, focus on:
- Making your application model-agnostic
- Supporting top-tier LLMs interchangeably
- Providing clear value beyond basic LLM interactions
- Building features that complement existing workflows rather than replacing them
This approach respects that different users have different needs. A writer might want full control over their prose, while a marketer might welcome AI assistance for email campaigns.
The Enterprise Angle
For enterprise customers, flexible AI integration solves several key challenges:
- Data Processing Agreements (DPAs) can leverage existing contracts
- Security teams can approve specific LLM providers
- Cost management becomes more predictable
- Teams can experiment with different models without committing to expensive licenses
Consider a large company using Office 365. Instead of paying Microsoft’s AI premium for every user, they could connect their existing OpenAI enterprise contract, controlling costs and data processing while still providing AI capabilities to teams that need them.
Looking Forward
As LLM costs continue to decrease and more providers enter the market, the current premium pricing model will become increasingly difficult to justify. Companies like Microsoft and Atlassian will need to adapt their AI pricing strategies or risk users finding alternatives.
The key is giving users control over both pricing and functionality. Let them choose when and how to use AI, rather than forcing an all-or-nothing approach. This isn’t just about pricing – it’s about building products that respect user agency and adapt to their needs.