OpenAI turns ChatGPT into an app platform with distribution it controls

OpenAI opened ChatGPT to third-party apps that render inside conversations, built on an open standard but distributed through algorithms it controls. The trade: reach 800M users, lose funnel ownership. Platform economics arrive at chat.


OpenAI is opening ChatGPT to third-party apps that render live UI inside conversations. The company released an Apps SDK built on the Model Context Protocol and turned on in-chat apps for all logged-in users outside the EU. Partners including Canva, Zillow, Spotify, and Coursera launched today, with Uber, Target, and Peloton coming "later this year." Monetization through the new Agentic Commerce Protocol follows the same timeline. Apps appear when ChatGPT suggests them mid-conversation or when users call them by name, displaying maps, videos, and interactive elements without leaving the thread.

Key Takeaways

• Apps SDK built on open MCP standard but distribution controlled by ChatGPT's suggestions and ranking algorithms

• Replaces failed GPT Store from January 2024; apps now embed real UI and functionality, not just wrapped prompts

• Monetization via Agentic Commerce Protocol coming "later this year" with fee structure and terms still undefined

• EU users excluded from launch; regulatory compliance and consent flows remain unresolved stress test for platform model

This replaces the GPT Store, which OpenAI launched in January 2024 after announcing custom GPTs at Dev Day 2023. That effort never gained traction—most users found little reason to hunt for specialized chatbots when base models kept improving. The new approach is structurally different: apps now embed real functionality with native interfaces, not just wrapped prompts. And they surface proactively based on context, not through directory browsing.

The platform architecture

The Apps SDK extends MCP, which Anthropic introduced last year to standardize how AI clients connect to external tools. MCP defines wire formats, authentication flows, and metadata schemas. OpenAI added UI primitives: inline cards, expanded views, fullscreen modes, picture-in-picture overlays. Developers define logic and interface in code, then connect to their backends so existing customers can authenticate. The result: intent → app suggestion → rendered UI → action → optional payment. That's an operating system move, not a feature add.
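That intent-to-action pipeline can be sketched as code. The following is a toy model of the flow described above, not the actual Apps SDK API; every name in it (the registry, the display modes, the suggestion heuristic) is a hypothetical stand-in:

```python
from dataclasses import dataclass
from typing import Optional

# Toy model of the in-chat app pipeline: intent -> app suggestion ->
# rendered UI -> action. All names are illustrative assumptions,
# not the real Apps SDK.

@dataclass
class AppSuggestion:
    app: str
    display_mode: str  # "inline", "expanded", "fullscreen", "pip"

def suggest_app(user_message: str) -> Optional[AppSuggestion]:
    """Match conversational intent to a registered app (toy keyword heuristic)."""
    registry = {"homes": ("Zillow", "expanded"), "playlist": ("Spotify", "inline")}
    for keyword, (app, mode) in registry.items():
        if keyword in user_message.lower():
            return AppSuggestion(app=app, display_mode=mode)
    return None

def render(suggestion: AppSuggestion) -> str:
    """Stand-in for the UI primitive the client renders in-thread."""
    return f"[{suggestion.display_mode} card: {suggestion.app}]"

msg = "Show me homes in Pittsburgh under $400k"
s = suggest_app(msg)
print(render(s))  # -> [expanded card: Zillow]
```

The real system replaces the keyword lookup with model-driven intent detection, but the shape is the same: the platform, not the user, decides which app surfaces and in which mode.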

For developers, the value proposition is distribution. ChatGPT reportedly reaches 800 million weekly active users. Apps get surfaced at the moment of intent—when someone asks about homes in Pittsburgh, Zillow appears. No URL typed. No app installed. OpenAI frames this as discovery without friction.

The cost is control. Apps inherit ChatGPT's design system, content policies, and ranking algorithms. They accept that "featured" status depends on OpenAI's guidelines, not just functionality. If your growth comes from ChatGPT's proactive suggestions and your UX was optimized for its inline/expanded surfaces, moving elsewhere means redesign, distribution loss, and user re-onboarding. Open standard reduces technical lock-in. It doesn't erase platform dependence.

The distribution calculus

OpenAI is trying to be the place users start. The demos at Dev Day showed this clearly: ask ChatGPT to create slides, and Canva appears to execute. Need a playlist? Spotify shows up. Looking for housing? Zillow's map renders in-thread with filters you can adjust conversationally. The bet is that chat works for intent capture and orchestration, even if structured UIs still handle manipulation.

For brands, the trade is reach for funnel ownership. Apps get access to hundreds of millions of users. They lose the first-party touchpoint where upsells happen and data gets collected. If users interact with Zillow through ChatGPT's interface, OpenAI sits between the brand and the customer. That's the Apple and Google playbook—control the launcher, collect the fees, shape the defaults.

The "open standard" language provides cover. Because MCP servers can theoretically serve multiple AI clients, OpenAI can claim interoperability. In practice, portability isn't the same as competition. If ChatGPT owns suggestions and rankings, developers optimize for its guidelines regardless of whether their server could run elsewhere. Standards blunt technical friction. They don't redistribute power.

Monetization and the ACP bet

OpenAI announced that it will "soon" support instant checkout through the Agentic Commerce Protocol. Details remain sparse—fee structure, refund policies, dispute resolution, tax handling. The phrase "later this year" appeared repeatedly in Monday's announcements, signaling that the economic model isn't ready even as the technical foundation launches.

That lag matters. Getting payments right—listings, payouts, chargebacks, fraud detection—takes time. App Store economics are well-understood: tiered structures, review processes, 15-30% cuts. Expect similar here. Brands will calculate whether incremental conversion from ChatGPT's distribution offsets the lost margin and customer data. Some will decide owning the funnel is worth more than OpenAI's reach.

The revenue-sharing question also introduces a new conflict. When ChatGPT suggests an app in conversation, what determines the ranking? Relevance, quality, business terms? The company hasn't disclosed how suggestions get weighted. Users won't parse whether they're seeing the best fit or the highest bidder. That opacity creates space for pay-to-play dynamics to emerge quietly, even if OpenAI insists recommendations stay merit-based today.
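The conflict is easy to state in code. A hypothetical scorer (not anything OpenAI has disclosed) shows how a modest commercial weight quietly flips which app gets suggested:

```python
# Hypothetical suggestion-ranking mix, for illustration only:
# OpenAI has not disclosed how suggestions are weighted.

def score(relevance: float, quality: float, bid: float,
          w_rel: float = 0.6, w_qual: float = 0.4, w_bid: float = 0.0) -> float:
    return w_rel * relevance + w_qual * quality + w_bid * bid

apps = {
    "BestFit":    dict(relevance=0.9, quality=0.8, bid=0.0),
    "HighBidder": dict(relevance=0.7, quality=0.6, bid=1.0),
}

# Merit-only weights: the most relevant app wins.
merit_winner = max(apps, key=lambda a: score(**apps[a]))

# Add a commercial weight and the ordering flips, invisibly to the user.
paid_winner = max(apps, key=lambda a: score(**apps[a], w_bid=0.5))

print(merit_winner, paid_winner)  # -> BestFit HighBidder
```

Nothing in the chat surface would signal the change; both results render as the same friendly suggestion, which is exactly the opacity concern.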

AgentKit and the workflow layer

OpenAI also launched AgentKit, a visual builder for multi-agent workflows. It includes a drag-and-drop canvas, evaluation tools for measuring agent performance, and ChatKit—an embeddable chat UI developers can drop into their own products. The positioning is clear: if apps are the storefront, AgentKit is the factory. Developers can prototype faster, then deploy through ChatGPT or embed agents directly in their products using OpenAI's components.
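The kind of workflow a visual builder composes can be modeled as a pipeline of named steps. A minimal sketch, with entirely hypothetical step names and no relation to the real AgentKit API:

```python
# Toy model of a multi-step agent workflow of the sort a drag-and-drop
# canvas composes: classify input, then route it to a downstream agent.
# Purely illustrative; not the AgentKit API.

from typing import Callable, List

Step = Callable[[str], str]

def classify(text: str) -> str:
    """Label the incoming message (toy heuristic)."""
    return "question" if text.strip().endswith("?") else "statement"

def route(label: str) -> str:
    """Map the label to a downstream agent."""
    return {"question": "answer-agent", "statement": "ack-agent"}[label]

def run_workflow(steps: List[Step], payload: str) -> str:
    for step in steps:
        payload = step(payload)
    return payload

print(run_workflow([classify, route], "What homes are listed in Pittsburgh?"))
# -> answer-agent
```

A visual builder's value is in letting non-engineers wire, evaluate, and version graphs like this; the lock-in comes from those graphs living in OpenAI's tooling rather than in portable code.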

The connector registry centralizes data source management across ChatGPT and the API, giving enterprise admins one panel to govern Dropbox, Google Drive, SharePoint, and third-party MCPs. This solves a real pain point for large organizations trying to maintain security and compliance across agent deployments. It also deepens OpenAI's infrastructure role—companies start depending on its admin layer, not just its models.

The announcement of guardrails as modular safety layers shows awareness of enterprise requirements. Masking PII, detecting jailbreaks, flagging policy violations—these aren't novel capabilities, but bundling them into the workflow builder lowers implementation friction. For OpenAI, it's another lever to make building inside its ecosystem easier than assembling equivalent tooling elsewhere.
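PII masking is the most mechanical of those layers. A minimal sketch of the idea (regex-based, far cruder than any production guardrail, with patterns chosen for illustration):

```python
import re

# Toy PII-masking guardrail: redact email addresses and US-style phone
# numbers before text reaches a model or a third-party app. Real
# guardrails use trained detectors; this only illustrates the layer.

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask_pii("Reach me at jane.doe@example.com or 412-555-0199."))
# -> Reach me at [EMAIL] or [PHONE].
```

Bundling even simple filters like this into the workflow builder is the friction-lowering move the article describes: assembling the equivalent stack elsewhere is possible, just slower.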

The credibility test

This is OpenAI's second attempt at an app marketplace. The GPT Store launched in January 2024 with over 3 million custom GPTs. It went nowhere. The base model kept improving, making specialized prompt bundles obsolete. Users had no reason to hunt through a directory when ChatGPT already handled most requests competently.

The Apps SDK addresses that failure by adding real utility—embedded maps, video players, checkout flows. But it introduces new dependencies. If apps feel sluggish, if context doesn't persist cleanly across modes, if latency spikes during high traffic, the UX case for chat collapses. Users will revert to native apps where interactions are instant and state management is reliable.

There's also the "Amazon Basics" risk. When developers build on a platform owned by a company that can see usage data, feature adoption, and engagement patterns, they help train their potential replacement. If OpenAI later decides to build native versions of high-traffic app categories, partners will face competition from the platform itself. That's classic marketplace dynamics: the distributor watches what works, then competes using structural advantages. Developers have seen this story before.

What to watch next

Three signals will show whether this is durable or another iteration:

Adoption metrics beyond launch partners. How many developers outside the pilot group submit apps once the review process opens? Low submission volume would suggest the value proposition doesn't justify the platform dependency.

EU rollout timing and structure. Local privacy regimes and consent flows are the stress test. If OpenAI can't bring apps to EU users quickly, that's a quarter of the market locked out—and a sign that regulatory friction matters more than the company implied.

Monetization terms when published. Fees, refunds, dispute policies, and whether ACP becomes default or optional. Those details reveal the true economics and how much leverage developers will actually have.

OpenAI positioned Monday's announcements as the start of a new generation of apps. The technical foundation is solid. The distribution advantage is real. The question is whether partners will accept losing customer ownership in exchange for reach, and whether users will tolerate suggestions that may tilt toward commercial interests over relevance. Platform economics always arrive at that tension eventually.

Why this matters:

• The center of gravity shifts from websites and apps to a chat-first surface that brokers intent, UI, and payments—reshaping discovery, conversion, and who owns the customer relationship at the moment of transaction.

• Open standards enable technical portability, but distribution power determines market structure; whoever controls suggestions, ranking, and checkout controls margins regardless of interoperability claims.

❓ Frequently Asked Questions

Q: Why did OpenAI's GPT Store fail after launching in January 2024?

A: The GPT Store featured over 3 million custom GPTs that were essentially bundled prompts with no real functionality. As base models improved, these specialized chatbots became obsolete—ChatGPT could already handle most requests without custom versions. Users had no reason to hunt through a directory when the main interface worked well enough.

Q: What is MCP and why does it matter that the Apps SDK is built on it?

A: Model Context Protocol is an open standard introduced by Anthropic that defines how AI clients connect to external tools—wire formats, authentication, and metadata schemas. Because it's open, apps theoretically work with any MCP-compatible client, not just ChatGPT. This reduces technical lock-in but doesn't change the fact that OpenAI controls discovery and distribution through its 800 million user platform.

Q: How does ChatGPT decide which apps to suggest during conversations?

A: OpenAI hasn't disclosed the ranking algorithm. Apps appear either when users call them by name or when ChatGPT proactively suggests them based on conversation context. The company mentioned apps meeting "higher standards for design and functionality will be featured more prominently," but didn't specify what weights drive suggestions—relevance, quality, or business terms remain opaque.

Q: Why are EU users excluded from the launch?

A: OpenAI stated apps will come to EU "soon" without providing specifics. The exclusion likely stems from GDPR consent requirements and data sharing policies between ChatGPT and third-party apps. Local privacy regimes require explicit user controls over what data each app can access—infrastructure OpenAI said it will provide "later this year" with "more granular controls."

Q: How do developers make money from apps built on this platform?

A: Not yet defined. OpenAI announced monetization will launch "later this year" through the Agentic Commerce Protocol for instant checkout inside ChatGPT. Fee structure, revenue splits, refund policies, and dispute resolution haven't been disclosed. Developers can currently build and test apps but can't charge users or submit apps for public distribution until the payment infrastructure launches.

Q: What's the "Amazon Basics" risk for developers building on this platform?

A: When developers build successful apps, OpenAI can see usage data, feature adoption, and engagement patterns across its platform. If certain app categories gain traction, OpenAI could build native versions using those insights and compete directly—leveraging preferential placement and integrated functionality. This mirrors how Amazon launched competing products after third-party sellers proved market demand on its marketplace.

Q: What's AgentKit and how does it relate to the Apps SDK?

A: AgentKit is a separate toolkit launched Monday for building multi-step AI workflows. It includes a visual canvas (Agent Builder), evaluation tools, and ChatKit—an embeddable chat UI. While the Apps SDK lets developers publish apps inside ChatGPT's ecosystem, AgentKit helps them build agent logic faster and deploy chat experiences in their own products using OpenAI's components.
