Inside OpenAI’s Blitz: How a 3,000-Person Company Shipped Codex in Just Seven Weeks

Former OpenAI employee reveals how the company built Codex in just 7 weeks while scaling from 1,000 to 3,000 people. Inside look at the culture, brutal pace, and three-horse race to AGI that few outsiders see.


💡 TL;DR - The 30-Second Version

👉 OpenAI built its Codex coding agent in just 7 weeks, from first lines of code to full launch, with a team working nights and weekends.

📊 The company exploded from 1,000 to 3,000 employees in one year, putting anyone with 12 months of tenure in the top 30% of the company.

🏭 OpenAI runs entirely on Slack with no email - one employee received only 10 emails during their entire tenure there.

🌍 Codex generated 630,000 pull requests in its first 53 days, roughly 78,000 per engineer on the core team.

🚀 The race to AGI is now between three companies: OpenAI, Anthropic, and Google, each taking different paths to get there.

A former OpenAI employee just shared what it's actually like inside one of the world's most watched companies. The account reveals a place moving at breakneck speed, where everything runs on Slack and good ideas can come from anywhere.

The employee joined OpenAI in May 2024, when it had just over 1,000 people. One year later, the company hit 3,000 employees, which put anyone with 12 months of tenure in the top 30% of the company. Nearly everyone in leadership is doing a completely different job than they were two or three years ago.

This kind of growth breaks everything. Communication systems, reporting structures, product development, hiring processes. Teams vary wildly in culture. Some sprint constantly. Others babysit long training runs. Some move at a steady pace. There's no single OpenAI experience.

The Slack-Only Company

OpenAI runs entirely on Slack. No email exists. The former employee received maybe 10 emails during their entire tenure. This creates chaos for disorganized people and productivity for those who curate their channels carefully.

The company operates with a strong bottom-up culture, especially in research. When the employee first asked about quarterly roadmaps, they were told "this doesn't exist" (though it does now). Good ideas emerge from anywhere, and it's often unclear which will prove most valuable. Progress happens through iteration as new research bears fruit, not through grand master plans.

This creates a meritocracy where leaders get promoted based on having good ideas and executing them. Many competent leaders aren't great at presenting at all-hands meetings or political maneuvering. That matters less at OpenAI than elsewhere. The best ideas tend to win.

The Seven-Week Miracle

The employee's biggest project was launching Codex, OpenAI's coding agent. The story reveals how fast the company can move when it wants to.

In November 2024, OpenAI set a goal to launch a coding agent in 2025. By February 2025, they had internal tools showing the models' coding capabilities. Market pressure was building as other companies launched coding tools.

The employee returned early from paternity leave to help. After a chaotic merger of two teams, they began a sprint. From first lines of code to fully launched product took seven weeks.

The pace was brutal. Most nights until 11 PM or midnight. Up at 5:30 AM with a newborn. Back to the office at 7 AM. Working most weekends. The entire team pushed hard because every week mattered.

Building at Breakneck Speed

The scope wasn't small. They built a container runtime, optimized repository downloading, fine-tuned a custom model for code edits, handled git operations, introduced a new interface, enabled internet access, and created a product that users actually enjoyed.
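
None of Codex's internals are public, but the shape of that pipeline can be sketched in plain Python. Everything below is an illustrative assumption rather than OpenAI's implementation: the propose_patch stub stands in for the fine-tuned model, and the branch name, pytest check, and git plumbing are just one plausible arrangement of the pieces listed above.

import subprocess
import tempfile
from pathlib import Path


def run(cmd: list[str], cwd: Path) -> str:
    """Run a command inside the task's isolated workspace and return its output."""
    result = subprocess.run(cmd, cwd=cwd, check=True, capture_output=True, text=True)
    return result.stdout


def propose_patch(task: str, workspace: Path) -> str:
    """Hypothetical stand-in for the fine-tuned code-edit model."""
    raise NotImplementedError("model call goes here")


def handle_task(repo_url: str, task: str, branch: str = "agent/task-1") -> None:
    # Each task gets its own throwaway workspace, standing in for a container.
    with tempfile.TemporaryDirectory() as tmp:
        workspace = Path(tmp) / "repo"
        run(["git", "clone", "--depth", "1", repo_url, str(workspace)], cwd=Path(tmp))

        # Ask the model for an edit, apply it, and run the repo's checks.
        patch = propose_patch(task, workspace)
        (workspace / "agent.patch").write_text(patch)
        run(["git", "apply", "agent.patch"], cwd=workspace)
        run(["python", "-m", "pytest", "-q"], cwd=workspace)

        # Package the result as a branch the user can review as a pull request.
        run(["git", "checkout", "-b", branch], cwd=workspace)
        run(["git", "commit", "-am", f"Agent: {task}"], cwd=workspace)
        run(["git", "push", "origin", branch], cwd=workspace)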

The team was small but senior: eight engineers, four researchers, two designers, two go-to-market people, and one product manager. Without that caliber of people, the project would have failed. Nobody needed much direction, but coordination was crucial.

The night before launch, five team members stayed until 4 AM deploying the main system. Then back to the office at 8 AM for the launch announcement and livestream. They turned on the feature flags and watched traffic pour in. Few products get such immediate uptake just from appearing in a sidebar, but that's the power of ChatGPT's user base.
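
"Turning on the feature flags" usually means flipping a percentage rollout rather than deploying new code. Here is a minimal sketch of that pattern, assuming a made-up flag name and a simple hash-based bucketing scheme; OpenAI's actual flagging system isn't public.

import hashlib

# Illustrative rollout table: flag name -> fraction of users who should see the feature.
ROLLOUTS = {"codex_sidebar": 0.0}  # hypothetical flag name; 0.0 before launch


def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user so the same user always gets the same answer."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x1_0000_0000  # map the hash into [0, 1)
    return bucket < ROLLOUTS.get(flag, 0.0)


# Launch morning amounts to a one-line state change, not a redeploy.
ROLLOUTS["codex_sidebar"] = 1.0
print(is_enabled("codex_sidebar", "user-123"))  # True once the rollout hits 100%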

Technical Reality Check

The technical foundation is revealing. OpenAI uses a giant Python monorepo running on Azure. The code looks strange because it combines libraries built for scale by 10-year Google veterans with throwaway Jupyter notebooks from newly minted PhDs.

In the employee's telling, Azure provides only three trustworthy services: Kubernetes, CosmosDB, and BlobStore. There are no true equivalents to staples like AWS's DynamoDB or Google's Spanner and BigQuery. This forces more in-house development.
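
For context, BlobStore here is Azure Blob Storage, one of the services the team did lean on, and it has a mature Python SDK. A small example of that kind of building block, with an invented container name and a placeholder connection string:

# Requires: pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

# The connection string comes from the storage account; the value here is a placeholder.
client = BlobServiceClient.from_connection_string("<connection-string>")
container = client.get_container_client("codex-artifacts")  # invented container name

# Upload a small artifact and read it back.
container.upload_blob(name="hello.txt", data=b"hello from blob storage", overwrite=True)
data = container.download_blob("hello.txt").readall()
print(data)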

The Meta influence is strong. Many engineers came from Meta, and the infrastructure resembles early Facebook: blockbuster consumer app, nascent infrastructure, desire to move fast. There's an in-house reimplementation of TAO, Meta's graph data store, and an effort to consolidate authentication.

Nearly everything is a rounding error compared to GPU costs. One niche Codex feature had the same GPU footprint as the employee's entire previous company infrastructure.

The Pressure Cooker

OpenAI faces intense scrutiny. The employee regularly saw news stories in the press before internal announcements. Twitter users run automated bots checking for new feature launches. This makes OpenAI very secretive. Revenue and burn numbers are closely guarded.

The company is more serious than expected because the stakes feel high. There's the goal of building AGI. There's building a product hundreds of millions use for medical advice and therapy. There's competing in the biggest arena in the world while major governments watch closely.

Despite constant criticism in the press, everyone the employee met was trying to do the right thing. OpenAI gets more scrutiny because it's the most visible lab with the biggest consumer focus.

The Three-Horse Race

The employee sees the path to AGI as a three-horse race: OpenAI, Anthropic, and Google. Each will take different paths based on their DNA - consumer versus business versus rock-solid infrastructure and data.

OpenAI deserves credit for democratizing access. Cutting-edge models aren't reserved for enterprise tiers. Anyone can use ChatGPT without logging in. Most models quickly make it to the API for startups to use. An alternate regime could operate very differently.
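
As a concrete illustration of that API access, here is a minimal request using the official openai Python SDK; the model name is just an example, and any available model id would do.

# Requires: pip install openai, plus OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model id
    messages=[{"role": "user", "content": "Summarize what a pull request is."}],
)
print(response.choices[0].message.content)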

The company "runs on Twitter vibes," as one friend joked. As a consumer company, this isn't entirely wrong. There's analytics around usage and retention, but vibes matter equally.

What Success Looks Like

Codex has generated 630,000 pull requests in its first 53 days. Spread across the core team's eight engineers, that's roughly 78,000 public PRs per engineer, and private repositories likely add many more on top.

The employee believes most programming will eventually look more like Codex - asynchronous collaboration with AI agents. The models are good but not great yet. They can work for minutes but not hours. User trust varies widely. Even OpenAI isn't clear on the models' true capabilities.

Why this matters:

• This rare insider view shows that OpenAI's speed advantage comes from cultural choices - hiring senior people, avoiding bureaucracy, and letting good ideas win regardless of their source.

• The seven-week Codex sprint proves that even 3,000-person companies can move like startups when they prioritize execution over process, suggesting the AI race will be won by whoever ships fastest, not whoever plans best.

Read on, my dear:

Calvin French-Owen: Reflections on OpenAI

❓ Frequently Asked Questions

Q: What exactly does Codex do that other coding tools don't?

A: Codex works asynchronously like a coworker - you assign it tasks, it works in its own environment, then returns with a pull request. Unlike tools like Cursor that work in real-time, Codex can handle multiple tasks simultaneously and excels at navigating large codebases.

Q: Why does OpenAI use Azure instead of AWS when Azure has fewer reliable services?

A: The employee noted only three trustworthy Azure services: Kubernetes, CosmosDB, and BlobStore, with no equivalent to tools like AWS's DynamoDB or Google's BigQuery. This forces more in-house development, but Microsoft is OpenAI's largest investor and its main cloud partner, which explains the choice.

Q: How does a company actually function with no email and only Slack?

A: Everything happens in different Slack workspaces with various permission levels. The employee received only 10 emails total during their tenure. Success requires curating channels and notifications carefully, or the constant stream becomes overwhelming and distracting.

Q: What do those GPU costs actually amount to in dollars?

A: The employee didn't share specific numbers, but noted that a single niche Codex feature had the same GPU footprint as their previous company's entire infrastructure. For scale, frontier training runs are widely reported to cost tens to hundreds of millions of dollars.

Q: Why did the employee leave if OpenAI was such a great experience?

A: No personal drama - they found it hard transitioning from founder to employee at a 3,000-person company. They explicitly said they were "craving a fresh start" and that the quality of work might draw them back.

Q: What's a "pull request" and why do Codex's 630,000 matter?

A: A pull request is a proposed code change submitted for review. Codex generating 630,000 of them in 53 days shows massive adoption - roughly 12,000 code contributions a day. Each one represents a task the AI completed and handed back for human review.

Q: How do you get hired at OpenAI with this rapid growth?

A: The employee noted a strong "Meta to OpenAI pipeline" for engineering talent. The company prioritizes senior people who can work independently. With 2,000 new hires in one year, they're clearly hiring aggressively across all functions.

Q: What makes this different from typical startup "move fast and break things" culture?

A: OpenAI combines startup speed with massive scale and resources. They deployed a new product to hundreds of millions of users in 7 weeks. Most startups can move fast but lack the distribution, talent density, and capital to execute at this level.

