Commoditization of Skills

The Superpower Nobody Talked About

There is a specific kind of knowledge that takes years to accumulate in SRE and DevOps work. It is not found in a single tutorial. It lives in the gap between documentation and reality — in knowing that StringEquals and StringLike are not interchangeable when writing an AWS IAM trust policy condition for GitHub OIDC, and that choosing the wrong one lets forks of your repo assume your deploy role. It lives in knowing that you must never hardcode an AWS account ID in a file committed to a public repository. It lives in the muscle memory of wiring together an OIDC identity provider, a scoped IAM role, a CDK stack, and a GitHub Actions workflow so that a CI pipeline can deploy to AWS without a single long-lived secret sitting in a configuration file somewhere.

That specific kind of knowledge — keyless, credential-free GitHub-to-AWS deployment via OIDC — just became a 15-minute prompt.

"The learning curve used to be months of trial and error. Now it is the time it takes to run cdk deploy after a single Claude Code conversation."

Distilling OIDC Deployment Into Repeatable Steps

Keyless GitHub-to-AWS deployment via OIDC is a concrete demonstration of this shift. The goal is not a code generator that produces plausible-looking boilerplate you still have to debug. It is a distillation of SRE expertise — with the security non-negotiables encoded — that walks a developer through the entire setup end-to-end without requiring them to understand each decision.

Here is what the documented process handles, without the developer needing to know why: scanning the repo for existing setup, provisioning the OIDC identity provider, scoping the IAM trust policy and least-privilege permissions, generating the CDK stack and the GitHub Actions workflow, and walking the user through secrets and the one-time bootstrap.

The output is not a starting point. It is a production-grade setup. The entire workflow, from zero to a functioning keyless deployment pipeline, collapses to a handful of commands a developer can follow without understanding the layers beneath them:

# One-time infrastructure bootstrap
cd infra/cdk
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
cdk deploy

# Copy DeployRoleArn output → GitHub secret AWS_ROLE_ARN
# Push a commit. Done.

That is it. That is the learning curve now — not months of accumulating the knowledge to know what to build, but 15 minutes of following documented steps that encode what an expert already figured out.

What This Used to Require

To appreciate the magnitude of this shift, consider what assembling this setup from scratch used to require a developer to know:

| Capability Required | Where You Learned It |
| --- | --- |
| AWS IAM trust policies and condition keys | AWS docs + painful debugging sessions |
| OIDC federation concepts — sub claims, audience validation | RFCs and security blog posts |
| Why StringLike with wildcards is a security hole for forked repos | A security incident or a very thorough code review |
| AWS CDK in Python — stack structure, constructs, cdk.json | AWS workshops or a CDK-fluent colleague |
| GitHub Actions OIDC token flow and aws-actions/configure-aws-credentials | GitHub docs and Stack Overflow |
| Least-privilege IAM — what S3 sync actually needs vs. what is lazy | IAM policy simulator + prod incidents |
| The "already exists" OIDC provider edge case in shared AWS accounts | Someone else hitting it first and writing about it |

Each of those rows represents not just knowing a fact but holding the context of why it matters, where it breaks, and what the failure mode looks like. That context is what takes years. A junior developer reading AWS docs would produce something that worked in the happy path and failed silently in production or opened a security hole they would not even recognize as one.

That entire context — years of accumulated SRE knowledge — is encoded not as documentation to be read, but as decisions already made, correctly, with the edge cases handled. A developer executing the steps does not need to understand them to benefit from them. That is the point.
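
One of those encoded decisions can be made concrete. Here is a minimal sketch, in Python, of the least-privilege policy an S3 sync deploy step typically needs; the bucket name is hypothetical, and the action list reflects what `aws s3 sync` commonly requires rather than this setup's exact policy:

```python
def s3_sync_policy(bucket: str) -> dict:
    """Least-privilege sketch for `aws s3 sync`: only the actions the
    command commonly needs, scoped to one bucket. Never s3:* on *."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # listing objects is a bucket-level permission
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {   # object reads/writes, plus deletes for `sync --delete`
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
        ],
    }
```

In a CDK stack these would be scoped `PolicyStatement`s attached to the deploy role rather than a raw JSON document, but the shape of the constraint is the same.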

Documentation as the New Interface to Expertise

What separates a properly codified expertise document from generic AI output is precision — explicit, unambiguous instructions that specify what an experienced SRE would do, in what order, and with what hard constraints. The document is not instructions for the developer. It is a specification of the expert's decision-making process that can be handed off to any AI agent and reproduced faithfully.

A well-written expertise document for OIDC deployment contains explicit security non-negotiables — things like: "Use StringEquals, not StringLike, in the trust policy condition. StringLike with a wildcard allows forks of your repository to assume the deploy role." That is institutionalized security knowledge. It is the kind of insight a senior SRE writes in a code review comment once, and then writes again in the next code review, and the one after that — until it is encoded in a document and applied automatically, every time, for every developer, without anyone having to know it exists.
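
That constraint is mechanical enough to express as code. A minimal sketch (the helper name and values are hypothetical, not taken from the skill document) of the trust-policy condition the document mandates:

```python
def github_trust_condition(org: str, repo: str, branch: str) -> dict:
    """Trust-policy condition for the GitHub OIDC deploy role.

    StringEquals against the full ref means only this exact repo and
    branch can assume the role. A StringLike wildcard such as
    "repo:org/*" would also match forks and sibling repos: the
    security hole the document forbids.
    """
    return {
        "StringEquals": {
            "token.actions.githubusercontent.com:sub":
                f"repo:{org}/{repo}:ref:refs/heads/{branch}",
            "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
        }
    }
```

Because the operator and the full ref are fixed in the function, there is no point in the flow where StringLike can sneak back in.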

Key insight: Codified expertise documentation is not for humans to read and follow. It is for an AI to internalize and execute — transforming a senior engineer's mental review checklist into an automatic constraint that applies every time, for every developer, with zero effort required from the developer to know it exists.

This changes the relationship between documentation and expertise fundamentally. Historically, documentation was a way of transferring knowledge to humans who would then apply it imperfectly, partially, and inconsistently — depending on how much they had read, how much they remembered, and how closely they followed instructions under time pressure. Encoded expertise transfers knowledge to an AI that applies it completely, consistently, and without fatigue.

Here is what a real expertise document for OIDC deployment looks like in its entirety — every decision an SRE would make, written down precisely enough for an AI to execute without improvisation:

## Pre-flight: Scan the Repo First

Before generating anything, search the repo for existing setup to avoid collisions:
  .github/workflows/*.yml     — look for configure-aws-credentials or aws-actions
  infra/cdk/ or cdk.json      — existing CDK app
  iam-trust-policy.json       — manual policy file

If any exist, read them and build on what's there rather than creating duplicates.

## Collect Parameters

Gather from the user or infer from the repo:
  GITHUB_ORG / GITHUB_REPO    — git remote get-url origin
  GITHUB_BRANCH               — current branch, default main
  AWS services to deploy to   — ask if not obvious (S3, CloudFront dist ID, etc.)
  CDK output path             — default infra/cdk/
  Workflow output path        — default .github/workflows/deploy.yml

## Workflow

### 1. CDK Stack — OIDC Provider + IAM Role

Generate infra/cdk/github_deploy_stack.py. Place alongside:
  infra/cdk/app.py            — CDK entry point
  infra/cdk/requirements.txt  — aws-cdk-lib>=2.0.0 and constructs>=10.0.0
  infra/cdk/cdk.json          — {"app": "python app.py"}
  infra/cdk/.gitignore        — cdk.out/, .venv/, __pycache__/

### 2. GitHub Actions Workflow

Generate .github/workflows/deploy.yml.
Fill in deploy steps specific to this repo's target services.

### 3. GitHub Secrets

Tell the user to add in Settings > Secrets and variables > Actions:
  AWS_ROLE_ARN                — DeployRoleArn output from cdk deploy
  Any service-specific secrets — e.g. CLOUDFRONT_DISTRIBUTION_ID

### 4. One-Time AWS Bootstrap

  cd infra/cdk
  python -m venv .venv && source .venv/bin/activate
  pip install -r requirements.txt
  cdk deploy

Then copy DeployRoleArn from the output into AWS_ROLE_ARN.

## Security Non-Negotiables

- StringEquals NOT StringLike in the trust policy condition.
  StringLike with a wildcard allows forks to assume the role.
  StringEquals with the full ref does not.
- self.account in CDK for resource ARNs — never hardcode the account ID
  in files committed to a public repo.
- Least privilege — only the permissions the workflow actually needs.
  No s3:* or *.

## Existing OIDC Provider Edge Case

If token.actions.githubusercontent.com already exists in the account,
creating a second one will fail. Include this as a comment in the CDK stack:

  # oidc_provider = iam.OpenIdConnectProvider.from_open_id_connect_provider_arn(
  #     self, "GitHubOidcProvider",
  #     f"arn:aws:iam::{self.account}:oidc-provider/token.actions.githubusercontent.com"
  # )
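
The Collect Parameters step above is equally mechanical. As a hypothetical sketch, inferring GITHUB_ORG and GITHUB_REPO from the output of `git remote get-url origin`, covering both SSH and HTTPS remote forms:

```python
import re

def parse_github_remote(url: str) -> tuple[str, str]:
    """Extract (org, repo) from a GitHub remote URL.

    Handles both forms `git remote get-url origin` can return:
      git@github.com:acme/site.git      (SSH)
      https://github.com/acme/site.git  (HTTPS)
    """
    m = re.search(r"github\.com[:/]([^/]+)/([^/]+?)(?:\.git)?/?$", url)
    if not m:
        raise ValueError(f"not a GitHub remote: {url}")
    return m.group(1), m.group(2)
```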

That document is the product. It is not a how-to guide for a human to read. It is a precise specification of what a senior SRE would do — scan first, collect parameters, generate infrastructure code, handle the edge case, enforce the security constraints — expressed concisely enough to be handed off and executed without interpretation errors. A developer who has never heard of OIDC can invoke this and get a production-grade result. An AI executing it cannot accidentally choose StringLike instead of StringEquals, because the choice is already made.
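
The shared-account edge case is also decidable in code rather than by improvisation. A sketch, assuming a hypothetical helper that is handed the account's existing provider ARNs (for instance from `aws iam list-open-id-connect-providers`):

```python
GITHUB_OIDC_HOST = "token.actions.githubusercontent.com"

def resolve_oidc_provider(existing_arns: list[str],
                          account_id: str) -> tuple[str, bool]:
    """Return (provider_arn, create) for the GitHub OIDC provider.

    In a shared AWS account the provider may already exist; creating a
    second one fails, so the CDK stack should import the existing ARN
    instead of declaring a new provider.
    """
    arn = f"arn:aws:iam::{account_id}:oidc-provider/{GITHUB_OIDC_HOST}"
    if arn in existing_arns:
        return arn, False  # import via from_open_id_connect_provider_arn
    return arn, True       # safe to create a new provider
```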

The Broader Pattern: Every SRE Superpower Is Extractable

Keyless OIDC deployment is one domain. It is a proof of concept for something much larger. Consider the full catalog of hard-won SRE and DevOps expertise that exists only in the heads of specialists — knowledge that takes quarters or years to accumulate — and ask how much of it is amenable to the same extraction:

- Infrastructure & Security
- Kubernetes & Containers
- Observability & Incident Response
- Deployment Patterns

Each of these is a domain waiting to be extracted. The pattern is identical: encode an expert's decision-making process, security constraints, and failure mode awareness into a structured document that an AI can execute reliably — and any developer can trigger with a simple request, without needing years of context to do it safely.

The Long-Term Implications: Superpower Commoditization

Let us be direct about what this means for the value of specialized engineering roles.

For most of the history of software infrastructure, expertise compounded over time. An SRE who had been running production systems for a decade was worth meaningfully more than one who had been doing it for two years — not because of credentials or titles, but because the decade-long SRE had accumulated a dense map of failure modes, edge cases, and non-obvious constraints that the two-year engineer simply had not encountered yet. That accumulated knowledge was the economic moat.

AI skills erode that moat. Not immediately, and not completely — but directionally and inevitably.

"The question is not whether a developer following a codified expertise document produces the same output as a senior SRE. They do. The question is what the senior SRE does next — once the work that used to take days of specialized knowledge can be completed in 15 minutes by anyone."

The Three Scenarios

Organizations will respond to this shift along a spectrum. Three scenarios are already visible:

| Scenario | What Happens | Who Benefits |
| --- | --- | --- |
| Skill consumers only | Teams use existing skills to eliminate work that required specialists. Headcount doesn't grow with system complexity. | Engineering managers, small startups |
| Skill producers + consumers | Senior engineers shift from doing repeated work to encoding it into skills. Output multiplies across the organization. | Platform teams, staff engineers |
| Skill economy participants | Skills become shared assets — internal marketplaces, open-source skill libraries, consultancies that sell curated skill sets. | Skill authors, tooling companies |

The organizations that only consume skills will see immediate productivity gains and compressed headcount requirements for routine infrastructure work. The organizations that produce skills will build a compounding advantage — each skill their engineers write multiplies the effective output of every developer who invokes it.

What Gets Commoditized vs. What Remains Rare

The skills that commoditize first are the well-defined, high-repeatability tasks — the same setup performed across thousands of teams with minor variation. OIDC deployment is a perfect example: the problem is identical for every team deploying to AWS from GitHub, the security constraints are fixed, and the correct implementation is well-understood. A skill encodes the correct answer once and makes it universally accessible.

What does not commoditize as easily is judgment: diagnosing novel failures, designing systems that do not yet have a playbook, and deciding which expertise is worth encoding in the first place.

The shape of valuable engineering work is shifting — from knowing how to do things correctly to knowing what things need to be done, and then verifying that the AI did them right.

The Value Inversion

Here is the uncomfortable inversion: the senior engineers whose knowledge is most valuable to encode into skills are also the ones who will see the most direct impact on the market value of their expertise once it is encoded. The SRE who writes the definitive Kubernetes RBAC skill has made that knowledge accessible to every developer in their organization — and, if the skill is published, to every developer everywhere. The scarcity of that knowledge, which was the economic basis for their premium compensation, no longer exists in the same form.

The structural tension: Organizations benefit maximally when their most expert engineers encode knowledge into skills. Those engineers benefit individually from not doing so — their expertise remains scarce and therefore valuable. This tension will define the next phase of engineering compensation structures.

Some resolution mechanisms are already emerging: skill authorship as a career track, attribution models that credit engineers for the downstream productivity their skills generate, skill-as-product business models where the expertise lives in the documentation layer rather than in any individual's head.

Where This Goes

Keyless OIDC deployment is not an endpoint. It is an early demonstration of a pattern that will propagate through every specialized domain in software engineering. The question for every SRE, DevOps engineer, and developer today is not whether this happens — it is whether they are the ones encoding the expertise or waiting to be replaced by those who did.

The engineers who will remain valuable in this environment are not the ones who most jealously guard their accumulated knowledge. They are the ones who understand the craft of knowledge extraction — who can look at a complex, nuanced, failure-mode-laden domain and produce a structured document that correctly encodes it, with the edge cases handled, the security constraints baked in, and the failure modes anticipated.

That is a different skill than running infrastructure. It is more abstract, more valuable at scale, and significantly harder to commoditize — because it requires having been through enough failures to know which failures to protect against.

"The superpower of the next decade is not knowing how to do things. It is knowing how to teach an AI to do things correctly — with all the hard-won context that took years to accumulate encoded into 200 lines of Markdown."

A properly codified OIDC deployment setup takes roughly 15 minutes to execute on a fresh AWS account and GitHub repository. The SRE knowledge encoded in its steps took years to accumulate. That asymmetry — years of expertise becoming 15 minutes of execution — is the economic fact that reshapes what it means to be a specialist in infrastructure engineering.

The commoditization is real. The opportunity is equally real. Which side of it you land on depends on whether you are writing the skills — or just running the prompts.