Vibecoding · 6 min read · May 8, 2026

Should You Let Claude Code or Cursor Run Your EAS Builds?

Where the trust line should sit when AI assistants drive Expo Application Services — what's safe to delegate, what to do yourself, and the specific commands that have caused the most damage.

Written by
Kaspar Noor
The short answer

Yes for builds, mostly. No for credentials, signing, and submissions. Production releases should always be a human-pressed button with the real release notes typed by a human, even if the model assembled everything that led up to it.

EAS Build is one of the most automatable parts of the mobile workflow, and AI coding tools are eager to run it. That's mostly good — but the destructive failure modes are concentrated in a narrow set of commands, and the model's confidence does not always track with the reversibility of the action.

Here's the trust map I've settled on.

What's safe to delegate

These are commands the model can run without supervision in most projects, because the worst-case outcome is a wasted build minute or a clearly-failing log:

  • eas build --profile development --platform ios|android — dev clients are throwaway artifacts
  • eas build --profile preview --platform ios|android — internal-only, signed with internal credentials
  • eas update --branch <name> for non-production branches
  • Any local build or simulator launch
  • Any prebuild operation in a fresh project (the model will regenerate native folders if it goes sideways)

The pattern: if a build fails, you get a log. If it succeeds, you get an internal artifact. Nothing customer-facing happens.
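Most of this split can live in configuration. A minimal eas.json along these lines (a sketch using the standard profile fields; channel names are the ones from this article, so adjust to your project) keeps development and preview artifacts internal by construction:

```json
{
  "build": {
    "development": {
      "developmentClient": true,
      "distribution": "internal"
    },
    "preview": {
      "distribution": "internal",
      "channel": "preview"
    },
    "production": {
      "channel": "production",
      "autoIncrement": true
    }
  }
}
```

With channels baked into the build profiles, a preview build can't quietly target the production update channel: the model would have to edit eas.json first, which is a visible diff.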

What to do yourself, every time

These are the operations where I always take the keyboard, even when I'm pair-programming with Claude Code or Cursor:

  • eas build --profile production --platform <platform> — production builds
  • eas submit --platform ios and eas submit --platform android — actual store submission
  • eas credentials and any flow that creates, deletes, or rotates signing keys
  • eas update --branch production — the channel real users hit
  • eas channel:edit production --branch <something> — a one-line command that can break every install

The reason isn't paranoia. It's that the failure modes are slow to surface and expensive to reverse. A bad production update goes out instantly. A deleted distribution certificate locks you out of submitting until Apple regenerates one. A wrong --non-interactive flag combined with a misconfigured profile can submit a build to TestFlight that you didn't mean to send.

The commands that have caused real damage

Three I've seen go wrong in the wild, all driven by AI assistants moving too fast:

eas credentials --remove followed by re-creating credentials

The model, trying to fix a code-signing error, removes a working distribution certificate and asks Expo to generate a new one. Expo can. But anything signed against the old one — TestFlight builds in review, in particular — gets confused, and the next submission asks Apple for a fresh provisioning profile that takes hours to regenerate cleanly. Always read what eas credentials is about to do before agreeing.

eas update --branch production from the wrong working state

The model finishes a feature branch, runs eas update --branch production to "test it on the production channel," and ships work-in-progress code to every paying user. The channel name is just a string; nothing prevents this.

The fix is a rule in CLAUDE.md along the lines of:

```markdown
## Things to never do
- NEVER run `eas update --branch production` — production updates are a human action
- For testing updates, use `--branch staging` or your feature branch name
```

The model respects this almost universally if it's stated explicitly.
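For teams that want a hard stop rather than a convention, a thin wrapper in front of the CLI works too. This `eas_guarded` function is a hypothetical sketch, not an eas-cli feature; in real use, the final echo would be replaced by dispatching to the actual binary:

```shell
#!/usr/bin/env bash
# Hypothetical guard wrapper -- not an eas-cli feature. Put it ahead of
# the real `eas` on PATH (or alias it) in the assistant's shell.
eas_guarded() {
  if [ "$1" = "update" ]; then
    for arg in "$@"; do
      if [ "$arg" = "production" ]; then
        echo "blocked: production updates are a human action" >&2
        return 1
      fi
    done
  fi
  # In real use, dispatch to the actual CLI here: command eas "$@"
  echo "would run: eas $*"
}

allowed=$(eas_guarded update --branch staging)
blocked=$(eas_guarded update --branch production 2>/dev/null || echo "blocked")
echo "$allowed / $blocked"
```

The difference from the CLAUDE.md rule: this fails loudly even when the model forgets the policy mid-session.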

eas build --auto-submit on production profile

--auto-submit can be useful in CI pipelines you've audited. It is dangerous when an AI assistant flags it on a one-off build to "save you a step." The build goes to TestFlight or Play Store internal testing immediately, sometimes before you've even reviewed the build artifact yourself.

If you use --auto-submit, gate it behind a pipeline you wrote, not a prompt response.
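As a sketch of what "a pipeline you wrote" can look like: a workflow that only runs on a human-pushed tag, using Expo's published GitHub Action (the tag pattern and secret name here are illustrative assumptions):

```yaml
name: release
on:
  push:
    tags: ["v*"]   # only a human-pushed tag starts a release
jobs:
  build-and-submit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: expo/expo-github-action@v8
        with:
          eas-version: latest
          token: ${{ secrets.EXPO_TOKEN }}
      - run: eas build --platform ios --profile production --non-interactive --auto-submit
```

Here --auto-submit is fine, because the trigger is the tag, and the tag is yours.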

What good delegation looks like

The pattern that actually works in practice is something like:

  1. You define the goal — "ship the auth fix to internal testing."
  2. The model proposes a plan — make sure the change is on the right branch, run a preview build, wait for the artifact, and tell you when it's done.
  3. You confirm the plan, the model executes the safe parts, and pings you for any production-level command.
  4. You press the actual submit button.

The contract: the model handles the boring middle, you handle the boundaries.

Setting it up in CLAUDE.md

A short policy section is enough:

```markdown
## EAS commands

Safe to run unsupervised:
- eas build --profile development|preview --platform <platform>
- eas update --branch <not-production>
- Any local prebuild

Always ask before running:
- eas build --profile production
- eas submit (any platform)
- eas credentials (any subcommand)
- eas update --branch production
- eas channel:edit production
- Any --auto-submit flag
- Any --non-interactive flag combined with credentials or production
```

The "always ask" list is short, but it covers every command that has caused real damage in projects I've watched.
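If you want the same policy enforced mechanically as well as stated in CLAUDE.md, the two lists reduce to a few substring checks. This `classify` function is my own sketch, not part of eas-cli (and it's deliberately conservative: it flags every --non-interactive use, not only the risky combinations):

```shell
# Hypothetical classifier mirroring the policy lists above -- my sketch,
# not part of eas-cli. Conservative: flags every --non-interactive use.
classify() {
  case "$*" in
    *"--profile production"*|*" submit"*|*" credentials"*) echo "ask" ;;
    *"--branch production"*|*"channel:edit production"*)   echo "ask" ;;
    *"--auto-submit"*|*"--non-interactive"*)               echo "ask" ;;
    *)                                                     echo "safe" ;;
  esac
}

safe_result=$(classify "eas build --profile preview --platform android")
ask_result=$(classify "eas submit --platform ios")
echo "$safe_result $ask_result"
```

A pre-command hook that runs this and pauses on "ask" turns the written policy into a checkpoint the model can't skip.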

CI is a different conversation

Everything above is about interactive sessions where the model has tool access. CI is different — there, the credentials are already provisioned, the workflow is reviewed, and the trust boundary is the pipeline itself. If your release flow is a GitHub Action that runs on a tag, having Claude Code edit the workflow file is fine. Having Claude Code push the tag and the production update directly is not.

What this looks like with a good boilerplate

Most of the friction here disappears when the project is already set up correctly. With Shipnative, the EAS profiles are pre-configured, the production / staging / development channels are separated by name, and the AGENTS.md files include the policy section above. The model doesn't have to invent its own EAS strategy because the strategy is already in the repo.

If you're building from scratch, write the policy before you write the first auth screen. The cost is fifteen minutes; the alternative is the one bad command that wipes a Tuesday.

For more on AI-driven mobile workflows, see AI Coding for React Native with Cursor and Claude. For the specific gotchas around store submission, see App Store Rejection Reasons in 2026.

Ready to ship faster?

Get lifetime access to Shipnative for a one-time payment of $99.