Most foundation models were trained on a snapshot of the React Native ecosystem that's six to twelve months old. Expo ships a new SDK every few months and quietly deprecates modules along the way. The model doesn't know what it doesn't know — so it cheerfully imports expo-permissions (gone since SDK 49) or calls AsyncStorage from react-native (moved out years ago). The fix isn't smarter prompting. It's giving the model the current ground truth.
I lose more time to this than to any other AI-coding failure mode. The model writes a beautiful component, the import is wrong, the build fails, and you re-prompt with the error. Multiply that by twenty screens and you have a lost afternoon.
Here's the workflow I've landed on after debugging it across half a dozen projects.
## What "ground truth" looks like for a React Native repo
The model should never have to guess at:
- The exact Expo SDK version your project is on.
- Which modules are installed and at which version.
- The current import paths (`AsyncStorage` from `@react-native-async-storage/async-storage`, not from `react-native`).
- Project-specific patterns — your auth client, your routing, your styling system.
If any of those is implicit, the model fills the gap with whatever it remembers. And what it remembers is older than your codebase.
## The five things that fix 90% of it

### 1. Pin the SDK in the project context file
Every repo I touch now has a `CLAUDE.md` or `AGENTS.md` in the root with a short, factual stack section:
```markdown
## Stack (current as of 2026-05-08)
- Expo SDK 55, React Native 0.81
- expo-router (file-based routing — never use react-navigation directly)
- @react-native-async-storage/async-storage (never import AsyncStorage from "react-native")
- expo-image (never use react-native Image for new code)
- expo-secure-store for tokens
- Zustand for client state, TanStack Query for server state
- nativewind v4 for styling — Tailwind class strings on native components
```
The "never" lines do most of the work. Without them, the model picks up a pattern from somewhere else in the file and runs with it. With them, it asks before deviating.
### 2. Show, don't tell, the import paths
If you have `vibe/` or `docs/` directories the model can read, drop in an `imports.md`:
```markdown
# Imports — copy these exactly

# Auth
import { useSupabase } from "@/lib/supabase"
import { useUser } from "@/lib/auth"

# Storage
import AsyncStorage from "@react-native-async-storage/async-storage"
import * as SecureStore from "expo-secure-store"

# Routing
import { useLocalSearchParams, router } from "expo-router"

# Image
import { Image } from "expo-image" // NOT from "react-native"
```
When a prompt asks the model to add storage, it sees this file and uses the right import. When a prompt asks for an image component, it knows which one.
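The same "never" rules can also be enforced mechanically, so a wrong import fails lint even when the model ignores the docs. A sketch using ESLint's core `no-restricted-imports` rule — the restricted module names here are examples matching this post's stack; adjust them to yours:

```json
{
  "rules": {
    "no-restricted-imports": ["error", {
      "paths": [
        {
          "name": "react-native",
          "importNames": ["Image"],
          "message": "Use expo-image instead."
        },
        {
          "name": "@react-navigation/native",
          "message": "This project uses expo-router; don't import react-navigation directly."
        }
      ]
    }]
  }
}
```

A lint error is also a better correction signal than a re-prompt: the agent sees it in the build output and fixes the import itself.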
### 3. Lock the model into the lockfile when it hesitates
For modules that have multiple syntaxes across versions (RevenueCat, Reanimated, expo-notifications), paste the relevant section of `package.json` into the prompt for that task:

```text
We're on:
- expo-notifications: 0.32.0
- react-native-reanimated: 3.18.0
- react-native-purchases: 9.4.0
Use the API for those exact versions.
```
It's three lines. It eliminates 80% of "this looks right but the function is renamed in v3" failures.
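If you paste this block often, it's easy to generate from `package.json` instead of hand-copying. A sketch — `pinnedVersions` and the watchlist are my own names, not part of any tool:

```typescript
// Build a "We're on:" prompt block from a package.json-shaped object,
// stripping ^ and ~ so the model sees exact versions.
type PackageJson = { dependencies?: Record<string, string> };

function pinnedVersions(pkg: PackageJson, watchlist: string[]): string {
  const deps = pkg.dependencies ?? {};
  const lines = watchlist
    .filter((name) => name in deps) // skip packages not actually installed
    .map((name) => `- ${name}: ${deps[name].replace(/^[\^~]/, "")}`);
  return ["We're on:", ...lines, "Use the API for those exact versions."].join("\n");
}

// Example with an inline package.json-shaped object; in practice,
// read and JSON.parse your real package.json.
const pkg: PackageJson = {
  dependencies: {
    "expo-notifications": "~0.32.0",
    "react-native-reanimated": "~3.18.0",
    "react-native-purchases": "^9.4.0",
  },
};
console.log(pinnedVersions(pkg, Object.keys(pkg.dependencies ?? {})));
```

Pipe the output straight into your prompt for any task touching those modules.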
### 4. When the model is wrong, correct the file, not the prompt
If the model wrote `import { AsyncStorage } from "react-native"` once, it'll do it again next session. Fixing it inline is cheap; updating `CLAUDE.md` to say "AsyncStorage comes from `@react-native-async-storage/async-storage`, never from `react-native`" is durable.
This is the single highest-leverage habit I've picked up. Every wrong assumption that survives a session is a memory leak across future sessions.
### 5. Use Context7 (or the equivalent) for live docs
For libraries that move fast — Expo Router, Reanimated, RevenueCat — fetching current docs at prompt time produces dramatically better output than relying on training data. Both Cursor and Claude Code can wire this up through MCP servers or rules. In 2026, if the model's snapshot is more than a few months stale, using one isn't optional.
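As one concrete shape: Context7 ships an MCP server that can be registered in a project-level `.mcp.json` for Claude Code. This is a sketch under the assumption that the package is still published as `@upstash/context7-mcp` — check Context7's own docs for the current package name and transport options:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

Once registered, adding "use context7" to a prompt pulls version-matched docs into context instead of the model's stale snapshot.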
## What I keep in my `CLAUDE.md`, concretely
This is the actual structure I use, in roughly the order it should appear:
```markdown
# Project name

One-line description of what the app does.

## Stack
[exact versions, exact module names, "never use X" rules]

## Directory layout
[which folder owns what — auth, paywall, navigation, server, db]

## How auth works
[which files matter, in what order, what the contract is]

## How payments work
[paywall flow, entitlement check, restore behavior, who lives where]

## Conventions
[error handling, logging, when to use Zustand vs context, naming]

## Things AI assistants get wrong here
[explicit list — past tense — so it doesn't happen again]
```
The "Things AI assistants get wrong here" section is the cheat code. Every time the model produces a bad pattern, you add a line. Over a few weeks, the file becomes the project-specific equivalent of a senior engineer's "no, we don't do that here."
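For concreteness, entries in that section can be as terse as the following (these examples reuse mistakes mentioned earlier in this post; write your own from your project's actual failures):

```markdown
## Things AI assistants get wrong here
- Imported AsyncStorage from "react-native" (it lives in @react-native-async-storage/async-storage)
- Imported expo-permissions (removed; request permissions via each module, e.g. expo-camera)
- Used react-navigation APIs directly (this project is expo-router only)
```

One line per mistake, stated as a past error plus the correction, keeps the section scannable and cheap to maintain.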
## Things to avoid putting in the context file
The file works better when it's tight. Resist:
- Pasting the whole `package.json`. Pin the key versions; let the lockfile speak for the rest.
- Long prose explanations of why a decision was made. The model doesn't need history; it needs rules.
- Boilerplate language ("we value clean code"). It's noise.
- Anything you might not actually enforce. If you write "we never use `any`" and the codebase is sprinkled with `any`, the model will trust the codebase and ignore the rule.
## When the model still hallucinates
Sometimes you've done everything right and the model still writes something that looks fine but doesn't compile. The shortest path back is:
- Run the build. Read the actual error.
- Paste the error and the offending file into the prompt with one sentence: "this fails to build. The version of [package] is X. Show me the correct call."
- If the model still gets it wrong, the model is the wrong tool for that specific question — go to the package's GitHub releases page or its docs, not your prompt.
There's a subtle trap where you keep re-prompting on the same hallucination because the model is confident. Set a personal rule: two wrong attempts and you go check the source.
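"Run the build" doesn't have to mean a full native build. In a typical Expo + TypeScript project, a typecheck surfaces most wrong import paths and renamed APIs in seconds — a sketch, assuming a standard Expo setup with TypeScript configured:

```shell
# Fast feedback loop for AI-generated patches in an Expo project:
npx tsc --noEmit    # catches wrong import paths and renamed APIs without building
npx expo-doctor     # flags installed dependency versions that mismatch the SDK
```

The error output from either command is exactly the kind of concrete artifact worth pasting back into the prompt.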
## What this looks like when it's working
A well-set-up React Native repo with a tight CLAUDE.md and current docs in context produces patches that build on the first try most of the time. New screens, new mutations, new auth flows — the model has enough scaffolding to get the imports right and the patterns consistent.
If you're starting from scratch, the Shipnative boilerplate ships with the AGENTS.md and per-directory context already written for an Expo + Supabase or Convex + RevenueCat stack — including the "never use X" lines that prevent the most common drift. If you'd rather build the context yourself, the structure above is what I'd start with.
For the broader set of habits that make AI-driven mobile work productive, see AI Coding for React Native with Cursor and Claude and Layered Context Docs for Vibecoding Monorepos.
