Effect-TS replaced try/catch and ad-hoc retries across 3 Shopify backends and cut error-handling code by roughly 40 percent
Pattern 1: Effect.tryPromise wraps Shopify Admin API calls with a typed error channel so callers see exactly what can fail
Pattern 2: Effect.retry plus Schedule.exponential handles 429 rate limits without hand-rolled sleep loops
Pattern 3: Effect.scoped guarantees Postgres pool cleanup even on early returns or thrown panics
Pattern 4: Layer-based DI swaps real Shopify clients for in-memory mocks in tests with one line
I avoided Effect-TS for about a year. The docs looked like Haskell cosplay and the import list felt like a tax. Then I had a Shopify webhook handler that lost orders during a rate-limit spike, and I gave up trying to bolt retries onto raw fetch calls. Three backends later, Effect is the thing I reach for first.
This is not a sales pitch. It is the four patterns I actually use, with code that runs in production behind raxxo.shop and two client stores.
Why Effect over plain async/await and try/catch
Plain TypeScript hides failures. A function typed Promise<Order> could throw a network error, a validation error, a 401, a 429, or a Shopify GraphQL user error nested inside a 200 response. The compiler tells you nothing. You learn about the failure modes when production tells you.
Effect makes the error channel a first-class generic. A function returning Effect.Effect<Success, Error, Requirements> says exactly which failures can happen and which dependencies it needs. Forget to handle one and the type checker complains before deploy.
You get four things from the runtime: typed errors, structured concurrency where cancellation propagates the right way, retry combinators that compose, and a DI system that does not need a framework. You do not have to use all four. Most of my code uses two of them and is still better than the try/catch version was.
The learning curve is real. The first week feels like writing Lisp by hand. By week two the muscle memory kicks in and you stop typing try { reflexively. I won't pretend it is free.
If you are still on the fence about backend type discipline, TypeScript Decorators Finally Shipped covers the broader 2026 type-safety shift that pushed me to try Effect.
Pattern 1: Effect.tryPromise for Shopify Admin API calls
Every Shopify call has at least three failure modes: network blew up, the API returned a non-200, or the API returned a 200 with userErrors in the JSON body. Effect.tryPromise lets you tag each one.
import { Effect, Data } from "effect"

class ShopifyAuthError extends Data.TaggedError("ShopifyAuthError")<{
  status: number
}> {}

class ShopifyNetworkError extends Data.TaggedError("ShopifyNetworkError")<{
  cause: unknown
}> {}

class ShopifyUserError extends Data.TaggedError("ShopifyUserError")<{
  errors: ReadonlyArray<{ field: string; message: string }>
}> {}
const getOrder = (orderId: string) =>
  Effect.tryPromise({
    try: () =>
      fetch(`https://${SHOP}/admin/api/2026-01/orders/${orderId}.json`, {
        headers: { "X-Shopify-Access-Token": TOKEN }
      }),
    catch: (cause) => new ShopifyNetworkError({ cause })
  }).pipe(
    Effect.flatMap((res) =>
      res.status === 401 || res.status === 403
        ? Effect.fail(new ShopifyAuthError({ status: res.status }))
        : Effect.tryPromise({
            try: () => res.json() as Promise<{ order: Order; userErrors?: any[] }>,
            catch: (cause) => new ShopifyNetworkError({ cause })
          })
    ),
    Effect.flatMap((body) =>
      body.userErrors?.length
        ? Effect.fail(new ShopifyUserError({ errors: body.userErrors }))
        : Effect.succeed(body.order)
    )
  )
The signature Effect.Effect<Order, ShopifyNetworkError | ShopifyAuthError | ShopifyUserError> is the contract. Anywhere you use getOrder, the compiler forces you to either handle each tagged error or let it bubble. No more silent 401s pretending to be 500s in your dashboard.
Data.TaggedError is just a class with a _tag discriminant. The pattern matching uses that tag, which means Effect.catchTag("ShopifyAuthError", ...) is fully type-safe. You can refresh tokens on auth errors and let the rest fall through.
Pattern 2: Effect.retry with Schedule for Shopify rate limits
Shopify Admin GraphQL uses a leaky-bucket cost system. You will hit THROTTLED errors if you batch enough. The naive fix is a sleep loop. The Effect fix is a Schedule.
import { Effect, Schedule, Duration } from "effect"

const retryPolicy = Schedule.exponential(Duration.seconds(1)).pipe(
  // union takes the smaller delay, capping the backoff at one minute
  Schedule.union(Schedule.spaced(Duration.minutes(1))),
  // intersect with recurs(5) stops after five retries
  Schedule.intersect(Schedule.recurs(5))
)
const isRetryable = (err: unknown) =>
  err instanceof ShopifyNetworkError ||
  (err instanceof ShopifyUserError &&
    err.errors.some((e) => e.message === "THROTTLED"))

const getOrderResilient = (orderId: string) =>
  getOrder(orderId).pipe(
    Effect.retry({
      schedule: retryPolicy,
      while: isRetryable
    }),
    Effect.tapError((err) =>
      Effect.logError(`getOrder ${orderId} failed after retries`, err)
    )
  )
Schedule.exponential doubles the wait each attempt starting at one second. Schedule.recurs(5) caps at five tries. Schedule.spaced puts an upper bound of one minute between attempts so you never wait twenty minutes. The while predicate decides what counts as retryable, so a 401 fails immediately while a THROTTLED keeps trying.
What I actually like: schedules compose. I have one retryPolicy shared across every Shopify call, one for Postgres reconnects, and one for Stripe webhook redrives. They are values, not framework config. I unit-test them by piping Effect.runPromise with a mock failure stream.
I covered a related rate-limiting story in The 5 Vercel Cron Jobs That Keep My Studio Running where the cron retry loop was the bug, not the feature.
Pattern 3: Effect.scoped resource lifetimes for Postgres pools
Postgres connections leak when you forget to release them. Try/finally works until someone returns early. Effect has Scope, which guarantees cleanup even on cancellation, panics, or early exits.
import { Effect, Layer } from "effect"
import { Pool } from "pg"

class PgPool extends Effect.Tag("PgPool")<PgPool, Pool>() {}

const makePool = Effect.acquireRelease(
  Effect.sync(() => new Pool({ connectionString: process.env.DATABASE_URL })),
  (pool) => Effect.promise(() => pool.end())
)

const PgPoolLive = Layer.scoped(PgPool, makePool)

const getOrderRow = (orderId: string) =>
  Effect.gen(function* () {
    const pool = yield* PgPool
    const client = yield* Effect.acquireRelease(
      Effect.promise(() => pool.connect()),
      (c) => Effect.sync(() => c.release())
    )
    const result = yield* Effect.promise(() =>
      client.query("SELECT * FROM orders WHERE shopify_id = $1", [orderId])
    )
    return result.rows[0]
  })
// In your handler:
const program = getOrderRow("gid://shopify/Order/12345").pipe(
  Effect.provide(PgPoolLive),
  Effect.scoped
)
The pool is created once when the layer is built and pool.end() runs when the scope closes. The per-request client is acquired and released inside Effect.gen. If the query throws, the client still goes back to the pool. If the request is cancelled mid-flight, same thing. I have not seen a leaked connection since I migrated.
The mental model: acquireRelease is try/finally for async. Scope is the bag that holds all the finally blocks until the bag closes. Layer.scoped says "build this resource once per scope and tear down at the end."
If you are picking a Postgres host and want to know why I run Neon in production, Neon Database Branching Saved Me 200 EUR Every Month has the cost breakdown.
Pattern 4: Layer-based DI for swapping mocks in tests
The fourth pattern is the one that sold me. Most TypeScript codebases either use a DI container framework (heavyweight) or pass dependencies as function arguments (verbose). Effect's Layer is a typed graph of constructors. You build a Live layer for production and a Test layer for tests, and the call sites do not change.
import { Effect, Layer, Context } from "effect"

interface ShopifyClient {
  getOrder: (
    id: string
  ) => Effect.Effect<Order, ShopifyNetworkError | ShopifyAuthError | ShopifyUserError>
}

const ShopifyClient = Context.GenericTag<ShopifyClient>("ShopifyClient")

const ShopifyClientLive = Layer.succeed(ShopifyClient, {
  getOrder: (id) => getOrder(id)
})

const ShopifyClientMock = Layer.succeed(ShopifyClient, {
  getOrder: (id) =>
    Effect.succeed({ id, total_price: "33.00", currency: "EUR" } as Order)
})

const computeDiscount = (orderId: string) =>
  Effect.gen(function* () {
    const shopify = yield* ShopifyClient
    const order = yield* shopify.getOrder(orderId)
    return Number(order.total_price) * 0.9
  })

// Production
Effect.runPromise(computeDiscount("123").pipe(Effect.provide(ShopifyClientLive)))

// Test
Effect.runPromise(computeDiscount("123").pipe(Effect.provide(ShopifyClientMock)))
computeDiscount does not know whether it is talking to a real Shopify or an in-memory fake. The test suite does not need to monkeypatch fetch or wire up a fake server. The DI graph is type-checked, so if you forget to provide a layer the compiler refuses to build the program.
Composing layers is the move that pays off long-term. Layer.mergeAll(ShopifyClientLive, PgPoolLive, RedisLive) builds the whole runtime in one expression. Swap any one for a mock and the rest stays the same. I run integration tests with a real Postgres but a mocked Shopify, which catches schema drift without hammering the Shopify API.
If you are setting up a Shopify backend from scratch, Shopify plus a small Effect runtime gets you further than a heavier framework would, in my experience.
Bottom line
Effect is not a religion. I still write plain TypeScript for short scripts, one-off migrations, and most of my React components. But for any service that handles money, retries network calls, manages connections, or needs a real test suite, Effect has earned its place.
The four patterns I keep coming back to: Effect.tryPromise for typed wrappers, Effect.retry plus Schedule for rate limits, Effect.scoped for resources, and Layer for DI. Start with one. The first two pay off in a single afternoon. The other two pay off the first time something breaks at 3am.
If you are building a serious Shopify or SaaS backend in 2026, the Claude Blueprint shipping kit walks through the runtime patterns I use across every project, including the Effect-based fetch layer you can drop into your own repo.