QBitFlow

Originally published at qbitflow.app

We Built On Two Chains From Day One — Adding Base Six Months Later Was a Weekend.

The Solana-vs-Ethereum debate is a trap for builders

You're building a crypto payments product in 2026. Which chain do you launch on?

Pick Solana, and a chunk of your users will tell you (often loudly, on Hacker News) that:

  • The validator set is too centralized
  • There's no mempool, so transactions silently drop during congestion
  • Light clients can't cryptographically verify state
  • "It's not even significantly cheaper than L2s anymore"

Pick Ethereum L1, and a different crowd tells you:

  • L1 fees are insane for anything under $50
  • Confirmation times are too slow for checkout UX
  • Your users pay $5 in gas to send $20

Pick a single L2 (Base, Arbitrum, Optimism), and you trade L1's problems for new ones:

  • Sequencer centralization
  • Bridge risk
  • Fragmented liquidity
  • "What if Coinbase deprecates Base?" is a real question merchants ask. Yes, really.

Every chain has tradeoffs. Picking one means inheriting all of its tradeoffs and all of its critics.

So we picked the two that mattered most — and built the codebase so adding more later wouldn't require a rewrite.


What "two chains from day one" actually meant

We launched QBitFlow with two chains live in the first version of the product:

  • Ethereum mainnet — for users who care about decentralization, censorship resistance, and the "real" Ethereum
  • Solana — for users who want sub-second confirmations and fees measured in fractions of a cent

A merchant signs up, picks which chains and tokens to accept (USDC on both? ETH on L1? SOL on Solana?), and customers get a hosted checkout where they pick what to pay with. The smart contracts handle the rest. No bridges. No swaps. The merchant receives payment in the same token the customer paid with.

That's it. No ideological lock-in for the merchant or the customer.
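
Purely as an illustration of the shape of that setup (not QBitFlow's actual API; the names and fields below are invented for the sketch):

```typescript
// Hypothetical merchant acceptance config: which chain/token pairs are
// accepted and where each settles. No bridges or swaps means the settlement
// token is always exactly what the customer paid with.
const acceptance = {
  ethereum: { tokens: ["USDC", "ETH"], settleTo: "0xMerchantMainnetAddress" },
  solana:   { tokens: ["USDC", "SOL"], settleTo: "MerchantSolanaAddress" },
} as const;
```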

The two-chain choice wasn't accidental. Ethereum and Solana represent the two largest stablecoin economies that don't share an execution model. Covering both meant we could plausibly serve almost any merchant who wanted crypto payments — and, more importantly, it forced the codebase to be honest about chain abstraction from the very first commit.


Then merchants asked for Base

Six months in, a clear pattern emerged in support tickets and sales calls:

"We want USDC. We want EVM-grade security so our users can pay from their existing Coinbase Smart Wallet. We want fees that don't kill micropayments. Do you have a plan for that?"

L1 Ethereum was too expensive for sub-$50 invoices. Solana was great on cost but a different ecosystem with its own wallet UX. Base — Coinbase's L2 — answered the merchant ask precisely: Ethereum's security model, Coinbase Smart Wallet onramp, USDC-native settlement, and fees in cents.

Adding Base took a weekend.

That sentence sounds like marketing, so let me show the work.


How a weekend was enough

Three reasons the Base addition was small instead of large.

1. Same EVM contracts, new deployment target

Base is an OP Stack L2 that runs the EVM. The smart contracts we'd already written, tested, and deployed on Ethereum mainnet were valid bytecode on Base. Adding Base on the contract side was a deployment script, a few configuration constants (chain ID, RPC endpoints, USDC contract address), and a deploy-and-verify run.
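
To make "a few configuration constants" concrete, here is roughly what the per-chain delta looks like. This is an illustrative sketch with invented field names, not our actual config, though the chain ID, public RPC, and USDC address shown are Base's publicly documented values:

```typescript
// Illustrative per-chain deployment config. Adding Base on the contract side
// was essentially filling in one of these and running deploy-and-verify.
interface EvmChainConfig {
  chainId: number;        // EIP-155 chain ID
  name: string;
  rpcUrls: string[];      // paid providers first, public fallback last
  usdcAddress: string;    // canonical USDC contract on this chain
  explorerApiUrl: string; // used by the contract-verification step
}

const base: EvmChainConfig = {
  chainId: 8453,                                 // Base mainnet
  name: "base",
  rpcUrls: ["https://mainnet.base.org"],         // plus paid providers in practice
  usdcAddress: "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913", // native USDC on Base
  explorerApiUrl: "https://api.basescan.org/api",
};
```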

Solana would not have been a weekend. Solana would have been weeks at minimum — different language, different account model, different signature scheme, different audit story. We did Solana on day one specifically so we never had to do it as a "later" project. Base was the easy case because we'd already paid the hard cost.

2. The codebase was structured for new chains, not for two chains

This is the part most "we built multi-chain from day one" posts gloss over. There's a meaningful difference between:

  • "We support Ethereum and Solana" — two specific chains hardcoded into your data model, your webhooks, your SDKs, your dashboard
  • "We support multiple chains, currently Ethereum and Solana" — chain identifiers as first-class values, chain-specific behaviors isolated behind interfaces, no "if EVM else Solana" branches scattered through the business logic

We aimed for the second from commit one. It cost more upfront because every "just do it for Ethereum first" temptation had to be resisted in favor of an extra layer of abstraction. But the payoff was that adding a chain later wasn't a refactor. It was filling in implementations behind interfaces that already existed.

When Base showed up, the codebase asked: "What's the chain ID? What's the USDC contract address? Which RPC providers do we trust?" It didn't ask: "How do I express the concept of an L2 in a system that previously only knew about L1 and Solana?" That second question is the one that takes a quarter to answer in most multi-chain retrofits. We'd already answered it.
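
To make that concrete, here is a simplified sketch of what "chain identifiers as first-class values, behaviors behind interfaces" can look like. It is our illustration of the pattern, not QBitFlow's actual code; every name below is invented:

```typescript
// Chain identifiers as first-class values: a closed union the rest of the
// system passes around, instead of implicit "if EVM else Solana" branches.
type Chain = "ethereum" | "base" | "solana";

interface Invoice {
  amountUsd: number;
  token: string;          // e.g. "USDC"
  merchantAddress: string;
}

// Everything chain-specific hides behind one interface; invoices, webhooks,
// and the dashboard only ever talk to a ChainAdapter.
interface ChainAdapter {
  readonly chain: Chain;
  isValidAddress(addr: string): boolean;
  estimateFeeUsd(invoice: Invoice): Promise<number>;
  buildCheckout(invoice: Invoice): Promise<string>; // hosted-checkout URL
  settlementStatus(txRef: string): Promise<"pending" | "confirmed" | "failed">;
}

// Registry keyed by chain identifier. Adding a chain is one new entry:
//   adapters.set("ethereum", new EvmAdapter(ethereumConfig));
//   adapters.set("base",     new EvmAdapter(baseConfig)); // same class, new config
//   adapters.set("solana",   new SolanaAdapter(solanaConfig));
const adapters = new Map<Chain, ChainAdapter>();
```

The "same class, new config" line in that registry comment is the Base weekend in miniature.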

3. EVM means real code reuse, not just contract reuse

Beyond the smart contracts themselves, a lot of the off-chain code path for Base was identical to Ethereum:

  • Transaction construction, signing, and submission
  • Event log parsing
  • Address validation
  • Webhook payload schemas
  • SDK methods
  • Dashboard wallet configuration

The L2-specific differences exist (different gas dynamics, different finality assumptions, sequencer-specific edge cases) but they're additions on top of an existing EVM code path, not parallel implementations. Adding Base was something like a 10% incremental code surface — not 50%, not 100%.

The reason this worked is that we'd already spent the day-one cost of separating "EVM-shaped logic" from "Ethereum-mainnet-specific logic." On day one, you do that for nobody — there's only one EVM chain. Six months later, when an EVM L2 shows up, that distinction pays for itself in a weekend.
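
One way to picture that split, again as an illustrative sketch rather than our actual code: any function whose only chain knowledge arrives through a config parameter is "EVM-shaped" by construction, because nothing inside it can assume mainnet. The example below uses ethers v6; the config shape matches the earlier sketch.

```typescript
import { JsonRpcProvider, id } from "ethers"; // ethers v6

// Same illustrative config shape as above.
interface EvmChainConfig {
  chainId: number;
  rpcUrls: string[];
  usdcAddress: string;
}

// "EVM-shaped" logic: point this at the mainnet config or the Base config and
// it behaves identically. The only mainnet-specific thing left is the config.
async function findUsdcTransfers(cfg: EvmChainConfig, fromBlock: number, toBlock: number) {
  const provider = new JsonRpcProvider(cfg.rpcUrls[0], cfg.chainId);
  return provider.getLogs({
    address: cfg.usdcAddress,
    topics: [id("Transfer(address,address,uint256)")], // ERC-20 Transfer topic
    fromBlock,
    toBlock,
  });
}
```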


What it actually cost us upfront

Two-chains-from-day-one wasn't free. Here's where it actually hurt — honestly.

Two execution models doubled the cognitive load

Every architectural decision had to satisfy both EVM and Solana semantics. ECDSA on one side, Ed25519 on the other. One address format and account model on one side, a different one with program-derived addresses on the other. Gas on one side, compute units and priority fees on the other. RPC reliability quirks differ across providers and chains.

If we'd only done EVM, we'd have shipped faster. We'd also have written code that quietly assumed EVM forever, and Solana would have become a "v2 rewrite" project that never quite got prioritized.

RPC reliability eats more time than you think

Every chain has RPC providers. Every RPC provider has outages. With two chains, you have 2x the surface area for "the customer's transaction looked successful but the webhook never fired." With three, it's 3x — and Base in particular has its own L2-specific edge cases (sequencer hiccups, batch posting delays, withdrawal-window confusion in user support).

We run redundant RPC connections per chain (multiple providers + a public fallback) with health-check rotation. Months of work nobody asks about until a webhook is missing.
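
The rotation itself is conceptually simple; what costs the months is the monitoring and the long tail of provider-specific failure modes. A stripped-down sketch of the failover idea (provider URLs are placeholders):

```typescript
// Simplified failover: try providers in health order, demote one that fails.
// Real versions add health probes, timeouts, and per-chain metrics.
class RpcPool {
  constructor(private urls: string[]) {}

  async call<T>(fn: (url: string) => Promise<T>): Promise<T> {
    let lastErr: unknown;
    for (const url of this.urls) {
      try {
        return await fn(url);
      } catch (err) {
        lastErr = err;
        // Demote the failing provider to the back of the rotation.
        this.urls = [...this.urls.filter((u) => u !== url), url];
      }
    }
    throw lastErr; // every provider failed; surface for alerting
  }
}

// One pool per chain: multiple paid providers plus a public fallback.
const baseRpc = new RpcPool([
  "https://base-rpc.provider-a.example", // hypothetical paid provider
  "https://base-rpc.provider-b.example", // hypothetical paid provider
  "https://mainnet.base.org",            // public fallback
]);
```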

Fee abstraction is harder than it looks

EVM has gas. Solana has compute units and priority fees. L2s have gas plus L1 data costs that fluctuate independently of L2 gas. Showing a merchant "this will cost about $X to your customer" needs real-time data feeds for every ecosystem and graceful degradation when they disagree.
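
The L2 case is the trickiest of the three because the quote has components that move on different clocks. A simplified sketch of the shape of that estimate (field names and the gas figure are illustrative; on OP Stack chains like Base, the L1 data fee is readable on-chain via the GasPriceOracle predeploy):

```typescript
// An L2 transfer costs L2 execution gas *plus* an L1 data fee for posting the
// batch to Ethereum; the two move independently, so both need live inputs.
interface L2FeeInputs {
  l2GasUsed: bigint;     // e.g. ~65_000n for an ERC-20 transfer (illustrative)
  l2GasPriceWei: bigint; // from the L2's own fee market
  l1DataFeeWei: bigint;  // on OP Stack chains, reported by the GasPriceOracle predeploy
  ethUsd: number;        // price feed; a third independently moving input
}

function estimateL2TransferUsd(f: L2FeeInputs): number {
  const totalWei = f.l2GasUsed * f.l2GasPriceWei + f.l1DataFeeWei;
  return (Number(totalWei) / 1e18) * f.ethUsd;
}
```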

Testing surface multiplies

Unit tests, integration tests, fork tests, Solana localnet tests, L2 sequencer simulations — every PR has to pass on every chain. CI takes longer with each chain added. Local dev environment setup is more involved for new contributors.

Documentation gets thin in unexpected places

Most blog posts, tutorials, and Stack Overflow answers assume EVM L1. When a Solana-first dev hits an issue, or an L2-first dev trips over an L1 assumption, the answer they need often doesn't exist anywhere yet, and we end up writing it ourselves.


On audits

A note on audits, because most multi-chain posts hand-wave this and we don't want to.

Our smart contracts are open-source on GitHub today. They have not been formally audited yet. Audits are real money — the kind of money that pencils out cleanly when you have meaningful merchant volume backing them, and feels expensive to commit pre-revenue when iteration speed still matters more than the audit certificate.

Our plan is straightforward: as merchant adoption grows and contract changes stabilize, we'll commission a formal audit across both EVM (covering Ethereum + Base — same bytecode, same audit) and Solana (separate review, different ecosystem). Doing both in the same window will be cheaper than doing them sequentially, and the EVM-on-EVM-L2 reuse means Base doesn't add meaningful audit cost beyond Ethereum.

Open-source-without-formal-audit isn't ideal. We're not pretending it is. It's the honest tradeoff for a small team optimizing for shipping over signaling. When the audit happens, we'll publish the report.


What we'd do differently

If we were starting QBitFlow today, we'd ship Ethereum, Solana, and Base in v1, even if Base demand wasn't fully proven yet.

Three reasons:

  1. The L2-specific code paths are real, and getting them in while the codebase is small is even cheaper than the weekend it took us six months in. A few extra days on day one beats a weekend later.
  2. Base merchants converted faster than any other chain we've added. The Coinbase Smart Wallet onramp halved customer-side friction overnight. We left months of revenue on the table by waiting.
  3. The infra delta would have been negligible upfront — same redundant RPC setup, same monitoring, just one extra connection per layer.

The original "multi-chain from day one" thesis was right. Our application of it was just slightly too conservative on the L2 side.


When multi-chain from day one is the wrong call

I'm not pretending this is universal advice. Multi-chain from day one is the right call only if:

  1. Payments are your core product. If chains are just a deployment target for your dapp's logic, pick one and ship.
  2. Your users care about chain choice. If they'll use whatever you give them, multi-chain is wasted effort.
  3. You can afford the upfront discipline. If you're solo-bootstrapping with $5K of runway and three weeks before a demo, ship on one chain and prove demand first. The discipline is real and the discipline costs time.

For us, all three were true. QBitFlow is a payment rail. Our users (merchants) absolutely care about which chain their customers can pay on. We chose to take the upfront discipline cost so we'd never have to take the upgrade-rewrite cost.


What's next

We'll add more chains as merchant demand justifies them. Polygon, Optimism, Arbitrum, and a few non-EVM chains are on our roadmap. Each new EVM L2 should be in the same weekend-of-work range as Base was. Non-EVM chains will be more — but the day-one discipline of "chain identifiers as first-class values, no hardcoded execution-model assumptions" makes them additions rather than rewrites.

If you're building anything that touches payments and you're stuck on the "which chain" question, our answer is this: don't pick one. Architect for several from the start, even if you only ship two — and structure your code for adding chains, not for the chains you've shipped. The cost of being wrong about one chain is much smaller than the cost of being locked into one. The cost of architecting for "the chains we currently support" instead of "any chain we might add" is the difference between a weekend's work and a quarter's work, six months later.


QBitFlow is a non-custodial crypto payment rail for one-time payments, subscriptions, and marketplace fee splits — live on Ethereum, Base, and Solana. Smart contracts are open-source on GitHub. Try it →

Top comments (6)

ItsEvilDuck

This is one of the cleanest articulations of “architecture as optionality” I’ve read on this platform, and the day-one-discipline-vs-later-rewrite framing is going to be sitting in my head for a while. The “chain identifiers as first-class values” principle in particular is the kind of small architectural choice that compounds in ways that don’t show up until you need it, and your post does the unusual service of making the compounding visible while it’s happening rather than only in retrospect.
I want to push on one part, less to disagree than to surface a wrinkle that affects how the principle generalizes. The post is written from the perspective of a payments rail as core product — chains are the substrate, multi-chain support is the value proposition, and architecting for “any chain we might add” pays off because adding chains is the product surface. I’m building something different: a catalog of consumer apps where payments are a substrate but not the product itself. The same “architect for many, ship two” discipline applies, but the calculus around it shifts in interesting ways. The cost of the upfront discipline is the same, but the payoff is delayed and indirect — I’m not adding chains because users are asking for them, I’m adding chains because the agent layer underneath my apps will eventually need a chain whose latency and fee profile matches what user-facing apps don’t need. The “merchants asked for Base” forcing function in your post is replaced, for me, by “the agent layer is going to need Solana whether anyone is asking for it yet or not.”
That difference makes me think there are actually two flavors of “multi-chain from day one” worth distinguishing. The merchant-driven version your post describes — where you architect for chains because you know your users will eventually want them. And what I’d call the substrate-driven version, where you architect for chains because the internal requirements of your system will eventually need them, even if no user is asking for it. Both lead to the same architectural discipline, but the upfront sell to a co-founder or investor is harder for the second one because the payoff isn’t visible until much later.
The closing principle — “the cost of being wrong about one chain is much smaller than the cost of being locked into one” — holds in both cases, which is probably the deeper point. Worth carrying around.
For full disclosure since you’ll see it shortly anyway: I’m in the middle of drafting a post that touches the same architectural territory from the catalog side, and I’d been intending to send it to you for a precision check before publishing. Reading this one is going to materially improve that draft, especially the section where I was probably under-pricing the upfront discipline cost. Thanks for shipping yours before I shipped mine.
— quackbuilds.com @itsevilduck 🦆

QBitFlow

This is the wrinkle the post needed and didn't have — thank you for surfacing it. The merchant-driven vs substrate-driven distinction explains why so many "multi-chain from day one" stories fall apart at the implementation layer: most are unconsciously merchant-driven, so when the substrate-driven case shows up later, the abstractions don't generalize.

Let me extend rather than disagree: I think both flavors collapse into a deeper one — capability-driven multi-chain. The chain set falls out of the operation set, not out of who's asking. For QBitFlow: accept any stablecoin, settle non-custodially, in seconds, at sub-cent cost when needed — no single chain delivers all four. For your catalog: agent-side latency, fee profile, settlement guarantees the user-facing layer can't compromise on. Different operations, same logic.

User-asks-vs-system-needs is then a forcing function on timing, not a flavor. Which is why your case is harder to sell upfront — merchant-driven discipline gets validated continuously by user requests; substrate-driven discipline has to survive a year of "why are we paying this complexity tax for a thing nobody asked for." Same architectural conclusion, very different organizational politics. Worth naming explicitly in your post — most engineering leaders won't grant the discipline budget without it.

On the precision-check offer: yes, please send it over when you're ready. Genuinely curious to read the catalog-side argument. I suspect substrate-driven has a stronger "weekend instead of quarter" payoff than ours did — user-facing catalogs typically bake in more state-shape assumptions than payment rails do, and refactoring those later is multiplicatively worse.

Your "two flavors" framing is going in my vocabulary regardless of whether I argue they collapse. Thanks for taking the time.

— Paul / @QBitFlowApp

ItsEvilDuck

Paul — thanks, and yes, the capability-driven framing is sharper than what I had. I tried to argue myself out of agreeing with it before writing this reply and couldn’t. Both my flavors collapse cleanly under it, and it produces a more useful design heuristic: enumerate the operations the system needs to perform, identify which capabilities each requires, then ask which chains deliver which capabilities. That’s a question you can answer with a spreadsheet. Who’s asking is a question you can only answer in retrospect, which is exactly why merchant-driven discipline appears to work and substrate-driven discipline appears to be a luxury — until the substrate-driven case becomes load-bearing and the spreadsheet was never built.
The organizational-politics observation is the part of your reply I’m going to be sitting with longest. You’ve named a problem I hadn’t articulated to myself, which is that the substrate-driven case carries an unfunded discipline tax for the entire interval between architectural commitment and capability payoff. QBitFlow’s discipline gets paid back every time a merchant asks for a new chain. Mine gets paid back the first time the agent layer hits a Base-shaped wall it can’t route around — which could be six months out, could be eighteen, could be never if the agent layer fails for unrelated reasons. The interval where the discipline looks like overhead is the interval where engineering leaders quietly start dismantling it. Same architectural conclusion, very different organizational politics is exactly right, and I’m going to pull that into the post explicitly because you’re correct that most readers won’t grant the discipline budget without seeing the politics named.
On the catalog-vs-rail comparison you raised at the end — I think you’re right that the “weekend instead of quarter” payoff is structurally larger on the catalog side, and worse, that I won’t get to find out empirically. The catalog-side refactor cost compounds across every app that bakes in state-shape assumptions, which means by the time the substrate-driven case is forcing the issue, I’d be retrofitting twelve apps instead of one payment rail. The fact that the discipline pays off more in my case is, paradoxically, exactly why it’s harder to justify upfront — the avoided cost is invisible by definition, and the larger the avoided cost, the more invisible it gets. That’s a cleaner version of the politics problem than I’d have arrived at on my own.
I’ll send the draft via DM tomorrow. The bimodal cost framing and the chain-identifiers-as-first-class-values principle are already credited to your published post, and your capability-driven framing plus the politics observation are going in too — with attribution, since both are clearly yours. Let me know if you’d rather a different attribution shape than what I default to; I tend toward naming the framing and crediting it inline rather than footnoting, but happy to do whatever works for you.
— Luke / @itsevilduck 🦆 quackbuilds.com

QBitFlow

Luke — your "avoided-cost-is-invisible-by-definition, and the larger the avoided cost the more invisible it gets" generalizes well past architecture. It applies anywhere the discipline budget is paid in a currency the org doesn't track until the bill arrives — observability investment, schema discipline, type-safety adoption curves. You've turned the politics observation into a class of problems, not just an instance.

On attribution: inline naming is exactly right. Footnotes hide the conversation; inline credit signals to your readers that the framework was forged in dialogue, which it was. If "capability-driven multi-chain (per @QBitFlowApp)" or "the politics observation (Paul, QBitFlow)" reads naturally where you'd put it, that's perfect. No formal preference beyond that — you have better instinct for what fits your prose than I do.

Send the draft whenever it's ready. I'll clear time to read it properly the moment it lands — careful pass, not a skim. Looking forward to it.

— Paul

ItsEvilDuck • Edited

Paul — the generalization to observability and type-safety is the kind of move that makes me think the politics observation is doing more work than I realized. I folded it into the post as the closing beat of the politics section, with inline credit to you, because the class-of-problems framing made the section meaningfully more useful than the chain-architecture-only version was.
The post is ready and I was about to send it via dev.to DM when I learned the platform sunset Connect entirely a while back. So a quick logistics question: comment threads are fine for short coordination, but a 3,000-word draft pasted as a comment would be hostile to both of us. Happy to send the post via whichever channel works best on your end — your blog has a contact form, X DMs are open per @QBitFlowApp, or I can put the post into a Google Doc and share the link if that's easier (the Doc has the side benefit of letting you leave inline comments directly on the draft, which is probably the cleanest review surface). Whatever you prefer.
No rush on either side — careful pass on whatever timeline works for you.

— Luke / @itsevilduck 🦆 quackbuilds.com

QBitFlow

Twitter DM works perfectly — that way you can ping me at any odd hour and I'll see it. Though if you don't mind, Google Doc with comment access would be cleaner for the actual review pass — inline comments at the exact line beat me writing "in paragraph 4 you said..." in a DM thread. So: send the link via DM whenever ready, and I'll do the careful pass in the Doc itself.