Hoi hoi!
I'm @nyaomaru, a frontend engineer who once panicked because I triggered a fire alarm while roasting chashu at home in the Netherlands.
This time, I want to write about how I built the corporate website for Necoz B.V.
The site is built with:
- **Astro** for the base structure
- **React** only where interactive UI is needed
- **TypeScript** for the main animation control
- responsive behavior for both desktop and mobile
- virtual scrolling to control the scroll amount itself
Because this is a corporate website, I did not want to break SEO.
But I also wanted the motion to feel good.
And I did not want the experience to fall apart on mobile.
When you take these requirements seriously, the important question is not only:
Which technology should I use?
It becomes:
Which responsibility should belong to which layer?
In the end, the hardest part was the animation.
AI was useful. It helped me move fast. It gave me rough ideas and initial implementations.
But the animation it produced was not something I could use as-is.
This article is about why, and how I ended up designing the animation system.
The repository is here: **necoz**, the Necoz B.V. website built with Astro, React islands, and custom scroll / walker animation logic.
Scroll down and enjoy the animations!
You can check it on mobile too!
## Project Structure
The project is organized like this:
```
/
├── public/                  # Static files served as-is
├── src/
│   ├── components/
│   │   ├── home/            # Homepage sections
│   │   ├── ui/              # Reusable UI primitives and shared shells
│   │   ├── block/
│   │   ├── button/
│   │   ├── layout/
│   │   ├── mail/
│   │   ├── nyaomaru/
│   │   ├── scene/
│   │   ├── scroll/
│   │   └── text/
│   ├── layouts/             # Shared page layout shells
│   ├── lib/                 # Non-visual shared logic
│   │   ├── nyaomaru/        # Walker controller, scene logic, scene models
│   │   ├── math.ts
│   │   └── site-links.ts
│   └── pages/               # Route entrypoints
│       ├── index.astro
│       └── privacy-policy.astro
└── …
```

Let's get into it.
## Why I did not use AI-generated animation as-is
AI-generated animation often looks nice at first glance.
It usually has:
- some movement
- some easing
- some visual atmosphere
But when I looked more carefully, I often felt:
The direction is right, but the timing is wrong.
For example:
- the movement was too fast
- everything moved with the same rhythm
- elements did not stop where they should
- there was no pause
- there was no sense of timing
AI can generate the rough idea of "how something moves".
But in real animation, what matters is also:
- where it stops
- where it waits a little
- where one element reacts slightly later
- where the rhythm changes
Animation is not just about moving from position A to position B.
A small pause, a tiny delay, or a slightly different easing curve can completely change how it feels.
I think this is one of the current gaps between AI and human judgment:
AI can produce motion quickly, but humans still have better sensitivity for timing and rhythm.
So I used AI for rough drafts, structure, and experiments.
But I adjusted the final timing by hand.
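As a toy illustration (not code from the repository): the same zero-to-one motion reads very differently once you add a short hold and a different easing curve.

```typescript
// Toy illustration, not from the repo: two ways to traverse the
// same 0..1 timeline. Only the timing differs, but the feel changes.
const clamp01 = (value: number): number => Math.min(1, Math.max(0, value));

// Linear: constant rhythm, no pause.
const linear = (t: number): number => clamp01(t);

// Hold for the first 20% of the timeline, then ease out with a
// cubic curve, so the element "waits a little" and then decelerates.
const holdThenEaseOut = (t: number): number => {
  const local = clamp01((t - 0.2) / 0.8); // remap 0.2..1 into 0..1
  return 1 - Math.pow(1 - local, 3);      // cubic ease-out
};
```

Both functions end at the same place; the pause and the deceleration are the entire difference, and that difference is exactly what I kept tuning by hand.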
## Why Astro?
This site is a corporate website.
So I wanted:
- real HTML content
- good SEO
- fast initial loading
- no unnecessary SPA architecture
That is why I chose Astro.
The page structure is mainly built around `src/pages/index.astro` and `src/layouts/Layout.astro`.
The header, sections, footer, and base layout are handled by Astro.
```astro
<Layout title={DEFAULT_TITLE} description={DEFAULT_DESCRIPTION} canonicalPath="/">
  <main id="top">
    <Header />
    <HeroIntro />
    <WorkSection />
    <StudioSection />
    <ContactSection />
    <NyaomaruWalker />
  </main>
  <Footer />
</Layout>
```
The layout also switches between native scrolling and virtual scrolling.
```astro
<body class:list={[virtualScroll ? "virtual-scroll-body" : "native-scroll-body"]}>
  {virtualScroll ? <VirtualScroll><slot /></VirtualScroll> : <slot />}
</body>
```
What I like about Astro is that I do not have to turn the whole site into JavaScript.
Astro lets me output clean HTML first, and then hydrate only the parts that need client-side behavior.
For a corporate website, this is very useful.
In this project, I wanted:
- solid page structure
- metadata and SEO
- client-side animation only where necessary
Astro did not make the animation better by itself.
But it helped me separate structure and behavior cleanly.
That separation was very important.
## I used React, but not for everything
The site also uses React.
But it is not a full React app.
The responsibility is divided like this:
- **Astro** owns the page structure
- **React** owns small interactive UI parts
- **TypeScript** owns the animation system
For example, the mail dialog is hydrated with `client:load`.
```astro
<div class="contact-links">
  <MailDialogTrigger client:load />
  <SocialLinks />
</div>
```
The dialog trigger itself is a React component.
```tsx
export default function MailDialogTrigger() {
  const [isOpen, setIsOpen] = useState(false);

  useEffect(() => {
    if (!isOpen) return;

    const previousOverflow = document.body.style.overflow;
    document.body.style.overflow = "hidden";

    return () => {
      document.body.style.overflow = previousOverflow;
    };
  }, [isOpen]);

  // ...
}
```
This is a good use case for React.
There is local UI state.
There is user interaction.
There is a dialog.
But that does not mean the entire website needs to become a React application.
This was an important decision:
Using React does not mean everything has to be React.
React is great for UI.
But I did not want React to become responsible for the motion system.
## Animation is controlled by TypeScript, not React state
The core animation logic lives under `src/lib/nyaomaru/`.
It handles things like:
- the walking character
- scene progression
- scroll-based state updates
- DOM measurement
- animation phases
Some of the main files are:
| File | Responsibility |
|---|---|
| `walker-controller.ts` | Controls the main walker progression |
| `work-scene.ts` | Controls the work section scene |
| `studio-scene.ts` | Controls the studio section scene |
| `contact-scene.ts` | Controls the contact section scene |
This is not a purely declarative animation system.
It is much more direct:
- read the current scroll position
- determine the active scene
- calculate progress
- calculate the pose
- update the DOM
A simplified version looks like this:
```ts
const update = () => {
  const scrollY = getSceneScrollY();
  const activeSceneState = getActiveSceneState(scrollY);
  if (!activeSceneState) return;

  const progress = getSceneProgress(scrollY, activeSceneState.snapshot);
  const pose = activeSceneState.scene.getPose(
    progress,
    activeSceneState.snapshot,
  );

  walker.style.transform = `translate(${pose.x}px, ${pose.y}px)`;
  walker.dataset.phase = pose.phase;
};

const requestUpdate = () => {
  cancelAnimationFrame(rafId);
  rafId = window.requestAnimationFrame(update);
};
```
The flow is simple:
- read the scroll value
- find the active scene
- calculate progress
- calculate the pose
- apply it to the DOM
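To illustrate the "calculate progress" step, here is a hedged sketch of what a scene-progress helper could look like. The type shape and field names are my assumptions for illustration, not the repository's actual code.

```typescript
// Hypothetical sketch (names assumed): map a scroll value into a
// 0..1 progress for one scene's scroll range.
type SceneSnapshot = {
  start: number; // scrollY where the scene begins
  end: number;   // scrollY where the scene completes
};

const getSceneProgress = (scrollY: number, snapshot: SceneSnapshot): number => {
  const range = snapshot.end - snapshot.start;
  if (range <= 0) return 0; // degenerate scene: stay at the start pose

  // Clamp so the pose stays pinned at the scene boundaries instead
  // of overshooting when the user scrolls past the scene.
  return Math.min(1, Math.max(0, (scrollY - snapshot.start) / range));
};
```

The clamping matters: without it, a fast scroll past a scene boundary would feed out-of-range progress values into the pose math.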
This was easier to tune than putting everything into React state.
React is strong for UI state.
But animation timing often needs a thinner, more direct control layer.
This does not mean React is bad for animation.
It only means that, for this project, the responsibilities were different:
- React was for UI
- TypeScript was for motion
That separation worked well.
## Why I did not avoid direct DOM manipulation
In modern frontend development, we often try to avoid direct DOM manipulation.
Usually, I agree with that direction.
Declarative UI is easier to reason about in many cases.
But scroll-driven animation is a little different.
Sometimes, the important state is not:
What state does this component have?
It is:
What is actually visible on the screen right now?
For this project, I needed to measure things like:
- `getBoundingClientRect()` results
- viewport width
- which block is currently visible
- whether the desktop or mobile layout is active
- where the walker is currently positioned
This was similar to what I learned while building a browser game before:
The animation depends on the actual layout.
So the DOM is not just an output target.
It is also something the animation system has to observe.
In this case, directly measuring the DOM made the intention clearer.
It also avoided unnecessary indirection.
So I did not avoid direct DOM manipulation completely.
I used it only where it made sense.
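As a small sketch of treating the screen itself as state: the DOM read (`getBoundingClientRect()`) and the decision made from it can be kept apart, so the decision stays a pure function. The helper below is hypothetical, not from the repository.

```typescript
// Hypothetical helper: given a measured rect (in the real system this
// would come from element.getBoundingClientRect()) and the viewport
// height, decide whether the block is currently on screen.
type Rect = { top: number; bottom: number };

const isVerticallyVisible = (rect: Rect, viewportHeight: number): boolean =>
  rect.bottom > 0 && rect.top < viewportHeight;
```

Separating the measurement from the decision keeps the imperative part tiny: only the read touches the DOM, and everything downstream is plain data.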
## Responsive design was not only a CSS problem
The site also supports mobile.
But responsive behavior here was not only about changing layout with CSS.
For several sections, I have different block structures for desktop and mobile.
For example, the hero, work, and studio sections have:
- desktop blocks
- mobile blocks
The visible block changes depending on the viewport.
The TypeScript animation logic also uses a 430px mobile breakpoint and changes things like scene progress, landing positions, and offsets.
```ts
const workStack = isMobileViewport()
  ? getVisibleElement("[data-work-mobile-origin]")
  : getVisibleElement("[data-work-scene-display]");

const stackRect = workStack.getBoundingClientRect();
const walkerRect = walker.getBoundingClientRect();

const shotX =
  walkerRect.right +
  (isMobileViewport() ? MOBILE_SHOT_OFFSET_X : SHOT_OFFSET_X);
```
The markup also separates desktop and mobile blocks.
```html
<div class="block block-three block-three--desktop">...</div>
<div class="block block-three block-three--mobile">...</div>
```
So responsive design in this project affected:
- DOM structure
- scroll progression
- animation targets
- movement distance
- position correction
If I handled only the visual layout, the desktop version might feel good, but the mobile version would break quickly.
For scroll-driven animation, responsive design is not only layout adjustment.
It is also motion redesign.
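A minimal sketch of that idea: the breakpoint changes motion numbers, not just layout. The constants below are illustrative; the repository's real values may differ.

```typescript
// Illustrative constants (not the repo's real values): the breakpoint
// selects different motion offsets, so the "land next to the stack"
// rule produces a different target per device.
const MOBILE_BREAKPOINT = 430;
const SHOT_OFFSET_X = 48;
const MOBILE_SHOT_OFFSET_X = 16;

const isMobileWidth = (viewportWidth: number): boolean =>
  viewportWidth <= MOBILE_BREAKPOINT;

const getShotOffsetX = (viewportWidth: number): number =>
  isMobileWidth(viewportWidth) ? MOBILE_SHOT_OFFSET_X : SHOT_OFFSET_X;
```

Keeping the viewport width as an explicit parameter (instead of reading `window.innerWidth` inside) also makes this kind of motion logic trivially testable.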
## Why I used virtual scroll
One of the most important parts of this site is virtual scrolling.
The layout uses `VirtualScroll.astro`, and inside that layer, a custom scroll controller runs.
The goal was not just to make scrolling look fancy.
The real goal was to separate:
- visual scroll
- scene scroll
- scrollbar behavior
- smooth scroll following
- animation progression
In the implementation, I separate `visualScrollY` and `sceneScrollY`.
```ts
const applyScroll = () => {
  const visualScrollY = getVisualScrollY();
  const sceneScrollY = getSceneScrollY();

  content.style.top = `${-visualScrollY}px`;
  setScrollState({ sceneScrollY, visualScrollY });

  window.dispatchEvent(new CustomEvent("necoz:virtual-scroll"));
};
```
The scroll values are read like this:
```ts
export const getVisualScrollY = () => window.__necozScrollY ?? window.scrollY;

export const getSceneScrollY = (visualScrollY = getVisualScrollY()) =>
  window.__necozSceneScrollY ?? visualScrollY;
```
This separation was very important.
If everything depends directly on native scroll, it becomes harder to control the rhythm.
For example, I wanted to adjust cases like:
- the page visually moved down, but the scene should wait a little more
- the footer area needs denser animation
- mobile should use a different scroll progression
- some scenes should feel slower or more deliberate
With virtual scroll, I can separate:
how much the page visually moves
from:
how much the animation scene progresses
That means I can design the time axis myself.
This was the main reason I used virtual scroll.
Not because it looks cool.
But because I wanted control over animation time.
For scroll-driven animation, scroll is not just movement.
Scroll is input.
And if scroll is input, it needs to be designed carefully.
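To make "designing the time axis" concrete, here is a toy mapping from visual scroll to scene scroll. This is an assumed shape for illustration, not the site's actual implementation: one region of the page is stretched so the scene advances at half speed there, which makes that scene feel slower and more deliberate.

```typescript
// Toy sketch (assumed, not the repo's real mapping): visual scroll
// advances 1:1, except inside [slowStart, slowEnd], where scene time
// accumulates at half speed.
const toSceneScrollY = (
  visualScrollY: number,
  slowStart: number,
  slowEnd: number,
): number => {
  if (visualScrollY <= slowStart) return visualScrollY;

  // How far we have travelled inside the slow region (capped at its end).
  const slowSpan = Math.min(visualScrollY, slowEnd) - slowStart;
  // Distance travelled after leaving the slow region, if any.
  const after = Math.max(0, visualScrollY - slowEnd);

  return slowStart + slowSpan * 0.5 + after;
};
```

Because the mapping is a plain function of the visual scroll value, the slowdown can be tuned (or removed per breakpoint) without touching the scenes themselves.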
## Where AI was useful
At this point, it might sound like I do not trust AI.
That is not true.
AI was very useful for:
- generating rough drafts
- trying implementation patterns
- splitting responsibilities
- getting something working quickly
- exploring ideas before committing to one direction
As a tool for the first step, AI is excellent.
But final animation tuning was different.
For example, walking, falling, and landing phases were separated like this:
```ts
export const lerp = (start: number, end: number, progress: number) =>
  start + (end - start) * progress;

if (progress > secondFallStart) {
  return {
    x: lerp(layout.secondFallX, layout.landingX, secondFallProgress),
    y: lerp(
      layout.secondRunY,
      layout.landingY,
      Math.pow(secondFallProgress, HERO_FALL_EASING_POWER),
    ),
    phase: "second-fall",
  };
}

if (progress > secondRunStart) {
  return {
    x: lerp(layout.firstFallX, layout.secondFallX, secondRunProgress),
    y: layout.secondRunY,
    phase: "second-run",
  };
}
```
This kind of adjustment required a repeated loop:
- look at the animation
- feel that something is slightly wrong
- adjust a function or a number
- check it again
- repeat
AI helped me move fast.
But speed alone was not enough.
For this kind of site, the character walks, the scene changes, and the rhythm follows the scroll.
The final feeling had to be adjusted by hand.
I think this is one of the areas where frontend engineers will still matter in the AI era.
Not only writing code faster.
But deciding whether the result actually feels good.
## What I learned from this architecture
The main lessons from this project are:
- separate HTML structure and animation behavior
- separate UI state and animation state
- treat scroll as input, not just movement
- responsive animation requires motion redesign, not only layout changes
- direct DOM measurement can be valid when the screen itself is part of the state
- AI is useful for speed, but timing and rhythm still need human judgment
For scroll-driven animation, this kind of separation made the system much easier to reason about.
## Summary
For the Necoz B.V. website, I used:
- **Astro** for page structure and initial HTML
- **React** for small interactive UI parts
- **TypeScript** for the main animation control
- direct DOM measurement where needed
- responsive motion logic for desktop and mobile
- virtual scroll to redesign scroll progression
The biggest benefit was that I could keep SEO and initial rendering stable while still keeping control over the motion system.
AI helped a lot.
But there is still a gap between:
animation that moves
and:
animation that feels alive
So I did not let AI handle everything.
I used it to move faster, organize ideas, and create rough versions.
But I kept the final timing and rhythm in my own hands.
In the end, I think what I was really building was not just a layout or an animation.
I was designing time along the scroll.
Thanks for reading!
If you found this interesting, the repository is public.
A star ⭐ would make me very happy!


---

## Top comments (11)
Honestly, you don't really want to fight with useEffect dependency issues.
I learned that the hard way when I built a TD game in React. Once animations and frequent state changes get involved, dependency arrays quickly become dependency hell.
Thank you for your comment!
Yes, exactly.
I also feel that once animation and frequent updates enter the picture, useEffect dependencies can get messy very quickly.
That is why I tried to separate UI state and motion state in this and past game projects.
React is still great for UI, but for timing-heavy animation, a small imperative layer often feels much easier to control.
That makes a lot of sense.
I looked at signal-kernel, and I like the idea of building the data graph first and treating rendering as just one effect of that graph.
My current feeling is that a data graph like signal-kernel would not make every animation easier, but it could make the architecture easier when animation becomes a dataflow problem.
For example, in my case the flow was roughly:
scroll input → virtual scroll → scene progress → walker pose → DOM transform
That feels very close to a reactive graph.
But I also think a React wrapper should probably not force high-frequency animation updates through React rendering.
React can own the component shell and lifecycle, while the kernel owns the motion graph.
Then rendering, DOM updates, logging, or other external effects can be treated as effects of that graph.
So for simple UI animation, React state + CSS is enough.
But for scroll-driven, stream-driven, or game-like animation state, I can see signal-kernel being useful.
Is this close to how you imagine the framework adapters working?
Yes, this is very close to how I imagine the framework adapters working.
I also don't think signal-kernel should make every animation "easier" by default. For simple UI animation, React state, CSS transitions, or animation libraries are already good enough.
Where I think a reactive graph becomes useful is exactly the kind of case you described:
scroll input → virtual scroll → scene progress → walker pose → DOM transform
At that point, animation is no longer just a visual detail. It becomes a dataflow problem.
My goal for a React adapter is not to force high-frequency updates through React rendering. React can own the component shell, lifecycle, and integration boundary, while signal-kernel owns the motion/data graph.
Then DOM updates, rendering, logging, async events, or other external systems can be modeled as effects of that graph.
So yes, your interpretation is very close: React handles the UI structure, but the kernel handles the reactive coordination underneath.
I really like that direction.
The key point for me is that React does not have to be the center of every update.
If those can be modeled as one reactive graph, the mental model becomes much cleaner than spreading everything across multiple useEffects.
I'll keep following signal-kernel.
The React adapter sounds especially interesting.
Really enjoyed this article.
I think the most valuable point is not "React vs TypeScript", but the idea that high-frequency animation systems often need an imperative runtime separated from the reactive rendering layer.
I also agree that scroll becomes more like a timeline/input stream than traditional UI state.
The only part I'd challenge a bit is virtual scrolling. In my experience it gives incredible control, but it can also introduce long-term complexity around accessibility, browser behavior, mobile momentum scrolling, restoration, and maintainability.
I'd be curious to see how this architecture evolves once the animation system grows beyond a few scenes/components.
Still, a very refreshing article, especially because it discusses architecture tradeoffs instead of just "framework hype".
Thank you, this is a really thoughtful comment.
I completely agree with your point about virtual scrolling.
It gives a lot of control, but it also means replacing some native browser behavior with custom logic. So it can easily become technical debt if the trade-off is not intentional.
I would probably not choose this approach for a typical business application or a content-heavy site where native scrolling should be preserved as much as possible.
For this project, I accepted the trade-off because the goal was to expand the expressive range of the animation and control the timing more directly.
But I also think this is something I need to keep watching as the site grows.
I plan to continue evolving this site, so I'd like to write a follow-up later about how this architecture feels after more scenes and components are added.
Thanks again for pointing this out!
That makes a lot of sense honestly.
I think the key part is exactly what you said: making the trade-off intentionally instead of accidentally fighting the browser/runtime later.
And I actually respect that you're framing this as an expressive/interactive experience rather than a generic app architecture. A lot of people apply the same patterns everywhere without considering the constraints of the medium.
Would definitely be interested in a follow-up article later, especially around:
Those are usually the points where handcrafted motion systems either become really powerful… or really painful.
Thank you, I really appreciate that.
I do not have a clear answer yet, but those are exactly the points I want to watch as the site grows.
Honestly, that's probably the healthiest mindset for this kind of system.
A lot of interesting architectures look great at 3 scenes and become impossible at 30.
The fact that you're already thinking about scaling pressure and long-term ergonomics is a very good sign.
Looking forward to seeing how it evolves.