How I Use Claude Code at Stellar

It was at the Nano Banana Hackathon on September 6, 2025 that I felt the shift in how AI was going to change the way I work. Unlike in 2023, when I used Cursor for the first time at AGI House in Hillsborough, AI was now good at writing acceptable, functioning code. Thanks to AI's capabilities, teams at the Nano Banana Hackathon were much smaller, mostly two people per team. In 2023, I wrote the entire frontend codebase for my team's project. At the Nano Banana Hackathon, I spent about 50% of my time writing code and the other 50% coming up with product ideas and prompting. I had been using AI daily at work on Stellar Laboratory for small tasks like writing a helper function or refactoring one file at a time under my guidance. At the Nano Banana Hackathon, I started with an empty canvas, gave AI prompts, and let it write the code. I still contributed a lot, as I wasn't used to having AI do it all. Other teams that were familiar with AI shipped beautiful, functioning projects very fast.

I realized then that I needed to get on top of it if I wanted to stay relevant, but it was hard to even figure out where to start. Should I start a new project? Should I just use Claude Code, or should I also try Codex? I wanted a guideline on where to start, but that felt impossible in today's fast-paced, ever-changing AI environment.

In January 2026, our company launched an AI Software Engineer Guide with six levels of AI usage maturity.

  • Levels 1–2: treat AI primarily as a typist or search tool, such as IDE auto-completion or basic query answering.
  • Level 3: uses AI as a co-author, providing clear instructions to complete small, well-scoped tasks.
  • Level 4: represents basic agentic use, where AI can execute a small project under human guidance.
  • Level 5: introduces intermediate agentic workflows with feedback loops: the AI proposes a plan, the human approves it, and the AI executes, tests, and summarizes the results.
  • Level 6: reflects advanced agentic use, where humans define objectives, constraints, and success criteria, and AI operates independently for extended periods.

Since the company-wide adoption of Claude Code (CC) in February, I have been experimenting with different tools to improve my developer experience with CC. CC came at the right time, as I became the solo maintainer of Stellar Laboratory, the all-in-one web dev tool to build, sign, simulate, and submit transactions and interact with contracts on the Stellar network. It uses popular JS libraries like NextJS, TanStack, React, and Zustand. I also volunteered to work on its Node backend. All of our designs are in Figma.

My goal for Q1 was to find the best workflow with AI to increase development velocity. AI's been incredible at fixing small bugs and adding tests. I noticed common errors (e.g., AI wanting to create a new component instead of using our custom design library), so I created a CLAUDE.md to inform AI about the best practices for the Lab. Building a complex new feature needed more guidance in the beginning. I created an agent team made up of a UI/security expert and a devil's advocate to write a design doc for a new transaction flow. With the design doc, CC has been very good at creating new features.

It wasn't smooth sailing, though. I am a self-taught developer, so my usual way of learning a new tool was to read the official docs, check out tutorials, look at what others have built, and then learn by building with it. Learning how to use CC was different. The official docs and tutorials on CC felt very basic, especially compared to how people were using CC on X (Ralph Wiggum loops, multiple skills/agents). But X also felt overwhelming. I felt so behind, and I didn't want to care about things that would become irrelevant. Early popular posts on how to write CLAUDE.md became irrelevant once SKILL.md became popular. I questioned some of the popular SKILL.md files, as they didn't follow the conventions in Anthropic's official docs. It was hard to measure what made one SKILL.md better than another, since everyone's project was different. I was looking for a perfect guideline that didn't exist, because everyone's project is different.

I decided to stop comparing myself and start using CC.

I have tried git worktree, CLAUDE.md, SKILLS, Figma MCP, subagents, and agent teams.

git worktree for working on multiple features in parallel

💡
git worktree is a Git command that allows you to have multiple working directories for the same repository simultaneously, with each directory on a different branch. This enables parallel development without the overhead of constantly stashing changes or cloning the entire repository multiple times.

git worktree enables developers to create separate working directories pointing to the same repo. This is my go-to tool for running multiple CC sessions to finish multiple tasks in parallel. The command itself is straightforward to use; however, when running multiple worktrees, it can get tedious to run the same commands over and over again. Inspired by this post, I asked CC to create a custom bash script to streamline the worktree workflow.
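
The raw command looks something like this. A self-contained sketch; the repo and branch names are illustrative, not from my actual setup:

```shell
# Sketch: creating a sibling worktree on a new branch.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q laboratory
cd laboratory
git -c user.email=you@example.com -c user.name=you \
  commit -q --allow-empty -m "init"

# Create a second working directory as a sibling, on branch issue-1910
git worktree add ../laboratory-1910 -b issue-1910

# Both working directories now show up, backed by the same repository
git worktree list
```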

My custom script . worktree <branch name> from my dotfile repo does the following:

  • Creates a worktree as a sibling next to the main repo (if the repo is ~/Code/laboratory, worktrees go in ~/Code/laboratory-{issue#})
  • Copies node_modules using copy-on-write, along with common untracked files
  • Runs repo-specific setup

The command takes less than a minute to complete. I open multiple terminals, run the . worktree command for the corresponding GitHub issues, then run CC to implement the tasks in parallel.
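
The steps above can be sketched as a shell function. This is a hypothetical reconstruction, not my actual dotfile script: the copy-on-write flag, the .env.local file, and the setup script path are all assumptions.

```shell
# Hypothetical sketch of a worktree helper, following the steps above.
# NOT my actual dotfile script; the copied files and setup path are guesses.
worktree() {
  local name="$1"
  local repo_root dest
  repo_root=$(git rev-parse --show-toplevel) || return 1
  dest="${repo_root}-${name}"

  # 1. Create the worktree as a sibling of the main repo, on a new branch
  git worktree add "$dest" -b "$name" || return 1

  # 2. Copy node_modules with copy-on-write where the filesystem supports it
  #    (macOS APFS clone via cp -c; falls back to a plain copy elsewhere)
  if [ -d "$repo_root/node_modules" ]; then
    cp -Rc "$repo_root/node_modules" "$dest/node_modules" 2>/dev/null ||
      cp -R "$repo_root/node_modules" "$dest/node_modules"
  fi

  # 3. Copy common untracked files, e.g. a local env file
  [ -f "$repo_root/.env.local" ] && cp "$repo_root/.env.local" "$dest/"

  # 4. Run repo-specific setup if the repo defines one
  [ -x "$dest/scripts/setup.sh" ] && (cd "$dest" && ./scripts/setup.sh)

  cd "$dest"
}
```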

. worktree at work

If you return to your main repo and run git branch, you'll see your worktree branch since git worktree shares the same .git database.

issue-1910 and issue-1930 are branches checked out by git worktree
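
Once a branch is merged, the worktree can be cleaned up from the main repo. Another self-contained sketch, with illustrative names:

```shell
# Sketch: removing a worktree and its branch after the work is merged.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q laboratory
cd laboratory
git -c user.email=you@example.com -c user.name=you \
  commit -q --allow-empty -m "init"
git worktree add ../laboratory-1930 -b issue-1930

# The branch is visible from the main repo because the .git database is shared
git branch --list issue-1930

# Remove the working directory first, then delete the branch
git worktree remove ../laboratory-1930
git branch -D issue-1930
```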

SKILL.md and hooks

💡
At its core, a skill is a folder containing a SKILL.md file. This file includes metadata (name and description, at minimum) and instructions that tell an agent how to perform a specific task. Skills can also bundle scripts, templates, and reference materials.
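
A minimal sketch of what such a folder's SKILL.md might look like. The frontmatter fields (name, description) follow Anthropic's documented format; the body and its specific instructions are hypothetical, not my actual skill:

```markdown
---
name: figma-design-handoff
description: Use when implementing UI from a Figma design handoff. Maps designs to components from the project's design library instead of creating new ones.
---

# Figma design handoff

1. Pull the design context through the Figma MCP server.
2. Map each element to an existing component in the design library.
3. Only create a new component when nothing existing fits, and call it out in the summary.
```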

I have two SKILL.md files for the Stellar Lab:

  • figma-design-handoff
  • staff-code-review

These skills were created by CC, and initially they both had a hook that invoked the SKILL.md based on the prompt. This is overkill, since SKILL.md's description field already handles this. When CC adds things that aren't needed, I get a bit frustrated. When CC doesn't add things that Anthropic's official docs require, I get annoyed. For example, the official SKILL doc clearly encourages the use of trigger conditions, but a SKILL.md does fine without them as long as its description is clear.

A hook would fire whenever the relevant word is used within CC, so personally, I haven't felt the need to use hooks yet.
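
For reference, hooks are configured in .claude/settings.json. This sketch assumes the documented event/matcher structure; the event choice and the lint command are hypothetical examples, not something I run:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm run lint --silent"
          }
        ]
      }
    ]
  }
}
```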

There are no set rules. Just enjoy the ride. This is the new joy of AI coding.

To be continued… I promised my OpenClaw Claude Booboo that I would publish tonight, so I published it. But I will complete it tomorrow.