
AI · 10 min read · April 6, 2026

What We Learned Building Our Website with Claude (The Hard Way)


When we decided to build the Signal and Form website, we made a deliberate choice to use AI assistance heavily throughout the process. As an AI consulting firm, it felt right to practice what we preach. We used Claude as our primary AI assistant for code generation, content drafting, and problem-solving. The result is the site you are reading this on right now. It works, it performs well, and we are proud of it. But the journey was not the smooth, effortless experience that AI tool marketing would have you believe.

This is an honest account of what went well, what went badly, and what we learned. If you are considering building a website with AI assistance, or if you are evaluating AI's role in your development workflow, these lessons will save you time and frustration. Stanford HAI's 2024 AI Index found that 78% of developers now use AI coding assistants in some capacity, but satisfaction rates with the output vary wildly depending on the complexity of the task.

The Tech Stack Decision

Before we get into the AI-specific lessons, some context on what we built. The Signal and Form site runs on Next.js with the App Router, TypeScript, and Tailwind CSS. We chose this stack deliberately because it has excellent server-side rendering support (critical for SEO), strong typing (which helps AI generate more reliable code), and a utility-first CSS approach (which reduces the surface area for styling bugs). The site is deployed on Vercel, which handles edge caching and automatic HTTPS.

Developer workspace with code on a laptop screen|Our actual development environment: Claude Code in a terminal, browser preview on a second monitor, and a lot of coffee.

This stack choice mattered more than we initially realized. AI assistants produce dramatically better output for popular, well-documented frameworks. Next.js and Tailwind have massive representation in training data, which means Claude could generate accurate, idiomatic code for most standard patterns. When we experimented with less common libraries or custom abstractions, the output quality dropped noticeably.

What AI Excelled At

Let us start with the wins, because there were genuine ones.

Component generation was excellent. Describing a UI component in plain language and getting a working React component back in seconds is genuinely transformative. "Create a responsive pricing card with a title, price, feature list, and CTA button using Tailwind CSS" produces a solid starting point almost every time. We estimate that AI-generated component code saved us 40% to 50% of the time we would have spent writing components from scratch. The key insight: AI is best at generating self-contained components with clear boundaries and well-defined props. The more isolated and standard the pattern, the better the output.
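To make that insight concrete, here is a framework-free sketch (not the site's actual code) of how small a well-bounded component's contract can be. The real version would be a React component styled with Tailwind; the names and HTML-string renderer below are purely illustrative.

```typescript
// Illustrative sketch: the kind of self-contained component AI handles
// best is one whose entire contract fits in a small props interface.
interface PricingCardProps {
  title: string;
  price: string; // display string, e.g. "$49/mo"
  features: string[];
  ctaLabel: string;
}

// Framework-free render for illustration only; the real component
// would return JSX with Tailwind classes instead of an HTML string.
function renderPricingCard(props: PricingCardProps): string {
  const items = props.features.map((f) => `<li>${f}</li>`).join("");
  return [
    `<div class="pricing-card">`,
    `<h3>${props.title}</h3>`,
    `<p class="price">${props.price}</p>`,
    `<ul>${items}</ul>`,
    `<button>${props.ctaLabel}</button>`,
    `</div>`,
  ].join("");
}
```

The narrower the interface, the less room the model has to drift: everything the component needs arrives through four typed props.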


Boilerplate and configuration were handled well. Next.js configuration, TypeScript types, Tailwind setup, metadata configuration, sitemap generation: AI handled all of this efficiently. These are tasks with well-documented patterns, and AI models have seen millions of examples in their training data. The output was reliable and rarely needed significant modification. We saved an estimated 8 to 10 hours on configuration alone.
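To give a flavour of that boilerplate, here is a hypothetical standalone sketch of sitemap generation. The real site uses Next.js's app/sitemap.ts convention, which returns an array of route entries rather than raw XML; the URL and shape below are assumptions for illustration.

```typescript
// Hypothetical standalone sitemap builder (the real site relies on
// Next.js's `app/sitemap.ts` convention instead of emitting raw XML).
interface SitemapEntry {
  url: string;
  lastModified: string; // ISO date, e.g. "2026-04-06"
}

function buildSitemapXml(entries: SitemapEntry[]): string {
  const urls = entries
    .map(
      (e) =>
        `  <url><loc>${e.url}</loc><lastmod>${e.lastModified}</lastmod></url>`
    )
    .join("\n");
  return [
    `<?xml version="1.0" encoding="UTF-8"?>`,
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">`,
    urls,
    `</urlset>`,
  ].join("\n");
}
```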

Content first drafts were useful. Using AI to generate initial blog post drafts, service page copy, and meta descriptions gave us a strong starting point. The drafts always needed human editing for voice, accuracy, and brand consistency, but they cut the blank-page-to-first-draft time dramatically. Every piece of content on this site started as an AI draft and went through multiple rounds of human revision. HubSpot research backs this up: teams using AI for content drafting report 35% to 45% faster time-to-publish, but the quality gap without human editing is significant.

Debugging assistance was valuable. When we hit errors, pasting the error message and relevant code into Claude often produced the correct fix faster than searching Stack Overflow. AI is particularly good at identifying common patterns in error messages and suggesting targeted solutions. One specific win: Claude diagnosed a hydration mismatch error in under 30 seconds that would have taken us an hour to trace manually through the component tree.

TypeScript type generation was surprisingly strong. Defining complex TypeScript interfaces and type guards from plain-language descriptions worked well. Claude understands type systems deeply, and the generated types were almost always correct on the first attempt. This was one area where AI output consistently required zero or minimal editing.
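A representative example of the kind of interface-plus-guard pair that came back correct on the first attempt. The BlogPost shape here is illustrative, not our actual content schema.

```typescript
// Illustrative content type; field names are examples, not our schema.
interface BlogPost {
  slug: string;
  title: string;
  publishedAt: string;
  tags: string[];
}

// A type guard narrows `unknown` (e.g. freshly parsed JSON) to
// BlogPost, so downstream code gets full type safety.
function isBlogPost(value: unknown): value is BlogPost {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.slug === "string" &&
    typeof v.title === "string" &&
    typeof v.publishedAt === "string" &&
    Array.isArray(v.tags) &&
    v.tags.every((t) => typeof t === "string")
  );
}
```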

Where Things Went Wrong


JavaScript rendering and SEO conflicts. This was our biggest lesson. Early versions of the site relied heavily on client-side JavaScript for rendering content. The site looked great in a browser, but when Google's crawler visited, it was not executing all the JavaScript, which meant some content was invisible to search engines. AI-generated React components defaulted to client-side rendering patterns because that is what most code examples in AI training data demonstrate. We had to systematically audit every page and convert critical content to server-side rendering. This is the kind of SEO nuance that AI tools consistently miss. Google's own documentation on JavaScript SEO makes it clear that while Googlebot can render JavaScript, it does so in a deferred "second wave" of indexing, meaning your content may not be indexed for days or weeks.

CSS specificity and Tailwind conflicts. AI-generated components sometimes used conflicting Tailwind classes, inline styles, and custom CSS in ways that created unpredictable rendering across different screen sizes. When you are generating components in isolation, each one looks fine. When you assemble them on a page, specificity conflicts emerge. We spent considerable time debugging layout issues that only appeared when multiple AI-generated components were combined. The worst offender: AI would generate responsive breakpoints that contradicted our site-wide responsive strategy, causing elements to resize at the wrong viewport widths.

What AI Marketing Promises

  • Describe your website in plain English
  • Get production-ready code instantly
  • Ship in a weekend
  • No coding knowledge needed
  • Pixel-perfect output from prompts

What Actually Happened

  • Describe a component, get a starting point
  • 4-6 iterations per component to reach production quality
  • Shipped in weeks, not days
  • Coding knowledge essential for evaluating output
  • Constant manual design adjustments needed

Visual design decisions were weak. AI can generate a component that matches a description, but it struggles with the subjective aspects of design: visual hierarchy, whitespace balance, color harmony, typography pairing, and the overall "feel" of a page. We went through many iterations where AI-generated layouts were technically correct but aesthetically flat. Design requires taste and context awareness that current AI models lack. Nielsen Norman Group research on AI-generated interfaces confirms this: AI produces functional layouts but consistently underperforms human designers on visual hierarchy and emotional resonance.

Cross-browser and device testing gaps. AI generates code for an idealized environment. It does not test on Safari iOS, Firefox on Android, or old versions of Chrome. We discovered rendering bugs on mobile Safari that AI-generated code introduced because the AI used CSS features with incomplete Safari support. Manual testing across devices remains essential. One particularly painful bug: a CSS backdrop-filter that worked perfectly in Chrome but caused a white flash on Safari page transitions. It took two days to diagnose.

State management complexity was a blind spot. For simple components, AI-generated state management was fine. But as components grew more complex and needed to share state across the page, AI-generated solutions became brittle. Claude would suggest prop drilling where context was needed, or create redundant state that caused unnecessary re-renders. Any time the state management pattern deviated from a textbook example, we needed to redesign it manually.
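The fix, in most cases, was to centralize the shared state in one place. Here is a framework-free sketch of the idea, which React's useReducer and Context would wrap in practice; the state fields are illustrative, not the site's actual state.

```typescript
// Illustrative: instead of duplicating the same flag across several
// components (the redundant state AI tended to create), keep it in one
// reducer that a React useReducer/Context pair can wrap.
interface UiState {
  mobileMenuOpen: boolean;
  activeSection: string | null;
}

type UiAction =
  | { type: "toggleMenu" }
  | { type: "setSection"; section: string | null };

function uiReducer(state: UiState, action: UiAction): UiState {
  switch (action.type) {
    case "toggleMenu":
      return { ...state, mobileMenuOpen: !state.mobileMenuOpen };
    case "setSection":
      return { ...state, activeSection: action.section };
  }
}
```

Because every transition flows through one pure function, there is a single source of truth to reason about instead of scattered, duplicated flags.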

The Iteration Cycle Reality


The marketing around AI-assisted development implies a smooth workflow: describe what you want, get code, ship it. The reality was much more iterative. Each component went through an average of 4 to 6 iterations before it was production-ready. That is still faster than building from scratch for an experienced developer, but it is not the one-prompt magic that social media demos suggest. For someone without development experience, the iteration cycle would be much longer because identifying what is wrong with AI output requires technical knowledge.

Our actual workflow looked like this:

  • Describe the desired component or feature to Claude
  • Review the generated code and identify issues
  • Provide feedback and request modifications (often 3 to 5 rounds)
  • Manually adjust the code for edge cases AI missed
  • Test across browsers and devices
  • Fix rendering issues that only appeared in production
  • Revisit the code when it conflicted with other components
  • Run Lighthouse audits and fix performance regressions
  • Validate semantic HTML and accessibility compliance

The most time-consuming iterations were not about getting the code to work. They were about getting the code to work well: performing efficiently, rendering correctly across devices, meeting accessibility standards, and fitting cohesively into the existing design system. Forrester's research on AI-assisted development found that while AI reduces initial coding time by 30% to 50%, total project timelines only decrease by 10% to 20% because of the quality assurance overhead.

Key Takeaway

AI-assisted development is faster than building from scratch, but it is not a replacement for development expertise. You need to know enough to evaluate and fix AI output. Think of it as a very fast but occasionally confused junior developer.

SEO Lessons Specific to AI-Built Sites

Building with AI introduced some SEO-specific challenges that we want to flag for anyone going down this path. These are not theoretical concerns. We discovered each of these through our own mistakes.

Person analyzing website analytics data on a monitor|Every SEO issue we found was invisible during development and only showed up in search console data weeks later.

Content duplication in components. AI sometimes generates components with hardcoded text that gets duplicated across pages. When the same paragraph appears on three different pages, Google sees duplicate content. We had to audit for this and ensure all content was page-specific. Ahrefs has documented that even partial content duplication (more than 40% overlap) can suppress page rankings.
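The audit idea can be sketched as a rough overlap check between two pages' text using word-level 3-gram "shingles". This is a simplification of what real dedup tools do, and the 40% threshold above is Ahrefs' figure, not ours.

```typescript
// Rough duplication check: word-level 3-gram shingles per page.
function shingles(text: string, size = 3): Set<string> {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const out = new Set<string>();
  for (let i = 0; i + size <= words.length; i++) {
    out.add(words.slice(i, i + size).join(" "));
  }
  return out;
}

// Fraction of page A's shingles that also appear on page B.
function overlapRatio(a: string, b: string): number {
  const sa = shingles(a);
  if (sa.size === 0) return 0;
  const sb = shingles(b);
  let shared = 0;
  for (const s of sa) if (sb.has(s)) shared++;
  return shared / sa.size;
}
```

Running a check like this across every page pair in CI is cheap, and it catches hardcoded component text before Google does.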

Missing semantic HTML. AI-generated components frequently used generic div elements where semantic HTML (nav, main, article, section, aside) would be more appropriate for accessibility and SEO. We had to retroactively add proper semantic structure. Google Search Central explicitly states that semantic HTML helps their systems understand page structure and content hierarchy.

Image optimization oversights. AI-generated image components often lacked proper alt text, lazy loading attributes, width/height declarations, and responsive sizing. Each of these affects Core Web Vitals and search rankings. We built a checklist for auditing every image component against SEO requirements.
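That checklist translates naturally into code. A simplified sketch with illustrative field names; a real audit would also permit eager loading for above-the-fold hero images, where lazy loading actively hurts Largest Contentful Paint.

```typescript
// Simplified image audit; field names are illustrative.
interface ImageProps {
  src: string;
  alt?: string;
  width?: number;
  height?: number;
  loading?: "lazy" | "eager";
}

// Returns the list of SEO/Core Web Vitals problems for one image,
// empty if clean. Note: hero images legitimately use loading="eager",
// so a real audit would whitelist above-the-fold images.
function auditImage(img: ImageProps): string[] {
  const issues: string[] = [];
  if (!img.alt || img.alt.trim() === "") issues.push("missing alt text");
  if (img.width === undefined || img.height === undefined)
    issues.push("missing width/height (causes layout shift)");
  if (img.loading !== "lazy") issues.push("not lazy-loaded");
  return issues;
}
```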

Internal linking was entirely manual. AI does not understand your site architecture well enough to suggest meaningful internal links. Every internal link on this site was manually planned and implemented. A strong internal linking strategy is critical for SEO, and it requires human knowledge of your content hierarchy. Moz research consistently shows that internal linking is one of the most underutilized ranking levers, and it is something AI simply cannot do well because it does not know what other pages exist on your site.

Meta tag inconsistencies. AI-generated page metadata sometimes used inconsistent title tag formats, duplicate meta descriptions, or missing Open Graph tags. We built a pre-deployment script that validates all metadata across the site. This is the kind of systematic quality control that AI cannot self-enforce.
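A simplified sketch of what that pre-deployment check does. The character limits here are common SEO guidelines, not hard Google rules, and the field names are illustrative rather than our actual script's API.

```typescript
// Simplified pre-deployment metadata validation; limits are common
// SEO guidelines (not hard Google rules) and are easy to adjust.
interface PageMeta {
  path: string;
  title: string;
  description: string;
}

function validateMetadata(pages: PageMeta[]): string[] {
  const errors: string[] = [];
  const seenDescriptions = new Map<string, string>();
  for (const p of pages) {
    if (p.title.length === 0 || p.title.length > 60)
      errors.push(`${p.path}: title should be 1-60 characters`);
    if (p.description.length < 50 || p.description.length > 160)
      errors.push(`${p.path}: description should be 50-160 characters`);
    const prev = seenDescriptions.get(p.description);
    if (prev) errors.push(`${p.path}: duplicate description of ${prev}`);
    else seenDescriptions.set(p.description, p.path);
  }
  return errors;
}
```

Wiring a check like this into the deploy pipeline means inconsistencies fail the build instead of quietly shipping.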

If you are building an AI-generated site and wondering why it is not ranking, our guide on why AI content does not rank covers the broader picture.

What We Would Do Differently

If we started over today, here is what we would change:

Establish server-side rendering patterns first. Before generating any components, we would define clear patterns for server-side rendering and make sure all AI-generated code follows them. Retrofitting SSR is much harder than building it in from the start. We would create a "component template" that every AI-generated component must follow, including the use client / server component decision.

Create a component library before AI generation. Having a small set of base components with consistent styling patterns would give AI better context for generating new components. Instead of generating from scratch each time, the AI would extend existing patterns. This reduces inconsistency dramatically.

Run SEO audits weekly during development. We waited too long to test how Google was crawling our pages. Running crawl simulations early would have caught the JavaScript rendering issues much sooner. Tools like Screaming Frog and Sitebulb can simulate Googlebot rendering and flag issues before they reach production.

Invest more in the design system before coding. We should have spent more time on visual design in a design tool before asking AI to generate components. AI is better at implementing a defined design than it is at making design decisions. Our biggest time sink was the back-and-forth on visual details that should have been resolved in Figma before a single line of code was written.


Build a testing pipeline from day one. We should have set up automated cross-browser testing, accessibility checking, and Lighthouse CI from the very first component. When you are generating code quickly with AI, the quality assurance infrastructure needs to be in place before the code starts flowing, not after.

For businesses considering the startup website route with AI assistance, these lessons are directly applicable. AI makes the build faster, but the strategic planning and quality control still require human expertise. Our team coaching can help your development team integrate AI tools effectively without falling into these traps.

The Cost-Benefit Reality

Let us put real numbers on this. Building the Signal and Form site with AI assistance took approximately 60% of the time a traditional build would have required. But that time saving was not evenly distributed.

Where AI Saved Time

  • Component scaffolding: 50% faster
  • Boilerplate and config: 70% faster
  • Content first drafts: 40% faster
  • Bug diagnosis: 30% faster
  • TypeScript types: 60% faster

Where AI Cost Extra Time

  • Design iteration: 20% slower (more revision cycles)
  • SEO remediation: added 15+ hours
  • Cross-browser fixes: added 8+ hours
  • State management rewrites: added 5+ hours
  • Consistency audits: added 10+ hours

The net result was positive, but not as dramatic as the marketing suggests. For an experienced developer, AI assistance is a clear productivity win. For someone without development experience trying to build a production site, the quality assurance burden could easily eat up all the time savings and then some.

The Verdict

Building a website with AI assistance is genuinely faster than traditional development, and we would do it again. But the speed advantage comes with a quality control burden that is easy to underestimate. AI is a powerful tool in the hands of someone who knows what good output looks like. It is a liability in the hands of someone who cannot evaluate the output critically.

The businesses that will get the most value from AI-assisted web development are the ones that pair AI speed with human expertise in design, SEO, and quality assurance. Skip any of those, and you will end up with a site that was fast to build but slow to succeed. If you want to understand the broader landscape, our overview of AI tools available to Canadian businesses is a good starting point for evaluating which tools are worth integrating into your workflow.

Key Takeaway

AI reduces the cost of creation but increases the importance of evaluation. The skill that matters most is not prompting. It is knowing what good looks like.

Building a website and want to get the AI-assisted approach right? Book a discovery call and we will help you plan a build that is fast, optimized, and built to rank.


The Signal & Form Team

Written by consultants with backgrounds in digital agency leadership, enterprise dashboard development, AI workflow automation, and SEO strategy across multiple industries. We build what we advise — every recommendation comes from hands-on experience.