Daring Designs

How to use Claude (and AI) responsibly.

Read it. Research it. Edit it. My personal rules for using Claude on real projects, why transparency wins, and how I use AI for SEO without misleading anyone.

I keep seeing people pass off pure AI output as their own work. Don't. The web is getting better at sniffing it out, and that erodes the trust your audience hands you for free. Here's how I actually use Claude on real projects, why I'm transparent about it, and the rules I follow so AI helps the work instead of replacing the part that matters.

Read it. Research it. Edit it.

This is the loop, in order:

  • Read what Claude wrote. Every line. Don't skim. If you can't explain why a line is there, you don't get to ship it.

  • Research the parts you're unsure about. Cross-check claims, library APIs, statistics, anything that smells confident-but-wrong. AI hallucinations are at their most dangerous when they sound authoritative.

  • Edit it. I usually delete most of what gets generated. The first draft is a starting point, not a finished product. Cut the filler, the hedging, and the "in today's fast-paced world" boilerplate.

Be Transparent About How You Used It

If Claude typed it but you rigorously reviewed and rewrote it, say so. If you only used AI for research and wrote the piece yourself, say that too. People are getting better at spotting AI-generated content, and trying to pull the wool over their eyes is a short-term play with a long-term cost. AI transparency isn't just an ethics talking point; it's the same instinct that makes you disclose a sponsorship or a referral link.

I use Claude to help me write these posts. My process is simple: jot down a bunch of ideas in a Google Doc, use Claude CLI to help draft a post, then read and edit that draft. It just helps me get my ideas out there in a more legible format. Oh, and I'm using it to create the images, if you can't tell already!

My Personal Rules for AI-Assisted Coding

Risk vs. Reward, Every Time

Don't vibe code something that genuinely matters. Auth, payments, anything that touches sensitive user data: slow down. AI-assisted code review is great for catching style issues and obvious bugs; it isn't a substitute for the judgment you bring to a feature that, if it breaks, breaks something real for your users.

Let Claude Write the Commit Message, If Claude Wrote the Code

Every commit Claude meaningfully touches in my repos is co-authored by Claude. The history reflects what actually happened. It also keeps me honest if I'm ever tempted to take credit for something I didn't actually write.
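Git already has a convention for exactly this: the `Co-Authored-By` trailer. Here's a minimal sketch in a throwaway repo (the file, commit message, and email address are all illustrative):

```shell
# Create a scratch repo so the example is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Demo"
git config user.email "demo@example.com"

# Commit work Claude helped write, with a co-author trailer
# on its own paragraph at the end of the message.
echo "<h1>Hello</h1>" > index.html
git add index.html
git commit -q -m "Add homepage skeleton" \
  -m "Co-Authored-By: Claude <noreply@anthropic.com>"

# The trailer is now part of the permanent history.
git log -1 --format=%B
```

GitHub recognizes this trailer and surfaces the co-author on the commit, so the attribution is visible without anyone digging through the log.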

Watch the Dopamine Loop

There's a particular kind of high you get from shipping ten features in a day with AI in the loop. When you notice it, take a walk. Touch grass. Making things is great, but the act of making them isn't the only good thing in life, and the manic build cycle that AI tooling enables is genuinely easy to fall into.

Don't Scale Just Because You Can

Just because you can implement 100 features quickly doesn't mean you should. Each feature is a maintenance commitment, a surface area for bugs, and another thing your users have to learn. Speed of generation is not the same as quality of product.

How I Use Claude for SEO Work

For client SEO work I lean on Claude alongside the industry-standard toolchain: Ahrefs (via MCP), Google Search Console, Google Analytics, and Screaming Frog. The AI doesn't replace those tools; it helps me get more out of them faster.

Two reports I generate this way:

  • On-page content recommendations: a CSV with H1, H2, meta description, page title, and per-page recommendations. Claude pulls and aligns the page data; I review and edit; the client gets a deliverable they can actually act on.

  • Monthly SEO statistics document: traffic, ranking shifts, and top-performing pages. Claude does the assembly. I do the interpretation.

That second part, the interpretation, is the part I do not let AI do alone. Which leads straight into the next rule.

Don't Trust the Stats Just Because They're Cited

It's easy to bolster a weak argument with statistics. As the old saying goes (my coworker Aaron Bushnell introduced me to it), there are three kinds of lies: lies, damned lies, and statistics. AI happily provides numbers with citations. That doesn't mean the numbers are relevant, current, or representative of the population the client actually cares about. Read the source. Check the methodology. Then use the stat, or don't.

Be Careful What Access You Give It

I almost never give Claude write access to APIs unless I have a very good reason. Most of my MCP servers are read-only. I've heard enough horror stories of AI-assisted debugging wiping a production database (sometimes in a "local" session that was, surprise, pointed at prod) that least privilege is non-negotiable for me.
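One concrete way to encode that default: Claude Code reads permission rules from a `settings.json` file, where you can allow read-style tools and deny anything that writes. A sketch of the idea (the exact rule names and syntax depend on your client and version, so treat this as illustrative, not canonical):

```json
{
  "permissions": {
    "allow": ["Read", "Grep", "Glob"],
    "deny": ["Edit", "Write", "WebFetch"]
  }
}
```

The point is less the specific keys and more the posture: write access is something you opt into per project, not something the tool has by default.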

This goes double for automation. If you're running Claude on a schedule, you really need to know what it can touch. I wrote a follow-up specifically about that: Don't Use OpenClaw. Run Claude CLI With Cron Jobs Instead.
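To give a flavor of what that looks like in practice: a cron entry can run Claude headlessly with its tool access narrowed to read-only. The prompt, paths, and flags below are illustrative assumptions about the Claude CLI's headless mode, not copy-paste ready:

```shell
# Hypothetical crontab entry: every Monday at 07:00, summarize last
# week's data with a read-only session, logging output for review.
0 7 * * 1  cd /home/me/client-site && claude -p \
  "Summarize last week's Search Console export in reports/" \
  --allowedTools "Read" >> /home/me/logs/seo.log 2>&1
```

Because the job runs unattended, the tool restriction and the log file are the whole safety story: nothing it can touch can be destroyed, and everything it did is on the record.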

TL;DR

  • Read it, research it, edit it.

  • Be honest about what AI did and what you did.

  • Slow down for risky code.

  • Don't trust cited stats blindly.

  • Read-only API access by default.

  • Co-author your commits.

  • Don't ship 100 features just because you can.

AI is a power tool. Treat it like one.