Why I'm Moving My Scheduled Jobs to Claude Code

I still use multiple AI agents every day.

OpenClaw has been useful. Claude Code has been useful. Different tools are good at different things.

But for major scheduled jobs, I'm shifting the backbone to Claude Code.

The reason is simple: I got tired of babysitting automations that should have just quietly worked.

Too many of the recurring failures were the boring kind:

  • auth tokens expiring
  • cron environments missing variables
  • jobs failing because one integration got weird
  • scheduled tasks that looked healthy until the day you actually needed them

That doesn't mean OpenClaw is useless. It means I don't want critical persistence and backup workflows hanging off a system that keeps making me wonder if today is the day an auth edge case breaks the chain.

The clearest example is my Granola meeting notes system.

The moment I realized meeting context was disappearing

I use Granola for meeting notes all the time.

It's great in the moment. But I realized something that would break the whole promise of agent memory if I didn't fix it: Granola only keeps a limited local cache on my Mac.

In practice, that local data window was about 7 days.

So if I wanted an agent to answer a question like:

  • what did we decide in that 565 call three weeks ago?
  • what did CXL ask for in that strategy meeting last month?
  • when did we first discuss that AppSumo pricing issue?

...the answer could just be gone.

Not because the meeting never happened. Not because the transcript didn't exist. Just because the local cache had rolled forward.

That's a bad setup if you're serious about using AI agents as real operating leverage.

If the memory layer expires every week, your agent isn't building context. It's just living in a foggy little present tense.

The idea: GitHub as permanent memory

My fix was pretty straightforward:

treat GitHub as the permanent memory layer for meeting data.

Instead of relying on Granola's cache to hold everything forever, I wanted a system that would:

  • read the local Granola cache before data aged out
  • export each meeting into a clean markdown file
  • organize those files in a repo in a way both humans and agents could browse
  • keep syncing automatically with no manual cleanup work

Once the notes live in a GitHub repo, they stop being trapped inside one app's temporary cache.

Now they become:

  • searchable
  • versioned
  • portable
  • accessible to any future tool
  • readable by any agent I use later

That's the bigger point.

I don't want my operating context locked inside whatever AI app happens to be hot this month.

I want the underlying data in a format that outlives the tool.

What Claude Code built

I didn't hand-write this system myself.

I described what I wanted to Claude Code and it built the whole thing in basically one conversation.

That included:

  • parsing Granola's local cache file
  • extracting meeting metadata
  • pulling attendees from multiple fields
  • categorizing meetings by company automatically
  • generating structured markdown files with YAML frontmatter
  • creating an incremental sync flow
  • wiring up a native macOS scheduler
  • handling git commit and push logic

The whole first version took about 15 minutes.

That's the part that still feels kind of wild.

Not because the code was magic. It wasn't. The scripts are pretty normal.

But the speed was ridiculous. I explained the system I wanted, clarified a couple of details, and Claude Code turned it into a working automation backbone.

That's where I think these coding agents are actually strongest right now.

Not as toys. Not as autocomplete.

As fast, competent builders for boring internal systems that save real time.

The exact setup

The source file is Granola's local cache:

~/Library/Application Support/Granola/cache-v6.json

That file contains a mix of:

  • meeting metadata
  • attendee information
  • notes
  • transcripts
  • event details from connected calendar data
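Reading that cache is just JSON parsing with Node built-ins. Here's a minimal sketch of the idea; the field names (`state.documents`, `people`, `created_at`) are my illustration of the shape, not the actual cache-v6.json schema:

```javascript
// Sketch of reading the Granola cache with Node built-ins only.
// The nested field names below are assumptions for illustration --
// the real cache-v6.json layout may differ.
import { readFileSync } from 'node:fs';

const CACHE_PATH = `${process.env.HOME}/Library/Application Support/Granola/cache-v6.json`;

// Parse the raw JSON string into a flat list of meeting records.
export function parseCache(json) {
  const cache = JSON.parse(json);
  const docs = cache.state?.documents ?? {};
  return Object.values(docs).map((doc) => ({
    id: doc.id,
    title: doc.title ?? 'Untitled meeting',
    date: doc.created_at,
    attendees: (doc.people ?? []).map((p) => p.email).filter(Boolean),
  }));
}

// Convenience wrapper that reads the cache file from disk.
export function loadCache(path = CACHE_PATH) {
  return parseCache(readFileSync(path, 'utf8'));
}
```

The point isn't the exact fields. It's that the whole "memory layer" is one readable JSON file sitting on disk, which is exactly why a small script can drain it on a schedule.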

The export target is a GitHub repo:

nickyc1/appsumo-meeting-notes

And the repo is organized by date first, then company.

So instead of one giant dump of markdown files, I can browse it like this:

meetings/
  2026-03-24/
    appsumo/
      growth-sync.md
      pricing-review.md
    cxl/
      consulting-checkin.md
    565media/
      weekly-performance-review.md

That structure matters more than people think.

If I open meetings/2026-03-24/cxl/, I instantly see every CXL-related meeting that day.

If I open meetings/2026-03-24/appsumo/, I get the internal meetings.

That makes the repo useful to me as a human, but also easy for agents to traverse programmatically.

Auto-categorizing by attendee domain

One of the nicest pieces of the system is the categorization logic.

Meetings get routed into folders based on attendee email domains.

So for example:

  • @cxl.com goes to cxl/
  • @565media.com goes to 565media/
  • all-internal meetings go to appsumo/
  • other companies get auto-detected and mapped into their own folder names

There are over 100 partner companies being auto-categorized now.

Claude Code set up the logic with priority ordering too, so the important consulting and partner categories win before generic fallbacks.

That sounds like a small detail, but it's the kind of thing that decides whether a backup repo stays clean or becomes junk.
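The routing logic amounts to a small, ordered decision function. This is a sketch of how I'd express it, using the domains from above; the function name and fallback rule are illustrative, not the actual `categorizer.mjs` API:

```javascript
// Sketch of domain-to-folder categorization with priority ordering.
// Priority rules are checked first so consulting/partner categories
// win before generic fallbacks.
const PRIORITY_RULES = [
  { domain: 'cxl.com', folder: 'cxl' },
  { domain: '565media.com', folder: '565media' },
];
const INTERNAL_DOMAIN = 'appsumo.com';

export function categorize(attendeeEmails) {
  const domains = attendeeEmails
    .map((email) => email.split('@')[1]?.toLowerCase())
    .filter(Boolean);

  // 1. Priority rules: key clients and partners win outright.
  for (const rule of PRIORITY_RULES) {
    if (domains.includes(rule.domain)) return rule.folder;
  }
  // 2. All-internal meetings go to the company folder.
  if (domains.every((d) => d === INTERNAL_DOMAIN)) return 'appsumo';
  // 3. Fallback: auto-detect the first external domain as a folder name.
  const external = domains.find((d) => d !== INTERNAL_DOMAIN);
  return external ? external.split('.')[0] : 'appsumo';
}
```

The ordering is the whole trick: a meeting with both internal and CXL attendees lands in `cxl/`, not `appsumo/`, because the priority check runs before the internal check.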

The scripts behind it

Here's the core architecture:

scripts/lib/cache-reader.mjs

  • reads and parses the Granola cache
  • extracts attendees from both the enriched people field and the calendar event data

scripts/lib/categorizer.mjs

  • maps email domains to company folder names
  • uses priority ordering so key clients and partners land in the right folder

scripts/lib/markdown-builder.mjs

  • builds the markdown output
  • includes YAML frontmatter for title, date, attendees, companies, duration, and category
  • formats notes and transcripts cleanly with timestamps and speaker labels
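To make the output concrete, here's a rough sketch of what a builder like that produces. The frontmatter keys match the list above; the function signature and exact layout are my assumptions, not the real `markdown-builder.mjs`:

```javascript
// Sketch of a markdown builder: YAML frontmatter plus body sections.
// The meeting object shape is assumed for illustration.
export function buildMarkdown(meeting) {
  const frontmatter = [
    '---',
    `title: "${meeting.title}"`,
    `date: ${meeting.date}`,
    `attendees: [${meeting.attendees.join(', ')}]`,
    `companies: [${meeting.companies.join(', ')}]`,
    `duration: ${meeting.duration}`,
    `category: ${meeting.category}`,
    '---',
  ].join('\n');

  // Notes and transcripts are optional -- many cached meetings lack them.
  const notes = meeting.notes ? `\n## Notes\n\n${meeting.notes}\n` : '';
  const transcript = meeting.transcript
    ? `\n## Transcript\n\n${meeting.transcript}\n`
    : '';
  return `${frontmatter}\n\n# ${meeting.title}\n${notes}${transcript}`;
}
```

The frontmatter is what makes the repo agent-friendly: any tool can parse the metadata block without reading the prose underneath it.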

scripts/sync-new.mjs

  • runs the incremental sync
  • uses a .sync-state.json manifest so it only processes new or updated meetings
  • then does git add, commit, and push
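The manifest idea is simple enough to show in a few lines. This is a sketch under the assumption that `.sync-state.json` maps meeting id to a last-seen `updatedAt` stamp; the real manifest format may differ:

```javascript
// Sketch of incremental sync state: a manifest mapping
// meeting id -> last-seen updatedAt, so only new or changed
// meetings get re-exported on each run.
import { existsSync, readFileSync } from 'node:fs';

export function loadState(path = '.sync-state.json') {
  return existsSync(path) ? JSON.parse(readFileSync(path, 'utf8')) : {};
}

// A meeting needs exporting if it's new or its timestamp changed.
export function selectChanged(meetings, state) {
  return meetings.filter((m) => state[m.id] !== m.updatedAt);
}

// After exporting, record the timestamps we just processed.
export function updateState(meetings, state) {
  const next = { ...state };
  for (const m of meetings) next[m.id] = m.updatedAt;
  return next;
}
```

After writing the changed files, the script shells out to `git add`, `git commit`, and `git push`, and saves the updated manifest, so a run that finds nothing new is a near no-op.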

scripts/export-all.mjs

  • one-time bulk export used for the original backfill

And the scheduler is just native macOS:

  • ~/Library/LaunchAgents/com.nick.granola-sync.plist

That part matters a lot.

No Docker. No cloud worker. No random npm scheduler wrapper. No extra service I have to remember exists.

Just launchd.

Why I like launchd more than fancy scheduling stacks

This is another place I've gotten more opinionated.

For personal automations on a Mac, launchd is underrated.

It:

  • survives reboots
  • runs natively on macOS
  • can catch up after sleep
  • doesn't need another app layered on top
  • is boring in the best possible way

The sync job runs twice daily, Monday through Friday:

  • 12:00 PM
  • 6:00 PM

That's enough to keep the repo current without turning my machine into some ridiculous Rube Goldberg orchestration platform.

And because it's just Node plus launchd, the whole thing is easy to inspect later.

Zero npm dependencies on purpose

This was deliberate.

The whole system uses only Node.js built-ins.

No npm packages.

That sounds minor until you've had enough automations break because:

  • a package changed behavior
  • a dependency introduced a weird bug
  • a lockfile drifted
  • a transitive package got deprecated
  • an install worked on one machine and failed on another

For scheduled jobs, I want less surface area.

If built-ins can do the job, I'd rather use built-ins.

That bias alone makes these systems more durable.

What the initial export looked like

The first backfill exported:

  • 736 meetings
  • 511 AppSumo internal meetings
  • 44 565 Media meetings
  • 8 CXL consulting meetings
  • 100+ unique partner companies auto-categorized
  • 10 meetings with full transcripts
  • around 16 with markdown notes
  • date range from April 2025 through March 2026

The uneven notes/transcript count is just how Granola caches content. Some meetings had richer stored content than others.

But even with that limitation, the system turned a fragile rolling cache into an 11-month searchable archive.

That's the win.

Why this changed how I think about agent systems

This project pushed me further toward a view I already kind of had:

the real moat is not the agent. It's the data layer the agent can reliably access.

Once meeting notes are in structured markdown inside GitHub:

  • Claude Code can read them
  • OpenClaw can read them
  • a future custom agent can read them
  • plain scripts can read them
  • I can grep them myself

The data is no longer trapped inside Granola.

And it's no longer trapped inside one AI agent stack either.

That portability matters.

Every time you keep core business context inside a proprietary UI with a short memory window, you're rebuilding your operating system on rented land.

GitHub isn't perfect, but for this kind of thing it's fantastic:

  • durable
  • transparent
  • searchable
  • version-controlled
  • easy to sync from local scripts
  • easy for humans and machines to inspect

Why I'm moving more scheduled jobs to Claude Code

This Granola backup system is the template.

I still use OpenClaw for a bunch of interactive stuff. It's useful.

But for major scheduled jobs, I increasingly want this stack instead:

  • Claude Code to build the automation
  • plain Node scripts with minimal dependencies
  • native scheduling like launchd
  • GitHub as the durable backing store when the data matters

Why?

Because when a scheduled job is important, I care less about how elegant the platform story is and more about whether it runs next Tuesday without me thinking about it.

And honestly, repeated auth weirdness has been the breaking point.

If a system keeps failing because a token expired, an environment var didn't make it into the scheduler, or some integration silently drifted, that system is telling you something.

It's telling you the operational surface area is too big.

Claude Code has been really good for helping me collapse that complexity into smaller, more legible systems.

The payoff

The practical result is simple:

I can now ask an agent about meetings from months ago and actually have a shot at getting a good answer.

Not because the model got smarter.

Because the memory stopped evaporating.

That's the real trick.

A lot of agent workflows fail because people focus on prompts and ignore persistence.

But persistent context is the thing that compounds.

Now any agent I use can work across 11 months of meeting history instead of one week of leftovers.

That's a completely different category of usefulness.

The takeaway

If you use AI tools heavily for work, back up the important data.

Not eventually. Now.

Your meeting notes, transcripts, CRM exports, docs, research, summaries, all of it.

If it matters to how you think and operate, don't leave it inside a tool that can forget, rotate, revoke, or disappear.

Get it into a durable format. Get it into version control. Make it readable by both humans and agents.

That's the shift I'm making with scheduled jobs too.

Less platform magic. More boring systems that keep working.

And right now, Claude Code is the best tool I've found for building those systems fast.