The short version: A publishing script set pubDate to the same timestamp on 172 pages. Google flagged it as spam-like behavior. Traffic went from 40,000 impressions/day to 300. I fixed it by restoring the original timestamps from git history. Recovery is underway.
In early January 2026, I watched GameAnomaly's traffic fall off a cliff. Not a gradual decline. A 99.25% drop in 48 hours.
No manual penalty. No deindexing. No content quality issues. Just a metadata mistake that made Google think I was running a spam operation.
What Happened
I run a gaming guides site. Roblox codes, tier lists, beginner guides. The kind of content that needs frequent updates because game codes expire fast.
To speed up publishing, I built automation. A script that would set metadata, generate images, sync to the database. Standard stuff.
The problem: one of those scripts had a bug. When I ran a batch operation to fix some formatting issues, it also updated pubDate on every file it touched.
Before the script:
# Different files had different dates
pubDate: 2025-12-25T20:27:03+05:30
pubDate: 2025-12-28T14:15:22+05:30
pubDate: 2026-01-02T09:45:11+05:30
After the script:
# Every file: same timestamp, same second
pubDate: 2026-01-02T13:24:07.000Z
pubDate: 2026-01-02T13:24:07.000Z
pubDate: 2026-01-02T13:24:07.000Z
172 pages. Identical timestamps. Down to the millisecond.
Why Google Cared
Think about what this looks like from Google's perspective:
- 172 pages suddenly have the same publication date
- All timestamps match to the exact second
- This pattern is common in scraped/spun content
- Spam sites often mass-generate pages with identical metadata
Google's systems flagged it. Not as spam exactly, but as suspicious enough to suppress rankings while re-evaluating.
The site wasn't deindexed. Pages still showed in Search Console. But impressions dropped from 40,000/day to 300. Rankings that were position 3-5 disappeared from the first few pages entirely.
How I Diagnosed It
The traffic drop was obvious. Finding the cause took longer.
What I checked first (and ruled out):
- Manual actions in Search Console: None
- Indexing issues: Pages still indexed
- Core Web Vitals: Still green
- Content changes: Nothing significant
- Backlink issues: Nothing unusual
What finally clicked:
I was looking at the sitemap and noticed the <lastmod> dates. They were all identical. That seemed wrong.
Checked the frontmatter in a few files. Same pubDate everywhere. Checked git history. Found the commit where everything changed.
git log --oneline --all -- "src/content/**/*.md" | head -20
There it was. One commit that touched 172 files. All with the same timestamp.
The Fix
The solution was straightforward but tedious: restore the original timestamps.
Git history had the real dates. Each file's first commit was its actual publication date.
# Get original creation date for a file
git log --follow --format=%aI --reverse -- path/to/file.md | head -1
I wrote a script (sketched below) to:
- Loop through every affected file
- Extract the original commit date from git
- Update pubDate to match
- Set updatedDate equal to pubDate (to remove conflicting signals)
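Here's roughly what it looked like. Treat this as a simplified sketch: it assumes flat frontmatter with pubDate: and updatedDate: keys on their own lines, and GNU sed for the in-place edits.
#!/usr/bin/env bash
# Simplified sketch: restore pubDate/updatedDate from git history for the
# files passed as arguments (so it can run on a handful of files at a time)
set -eu

for file in "$@"; do
  # First commit that touched the file = original publication date
  original=$(git log --follow --format=%aI --reverse -- "$file" | head -1)
  [ -n "$original" ] || { echo "skip (no history): $file"; continue; }

  # Rewrite both date fields in place; review the diff before committing
  sed -i "s|^pubDate:.*|pubDate: $original|" "$file"
  sed -i "s|^updatedDate:.*|updatedDate: $original|" "$file"

  echo "$file -> $original"
done
Passing an explicit file list, one content collection at a time, keeps each run small enough to review by hand.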
Important: I didn't run this as one big batch operation. After what had just happened, I wasn't about to mass-update timestamps again. I ran it in small batches over a few days, verifying each change.
The result: natural timestamp distribution restored. Dates ranging from December 25, 2025 to January 11, 2026. No more identical patterns.
Recovery Timeline
This happened in early January. Here's where things stand now (January 21, 2026):
Week 1 (post-fix):
- Deployed corrected timestamps
- Submitted updated sitemap
- Waited
Week 2:
- Crawl rate picked up slightly
- Some pages started appearing in search again
- Impressions: ~2,000/day (up from 300, still down from 40,000)
Week 3 (current):
- Rankings testing on page 1 for some queries
- Average position: 7-8 (was 3-5 before incident)
- CTR: 13% (healthy sign)
- Impressions climbing slowly
Full recovery will take 4-6 weeks based on similar cases I've read about. The site isn't penalized, it's just being re-evaluated. Google needs time to trust the signals again.
What I Learned
1. Automation needs guardrails
The script that caused this worked fine in isolation. The problem was running it on files that shouldn't have been touched.
Now every script that modifies content has three guardrails (a sketch follows this list):
- Explicit file filters
- Dry-run mode by default
- Timestamp fields excluded from batch operations
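Here's the shape that takes in practice, as a sketch rather than my exact code; the --apply flag, the target directory, and the trailing-whitespace fix are illustrative.
#!/usr/bin/env bash
# Guarded batch edit: dry-run unless --apply is passed, an explicit file
# filter, and no substitution that can ever touch pubDate or updatedDate
set -eu

APPLY=false
if [ "${1:-}" = "--apply" ]; then
  APPLY=true
fi

# Explicit filter: only the collection this job is meant to touch
find src/content/codes -name "*.md" | while read -r file; do
  if [ "$APPLY" = true ]; then
    # The actual formatting fix (strip trailing whitespace); GNU sed
    sed -i 's/[[:space:]]*$//' "$file"
    echo "[applied]  $file"
  else
    echo "[dry-run]  would fix: $file"
  fi
done
Anything destructive stays behind the flag, and the dry-run output doubles as a log of what would have changed.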
2. Timestamps are trust signals
I knew Google used timestamps for freshness. I didn't realize identical timestamps could trigger spam detection.
Makes sense in hindsight. Real sites don't publish 172 articles at the exact same second. Spam sites do.
3. Git history is your backup
Without git history, I'd have had no way to recover the original dates. The fix would have been guessing or making up new dates, which might have caused different problems.
4. Recovery takes patience
The instinct after a traffic drop is to do something. Change titles. Update content. Request indexing for everything.
That's the wrong move. During algorithmic re-evaluation, stability matters more than activity. I'm publishing new content but not touching anything that was affected.
Prevention Checklist
If you're running a content site with automation, here's what I'd check:
Timestamp hygiene:
- Are pubDate values unique across pages?
- Do timestamps match git commit history? (A spot-check for both is sketched below.)
- Are batch operations excluded from touching date fields?
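That spot-check can be a few lines of shell. This is a sketch with the same frontmatter assumption as before; it compares only the date portion so time zone offsets don't produce false alarms.
# Flag files whose frontmatter pubDate disagrees with the first git commit date
find src/content -name "*.md" | while read -r file; do
  front=$(grep -m1 "^pubDate:" "$file" | sed 's/^pubDate:[[:space:]]*//')
  first=$(git log --follow --format=%aI --reverse -- "$file" | head -1)
  # Compare only the YYYY-MM-DD portion
  if [ "${front%%T*}" != "${first%%T*}" ]; then
    echo "MISMATCH: $file  frontmatter=$front  git=$first"
  fi
done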
Script safety:
- Do publishing scripts have dry-run modes?
- Are there filters to prevent touching existing content?
- Is there logging to track what changed?
Monitoring (a minimal check is sketched after this list):
- Are you tracking timestamp distribution?
- Would you notice if 100+ pages got the same date?
- Do you have alerts for sudden traffic drops?
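For the timestamp side of this, even a crude check in CI would have caught my incident within one build. A sketch, with an arbitrary threshold of 5 identical dates:
# Fail the build if any single pubDate value appears on more than 5 pages
max=$(grep -rh "pubDate:" src/content/ | sort | uniq -c | sort -rn | head -1 | awk '{print $1}')
if [ "${max:-0}" -gt 5 ]; then
  echo "Timestamp clustering: one pubDate value appears on $max pages" >&2
  exit 1
fi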
The Bigger Picture
This incident cost me about 3 weeks of traffic so far, with another 3-4 weeks of recovery ahead. For a site that was growing well, that's painful.
But it's also a good reminder: technical SEO mistakes can hurt as much as content quality issues. Google's systems are looking for patterns. Patterns that look automated or manipulative get flagged, even if the intent was innocent.
The fix isn't complicated. The lesson is: be careful with batch operations on metadata. What looks like a time-saver can become a traffic-killer.
GameAnomaly is my Roblox gaming guides site, built with Astro. If you're interested in the technical setup, I wrote about why I chose Astro over Next.js for content sites.
Technical Details
For anyone dealing with a similar issue, here's the diagnostic approach:
Check for timestamp clustering:
# Extract all pubDates and count duplicates
grep -r "pubDate:" src/content/ | cut -d: -f3 | sort | uniq -c | sort -rn | head -20
Get original dates from git:
# First commit date for a file (actual creation)
git log --follow --format=%aI --reverse -- "path/to/file.md" | head -1
# Last commit date (actual last update)
git log -1 --format=%aI -- "path/to/file.md"
Verify sitemap dates:
curl -s https://yoursite.com/sitemap.xml | grep -o "<lastmod>[^<]*" | sort | uniq -c | sort -rn
If you see hundreds of identical dates, you've found your problem.
