Editorial Policy

Last updated: May 5, 2026

This page describes how RackNotes writes, researches, edits, and corrects articles. It exists because in 2026, “tech blog” can mean anything from a careful publication to an AI-generated content farm — and the difference matters to readers. We’d rather state our standards openly than have people guess.

How articles get written

Most articles on RackNotes follow this workflow:

  1. Topic selection. Driven by what we think is genuinely useful — questions practitioners actually have, gaps in existing coverage, things we wish someone had explained clearly when we were learning. Not by keyword volume alone.
  2. Research. Vendor documentation, peer-reviewed benchmarks where they exist, technical forums (Reddit r/Proxmox, mailing lists, GitHub issues), official changelogs, and primary sources. We avoid building articles on top of other articles when we can help it.
  3. Drafting. Articles are typically drafted with AI assistance — large language models (LLMs) for structure, initial wording, and helping organize complex topics. We disclose this because (a) it’s true, (b) the alternative is hiding it, and (c) the editorial layer that follows is what determines whether the article is worth reading — not whether an LLM touched a draft.
  4. Editorial review. Every article is read end-to-end by a human editor before publication. This is where claims get verified, vague statements get tightened, version numbers get checked, and anything that smells like AI hallucination gets cut.
  5. Fact-checking. Specific technical claims (version numbers, pricing, feature availability, configuration details) are verified against current vendor sources. When we can’t verify something definitively, we say so or remove the claim.
  6. Publication. Articles are published only after the editorial pass. We don’t publish unsupervised AI output.

What we test vs. what we research

Articles fall into one of two categories, and we make the difference clear in the article itself:

  • Lab-tested. Based on actual deployments in our test environment. Includes screenshots, specific hardware/software versions, and real failure modes we encountered. These articles are explicitly labeled and identify the test setup.
  • Research-grounded. Based on documented sources, vendor specifications, and community-reported experience — not personal testing. These articles are framed as analysis and decision-support, not as “I deployed this and here’s what happened.”

We don’t pretend research-grounded articles are first-person experience. Inventing an “in my 15 years of deploying X” framing for things we haven’t deployed is something other publications do. We don’t.

Source standards

Sources we treat as reliable:

  • Vendor official documentation and changelogs
  • Peer-reviewed or methodologically transparent benchmarks (e.g., Phoronix, Blockbridge, AnandTech-style testing)
  • Government, academic, and standards-body publications (NIST, IETF, RFC documents)
  • Established technology news outlets (The Register, Ars Technica, ServeTheHome) for industry events
  • Active community forums where claims can be cross-checked (Reddit subreddits with high engagement, official mailing lists, GitHub issues)

Sources we avoid or use with skepticism:

  • AI-generated summaries of other articles (the “summary of summaries” problem)
  • Content farms, low-engagement listicles, SEO-driven aggregator sites
  • Vendor marketing pages presented as objective analysis
  • Unverifiable single-anecdote forum comments without corroboration
  • Outdated documentation (we check dates)

Fact-checking process

For specific technical claims, we cross-reference at least two sources where possible. Version-specific behavior is verified against the actual version mentioned. When information is contested or unclear, we flag it explicitly rather than picking the convenient answer.

If a claim rests on an assumption rather than a verifiable fact, we either remove it or label it as our analysis rather than established fact.

Corrections policy

When we get something wrong, we correct it visibly:

  • Articles include a “Last updated” date that’s revised when meaningful changes are made
  • Significant corrections are noted with a brief explanation of what changed and why
  • Factual errors are fixed, not silently rewritten
  • If a correction changes the article’s conclusion, we say so clearly

If you spot an error, tell us. We’d rather get it right than win an argument.

What we don’t do

  • No keyword stuffing. Articles are written for readers, not for search engine density metrics. If an SEO tool says we need to repeat “proxmox vs esxi” eight more times, we ignore it.
  • No clickbait headlines. “ULTIMATE GUIDE,” “You Won’t Believe,” and similar formulas are off the table. The headline should accurately describe the article.
  • No fabricated experience. When we haven’t personally deployed something, we don’t pretend we have. Articles are framed honestly as either tested or researched.
  • No vendor-pleasing reviews. If a product is mediocre, we say so — even when there’s a commission attached. If a vendor we have an affiliate relationship with does something bad, we cover it like we’d cover any other vendor.
  • No undisclosed sponsored content. We don’t currently publish sponsored articles. If we ever do, they’ll be labeled clearly at the top, not buried in a footer.
  • No filler listicles. “10 Best X” articles get written only when there’s a genuine reason for that format, not as generic SEO bait.
  • No content outside our competence. We don’t write about topics where we have neither deployment experience nor strong reference material. There’s enough confident-sounding nonsense online already without us adding to it.

Affiliate independence

Some articles contain affiliate links. We earn commission when readers click through and purchase, at no additional cost to them.

This does not affect what we recommend or how we describe products. Editorial decisions are made before commercial relationships are considered. If a product is bad, we say so — even with a commission attached. See our Affiliate Disclosure for the full breakdown.

Reader feedback

If you spot a factual error, have a question about how we sourced something, or want to push back on a claim — please do. Use the contact form. Reader corrections are how publications get better, and we treat them seriously.

For business inquiries (sponsorships, partnerships, content licensing), use the same form with “Business” in the subject.

Changes to this policy

We may update this Editorial Policy as our practices evolve. The “Last updated” date at the top reflects the most recent revision. Significant policy changes will be summarized here so readers can see what changed.