EyeSift

Editorial Guidelines

Last updated: March 2026 | How we ensure accuracy, fairness, and transparency

Our Editorial Principles

🔍 Evidence-Based: Every claim backed by verifiable data and peer-reviewed research.

⚖️ Balanced Reporting: Honest about capabilities and limitations of AI detection.

🤝 Full Independence: No funding or influence from AI companies or competitors.

1. Our Commitment to Accuracy

EyeSift is committed to providing accurate, honest, and transparent AI detection services and content. Our stated accuracy range of 75-85% reflects our actual measured performance, not a marketing number. We recognize that AI detection carries real consequences — a false positive can unjustly accuse a student of cheating. Because of these stakes, we treat accuracy as an ethical obligation.

Every factual claim on EyeSift must be traceable to a verifiable source, and every statistic must be current and correctly cited.

2. How We Evaluate Accuracy

We test against curated datasets containing verified human-written and AI-generated text from GPT-4, Claude, Gemini, Llama, and other models. Benchmarks use blind testing with stratified sampling.
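
To make the sampling step concrete, here is a minimal sketch of stratified benchmark sampling. It assumes a hypothetical record format in which each text carries a "source" label (human, gpt-4, claude, and so on); it is an illustration of the technique, not our production pipeline.

```python
# Minimal sketch: draw an equal number of texts from every source stratum
# so that no single model (or human writing) dominates the benchmark.
# The "source" field and per-stratum count are illustrative assumptions.
import random
from collections import defaultdict

def stratified_sample(records, label_key="source", per_stratum=200, seed=42):
    rng = random.Random(seed)
    strata = defaultdict(list)
    for rec in records:
        strata[rec[label_key]].append(rec)
    sample = []
    for bucket in strata.values():
        rng.shuffle(bucket)
        sample.extend(bucket[:per_stratum])
    rng.shuffle(sample)  # blind the order so graders cannot infer the source
    return sample
```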

Metrics We Track

  • True Positive Rate: AI text correctly identified
  • True Negative Rate: Human text correctly identified
  • False Positive Rate: Human text incorrectly flagged (highest priority to minimize)
  • F1 Score: Harmonic mean of precision and recall
  • AUC-ROC: Discrimination ability across all decision thresholds (see the sketch after this list)
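
The sketch below shows how these metrics can be computed from benchmark results using scikit-learn. The 0.5 threshold and variable names are illustrative, not our production configuration.

```python
# Minimal sketch: compute the tracked metrics from binary labels and
# detector scores. y_true: 1 = AI-generated, 0 = human-written.
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

def evaluate(y_true, y_score, threshold=0.5):
    y_pred = [int(s >= threshold) for s in y_score]
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "true_positive_rate": tp / (tp + fn),   # AI text correctly identified
        "true_negative_rate": tn / (tn + fp),   # human text correctly identified
        "false_positive_rate": fp / (fp + tn),  # human text incorrectly flagged
        "f1": f1_score(y_true, y_pred),         # harmonic mean of precision/recall
        "auc_roc": roc_auc_score(y_true, y_score),  # threshold-independent
    }
```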

Accuracy is re-evaluated monthly against fresh samples from the latest model versions.

3. False Positives and Negatives

No AI detector is 100% accurate. False positives and negatives are inherent to the technology.

We address false positives through confidence scoring (sketched below), per-section analysis, clear disclaimers, and documentation of known false positive patterns. We openly acknowledge that our detectors can miss paraphrased text, adversarially prompted output, and content from models newer than our benchmarks.
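
The sketch below illustrates the idea behind confidence-banded reporting. The thresholds are illustrative placeholders, not our production values; the point is that scores in the uncertain middle band are reported as inconclusive rather than flagged.

```python
# Minimal sketch: never flag on uncertain evidence. Thresholds are
# illustrative assumptions, not EyeSift's actual cutoffs.
def classify(score, flag_at=0.85, clear_at=0.30):
    if score >= flag_at:
        return "likely AI-generated"
    if score <= clear_at:
        return "likely human-written"
    return "inconclusive"  # middle band: report uncertainty, don't accuse
```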

4. Balanced Reporting

All factual claims must be supported by primary sources. When citing research, we note sample size, methodology, and limitations. When reviewing competing tools, we apply the same criteria to every product, including EyeSift.

5. Independence

  • No funding from AI companies: No investments or sponsorships from OpenAI, Anthropic, Google, etc.
  • No paid reviews: Rankings based solely on independent testing.
  • No affiliate commissions: We do not earn referral fees from competitors.
  • Revenue transparency: Supported by contextual display advertising only.
  • Honest self-assessment: We report accurately when competitors outperform us.

6. Correction Policy

Minor errors are corrected in-place. Significant errors include a visible correction notice. Retracted articles remain accessible with a retraction header. Report errors via our contact page.

Read Our Methodology

For technical documentation of how our AI detection algorithms work.