Frequently Asked Questions
Find answers to the most commonly asked questions about EyeSift and our AI detection tools. If you cannot find the answer you are looking for, please do not hesitate to contact us.
General Questions
1. What is EyeSift?
EyeSift is a comprehensive AI detection platform that analyzes text, images, video, and audio content to determine whether it was generated or manipulated by artificial intelligence. Our platform uses advanced machine learning models and statistical analysis to provide probabilistic assessments of content authenticity, helping educators, journalists, businesses, and researchers identify AI-generated material.
2. How does EyeSift work?
EyeSift works by analyzing submitted content using multiple detection methods simultaneously. For text, we examine perplexity, burstiness, and entropy patterns. For images, we look for GAN fingerprints and metadata anomalies. For video, we analyze temporal consistency and facial landmarks. For audio, we perform spectral and prosody analysis. Each analysis produces a confidence score indicating the likelihood that the content is AI-generated.
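Conceptually, combining several per-signal scores into one confidence value can be as simple as a weighted average. The sketch below is purely illustrative — the signal names and weights are hypothetical, not EyeSift's actual pipeline:

```python
# Illustrative only (not EyeSift's real pipeline): fold several per-signal
# detection scores into a single confidence value via a weighted average.
# Signal names and weights here are invented for demonstration.

def combine_scores(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Return a weighted-average confidence in [0, 1]."""
    total_weight = sum(weights[name] for name in signals)
    if total_weight == 0:
        raise ValueError("weights must not sum to zero")
    return sum(signals[name] * weights[name] for name in signals) / total_weight

text_signals = {"perplexity": 0.82, "burstiness": 0.74, "entropy": 0.66}
weights = {"perplexity": 0.5, "burstiness": 0.3, "entropy": 0.2}
confidence = combine_scores(text_signals, weights)
print(f"AI-likelihood: {confidence:.2f}")  # → AI-likelihood: 0.76
```

Real detectors typically learn these weights (or use a trained classifier over the signals) rather than fixing them by hand.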
3. Is EyeSift free to use?
Yes, EyeSift offers free access to all of our core detection tools, including text, image, video, and audio analysis. You can submit content for analysis without creating an account. We believe that AI detection tools should be accessible to everyone. We may offer premium features and enterprise plans in the future for organizations with high-volume needs and advanced requirements.
Accuracy Questions
4. What is EyeSift's accuracy rate?
EyeSift's detection accuracy typically ranges from 75% to 85%, depending on the type of content analyzed. Text detection accuracy can vary based on the AI model used to generate the content, the length of the text, and whether it has been edited after generation. Image and video detection rates depend on the generator used and the resolution of the submitted file. We are transparent about these rates because we believe honesty builds trust.
5. Why is the accuracy 75-85% and not 99%?
AI detection is fundamentally a probabilistic challenge. As AI generation models improve, they produce content that more closely mimics human patterns, making detection increasingly difficult. No detection tool can achieve near-perfect accuracy because the boundary between human and AI-generated content is not always clear-cut. Edited, paraphrased, or mixed content further complicates detection. We believe transparency about our limitations is more responsible than overstating our capabilities.
6. What factors affect detection accuracy?
Several factors influence accuracy: content length (longer text yields more reliable results); the specific AI model used to generate the content; post-generation editing and paraphrasing; the language of the text; image resolution and format; video quality and length; audio clarity and recording conditions; and whether the content is a mixture of human and AI-generated material. Following the optimal input guidelines we provide for each tool will help improve result reliability.

Text Detection
7. What AI models can EyeSift detect?
EyeSift can detect text generated by a wide range of large language models, including OpenAI's GPT-3.5 and GPT-4, Anthropic's Claude, Google's Gemini, Meta's LLaMA, and many other popular AI text generators. Our detection models are regularly updated to account for newly released AI models and evolving generation techniques. Detection effectiveness may vary across different models and versions.
8. How does perplexity scoring work?
Perplexity measures how predictable the text is to a language model. AI-generated text tends to have lower perplexity because AI models generate statistically likely word sequences. Human-written text typically has higher perplexity because people make more creative, unexpected, and sometimes irregular word choices. EyeSift calculates perplexity scores at the sentence and document level and compares them against established baselines to assess the likelihood of AI origin.
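To make the formula concrete, here is a toy perplexity calculation under a simple unigram model. Production detectors use large neural language models; this sketch only demonstrates the underlying math, PP = exp(-(1/N) · Σ log p(wᵢ)):

```python
import math
from collections import Counter

# Toy illustration of perplexity under a unigram model (not EyeSift's
# detector). Text the model has "seen" is predictable and scores low;
# unfamiliar text scores high.

def unigram_perplexity(text: str, corpus: str) -> float:
    corpus_words = corpus.lower().split()
    counts = Counter(corpus_words)
    vocab = len(counts)
    total = len(corpus_words)
    log_prob = 0.0
    words = text.lower().split()
    for w in words:
        # Add-one (Laplace) smoothing so unseen words get nonzero probability.
        p = (counts[w] + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / len(words))

corpus = "the cat sat on the mat the dog sat on the rug"
predictable = "the cat sat on the mat"
surprising = "quantum flux resonates oddly"
print(f"seen: {unigram_perplexity(predictable, corpus):.1f}")
print(f"unseen: {unigram_perplexity(surprising, corpus):.1f}")
```

The predictable sentence scores far lower than the unfamiliar one, mirroring how AI-generated text (statistically likely word sequences) registers lower perplexity than creative human writing.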
9. How does burstiness analysis work?
Burstiness refers to the variation in sentence length and complexity throughout a piece of text. Human writers naturally produce text with high burstiness, mixing short, punchy sentences with longer, more complex ones. AI-generated text tends to be more uniform in structure and sentence length, exhibiting lower burstiness. EyeSift measures this variation and uses it as one of several indicators of AI authorship.
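One simple proxy for burstiness is the coefficient of variation (standard deviation divided by mean) of sentence lengths. The sketch below is a simplification of what production detectors measure, but it captures the idea:

```python
import re
import statistics

# Illustrative burstiness proxy (not EyeSift's metric): the coefficient of
# variation of sentence lengths. Higher values mean more varied, "burstier"
# text; uniform sentence lengths score near zero.

def burstiness(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

human_like = ("Stop. I could not believe what the long winding corridor "
              "finally revealed after all those years. Really?")
uniform = "The cat sat on the mat. The dog sat on the rug. The bird sat on the wire."
print(burstiness(human_like) > burstiness(uniform))  # → True
```

Mixing a one-word sentence with a sixteen-word one produces a high score, while three six-word sentences score zero — exactly the uniformity pattern associated with AI-generated text.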
Image Detection
10. What does image analysis check for?
Our image analysis examines multiple characteristics: GAN fingerprints (subtle patterns left by generative adversarial networks), EXIF metadata (camera and editing information embedded in image files), compression artifacts unique to AI generators, pixel-level statistical anomalies, color distribution patterns, edge consistency, and texture regularity. By combining these methods, we build a comprehensive profile of the image to assess its authenticity.
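As a small taste of the metadata side of this analysis: many AI image generators emit files with no camera EXIF data, so a missing EXIF segment is one (weak) signal. The sketch below only detects whether a JPEG byte stream carries an EXIF APP1 segment; it is an illustration, not EyeSift's analyzer:

```python
# Illustrative metadata check (not EyeSift's analyzer): scan JPEG segment
# markers for an APP1/Exif segment. AI-generated images often lack one.

def has_jpeg_exif(data: bytes) -> bool:
    """Return True if the byte stream looks like a JPEG with an EXIF APP1 segment."""
    if not data.startswith(b"\xff\xd8"):            # JPEG SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xE1:                           # APP1 segment
            return data[i + 4 : i + 10] == b"Exif\x00\x00"
        seg_len = int.from_bytes(data[i + 2 : i + 4], "big")
        i += 2 + seg_len
    return False

# Synthetic byte streams for demonstration (not real image files):
with_exif = b"\xff\xd8\xff\xe1\x00\x08Exif\x00\x00"
without_exif = b"\xff\xd8\xff\xdb\x00\x04\x00\x00"
print(has_jpeg_exif(with_exif), has_jpeg_exif(without_exif))  # → True False
```

Note that EXIF absence alone proves nothing — screenshots and edited photos also lack camera metadata — which is why this signal is only ever combined with the pixel-level checks described above.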
11. What image formats are supported?
EyeSift supports all major image formats including JPEG, PNG, WebP, and TIFF. For best results, submit original, uncompressed images whenever possible. Images that have been heavily compressed, resized, or repeatedly saved in lossy formats may yield less reliable results. We recommend uploading images with a minimum resolution of 256x256 pixels, though higher resolution images generally produce more accurate analysis.
12. Can EyeSift detect images from Midjourney, DALL-E, and Stable Diffusion?
Yes, EyeSift is trained to detect images generated by all major AI image generators, including Midjourney, DALL-E (versions 2 and 3), Stable Diffusion, Adobe Firefly, and other popular generators. Each generator leaves subtly different artifacts and patterns that our analysis models are designed to identify. Detection accuracy may vary depending on the specific version of the generator used and any post-processing applied to the image.
Video Detection
13. How does deepfake video detection work?
EyeSift's deepfake video detection analyzes multiple dimensions of video content simultaneously. We examine temporal consistency (whether facial features remain stable across frames), facial landmark tracking (checking for unnatural movements or distortions), audio-visual synchronization (whether lip movements match the audio), and compression artifacts specific to deepfake generation tools. These methods are combined to provide an overall assessment of video authenticity.
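The temporal-consistency idea can be sketched very simply: track a facial landmark's position per frame and flag frames where it jumps much farther than the typical frame-to-frame motion. This is an illustration of the concept only, not EyeSift's detector:

```python
import math

# Illustrative temporal-consistency check (not EyeSift's model): flag frames
# whose landmark displacement exceeds `factor` times the median displacement,
# a crude stand-in for the "unnatural movement" signal described above.

def jump_frames(landmarks: list[tuple[float, float]], factor: float = 3.0) -> list[int]:
    """Return indices of frames with an abnormally large landmark jump."""
    steps = [math.dist(a, b) for a, b in zip(landmarks, landmarks[1:])]
    if not steps:
        return []
    median = sorted(steps)[len(steps) // 2]
    return [i + 1 for i, d in enumerate(steps) if median > 0 and d > factor * median]

# Smooth motion with one abrupt jump at frame 3 (a deepfake-style glitch):
track = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1), (9.0, 5.0), (10.0, 5.1)]
print(jump_frames(track))  # → [3]
```

Real systems track dozens of landmarks across thousands of frames and combine the result with the audio-sync and compression checks above.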
14. What video formats does EyeSift support?
EyeSift supports the most widely used video formats, including MP4, AVI, MOV, and WebM. We recommend submitting videos in their original format and quality for the most accurate results. Videos that have been heavily re-encoded, downscaled, or compressed may produce less reliable analysis. There is a file size limit for video uploads, and longer videos may take more time to process.
15. How long does video processing take?
Video analysis processing time depends on the length, resolution, and complexity of the submitted video. Short clips (under 30 seconds) typically process within one to two minutes. Longer videos (one to five minutes) may take three to ten minutes. Very long or high-resolution videos may take longer. You will receive a notification when your analysis is complete. We continuously work on optimizing our processing pipeline to reduce wait times.
Audio Detection
16. What is voice clone detection?
Voice clone detection identifies audio that has been generated or manipulated using AI voice synthesis technologies. These technologies can create convincing imitations of a person's voice from relatively short samples. EyeSift analyzes audio for signs of synthetic generation, including unnatural spectral patterns, irregular prosody (rhythm and intonation), and anomalies in vocal tract resonance that differ from natural human speech production.
17. How does spectral analysis work for audio detection?
Spectral analysis examines the frequency content of audio over time. Natural human speech has characteristic spectral patterns shaped by the physical properties of the vocal tract, including formant frequencies, harmonic relationships, and noise characteristics. AI-generated audio often exhibits subtle deviations from these natural patterns, such as overly smooth spectral transitions, missing micro-variations, or unusual frequency distributions. EyeSift's spectral analysis identifies these deviations as potential indicators of synthetic origin.
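One concrete spectral feature is spectral flatness: the ratio of the geometric mean to the arithmetic mean of the power spectrum. A pure tone scores near 0, while noise-like signals score much higher. The sketch below illustrates the feature only — real detectors compare many such features against natural-speech baselines:

```python
import cmath
import math
import random

# Illustrative measure (not EyeSift's model): spectral flatness of a signal,
# computed from a naive DFT. Fine for short demonstration signals.

def power_spectrum(signal: list[float]) -> list[float]:
    n = len(signal)
    spectrum = []
    for k in range(n // 2 + 1):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        spectrum.append(abs(s) ** 2 + 1e-12)        # floor avoids log(0)
    return spectrum

def spectral_flatness(signal: list[float]) -> float:
    power = power_spectrum(signal)
    log_mean = sum(math.log(p) for p in power) / len(power)
    return math.exp(log_mean) / (sum(power) / len(power))

n = 256
tone = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]   # pure tone, 8 cycles
random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(n)]
print(spectral_flatness(tone) < spectral_flatness(noise))      # → True
```

Synthetic speech tends to sit in an unnatural middle ground — smoother than real speech in some bands, oddly flat in others — and deviations like these are what the analysis flags.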
Privacy Questions
18. What data does EyeSift store?
EyeSift is built with a privacy-first approach. Content submitted for analysis is processed in real time and is not permanently stored on our servers by default. We retain only anonymized metadata and aggregate usage statistics to improve our detection models. If you create an account and opt in to analysis history, your results (but not the original content) may be stored for your reference. You can request deletion of all your data at any time by contacting [email protected].
19. Is my submitted content kept after analysis?
No. By default, the content you submit for analysis is not retained after processing is complete. Your text, images, video, and audio files are processed in memory during analysis and are discarded once results are generated. We do not build a database of submitted content, and we do not use your submitted content to train our models without explicit consent. Your privacy and the confidentiality of your content are our top priorities.
20. Is EyeSift GDPR compliant?
Yes, EyeSift is committed to complying with the General Data Protection Regulation (GDPR) and other applicable data protection laws. We provide users with the right to access, correct, and delete their personal data. We do not sell user data to third parties. Our data processing practices are designed to minimize data collection and retention. For more information about our data practices, please review our Privacy Policy or contact our data protection officer at [email protected].
Technical Questions
21. Is an API available?
We are currently developing a public API that will allow developers and organizations to integrate EyeSift's detection capabilities directly into their own applications, workflows, and content management systems. The API will support all four detection modes (text, image, video, and audio) and will include comprehensive documentation. If you are interested in early access, please contact us at [email protected].
22. Does EyeSift support batch processing?
Batch processing for analyzing multiple files simultaneously is planned for our upcoming enterprise tier. This feature will allow organizations to submit large volumes of content for analysis and receive consolidated results. Currently, content must be submitted one item at a time through our web interface. If you have immediate batch processing needs, please reach out to our enterprise team at [email protected].
23. What integration options are available?
We are building integrations with popular learning management systems (LMS), content management systems (CMS), and publishing platforms. Planned integrations include browser extensions for quick analysis, LMS plugins for academic institutions, CMS modules for content publishers, and email verification tools. Our goal is to make AI detection seamlessly available wherever content is created and consumed. Stay updated on new integrations by following our announcements.
Using EyeSift Effectively
24. What is the minimum text length for reliable detection?
We recommend submitting text samples of at least 250 words for meaningful analysis. Samples of 500 words or more tend to produce the most reliable and consistent results. Very short texts, such as individual sentences or brief paragraphs under 100 words, may not contain enough linguistic data for our detection models to produce a confident assessment. When analyzing shorter texts, keep in mind that the confidence scores may be less reliable and should be interpreted with greater caution.
25. Can EyeSift detect content that mixes human and AI writing?
EyeSift can identify mixed content to a degree, but this remains one of the most challenging scenarios for any AI detection tool. When human-written and AI-generated passages are blended within a single document, our analysis may flag the document as having mixed signals, typically reflected in a mid-range confidence score between 30% and 70%. For best results with suspected mixed content, try analyzing different sections of the document separately to identify which portions may be AI-generated.
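The section-by-section approach suggested above can be sketched as follows: split a document into paragraph-sized chunks and score each one separately. The scorer here is a placeholder stub — in practice each chunk would go through the full text-analysis pipeline:

```python
# Sketch of section-by-section analysis for suspected mixed content. The
# scorer below is a deliberately silly placeholder, not a real detector.

def split_into_chunks(text: str, min_words: int = 50) -> list[str]:
    """Group consecutive paragraphs into chunks of at least `min_words` words."""
    chunks, current = [], []
    for para in text.split("\n\n"):
        current.append(para)
        if sum(len(p.split()) for p in current) >= min_words:
            chunks.append("\n\n".join(current))
            current = []
    if current:
        chunks.append("\n\n".join(current))
    return chunks

def score_chunks(text: str, scorer) -> list[tuple[str, float]]:
    return [(chunk, scorer(chunk)) for chunk in split_into_chunks(text)]

# Placeholder scorer; a real one would return the detector's confidence.
fake_scorer = lambda chunk: 0.9 if "delve" in chunk else 0.2
doc = ("I wrote this part myself, plainly.\n\n" * 10 +
       "Let us delve into the multifaceted tapestry of ideas.\n\n" * 10)
for chunk, score in score_chunks(doc, fake_scorer):
    flag = "likely AI" if score > 0.7 else "likely human"
    print(f"{score:.1f}  {flag}")
```

Scoring chunks independently turns one ambiguous mid-range document score into a per-section map of where the suspect passages sit.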
26. How often are EyeSift's detection models updated?
We update our detection models on a regular basis to keep pace with the rapidly evolving landscape of AI content generation. Major model updates typically occur monthly, with smaller refinements and calibration adjustments happening more frequently. Each update incorporates training data from the latest AI generation tools and techniques to ensure our detection capabilities remain current. We announce significant updates through our website and communications channels so users can stay informed about improvements.
27. Can I use EyeSift results as evidence in academic integrity proceedings?
While EyeSift results can serve as a supplementary data point in academic integrity investigations, they should never be used as the sole piece of evidence for any academic misconduct determination. AI detection results are probabilistic and carry a known margin of error. We strongly recommend that educators use EyeSift results in conjunction with other evidence, such as writing style comparisons, interview-based verification, draft history review, and professional judgment. Many academic institutions have specific policies regarding the use of AI detection tools, and we encourage you to follow your institution's established guidelines.