AI Generation Detection
How accurate are AI detectors? I’m not sure, but as a test I ran some writing through an online “AI detector” tool, and, to my surprise, it was pretty accurate. Today, I wrote a random piece completely by myself; the tool said it was 100% original. Correct! I then submitted a piece that was partially self-written and partially AI-written; it said it was 87% original. Also correct! Finally, I submitted something entirely AI-written, and it said it was 0% original. Correct!
What surprised me about that is this: the piece I wrote myself was intentionally nonsensical and bizarre. There was no coherent story, no central character, no theme. It was just a jumble of loose words that formed sentences, but not very good ones. I was deliberately trying to mimic the AI writing I’ve seen firsthand: in my experience, AI writing does follow a narrative and makes sense grammatically, but it’s usually just really weird and nonsensical in every other way.
The AI-written piece was, as expected, just as random and strange as the one I’d written. To me, the coherence of the content didn’t seem all that different, which makes me wonder whether it’s the writing style that sets it apart rather than the content. But even then, I didn’t see anything distinctly different about the AI’s writing style. The phrasing choices seemed normal, or at least consistent with what I’d done.
To make things more interesting: the “partial AI” piece was correctly identified as partially AI-written, probably to roughly the right percentage too, and the program then highlighted the sections it believed were AI while leaving the sections it believed were original unhighlighted. That’s where it started getting things wrong. Some sections I wrote were flagged as AI, and some sections that were AI weren’t flagged at all. So the takeaway is that while it correctly identified the overall mix of AI and original writing, it couldn’t pinpoint which parts were which.
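For fun, here’s what that kind of per-sentence flagging might look like under the hood. This is purely a hypothetical sketch, not how the tool I used actually works (I have no idea what it does internally): it scores each sentence by its perplexity under a small language model, on the rough theory that machine-generated text tends to be more statistically predictable. The GPT-2 model, the naive split on periods, and the threshold of 40 are all arbitrary assumptions for illustration.

```python
# Hypothetical sketch: flag sentences as "AI-like" vs. "human-like" by
# perplexity under a small language model. Not the actual tool's method.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    # Perplexity = exp(mean per-token cross-entropy) under the model.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def flag_sentences(text: str, threshold: float = 40.0):
    # Lower perplexity means more "predictable" text, which detectors in this
    # style often treat as a hint of machine generation. Threshold is arbitrary.
    results = []
    for sent in (s.strip() for s in text.split(".") if s.strip()):
        label = "AI-like" if sentence_perplexity(sent) < threshold else "human-like"
        results.append((sent, label))
    return results
```

A per-sentence signal like this would be noisy, while the document-level average comes out roughly right because averaging over many sentences smooths out individual mistakes. That would be consistent with what I saw: a plausible overall score, but unreliable highlighting of which parts were which.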
Conclusions: As far as I can tell, it’s actually pretty accurate. I’ve tried to stump it with all-AI or all-original pieces and can’t. Where it’s only so-so is with partially AI-written text. I’m glad this kind of detection works; something like it should be out there to identify this kind of writing. I wonder if book publishers will one day vet submissions with systems like these. I certainly wouldn’t be opposed to that.
Feel free to play around with it! I think it’s pretty fun. And who knows: maybe paste an email into it and see whether your friends have been faking their emails to you!


