Humans Are Abusing AI – We Need Human Content Now More Than Ever.
Watch out, there's a fake Tom Hanks out there.
We now live in an age where you can’t tell the difference. You have no clue whether this text is AI-generated or not.
Even if you think it’s extremely creative that I, a human, wrote it.
Even if you think that its full of grammer and spellng mistakes.
Even if it sounds like a human.
Alright, so don’t trust everything you read, right? Try to focus more on images and videos.
But then you realize that Midjourney, an AI image generator, is creating images that look like this.
For heaven’s sake, how can any person tell that the above picture is AI-generated? It used to be by looking at the hands. But the girl’s hands look pretty okay.
So you’re thinking, oh, it’s video. That’s our only hope.
But then you realize you’re out of options: any video, image, or text you see might be AI-generated.
They even faked out Tom Hanks.

He does look quite young compared to the current Tom Hanks, but I doubt that’s much of an obstacle.
Verification
I was approached on LinkedIn a year ago by the founder of a company who wanted to create an AI-text classifier. In a nutshell, it’s meant to tell you whether a given text is most probably AI-generated. The website is called Originality.ai, and it’s getting quite popular.
I’ve used it numerous times with my blogging team. But here’s the problem: it doesn’t really tell you whether the text is AI-generated. It tells you whether it sounds AI-generated.
That leaves a wide margin of inaccuracy. The founders of OpenAI knew this. They built a text classifier that did this exact thing, then realized it made no sense.
They want ChatGPT to sound creative and original.
At the same time, they were building a classifier to flag anything ChatGPT releases.
They were chasing their own tail. So they discontinued the text classifier.
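To see why these detectors only measure how text *sounds*, here’s a toy sketch of one common statistical signal, "burstiness" (variance in sentence length). This is my own illustrative stand-in, not Originality.ai’s or OpenAI’s actual method; the function name and examples are hypothetical.

```python
import statistics

def burstiness_score(text: str) -> float:
    """Toy heuristic: standard deviation of sentence lengths (in words).

    Human writing tends to mix short and long sentences; model output is
    often more uniform. Note this measures how the text *sounds*, not who
    actually wrote it -- which is exactly the classifier's weakness.
    """
    sentences = [s.strip()
                 for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths)

# A human can easily write uniform sentences, and a model can vary its
# sentence lengths -- so scores like this are easy to fool in both directions.
uniform = "I like dogs. I like cats. I like birds. I like fish."
bursty = "Wait. That picture of a girl with perfectly normal hands was generated by Midjourney, and nobody on my blogging team could tell."
print(burstiness_score(uniform) < burstiness_score(bursty))  # prints True
```

A score like this can only ever say "statistically unusual for a model" or "statistically typical of a model," which is why OpenAI’s own classifier ended up chasing its tail.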
Are we doomed to a world of doubt? To this moment, there is no guarantee that I’m human, and I don’t know how to prove it to you. I could record a video of myself, but that’s also fakeable.
I could tell you a secret only I know, but AI could write that too.
I could try to tell you a feeling, something machines don’t have.
Let’s talk about the feeling that you have when drinking water after being dehydrated for quite a long time. It feels as if your throat is a rocky desert getting a flood of water. You want to just keep drinking.
Does this cut it? It probably doesn’t. But I’m trying my best here.
So what happens in the future? What do we desperately need right now?
We need human intervention.
When you eat a Snickers bar, how do you know it’s not just packaged poison? Well, sometimes, it has an FDA-approved badge, showing that a human entity guarantees that this product is not poisonous.
We need an FDA for AI (an AIA, so to speak).
But the FDA just runs tests to guarantee that a product is not poison. How would that work with AI? Not even the AI companies themselves know.
The only way I can think of is manual checkups: exhausting and unscalable, but maybe worth it.
For instance, I’m writing this blog on my computer. If I hold an AIA accreditation, I’m granting that organization remote access to my machine.
They see that I’m online and writing an article right now.
They can access my camera for image recognition and watch my screen as I write.
They might find me naked, but that’s more of a guarantee that I’m human.
They might also just come knocking on my office’s door and ask me about the article.
This doesn’t mean they’re guaranteed to badge all human content correctly. Some AI will slip through, but not much.
It’s a bit extreme, wouldn’t you say? But you’ll appreciate this article carrying an AIA certification the day you discover that the co-founder you’ve been talking to on LinkedIn for three months, and even video-chatted with, is actually AI-generated.
Scams are on the verge of exponential growth, and being human is the only solution.
We need to take a step back.
That’s my take on the matter. Do you think it’s too much? Are you bothered with AI images and text? Do you feel deceived? Or are you just okay with it as long as it delivers the right objective? Let’s start a discussion in the comments.