Today, developers of AI content detectors present them as tools to guard authenticity. The question is, are they worth the trust and investment? In this article, we’ll look at how AI content detectors work, why they might go extinct, what challenges they bring, and the ethical dilemmas they pose.
AI content detectors are based on language models similar to those used to generate AI content. They can be divided by the kind of content they examine: images, text, or music created with the support of artificial intelligence. Each type of “AI detector” works slightly differently, but none of them can distinguish with absolute certainty between human-created and AI-generated content.
AI-generated image detectors are playing an increasingly important role because of how easily such images can be used to spread fake news. They analyze anomalies, distinctive styles and patterns, and look for traces left behind by models such as DALL-E.
Prominent among the detectors used to identify images is the “AI or Not” tool from Optic, which uses image databases generated by Midjourney, DALL-E and Stable Diffusion. While results are uncertain, it is a step toward developing more precise identification methods in the future.
Source: AI or Not (https://www.aiornot.com/)
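Tools like this are typically offered as web services. The sketch below shows, in broad strokes, how such a detection service might be called over HTTP. The endpoint, headers, and response fields are hypothetical placeholders, not the documented “AI or Not” API, so treat it as an illustration of the workflow rather than working integration code.

```python
# Hypothetical example – the URL, header, and response fields below are
# illustrative placeholders, not the documented "AI or Not" API.
import requests

def check_image(path: str, api_key: str) -> dict:
    """Send an image to a (hypothetical) detection endpoint and return its verdict."""
    with open(path, "rb") as f:
        response = requests.post(
            "https://api.example-detector.com/v1/image",   # placeholder URL
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()  # e.g. {"verdict": "ai", "confidence": 0.87} – illustrative only

if __name__ == "__main__":
    print(check_image("suspect.png", api_key="YOUR_KEY"))
```

Whatever the service, the verdict comes back as a probability, not a certainty, which is why such results should be read as a hint rather than proof.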
Behind the operation of AI detectors that recognize AI-generated texts are algorithms that analyze the structure and word choice of a text and then look for AI-specific patterns. They typically rely on signals such as perplexity (how predictable the wording is to a language model) and burstiness (how much sentence length and structure vary).
Taken together, these signals are used by AI content detectors to assess whether we are dealing with human-written or machine-generated text.
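To make the idea concrete, here is a minimal sketch of a perplexity-based check, assuming the Hugging Face transformers library and the publicly available GPT-2 model. Real detectors combine several signals and carefully calibrated thresholds; the cutoff used below is arbitrary and only for illustration.

```python
# Minimal perplexity check – a toy illustration, not a production detector.
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' GPT-2 is by the text; low values mean very predictable wording."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])  # cross-entropy over the tokens
    return torch.exp(out.loss).item()

sample = "The quick brown fox jumps over the lazy dog."
score = perplexity(sample)
# The threshold of 60 is arbitrary – real tools calibrate it on large labelled corpora.
print(f"perplexity={score:.1f}", "-> looks machine-like" if score < 60 else "-> looks human-like")
```

Even this simple example shows why detectors struggle: a human writing plainly can produce low-perplexity text, and a model prompted to write idiosyncratically can produce high-perplexity text.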
AI content detectors are used in a variety of fields – from education to marketing and recruitment. In all of them, they are best treated as an aid to evaluation, not as definitive proof of whether content has been generated.
However, it is worth remembering that the origin of a text is not, in itself, grounds for Google to lower a site’s ranking. The Google Search Central blog states that it is key for Google to “reward quality content regardless of how it is created […]. Automation has long been used to generate useful content, such as sports scores, weather forecasts and transcripts. AI can open up new levels of expression and creativity and be a key tool to support the creation of great web content.”
Although AI content detectors are ubiquitous, their effectiveness is questionable. The main problems are low accuracy, false positives, and inconsistent results between runs.
Tests conducted by OpenAI showed that their classifier recognized GPT-generated text only 26% of the time. An interesting example of the unreliability of detectors can be seen in an experiment conducted by TechCrunch, which showed that the GPTZero tool correctly identified five out of seven AI-generated texts, while the OpenAI classifier identified only one.
Source: GPTZero (https://gptzero.me/)
In addition, there is a risk of receiving a false positive, that is, identifying text written by a human as AI-generated. For example, the beginning of the second chapter of Miguel de Cervantes’ Don Quixote was marked by the OpenAI detector as most likely written by artificial intelligence.
While errors in the analysis of historical literary texts can be treated as an amusing curiosity, the situation becomes more complicated when we want to use detectors as tools for evaluating texts. The U.S. Constitution was marked by ZeroGPT as 92.15% written by artificial intelligence. And, according to a study published by researchers at Stanford University, 61% of TOEFL essays written by non-native English-speaking students were classified as AI-generated. Unfortunately, there is no data on how often texts in other languages are falsely flagged.
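Before trusting a detector’s verdicts, it is worth measuring its error rates on texts whose origin you already know. Below is a small, self-contained sketch of such a check; `my_detector` is a stand-in for whatever tool or API you actually use, and the sample data is invented for illustration.

```python
# Evaluate a detector's false-positive and false-negative rates on labelled samples.
# "my_detector" is a placeholder for your real tool; the data below is invented.

def my_detector(text: str) -> bool:
    """Placeholder: return True if the text is flagged as AI-generated."""
    return len(text) > 120  # deliberately naive stand-in logic

samples = [
    ("A short note I wrote myself about the weather today.", "human"),
    ("In this essay I will examine the multifaceted implications of automation.", "ai"),
    # ... more labelled examples
]

false_positives = sum(1 for text, label in samples if label == "human" and my_detector(text))
false_negatives = sum(1 for text, label in samples if label == "ai" and not my_detector(text))
humans = sum(1 for _, label in samples if label == "human")
ais = sum(1 for _, label in samples if label == "ai")

print(f"False-positive rate: {false_positives / humans:.0%}")  # humans wrongly flagged
print(f"False-negative rate: {false_negatives / ais:.0%}")     # AI text that slipped through
```

A detector with a high false-positive rate on your own, verifiably human texts should not be used to make decisions about other people’s work.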
Another issue is inconsistent results across runs: a detector such as ZeroGPT or Scribbr may mark the same text fragment as AI-generated on one run and as human-written on the next.
Source: Scribbr (https://www.scribbr.com/ai-detector/)
AI image and video detectors are primarily used to identify deepfakes and other AI-generated content that can be used to spread disinformation.
Current detection tools such as Deepware, Illuminarty, and FakeCatcher do not publish test results on their reliability. In the legal context of detecting AI-generated visual material, there are initiatives to add watermarks to AI images. However, this is an unreliable safeguard – an image can easily be downloaded or cropped without the watermark. Midjourney takes a different approach, leaving it up to users to decide whether they want to watermark their images.
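As an illustration of why such marking is fragile, the sketch below reads a PNG’s embedded metadata with Pillow. Some generation tools write their parameters into these fields, but the information disappears as soon as the image is re-saved, screenshotted, or converted; the field names checked here are examples, not any standard.

```python
# Inspect a PNG for generator metadata – illustrative only; many AI images carry
# no such metadata at all, and it is trivially stripped by re-saving the file.
# Assumes: pip install Pillow
from PIL import Image

def generator_hints(path: str) -> dict:
    img = Image.open(path)
    info = dict(img.info)  # PNG text chunks end up here, if any exist
    # Keys like "parameters" or "Software" are examples some tools use – not a standard.
    return {k: v for k, v in info.items() if k.lower() in {"parameters", "software", "comment"}}

hints = generator_hints("downloaded_image.png")
print(hints or "No generator metadata found – which proves nothing either way.")
```

The absence of metadata or a watermark therefore tells you very little about how an image was actually made.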
Entrepreneurs should be aware that AI content detectors are not a substitute for human quality assessment and are not always reliable. Keeping them useful in practice can pose considerable difficulties, as can trying to avoid having your content classified as AI-generated – especially when AI is simply a tool in the hands of a professional, that is, when it is not “content generated by AI” but rather “content created in collaboration with AI.”
It is relatively simple to edit generated material so that the way it was created becomes very difficult to detect. A person who uses generative AI and knows what effect they want to achieve can simply tweak the results manually.
The more basic question is why we would want to avoid detection at all if the content was generated by AI.
It also raises the question of whether we want to promote responsible use of AI through bans and detectors (ZeroGPT and GPTZero!), or through an appreciation of transparency, trust-building and the honest use of advanced technologies.
Source: ZeroGPT (https://www.zerogpt.com/)
The answer to the question of whether AI content detectors are worth using is far from clear-cut. They are still in development, and their future is difficult to predict. One thing is certain – they will evolve along with AI technology. Advances in AI, including the increasing ability of language models to mimic human writing style, mean that detecting AI content could become even more complicated. For businesses, this is a signal to follow these developments, and to rely not solely on tools but on their own assessment of content and its suitability for the purpose for which it was created – and to use rapidly developing artificial intelligence wisely.
Author: Robert Whitney
JavaScript expert and instructor who coaches IT departments. His main goal is to up-level team productivity by teaching others how to effectively cooperate while coding.