The flood of news about ChatGPT continues. The stakes are high: many companies, educational institutions and professionals are worried about the developments in generative AI. Could all of this be solved simply by using software that makes AI-written texts recognizable? Our columnist Markus Sekulla took a closer look.
In a time of so much fake news, media literacy is a great asset. And right now ChatGPT comes our way, an easily accessible tool that lets us produce texts we did not write ourselves.
"ChatGPT passes exams from law and business schools", a CNN headline said last week. Whereas the AI would probably have failed the Bavarian Abitur. If we no longer know whether the tests or homework are written by humans or machines, we are in trouble. That's why there are already ideas and even softwares that either recognize or watermark AI-generated texts.
OpenAI, the company behind ChatGPT, has itself launched a classifier that is trained to distinguish between AI-written and human-written texts. However, its recognition is not perfect, as OpenAI itself notes in the announcement "New AI classifier for indicating AI-written text".
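How might such a classifier work in principle? As a purely illustrative sketch, and emphatically not OpenAI's actual method, here is one weak statistical signal detectors sometimes use: human writing tends to vary sentence length more than machine writing does. The function names and the threshold below are invented for this example.

```python
import re
import statistics

def sentence_lengths(text):
    # Naive sentence split on ., ! and ?; counts words per sentence.
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    # Ratio of standard deviation to mean of sentence lengths;
    # low values mean very uniform sentences.
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

def looks_ai_generated(text, threshold=0.3):
    # Weak heuristic: suspiciously uniform sentence lengths.
    # A real classifier combines many such signals, or is itself a model.
    return burstiness(text) < threshold
```

A single signal like this is easy to fool, which is exactly why even OpenAI's trained classifier is unreliable.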
But let's take a step back, since I just framed this as a problem. Why is it important at all to be able to recognize whether a human has written a text or not? In other words, why does the knowledge still have to be in our heads? Isn't it enough to have the competence to Google in the right places, or to consult an AI?
We still live in a knowledge society. Perhaps it was never as attractive as it is today to cut a good figure on "Who Wants to Be a Millionaire?". In the past, before the internet and Wikipedia, as older readers will remember, knowledge was acquired with ample effort: library visits, educational television, a subscription to a daily newspaper, and perhaps even a proudly owned encyclopedia, i.e. Wikipedia in tree form. It all sounds like dinosaurs, or at least like mum and dad. For their grandchildren today, every piece of information is only seconds away. And soon every text, too.
Why do we still find quiz contestants so fascinating, or so embarrassing? Because education and knowledge are the key to a healthy society and a good life. Because intensive research provides us with new insights. Because the difference between fake news and news matters for our decisions and our participation in society. Because only with knowledge of our own can we recognize connections and let creativity emerge. Human creativity, at least.
But! Maybe artificial creativity will eventually become much better than ours, and maybe it will recognize connections that we would never see, no matter how much we know. Digital health and drug development are worth watching here, for example.
If you ask the machine itself, you get buried in a wave of understatement:
Markus: How to tell if a text was written by a generative ai?
ChatGPT: There are several ways to tell if a text was written by a generative AI model:
Generative AI remains exciting, especially once the hype has slowed down a bit and we see how things fall into place.
Last but not least: was this text created by a human, or was it created with ChatGPT? Opinions, with explanations, in the comments, please.
Text: Markus Sekulla