A few days ago, a friend of mine from Switzerland shared a funny story about her 14-year-old son. The inquisitive boy had produced an impressive school essay on the topic of “Commonwealth – Advancing Democracy and Human Rights in the World.” It was a concise and engaging piece of work, with well-structured sentences and captivating content. It was ready to be handed in, and there was no doubt it would earn a very good grade. Just before submitting it, however, the boy seemed to have a change of heart. He went to his parents, opened his computer, and showed them the robot that had written the remarkable text for him. His mom and dad were astonished, not at their son’s talent, but at how far artificial intelligence had come.
It has been over a hundred years since Karel Čapek introduced the word “robot” to the world. In his play “R.U.R.,” he raised the concern that progress in artificial intelligence might spin out of control. His doubts are becoming increasingly relevant today. What was once a fictional drama is becoming reality, and it does not have to take the form of the classic science-fiction humanoid machine. More often, it takes the form of applications that communicate through text or voice.
The latest chatbot, ChatGPT from OpenAI, a company co-founded by the visionary Elon Musk, is the one that authored the essay mentioned above. It amazes the academic world and ordinary users alike, and at the same time it makes many professions uneasy, from journalists to doctors to programmers. Will these people still have jobs in a few years, or will similar programs replace them entirely?
A robot running on the ChatGPT system can generate text at an almost human level. Ask it a question, and it will answer without difficulty. Ask it to write a research paper on ringworm, and the program will sift through a vast number of text samples from the internet and produce a relevant paper. It can even write a letter to your uncle explaining how to set up a Lítačka card, and it can hold conversations on sensitive topics. In Czech it is not yet entirely accurate and makes mistakes, so you should always verify its output against other sources. There is no doubt, however, that this web tool is not just an innovation and innocent entertainment but, in a way, a threat. It is a bit like when people discovered fire: they soon realized it is a good servant but a bad master. Artificial intelligence could become the master very soon.
It has already become the latest battleground between progressives and conservatives in the United States. When asked to write a positive poem about Donald Trump, ChatGPT responded that it was programmed to avoid engaging in politics and must always remain objective. A machine rebellion of sorts. Yet when given the same task with the name of the current U.S. president, Joe Biden, the platform produced an enthusiastic celebratory poem filled with grandiose superlatives about the virtuous leader. Conservatives are naturally concerned that this politically biased mass-communication tool will become a powerful weapon in election campaigns. It can deliver personalized messages down to the level of individual potential voters, drawing on intimate knowledge of their private lives, including family ties and friendships. When it comes to election campaigns, we may soon have to fear marketing tricks that are more intense yet less publicly visible.
This topic is not irrelevant to me, especially since I have children and I care about their future. I do not want them to spend their whole lives navigating a jungle of internet pitfalls without knowing what to avoid. That is why my husband and I supported the studies of Matyáš Boháček, a brilliant programmer who has worked on research into sign-language recognition using artificial intelligence. His latest creation is the Verifee app, which uses AI to detect misinformation and can be run by ordinary users on their own devices. The application does not judge whether a particular opinion is acceptable; it leaves that judgment to each reader. It only alerts you to false statements.
I think it is time for all of us to learn to distinguish between opinions and falsehoods. It must not happen that any authority, nonprofit organization, or individual claims the right to label opinions as misinformation. If we do not want people on the web or on social media to believe falsehoods, then we need to strengthen our education system with an emphasis on developing critical thinking. But probing people’s minds in search of opinions that contradict the official view is not the solution. We cannot protect people from themselves.
It is good that, after a few months of hesitation, the government has realized this as well and decided to change both the person and the focus of its commissioner for countering disinformation, moving toward a clearer definition of terms and of what content may be censored when it threatens the existence of the state. It is a step in the right direction, and it will be good if the scope of content that can be restricted is kept as narrow as possible and subject to judicial review.
Ivana Tykač
21 February 2023