illustration: bing.com/create

In the study conducted by Montana researchers, ChatGPT ranked in the top 1% for originality. The Torrance Test of Creative Thinking (TTCT) has been used for decades to assess human creativity. It evaluates creativity based on several factors:
- fluency,
- flexibility,
- originality,
- elaboration.
According to Newseria, the researchers submitted eight ChatGPT-generated responses and a control group of 24 University of Montana students for evaluation. The results were compared with the scores of 2,700 students nationwide who took the TTCT in 2016.
- We decided this would be an interesting subject to study because ChatGPT and GPT-4 perform very well on other tests, typically displaying traits like memory and extensive knowledge. Along with other researchers, we concluded that GPT-4 can produce creative responses to tasks, says Erik Guzik from the University of Montana. - Interestingly, OpenAI claims GPT-4 is more creative than GPT-3. This is fascinating in itself and motivated us to explore GPT-4's creativity by measuring the fluency, flexibility, and originality of its responses in the Torrance Test of Creative Thinking.
Creativity at an Exceptional Level
The evaluation was conducted by Scholastic Testing Service (STS), which was unaware of AI's participation in the study. The STS analysis revealed that ChatGPT achieved exceptional results, placing in the top percentile for fluency and originality, and slightly lower in flexibility, at the 97th percentile.
- Some people are surprised that artificial intelligence demonstrates creative thinking skills, but if we go back to its beginnings, we find its founders openly stated their goal was to simulate creative thinking abilities. They aimed to create a simulation of intelligence, and creativity is a part of that, says Erik Guzik in an interview with Newseria. - The creators of AI clearly defined their goal and used the word “creativity” in the initial documents outlining potential achievements of artificial intelligence.
The study's results could have far-reaching implications. They suggest that AI could be used to generate new ideas and solutions in fields such as medicine, business, and science.
AI algorithms are already being used to develop new drugs. The biotechnology startup Insilico Medicine was the first to advance a drug designed by generative AI to Phase II clinical trials involving patients. This drug offers hope to patients suffering from idiopathic pulmonary fibrosis, which leads to respiratory failure.
The results from Montana researchers show AI can be a powerful tool to support human creativity.
Artificial Intelligence and Marketing
Marketers using generative AI also see its potential but have concerns about its safety and accuracy. According to the Generative AI Snapshot Series study conducted by Salesforce and YouGov, the industry anticipates significant changes soon.
Salesforce surveyed over 1,000 marketing professionals. The results, cited by Newseria Biznes, showed:
- 51% already use generative AI,
- another 22% plan to adopt it soon.
This means nearly three-quarters of marketers either already use the technology or intend to adopt it. Marketers primarily use generative AI for:
- content creation (76%)
- writing copy (76%).
Many predict significant changes in their work due to this technology. As many as 53% say generative AI is a "game changer," seeing potential in personalizing messages, creating marketing campaigns, and optimizing SEO strategies.
Although the technology seems promising, marketers have concerns. 67% believe their company data is not adequately prepared for AI. Similarly, 63% think trusted customer data is essential. Despite the enthusiasm, many believe effective AI use requires human oversight.
Algorithmic Exclusion
Dr. Kuba Piwowar, a sociologist and cultural expert from SWPS University, highlights another critical aspect related to AI: algorithmic exclusion. This phenomenon, caused by improper data selection for analysis, leads to better outcomes for some groups while disadvantaging others.
This occurs because society and culture are challenging to capture as digital data. Typically, marginalized groups - such as women, non-heteronormative individuals, people of color, or lower-income individuals - fare worse.
Classification systems organize reality. If someone doesn't fit into an accepted category, the system doesn't see them, making it impossible to create robust policies for these groups. We can't determine whether an issue is small or widespread or how often it occurs. We overlook groups of people who want to be recognized.
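The mechanism described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the category names and data are invented, not taken from Dr. Piwowar's research): a system with a rigid set of accepted categories silently drops every record that falls outside them, so the resulting statistics make those people invisible.

```python
from collections import Counter

# Invented example categories: the scheme only "sees" responses
# that match its predefined labels.
ACCEPTED_CATEGORIES = {"male", "female"}

responses = ["female", "male", "non-binary", "female", "prefer not to say"]

# Records outside the accepted categories are silently discarded.
recognized = [r for r in responses if r in ACCEPTED_CATEGORIES]
counts = Counter(recognized)

print(counts)                            # Counter({'female': 2, 'male': 1})
print(len(responses) - len(recognized))  # 2 respondents are invisible to the system
```

The aggregate looks complete, yet two of five respondents are simply absent from it, which is exactly why downstream policy or model decisions built on such data disadvantage the unrecorded groups.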
These models reflect societal phenomena. For example, if you search for “physicist,” you’ll likely see images of men in lab coats rather than Marie Curie or other notable women researchers. This happens because men dominate this professional group in society.
Dr. Piwowar believes education is crucial. Analytical skills should be taught in primary and high schools to familiarize students with these concepts. At the university level, especially in technical fields, ethics courses should be introduced to explain the dilemmas surrounding algorithm operations. In the United States, there are even degree programs on ethical AI and responsible AI use.