
A.I. vs Human Brain: A Creativity Warfare

Written By Mary Abou Issa


Creativity is generally associated with problem-solving skills and with generating innovative, “out-of-the-box” responses to all kinds of questions and issues. Creative individuals typically show originality, imagination and expressiveness. For years, this trait was attributed solely to humans, known for their consciousness and their ability to come up with unique, original solutions to any given problem. Nonetheless, the arrival and rapid development of artificial intelligence (AI) suggests that creativity is no longer the distinguishing human trait many previously believed it to be, since these emerging technologies have demonstrated an ability to produce high-quality artwork.

This rising use of AI is seen as a threat by many, but are these generative AI language models actually outperforming their human counterparts in creative divergent thinking?


For a long time, we believed we could keep control over all of these technological advances, but the rapid progression of AI is showing otherwise, especially in the artistic field.


Research number 1 [1]

A recent study by Mika Koivisto and Simone Grassini compared AI and humans on a creative divergent thinking task. Three artificial intelligence chatbots (ChatGPT3, ChatGPT4 and Copy.Ai) and human participants were asked to come up with alternative uses for four everyday objects (a box, a rope, a pencil and a candle), and their answers were analysed with the interactions treated as a fixed effect and fluency in English as a covariate. The study showed that AI performed better than most humans overall, yet could not beat the best humans. Fluency played a major role in decreasing the mean (average) scores and in elevating the max scores, which were calculated from the semantic distance of each individual response as well as from the judges’ subjective ratings, as shown in the figure below.



Humans’ and AI’s mean scores (average of all responses within each trial) and max scores (the highest scoring response within each trial) as revealed by semantic distance analysis (A, B) and human subjective ratings (C, D).


Although AI chatbots performed better than humans on average, they did not consistently outperform the best human performers. There was only one instance in which an AI chatbot achieved the highest semantic distance score (Copy.Ai in response to the pencil) and two instances in which AI chatbots (ChatGPT3 and ChatGPT4 in response to the box) achieved the highest subjective scores; in all other cases, the highest scores were achieved by humans. At the same time, humans also consistently produced the lowest scores in the tasks. While AI chatbots typically responded with relatively high levels of creativity and some variability, human performance showed greater variation, as measured by both semantic distance and subjective ratings.
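
To give a concrete sense of what a “semantic distance” score means here, the sketch below embeds the prompt object and a proposed use and takes one minus their cosine similarity, so that more unusual uses land farther from the object. This is only an illustration assuming an off-the-shelf sentence-embedding model (all-MiniLM-L6-v2); it is not the scoring pipeline the authors actually used.

```python
# Illustrative only: semantic distance as 1 - cosine similarity in an
# embedding space. The model choice is an assumption, not the study's tool.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose embedder

def semantic_distance(obj: str, use: str) -> float:
    """Return 1 - cosine similarity between an object and a proposed use."""
    obj_vec, use_vec = model.encode([obj, use])
    cos_sim = float(np.dot(obj_vec, use_vec) /
                    (np.linalg.norm(obj_vec) * np.linalg.norm(use_vec)))
    return 1.0 - cos_sim

# A mundane use of "rope" should sit closer to the object than an unusual one.
for use in ["tie things together", "string for a giant's violin"]:
    print(f"{use}: {semantic_distance('rope', use):.3f}")
```

In the study, each participant’s responses were then summarised into a mean score and a max score per object, which is what the semantic distance panels (A, B) in the figure report.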


Research number 2 [2]

Another study showed that AI chatbots were judged less competent than human experts when the identity of each author was revealed. Nonetheless, when the answers were presented anonymously, the comparison became fairer and showed that AI could generate valuable responses to societal and personal issues.


Perceived author competence across contexts by author transparency in Study 1 (N = 1003). Author competence was calculated as the mean value from three response items (see the “Methods” section). Dots indicate mean values, error bars represent 95% confidence intervals, and grey/white areas display kernel densities.



Research number 3 [3]

The third and final study, led by Kent F. Hubert, Kim N. Awa and Darya L. Zabelina, set out to evaluate human creativity against that of artificial intelligence. GPT-4 and human participants provided responses to the Divergent Associations, Consequences, and Alternative Uses tasks. The researchers found that AI was significantly more creative than humans across all divergent thinking metrics; in particular, AI was more inventive and more detailed even when response fluency was taken into account. According to this research, AI language models currently show more creative potential than human responders.
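
Of the three tasks, the Divergent Associations Task is the most directly computational: the participant (or the model) lists nouns that are as unrelated to each other as possible, and the score is, roughly, the average pairwise semantic distance between the listed words. A rough sketch of that scoring idea, reusing the embedding-based distance from the earlier example (again an illustrative stand-in, not the exact pipeline used in the study):

```python
# Illustrative DAT-style scoring: average pairwise embedding distance
# between the listed nouns (the published task uses its own word
# embeddings and scaling; this sketch only conveys the idea).
from itertools import combinations
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedder, as before

def dat_style_score(words: list[str]) -> float:
    """Average pairwise (1 - cosine similarity) across all word pairs."""
    vecs = model.encode(words)
    dists = []
    for i, j in combinations(range(len(words)), 2):
        cos_sim = float(np.dot(vecs[i], vecs[j]) /
                        (np.linalg.norm(vecs[i]) * np.linalg.norm(vecs[j])))
        dists.append(1.0 - cos_sim)
    return float(np.mean(dists))

related = ["cat", "dog", "mouse", "hamster"]
unrelated = ["cat", "algebra", "volcano", "umbrella"]
print(dat_style_score(related))    # lower: the words are semantically close
print(dat_style_score(unrelated))  # higher: the words are far apart
```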


Discussion

All of the studies above converge on one critical point: artificial intelligence is now capable of generating human-like responses. The various tests and tasks allowed researchers to confirm that computed answers are no longer purely convergent; rather, AI demonstrated genuine divergent thinking in comparison with human participants. While humans used a wider, less repetitive range of words in the answers they produced, they lacked originality and creativity in contrast to machine intelligence, which, despite its repetitiveness, generated far more intriguing responses that the unbiased judges found considerably more appealing. Yet all three studies agree that the best humans were still able to compete with computational intelligence and even outperform it in many tasks. What we still don’t know is whether the near future will surprise us with AI outperforming its human counterparts across all divergent thinking tasks, and not only on the AUT (Alternate Uses Test) that was used to measure it.



Resources

[1] Koivisto, M., Grassini, S. Best humans still outperform artificial intelligence in a creative divergent thinking task. Sci Rep 13, 13601 (2023). https://doi.org/10.1038/s41598-023-40858-3


[2] Böhm, R., Jörling, M., Reiter, L. et al. People devalue generative AI’s competence but not its advice in addressing societal and personal challenges. Commun Psychol 1, 32 (2023). https://doi.org/10.1038/s44271-023-00032-x


[3] Hubert, K. F., Awa, K. N. & Zabelina, D. L. The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks. Sci Rep 14, 3440 (2024). https://doi.org/10.1038/s41598-024-53303-w


[4] Kaplan, Z. (2023, June 27). What is creative thinking? Definition and examples. Forage. https://www.theforage.com/blog/skills/creative-thinking


[5] Hagendorff, T., Fabi, S. & Kosinski, M. Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT. Nat Comput Sci 3, 833–838 (2023).  https://doi.org/10.1038/s43588-023-00527-x


