{"id":606563,"date":"2025-07-11T13:40:50","date_gmt":"2025-07-11T18:40:50","guid":{"rendered":"https:\/\/towardsdatascience.com\/?p=606563"},"modified":"2025-07-11T13:41:04","modified_gmt":"2025-07-11T18:41:04","slug":"are-you-being-unfair-to-llms","status":"publish","type":"post","link":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/","title":{"rendered":"Are You Being Unfair to\u00a0LLMs?"},"content":{"rendered":"\n<p class=\"wp-block-paragraph\">Amidst the current hype surrounding AI, some ill-informed ideas about the nature of LLM intelligence are floating around, and I\u2019d like to address some of these. I will provide sources\u2014most of them preprints\u2014and welcome your thoughts on the matter.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Why do I think this topic matters? First, I feel we are creating a new intelligence that in many ways competes with us. Therefore, we should aim to judge it fairly. Second, the topic of AI is deeply introspective. It raises questions about our thinking processes, our uniqueness, and our feelings of superiority over other beings.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Milli\u00e8re and Buckner write [1]:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"wp-block-paragraph\">In particular, we need to understand what LLMs represent about the sentences they produce\u2014and the world those sentences are about. Such an understanding cannot be reached through armchair speculation alone; it calls for careful empirical investigation.<\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">LLMs are more than prediction machines<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Deep neural networks can form complex structures along their stacked linear and nonlinear processing paths. Individual neurons can take on multiple functions in superposition [2]. 
Further, LLMs build internal world models and mind maps of the context they analyze [3]. Accordingly, they are not just prediction machines for the next word. Their internal activations look ahead to the end of a statement\u2014they have a rudimentary plan in mind [4].<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">However, all of these capabilities depend on the size and nature of a model, so they may vary, especially in specific contexts. These general capabilities are an active field of research and are probably more similar to the human thought process than to a spellchecker\u2019s algorithm (if you need to pick one of the two).<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">LLMs show signs of creativity<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">When faced with new tasks, LLMs do more than just regurgitate memorized content. Rather, they can produce their own answers [5]. Wang et al. analyzed the relation of a model\u2019s output to the <a href=\"https:\/\/pile.eleuther.ai\/\" target=\"_blank\" rel=\"noreferrer noopener\">Pile dataset<\/a> and found that larger models improve both at recalling facts and at creating more novel content.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Yet Salvatore Raieli recently <a href=\"https:\/\/towardsdatascience.com\/can-machines-dream-on-the-creativity-of-large-language-models-d1d20cf51939\/\" target=\"_blank\" rel=\"noreferrer noopener\">reported on TDS<\/a> that LLMs are not creative. The quoted studies largely focused on GPT-3-era models. In contrast, Guzik, Byrge, and Gilde found that GPT-4 is in the top percentile of human creativity [6]. Hubert et al. agree with this conclusion [7]. This applies to originality, fluency, and flexibility. Generating new ideas that are unlike anything seen in the model\u2019s training data may be another matter; this is where exceptional humans may still be at an advantage.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Either way, there is too much debate to dismiss these indications entirely. 
To learn more about the general topic, you can look up <a href=\"https:\/\/en.wikipedia.org\/wiki\/Computational_creativity\" target=\"_blank\" rel=\"noreferrer noopener\">computational creativity<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">LLMs have a concept of\u00a0emotion<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">LLMs can analyze emotional context and write in different styles and emotional tones. This suggests that they possess internal associations and activations representing emotion. Indeed, there is such correlational evidence: One can probe the activations of their neural networks for certain emotions and even artificially induce them with <em>steering vectors<\/em> [8]. (One way to identify these steering vectors is to determine the contrastive activations when the model is processing statements with an opposite attribute, e.g., sadness vs. happiness.)<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Accordingly, the concept of emotional attributes and their possible relation to internal world models seems to fall within the scope of what LLM architectures can represent. These emotional representations relate to the model\u2019s subsequent reasoning, i.e., to the world as the LLM understands it.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Furthermore, emotional representations are localized to certain areas of the model, and many intuitive assumptions that apply to humans can also be observed in LLMs\u2014even psychological and cognitive frameworks may apply [9].<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Note that the above statements do not imply <em>phenomenology<\/em>, that is, that LLMs have a subjective experience.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Yes, LLMs don\u2019t learn (post-training)<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">LLMs are neural networks with <em>static weights<\/em>. 
When we are chatting with an LLM chatbot, we are interacting with a model that does not change and only learns <em>in context<\/em> within the ongoing chat. It can still pull additional data from the web or from a database, process our inputs, etc. But its <em>nature<\/em>, built-in knowledge, skills, and biases remain unchanged.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Beyond mere long-term memory systems that provide additional in-context data to static LLMs, future approaches could be self-modifying by adapting the core LLM\u2019s weights. This can be achieved by continually pretraining with new data or by continually fine-tuning and overlaying additional weights [10].<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Many alternative neural network architectures and adaptation approaches are being explored to efficiently implement continual-learning systems [11]. These systems exist; they are just not reliable and economical yet.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Future development<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Let&#8217;s not forget that the AI systems we are currently seeing are very new. \u201cIt\u2019s not good at X\u201d is a statement that may quickly become invalid. Furthermore, we are usually judging the low-priced consumer products, not the top models that are too expensive to run, unpopular, or still kept behind locked doors. Much of the last year and a half of LLM development has focused on creating cheaper, easier-to-scale models for consumers, not just smarter, higher-priced ones.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">While computers may lack originality in some areas, they excel at quickly trying different options. And now, LLMs can judge themselves. When we lack an intuitive answer while being creative, aren\u2019t we doing the same thing\u2014cycling through thoughts and picking the best? 
The inherent creativity (or whatever you want to call it) of LLMs, coupled with the ability to rapidly iterate through ideas, is already benefiting scientific research. See my previous article on <a href=\"https:\/\/towardsdatascience.com\/googles-alphaevolve-getting-started-with-evolutionary-coding-agents\/\" target=\"_blank\" rel=\"noreferrer noopener\">AlphaEvolve<\/a> for an example.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Weaknesses are still pervasive: hallucinations, biases, jailbreaks that confuse LLMs and circumvent their safeguards, and broader safety and reliability issues. Nevertheless, these systems are so powerful that myriad applications and improvements are possible. LLMs also do not have to be used in isolation. When combined with additional, traditional approaches, some shortcomings may be mitigated or become irrelevant. For instance, LLMs can generate realistic training data for traditional AI systems that are subsequently used in industrial automation. Even if development were to slow down, I believe that there are decades of benefits to be explored, from drug research to education.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">LLMs are just algorithms. Or are&nbsp;they?<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">Many researchers are now finding similarities between human thinking processes and LLM information processing (e.g., [12]). It has long been accepted that CNNs can be likened to the layers in the human visual cortex [13], but now we are talking about the neocortex [14, 15]! Don\u2019t get me wrong; there are also clear differences. 
Nevertheless, the <a href=\"https:\/\/arstechnica.com\/ai\/2025\/07\/how-a-big-shift-in-training-llms-led-to-a-capability-explosion\/\">capability explosion<\/a> of LLMs cannot be denied, and our claims of uniqueness don\u2019t seem to hold up well.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">The question now is where this will lead, and where the limits are\u2014at what point must we discuss consciousness? Reputable thought leaders like Geoffrey Hinton and Douglas Hofstadter have begun to appreciate the possibility of consciousness in AI in light of recent LLM breakthroughs [16, 17]. Others, like Yann LeCun, are doubtful [18].<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Professor James F. O\u2019Brien <a href=\"https:\/\/towardsdatascience.com\/an-illusion-of-life-5a11d2f2c737\/\" target=\"_blank\" rel=\"noreferrer noopener\">shared his thoughts<\/a> on the topic of LLM sentience last year on TDS, and asked:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"wp-block-paragraph\">Will we have a way to test for sentience? If so, how will it work and what should we do if the result comes out positive?<\/p>\n<\/blockquote>\n\n\n\n<h2 class=\"wp-block-heading\">Moving on<\/h2>\n\n\n\n<p class=\"wp-block-paragraph\">We should be careful when ascribing human traits to machines\u2014anthropomorphism happens all too easily. However, it is also easy to dismiss other beings. We have seen this happen too often with animals.<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">Therefore, regardless of whether current LLMs turn out to be creative, possess world models, or are sentient, we might want to refrain from belittling them. 
The next generation of AI could be all three [19].<\/p>\n\n\n\n<p class=\"wp-block-paragraph\">What do you think?<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">References<\/h2>\n\n\n\n<ol class=\"wp-block-list\">\n<li class=\"wp-block-list-item\">Milli\u00e8re, Rapha\u00ebl, and Cameron Buckner, <a href=\"https:\/\/arxiv.org\/abs\/2401.03910\" target=\"_blank\" rel=\"noreferrer noopener\">A Philosophical Introduction to Language Models &#8212; Part I: Continuity With Classic Debates<\/a> (2024), arXiv.2401.03910<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Elhage, Nelson, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, et al., <a href=\"https:\/\/arxiv.org\/abs\/2209.10652v1\" target=\"_blank\" rel=\"noreferrer noopener\">Toy Models of Superposition<\/a> (2022), arXiv:2209.10652v1<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Kenneth Li, <a href=\"https:\/\/thegradient.pub\/othello\/\" target=\"_blank\" rel=\"noreferrer noopener\">Do Large Language Models learn world models or just surface statistics?<\/a> (2023), The Gradient<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Lindsey, et al., <a href=\"https:\/\/transformer-circuits.pub\/2025\/attribution-graphs\/biology.html\" target=\"_blank\" rel=\"noreferrer noopener\">On the Biology of a Large Language Model<\/a> (2025), Transformer Circuits<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Wang, Xinyi, Antonis Antoniades, Yanai Elazar, Alfonso Amayuelas, Alon Albalak, Kexun Zhang, and William Yang Wang, <a href=\"http:\/\/arxiv.org\/abs\/2407.14985\" target=\"_blank\" rel=\"noreferrer noopener\">Generalization v.s. 
Memorization: Tracing Language Models\u2019 Capabilities Back to Pretraining Data<\/a> (2025), arXiv:2407.14985<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Guzik, Erik, Christian Byrge, and Christian Gilde, <a href=\"https:\/\/www.researchgate.net\/publication\/373313932_The_Originality_of_Machines_AI_Takes_the_Torrance_Test\" target=\"_blank\" rel=\"noreferrer noopener\">The Originality of Machines: AI Takes the Torrance Test<\/a> (2023), Journal of Creativity<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Hubert, K.F., Awa, K.N., and Zabelina, D.L., <a href=\"https:\/\/doi.org\/10.1038\/s41598-024-53303-w\" target=\"_blank\" rel=\"noreferrer noopener\">The current state of artificial intelligence generative language models is more creative than humans on divergent thinking tasks<\/a> (2024), Sci Rep 14, 3440<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Turner, Alexander Matt, Lisa Thiergart, David Udell, Gavin Leech, Ulisse Mini, and Monte MacDiarmid, <a href=\"https:\/\/arxiv.org\/abs\/2308.10248v3\" target=\"_blank\" rel=\"noreferrer noopener\">Activation Addition: Steering Language Models Without Optimization<\/a> (2023), arXiv:2308.10248v3<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Tak, Ala N., Amin Banayeeanzade, Anahita Bolourani, Mina Kian, Robin Jia, and Jonathan Gratch, <a href=\"http:\/\/arxiv.org\/abs\/2502.05489\" target=\"_blank\" rel=\"noreferrer noopener\">Mechanistic Interpretability of Emotion Inference in Large Language Models<\/a> (2025), arXiv:2502.05489<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Albert, Paul, Frederic Z. 
Zhang, Hemanth Saratchandran, Cristian Rodriguez-Opazo, Anton van den Hengel, and Ehsan Abbasnejad, <a href=\"http:\/\/arxiv.org\/abs\/2502.00987\" target=\"_blank\" rel=\"noreferrer noopener\">RandLoRA: Full-Rank Parameter-Efficient Fine-Tuning of Large Models <\/a>(2025), arXiv:2502.00987<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Shi, Haizhou, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Zifeng Wang, Sayna Ebrahimi, and Hao Wang, <a href=\"https:\/\/arxiv.org\/abs\/2404.16789\">Continual Learning of Large Language Models: A Comprehensive Survey<\/a> (2024), arXiv:2404.16789<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Goldstein, A., Wang, H., Niekerken, L. et al., <a href=\"https:\/\/doi.org\/10.1038\/s41562-025-02105-9\" target=\"_blank\" rel=\"noreferrer noopener\">A unified acoustic-to-speech-to-language embedding space captures the neural basis of natural language processing in everyday conversations<\/a> (2025)<em>, <\/em>Nat Hum Behav 9, 1041\u20131055<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Yamins, Daniel L. K., Ha Hong, Charles F. Cadieu, Ethan A. Solomon, Darren Seibert, and James J. 
DiCarlo, <a href=\"https:\/\/www.pnas.org\/doi\/10.1073\/pnas.1403112111\" target=\"_blank\" rel=\"noreferrer noopener\">Performance-Optimized Hierarchical Models Predict Neural Responses in Higher Visual Cortex <\/a><em>(2014), Proceedings of the National Academy of Sciences of the United States of America<\/em> 111(23): 8619\u201324<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Granier, Arno, and Walter Senn, <a href=\"https:\/\/arxiv.org\/abs\/2504.06354\" target=\"_blank\" rel=\"noreferrer noopener\">Multihead Self-Attention in Cortico-Thalamic Circuits<\/a> (2025), arXiv:2504.06354<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Han, Danny Dongyeop, Yunju Cho, Jiook Cha, and Jay-Yoon Lee, <a href=\"https:\/\/arxiv.org\/abs\/2502.12771\" target=\"_blank\" rel=\"noreferrer noopener\">Mind the Gap: Aligning the Brain with Language Models Requires a Nonlinear and Multimodal Approach <\/a>(2025), arXiv:2502.12771<\/li>\n\n\n\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/www.cbsnews.com\/news\/geoffrey-hinton-ai-dangers-60-minutes-transcript\/\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.cbsnews.com\/news\/geoffrey-hinton-ai-dangers-60-minutes-transcript\/<\/a><\/li>\n\n\n\n<li class=\"wp-block-list-item\"><a href=\"https:\/\/www.lesswrong.com\/posts\/kAmgdEjq2eYQkB5PP\/douglas-hofstadter-changes-his-mind-on-deep-learning-and-ai\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.lesswrong.com\/posts\/kAmgdEjq2eYQkB5PP\/douglas-hofstadter-changes-his-mind-on-deep-learning-and-ai<\/a><\/li>\n\n\n\n<li class=\"wp-block-list-item\">Yann LeCun, <a href=\"https:\/\/openreview.net\/pdf?id=BZ5a1r-kVsf\" target=\"_blank\" rel=\"noreferrer noopener\">A Path Towards Autonomous Machine Intelligence<\/a> (2022), OpenReview<\/li>\n\n\n\n<li class=\"wp-block-list-item\">Butlin, Patrick, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, et al., <a href=\"https:\/\/arxiv.org\/abs\/2308.08708v3\" 
target=\"_blank\" rel=\"noreferrer noopener\">Consciousness in Artificial Intelligence: Insights from the Science of Consciousness<\/a> (2023), arXiv: 2308.08708<\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p>They may deserve better.<\/p>\n","protected":false},"author":18,"featured_media":606564,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"is_member_only":false,"sub_heading":"They may deserve better.","footnotes":""},"categories":[21],"tags":[447,4338,465,446,1978],"sponsor":[],"coauthors":[32646],"class_list":["post-606563","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-large-language-models","tag-artificial-intelligence","tag-human-machine-interaction","tag-llm","tag-machine-learning","tag-philosophy-of-mind"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v25.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Are You Being Unfair to\u00a0LLMs? | Towards Data Science<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Are You Being Unfair to\u00a0LLMs? 
| Towards Data Science\" \/>\n<meta property=\"og:description\" content=\"They may deserve better.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/\" \/>\n<meta property=\"og:site_name\" content=\"Towards Data Science\" \/>\n<meta property=\"article:published_time\" content=\"2025-07-11T18:40:50+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-07-11T18:41:04+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/0_oV1aB_5Q7gVWkdM_.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"720\" \/>\n\t<meta property=\"og:image:height\" content=\"1079\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"author\" content=\"Julian Mendel\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@TDataScience\" \/>\n<meta name=\"twitter:site\" content=\"@TDataScience\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Julian Mendel\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/\"},\"author\":{\"name\":\"TDS Editors\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee\"},\"headline\":\"Are You Being Unfair to\u00a0LLMs?\",\"datePublished\":\"2025-07-11T18:40:50+00:00\",\"dateModified\":\"2025-07-11T18:41:04+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/\"},\"wordCount\":1669,\"publisher\":{\"@id\":\"https:\/\/towardsdatascience.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/0_oV1aB_5Q7gVWkdM_.webp\",\"keywords\":[\"Artificial Intelligence\",\"Human Machine Interaction\",\"Llm\",\"Machine Learning\",\"Philosophy Of Mind\"],\"articleSection\":[\"Large Language Models\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/\",\"url\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/\",\"name\":\"Are You Being Unfair to\u00a0LLMs? 
| Towards Data Science\",\"isPartOf\":{\"@id\":\"https:\/\/towardsdatascience.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/0_oV1aB_5Q7gVWkdM_.webp\",\"datePublished\":\"2025-07-11T18:40:50+00:00\",\"dateModified\":\"2025-07-11T18:41:04+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#primaryimage\",\"url\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/0_oV1aB_5Q7gVWkdM_.webp\",\"contentUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/0_oV1aB_5Q7gVWkdM_.webp\",\"width\":720,\"height\":1079,\"caption\":\"An octopus (photo by Masaaki Komori on Unsplash)\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/towardsdatascience.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Are You Being Unfair to\u00a0LLMs?\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/towardsdatascience.com\/#website\",\"url\":\"https:\/\/towardsdatascience.com\/\",\"name\":\"Towards Data Science\",\"description\":\"Publish AI, ML &amp; data-science insights to a global community of data 
professionals.\",\"publisher\":{\"@id\":\"https:\/\/towardsdatascience.com\/#organization\"},\"alternateName\":\"TDS\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/towardsdatascience.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/towardsdatascience.com\/#organization\",\"name\":\"Towards Data Science\",\"alternateName\":\"TDS\",\"url\":\"https:\/\/towardsdatascience.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg\",\"contentUrl\":\"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg\",\"width\":696,\"height\":696,\"caption\":\"Towards Data Science\"},\"image\":{\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/TDataScience\",\"https:\/\/www.youtube.com\/c\/TowardsDataScience\",\"https:\/\/www.linkedin.com\/company\/towards-data-science\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee\",\"name\":\"TDS Editors\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/towardsdatascience.com\/#\/schema\/person\/image\/23494c9101089ad44ae88ce9d2f56aac\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g\",\"caption\":\"TDS Editors\"},\"description\":\"Building a vibrant data science and machine learning community. 
Share your insights and projects with our global audience: bit.ly\/write-for-tds\",\"url\":\"https:\/\/towardsdatascience.com\/author\/towardsdatascience\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Are You Being Unfair to\u00a0LLMs? | Towards Data Science","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/","og_locale":"en_US","og_type":"article","og_title":"Are You Being Unfair to\u00a0LLMs? | Towards Data Science","og_description":"They may deserve better.","og_url":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/","og_site_name":"Towards Data Science","article_published_time":"2025-07-11T18:40:50+00:00","article_modified_time":"2025-07-11T18:41:04+00:00","og_image":[{"width":720,"height":1079,"url":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/0_oV1aB_5Q7gVWkdM_.webp","type":"image\/webp"}],"author":"Julian Mendel","twitter_card":"summary_large_image","twitter_creator":"@TDataScience","twitter_site":"@TDataScience","twitter_misc":{"Written by":"Julian Mendel","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#article","isPartOf":{"@id":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/"},"author":{"name":"TDS Editors","@id":"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee"},"headline":"Are You Being Unfair to\u00a0LLMs?","datePublished":"2025-07-11T18:40:50+00:00","dateModified":"2025-07-11T18:41:04+00:00","mainEntityOfPage":{"@id":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/"},"wordCount":1669,"publisher":{"@id":"https:\/\/towardsdatascience.com\/#organization"},"image":{"@id":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#primaryimage"},"thumbnailUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/0_oV1aB_5Q7gVWkdM_.webp","keywords":["Artificial Intelligence","Human Machine Interaction","Llm","Machine Learning","Philosophy Of Mind"],"articleSection":["Large Language Models"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/","url":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/","name":"Are You Being Unfair to\u00a0LLMs? 
| Towards Data Science","isPartOf":{"@id":"https:\/\/towardsdatascience.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#primaryimage"},"image":{"@id":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#primaryimage"},"thumbnailUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/0_oV1aB_5Q7gVWkdM_.webp","datePublished":"2025-07-11T18:40:50+00:00","dateModified":"2025-07-11T18:41:04+00:00","breadcrumb":{"@id":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#primaryimage","url":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/0_oV1aB_5Q7gVWkdM_.webp","contentUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/07\/0_oV1aB_5Q7gVWkdM_.webp","width":720,"height":1079,"caption":"An octopus (photo by Masaaki Komori on Unsplash)"},{"@type":"BreadcrumbList","@id":"https:\/\/towardsdatascience.com\/are-you-being-unfair-to-llms\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/towardsdatascience.com\/"},{"@type":"ListItem","position":2,"name":"Are You Being Unfair to\u00a0LLMs?"}]},{"@type":"WebSite","@id":"https:\/\/towardsdatascience.com\/#website","url":"https:\/\/towardsdatascience.com\/","name":"Towards Data Science","description":"Publish AI, ML &amp; data-science insights to a global community of data 
professionals.","publisher":{"@id":"https:\/\/towardsdatascience.com\/#organization"},"alternateName":"TDS","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/towardsdatascience.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/towardsdatascience.com\/#organization","name":"Towards Data Science","alternateName":"TDS","url":"https:\/\/towardsdatascience.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/","url":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg","contentUrl":"https:\/\/towardsdatascience.com\/wp-content\/uploads\/2025\/02\/tds-logo.jpg","width":696,"height":696,"caption":"Towards Data Science"},"image":{"@id":"https:\/\/towardsdatascience.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/TDataScience","https:\/\/www.youtube.com\/c\/TowardsDataScience","https:\/\/www.linkedin.com\/company\/towards-data-science\/"]},{"@type":"Person","@id":"https:\/\/towardsdatascience.com\/#\/schema\/person\/f9925d336b6fe962b03ad8281d90b8ee","name":"TDS Editors","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/towardsdatascience.com\/#\/schema\/person\/image\/23494c9101089ad44ae88ce9d2f56aac","url":"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/?s=96&d=mm&r=g","caption":"TDS Editors"},"description":"Building a vibrant data science and machine learning community. 
Share your insights and projects with our global audience: bit.ly\/write-for-tds","url":"https:\/\/towardsdatascience.com\/author\/towardsdatascience\/"}]}},"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"TDS Contributor Portal","distributor_original_site_url":"https:\/\/contributor.insightmediagroup.io","push-errors":false,"_links":{"self":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/posts\/606563","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/users\/18"}],"replies":[{"embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/comments?post=606563"}],"version-history":[{"count":0,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/posts\/606563\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/media\/606564"}],"wp:attachment":[{"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/media?parent=606563"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/categories?post=606563"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/tags?post=606563"},{"taxonomy":"sponsor","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/sponsor?post=606563"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/towardsdatascience.com\/wp-json\/wp\/v2\/coauthors?post=606563"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}