arXiv:2603.23543v1 Announce Type: cross
Abstract: We explore the capabilities of Large Language Models (LLMs) by comparing the way they gather data with the way humans build knowledge. Here we examine how scientific knowledge is made and compare that process with LLMs. The argument is structured by reference to two figures, one representing scientific knowledge and the other LLMs. In a 2014 study, scientists explain how they chose to ignore a ‘fringe science’ paper in the domain of gravitational wave physics: the decisions were made largely as a result of tacit knowledge built up in social discourse, mostly spoken discourse, within closed groups of experts. It is argued that LLMs cannot or do not currently access such discourse, yet it is typical of the early formation of scientific knowledge. LLMs’ ‘understanding’ builds on written literatures and is therefore insecure in the initial stages of knowledge building. We refer to Colin Fraser’s ‘Dumb Monty Hall problem’, on which ChatGPT failed in 2023 though a year or so later LLMs were succeeding. We argue that this is not a matter of improvement in LLMs’ ability to reason but of a change in the body of human written discourse on which they can draw (or changes being put in by humans ‘by hand’). We then invent a new Monty Hall prompt and compare the responses of a panel of LLMs and a panel of humans: they are starkly different, but we explain that the previous mechanisms will soon allow the LLMs to align themselves with humans once more. Finally, we look at ‘overshadowing’, where a settled body of discourse becomes so dominant that LLMs fail to respond to small variations in prompts which render the old answers nonsensical. The ‘intelligence’, we argue, is in the humans, not the LLMs.
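For readers unfamiliar with the puzzle the abstract builds on: in the classic Monty Hall problem, switching doors wins with probability 2/3 and staying wins with probability 1/3. A minimal Monte Carlo sketch of the standard game (this illustrates only the classic version; it is not Fraser’s ‘Dumb Monty Hall’ variant or the paper’s new prompt, neither of which is specified here):

```python
import random

def monty_hall_trial(switch: bool, rng: random.Random) -> bool:
    """Play one round of the classic Monty Hall game; return True on a win."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the single remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch: bool, trials: int = 100_000, seed: int = 0) -> float:
    """Estimate the win probability over many independent trials."""
    rng = random.Random(seed)
    return sum(monty_hall_trial(switch, rng) for _ in range(trials)) / trials
```

Running `win_rate(True)` gives approximately 2/3 and `win_rate(False)` approximately 1/3, the asymmetry that the well-rehearsed written discourse on the problem encodes.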
Depression subtype classification from social media posts: few-shot prompting vs. fine-tuning of large language models
Background: Social media provides timely proxy signals of mental health, but reliable tweet-level classification of depression subtypes remains challenging due to short, noisy text, overlapping symptomatology,