Research Article Volume 16 Issue 2
Retired LBNL (UCB) Scientist, USA
Correspondence: J N Schad Ph.D, Retired LBNL (UCB) Scientist, USA
Received: January 20, 2026 | Published: March 11, 2026
Citation: Schad JN. Stress: the key to human generative intelligence (HGIs), a biophysical artifice of nature not availed to AGIs. J Neurol Stroke. 2026;16(2):59-63. DOI: 10.15406/jnsk.2026.16.00653
This work concerns a disruptive finding that reveals a fundamental shortcoming in the prospective potential of any would-be artificial generative intelligence (AGI) that might otherwise render human intelligence and ingenuity irrelevant and place humans in a perilous state of servitude. The finding draws upon the main difference between the knowledge-generation schemes of physical AI systems and those of the human biophysical system: while both embed existing knowledge and operate computationally on unknown principles, the former meets the end by means of simplified, iterative, statistical functional models of the brain that rely on the backpropagation solution method, a mechanism totally unrealistic for the brain [Physics Today, December 2024, Volume 77, Number 12, pp. 12-16]; the latter achieves the goal, along with other human characteristics, through the dynamics of vastly complex biochemical phenomena, possibly involving microscopic effects. The human brain has evolved in response to the timely imperatives of survival, adapting to the dynamics of the internal and external environment. This evolutionary journey, marked by transformative breakthroughs, began with early trial-and-error learning and has continued through the development of complex neural patterns in the brain, guided by genetic processes. The process of adaptation (learning), which involves the brain's engagement with complexities that demand resolution, generally interferes with the normal operations of the brain, particularly those of the neural patterns that regulate the endocrine system. As a result, depending on the intensity of that engagement, the human biological system naturally suffers varying degrees of imbalance, specifically affecting bodily homeostasis. The immediate result is a sensation of "unease," commonly referred to as "stress." The role of the stress system is well established across the medical and cognitive sciences.
Evolved for adaptation, resilience, and the maintenance of health, the system is defined as a composite, multidimensional construct. It involves the appraisal and processing of a stimulus (the stressor) and the subsequent output, known as the stress response. Within this framework, a stressor is defined as any environmental change that disturbs homeostasis (Schumann G, et al. The ADAPT model: from epidemiology to the mechanisms of adolescent mental health. Development and Psychopathology. 2014;26:1347-1364). Heavy engagement of the brain in problem resolution, typical of scientific research, involves long periods of 'over-clocking' the stress system, engaging the SAM and HPA axes and their feedback mechanisms. This creates a high allostatic state (McEwen BS. Protective and damaging effects of stress mediators. New England Journal of Medicine. 1998), which the body may interpret as a state of physical danger, triggering the immune system (directly and indirectly) into intensive processes of repair, restoration, and coping. Restoring the normal operations of the brain regardless (preventing health impact by default) is the innate response of the human biologic system. It is this process of "stress relief" that renders the formation of proper neural patterns (harmonious with the others, in the sense of the "attractors" of a Hopfield network), which is "problem resolution." This biophysical process of "stress development" and "stress relief," unprocurable in any physical system, has been behind the "resolutions" of the seemingly insurmountable complexities along the path of evolutionary human adaptation. This complex resolution power, relying on the brain's neural extensiveness and available structures, and insensitive to the kind and nature of the problems engaged, defines human intelligence.
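The "attractor" notion invoked above can be illustrated with a minimal sketch (an illustration only, built on an invented toy pattern; it is not a model of the brain): a Hopfield network stores a pattern via Hebbian weights, and a disturbed version of that pattern is pulled back to the stored state by the update dynamics, loosely analogous to the restoration of a harmonious state described in the text.

```python
import numpy as np

# Minimal Hopfield-network sketch: a stored +/-1 pattern acts as an
# "attractor"; a corrupted copy of it is pulled back to the stored
# state by the sign-update dynamics. The pattern below is invented.
def hebbian_weights(patterns):
    """Hebbian outer-product learning; zero diagonal (no self-coupling)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, state, steps=10):
    """Synchronous sign updates until a fixed point (attractor) is reached."""
    s = state.copy()
    for _ in range(steps):
        nxt = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = hebbian_weights(pattern[None, :])
noisy = pattern.copy()
noisy[0] = -noisy[0]          # flip one bit: a "disturbed" state
print(np.array_equal(recall(W, noisy), pattern))  # True: the attractor restores it
```

The disturbed state relaxes back to the stored pattern in a single update here; with several stored patterns, each becomes a separate basin of attraction.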
The dynamics of the machinery of human intelligence, further explored in this work, make it impossible for any advanced AI machinery to surpass humanity; humans will remain masters of their own creations. Nonetheless, advanced AI systems, the remarkably effective tools that they are, will be highly instrumental in the wondrous progression of our civilization.
Humanity stands on the brink of a cataclysmic transformation, eclipsing all the profound transitions in its history of civilization. It appears that, for better or worse, humans' future could be subject to the whims of an emerging oracle: the oracle of the advanced AI systems. It is a developing power that, when and if perfected, is predicted to overwhelm, surpass, or even destroy its very own creators.1–3 Such forecasts are based on the fact that these systems, deploying large language models (LLMs),4 have begun to revolutionize the pace of the progression of knowledge, which could alter the face of humanity in unpredictable ways, rendering strange, perhaps awkward, episodes unlike any other experienced in human history; they are, nonetheless, the results of another inevitable phase in humans' generation of knowledge by their own ingenuity. The progression of human knowledge, rooted in evolutionary imperatives and in a complex interplay of biological, socio-cultural, and economic factors, is driven by humanity's expanding collective cognitive capacity, supported by genetic adaptations that have enhanced the human brain's synaptic connectivity. The dynamic interplay between evolving brains and a complex world has fueled the relentless pursuit of understanding nature's intricacies. Human knowledge has advanced through transformative breakthroughs, often sparked by the practical application of ideas and the rigorous testing of philosophical concepts. Mathematics emerged as a pivotal tool in this journey, with Newton's and Maxwell's laws of physics marking a seminal moment.
The quintessential relationship between mathematics and physics has become evident as discoveries in each field have contributed to progress in the other.5,6 Casting the belief of some of the great minds, such as Galileo7 and Feynman,8 that the essence of existence is inscribed in the language of mathematics, within the framework of Platonic idealism, allows us to conceive of this language as an additional algorithmic language9 inherent to the brain. It is no surprise that some of the most important revelations of the mysteries of nature are expressed, "through humans," in this language, which can be thought of as the fundamental language of the machinery of the brain.10 This presumption makes sense considering the fact that the brain computes. For a couple of decades now, there has been a scientific consensus that the machinery of the brain operates on computational principles; it is based on the earlier insights of Turing11 and von Neumann,12 as well as of other brilliant thinkers, and on rigorous neuroscience research and brain functional modeling in the cognitive sciences.13–15 The following statement from the Nobel laureate Frank Wilczek16 explains this understanding of the wondrous nature of the brain in a brilliantly loaded context:
Present-day mainstream computers are essentially two dimensional. They are based on chips that must be produced under exacting clean-room conditions, since any fault can be fatal to their operation. If they are damaged, they do not recover. Human brains differ in all those respects. They are 3D; they are produced in messy, loosely controlled conditions; and they can work around faults or injuries…We may aspire to make body-like machines as well as brain-like computers.
Humanity's remarkable achievements are the results of the computations performed by human brains. However, despite recognition of the brain's computational nature, understanding of the mechanisms underlying its operations remained elusive until recent decades. Fortunately, among the ideas that contributed to the concept of the computational brain, the one that was, ironically, inspired by the brain's own neural structure (the artificial neural network,17 a new computational tool) has provided important insights into the likely process of knowledge development by human brains. Artificial neural nets are the backbone of artificial intelligence, from its simpler states to the present-day magnificence of its mathematically enhanced simulator versions: the large language models (LLMs).4 These LLMs are built upon intricate mathematical models inspired by neural networks. In principle, these models solve systems of numerous simultaneous equations with varying parameters, often using iterative methods akin to trial and error during training. While this approach is less prevalent in their applications, it remains a foundational aspect of LLM development. These systems, which grossly mirror brain computation, may be a gateway to further understanding of the human brain. Today's advanced AI systems (all varieties, from special-purpose systems to future AGIs) deploy LLMs that are trained on the totality of the various states of present-day human knowledge: a snapshot of the evolutionary quilt of humans' creations. They bear the promise of further advancement of humanity, of perhaps surpassing human brains in the resolution of complexities, and of revealing further secrets of nature. Having the benefit of embedding the present and future data streams of human consciousness, the potential of advanced AI systems has proven to be far and wide, and they will very likely be drastically instrumental in the progress of civilization.
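The iterative, trial-and-error parameter adjustment described above can be sketched minimally (the toy data, model, and learning rate are invented for illustration): a two-parameter model is repeatedly nudged against the gradient of its error, which is the one-layer version of what backpropagation does through many layers of a deep network.

```python
# Minimal sketch of the iterative "trial-and-error" training loop:
# parameters are nudged downhill on the error surface until the model
# fits the data. Backpropagation in deep networks applies this same
# idea through many layers via the chain rule. All values are toy
# illustrations, not anything from the article.
def train(xs, ys, lr=0.05, epochs=500):
    w, b = 0.0, 0.0                      # start from an arbitrary guess
    for _ in range(epochs):
        # gradients of the mean squared error with respect to w and b
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * gw                     # adjust each parameter against its gradient
        b -= lr * gb
    return w, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]                # generated by y = 2x + 1
w, b = train(xs, ys)
print(round(w, 2), round(b, 2))          # converges near w=2, b=1
```

An LLM differs in scale (billions of parameters, nonlinear layers) rather than in the basic character of this loop.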
Their abilities are best summarized by the very bright former CEO of Google, Eric Schmidt,18 as follows:
In a more recent interview,19 Dr. Schmidt discussed further the potential of future advanced AI systems. As is known, that potential is attributed to advanced mathematical techniques and the use of massive multidimensional embeddings of tokens (which can be letters, words, or phrases, together with their positions in the expressions). These embeddings capture the characteristics of tokens, including their significance in a given input sentence relative to other tokens, based on their occurrences in various contexts across the existing literature. Built on such criteria, it is not farfetched to expect that they could render emergent knowledge; and apparently there is evidence of it, too.20 AI systems have already shown their power in aiding scientific tasks, such as physics-informed solutions, finding governing equations from data through symbolic regression, performing mathematical translations of various natures, and developing sparse nonlinear dynamic models for some phenomena. But any advanced AI system's apparent "emergences" could be due to its ability to configure various templates patterned within the framework of the categories of present human creations; the jury is still out on explaining how such phenomena happen.
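A toy sketch of the embedding idea described above, using invented three-dimensional vectors (real LLM embeddings are learned from data and have thousands of dimensions): each token becomes a vector, and geometric closeness, measured here by cosine similarity, stands in for similarity of the contexts in which the tokens occur.

```python
import math

# Illustrative only: the vectors below are invented, not real LLM
# embeddings. Tokens that occur in similar contexts are assigned
# nearby vectors, so cosine similarity reflects relatedness.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

embedding = {                     # hypothetical 3-d embeddings
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

sim_kq = cosine(embedding["king"], embedding["queen"])
sim_ka = cosine(embedding["king"], embedding["apple"])
print(sim_kq > sim_ka)  # True: related tokens sit closer in the space
```

Attention mechanisms in transformers build on comparisons of this general kind, scoring each token against the others in a sentence.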
As is known in the case of AI systems, the complexity of the intended applications determines the intensiveness of the deployed LLMs (the numbers of layers, hidden and otherwise, nodes, and weight values: the parameters) and the extent of their training, which is now identified by a metric called "compute." Today's AI systems, benefitting from massive compute power, are seemingly closing the gap with the promised AGIs.21 But the requirement for achieving that milestone, which is performing tasks similar to those of humans or even, as claimed, beyond human abilities,1 is to push "compute" further, beyond that of present-day LLMs. The present state of LLMs suffers from fundamental issues that have been pointed out by well-known AI authorities; Facebook's CTO22 summarizes them as follows:
And the Oxford professor Michael Wooldridge,23 in a Royal Institution talk, mentions some of the limitations as follows:
Given the above sentiments about the abilities and inabilities of AI systems in the AI community at large, the following statement by a mathematician,24 though specific, provides a fair assessment of AIs from the perspective of a serious task:
“The imagination and intuition of mathematicians will be required to make sense of the output of AI tools. Thus, AI will act only as a catalyst of human ingenuity, rather than a substitute for it.”
The shortcomings of present-day AI systems, whether or not they achieve AGI status, still leave them with tremendous potential for imparting known and unknown adverse impacts on humanity, a concern that has put many bright minds, including some of the AI creators, and governments throughout the world, on alert. While efforts by AI institutions to install guardrails, and parliamentary actions throughout most of the Western world, are underway to reduce the possible risks, their effectiveness is already being questioned by the AI community itself.25 The underlying reason for the negative prospects of large language models (LLMs) is the fact that there is no rigorous science underlying their operations; nobody understands how LLMs do what they do in the realm of knowledge, despite all that is known in the AI community about their (mathematical) operational procedures. The fact that there is some schematic operational similarity between the brain and AIs, and that both work on unknown principles, does not mean 1) that the brain can be duplicated in AGI, though it can possibly be functionally simulated to some degree, or 2) that placing them in the same league is justified as far as risks are involved; the risk potentials of advanced AI systems are vast, and all the concerns are quite justified. The concerns and perceivable risks will be intensified, and become even more unpredictable, when and if the AGIs demonstrate instances of achieving their goal of surpassing human intelligence and ingenuity. An important case in point: discoveries in the hard sciences with destructive potential, possibly beyond the control and understanding of humans, could put humanity at peril of extinction. This grim possibility has served as the motivation for my work, which begins with a brief but deeper scrutiny of the substrates of intelligence and ingenuity in each of the two systems: humans and advanced AI systems (AGIs).
As in such systems, the human brain's engagement with problems creates a demand for resolution; input to any computer initiates some operations (computations), regardless of the presence or absence of related instructions (algorithms). It is likely that the brain's approach to inquiry (problem) resolution, when grossly simplified, is like that of AIs, as in training for skills. But while the working principles of both systems are unknown, the brain's comparably much more complex biochemical-biophysical operational processes leave vastly less chance of demystification in the foreseeable future. This by itself will make it impossible for mathematically/algorithmically operating AI systems to mimic its way of knowledge generation. It specifically has to do with at least two critical aspects of brain operations that involve:
These very complex facts that distinguish the brain from the mathematical/algorithmic AI systems highlight the essence of "what it is like to be a human" (after Nagel26), especially for those advantaged with sufficient brain "compute" power. In what follows I provide the details of the processes involved in the making of human intelligence and ingenuity, which clarify why it is impossible for AGIs to be curious about and puzzled by problems, and why only bio-physical-chemical operations make their resolution possible.
The human brain has evolved in response to the imperatives of survival, adapting to the dynamics of the internal and external environment. This evolutionary journey, marked by transformative breakthroughs, began with early trial-and-error learning and has continued through the development of complex neural patterns in the brain. The changing socio-economic and political landscapes have shaped the specific requirements for survival, driving the brain's adaptations. The development process of these neural patterns, likely guided by complex genetic processes, mirrors to some degree the general computational approach observed in artificial neural networks, suggesting a common underlying logic. For the brain, the process of adaptation (the demand of the timely connotation of survival), which involves new engagements with underlying complexities that require resolution, generally interferes with the normal operations of the brain, particularly those of the structures that regulate the endocrine system. Depending on the difficulties encountered, the human biological system naturally suffers varying degrees of imbalance, specifically affecting bodily homeostasis, the immediate result of which is a sensation of "unease," commonly referred to as "stress." The role of the stress system is well established across the medical and cognitive sciences. Evolved for adaptation, resilience, and the maintenance of health, the system is defined as a composite, multidimensional construct. It involves the appraisal and processing of a stimulus (the stressor) and the subsequent output: the stress response. Within this framework, a stressor is defined as any environmental change that disturbs homeostasis.27 Heavy engagement of the brain in problem resolution, typical of scientific research, involves long periods of 'over-clocking' the stress system, engaging the SAM and HPA axes and their feedback mechanisms.
This creates a high allostatic state,28 which the body may interpret as a state of physical danger, triggering the immune system (indirectly, and possibly directly) into intensive processes of repair, restoration, and coping. The undeniable realities of human existence, overcoming the challenges of survival and of advancing civilization, reveal that a mechanism of "stress relief" has been successfully operating in humans: a phenomenon that accounts for the human brain's power of resolving the complexities it happens to encounter, which amounts to human intelligence and ingenuity. It is built on the following elements, characteristic of the human biologic system:
The amazing progression of large language models (Gemini, the ChatGPTs, etc.), and the predictions of their imminent advances, claimed to surpass human intelligence at least in some regards, have created both a sense of elation and serious concern among various communities: the former for the possibilities of improving life on the planet, perhaps drastically in some aspects, and the latter for unfavorable changes in ways of life and for the possible adversities that could be born from them. My work is a serious step toward moderating, and to some degree alleviating, such concerns. By revealing the biophysical processes from which human intelligence arises, which are unprocurable in any physical system, it provides a solid basis for inferring that, at least in cases of addressing issues of high complexity, human abilities cannot be surpassed. Human intelligence originated in the processes of human adaptation to the seemingly insurmountable obstacles of evolution, and human brains have evolved through the development of increased neural connectivity patterns, the results of resolutions of the complexities of the environments to which humans have had to adapt. The brain's varying resolution power is insensitive to the kind and nature of the problem with which it gets engaged; it depends only upon the brain's neuronal extent and the existing harmonizing patterns that can be availed to a problem, properties which have been strengthening in the genetically and environmentally evolving human brains, a fact evinced by the state of modern life. As a side note, this work puts the (both direct and indirect) function of the immune system in a different light; it provides a new perspective on its possible extended function in the brain, in the context of the brain's computational operations.
None.
The author declares that there are no conflicts of interest.
©2026 Schad. This is an open access article distributed under terms which permit unrestricted use, distribution, and building upon the work non-commercially.