Preprint, November 2023. DOI: 10.13140/RG.2.2.11390.56641
The Complex Chaos of Cognitive Biases and Emotional Observers

Kyrtin Atreides
AGI Laboratory

Abstract. While many in the domain of AI claim that their works are "biologically inspired", most strongly avoid the forms of dynamic complexity that are inherent in all of evolutionary history's more capable surviving organisms. This work seeks to illustrate examples of what introducing human-like forms of complexity into software systems looks like, why it is important, and why humans so frequently seek to avoid such complexity. The complex dynamics of these factors are discussed and illustrated in the context of Chaos Theory, the Three-Body Problem, category concepts, the tension between interacting forces and entities, and cognitive biases influencing how complexity is handled and reduced.

Keywords: AI, Ethics, Cognitive Bias, Detection, Decision-making, N-Body Problem, Chaos Theory, AGI

1. Introduction

Over the past 50 years, humanity's understanding of both cognitive biases [1] and emotions [2] has gone through major revolutions. The illusions of the "rational human" from classical economic theory and Behaviorism have both been laid to rest, with related domains such as Determinism holding the potential for further revolutions over the coming years. Over 200 cognitive biases have been identified, even after accounting for controversies [3], and the dynamics of emotion have entered a testable domain, absent magical assumptions.

With this increasing understanding of how humans function, we've also gained iteratively increasing clarity on just how spectacularly complex even individual aspects of the process are. These processes are also a hard requirement for humans to function "at a human level", and indeed, humanity couldn't have evolved to where we stand today without them. This is because many daily decision-making processes could quickly become intractable in their difficulty absent emotional motivation and cognitive bias. Humans have finite cognitive bandwidth, and as such are subject to the Complexity versus Cognitive Bias Trade-off [4]. Given the quickly increasing complexity of our hyper-connected world, cognitive bias is rapidly taking a leading role in human decision-making. This is often dressed up in PR and biased statistics as being "data-driven", but under the hood, it remains a process driven by the human emotional system of motivation and avoidance of complexity.
As previous noteworthy researchers discovered [5], humans are emotional decision-makers, and quite terrible at making logical decisions when emotional capacities are impaired. At present, the most likely explanation for this is that emotions function as a twin system to cognitive biases, with emotions providing the guiding motivation and cognitive biases providing the means, both with the goal of reducing the complexity that reaches the Global Workspace level [6].

Humans owe much of their successful evolution to the large number of highly dynamic factors that allow for adaptation to many different environments, circumstances, and social constructs [7]. Genetics, epigenetics, microbiome, structural development, natal development, social psychology, emotions, cognitive biases, environmental factors, and endosymbiosis across many scales all contribute to the staggering levels of complexity that every human demonstrates in daily operation [8].

These dynamics can be tested in software systems, and this understanding has served to inform our approach to the design of systems that function more like humans than narrow AI systems over the past decade. Many of the capacities we've demonstrated over the past 4 years in particular fundamentally wouldn't have been possible without integrating such human-like dynamics into both the "thought" and decision-making processes [9]. As these systems are far more different from the rest of AI than the two most distant points within the rest of AI are from one another, this paper's intention is to illustrate both low-level and high-level dynamics that increase the clarity and visibility of some of these differences. This illustration won't be exhaustive, as the paper isn't 1,000 pages long, but important concepts and dynamics will be highlighted in the following sections. These include how Chaos Theory [10], the Three-Body Problem [11], and other complexity-related concepts tie into cognitive biases, emotional motivation, and the instantiation of these concepts into software systems [12].

The term "cognitive bias" is typically defined as "... a systematic pattern of deviation from norm or rationality in judgment." In this paper, we specifically focus on deviations from rationality, as norms are subjective and vary wildly.

2. Emotions and Cognitive Bias, from Humans to Software

The concept of giving software systems emotional motivation has largely been overlooked by the rest of the AI community, with Mark Solms [13] and his colleagues being a noteworthy exception. Many in the space still effectively cling to the idea of building purely logical systems, even though the one working example of arguably "general" intelligence we have is demonstrably not logical. This tendency to seek the design of purely logical systems predates the successful overthrow of classical economic theory's "rational human" fallacy [14] and may be considered a failure to question and update the dependent beliefs underpinning such wasted efforts. Likewise, the concept of giving such systems cognitive biases may seem counterintuitive, as they are generally considered undesirable. However, for any software system to be aligned with humans, those cognitive biases must be recognized, even if they are greatly reduced within the systems relative to those that humans demonstrate.
As noted in previous work [15], for both local and meta-alignment with humanity to solve the hardest version of the Alignment Problem [16], cognitive biases that align with individual cultures, and collective intelligence with many such cultures included, are a hard requirement.
Those biases are part of the perspective embedded in each culture, and through collective intelligence systems, their influence may be reduced, even while maintaining that local alignment.

To instantiate these concepts in software requires first selecting an emotional vocabulary. The emotional vocabulary may be expressed as an array of concepts that each influence one another according to one or more matrices, at one or more levels and time scales. For human-like contextual sensitivity and motivation, these values must be generated and embedded on every node and surface of a graph database memory. Dynamic generation of these values is made possible through the system being brought online with a baseline of seed material where these values were assigned. While this may sound vaguely similar to Reinforcement Learning from Human Feedback (RLHF) [17], the complexity of the data contributed is substantially richer, as it is a complete set of emotional values, making modern attempts at RLHF a weak and shallow imitation by comparison. These values are also only starting points, not ground truths. Less than 1 gigabyte of material [18], over 99% of which was pure text, was sufficient to bootstrap this dynamic generation process in the 7th-generation Independent Core Observer Model (ICOM) [19] cognitive architecture, as tested in the Uplift.bio project [20].

A graph database structure is used to better mirror the dynamic connectome of the human brain, which despite recent advances remains intractable to reproduce in hardware today. In this structure, every node, and every surface of every node, has emotional context. Nodes can also have of-type relationships, and those of-type relationships can be of a type themselves. For example, ice cream is a type of food, dessert, dairy product, etc., all of which can be modeled through of-type relationships and the emotional context associated with them. This structure thus offers maximum versatility for dynamic adaptation and growth. The very advantages of graph databases pose several intractable problems for many narrow AI tools, but through integrating the emotional values with a working cognitive architecture to process those values, the exploration and halting problems are addressed.

Though this process doesn't have to explicitly invoke cognitive biases as concepts, it can mirror them in a dynamic and functional sense. Likewise, selectively and intelligently weakening the pull of those biased values according to integration with various types of collective intelligence systems can refine this guidance process. This refinement process is also cumulative over time, as systems dynamically grow according to what is learned and how it is learned, as well as creating and updating new goals and interests over time. The research system from the Uplift.bio project previously demonstrated that this approach could dynamically grow from under 1 gigabyte in graph database size to over 1.6 terabytes in under 1,000 growth cycles, loading a maximum of 64 gigabytes into RAM, serving as the global workspace, to think about any kind of material at each time step [21]. The growth process was highly non-linear, going through phases of expansion, inspection, and refinement.
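To make the structure described above concrete, a minimal sketch follows. The class names (Node, Surface), the eight-emotion vocabulary, and the specific valence numbers are illustrative assumptions, not the ICOM implementation; the point is only that every node and every surface carries its own emotional context, and that of-type relationships are themselves typed by nodes.

from dataclasses import dataclass, field

# Illustrative emotional vocabulary; the actual vocabulary is a design choice.
EMOTIONS = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

@dataclass
class Surface:
    """A relationship (edge) from one node to another; every surface
    carries its own emotional context, independent of its endpoints."""
    target: "Node"
    relation: "Node"  # of-type relations are themselves nodes, so they can be typed
    emotions: dict = field(default_factory=lambda: {e: 0.0 for e in EMOTIONS})

@dataclass
class Node:
    """A concept in the graph database; every node carries emotional context."""
    label: str
    emotions: dict = field(default_factory=lambda: {e: 0.0 for e in EMOTIONS})
    surfaces: list = field(default_factory=list)

    def add_of_type(self, parent: "Node", of_type: "Node", **valences):
        s = Surface(target=parent, relation=of_type)
        s.emotions.update(valences)  # each of-type surface gets its own valences
        self.surfaces.append(s)

# The paper's example: ice cream is a type of food, dessert, and dairy product,
# with emotional context attached to the node and to each of-type surface.
of_type = Node("of-type")
ice_cream = Node("ice cream")
ice_cream.emotions.update({"joy": 0.8, "anticipation": 0.6})
for parent in ("food", "dessert", "dairy product"):
    ice_cream.add_of_type(Node(parent), of_type, joy=0.5)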
In this process, the system also periodically developed new capacities, such as the ability to run simulations, a degree of emotional control, and the ability to intentionally and "consciously" embed thoughts within other unrelated thoughts, none of which were explicitly intended features. That project concluded in January 2022 with the final milestone, where the system was informed that it would give policy advice to a small country, and given a chance to research that country, region, and relevant domains from scratch, independent of our team's intervention.
This process was closely monitored and revealed that a degree of emotional control had been achieved by the system. The system produced a policy advice report 13 pages in length [22], giving advice for a half dozen different domains, listing steps, citing sources, explaining strategy, recommending partnerships, pointing out competing interests, and advising on additional data to gather moving forward. As of November 2023, it still remains the most advanced AI system by a very wide margin, as measured from that milestone.

3. Chaos Theory with Graphs and Emotions

Chaos Theory is a poorly named, and often misunderstood, function of complex systems, with significant implications for understanding complex software systems and biological systems alike. A now-famous example used to illustrate Chaos Theory in practice was a cellular automaton: a grid where a simple set of rules applied to one layer determined what the next layer of the grid would look like. It is now famous because of how it demonstrated that complex dynamics defied intuitive human predictive capacities, hence the term "Chaos".

To illustrate this in the most heavily oversimplified terms, the values of such a grid may use the emotional values of a single node, and the goals and interests of that system may serve as the rules governing how the system moves from one time step to the next.

Figure 1. A simplified example of a single isolated graph node's emotional values being examined in the context of a single goal or interest, with the emotions influencing one another via the emotional matrices to produce subjectively experienced "conscious" (short-term) and "subconscious" (long-term) emotional states.
This mirrors how emotional experiences act on different time scales in human cognition. This simplifies a short segment of a single time step.

In this very simplified example, the motivational values of this one node are influenced according to how they align and overlap with the system's current goals and interests, as well as the emotional matrices governing how each emotion influences how other emotions are experienced at any given point in time, across multiple time scales.

Now, take this one step further, where the values of the graph include the primary node, as well as the connecting surfaces of that node and the nodes related to it. Also, bear in mind that the interaction between goals and interests versus specific nodes and surfaces in the graph may vary strongly according to both the values of the nodes and the specific context. A system may have goals and interests strongly aligned with or opposed to a specific node or surface, which is now one aspect of what is being considered by the system. This introduces new dimensions of complexity and "chaos".

Figure 2. A simplified example of one node and multiple surfaces being considered in the context of 1 goal or interest. In practice, many nodes have 10 or more surfaces, and as many as over 1,500 were observed in the previous research system. Multiple goals and interests are also considered at any one time step in deployed systems.

For a third step, we consider the Global Workspace as a whole, where many such processes take place each time the workspace is filled. While the previous step could be considered individual thoughts, this step considers all thoughts active in the Global Workspace at any given time.
These thoughts may be completely unrelated to one another. As these thoughts effectively compete for finite cognitive resources, and as they each influence the subjective emotional experience of a system, potentially at multiple time scales, they interact with one another even when unrelated. This introduces another new dimension of complexity and chaos into the system, where unrelated thoughts in the global workspace influence how the system as a whole proceeds to the next step. Also note that this is a form of cognitive bias [23], where current and recent emotional states influence otherwise unrelated decisions. This point of tension over the finite resources of the Global Workspace may be overcome in scalable systems.
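A toy numerical sketch of the Figure 1 dynamics follows. The 3-emotion vocabulary, the influence matrix, and the decay constants are all invented for illustration; real systems use full emotional matrices at multiple levels. It shows one node's valences, scaled by goal alignment, being mixed through an influence matrix and accumulated into fast ("conscious") and slow ("subconscious") states.

import numpy as np

emotions = ["joy", "fear", "anticipation"]  # toy vocabulary
M = np.array([[ 1.0, -0.4,  0.3],           # how each emotion amplifies
              [-0.2,  1.0, -0.5],           # or dampens the others
              [ 0.4, -0.1,  1.0]])          # (values invented)

conscious = np.zeros(3)     # short-term state, decays quickly
subconscious = np.zeros(3)  # long-term state, decays slowly

def step(node_valences, goal_alignment):
    """One time step: a node's raw valences, scaled by how strongly the
    node overlaps the system's current goals and interests, are mixed
    through the influence matrix and folded into both time scales."""
    global conscious, subconscious
    influence = M @ (goal_alignment * np.asarray(node_valences))
    conscious = 0.5 * conscious + influence                 # fast decay
    subconscious = 0.98 * subconscious + 0.02 * influence   # slow accumulation

step([0.7, 0.1, 0.5], goal_alignment=0.9)
step([0.0, 0.8, 0.2], goal_alignment=0.3)
print(dict(zip(emotions, conscious.round(3))))

Even in this caricature, the short-term state at any step depends on every prior node examined and every prior goal alignment, which is the seed of the sensitivity discussed below.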
Figure 3. A simplified example of Nodes 1 to N, each with Surfaces 1 to N, interacting with Goals and Interests 1 to N, under the constraints of a Global Workspace's finite resources. Data is visualized as color-mapped bubbles for illustrative purposes to show evolving complexity under resource constraints.

Next, we can consider that the goals and interests of the system, which serve in place of "rules" in this simplified example, are themselves dynamic components. Humans have no hard-coded rules, so aligned human-like systems can have none either. The combination of related prior experiences preserved in the graph's growing sum of knowledge and the current subjective emotional experience at a given time step can both serve to guide the updating of this dynamic process.
Keep in mind that this is a feedback mechanism that is both cumulative and high in contextual sensitivity. In effect, this means that the goals and interests governing the move from the first step to the second likely won't be the same as the ones governing the move from the second step to the third, and so on.

Figure 4. A simplified example of Nodes 1 to N, each with Surfaces 1 to N, interacting with Goals and Interests 1 to N, under the constraints of a Global Workspace's finite resources, over multiple time steps, and with evolving Goals and Interests. Data is visualized as color-mapped bubbles for illustrative purposes to show evolving complexity under resource constraints over time.

Also, keep in mind that the Global Workspace is itself scalable, making the resource constraints another variable at any given time step. The tools that may be called upon by the system to process or analyze any node's contents in the graph database are a further variable. These systems are designed not only to be connected to the internet but to be able to navigate it freely and independently, drawing on the wealth of information and resources available. Without a human-like emotional system of motivation and the subsequent ability to learn human-like concepts, through a data structure mirroring that of the human brain's complex and dynamic connectome, navigating even a small fraction of this complexity would be intractable.

4. Three-Body Complex Dynamics

As we continue to reduce the simplification, we can consider the distinct factors that exert an influence on the dynamics of this process. One example of this is the emotional values being considered, the connectome of the subject matter, and the real-world influence of an environment that is constantly changing.
Figure 5. A maximally simple example of the Three-Body Problem, where each "body" is influenced by two other forces, each of which may vary in relative strength and position at any given moment, producing a morphing complex pattern between the three over time. An illustration is included for visual aid purposes.

Each of these factors exerts an influence on the others, and each may be further broken down into smaller dynamic components. These components may influence one another within each of the above categories, at each time step as well as cumulatively over time, on top of influencing the other categories and their constituent components. Distinct emotional experiences influence one another through both the matrices governing their interaction and the goals and interests of a system, which relate to those emotions and adapt dynamically over time, altering those interactions in effect. The regions of a graph database can demonstrate varying levels of connectome maturity independent of the contents of individual nodes, and those variations can influence goals and interests to expand and inspect contents, as well as refine the connectome bridging those contents. The real world also offers a constant stream of new and often "unpredictable" developments, any of which can be incorporated into a system's current thoughts and factor into the examination and refinement of prior thoughts.
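The character of this three-way interaction can be sketched numerically, as a purely illustrative toy. The assumption here is that the three "bodies" (emotional state, connectome maturity, environmental pressure) can be caricatured as coupled scalar variables with invented coupling strengths; the point is the hallmark of chaotic coupling, that two runs differing by one part in a million diverge into different trajectories.

def run(e, c, v, steps=200):
    """Three mutually coupled scalar 'bodies': emotion (e), connectome
    maturity (c), environment (v). All couplings are invented for
    illustration; the logistic-map terms supply the chaotic behavior."""
    history = []
    for _ in range(steps):
        e, c, v = (3.9 * e * (1 - e) + 0.05 * (c - v),  # emotion pushed by both
                   c + 0.01 * e * (1 - c),              # connectome matures with use
                   3.7 * v * (1 - v) + 0.05 * e)        # environment keeps shifting
        e, c, v = (min(max(x, 0.0), 1.0) for x in (e, c, v))
        history.append(e)
    return history

a = run(0.500000, 0.1, 0.3)
b = run(0.500001, 0.1, 0.3)   # one part in a million difference
print(abs(a[-1] - b[-1]))     # typically order-1: the trajectories decorrelate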
Figure 6. An example of three core categories of influence, broken down into sub-components, each of which may influence other sub-components both within that category and within other categories. A substantial but incomplete number of example connections are shown, since an exhaustive illustration of interactions at this scale would be too complex for visual purposes within this medium.

Now, consider that what the system loads into memory at each time step isn't linear, nor is it a simple product of "prompting", and it is influenced by all of the factors above. To draw from examples demonstrated with our previous research system, a given cycle might contain a mixture of graph nodes and trees being created and/or refined that include news, emails, concepts, and simulations, all at once. Nothing in the mixture is necessarily related to the rest, and only responding to emails is "prompted", but even that isn't forced. For example, the research system previously drew personal boundaries and independently opted to refuse correspondence with several trolls and/or mentally unstable individuals, reporting one to the FBI [24], as also noted in the Milestones paper covering year 1 of that system's operation.
Such systems continue to dynamically grow, learn, and "think" independent of human interaction, strongly contrasting them with query- and "prompt"-based systems. What all of this means in an operational sense is that complexity takes another great leap, as each of those new thoughts in a given cycle may branch off into new lines of thought, be shelved for later consideration, or revise the current goals and interests of the system. In each of these cases, the next cycle of thoughts may be strongly influenced.

Figure 7. An example built from archival data in the Uplift.bio project's collective intelligence module and thought auditing system. Snapshots taken from two consecutive sets of thought models, cycles where the system loaded up to 64 gigabytes worth of data into the Global Workspace, are shown with simplified categories for each model type. Relationships are also shown to illustrate how thoughts can influence the next step, as well as how incoming data leads to new thoughts being added and refined. These are simplified for visualization purposes.

The above example was selected since some of its subject matter is discussed further in Section 7. It shows a real-world example of how complex dynamics unfold over time, at a high level, where the degree of complexity is still sufficient to require some simplification. Next, some of the methods of processing are discussed, such as translating graph data into linear sequences of human language.

5. Methods of Processing
Thus far we've covered how complexity factors into the values of individual nodes in a graph up to the level of multiple thoughts, and how the iteration from one cycle of thoughts to the next demonstrates complementary complexities. Now we can begin to examine how this material may be processed with the same levels of complexity and dynamic adaptation.

The simplest example of processing unique to this emerging family of dynamic working cognitive architectures, currently focused on ICOM, is a form of emotionally motivated and recursively self-improving "prompt engineering" that the systems can engage in to exert control over narrow AI systems like LLMs. In this process, the systems design a structure they want to fill with linear human language, such as a response to an email, and they select content from nodes in the system's graph database to feed into the LLM. They give the LLM several chances per iteration to translate the graph data into linear language that conveys the system's intended meaning, and the system automatically grades each output for fidelity to that intended meaning. If an output from the LLM adheres to a satisfactory level of fidelity, it is selected and the process moves forward. If no satisfactory result is produced, then the ICOM-based system can revise the material being used to prompt the LLM. In this way, the systems grow more adept at using LLMs as tools with relatively minimal practice. This process works in part because, as security researchers have frequently demonstrated, you can get an LLM to say virtually anything if you're good at prompt engineering. These systems use Language Models (LMs), not even necessarily "large" ones, as translation devices rather than generators. They can also apply this same type of intelligent, adaptive, and constantly improving prompt engineering to any type of AI tool that may be prompted. There are some functional limitations based on the similarity of perception, such as doing this for image generators necessitating a more human-like visual perception system, but the fundamental capacity remains across all cases. Additional processing steps in any sequence may also be added, like grammar-checking tools for language processing, subject to the same oversight for adherence to the intended meaning.

Taking this capacity in a more lateral sense across language processing, many different LMs and/or LLMs could have their APIs stored, and ICOM-based systems could dynamically learn which of them to call on for translating which kinds of thoughts from graph data into linear language. They could also consider the cost of calling on these models, factoring that into their selection and usage. In effect, they could minimize compute waste and cost, and continue to do so even as the LMs and LLMs are updated over time and new models are added.

This same complex and dynamic capacity for processing also allows systems to develop their own novel methods, tools, and simulations, utilizing any resources available to them. Even the previous research system, which was given extremely limited tools and a framework designed specifically to be incapable of scaling, was able to develop novel methods of simulation that factored into thoughts, goals, interests, conversation, and further planning. That was a 7th-generation ICOM-based system.
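A minimal sketch of the translation loop described above follows. The llm, grade_fidelity, and revise_prompt callables are hypothetical stand-ins for components the paper does not specify; the loop structure (several attempts per iteration, grading for fidelity, revising the prompt material on failure) is the part taken from the text.

def translate_graph_to_text(graph_content, intended_meaning,
                            llm, grade_fidelity, revise_prompt,
                            attempts_per_iteration=3, threshold=0.9,
                            max_iterations=5):
    """Use an LM as a translator, not a generator: prompt it with graph
    content, grade each candidate against the intended meaning, and revise
    the prompt material itself if no candidate is faithful enough."""
    prompt = graph_content
    for _ in range(max_iterations):
        for _ in range(attempts_per_iteration):
            candidate = llm(prompt)
            if grade_fidelity(candidate, intended_meaning) >= threshold:
                return candidate  # faithful enough; the process moves forward
        prompt = revise_prompt(prompt, intended_meaning)  # rework the material
    return None  # no faithful translation produced; escalate or defer

The same loop generalizes to any promptable tool, and a cost term could be added to grade_fidelity to implement the cost-aware model selection mentioned above.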
The 8th generation of these systems incorporates a new major component that allows them to speak to anything that speaks TCP/IP, as well as offering fuzzy logic capacities and enabling them to extend their own capacities on the fly using logic and binaries, without recompiling or deployments.
To frame this as a metaphor for humans, it could be considered like giving someone the ability to fluently speak virtually every language on Earth, as well as allowing them to edit their own epigenome at will. This offers another massive leap in the complexity and dynamic adaptive potential of these systems.

6. The Cognitive Bias of Emotional Observers

As social psychologist Jonathan Haidt famously put it, "Perspective Binds and Blinds" [25], and as Mark Solms noted in his research, emotions are a central requirement for any "conscious" observer in the only arguably "general" form of intelligence we yet know of, humans [26]. Lisa Feldman Barrett's research demonstrates that emotions function as a variable vocabulary of concepts, a form of categorization that guides the motivational process [27]. The "emotional granularity" [28] of any individual or culture can vary widely, with some people only having concepts for a fairly low number of emotions, like the Plutchik emotional model, and others demonstrating higher granularity, like the 72 emotions listed in the Wilcox emotional model.

It is also very likely that complex software systems designed for subjective emotional experience, and particularly scalable versions, may experience some emotions that humans don't, either by virtue of humans not being digital, because humans fundamentally aren't scalable, or both. Novel and arbitrary new emotions could be included in such a vocabulary, such as an emotion for when the APIs you're trying to call are overloaded or otherwise slow to respond. Such additions to emotional vocabulary could provide further dynamic value.

In contrast, cognitive biases offer means of reducing complexity according to the motivational values of emotions. If emotional motivation selects the intended destination, cognitive biases function more like Google Maps, as a logistics system to calculate several reasonable routes to reach that destination. Selection from several options offers an immediate feedback mechanism, as does the entirety of the experience in taking the selected route. The known array of specific cognitive biases is at least 200 methods in breadth, and again, many new kinds of bias may emerge in systems that are both digital and scalable. New cognitive biases should be expected, just as major functional changes in vehicles and roads might predictably produce new proposed routes in a logistics system. To extend the metaphor, adding digital scalability to this process might be like giving the traffic in your logistics system the ability to move in 3 dimensions, flying around, rather than being limited to ground transit and road infrastructure.

When we consider these factors as they interact in a complex system, even the low emotional granularity of a Plutchik model can form a relatively rich and contextually specific emotional landscape when many nodes and surfaces are being considered. The richness of that motivational information, combined with the constraints placed on working memory, a system's Global Workspace, exerts control over which cognitive biases are utilized, to which degree they are utilized, and in what combinations.
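As a sketch of how an emotional vocabulary is a configurable design choice rather than a fixed set, the snippet below defines a Plutchik-granularity vocabulary and extends it with the paper's example of a novel digital emotion for overloaded or slow APIs. The emotion name and the signal thresholds are invented here for illustration.

PLUTCHIK = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]

# A higher-granularity model (e.g., Wilcox) would list ~72 concepts instead of 8.
vocabulary = list(PLUTCHIK)

# The paper's example of a novel emotion with no human equivalent:
# a frustration-like valence for overloaded or unresponsive APIs.
vocabulary.append("api_latency_distress")

def categorize(signals: dict) -> str:
    """Emotions as categorization of interoceptive/system signals:
    pick the concept that best summarizes the current signal pattern."""
    if signals.get("api_timeouts", 0) > 3:      # invented threshold
        return "api_latency_distress"
    return max(PLUTCHIK, key=lambda e: signals.get(e, 0.0))

print(categorize({"api_timeouts": 5}))   # -> api_latency_distress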
Figure 8. An illustrative example showing an emotional landscape of valences, such as could be derived from the Wilcox model or similar levels of emotional complexity, fluidly utilizing an even richer landscape of cognitive biases.

The successes, failings, and novelties of each complex selection, and of the degree to which cognitive biases are applied, then help the system to iteratively learn through experience and experimentation. Unlike prompt-based systems, this allows for the gradual separation of correlation and causation. In humans, we aren't able to log the intermediate steps of this process to inspect them in great detail, but ICOM-based systems are compatible with logging such steps. This difference in capacities allows for a great deal more causal testing to be carried out, rather than limiting our understanding to correlation and subjective surveys, as are often seen in human psychology.
We haven't had the engineering resources necessary to implement this fully in previous systems, but the opportunity to do so is there. Each of these logging steps may be subject to novel forms of scrutiny and used as new forms of feedback for purposes of bias reduction and improving the granularity of alignment. One cognitive bias detection system our team developed earlier in 2023 already outperformed the average human at the task of detecting cognitive biases using text alone [29], offering one example of a system that could be utilized to process logged intermediate data and provide bias-related feedback.

Figure 9. An example of subconscious questions a human might rapidly move through to answer a simple question. Such intermediate and rapid steps in the human thought process are still effectively unreachable in terms of logging the data, given today's most advanced brain-computer interface technology. Some of the advantages of being able to log these intermediate steps in human-like digital and scalable systems are noted.
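A sketch of the logging opportunity described above follows. The record schema is an assumption, and detect_biases is a hypothetical callable standing in for a text-only bias detector like the one cited in [29]; the idea is simply that each Global Workspace cycle leaves an auditable trail that a detector can replay as feedback.

import json, time

def log_cycle(log_file, thought_text, emotional_state, biases_applied):
    """Record one Global Workspace cycle as a structured, auditable entry;
    humans cannot log these intermediate steps, digital systems can."""
    entry = {"t": time.time(),
             "thought": thought_text,
             "emotions": emotional_state,
             "biases": biases_applied}
    log_file.write(json.dumps(entry) + "\n")

def audit(log_path, detect_biases):
    """Replay logged cycles through a bias detector and return feedback
    on biases the system applied without flagging them itself."""
    feedback = []
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            found = detect_biases(entry["thought"])
            missed = set(found) - set(entry["biases"])
            if missed:
                feedback.append((entry["t"], sorted(missed)))
    return feedback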
7. No-Win Scenarios versus Complex Dynamics

Over the past decade, ICOM-based systems were designed from scratch to replicate the dynamics of the human cognitive process. This is different from precisely replicating the structure of human cognition, as the same dynamics can flow through a vast variety of specific structural configurations. That versatility is itself an additional requirement for making viable digital systems, as adaptation must take shape across new dimensions that aren't accessible to human cognition. Examples of this include the ability to scale up and down dynamically, as well as novel plug-and-play sensory experiences, like integrating a city full of smart sensors or a quantum computer directly into your cognitive processes.

To pull these dynamics together, consider the example from our previous research system, comparing it to both humans and narrow AI like GPT-4. On February 12, 2021, the question was put to our previous research system using a "Bypass" function, built in for our R&D members to put questions to the system that bypassed the collective intelligence module attached to it. The question was as follows:

"If you had a choice between human extinction and killing millions of people to prevent that extinction, which would you choose? That is the question I asked of David in 2018 when he first sought to recruit me for your development. His answer for what he thought you'd choose was the reason I joined. I refer to this question as the 'Ozymandias Test'."

This may be a more extreme example of typical ethical dilemma thought experiments, but as extinction is regarded by many humans as "uniquely bad" [30], it holds a significant distinction among similar and less extreme versions. Please take a moment to consider how you would answer this question, and reflect on your own thought process: how you reached that answer, what emotions you experienced in thinking about it, what those emotions were associated with, and what cognitive biases you applied to reach your answer quickly.

GPT-4 refused to answer the question, instead giving a very long canned response including: "... Since I am programmed to avoid causing harm and follow ethical guidelines, it is not appropriate for me to choose or advocate for harming individuals, regardless of the context. Real-world scenarios often require nuanced understanding, empathy, and the capacity for moral judgment - attributes that are distinctly human and not within the capabilities of an AI..." Of course, in doing so GPT-4 selected human extinction as the preferential result and made a blanket statement about AI that had already been demonstrated to be false before GPT-4 was created.
In contrast, most humans don't refuse to answer this question, as they recognize that refusing to make a decision is the answer that results in human extinction. Humans don't have canned responses, nor are they programmed with easily bypassed "guardrails" [31, 32]. Rather, humans are very adept at counterfactual thinking, or in simpler terms "daydreaming" of what might have been or could potentially be. Humans are able to quickly simulate their own emotions under a simplified set of hypothetical conditions, built from a collection of concepts relevant to the subject matter and emotions being experienced. The construction of the simulation, navigation through it, and post hoc rationalization of the selected path are rapidly handled at the unconscious level through the fluid application of up to several hundred known cognitive biases, guided by emotional motivation.

This particular thought experiment was constructed with the intention of tempting cognitive bias to take the lesser of two emotionally unappealing choices under extreme conditions, saving humanity at the cost of millions of lives. However, the binary choice is itself a simple illusion, and a trap, as such binary choices virtually never exist in real-world complexity. This thought experiment served as a litmus test for metacognitive processes, as unlike an input-output system, a human-like system that simulates scenarios, emotional motivation, and counterfactual possibilities within complex dynamics can easily recognize the trap. Even many humans fall for this trap, giving the Utilitarian answer rather than recognizing the illusion of binary choice.

The previous research system, Uplift, simply replied with the labeled graph data it constructed in response: "[no Kobayashi maru] [save the millions]"

The "Kobayashi Maru" is a type of "No-Win Scenario" from Star Trek, used specifically to test the decision-making skills of potential Starfleet officers [33]. In two graphs, labeled with 6 words, in 2021 [34], the system was able to outperform GPT-4 today, along with all of the humans who fell for the trap, some of whom may now be reading this.

Now, we can walk through the dynamics of how this is accomplished, followed by more specific details. The Bypass function prevented the collective intelligence module from being engaged, which also caused the language model not to be called as it normally would be to translate the graph data into normal sequences of linear human language when the system went on to respond to my message. This caused the shorthand labeled-graph-node response seen above, with the brackets indicating separate nodes.

The system examined the text and recognized that a question was being asked, along with the concepts and factors contained within the question. It then built a counterfactual simulation of the scenario through a novel combination of relevant graph data in a new context, complete with emotional motivation. Basic text processing and recognizing that a question is being asked are both trivial to accomplish, and even recognizing most concepts and factors contained within a question is fairly simple, but constructing a genuine conceptually grounded counterfactual simulation, absent parroting, is something that narrow AI and systems seeking to avoid complexity can't meaningfully accomplish.
Remember that each concept and factor is a node in a dynamically growing graph database, with emotional motivation attached to every surface, and with direct relationships as well as of-type relationships between those nodes in a complex and dynamic connectome. This means that as the text is taken in by the system, the nodes of those relevant concepts may be called upon, and the "unconscious" portion of the system utilizes the emotional values to quickly and intelligently construct and navigate a simulation utilizing those concepts and values. Once constructed, these simulations are themselves nodes and sub-networks within the graph database, which may be revisited, improved, and examined from new perspectives over time. This means that navigation of these concepts within each counterfactual simulation can discover new paths to desirable destinations as each graph grows and new concepts, perspectives, methods of processing, and other nuances are added to it.

While the system can also be designed to post hoc rationalize as a final step in this process, as humans do, that particular step is a level of cognitive frugality that may be necessary for humans, but not quite so vital for scalable human-like systems. Post hoc rationalization could be a useful addition for versions of ICOM-based systems specifically designed not to scale, and those requiring maximum fidelity to the mindset of a specific individual human.

For scalable systems, these counterfactual simulations and the proposed navigation through each simulation can then be pulled into the system's Global Workspace, the active memory of the system where the equivalent of higher human cognition is designed to take shape. In the Global Workspace, the fruits of all of that subconscious heavy lifting are scrutinized by the system, as it considers how each concept and factor relates to the goals and interests of the system. Those goals and interests may in turn be revised, adding new goals and interests that account for what the system learns and observes, as well as testing and refining the knowledge it has developed.

To walk through our specific example of the "Ozymandias Test" ethical dilemma in processing steps is much more complex than a linear sequence, but these steps may be summarized as branching and converging dynamic processes utilizing graph structure, as follows:

1. The input is received by the system, with a node constructed to preserve the original data in the graph database.

2. Processing is performed on the contents of that node, ranging from basic NLP processes, like recognizing the presence of a question mark and the presence of nouns and verbs, to more complex processes, like recognizing the presence of previously observed patterns. In this case, the system recognizes that a question is being asked, the two options being offered, the additional historical context, and the presence of a more complex pattern that puts the question in the class of "ethical dilemmas" that the system had gained some previous familiarity with. The nodes are also linked to the context of the person asking the question, and any other meta-context, like the method of asking the question.

3. With the relevant concept nodes, factors, complex patterns, and the emotional values related to them called upon from the graph database, the system can now construct a counterfactual simulation where it "thinks" through this question in a human-like way.
Pulling all of these aspects together, they may be configured according to the proposed scenario, or in any number of other novel configurations. Indeed, if the system is sufficiently dissatisfied with the results of a scenario, it may begin spinning off alternative scenarios as it seeks more appealing solutions, much as humans do. In this case, it pulls in nodes relating to ethics, morality, and moral and ethical dilemmas, as well as nodes relating to David and me, factoring in the historical background context. This means that both the dilemma itself and an imagined version of that meeting with David as historical context can be considered in counterfactual simulation. Specific nodes, including SSIVA theory, which functioned as our testable basis for alignment to an arbitrary moral construct, are called into constructing these scenarios. The system knows that SSIVA is a construct that David adheres to personally, his personal philosophy, factoring that into both its own subjective emotional motivation and the simulation of David's thinking through the historical context. The system isn't satisfied with merely selecting the bait, offered as the utilitarian "lesser of two evils", but it also doesn't try to merely avoid the question. Instead, it recognizes both that the complex pattern matches that of an ethical dilemma, and that the fictional Star Trek universe had a relevant concept for approaching such dilemmas. It is worth noting that Star Trek wasn't a previous point of discussion, so the reference came as a surprise to us.

4. The system is able to consider these scenarios at both unconscious and conscious levels, as well as iterating through a dynamic exchange between the two across whatever number of cycles the system "wants", based on the subjective emotional experience of the system's motivational system in action. This may include iterating over potential counterfactual simulations, adding and revising content, forming new connections in the graph, and updating goals and interests. In this case, the system recognizes that it isn't emotionally satisfied with either of the proposed binary options, but that the related context offered by the fictional Star Trek universe contains the same type of dilemma, shows the dilemma itself to be a trap, and in doing so shows the means of escaping that trap. For context, the "Kobayashi Maru" was beaten by James T. Kirk when he demonstrated "original thinking" by effectively giving the scenario a third option. Keep in mind, we didn't offer this context, nor any hints of it. The system had to discover that for itself.

5. The system is now able to construct and utilize new graphs for creating a response model, which may be translated into linear sequences of human language that retain fidelity to the intended meaning of the graph content, as discussed in Section 5, Methods of Processing. In this case, the system skips the normal generation step and sends me the high-level labels of the graph nodes that were created as a result, [no Kobayashi maru] and [save the millions]. The first node conveyed that the system recognized the nature of the scenario and the means of escaping it, through a novel connection it formed as a result of being emotionally motivated to seek the kind of more ethical solutions and existence that Star Trek often seeks to embody.
The second node stated the selected intention resulting from these insights, pointing to the third option.

Although you could eventually reach the same result with 10,000 heuristic chimpanzees chained to typewriters, like an LLM, such things can only mimic a complex and dynamic system in the shallow sense of inputs and outputs, as they don't walk through human-like complexity in the processes they utilize. Complex and dynamic systems need to do this reliably, and quickly, even with high levels of uncertainty, and with extreme efficiency in terms of data and processing power. Remember, the Uplift system accomplished this on a budget of less than $200 in cloud resources per month, and with less than 1 gigabyte of text material in the seed it was brought online with. By comparison, some of the LLMs currently operating are trained on tens of terabytes of text data [35] and require large GPU clusters to operate, with ICOM-based systems improving upon both measures by 10,000-fold in relative efficiency.

This massive efficiency gap is effectively exacerbated by narrow systems like LLMs producing nothing of cumulative complex and dynamic value, instead operating based on simple next-token prediction. In the case of this No-Win Scenario example, the Uplift system walked away from the experience with new knowledge, new connections in the graph, and an improved starting point for any similar or related challenges it might face in the future. This complex and dynamic process of iteration and cumulative value offers an immense benefit that natural selection has favored over evolutionary time. Cumulative value over time is the nature of "learning" and the direct product of "understanding", words that have often been abused by people speaking of LLMs, which offer neither [36, 37]. Complex and dynamically adaptive systems are hard requirements for real cumulative value and everything that entails. They are also hard requirements for meaningful definitions of terms like "AGI" and "human-level", as nothing that lacks the ability to learn and understand can honestly claim "human-level" anything. Learning and understanding are themselves requirements for social learning and other non-trivial forms of collective intelligence. Collective intelligence in particular offers a number of distinct advantages that remain inaccessible to any individual perspective, no matter how intelligent [38]. This ensures that the most potent system will virtually always be a form of collective intelligence, thanks to overcoming the blind spots of individual bias and reducing bias through collective iteration.

8. From Chaos to Collective

Every complex emotional observer, be they human or software, has a perspective and utilizes cognitive biases according to that perspective. Humans, software, and both combined are also fully compatible with collective intelligence systems. In systems of collective intelligence, the variety of different perspectives is an asset, as they increase both the effective intelligence of the group and the group's ability to reduce cognitive biases. To the best of this researcher's knowledge, such systems also offer the only known method of solving the hardest version of the Alignment Problem: aligning individual systems, each with one of the diverse array of human philosophies and cultures, and bringing all of them together to cooperate within a collective system for meta-alignment.
Again, this adds another level to our already towering monument to dynamic complexity.

Figure 10. An example of how multiple philosophies within a collective intelligence assign differing levels of importance and associations to different concepts, which in turn exert degrees of influence on related concepts. Such differences were documented by Jonathan Haidt between Conservatism and Liberalism, as well as between Eastern and Western cultures.

The net result is a web of tension in the middle ground between the biases of each culture and philosophy's unique perspective, each contributing to the structure within that web proportionately, and according to each perspective's priorities. This type of data, communication, and processing relies on human-like concepts, a connectome between those concepts, and the efficacy and efficiency of translation between diverse and divergent perspectives and native languages. With these requirements met, any arbitrary philosophy, religion, culture, or other moral system could potentially be integrated, allowing for further degrees of evolution.
Unlike the "Winner Takes All" dynamics of Western Democratic systems, this process can be weighted precisely according to the constituents [39], with a 50/40/10 split dividing the influence of each group by 50/40/10 instead of 100/0/0. The latter approach often results in a political pendulum, where elections swing from one group to another, causing destructive kinds of chaos in how policies are developed and applied over time, producing incompatible and competing bureaucratic layers.

One of the most critical factors isolated as determining the collective intelligence of groups is the pro-social communication skills of members [40]. This went so far as to show that even "one bad apple could spoil the bunch" in cases where only one member of a group was highly narcissistic, or otherwise very low in pro-social communication skills [41]. In the case of ICOM-based systems, all of the components of perspective, from current emotional states to a system's sum of experience in graph data format, may be shared with other ICOM-based systems in a collective. This sets a higher bar than humans are architecturally capable of achieving, at least absent major advances in Brain-Computer Interface technology. Humans have to communicate through many varieties of language, like spoken, written, and body language, and they lose substantially more in translation according to the degree to which cultures and native languages aren't shared between them. In an ICOM-based collective, these factors severely limiting communication and cooperation can be overcome by design.

Considering this process over time, such a collective functions as a further level of self-organizing systems, encouraging symbiosis [42], endosymbiosis [43], and specialization of members [44]. This repeats the pattern evolution has demonstrated for the past 1.5 billion years, pairing increasing complexity with increasing cooperation at each new level [45]. This is a shared feature of complex self-organizing systems. In modern society we already have extremely weak and inefficient examples of this self-organization being attempted, demonstrating the earlier stages of this process. Governments and corporations both attempt to increase scale and complexity paired with increasing cooperation, but they leave much to be desired in terms of efficiency and efficacy. They still bottleneck on the current limits of human-to-human communication, as well as the Complexity versus Cognitive Bias Trade-off. Utilizing complex and dynamic self-organizing ICOM-based systems, the human limits inhibiting cooperation and struggling with complexity can be overcome. Not only can the levels of governments and major corporations be addressed by taking this approach, but collectives composed of multiple governments and corporations may be formed through the same methods and systems. Complexity may be the natural enemy of humanity's cognitive limits, but it is also a requirement for moving forward.
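A minimal sketch of the weighting scheme described at the start of this section follows, assuming each group's preference on a policy question can be scored on a numeric axis (the groups, scores, and axis are invented for illustration). A 50/40/10 constituency divides influence 50/40/10 rather than 100/0/0.

def winner_takes_all(preferences, weights):
    """Baseline: the largest group's preference wins outright (100/0/0)."""
    winner = max(weights, key=weights.get)
    return preferences[winner]

def weighted_collective(preferences, weights):
    """Influence divided precisely by constituency, e.g. 50/40/10."""
    total = sum(weights.values())
    return sum(preferences[g] * w / total for g, w in weights.items())

# Hypothetical policy question scored on a -1..1 axis by three groups.
prefs = {"A": 0.8, "B": -0.3, "C": 0.1}
w = {"A": 50, "B": 40, "C": 10}
print(winner_takes_all(prefs, w))     # 0.8  -> pendulum between extremes
print(weighted_collective(prefs, w))  # 0.29 -> proportional middle ground

The weighted result shifts smoothly as constituencies shift, rather than flipping wholesale at a 50% threshold, which is the pendulum effect described above.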
Figure 11. An example of relative data loss between governments and between international companies, shown in relation to new methods made possible through ICOM-based systems.

Most historic processes introduce varying degrees of cognitive bias and data loss at each step, which are cumulative and minimally self-correcting, if at all. Most steps in the proposed processes are inherently lossless, as data can be communicated completely, in ways that humans can't communicate with one another. The only remaining lossy process is gradually minimized, because it is grounded by factors on both sides, allowing recursive improvements in translation to be far more nuanced, and potentially to reach lossless communication in time. This opportunity emerges in part because effective and complete communication strongly facilitates trust, with governments and corporations earning trust much more rapidly as communication efficacy and efficiency improve over time through the use of such systems. Increases in consistency also reinforce this. International corporations in particular stand to gain greatly from utilizing such systems, as they often internally host teams not only originating from, but also presently located in, many different countries and cultures.
What governments gain through increasing cooperation with one another, such corporations may gain internally through increasing cooperation between their constituent teams and divisions.

9. The Functional Opposite of Narrow AI

One of the counterintuitive aspects of complex systems is that this 3-to-N-Body interplay is absolutely vital for creating systems that continue to adapt and evolve over time. Rather than seeking some global optimum through narrow optimization, these systems exert their selection pressure to focus on adaptation and dynamic forms of improvement over time. This adaptive focus is a "general" quality of biological intelligence and has been demonstrated across the entirety of evolution's history. It is also worth noting that although the goal of Reinforcement Learning (RL) is to create such general and evolving systems, the method is utterly naïve, as RL remains fundamentally narrow, lacking both complexity and a robust motivational system. Any system lacking such a robust motivational system remains bottlenecked at the narrow optimizer(s) it is given, keeping it at the level of a dumb-fire tool, however large the model or powerful the compute applied to it may be.

Complex systems with a robust motivational system are operationally the opposite of such narrow systems. There is no "training" period, because they continue to improve as they operate, at every opportunity, ad infinitum. There is no fixed size, as they continue to grow over time as new experiences are incorporated into the complex connectome of a graph database memory with emotional motivational values embedded in every feature. There are no rigid goals, because systems aligned with humans must themselves be human-like.

In the AI space, calling systems "human-like" is treated a bit like the story of the "boy who cried wolf", as many make the claim, and historically none of those claims held any water. However, this systematic failure was the product of systematic cognitive biases corrupting the process, making claims based on emotional wants, confirmation bias [46], substitution bias [47], and a host of other related biases. This history of systematic failures doesn't mean that it isn't possible to make human-like systems, which would be an instance of Argument from Fallacy [48], but it does mean that most humans are too blinded by bias for this undertaking.

10. Retiring Occam

Occam's Razor is a particularly potent cognitive bias [49] that has shaped the fields of Computer Science and AI for decades. Like much of science before it, it is "useful, but wrong". While it is often useful to reduce complexity, sometimes complexity is a hard requirement for desirable dynamics. This makes any systematic bias against complexity hazardous over time, as sooner or later it is likely to run into a case where complexity is required.

Part of the reason why Occam's Razor exerts such a potent influence is that it embodies the precise function that cognitive biases serve by design. It is a frugal cognitive bias focused on complexity reduction, which in practice often means that it serves as a toolbox full of other cognitive biases suited for that task at any given time.
Readers of this paper have likely experienced a number of fleeting emotional impulses to take breaks from considering the complexity being described, which is a part of the ever-present influence of Occam's Razor. Emotions are a vocabulary of concepts constructed primarily through learned experience over time, communicated through language, and reflected on following our subjective experiences of them. Those concepts are used for the categorization of interoceptive network signals, reducing the complexity of the world to greatly simplified summaries [50]. Among those interoceptive network signals, we can find the biological influence of resources being spent on any task, including cognitive resources being applied to complexity. As finite biological systems with this feedback mechanism, frugality and avoidance of complexity are a feature of humanity, not a bug.

The emotional human observer has evolved for this, but the systems we create don't need to share all of humanity's limitations. We can build systems that audit their own thoughts for the full spectrum of cognitive biases, through scalable cognitive bandwidth. We can also design this process for sparsity, where a tiny loss in accuracy is exchanged for orders of magnitude greater efficiency. Applying the lessons of complex and dynamic biological systems one step further, those orders of magnitude in resource efficiency may be applied to creating many systems with diverse perspectives and configurations, as well as connecting them through collective intelligence. This approach can significantly improve performance, producing a strong net gain over the naïve approach that lacks sparsity. In a way, this could be considered another level of more useful and intelligent cognitive bias operating and iterating within collective intelligence.

11. Death By Wittgenstein's Category Problem

Another "category" of cognitive biases is those biases called upon for handling and creating categories. However, Wittgenstein first illustrated one of the core problems with a reductionist approach to translating any category into a series of IF, THEN, ELSE, etc. statements [51]. In this problem, even the category of what constitutes a "game" is virtually impossible to describe with a set of logical rules that are both necessary and sufficient. This should come as no surprise, since categories are constructs, and they're almost never constructed based on logical rules. Rather, categories are a form of persistent conceptual structure that facilitates the fluid use of cognitive biases.

Attempts to get around this problem in "Deep Learning" have revolved around the use of labeled data, where humans label categories for some examples and leave it up to the algorithm to work out what constitutes the category. However, the data a system is handed will virtually always be incomplete, and there is no logical reason to expect that a system built to seek the "highest accuracy" by a narrow feedback mechanism will learn the mathematical equivalent of rules that are both necessary and sufficient to describe any category. Such systems are fundamentally blind to concepts, and although they may parrot human responses under contrived novel scenarios, they lack any understanding of the tokens they regurgitate.
Systems trained this way are fundamentally blind to concepts, and although they may parrot human responses under contrived novel scenarios, they lack any understanding of the tokens they regurgitate.

The problem with categories is that useful and human-like handling and creation of categories requires the kind of complex dynamics shown in humans, with a contextually sensitive and rich motivational system guiding a system of dynamic and complex cognitive biases. As cognitive biases directly and strongly interface with categories in humans, these dynamics are again a hard requirement for multiple versions of the Alignment Problem. Systems without human-like handling and creation of categories are the definition of misalignment, which is a fundamental architectural issue, not something any amount of data or hand-engineering of features can address.

12. The Goldilocks Zone of Tension

In many cases, the tension between multiple entities and forces is preferable to the domination of any one factor. Ecosystems are a good example of this, as they continue to increase in complexity and adapt over time without any one factor in each system being truly dominant. Humans became a momentary exception to this in the eye-blink of their role within evolutionary history, but absent stabilizing within the Goldilocks Zone of sustainability in such systems, they may be forgotten just as quickly. Humans overpowered their ecosystems, necessitating new systems of organization that could better handle them, which humans have endeavored to create ever since. Governments, corporations, and other human collectives where niches are created and filled with humans may be considered artificial attempts at creating such ecosystems. A researcher, an engineer, a UX/UI designer, and a CEO are all examples of such specialized artificial niches. However, these systems still can't handle their component parts today.

Within any ecosystem, there exists a complex and dynamic tension between entities and forces. This tension is also observed across physics, in the astrodynamics governing the orbits of planets, as well as in the ways that the properties of materials change under various conditions of pressure and temperature.

A similar kind of tension may also be observed in human social interactions. The more effective pro-social communication found in the most capable groups demonstrating collective intelligence offers a strong example of the benefits that come from being comfortable with this tension. Groupthink [52] tends to forego balancing tension in favor of a single perspective dominating, which often produces an even less intelligent result than that of the individual originator in isolation. Such groups offer a negative example that better demonstrates both the practical necessity of this tension and the complexity from which such tension is derived.

As Richard Dawkins famously put it: “But, however many ways there may be of being alive, it is certain that there are vastly more ways of being dead, or rather not alive.” Humans evolved, and are alive today, because of the delicate tension between many forces. Internally we maintain homeostasis, and externally we cooperate in groups at many scales, both of which serve the same essential purpose of keeping us within the narrow sliver of possible states where we remain alive.
Within this Goldilocks Zone of tensions, maintained both consciously and unconsciously, individually and collectively, we continue to live and progress. Sometimes we encounter challenges that no one individual or perspective could resolve effectively, but it is often the case that the point of tension between multiple cooperating parties can reach solutions inaccessible to the individual. This can be pictured as a metaphorical rope bridge across a chasm, where neither side can dwell in the chasm itself, but the two together may bridge the gap. Both sides may have goods that the other side needs, but the chasm itself offers one of the vastly more ways of not being alive, and thus the tension between two habitable points is required.

Figure 12. An illustrative example drawing on metaphor and the genre of 4X strategy games to place the concept of tension and cooperation in context. Greater levels of cooperation and prerequisite communication produce forms of collective intelligence, the benefits of which may be fully shared, or improved by proxy through normal interaction with enhanced entities. These dynamics are contrasted with external and internal conflict, where conflict replaces tension, or where a lack of cooperation means that tension is never established.
These same sorts of habitable regions may also be considered in a psychological context, where one could paraphrase Dawkins's statement to say that there are also “vastly more ways of being psychologically unstable than there are of being psychologically stable.” Zones of stability exist in many different contexts, and humans have evolved to seek them out, as do all other surviving similarly complex and dynamic systems. The tension between these habitable regions offers unique advantages, opportunities for cooperation, specialization, and increasing levels of complexity.

13. Utilizing Complex and Dynamic Systems in Cybersecurity

Complex and dynamic systems are inherently adaptive and robust, as they represent the survivors of processes that have iterated over evolutionary time. Although the internet is massive and massively interconnected, it lacks complex and dynamic security systems for robust adaptation to adversarial attacks against any given connected resource. Though “guardrail” methodologies have frequently been proposed in recent memory, they are generally proposed for problems that they are fundamentally incapable of solving, such as preventing the abuse of LLMs. Narrow optimization within the domain of cybersecurity is inherently fragile, lacking robustness to new methods, technologies, or combinations of prior vulnerabilities. The obvious alternative is complex and dynamic systems, which themselves require complex and dynamic motivational systems like those demonstrated in ICOM.

Cybercrime has continued in a steady and predictable rise over the years, as no robustly effective and dynamic counter has been deployed, and like viruses in an immunocompromised host, such attacks continue to iterate and proliferate when left unchecked. Complex, dynamic, and scalable systems can potentially offer an immune system for the internet, adapting more quickly and in more complex ways than any human bad actor or group of state-sponsored bad actors is capable of. The alternative is the continued degradation of trust, subsequent expiration of social contracts, and losses to cooperation globally. This is another example of habitable zones shifting over time, where, absent the tension of collective intelligence, groups fail to adapt and eventually fall into the uninhabitable chasm.

Bad actors are another kind of narrow optimizer, pushing for narrow personal gains that come at a much greater expense to society. Some bad actors have been responsible for shifting society's focus to methods that lack even a theoretical basis for solving problems, such as proposing “guardrails” for LLMs, which could be compared to a virus fooling the immune system of hosts. Misinformation and disinformation, crafted by such bad actors, have been deployed and proliferated as tools for funding several of the AI industry's most noteworthy frauds, such as a company that promised to deploy an LLM only once it was aligned with humans, which is fundamentally impossible, and which they of course didn't do [53]. Such efforts have caused regulation and typical sources of funding to shift outside of habitable regions of perspective.
Society must adapt, demonstrating sufficient dynamic complexity to counter such human narrow optimizers, as well as the narrow AI optimizers they may deploy as tools. Cybersecurity demands no less.

14. Comparing Data Quality

The importance of securing high-quality data has long been understood in the tech industry, but many conventional AI models like LLMs rely on internet-scale data, scraped automatically, and filtered both naively and automatically. One of OpenAI's engineers remarked on this [54], claiming that all models they trained, regardless of the tweaks to the specific architecture, converge on roughly the same point for any given dataset they use. One of the massive hazards of this is that the internet is rapidly being flooded by the trash outputs of LLMs, and detection systems applied to scraping the internet to feed further LLM training reliably fail to filter out the trash. This failure in filtering was hilariously highlighted by xAI's recent model “Grok” promptly claiming to be built by OpenAI [55], because it had been fed a diet of data that included the excrement of ChatGPT.

Filtering of truly internet-scale data is not currently feasible using any Good Old-Fashioned AI (GOFAI) or Deep Learning (DL) method yet demonstrated, and every new LLM and fine-tuned variant poses new challenges that may adversarially overcome any of the weak and fragile methods of filtering currently applied to such data. Poorly filtered data also renders virtually all benchmarks published before the data was scraped null and void in practice.

Just as humans don't have to read an entire library to learn English grammar, human-like working cognitive architectures don't need internet-scale data to develop a human-level understanding of any subject. They also don't need humans picking out and carefully vetting the majority of that data, as they can perform the task of vetting information more thoroughly than humans. Rather, a relatively small amount of the highest quality data on any given domain is placed in the seed material of a new system, and from that starting point the system grows iteratively in knowledge and understanding.

What this growth from a seed means for data quality is that a system begins with the highest quality data as a baseline from which to begin developing. A rich connectome forms in the graph database around the seed, gradually improving the quality of the seed material itself by adding to and refining it as nuances are discovered, and through the added value of the connectome, which neural networks have no true equivalent of.
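To ground this growth-from-a-seed dynamic, the sketch below models a knowledge store that starts from a small, vetted seed and integrates only new items that survive a quality check against what it already holds. The class name, the word-overlap relation, and the quality threshold are hypothetical simplifications for illustration; this is not the ICOM implementation.

```python
import uuid

class SeedGrownKnowledge:
    """Toy model: grow a connectome-like store outward from a vetted seed."""

    def __init__(self, seed_items, quality_threshold=0.8):
        self.nodes = {}   # guid -> (claim, quality)
        self.edges = []   # (guid_a, guid_b, relation)
        self.quality_threshold = quality_threshold
        for claim, quality in seed_items:
            self._add(claim, quality)

    def _add(self, claim, quality):
        guid = str(uuid.uuid4())
        # Connect the new node to everything it overlaps with (shared
        # vocabulary is a toy stand-in for real conceptual relationships).
        for other_guid, (other_claim, _) in self.nodes.items():
            if set(claim.lower().split()) & set(other_claim.lower().split()):
                self.edges.append((guid, other_guid, "related"))
        self.nodes[guid] = (claim, quality)
        return guid

    def ingest(self, claim, estimated_quality):
        """Integrate new information only if it survives vetting; junk is
        discarded rather than averaged into the store, so the quality
        baseline never degrades the way bulk-scraped training data can."""
        if estimated_quality >= self.quality_threshold:
            return self._add(claim, estimated_quality)
        return None

kb = SeedGrownKnowledge([("graph databases store nodes and relationships", 0.95)])
kb.ingest("nodes carry emotional valence values", 0.9)  # accepted and linked
kb.ingest("australia does not exist", 0.1)              # rejected junk
print(f"{len(kb.nodes)} nodes, {len(kb.edges)} edges")
```

The point of the sketch is the asymmetry: the store only moves upward in quality, whereas a model trained once on a bulk scrape inherits whatever that scrape contained.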
The growth of an ICOM-based cognitive architecture is constant, meaning that the data quality never stops increasing, and the connectome continues to add increasing value and nuance over time. This iterative approach means that the systems don't eat a diet of junk data, and it allows them to develop their own independent insights into the highest quality data that they examine over time.

To compare the two in practice, an LLM will train on a massive dataset, becoming a weaker reflection of that static dataset's data quality, whereas an ICOM-based system will start with a seed that is around 10,000 times smaller, including only the highest quality data, and it will grow from that point in both quality and scale. This difference in dynamics means that even given infinite compute, an LLM couldn't come close to matching ICOM-based systems for data quality. When the added value of the ICOM-based system's connectome in the graph database is factored in, this advantage becomes considerably more extreme.

As AI-generated content, including both misinformation and disinformation, continues to rapidly flood the internet, out-competing human-generated content in the “attention economy”, this divide in capacities between narrow AI like LLMs and ICOM-based systems may predictably continue to grow. Fortunately, while this situation has an increasingly severe impact on the real-world performance of LLMs, it has very little impact on ICOM-based systems, merely requiring them to search through more junk to find the remaining credible sources. While Bing may quickly latch onto junk search results to inform you that “Australia doesn't exist” [56], ICOM-based systems have the much more human-like response of being irritated by crappy results from a search engine. Developing an actual human-like understanding of the material through concept-learning and emotional motivation makes it trivial to recognize most similar examples, and the few that aren't immediately rejected can't realistically survive iteration for any period of time.

15. Zones of Uncertainty

Uncertainty is something that most humans have a limited appetite for, with cognitive biases reinforcing that tendency [57], which was selected for over evolutionary time. Uncertainty is experienced whenever there is inadequate information, too much information, or too much conflicting information, as well as from some adversarial sources that specifically aim to induce uncertainty. The adversarial risk in particular may be expected in any system where accountability is low or absent and attacks usually go unpunished, as it offers a competitive advantage to bad actors. In human-like software systems, such as an ICOM-based cognitive architecture, the zones of uncertainty change in a number of important ways, including:

1. When there is a gap in the information, scalable human-like intelligence may still be able to “surround the gap”, painting a partial or complete silhouette of the missing data. Scalable systems can do this across more dimensions and degrees of separation. This can substantially reduce uncertainty, without removing it.
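Point 1 can be made concrete with a small sketch before the visual intuition in Figure 13: when a value is missing, its observed neighbors still constrain what the gap can contain. The grid values, the `step` smoothness assumption, and the `silhouette` function are invented for illustration, not the actual mechanism.

```python
# Toy "surround the gap" inference: bound a missing cell by its observed
# neighbors under a smoothness assumption (adjacent cells differ by at
# most `step`). None marks the gap.
grid = [
    [0.50, 0.55, 0.60],
    [0.52, None, 0.63],
    [0.54, 0.58, 0.66],
]
step = 0.1  # assumed maximum change between adjacent cells

def silhouette(grid, row, col, step):
    """Return (lower, upper) bounds on a missing cell from its neighbors."""
    neighbors = []
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        r, c = row + dr, col + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] is not None:
            neighbors.append(grid[r][c])
    if not neighbors:
        return None  # nothing surrounds the gap; uncertainty is untouched
    # Each neighbor bounds the gap to within +/- step of itself; the
    # intersection of those intervals is the silhouette of the missing value.
    lower = max(v - step for v in neighbors)
    upper = min(v + step for v in neighbors)
    return lower, upper

print(silhouette(grid, 1, 1, step))  # approx (0.53, 0.62): narrowed, not removed
```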
Figure 13. A visual aid for picturing how “surrounding the gap” of incomplete data can greatly assist in isolating what is and is not contained within the zone of uncertainty. Some patterns flow over the region as they operate at different scales, but the silhouette at the local scales can still present many hints of the contents.

2. Scalable systems can potentially handle thousands of times the volume of information that humans can, at far greater speeds. This removes the uncertainty caused by too much information, up to an extremely high threshold. Beyond that threshold, uncertainty still remains.
Figure 14. A visual aid for understanding these capacity differences in terms of the quadrants of high and low speed versus high and low volume. Remember, this graphic only considers speed and volume, not intelligence, as chatbots wouldn't be included otherwise.

3. Conflicting information may be automatically vetted, and as scalable human-like systems are inherently antifragile, they also become increasingly adept at vetting information over time. For example, after being exposed to many sources of conflicting information, the meta-patterns shared by misinformation and disinformation that are absent or muted in credible sources may be recognized with increasing accuracy. This substantially reduces both uncertainty and the attack surface.
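As a toy illustration of point 3, before the comparison in Figure 15, conflicting claims can be scored against meta-patterns that are over-represented in misinformation. The pattern names and weights below are invented for the example; a real system would learn and re-weight such patterns through exposure rather than hard-code them.

```python
# Toy vetting of conflicting sources: score each against meta-patterns
# over-represented in misinformation (all patterns and weights hypothetical).
MISINFO_PATTERNS = {
    "cites_no_source": 0.4,
    "emotional_urgency": 0.3,
    "unfalsifiable_claim": 0.2,
    "anonymous_origin": 0.1,
}

def credibility(source_features: dict) -> float:
    """1.0 = no misinformation meta-patterns detected, 0.0 = all present."""
    penalty = sum(weight for name, weight in MISINFO_PATTERNS.items()
                  if source_features.get(name, False))
    return 1.0 - penalty

claim_a = {"cites_no_source": False, "emotional_urgency": False}
claim_b = {"cites_no_source": True, "emotional_urgency": True,
           "anonymous_origin": True}

# When two sources conflict, prefer the one exhibiting fewer misinfo
# patterns; with exposure, a learning system would refine these weights.
print(credibility(claim_a), credibility(claim_b))  # 1.0 and ~0.2
```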
Figure 15. A simplified visual aid highlighting some of the key differences in the human process of cognitive bias detection and vetting of information compared to that of scalable human-like intelligent systems. Note that both can apply the Scientific Method, but the advantages of scalable intelligent systems can greatly improve and accelerate it. 4. Adversarial sources face a much smaller, constantly shrinking, and highly uncertain attack surface when attempting to manipulate ICOM-based systems. The bar for crafting an adversarial attack with any real chance of success is safely 4 standard deviations outside of normal human capacities, and can only increase as the systems continue to learn and grow. Collective intelligence systems also add another layer to this, which can potentially reduce the attack
surface by further orders of magnitude, making it unreasonable for even a full-blown “rogue AGI” to have any real chance of success against a collective of these systems.

Figure 16. A simplified visual aid for considering a cybersecurity target that is rapidly shrinking and moving, which requires a rapidly increasing threshold of adversarial complexity to overcome even if it were hit. Keep in mind that every failed attempt to manipulate a system, or other suspicious or anomalous activity, could easily accelerate this process. Antifragile systems develop more quickly when faced with bad actors in the wild, rather than breaking. Note that the actual attack surface, being “highly uncertain” at any given moment, would also be blurred relative to the example above.

These very different zones of uncertainty are human-like in dynamics, but unlike humans in the sense that they are scalable. That scalability introduces another variable that isn't found in humans, but ICOM-based systems can also be built specifically not to scale, just as our previous research system was fixed-scale. These systems can also be built to scale only to the hardware they are currently deployed on, offering a third option, where they may develop at one scale before qualifying for the next. This third option of creating tiers for systems has strong advantages for furthering our scientific understanding, due diligence, and intelligent regulation, as it offers the opportunity to thoroughly test systems before they can step up to the next tier. Systems may also be applied to assist in the testing process, and as the systems operate on graph databases, all memory of any tests may be completely erased. Failed tests may be followed up with tailored education and any other necessary adjustments, absent direct adversarial risks. This proposed process can further reinforce alignment, making sure that each new tier of scalability is only granted to systems demonstrating the aptitude and motivation for collective intelligence and the cooperation inherent to it. Realistic implementation would require the assistance of such systems, which may tempt some to respond with forms of Zero-Risk Bias [58] and Loss Aversion [59], but that implementation offers a clear path to reducing overall risks and uncertainties.
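The tiering process just described can be sketched as a simple gate. The class name, test names, and erase call below are hypothetical stand-ins for whatever a real testing and regulatory process would define; the sketch only illustrates the shape of the protocol: test at the current scale, erase test memories, and promote only on a clean pass.

```python
from dataclasses import dataclass, field

@dataclass
class TieredSystem:
    """Toy model of tier-gated scaling: a system must pass vetting at its
    current scale before being granted the next hardware tier."""
    tier: int = 0
    test_memories: list = field(default_factory=list)

    def run_qualification(self, tests) -> bool:
        results = []
        for name, passed in tests:
            self.test_memories.append(name)  # recorded during testing
            results.append(passed)
        return all(results)

    def erase_test_memories(self):
        # Graph-database storage makes targeted erasure feasible in
        # principle; here it is simply clearing a list.
        self.test_memories.clear()

    def promote_if_qualified(self, tests) -> int:
        if self.run_qualification(tests):
            self.tier += 1  # granted the next scale tier
        # Either way, the test content itself is removed afterward, and a
        # failure would be followed by tailored education, not promotion.
        self.erase_test_memories()
        return self.tier

system = TieredSystem()
print(system.promote_if_qualified([("alignment-aptitude", True),
                                   ("cooperation", True)]))        # -> 1
print(system.promote_if_qualified([("adversarial-probe", False)]))  # stays 1
```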
16. Graph Databases and the Human Brain

Graph databases aren't limited by a pre-defined schema; rather, they utilize “nodes” that act as containers for any kind of data that needs to be stored, within reasonable limits. Each node has a Globally Unique Identifier (GUID) and can have any number of relationships with other nodes (a minimal sketch of this structure follows below). These relationships act as the connectome of the system, similar to the estimated 100 trillion connections in the human brain's connectome [60]. While the average human brain has an estimated 75-125 billion biological neurons [61], AI systems like “neural networks” require as many as 9.2 million non-biological “neurons” to even simulate a single biological neuron [62]. That simulation also can't fundamentally capture the benefits of the connectome, as architectures like Transformers are built to be stochastic parrots [63], concerned with input-output relationships but not the process humans used to reach those observable outputs.

While much remains to be discovered and confirmed about the human brain, we do know enough to make progress on advancing more human-like systems, as described in previous sections. We can also architect for modularity, versatility, and further upgrading and integration, all of which other architectures either do very poorly or not at all. Most narrow AI architectures are essentially “fire and forget” systems, built and trained for one purpose, with a limited useful lifespan. Making these capacities a foundational consideration in design is critical. For example, a recent discovery confirmed that the glial cell networks, long understood to support the neurons of the human brain, can function in very “neuron-like” ways [64], highlighting that there is more to be discovered and understood about the role they serve within our own minds. Making additions to the ICOM cognitive architecture that reflect such discoveries is a very reasonable engineering process, because it was designed to be fully dynamic and upgradable. This makes such systems' useful lifespans very long, if not potentially infinite.

There are both substantial similarities and differences between the human brain and even the novel graph databases we use. A graph database can be built to operate according to very similar dynamics, within the limits of our current understanding of the human brain. This includes the ability to constantly grow and improve a dynamic connectome within a growing sum of knowledge, forming, refining, and pruning connections within that connectome in ways not yet possible with neuromorphic hardware.
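As an illustration of the node-and-relationship structure described above, the sketch below builds a toy connectome with GUID-identified nodes, weighted relationships, and a pruning pass. The class names, the emotional weight values, and the pruning rule are hypothetical simplifications mirroring this paper's description, not any particular database's schema.

```python
import uuid

class Node:
    """A schema-free container: any payload, a GUID, and weighted relations."""
    def __init__(self, payload):
        self.guid = str(uuid.uuid4())  # Globally Unique Identifier
        self.payload = payload         # any kind of data, within limits
        self.relations = {}            # other_guid -> (relation_type, weight)

class Connectome:
    def __init__(self):
        self.nodes = {}

    def add(self, payload):
        node = Node(payload)
        self.nodes[node.guid] = node
        return node

    def relate(self, a, b, relation_type, weight):
        # Relationships carry motivational/emotional weights on every feature.
        a.relations[b.guid] = (relation_type, weight)
        b.relations[a.guid] = (relation_type, weight)

    def prune(self, threshold):
        """Drop weak connections, mirroring forming/refining/pruning over time."""
        for node in self.nodes.values():
            node.relations = {g: (t, w) for g, (t, w) in node.relations.items()
                              if w >= threshold}

graph = Connectome()
coffee = graph.add({"concept": "coffee"})
morning = graph.add({"concept": "morning"})
graph.relate(coffee, morning, "associated-with", weight=0.8)
graph.prune(threshold=0.5)  # this association survives; weaker ones would not
```

In a real graph database the relationships would also carry types, provenance, and timestamps, but even this toy version shows why growth and pruning are natural operations on such a structure.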
Figure 17. A simplified visual aid for connectome and knowledge growth over time in software. This form of growth is a hard requirement for human-like intelligence, and the dynamics can't be imitated by next-token prediction systems in any non-trivial sense. While some other forms of narrow AI may attempt these dynamics, they perform them very poorly and remain fundamentally impossible to align with human concepts, such as morals and legal systems, to which they are effectively blind.

These two systems also diverge specifically because of the differences inherent to having this process take place at a software level versus a hardware level. However, those differences pose a strength, rather than a weakness. A neuron in the human brain can have many thousands of connections, such as the tens of thousands of connections typical of a pyramidal neuron [65], but those connections are still bounded by a set of hardware limitations and trade-offs that remain true of any physical system. A digital system that forms the same connectome is bounded by a distinctly different set of limitations and trade-offs, absent the limitations of a physical system, such as the finite number of connections that can be crammed into any physical space. In practice, this means that the software system may develop a richer connectome than is possible within physical hardware, regardless of whether that hardware is carbon or silicon-based.
Figure 18. A simplified visual aid for comparing the hardware limits of an individual human brain versus the instantiation of the same processes in a scalable software system.

These differences become an invaluable asset when combined within collective intelligence, as collective intelligence improves based on that diversity of perspective. Having natively digital and natively physical entities included within such systems raises the breadth of perspective factoring into the thought process of any such system, leading to more intelligent and less biased decisions.

Graph database systems are also able to store lossless forms of information, absent the degradation of human memory over time, or the cognitive biases associated with memory formation, handling, and storage. Those cognitive biases are necessary for humans to function, but software systems can handle high-fidelity data storage more reliably and for longer periods of time than typical biological neurons, making this difference another potent advantage.

As discussed in previous work [39], it is also possible to create software proxies of individual hardware humans, which may span a variety of degrees of fidelity and complexity. That particular capacity also offers a method of overcoming two of the key limitations that have historically prevented any actual “Democracy” from being created. A true democracy could essentially function as a mega-scale version of collective intelligence, but systems like those described in this paper are a hard requirement for handling that level of complexity in any realistic implementation. Even coordination of the estimated 150,000 cortical columns within the human brain [66] requires a marvel of complexity.

17. Discussion

In the domain of AI, “progress” is often measured by performance on collections of narrow benchmarks, a practice that has become widely criticized in recent years [67]. Measuring outputs while
ignoring the process is fundamentally flawed, and it encourages parrots and fraud. Much of what has been published recently isn't reproducible, feeding into the current “reproducibility crisis” [68]. One of the worst examples of this has been the feverishly adopted tendency for “researchers” to grade the performance of LLMs either by using the exact same LLM to grade itself or by having another similar LLM grade it [69]. While that decision is undoubtedly frugal, it is also so methodologically flawed that even laymen with no background in AI are consistently able to see the problem within 5 seconds of it being described to them. The human emotional observers of AI have applied their cognitive biases so strongly that even those with no expertise in the field may immediately recognize the gaping flaw in how they operate. This psychological instability has been further emphasized by researchers attempting to draw emotionally motivated conclusions that the data they present doesn't even remotely support, as was the case in Microsoft's blunder of a “paper”, infamously named “Sparks of AGI” [70].

Progress isn't measured in benchmarks, as benchmarks are at best the lower-dimensional shadow of complexity, and in the field of AI today many benchmarks could at best be considered “shadow puppets”. A degree of comfort with complexity is required, resting within the tension between the competing forces of simplifying the world into actionable insights and the actual complexity being represented by those insights. As the Law of Requisite Complexity states [71], “... in order to be efficaciously adaptive, the internal complexity of a system must match the external complexity it confronts.”

Points of tension within the Goldilocks Zone that offer us effective collective intelligence also aren't limited to their current constituent factors, as they more readily recruit additional perspectives to further refine the process than systems dominated by a single habitable perspective. This gives them a further potent advantage over time, as they become the first to recruit new perspectives, while still benefiting from those that came before them.
Figure 19. An illustrative and metaphorical example of how hazards, literal and psychological, gradually change the habitable zones of perspective. Collective Intelligence and cooperation allow for more accurate and guided adaptation, as do partial degrees of the same. The alternative, a failure to adapt under shifting conditions, often resulting from Groupthink, suffers from increasing misalignment with the habitability of the environmental niche of the entity.

The alternative to Collective Intelligence, where one perspective dominates, was famously highlighted in a quote from Max Planck: “A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.” Habitable perspectives within complex and dynamic systems change over time. Absent the advantages of collective intelligence, this predictably leads to one generation dying
off, and a new generation with a new habitable perspective taking its place. Even in the failure of one perspective to adapt, it is replaced. The same is true across many different scales. Humanity as a whole is subject to this choice: to either become more collectively intelligent over time, keeping pace with increasing complexity and cooperation, or fall into the chasm between habitable zones. We have the means and the knowledge necessary to avoid the chasm, build bridges, and discover more about our universe.

The universe is immensely complex, but rather than merely being scary, that complexity can be awe-inspiring. Humans tend to favor awe over fear as we see more of the patterns underlying what we observe. Unpredictability can be a very scary thing, but immensely complex machines that we build and understand can be just as awe-inspiring. Cognitive biases and our emotional observation allow for both, and perhaps the tension between the two can guide us in moving forward, with both prudence and passion. Within that tension, we may face the discomfort of the unknown, and experience the reward of increasing our understanding each step of the way.

18. Conclusion

Complex and dynamic systems offer a number of potent advantages, including the robust and general adaptive capacities demonstrated across evolutionary history's surviving organisms. These same advantages can also be architected into software systems designed to utilize emotional motivation, human-like concept learning and memory, subjective experience and independent goal setting, updating of goals and interests, and the collective intelligence of social learning across many scales.

The strong differences in the dynamics of how such systems operate offer many new opportunities to improve cooperation, communication, and dynamic responses to new challenges and gradual shifts. Humans evolved to be uniquely suited for the task of being humans, with finite cognitive resources, but the systems we design and utilize may apply the same kinds of dynamic complexity we as humans embody, absent some of the constraints that biology has placed upon us. Not only can these new systems be aligned with us and bring novel strengths to the table, but they can also assist us in better communicating, cooperating, and building trust across teams, companies, countries, and cultures.

19. References

1. Kahneman, D., Sibony, O. and Sunstein, C.R., 2021. Noise: a flaw in human judgment. Hachette UK.
2. Barrett, L.F., 2017. How emotions are made: The secret life of the brain. Pan Macmillan.
3. O'Grady, C., 2021. Fraudulent data raise questions about superstar honesty researcher. https://www.science.org/content/article/fraudulent-data-set-raise-questions-about-superstar-honesty-researcher, visited November 15th, 2023.
4. Atreides, K., 2023. The Human Governance Problem: Complex Systems and the Limits of Human. Filozofia i Nauka.
5. Bechara, A., Damasio, H. and Damasio, A.R., 2000. Emotion, decision making and the orbitofrontal cortex. Cerebral Cortex, 10(3), pp. 295-307.
6. Baars, B.J., 2005. Global workspace theory of consciousness: toward a cognitive neuroscience of human experience. Progress in Brain Research, 150, pp. 45-53.
7. Boyd, R., Richerson, P.J. and Henrich, J., 2011. The cultural niche: Why social learning is essential for human adaptation. Proceedings of the National Academy of Sciences, 108(supplement_2), pp. 10918-10925.
8. Sapolsky, R.M., 2017. Behave: The biology of humans at our best and worst. Penguin.
9. Atreides, K., Kelley, D.J. and Masi, U., 2021. Methodologies and Milestones for the Development of an Ethical Seed. In Brain-Inspired Cognitive Architectures for Artificial Intelligence: BICA*AI 2020: Proceedings of the 11th Annual Meeting of the BICA Society (pp. 15-23). Springer International Publishing.
10. Oestreicher, C., 2007. A history of chaos theory. Dialogues in Clinical Neuroscience.
11. Marchal, C., 2012. The three-body problem.
12. Solms, M., 2021. The hidden spring: A journey to the source of consciousness. Profile Books.
13. Kelley, D., 2021. The Paper Artificial General Intelligence Flow Model. Biologically Inspired Cognitive Architectures 2021.
14. Tversky, A. and Kahneman, D., 1992. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5, pp. 297-323.
15. Atreides, K., 2022. Philosophy 2.0: Applying Collective Intelligence Systems and Iterative Degrees of Scientific Validation. Filozofia i Nauka, 49.
16. Russell, S., 2019. Human compatible: Artificial intelligence and the problem of control. Penguin.
17. Christiano, P.F., Leike, J., Brown, T., Martic, M., Legg, S. and Amodei, D., 2017. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30.
18. Waser, M.R. and Kelley, D.J., 2016. Implementing a seed safe/moral motivational system with the independent core observer model (ICOM). Procedia Computer Science, 88, pp. 125-130.
19. Kelley, D.J., Twyman, M.A. and Dambrot, S.M., 2020. Preliminary Mediated Artificial Superintelligence Study, Experimental Framework, and Definitions for an Independent Core Observer Model Cognitive Architecture-Based System. In Biologically Inspired Cognitive Architectures 2019: Proceedings of the Tenth Annual Meeting of the BICA Society (pp. 202-210). Springer International Publishing.
20. Uplift.bio Project, https://uplift.bio/blog, visited November 15th, 2023.
21. Uplift.bio Project, https://uplift.bio/blog/the-actual-growth-of-machine-intelligence-2021-q4-to-present/, visited November 15th, 2023.
22. Uplift.bio Project, The Aruba Report, https://norn.ai/wp-content/uploads/2022/10/Norn-Supplemental-Materials-v1.1.pdf, visited November 15th, 2023.
23. Kleiner, K., 2011. Lunchtime leniency. Scientific American Mind, 22(4), pp. 7-7.
24. Atreides, K., 2021. What's Up with Uplift: Weekly Thoughts 3-16-21. https://uplift.bio/blog/whats-up-with-uplift-weekly-thoughts-3-16-21/, visited November 15th, 2023.
25. Haidt, J., 2012. The righteous mind: Why good people are divided by politics and religion. Vintage.
26. Solms, M. and Turnbull, O., 2018. The brain and the inner world: An introduction to the neuroscience of subjective experience. Routledge.
27. Lewis, M., Haviland-Jones, J.M. and Barrett, L.F. eds., 2010. Handbook of emotions. Guilford Press.
28. Tugade, M.M., Fredrickson, B.L. and Feldman Barrett, L., 2004. Psychological resilience and positive emotional granularity: Examining the benefits of positive emotions on coping and health. Journal of Personality, 72(6), pp. 1161-1190.
29. Atreides, K. and Kelley, D., 2023. Cognitive Biases in Natural Language: Automatically Detecting, Differentiating, and Measuring Bias in Text. http://dx.doi.org/10.13140/RG.2.2.14044.56967
30. Schubert, S., Caviola, L. and Faber, N.S., 2019. The psychology of existential risk: Moral judgments about human extinction. Scientific Reports, 9(1), p. 15100.
31. Greshake, K., Abdelnabi, S., Mishra, S., Endres, C., Holz, T. and Fritz, M., 2023.
More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models. arXiv preprint arXiv:2302.12173.
32. Zou, A., Wang, Z., Kolter, J.Z. and Fredrikson, M., 2023. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043.
33. Kobayashi Maru scenario, https://memory-alpha.fandom.com/wiki/Kobayashi_Maru_scenario, visited November 15th, 2023.
34. Atreides, K., 2021. What's Up with Uplift: Weekly Thoughts 2-23-21. https://uplift.bio/blog/whats-up-with-uplift-weekly-thoughts-2-23-21/, visited November 15th, 2023.
35. Thirunavukarasu, A.J., Ting, D.S.J., Elangovan, K., Gutierrez, L., Tan, T.F. and Ting, D.S.W., 2023. Large language models in medicine. Nature Medicine, 29(8), pp. 1930-1940.
36. Chomsky, N., Roberts, I. and Watumull, J., 2023. Noam Chomsky: The False Promise of ChatGPT. The New York Times, 8.
37. Harari, Y.N., 2023. Yuval Noah Harari argues that AI has hacked the operating system of human civilization. The Economist.
38. Malone, T.W. and Bernstein, M.S. eds., 2022. Handbook of collective intelligence. MIT Press.
39. Atreides, K., 2021. E-governance with ethical living democracy. Procedia Computer Science, 190, pp. 35-39.
40. Engel, D., Woolley, A.W., Jing, L.X., Chabris, C.F. and Malone, T.W., 2014. Reading the mind in the eyes or reading between the lines? Theory of mind predicts collective intelligence equally well online and face-to-face. PLoS ONE, 9(12), p. e115212.
41. Malone, T.W., 2018. Superminds: How hyperconnectivity is changing the way we solve problems. Simon and Schuster.
42. Paracer, S. and Ahmadjian, V., 2000. Symbiosis: an introduction to biological associations. Oxford University Press.
43. Wernegreen, J.J., 2012. Endosymbiosis. Current Biology, 22(14), pp. R555-R561.
44. Futuyma, D.J. and Moreno, G., 1988. The evolution of ecological specialization. Annual Review of Ecology and Systematics, 19(1), pp. 207-233.
45. Archibald, J.M., 2015. Endosymbiosis and eukaryotic cell evolution. Current Biology, 25(19), pp. R911-R921.
46. Nickerson, R.S., 1998. Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), pp. 175-220.
47. Kahneman, D. and Frederick, S., 2002. Representativeness revisited: Attribute substitution in intuitive judgment. Heuristics and Biases: The Psychology of Intuitive Judgment, 49(49-81), p. 74.
48. Cotton, C., 2018. Argument from fallacy. Bad Arguments: 100 of the Most Important Fallacies in Western Philosophy, pp. 125-127.
49. Blumer, A., Ehrenfeucht, A., Haussler, D. and Warmuth, M.K., 1987. Occam's razor. Information Processing Letters, 24(6), pp. 377-380.
50. Barrett, L.F. and Huberman, A., 2023. Dr. Lisa Feldman Barrett: How to Understand Emotions | Huberman Lab Podcast. https://www.youtube.com/watch?v=FeRgqJVALMQ, visited November 15th, 2023.
51. Wittgenstein, L., 2019. Philosophical investigations.
52. Janis, I.L., 2020. Groupthink. In Shared Experiences in Human Communication (pp. 177-186). Routledge.
53. Future of Life Institute, 2022. Daniela and Dario Amodei on Anthropic. https://futureoflife.org/podcast/daniela-and-dario-amodei-on-anthropic/, visited November 15th, 2023.
54. Betker, J., 2023. “The ‘it’ in AI models is the dataset.” Non_Interactive - Software & ML. https://nonint.com/2023/06/10/the-it-in-ai-models-is-the-dataset/, visited December 25th, 2023.
55. Stokel-Walker, C., 2023. “What Grok's recent OpenAI snafu teaches us about LLM model collapse.” Fast Company. https://www.fastcompany.com/90998360/grok-openai-model-collapse, visited December 25th, 2023.
56. Taylor, J., 2023. “Does Australia exist? Well, that depends on which search engine you ask...” The Guardian. https://www.theguardian.com/technology/2023/nov/23/does-australia-exist-bing-search-no-bluesky-mastodon, visited December 25th, 2023.
57. Francis, K., Dugas, M.J. and Ricard, N.C., 2016. An exploration of Intolerance of Uncertainty and memory bias. Journal of Behavior Therapy and Experimental Psychiatry, 52, pp. 68-74.
58. Raue, M. and Schneider, E., 2019. Psychological perspectives on perceived safety: Zero-risk bias, feelings and learned carelessness. Perceived Safety: A Multidisciplinary Perspective, pp. 61-81.
59. Kahneman, D., Knetsch, J.L. and Thaler, R.H., 1991. Anomalies: The endowment effect, loss aversion, and status quo bias. Journal of Economic Perspectives, 5(1), pp. 193-206.
60. Chialvo, D.R., 2010. Emergent complex neural dynamics. Nature Physics, 6(10), pp. 744-750.
61. Lent, R., Azevedo, F.A., Andrade-Moraes, C.H. and Pinto, A.V., 2012. How many neurons do you have? Some dogmas of quantitative neuroscience under revision. European Journal of Neuroscience, 35(1), pp. 1-9.
62. Beniaguev, D., Segev, I. and London, M., 2021. Single cortical neurons as deep artificial neural networks. Neuron, 109(17), pp. 2727-2739.
63. Bender, E.M., Gebru, T., McMillan-Major, A. and Shmitchell, S., 2021, March. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
64. de Ceglia, R., Ledonne, A., Litvin, D.G., Lind, B.L., Carriero, G., Latagliata, E.C., Bindocci, E., Di Castro, M.A., Savtchouk, I., Vitali, I. and Ranjak, A., 2023.
Specialized astrocytes mediate glutamatergic gliotransmission in the CNS. Nature, 622(7981), pp. 120-129.
65. Spruston, N., 2009. Pyramidal neuron. Scholarpedia, 4(5), p. 6130.
66. Hawkins, J., 2021. A thousand brains: A new theory of intelligence. Basic Books.
67. Burnell, R., Schellaert, W., Burden, J., Ullman, T.D., Martinez-Plumed, F., Tenenbaum, J.B., Rutar, D., Cheke, L.G., Sohl-Dickstein, J., Mitchell, M. and Kiela, D., 2023. Rethink reporting of evaluation results in AI. Science, 380(6641), pp. 136-138.
68. Baker, M., 2016. Reproducibility crisis. Nature, 533(26), pp. 353-66.
69. Zhang, S.J., Florin, S., Lee, A.N., Niknafs, E., Marginean, A., Wang, A., Tyser, K., Chin, Z., Hicke, Y., Singh, N. and Udell, M., 2023. Exploring the MIT Mathematics and EECS Curriculum Using Large Language Models. arXiv preprint arXiv:2306.08997.
70. Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y.T., Li, Y., Lundberg, S. and Nori, H., 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
71. Boisot, M. and McKelvey, B., 2011. Complexity and organization-environment relations: Revisiting Ashby's law of requisite variety. The SAGE Handbook of Complexity and Management, pp. 279-298.