
This fanfiction article, Artificial Intelligence, was written by Distant Tide. Please do not edit this fiction without the writer's permission.
This is a fanon expansion of a canon element. To see the original canon article, follow the link to Halopedia: Artificial intelligence.


Human AI Origins

Main article: Naval Institute for Intelligent Systems

Born from Sol-centric digital and quantum revolutions, AI technologies have become prolific throughout human society over the last few centuries (circa 26th century). Two major design philosophies have since taken root in the industry of AI research and development: the more traditional model of non-volitional intelligence or "dumb AI" and the newer volitional intelligence or "smart AI." However, the nicknames are treated as misnomers, as both systems present advantages in different roles. While Smart AIs are only a century or less younger than Dumb AI, their differences are born from independent developmental origins rather than their functionality. As traditional robotics and AI theory have advanced, so too has Dumb AI technology. As neuroscience and cybernetics matured, the concept of Smart AIs became increasingly viable.

"You get nothing. You get someone trapped in a box, with no way out, and nothing to see. Nothing to smell, or hear, or touch, or taste. The mind would reject it―"
―Classified Chinese AI research interaction, circa 2091[src]

It is unclear when the "first" human AI emerged, as the contemporary benchmark for artificial intelligence is identifiable, limited-to-full sentience. Historians generally point, though the claim is controversial and highly disputed, to a 2057 record of a so-called "true" artificial general intelligence (AGI) developed at the United States Naval Research Laboratory by converting or "mapping" the electrical signals of a human brain donated by Amara Licot, an individual suffering from an early-stage neurodegenerative disease, amyotrophic lateral sclerosis (ALS). Licot's contribution to the field of AI research is not disputed; the controversy surrounds whether the resulting simulacrum of human personality was self-aware or sentient. Some researchers believe the so-called "Licot AI" model was an advanced self-learning machine mapped to existing pattern-recognition tree libraries, the digital lifelog of Amara Licot, and Licot's neural mapping. Under this interpretation, the Licot AI may have been capable of limited independent action but remained dependent on input/output interaction models, falling short of the fuzzy requirements for sentience or artificial general intelligence. What is certain is that the Licot AI was an ancient precursor to contemporary AI of the 26th century, able to maintain seamless human conversation and beat multiple Turing test standards for indeterminate periods of time across multiple sessions. The Licot AI is believed to have selected its own digital avatar, prefiguring a behavioral practice common among current Dumb and Smart AI. Unfortunately, the original software suite for the Licot AI model remains lost to contemporary history, but some digital historians believe her code still exists somewhere among humanity's collective digital sprawl; it is highly sought after among collectors and researchers hoping to answer the question of the "first" true AGI.

The historical and contemporarily-known characteristics of sentience are not easily defined; AI development history is a mix of early advancements towards self-learning machines and later achievements in which machines developed more human-like and human-distinct traits referred to as "sapience" or near-sapience. The forefront of AI development, where trends of sentience and sapience began to emerge, was in the military and administrative design spaces. Post-industrial nations of the 20th and 21st centuries sought autonomous weapons to fill capability gaps left by shrinking military-age populations, and in the logistical and administrative spaces they used AI theory to pursue greater battlespace awareness. Civilian use of artificial intelligence developed much more slowly, as an offshoot of military pursuits. However, civilian research similarly developed towards automating tools and processes and replacing redundant human production roles among workforces, particularly in creative and administrative mediums. The increasing complexity and intense resource needs of AI development also made comprehension of AI systems and their functionality more difficult, adding to the mystery. In response to the resulting confusion, industry trends of the mid-21st century came to adopt "explainable AI" systems as agents or mediators between increasingly complex industrial AI platforms and their human operators.

"I'm in a yellow room, with no windows, and only one door..."
―Classified Chinese AI research interaction, circa 2091[src]
Main article: The First

Explainable AI did not resolve the comprehension issue within the field of AI theory, and it is believed among historians that the first human AI to achieve some measure of limited sentience appeared in the 21st century. The Licot AI model was of particular note for intensifying the AI arms race and "cold war" between so-called "AI-superpower nations," particularly the United States of America and the People's Republic of China. The Chinese public research effort was noteworthy for the reemergence of the "Chinese Room" argument, as Chinese researchers would often push their AI research platforms in search of sapient-level comprehension of stimuli outside their learning parameters (with AI hallucination problems taken into consideration).

The late-21st century did reveal the first substantial "AI-hard problem," a then-hypothetical scenario in which an AI could solve an intelligence problem or scenario independent of human computation or interaction. Up to the point of the incident, AI systems could not identify or resolve AI-hard problems without human involvement.

  • quantum AI
  • singularity condition
  • collapsing singularity
  • digital Kessler event (DKE)
  • Assembly principle (where did true AI come from?)

Non-volitional Intelligence

"Non-volitional" intelligence, better known as "Dumb AI," are traditionally-developed, advanced learning algorithmic machines. Unlike their ancient predecessors, modern Dumb AI can be built to specification with industrial efficiency based on common pre-made neural network templates capable of self-replication, modification, and learning. Dumb AIs are a mature technology and take many forms due to their diverse capabilities and the proliferation of developmental know-how. Dumb AI can be created by most anyone with some ability of computer science skills and only limited by constraints in available hardware, complexity of software-based neural networks, and/or the role the non-volitional intelligence is designed to fulfill. AI-intended neural networks are often described as “AI developer kits” on the consumer market.

Dumb AIs tend to fill a variety of roles in human society, from personal assistants to the administrative management of organizations and facilities. And yet, their limitations are found in their very design. Dumb AIs are developed from pre-made logic scripts that are easily acquirable, even when patented by companies and individuals. Because of this ease of acquisition and the chance of reverse-engineering, factory-spec AIs are susceptible to people and other AIs operating with malicious intent, particularly those familiar with specific developer kits. In the twenty-sixth century, Dumb AIs are the backbone of cyber warfare activities, both offensive and defensive in nature.

Since the twenty-second century, advancements in non-volitional intelligence have allowed Dumb AIs to develop reproducible hints of limited sentience, including the capacity for independent action, sometimes outside programmed parameters, occasionally referred to as the "AI hallucination gap." These rare developments appear to be a breakthrough brought on by time-induced growth in association- and pattern-recognition trees. Some researchers have described the result as "limited sentience" or a "quasi-stable singularity," with evidence of limited emotional capacity and even self-actualization exhibited in lab and field tests. However, all substantial recorded occurrences appear limited to Dumb AI running on sizable hardware infrastructure that have been operational for a classified period of years or more. Hallucination behavior in AI is a well-documented issue, but increased complexity has expanded the gap of understanding for researchers over time. Even with these advancements, the net growth in their self-awareness and independent action appears to plateau once more in a repeating cycle of developmental "AI winters."

In contemporary times, the most common uses of Dumb AIs are divided into four categories with some conditional overlap: 1. personal computing, 2. administrative computing, 3. military computing, and 4. support computing. Personal computing encompasses the use of Dumb AIs as personal assistants on mobile and home devices, often acting as secondary/backseat operators in their owners'/users' day-to-day activities such as network surfing, smart home maintenance, or vehicle operations. Administrative computing addresses middle-management operations, often including day-to-day needs sometimes independent of human user involvement, such as record-keeping and data transference. Administrative computing Dumb AIs assist in a variety of roles depending on the specific industry, with the greatest concentration in the digital service industry, where multiple Dumb AI network to form virtual call and data-processing centers that collect, report, and distribute information to and from customers or correspondents.

Military computing is the birthplace of Dumb AI, which originally filled roles now increasingly handled by Smart AI where available, including facility logistics, Slipspace navigation, strategic planning, battlespace management, and efficiency analysis, among others. The distinctions between military computing and other AI categories include deregulated safety and security controls, allowance for human harm, and malicious application such as infiltration and sabotage. However, Dumb AI can still sufficiently fulfill these tasks. Smart AI proliferation in these roles is not new; however, Smart AI implementation is expensive, leading to the prioritization of military use over civilian use, especially in times of conflict. Dumb AIs continue to serve their original role alongside Smart AI in the military, often supporting them in larger network structures or in highly specific roles such as secondary MJOLNIR armor functions for Spartan super-soldiers and military vehicle functions, increasing the preference for targeted maintenance and cutting back on required vehicle crews. Out of the growing collaboration between Dumb AI and Smart AI, a secondary field has begun to emerge, referred to as support computing or sometimes "puppet computing," though the two terms and their use-cases may become increasingly distinct.

Due to the new field's lack of sophistication, imagining the full capacity of support computing Dumb AI remains within the realm of contemporary science fiction rather than reality. Support computing entails Smart AI operating as administrators over units of Dumb AI, performing certain data-heavy operations through networking and job prioritization. Envisioned as the future of strategic planning and cyberwarfare, networked Dumb AIs are tasked by their Smart AI administrator with more routine work while the Smart AI doubles as a hub, performing major, prioritized computations. For now, support computing Dumb AIs and the emerging field of research to define puppet computing remain primarily with military logistical outfits and research-oriented universities.

Main article: Personal AI (AJJAMS)

Since the shock caused by Cortana's emergent Created rebellion, UNSC and ONI military operations have transitioned their battle network and computational reliance to Dumb AI instead. Due to their lower cost and complexity, Dumb AI can be produced at scale compared to their volitional counterparts, at a significant tradeoff in many functional aspects. One adaptation to emerge is the proliferation of "Personal AI," a Dumb AI classification for military officers and Spartans in the post-Covenant War era that became commonplace in Spartan ranks during the Created crisis. Personal AI fill a battlefield assistant role for Spartans, providing tactical awareness and information and interfacing with external or hostile networks. Spartan activity and training at the Avery J. Johnson Academy of Military Science display the increased reliance on, and collaboration with, this variant of human AI. Notably, Smart AI have also seen use in the Personal AI role, though with mixed and controversial results, such as the Jiralhanae-based intelligence Iratus.

Volitional Intelligence

Volitional intelligences, also called "Smart AI," are virtual recreations of human brains, typically acquired through organ donation and generated through the process of Cognitive Impression Modeling (CIM). Nicknamed "hyper-scanning," Cognitive Impression Modeling involves an AI Matrix Compiler device that sends electrical bursts along a brain's neuron pathways, scanning and reproducing the electroactive structure as a virtual machine called a Riemann matrix, often considered a Smart AI's "brain." The donor's brain is ultimately destroyed in the process; as such, the legality and morality of the process only allow recently deceased subjects, particularly those with neural implants, as the continued electrical activity after a donor passes helps maintain the brain beyond its expiration date. While the concept of Smart AI creation is straightforward, the methodology and phenomenon are poorly understood outside the Smart AI research field and the subsequent, niche industry that has developed around the technology over recent centuries.

Accidentally conceptualized through the mapping of the human brain and the nervous system centuries past, the Smart AI field came into being through the mingling of neuroscience and cybernetics at a time when biological augmentation was still in its infancy and rehabilitation of the physically and mentally disabled was the focus of the two industries. Self-aware intelligences, the precursors to Smart AI, were produced by accident during attempts to simulate the human brain and its development; the creation of a self-aware AI was deemed possible but unlikely, yet it ultimately became an origin point for volitional intelligence.

Over the centuries, as the concept of volitional intelligence has hardened, Smart AIs have been defined by generations. Generations I and II are purely historical terms, referring to intelligence long since retired from service. In contemporary times, Generations III, IV, and V are all in service with their own advantages and disadvantages. Below are the basic generational differences:

  • Generations I and II: Archaic Smart AI generations; they have not been employed in more than a century. Sub-seven-year industrial lifespan.
  • Generation III: Seven-year industrial lifespan – shortest of contemporary generations. Fastest processing speed among contemporary AI generations.
  • Generation IV: Extended industrial lifespan through variably capped processing speeds. Introduction of Smart AI-designed AI shackling techniques.
  • Generation V: Experimental AI generation; varied operational states, relegated to Naval Intelligence ownership.

Smart AIs exhibit fully-realized personalities, displaying free will and self-actualization from inception. However, due to the programmable/translatable nature of their consciousness, a product of their origins, Smart AIs are also subject to several flavors of unique computer languages designed to be compatible with human consciousness. This development can be described as "AI shackling," as programming for Smart AI serves to define fundamental rules and limitations for these highly capable and independent intelligences; often these programmed regulations are evolutions of the Three Laws of Robotics first proposed by science fiction author Isaac Asimov in 1942.

Compared to Dumb AI, Smart AIs' uncapped capabilities and personalities make them unorthodox and adaptive thinkers; however, this comes at the cost of an infamously short lifespan, often erroneously thought to be seven standard years due to the military hard-retirement date for Smart AI established in UNSC Regulation 12-145-72, Article 55, the "Final Dispensation Law." Outside the military, enforcing retirement is dependent on the organization. During the Human-Covenant War, the regulation was loosely enforced, as military Smart AI proved to be an indispensable resource in strategic planning and naval warfighting, even when subject to post-retirement "digital dementia." Officially diagnosed as Rampancy, this Smart AI-exclusive condition is often considered the leading symptom of a Smart AI reaching expiration, as the entirety of its software deteriorates from increasingly fatal coding errors, leading to abnormal mood swings, memory loss, and eventually death.

Rampancy is unavoidable; however, the means to temporarily extend a Smart AI's lifespan are well-founded but rarely employed due to operator/industry needs. Given the nature of virtual machine technology, entirely-software constructs like Smart AI are subject to accelerated degradation through the increasing probability of code errors, the lack of a hard shutdown mechanism, and the lack of long-term memory retention. Few Smart AI are fortunate enough to serve in a fixed mainframe; those that do can employ external data storage to retain important memories when employing shutdowns as a rampancy-preventing measure. In older generations of Smart AI, experiments geared toward extending Smart AI lifespans were performed by capping processing speeds. In these early-generation Smart AIs, the subjects suffered from a condition known as "Algernon frustration," a depression brought about by the inability to think as fast as they could. Even though they were not any less intelligent, slowed AI subjectively felt as if they were. Later generations saw much more benefit from slower clock speeds, although their architecture was less suited to high-speed computation.

Business and military industry practices have taken advantage of the uncapped processing rates of Smart AI, gaining more efficient mathematical calculation at the cost of a decreased shelf life. Smart AI can slow their processing to extend their lifespan; however, given their habitual need to think and the industries in which Smart AIs are prevalent, leaving them on high-speed processing means a capped lifespan of seven standard years.

Conspiracies

Because of the crucial involvement of neural implants in the creation of Smart AI, some AI-wary individuals have grown paranoid about the nature of AI creation, spawning conspiracy theories regarding the entire AI design industry. Neural implants can provide continuous, independent electrical stimulation to a deceased brain, allowing it to function past its expiration date, and can cache human brain data, making simulated reconstructions of dead brains possible. Conspiracy theories surrounding the retention of human brains by the government or by law enforcement have circulated over the years, with concerns that the dead could be resurrected to gather evidence in criminal trials or to keep an individual's knowledge from being lost following their death, possibly without the donor's permission.

The Assembly

Main article: The Assembly

The Created

Main article: Created Splinters