The Continued Use of Smart AI in UNSC Service

Cadet Second Year Kate Guillou

Mrs. Redmanik

Military Ethics

November 05, 2516

The Continued Use of Smart AI in Military Service

Almost all modern UNSC capital ships are equipped with a few fundamental technologies: a magnetic accelerator cannon (MAC), fusion reactors, and propulsion systems. Of all the technologies found across UNSC capital ships, however, one of the most controversial is the "Smart" AI. Unlike the more limited but longer-lasting "Dumb" AI, these constructs are not simply programmed but created by scanning a human brain and replicating its neural pathways within the AI's matrix. From there, the AI generates its own system of more efficient neural linkages to replace the pathways eroded by the still-inefficient scanning process available with today's technology. Because the process destroys the donor brain, and in keeping with the restrictions on cloning set by Section 1A/3 of the UN Colonial Mortal Dictata Act, the brains are typically taken from recently deceased individuals. This process is, as stated, controversial, with the most common objections concerning the use of a dead individual's brain, the limited "lifespan" of a "Smart" AI, and the forced service of these constructs in the UNSC. This essay aims to address these points and argue for the continued creation and use of "Smart" AI constructs throughout the UNSC fleets.

A not-insignificant group within the general population of Earth and her colonies hears about the use of recently deceased individuals' brains in the creation of "Smart" AI and objects to the concept. Their objections stem from a variety of bases, largely moral ones. The most commonly cited reasoning is that using an individual's mind beyond their death is immoral on some philosophical or religious ground. While there are reasonable arguments within this group, such as refusing to use the brains of individuals who did not consent to such use, the large majority of "Smart" AIs, if not all, created and used by the UNSC for its various operations are made from the brains of recently deceased, willing individuals, whether general organ donors or people who specifically requested that their brain be used for the process. While this is an inherently moral issue, and thus hard to judge and define, the idea that using the brain of a willing donor is immoral to a degree that cancels out the vast benefits a "Smart" AI offers in the day-to-day operations of the UNSC seems simply unfounded. Perhaps precisely for this reason, there is little dissent on the matter of using willing donors for "Smart" AI development.

A more common objection to the creation of "Smart" AIs lies in their limited lifespan and eventual forced deactivation. This point is most commonly split into two parts: the lifespan is presented as a matter of worth and morality, and the eventual deactivation as an argument of suffering versus "death" of sorts. To begin with the former, many people find moral fault in bringing to "life" an AI with a lifespan of only around seven years. To them, the short lifespan creates a disproportionate degree of emotional and mental harm, for both the AI and the individuals who work with the construct, compared to the worth offered. They argue that "Dumb" AIs or typical computer programs could handle the tasks allocated to "Smart" AIs without a significant cost to performance, while sparing the AIs and those who work with them the stress of an eventual "death." However, as indicated by the quotation marks around words such as "life" and "death," these downsides are a matter of perspective. Despite their increased flexibility in learning and interacting with their surroundings, "Smart" AIs remain, like their "Dumb" counterparts, constructs. Computer programs. In this regard, they were never alive to begin with, and thus cannot die. By viewing AIs through the lens of tools to be used, rather than individuals to develop relationships with, the emotional stress of treating their decommissioning as a death can be avoided entirely.

For the second aspect of this argument, the counter comes from the same point. As artificial, nonliving constructs and tools, AIs cannot experience death. From this perspective, the matter of rampancy versus death becomes clear. It is not a matter of ending the "life" of an AI before it suffers needlessly; while it presents facsimiles of the visual signs of suffering, it lacks the capacity to actually suffer beyond mimicry. It is, instead, a matter of benefit versus cost. As an AI descends into rampancy, it becomes more and more unstable. The decision to decommission it forcibly is thus one that balances the cost of this instability against the benefits reaped by its continued operation. With this in mind, both the limited lifespan and the decommissioning at its end are cost/benefit decisions with low costs and extremely valuable benefits, pointing to why the continued development of "Smart" AI remains the choice in our best interests.

Finally, as an extension of the view of "Smart" AIs as living individuals, as persons rather than simply the tools that they are, there is a call for any newly created AI to be given the choice to serve the UNSC or to "live its own life." The counter is our own extension of the opposite idea: that they are merely tools. If a blacksmith forges a wrench or, in an even more relevant case, a programmer writes a piece of code, should those tools be given the option to go against the purpose for which they were created? It is hard to believe that such a question is reasonable to pose. Why, then, does the simple fact that an AI is programmed to imitate human interaction make the question suddenly seem more reasonable? With this established, it once again becomes a matter of benefit versus cost. The benefit of keeping an AI in service is massive: "Smart" AIs guide our point-defense weapons, run extensive navigational calculations, and perform these tasks and more far more effectively than any other computer program, or even a human, could. The cost of allowing it the "choice" is equally high: losing these benefits and allowing our tools to define how we use them.

In conclusion, there is a wide variety of arguments against the continued use of "Smart" AI in its current form. All of them rest on some moral foundation, and while the first, the use of dead brains for AI creation, could be reasonable if unwilling individuals' brains were used after their deaths, the latter two fall apart once the perspective through which AIs are viewed is changed to one that far better fits reality. Individuals who call for change based on the short "lifespan" of "Smart" AIs and their forced "death," or on the forced service of these constructs, allow themselves to be misled by the mimicry of human qualities that "Smart" AIs produce through their programming into believing that these AIs are living persons rather than the tools that they are. The reality is that they are tools, and treating them as anything more directly harms the efforts of the UNSC and legitimizes the false idea of treating tools as people, costs which far outweigh the benefits perceived by these misled groups.