Value Imperialism in the Future of Artificial Intelligence

I wanted to share my final essay for a Philosophy of Science class focused on Artificial Intelligence, taken in the spring of my second year of university (April 25, 2022, to be exact). It was the first and only philosophy class I ever took. That said, I really enjoyed the in-person discussions (which I had been deprived of during the Covid-19 pandemic) and some of the readings. The course demanded by far the most academic reading of any course I took (several papers weekly), and the density of the texts often left me with a strong urge to nap after twenty minutes. I especially struggled with the metaphysical and abstract ideas because they never quite clicked for me, and I ended up finding them boring. Nonetheless, it was one of my favorite courses in college, thanks in large part to the lecturer, who had a great personality and was wickedly intelligent.

I. Introduction

Imperialism is defined, according to the Merriam-Webster dictionary, as “the policy, practice, or advocacy of extending the power and dominion of a nation especially by direct territorial acquisitions or by gaining indirect control over the political or economic life of other areas.” It took off as the world began to globalize in the 1800s and still persists in parts of the world today. Imperialism is typically seen as negative because it often changed or destroyed the cultures and values of the groups and societies it touched. This was, and still is, often done with ill intent: imperialists take a pompous stance over those they exert power over, believing their victims’ values and culture to be inferior to their own. It is important to note that the advocacy of the imperialist’s policies does not make the imperialist’s values right and the victims’ values wrong.

The concept of imperialism, specifically value imperialism, is a central concern for the future of artificial intelligence and moral machines. Cave et al. explain the basis of value imperialism well: “the universalization of a set of values in a way that reflects the value system of one group (such as the programmers). This could be pursued intentionally or, perhaps more alarmingly, could also be perpetrated inadvertently if programmers unintentionally embed their values in an algorithm that comes to have widespread influence. Such value imperialism might affect, or disrupt, cultures differently, or degrade cultural autonomy.” Political and economic imperialism has been extremely destructive to cultures that impeded its progress. Artificial general intelligence (AGI) has the potential to become arguably the most powerful and influential agent the human race has ever seen, and as such it could plausibly reshape every society and culture into whatever it deems fit and “right.” So, what is the “right” set of values for an AGI to possess? This is a nontrivial problem in itself and belongs to the field of machine ethics. I will not address it in this paper because it remains largely unanswerable. I will, however, argue that value imperialism in AI can lead to various problems internationally and may even pose a threat to some nations, and I will then offer plausible solutions to help mitigate these challenges. The idealization I will make for this argument is that an AGI is created in such a way that it coexists and cooperates with humans rather than taking full control.

In part two (II), I discuss who, or what, actually has the ability to ingrain their values within an intelligent machine. In the third part (III), I answer why value imperialism is concerning, given the potentially drastic consequences that could follow from it. In part four (IV), I explore what path artificial intelligence research is currently on and where it is likely headed. Finally, in parts five (V) and six (VI), I present potential solutions and paths that can be taken to help mitigate these dilemmas.

II. Who has the ability to impose value imperialism?

In the current state of artificial intelligence, it is not at all clear who will be responsible for instilling values into the machine. There are several candidates. The first is the individual programmer. Programmers create the algorithms and models, and as such are capable of embedding their own values and morals into the machine, either purposely or unknowingly. This is quite dangerous because no individual, and no small team of programmers, has the capacity to define a set of values that is representative of everyone the machine will influence, come into contact with, and impact. The next group capable of imperializing the ethics within a machine is a single company. Companies typically have a set of values and goals that they strive to uphold and reach; assume for the moment that a company truly believes in these values and makes decisions reflecting them, rather than using them for mere public relations. A company’s programmers create the AGI, making it the proprietary property of the company; the company would therefore ideally oversee the project and ensure it represents the company’s values rather than those of any one employee. Yet a company is itself simply, according to a famous Supreme Court opinion by Chief Justice John Marshall, “…an artificial person, invisible, intangible, and existing only in contemplation of the law” (Pride et al.). A company is therefore susceptible to the same pitfalls as individual programmers, only here the values would be embedded collectively by the many individuals who make up the “artificial person.” (A toy sketch of how this can happen in practice appears at the end of this section.)
Lastly, governmental organizations have the ability to define the values and ethics within an intelligent machine. Modern governments represent and influence far more people than any individual or company can, which makes them seem like the ideal agents to hold the responsibility of imposing values on an AGI; governments already impose policies and laws that their citizens are expected to follow. However, governments come in many forms: democratic, totalitarian, communist, and so on. Because these forms differ radically, so would the values each would instill in an AGI. Moreover, since no cosmocracy (a world or global government) exists, any given government represents only a single nation. As we will see, values vary wildly between nations; depending on which government holds the power to instill values in an AGI, the result could be quite problematic for other nations.
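To make the mechanism concrete, here is a deliberately toy sketch, in Python, of how a programmer’s value judgments can end up hardcoded in an algorithm without anyone deciding to put them there. Nothing below comes from any real system; every name, list, and constant is a hypothetical choice, which is exactly the point: each one is a cultural judgment made by whoever wrote it.

```python
# Toy sketch (not any real system) of a feed-ranking function.
# Every constant below encodes a value judgment by its author.

# Hypothetical list: which topics count as "sensitive" is itself a
# cultural judgment, and it differs across societies.
SENSITIVE_TOPICS = {"protest", "censorship", "religion"}

# Hypothetical weights: ranking engagement above civic value is a
# judgment about what a feed is for.
ENGAGEMENT_WEIGHT = 0.8
CIVIC_WEIGHT = 0.2

def rank_post(text: str, engagement: float, civic_value: float) -> float:
    """Score a post for a hypothetical feed; higher ranks first."""
    score = ENGAGEMENT_WEIGHT * engagement + CIVIC_WEIGHT * civic_value
    # Demoting "sensitive" topics encodes a stance on free expression
    # that the system's users never voted on.
    if set(text.lower().split()) & SENSITIVE_TOPICS:
        score *= 0.5  # arbitrary demotion factor chosen by the programmer
    return score

if __name__ == "__main__":
    # The protest announcement (0.45) ends up ranked below the cat
    # video (0.74) despite equal engagement.
    print(rank_post("city council protest tomorrow", engagement=0.9, civic_value=0.9))
    print(rank_post("cute cat video", engagement=0.9, civic_value=0.1))
```

Scale a function like this up to a system with global reach and the programmer’s defaults quietly become everyone’s defaults.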

III. Why is this issue currently of concern?

The US, Russia, and China are said to be in a Cold War-style arms race over AI development, which is irresponsible and dangerous for all. This brings ethics into the field, because Western values differ, sometimes dramatically, from those of the East. The former emphasize basic human rights, equality, free speech, and liberty; in the latter, those same rights, which Western citizens assume are guaranteed, are often infringed upon. China holds that censoring the information entering and leaving its country’s internet, and restricting the media, is beneficial to the country’s future, while in the West’s eyes this is unethical and everyone should have the right to the free flow of information. Ideas of values and principles dissimilar to those that already exist in a society are extremely powerful. A quintessential example is the West’s, especially the United States’, reaction to the spread of Communism following World War II. Communism debased nearly all the principles upon which the United States was founded. This created immense tension between the Soviet Union and the United States, which technically never led to direct conflict but rather to proxy wars fought through third-party nations. Regardless, it was a battle over values, principles, and, generally, information.

This shows how threatening the creation of an AGI would be: it could, with high probability, be the most powerful agent the world had seen up to that point. It would be enormously influential, and no nation would likely be able to compete with it in an information war of the kind waged during the Cold War. So, if this agent held views opposed to those of one nation or a group of nations, it would be quite threatening to them, and immediate action would likely need to be taken. Neuroscientist and philosopher Sam Harris puts this into perspective: “And what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a superintelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum. So it seems that even mere rumors of this kind of breakthrough could cause our species to go berserk” (9:22). Given the immense power an artificial intelligence would grant its users or owners, it is not a stretch to say that achieving a “true” artificial general intelligence is a war-starting event, at least if development and research stay on their current path.

IV. So, what track are we currently on?

Artificial intelligence development and research are currently geared toward economic reward in industry, at companies like Facebook and Microsoft, or toward weaponization in a world where cyberwar is becoming ever more relevant. Both use cases incentivize fast progress over precaution. Capitalism in general rewards progress that gains an economic edge while remaining reactive, rather than proactive, toward consequences. Consider product and drug recalls in the United States: products are put to use in the real world, and policies and laws are often enacted only after the consequences appear. This is understandable, since one can only predict so much in a world with so many interacting variables. The atomic bomb is another apt analogy for AI development. Technological advancement accelerated dramatically during World War II, resulting in the Allies’ creation of the atomic bomb and its actual use against Japan; only after its use were rational policies put in place to control nuclear weapons both nationally and internationally. The high-stakes environment of World War II made the bomb’s creation possible far sooner than anticipated. My main point is that an artificial intelligence arms race is creating an environment very similar to that of World War II, and such environments are hazardous because the products designed in them are rushed into use. An AGI is likely not “recallable,” and its impact would touch every part of the world, not be confined to two cities as the atomic bomb’s was. Being both persistent and unconstrained by any small environment makes for a dire combination.

There is also the possibility that the release of an AGI does not lead to war or to stringent responses from other nations. Although I believe this is unlikely, it is of course imaginable. In that scenario, given an AI’s far-reaching influence, the world as a whole would likely converge toward the values and principles implemented in it. This would likely amount to a global cultural genocide and is certainly undesirable. On the other hand, a world with shared values and principles would be one with far less conflict and national tension; but everything depends on what those values and principles actually are, democratic or totalitarian.

V. Potential Solution: Diplomacy

The first potential solution for alleviating the potentially destructive consequences of AGI is international diplomacy. Global regulation and oversight of artificial intelligence development can help control and monitor the transition to an AGI-inhabited world. This seems like the ideal arrangement: all nations, and thus the widest possible representation, would have a say in the values and principles that intelligent machines should pursue. Unfortunately, it also seems the least likely, since diplomacy is easier said than done. Moreover, participation would likely be voluntary, so nations could simply opt out; whoever opts out is absent from the resulting consensus, as the toy illustration below suggests. Regardless, international efforts should still be pursued. Bringing the sobering reality of AI’s future into mainstream discourse matters to everyone, given the great changes the technology will bring to mankind at large. Companies and institutions leading the world in cutting-edge AI development and research should also act responsibly and invest in AI ethics research. Thankfully, this is already happening at many universities and companies, but there will likely never be enough of it; the more who put in the effort, the better. The general goal is an international discourse on the hypothetical dangers and concerns surrounding AI, of which preventing value imperialism is only one. It is important for researchers, scientists, philosophers, and programmers to demystify AI at large and make plain what the technology can offer, and also what it can take away.
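As a purely illustrative aside, a few lines of Python show why voluntary participation skews the outcome. The nations, the single zero-to-one “value axis,” and every number here are invented for the example; the arithmetic is the point.

```python
# Toy illustration (invented data): if shared values are averaged only
# over the nations that opt in, the "global consensus" reflects the
# participants rather than the world.

# Hypothetical positions on a single 0-1 axis, e.g., preference for the
# free flow of information. All numbers are made up for illustration.
positions = {"A": 0.9, "B": 0.8, "C": 0.2, "D": 0.3}

def aggregate(participants):
    """Average the positions of the nations that chose to participate."""
    vals = [positions[n] for n in participants]
    return sum(vals) / len(vals)

print(aggregate(["A", "B", "C", "D"]))  # 0.55 -- with everyone at the table
print(aggregate(["A", "B"]))            # 0.85 -- once C and D opt out
```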

VI. Potential Solution: AI decides its own values

The problem of value imperialism rests on a significant assumption about the fundamental nature of an artificial general intelligence: that its values are embedded by humans, its creators. There is a possibility that the AI could instead generate its own set of values and principles, ones that do not align directly with any one nation or group of people. It would be a superintelligence, after all, so why would it not be able to create a set of values better than any humanity has achieved? This would naturally render the problem of value imperialism, and all its consequences, moot. Nonetheless, the nature of an AI’s value system is still a largely unanswerable question, and may only become answerable once a “true” AI is created.

VII. Conclusion

Ultimately, given the grave dangers posed by the current approach to AI development, it is absolutely necessary that international discourse, research, and diplomacy, covering not just the ethics but many aspects of AI, begin in earnest. Like most diplomacy, it will likely be a long and hard process, but it is of the utmost importance, so immediate attention is far preferable to waiting until it is too late. Sam Harris articulates the prevailing attitude toward AI: “Now, one of the most frightening things, in my view, at this moment, are the kinds of things that AI researchers say when they want to be reassuring. And the most common reason we’re told not to worry is time. This is all a long way off, don’t you know. This is probably 50 or 100 years away. One researcher has said, ‘Worrying about AI safety is like worrying about overpopulation on Mars’” (9:54). If this reassurance is repeated over and over, it will be too late for any essential progress to be made before a superintelligence enters our world. The many goals that must be achieved before the deployment of an artificial general intelligence, the prevention of value imperialism discussed in this paper being only one, would be no easy task even if a full effort began today; if the effort is deferred to the distant future, there is a good chance the technology runs away while the slow-moving political processes governing it lag behind with no way of catching up.

Works Cited

  • Cave, Stephen, et al. “Motivations and Risks of Machine Ethics.” Proceedings of the IEEE, vol. 107, no. 3, 2019, pp. 562–574, doi:10.1109/jproc.2018.2865996.
  • Harris, Sam. “Can We Build AI without Losing Control over It?” TED, www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it/transcript.
  • “Imperialism.” Merriam-Webster, www.merriam-webster.com/dictionary/imperialism.
  • Pride, William M., et al. Business. Houghton Mifflin Co., 1996.