
Google’s Contention With Conservatives On AI Ethics Council Shows Troubling Problem With Emergence

Google announced in late March the formation of the Advanced Technology External Advisory Council (ATEAC), a panel tasked with advising the tech giant on major issues surrounding the emergence and advancement of artificial intelligence, including the ethical questions raised by the rapidly growing technology. This pseudo-independent watchdog will ensure that AI is “socially beneficial,” according to Kent Walker, Google’s senior vice president of global affairs, who stated that the council will “provide diverse perspectives” to the company, The Verge reported.

Google’s mission of self-governance in the field of AI sits in a political “no man’s land,” since no national legislation specifically governs how AI can and cannot be used. Even so, the council represents a first journey into the vast and difficult realm of settling moral and philosophical questions, and of crafting rules and regulations for particular applications of AI such as facial recognition and the use and storage of personal and biological data.

Essentially, though, the group is meant to help the company appease its critics while it pursues lucrative cloud computing deals.

Less than a week after its creation, however, the panel is already falling apart over irreconcilable political viewpoints.

Bloomberg reported over the weekend that Alessandro Acquisti, a behavioral economist and privacy researcher, will not serve on the council. “While I’m devoted to research grappling with key ethical issues of fairness, rights and inclusion in AI, I don’t believe this is the right forum for me to engage in this important work,” Acquisti said on Twitter.

On Monday, contention within the group became even more evident after a group of Google employees began circulating a petition demanding that the company remove Kay Coles James, the president of the Heritage Foundation, a conservative think tank. More than 500 staff members had signed the petition anonymously by late Monday morning, CNN reported.

This comes at a critical moment, as a growing number of conservative politicians and citizens criticize Google’s algorithms and content moderators for unfairly discriminating against conservative viewpoints in search results, social media, news, videos, and other media.

AI experts and activists have also called on Google to remove from the board Dyan Gibbens, the CEO of Trumbull Unmanned, a drone technology company. Gibbens and her co-founders at Trumbull previously worked on U.S. military drones, a sore point for some Google employees, who reject the use of AI for military purposes.

Google’s resistance-style posture, paired with its proclivity to please those who promulgate “social justice” platitudes, reveals one of the problems inherent in the emergence of artificial intelligence.

Any AI expert worth their salt will say that humans should bring into the world a machine intelligence that is good and moral, rather than one that is evil and demonic.

So, how do we do that?

First, we must select “good” people to birth the eventual generation of AI machines. Walker stated that the council will guide AI to be “socially beneficial,” but how, exactly, is that premise defined?

The creators and overseers of AI will build their own values into their machines. Those presuppositions will be amplified beyond control once a machine is built that can create a second machine, and a third, and a fourth, and so on.

Many of those involved in the creation of AI are the same people who built the censorship bots of Google, Facebook, YouTube, Twitter, and others. Those systems are predicated on a particular ideology, one based on diversity, equity, inclusiveness, quotas, and other tropes associated with the radical political left, adopted without a full understanding of their true outcomes. If so, the next evolution of society through AI will be rooted in the postmodernist notion that there are innumerable ways the world can be interpreted and perceived, and that, since no interpretation can be reliably privileged, every variant is best understood as a form of power struggle.

One of the complexities of AI has metastasized out of something completely axiomatic for humans, but something that deconstructionists read as yet another power struggle.

In the late 1960s, researchers identified what they called the “frame problem,” the difficulty of using first-order logic to express to an AI that things in its environment do not change arbitrarily when an action is taken. One consequence of the frame problem is that there is a near infinite number of ways to perceive even a finite set of objects.
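To make the logical bookkeeping concrete, here is a minimal sketch in Python of why this overwhelms first-order formalisms: every action needs explicit “frame axioms” stating what it does not change, and those axioms multiply with each new action and property. The toy world below, its properties and actions alike, is invented purely for illustration.

```python
# Toy illustration of the frame problem: in a first-order formalism,
# each action needs explicit "frame axioms" stating what it does NOT
# change, and the axiom count grows with every new action and property.

properties = ["color", "position", "owner"]   # hypothetical facts about an object
actions = {
    "paint": {"color"},     # painting changes only the color
    "move": {"position"},   # moving changes only the position
    "sell": {"owner"},      # selling changes only the owner
}

frame_axioms = []
for action, changed in actions.items():
    for prop in properties:
        if prop not in changed:
            # One axiom per (action, unaffected property) pair:
            # "after <action>, <prop> remains whatever it was before."
            frame_axioms.append(f"after {action}: {prop} is unchanged")

print(f"{len(frame_axioms)} frame axioms for "
      f"{len(actions)} actions and {len(properties)} properties")
for axiom in frame_axioms:
    print(" ", axiom)
```

Run as-is, this prints six non-change axioms for a world of only three actions and three properties; real environments make the count explode, which is the heart of the problem.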

For example, when we see a chair, we understand it is a chair not merely because of its construction of four legs, a back, and a seat, but because we see that we can sit on it; we see it in the world as a representation of its function. Most of what humans see in the world is seen through function, rather than as an object first, with its function interpreted afterward.

This is an issue in the creation of AI because of the unresolved problem of embodied cognition. A machine with intelligence at or above that of a human cannot really “see” without a basal frame of reference, because “seeing” is essentially the mapping of the world onto action.
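As a rough sketch of that claim, consider an agent whose perception returns candidate actions rather than neutral descriptions. The objects and affordance mappings below are invented for illustration, not drawn from any real vision system.

```python
# Simplified sketch of "seeing as mapping onto action": perception here
# returns what an object affords the agent, not a neutral description.
# The objects and affordances are hypothetical, chosen for illustration.

AFFORDANCES = {
    "chair": ["sit on it", "stand on it to reach higher"],
    "cup": ["drink from it", "pour into it"],
    "stairs": ["climb them"],
}

def perceive(percept: str) -> list[str]:
    """Return the candidate actions this percept affords the agent.

    Without a frame of reference (a body, goals, capabilities), there is
    no principled way to enumerate or rank these mappings, which is the
    embodied-cognition problem in miniature.
    """
    return AFFORDANCES.get(percept, [])

for thing in ["chair", "cup", "rock"]:
    print(thing, "->", perceive(thing) or "no known affordance")
```

The hard, unanswered question is where such a mapping comes from in the first place, and how a disembodied machine could rank its entries.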

Looking at the world, therefore, is exceptionally complex, an issue that artificial intelligence engineers are still working to resolve today.

This frame problem appears in other areas as well.

Postmodernist thought in literature holds that any text admits a near infinite number of interpretations, because there are a near infinite number of ways to interpret the world.

While this notion is partly correct, it leaves out the postmodernists’ further claim: that there is no “right” way to interpret a text, or an object, or anything else in the world, and that people interpret the world solely according to what facilitates their acquisition of power.

They are not wholly incorrect, but they are wrong. The world is complicated beyond our current understanding, so there is a very large, though not infinite, number of ways things can be interpreted; even so, one must extract from the world a causal, workable game of interpretation.

Back to the example of the chair: if one derives from the object that it can be used to hit someone, that is not a very functional interpretation. Most people would reject it as hurtful and violent, and it is not a game or function that can be played continuously.

When we interact with the world, we take away a set of tools we can use to function, tools that limit suffering and maximize our ability to provide for ourselves, insofar as others are willing to cooperate in a peaceful and sustainable manner. That would be a good, moral, and functional use of AI.

What society is seeing, however, is an attack on this. It is an attack on the Enlightenment values of rationality, individualism, civil discourse, and empirical science, and, more importantly, an assault on the underlying metaphysical (spiritual) level of society and culture.

These manipulative, intensely totalitarian anti-values, wherein the collective takes precedence over the individual, could be humanity’s downfall: a savage uprising of AI established by people who embody the mission of the intellectual devolution of society, a mission whose goal is destruction.

Human values will manifest themselves in artificial intelligence and be amplified beyond reason. Its creation must not be influenced by ideology.

Hopefully, good people will stop that.
