
A.I.’s Turing Test For Modern Journalistic Standards

How do you know that what you’re reading was actually written by a human?

Interesting question considering the times.

As the rise of artificial intelligence (A.I.) begins to creep into more and more facets of life, a conversation over a network – just like this one – can leave some wondering: Am I reading what another human is writing, and how do I know?

Machine-generated journalism is emerging as a tool at major news organizations such as the Associated Press, Bloomberg, and The Washington Post. Meanwhile, reporters and editors are finding themselves packing up their broadsheet news-writing skills and press passes after falling victim to layoffs at digital publishers and traditional newspaper chains alike.

Nevertheless, what are the merits of a cybernetic press corps?

Nearly one-third of all content published by Bloomberg News uses some form of automated technology, which can dissect earnings reports and push out stock figures in a matter of seconds, punching out an immediate news story that includes the most pertinent facts and figures for readers.
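To make the idea concrete, here is a minimal sketch of how template-based earnings coverage can work. The data class, template, and numbers below are illustrative assumptions, not any wire service's actual system.

```python
# A minimal sketch of template-driven "automated" earnings coverage, assuming a
# hypothetical pre-parsed earnings feed; real newsroom systems are far more
# elaborate, but the core idea is filling facts into editor-written templates.
from dataclasses import dataclass


@dataclass
class EarningsReport:
    company: str
    quarter: str
    eps: float             # reported earnings per share, in dollars
    eps_expected: float     # analyst consensus, in dollars
    revenue_billion: float


TEMPLATE = (
    "{company} reported {quarter} earnings of ${eps:.2f} per share, "
    "{beat_or_miss} the ${eps_expected:.2f} analysts expected, "
    "on revenue of ${revenue_billion:.1f} billion."
)


def write_story(report: EarningsReport) -> str:
    """Fill the template with the parsed figures."""
    if report.eps > report.eps_expected:
        beat_or_miss = "beating"
    elif report.eps < report.eps_expected:
        beat_or_miss = "missing"
    else:
        beat_or_miss = "matching"
    return TEMPLATE.format(
        company=report.company,
        quarter=report.quarter,
        eps=report.eps,
        eps_expected=report.eps_expected,
        revenue_billion=report.revenue_billion,
        beat_or_miss=beat_or_miss,
    )


# Example with made-up numbers:
print(write_story(EarningsReport("Acme Corp.", "third-quarter", 1.42, 1.35, 3.8)))
```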

Machine-generated reporting was even utilized by The Post to report election results in 2016.

The New York Times, in its piece “The Rise of the Robot Reporter,” says the use of A.I. should not pose a threat to human reporters, however. Rather, it is just another addition to the industry’s toolbox – the idea is to let journalists spend more time on substantive work.

“The work of journalism is creative, it’s about curiosity, it’s about storytelling, it’s about digging and holding governments accountable, it’s critical thinking, it’s judgment — and that is where we want our journalists spending their energy,” says Lisa Gibbs, the director of news partnerships for Associated Press.

Moreover, outlets use A.I. to promote articles with a local orientation – on topics like high school football results and weather reports – to readers in specific regions, a practice known as geo-targeting. After all, not everyone is interested in the latest anti-Trump analysis or the hour-by-hour ebb and flow of FAANG stocks.
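At its simplest, geo-targeting is just filtering tagged stories by a reader's region. The headlines and region names below are hypothetical, and real recommendation pipelines are far more involved, but the idea is the same.

```python
# A minimal sketch of geo-targeted story selection, assuming a hypothetical
# feed of stories tagged by region (the entries below are made up).
STORIES = [
    {"headline": "Westfield beats Central 28-21 in overtime", "region": "richmond"},
    {"headline": "Thunderstorms expected through Friday", "region": "norfolk"},
    {"headline": "School board approves new budget", "region": "richmond"},
]


def stories_for(reader_region: str) -> list[str]:
    """Return only the headlines tagged for the reader's region."""
    return [s["headline"] for s in STORIES if s["region"] == reader_region]


print(stories_for("richmond"))
# ['Westfield beats Central 28-21 in overtime', 'School board approves new budget']
```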

A.I.’s contribution to the traditional newsroom also isn’t just churning out captivating copy; it can act somewhat like an intern. Insofar as data analysis is concerned, the real “breaking” stories come from both anomalies and patterns, and using A.I. as a supplemental research buddy can provide more in-depth investigations, which helps the human journalist.

One issue that A.I. can assist with is the rise of “deep fakes.” These machine-generated, sometimes very convincing fabricated images and videos can trick the human eye, but for A.I., it is the same game being played – exploiting anomalies and patterns.

As for what the future holds for traditional journalists with their reporter’s notebooks and wire services, the expansion of A.I. into the newsroom will only hurt those who fail to adapt. While machine learning and A.I.-generated copy will change the journalistic landscape, the one thing they do not change is the journalistic standard. This, of course, is only true if journalists themselves maintain that standard, and if readers continue to demand it.

Regardless, it is still quite the puzzle to figure out whether whoever – or whatever – wrote this is a human or a typing robot.

Though, does it really matter? That depends on whether the reader most values the means or the end.

In 1950, famed British mathematician, wartime code breaker, and pioneer of computer science Alan Turing devised a way of testing a machine’s ability to exhibit intelligent behavior equivalent to, or even indistinguishable from, that of a human. Turing proposed a test in which a human evaluator would judge natural-language conversations between a human and a machine designed to generate human-like responses.

The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel (much like the current scenario you, the reader, are engaged in) via a computer keyboard and screen, so the result would not depend on the machine’s ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test – the key lying only in how closely its answers resemble those a human would give, not in whether they are correct.

Although a true “Turing test” does not set the questions or topics prior to the conversations, the test was said to have been passed in 2014 by a computer program dubbed “Eugene Goostman,” which simulates a 13-year-old Ukrainian boy, according to a report from the BBC.

Now, the question-and-answer exchanges between the evaluators and the machine were quite rudimentary, and the program convinced just 33 percent of the judges that it was human. Regardless, how does this apply to modern journalistic standards?

According to NPR’s “Ethics Handbook,” some principles of journalism are accuracy, fairness, completeness, honesty, independence, impartiality, transparency, accountability, and respect.

Apart from the fact that a significant amount of editorial input is needed to help the reporting robots make the correct decisions, machine-generated articles may be able to cover most of the aforementioned aspects with ease – presumably even better than their human counterparts.

Just as cars built by factory robots are assembled more precisely than the coach-built automobiles of yesteryear, A.I. will produce accurate reporting.

Given that A.I., at least right now, does not necessarily have an “opinion,” impartiality is fairly well covered – especially when all it is currently engaging in is information gathering for stock reports, poll figures, and weather models.

For completeness, an A.I. program is as comprehensive as its programming. For lack of a better phrase, it does what it is supposed to do. While errors of omission and partial truths can damage a journalist’s credibility, gaps like that in a machine-generated story are simply illogical (unless it is programmed to be that way).

Human reporters may use hyperbole and sensational conjecture to inject emotion into a story and explain issues and events, but those flourishes are subjective to the writer – and that may not be fully honest reporting. For A.I., the notion straddles the philosophical questions of “what is consciousness,” “what are emotions,” and whether a machine can have either. Considering humans themselves can only speculate on this, it is best to say that A.I. is as honest as honesty can be defined.

To act as an independent reporter, one must not have a conflict of interest. It is also important to note that A.I. can currently, in theory, be hacked, and the issue with “deep learning” is that scientists still lack a complete understanding of how these systems work. A study from Cornell University shows that “state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input.” Put more simply, researchers have managed to fool the systems that self-driving cars use to classify road signs. Paradoxically put, how do you know whether you have a conflict of interest if you do not understand what your interest is?
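The “small-magnitude perturbation” idea can be sketched in a few lines. The snippet below is a toy, fast-gradient-sign-style example with a made-up classifier and a random stand-in image; it illustrates the technique the study describes, not the study’s actual models or data.

```python
# Toy sketch of an adversarial perturbation (FGSM-style): nudge each input
# pixel slightly in the direction that increases the classifier's loss.
# The tiny linear "classifier" and random "image" are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a road-sign classifier: 3x32x32 image -> 4 sign classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4))
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # pretend road-sign photo
true_label = torch.tensor([2])                         # pretend "stop sign" class

# Gradient of the loss with respect to the input pixels.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()

# Small-magnitude perturbation in the sign of the gradient, clamped back
# to a valid pixel range.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```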

When it comes to fairness, while decision-making algorithms are inherently unbiased, algorithmic decision-making depends on a number of variables – how the software is designed, developed, and deployed, and the quality, integrity, and quantitative representation of the underlying data sources. There is thus a need for adaptive computing that integrates intelligence gathering into its very fabric and does not rely on humans to train the algorithm in how to make decisions. As A.I.’s abilities currently stand, fairness may depend on how much editorial input is needed to get programs up and running; but in the future, these programs will be able to stand alone and uphold fairness in reporting based solely on what is logical, or true, without a presumptive bias.

For transparency, the public must have confidence in A.I.-based journalism. For that to come to fruition, a machine’s decision-making process should be clear to the public, especially on tough topics – those more engaging than earnings reports, high school football scores, and weather predictions. Furthermore, because editorial influence is still needed and the technology is still being developed, the fingerprints of those creating the A.I. may give some the impression that coverage is human-determined; but until machine learning is up and running on its own, this is to be expected.

To be held accountable, A.I. must “answer” for its own work, meaning careful attention must be paid to sources. That, though, also depends on what the public seeks from the news.

A.I. can be complete, accurate, and impartial. So, if all one wants is strictly “news,” then machine-generated reporting greatly aids the transmission of information to the reader – for those who value the end result, the “end justifies the means.” However, if one wants sheer commentary – a robot Sean Hannity or Rachel Maddow – machine-based reporting may not deliver the “end” one desires, because such a reader cares more about who is giving the news than about what that news is.

This is one of the reasons why writers, journalists, reporters, and others should not be wary of losing their desk jobs or the “beats” they cover anytime soon, or really ever.

Regardless, while pros and cons can be found in the nuances of machine learning and A.I. in the newsroom, how do you know whether this is being written by a human or a robot?

Turing once said, “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

Considering that, who’s asking anyways?
