There are at least two consistent themes running through many (if not most) of my science fiction stories. One is of turning technology (such as data, information, intel, electrons, etc.) into “solid or substantial things.”
And the other is of people finding multiple uses for a single piece of technology (a design idea I always try to practice when inventing) and of people accidentally discovering hidden or secret functions or uses for common pieces of technology (radios, TVs, satellites, etc).
Well tonight I was studying the etymology of some rare Anglo-Saxon and English words when I ran across this word: Pixilated.
Now pixilated does not have the same meaning as our word pixelated, but upon reading the definition I immediately saw the parallels, both forward (taking pixelated to its logical conclusion) and backward (by becoming unpixelated a man becomes, so to speak, also unpixilated).
Pixilated, a very old word, means to be bewitched as if by pixies, or, to be bewildered, confused, charmed, or intoxicated, as if by pixies.
Which made me think immediately of a computer screen and the internet.
I am already writing a story in which a computer and screen basically and accidentally serve as a (Tolkienesque) Palantir-like artefact. Though it has a second meaning: Palantir the company.
But after reading about being “pixilated” I have also decided to write a parallel story about a pixilated man. Though the way in which he becomes pixilated is by first becoming enpixelated.
Which I think will also serve as a sort of related piece to my Eye in a Distant Sky story.
Continuing my tales of the Wizard Alternaeus and his apprentice.
GOODLY EVILS, AND THE EVILS OF “THE GOOD”
“I have no satisfactory answer for you lad. Because to this very day, my boy, I am still amazed at those quantities and diversities of important things that evil men will fearlessly attempt over the paltry count of those same things that good men will attempt. Not because evil men are so much more numerous than good men, they are certainly not, if anything they are the distinct minority of all men. Nor because evil men are so much greater than good men, because by both inner nature and by outward behavior, they are not.
No, it has been my perpetual and sad observation that evil triumphs so often in this world not because evil is so irresistibly unconquerable in number, or because evil is so inherently imposing in nature, but merely because men who profess themselves to be good are so very often so very, very afraid.
Now that might very well seem to you like a bleak prophecy about the nature of men in general and the rather uncommon occurrence of real manhood in this world. And to be honest it truly is. But as far as foretelling what you must become, or any man must necessarily be, it says nothing about either of those things by any means.
So, if you have heard and understood all that I have said then this is the only answer I have for you. For all of that, still, be a good boy, and an even better man. For those two ends are very worthy ambitions.
Just remember this, though, as you mature: be yet far more courageous than most self-described good men.
For to be good without courage is no real ambition at all. And as a matter of fact it is the timid good man who is the certain mid-wife and the sure wet-nurse of most of the goodly evils that men do.”
I’ve been thinking about this for a while now. I’ve started a new sci-fi story I’ve entitled The Sthencist. It will take place in the future and for about two-thirds of the way through it will seem like an interesting (but not spectacular) Mundane Science Fiction story.
“Oh, I understand you completely, madam, but, perhaps not in the way you presume I do. You see the thing that may disturb me the most, it certainly repulses me the most, about you supposedly ‘civilized people’ is that you are always unfailingly polite, and extremely well-mannered, and pretentiously and profusely diplomatic. Your high ideals are on constant display for all to hear, and although I rarely see them or detect them in actual action, I must say they get a whole lot of airplay on rebroadcast.
On the other hand the pointless wrongs you people encourage, the self-absorbed misdeeds you recklessly commit, and the evils you so callously allow strike me as rather atrocious. Just. Fucking. Atrocious.”
Suddenly her serving man snorted. Or scoffed. It was difficult to say which but otherwise he remained unmoving and at attention. Steinthal carried on as if he had failed to notice.
“So please excuse me if I decline any further excuse to endure your company. I suddenly feel the need to take a hard, dirty shit. And probably a long shower.
Good night my dear. Good night ladies.
Let’s not do this again, real soon.”
Steinthal placed his untouched whiskey glass back upon the silver serving plate, tapped the phone in his trouser pocket, then turned and walked for the door.
Just before reaching it he turned again and bowed graciously.
“My apologies. That was rude and somewhat misleading. I’ve reconsidered. Let’s not do this again – ever.”
Over the weekend I started a new fictional short story. A fantasy of sorts, you might say. This is the first draft. I have made no editorial corrections at all. I thought it would make an interesting experiment for others to see regarding how a short story develops over time and is edited, corrected, revised, etc.
I did not type this by the way. Because of my previously broken wrist my youngest daughter now does most of my typing. (My oldest daughter is already in college.) I write in longhand, she types. I owe her much for that, and I pay her, though it is also part of the life and practical and market skills development section of her homeschooling studies.
Since this story involves a mysterious stranger that the main character entertains and travels with from time to time (I had plotted that into the story from the beginning of my sketches for the work), and a Journey, I decided to also link this to the Daily Prompt on WordPress for today.
I will not be posting the entire story here once it is completed, because I plan to publish it. But the section included here I will post again later, once I make the necessary editorial corrections and revisions.
The story will also contain within it the poem, He Who Goes Alone. Which I actually wrote for a different purpose but last night I realized fit this story so acutely that I decided to include it as part of the story.
Ladies and gentlemen I give you The Last True Man. (And although he is not really a man, he is True to the end.)
THE LAST TRUE MAN
He lived alone. Once he had a wife, and a son and two daughters. Only one daughter had survived his thirty-third birthday. By that time he was too badly wounded to care for her and had been made permanently lame. Being unable to care for her properly, and his recuperation taking years, he had given her over to the care of his former wife’s sister. He still saw his daughter and her children occasionally, and treated her kindly though she was often in awe and afraid of him. But she did not know who he truly was. To her, as to everyone else, he was simply the old hermit who almost never spoke.
Now he was eighty-seven. Though he did not appear so, nor did he move like an old man. Nevertheless he was still partially lame from the wounds he had received as a young man. For even in his heart, as in his body, some wounds remained and never fully closed such as those injuries and wrongs that claimed the life of his wife, son, and oldest daughter.
So he lived alone. Alone among a set of ancient weathered, discolored, wan stone and marble ruins. Ruins left by a long dead and vanquished race, all of their works toppled and reclaimed by the forest, all except those he kept as a forlorn home and temple of remembrance. Yet to him it was not forlorn or even a ruin. It was the wreckage of another age he had reclaimed for himself. He who went alone.
The ruins stood beyond the horizon of the village in which his daughter dwelt, though not far. They did not have to stand afar off, for all manner of men shunned those ruins and the surrounding landscape, considering them accursed and haunted. None ventured there, and aside from young boys filled with that spirit of adventure and exploration that sometimes overwhelms and possesses them, few ever came within close sight; to almost all it was a place more imagined than ever observed.
Except to him. Despite the many pitfalls and the shifting rot and the persistent decay that nature worked upon the ancient place he knew it well and almost completely. He even knew of most of the most desolate and now long buried areas. He also dwelt at peace with all but a few of the surrounding creatures, be they large, small, tame, wild, fierce, or gigantic and fearsome.
His means were simple, his desires few, his quaint and modest satisfactions many in his deserted home, and his dwelling austere. He spent his days wandering and exploring the wide ruins in which he lived, drawing, sketching, mapping, writing and cataloging all he discovered. Many days he would also explore the nearby forest, visiting or entertaining creatures as they would accommodate him, or he they. At dawn he would pray, at sunset sing. At night he would take the telescope he had fashioned for himself and watch the moon and stars.
Sometimes at night he would also sit long in meditation, contemplation, or within the various memory palaces he had created in his own mind so that he could commiserate with the ghosts of his dead family and friends. In this way he would sometimes slip happily into dream and melancholy would leave him until he again awoke. When it might or might not return to him like an unreliable and unpredictable friend.
Or was friend the right word? Maybe Melancholy was his interrogator of habit, like Death was the companion of his more somber dreams and troubled visions. He was never really sure where he actually stood with the steady companions of his loneliness and exile. He only knew that he knew them well, and that they knew him as he truly was. In the center of his inmost soul.
His most steady companion however was his huge dog which so resembled a small bear in size and shape and appearance that some men took it for a strangely colored and tame bear and nicknamed him “Uroldas” or “Bear-Father.”
He built a dwelling out of the old stones of what he surmised to have been the still standing remains of an ancient tower, attached to the ruins of what was possibly an old wall or gate mount. Indeed he called it his tower, and it was three stories tall, consisting of four levels altogether, including the level he had dug underground for storage. His tower was part home, part hermitage, part forge (for he also worked his own metals and artifacts), and part observatory, and he named it Caerloron, after his dead son.
Occasionally he was visited at dusk, at dawn, or late at night by a mysterious figure in simple robes and a deep blue prayer shawl who would entertain him, or who he would entertain, and often during such visits they would talk long and in a familiar, friendly fashion. Yet none else saw this odd visitor, for two reasons: he would never approach if the man was otherwise occupied, and secondly the isolation and uncanniness of the old man’s dwelling kept almost everyone else at bay in any case.
The man possessed a strange drinking vessel as well. An almost eerily peculiar cup he had recovered from a trove deep in the city, craftily contrived, decorated with bizarre devices and the cryptic letters of a long dead language. For in the future, many centuries hence, it was whispered this cup never went dry, but that was just a rumor yet to be born. As for the man, when he had first found the cup he had inscribed it with his name, Aelone. At that time he was still a young man and called himself by his name. In the years that followed everyone else forgot his name, and even who he had once been, and so he took to calling himself simply “me,” or “I.”
Lately I have been doing a lot of what I call Cross-Over Work.
By that I mean I have been doing a lot of work that cross-fertilizes other works I am simultaneously creating. For instance, I might be writing one novel and a particular scene or bit of dialogue I create will inspire another scene or piece of dialogue in another book or novel I am working on.
Though such things are not necessarily related to or limited to my various fiction writings. I might be drawing a map or making a sketch, designing something, working on a start-up project, developing an invention, writing a poem or song lyrics, or writing a novel or a non-fiction book and all of these things, or others, might give me an idea for another work I’m currently pursuing.
So today, and below (and in allusion to my previous post on actors), I am posting some of my latest Cross-Over Work. Little vignettes, or to be more accurate, often just little snippets (bits of dialogue, sections of scenes, sketch notes, etc.) of various Works I am creating and pursuing at this time.
Does your Work cross over in this way, from one work to another?
If so then feel free to comment below.
NOT A FAIR FIGHT
“Again I don’t get it. Take one shot at your actual target and three at yourself… don’t seem like much of a fair fight to me.”
From my Western The Lettered Men
“Not every possibility is true, that’s certainly true, but every possibility is always a clue – to something other than itself. If you keep forgetting that then it’s very possible the Truth will entirely escape you. And if it does then what other possibilities really matter?”
From The Detective Steinthal
“True darkness obscures. Few things can thrive in perpetual shade but those things that can definitely always wish to remain hidden. That is, until they are ready to be discovered. For reasons of their own.”
From The Detective Steinthal
“It is always best to hunt in silence.”
The Detective Steinthal
YOUR TRAINING IS OVER
“What are you training for kid? To train forever? Now who wants that kinda shit anyway? Only officers and politicians, that’s who. No, you get your ass in the fight. You’ve trained long enough. Time to be somebody.”
From Snyder’s Spiders
“And how now is your wound?”
“It itches fiercely, it hurts mightily, it swells darkly, but it bleeds freely and cleanly. It is good that it bleeds so and thus I will not complain of the other things. But if you have any more of that strange brew you drink then I will not complain of a skin full of that either.”
“I have not a skin, but I can manage a cup.”
“Then so can I…”
Suegenius describing to Fhe Fhissegrim the condition of his wound
From my fantasy The Kithariune (The Basilegate)
A RARE AND WONDROUS FEAT
“If you cannot stand up to your own old man then you will never stand up to anyone. If you can stand up to your own old man then you can stand up to anyone else, and everyone else.
If your old man ever forces you to rebel against him then do not hate him for it, respect him for it. He has done more for you in that regard, as regards the development of your actual manhood, than any other thing anyone else could ever do for you in the world. That man who forces his son into rebellion has bred a man. You owe such a father an enormous and generous debt.
That father who always insists his son obey him, right or wrong, has bred a mere and helpless and fearful slave. You owe that father your utter disdain and yourself nothing but shame for your own endless submission.
Drink to your father Edomios. Drink long and deep. He has bred a man in you. A man who can stand upright and unafraid. A rare and wondrous feat in our age.
Maybe in any age.”
Marsippius Nicea the Byzantine Commander of the Basilegate explaining to Edomios the Spanish Paladin why he owes his father a debt of manhood
From The Kithariune
THAT WAY YOU SPEAK
When Michael first lands in Thaumaturgis he is met by Harmonius Hippostatic, who makes fun of the way he speaks and tries to explain to Michael where he is, and what life is like in the Lands. Michael does not at first speak in verse but in prose; yet as he stays longer and longer in the land of Thaumaturgis he too comes to speak in metered, rhyming verse.
Harmonius: That way you speak, it’s quite a feat
But it will never do,
No meter, rhyme or rhythm,
It’s really quite obtuse.
Michael: Where am I?
Harmonius: Why this is Thaumaturgis,
Don’t you know your lands?
It’s one of the three countries,
Not earth, not stone, not sand.
No one’s ever figured
How it got this way
Tomorrow is the same as now
It’s always been that way.
If you want life miraculous
It’s really quite so marvelous
And never, ever dull.
But one thing in this country
You really must avoid
Speaking words in plain old prose
Is what will most annoy,
So put on your best rhyming
Your metered rhythm too
Don’t dally up a worthwhile speech
Without so much ado,
Be mannered in your speaking
Poetic when you talk
Or everyone will soon declare
Your words taste just like chalk
A lot of my buddies have military and law enforcement backgrounds.
Because of that one of my friends brought this article to my attention and a few of us discussed it since it is of more than passing interest to many of us.
It gave me an idea for a new science fiction short story about the same subject matter which I’m going to call Jihadology. (For the Jihad of Technology.)
I’m going to completely avoid the whole Terminator, tech-gone-rogue approach of modern sci-fi, though, and rather take a particular variation on the Keith Laumer BOLO theme, although there will be nothing about BOLOs or other such machines in the story. Those stories were as under-rated and prophetic as Laumer himself.
Anyway I want to avoid the whole world ending, unrealistic bullcrap kind of story (both from the scientific and military standpoints) and focus more on a very tight interpretation of what might actually happen if technologies such as those listed or projected in the article below were employed against an alien species in the future.
What would be both the operational and eventual ramifications, good and bad, of such technologies, and how could such technologies get out of hand or evolve beyond specified tasks and design parameters to become something completely new in function and focus?
I’ve already got the first few paragraphs to a page written which is based loosely upon this observation I made about what the article implied:
“I’m not saying there are any easy answers, there aren’t when it comes to technology, but technology can at least potentially do two related and diametrically opposed things at once: make a task so easy and efficient and risk-free for the operator that he is never truly in danger for himself, and secondly make a task so easy and efficient and risk-free for the operator that he is never truly in danger of understanding the danger others are in.
And if you can just remove the operator altogether, and just set the tech free to do as it is programmed, well then, there ya go…”
If the stories work well then I’ll add them to my overall science fiction universe of The Curae and The Frontiersmen.
By the way, as a sort of pop-culture primer on the very early stages of these developments (though they are at least a decade old now as far as wide-scale operations go) I recommend the film, Good Kill.
Anyway here is the very interesting and good article that spurred all of this. Any ideas of your own about these subjects? Feel free to comment. If your ideas and observations are good and interesting I might even adapt them in some way and incorporate them into the short story series.
Czech writer Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots), which famously introduced the word robot to the world, begins with synthetic humans—the robots from the title—toiling in factories to produce low-cost goods. It ends with those same robots killing off the human race. Thus was born an enduring plot line in science fiction: robots spiraling out of control and turning into unstoppable killing machines. Twentieth-century literature and film would go on to bring us many more examples of robots wreaking havoc on the world, with Hollywood notably turning the theme into blockbuster franchises like The Matrix, Transformers, and The Terminator.
Lately, fears of fiction turning to fact have been stoked by a confluence of developments, including important advances in artificial intelligence and robotics, along with the widespread use of combat drones and ground robots in Iraq and Afghanistan. The world’s most powerful militaries are now developing ever more intelligent weapons, with varying degrees of autonomy and lethality. The vast majority will, in the near term, be remotely controlled by human operators, who will be “in the loop” to pull the trigger. But it’s likely, and some say inevitable, that future AI-powered weapons will eventually be able to operate with complete autonomy, leading to a watershed moment in the history of warfare: For the first time, a collection of microchips and software will decide whether a human being lives or dies.
Not surprisingly, the threat of “killer robots,” as they’ve been dubbed, has triggered an impassioned debate. The poles of the debate are represented by those who fear that robotic weapons could start a world war and destroy civilization and others who argue that these weapons are essentially a new class of precision-guided munitions that will reduce, not increase, casualties. In December, more than a hundred countries are expected to discuss the issue as part of a United Nations disarmament meeting in Geneva.
Last year, the debate made news after a group of leading researchers in artificial intelligence called for a ban on “offensive autonomous weapons beyond meaningful human control.” In an open letter presented at a major AI conference, the group argued that these weapons would lead to a “global AI arms race” and be used for “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”
The group added that “autonomous weapons are potentially weapons of mass destruction. While some nations might not choose to use them for such purposes, other nations and certainly terrorists might find them irresistible.”
It’s hard to argue that a new arms race culminating in the creation of intelligent, autonomous, and highly mobile killing machines would well serve humanity’s best interests. And yet, regardless of the argument, the AI arms race is already under way.
Autonomous weapons have existed for decades, though the relatively few that are out there have been used almost exclusively for defensive purposes. One example is the Phalanx, a computer-controlled, radar-guided gun system installed on many U.S. Navy ships that can automatically detect, track, evaluate, and fire at incoming missiles and aircraft that it judges to be a threat. When it’s in fully autonomous mode, no human intervention is necessary.
More recently, military suppliers have developed what may be considered the first offensive autonomous weapons. Israel Aerospace Industries’ Harpy and Harop drones are designed to home in on the radio emissions of enemy air-defense systems and destroy them by crashing into them. The company says the drones “have been sold extensively worldwide.”
In South Korea, DoDAAM Systems, a defense contractor, has developed a sentry robot called the Super aEgis II. Equipped with a machine gun, it uses computer vision to autonomously detect and fire at human targets out to a range of 3 kilometers. South Korea’s military has reportedly conducted tests with these armed robots in the demilitarized zone along its border with North Korea. DoDAAM says it has sold more than 30 units to other governments, including several in the Middle East.
Today, such highly autonomous systems are vastly outnumbered by robotic weapons such as drones, which are under the control of human operators almost all of the time, especially when firing at targets. But some analysts believe that as warfare evolves in coming years, weapons will have higher and higher degrees of autonomy.
“War will be very different, and automation will play a role where speed is key,” says Peter W. Singer, a robotic warfare expert at New America, a nonpartisan research group in Washington, D.C. He predicts that in future combat scenarios—like a dogfight between drones or an encounter between a robotic boat and an enemy submarine—weapons that offer a split-second advantage will make all the difference. “It might be a high-intensity straight-on conflict when there’s no time for humans to be in the loop, because it’s going to play out in a matter of seconds.”
The U.S. military has detailed some of its plans for this new kind of war in a road map [pdf] for unmanned systems, but its intentions on weaponizing such systems are vague. During a Washington Post forum this past March, U.S. deputy secretary of defense Robert Work, whose job is in part making sure that the Pentagon is keeping up with the latest technologies, stressed the need to invest in AI and robotics. The increasing presence of autonomous systems on the battlefield “is inexorable,” he declared.
Asked about autonomous weapons, Work insisted that the U.S. military “will not delegate lethal authority to a machine to make a decision.” But when pressed on the issue, he added that if confronted by a “competitor that is more willing to delegate authority to machines than we are…we’ll have to make decisions on how we can best compete. It’s not something that we’ve fully figured out, but we spend a lot of time thinking about it.”
Russia and China are following a similar strategy of developing unmanned combat systems for land, sea, and air that are weaponized but, at least for now, rely on human operators. Russia’s Platform-M is a small remote-controlled robot equipped with a Kalashnikov rifle and grenade launchers, a type of system similar to the United States’ Talon SWORDS, a ground robot that can carry an M16 and other weapons (it was tested by the U.S. Army in Iraq). Russia has also built a larger unmanned vehicle, the Uran-9, armed with a 30-millimeter cannon and antitank guided missiles. And last year, the Russians demonstrated a humanoid military robot to a seemingly nonplussed Vladimir Putin. (In video released after the demonstration, the robot is shown riding an ATV at a speed only slightly faster than a child on a tricycle.)
China’s growing robotic arsenal includes numerous attack and reconnaissance drones. The CH-4 is a long-endurance unmanned aircraft that resembles the Predator used by the U.S. military. The Divine Eagle is a high-altitude drone designed to hunt stealth bombers. China has also publicly displayed a few machine-gun-equipped robots, similar to Platform-M and Talon SWORDS, at military trade shows.
The three countries’ approaches to robotic weapons, introducing increasing automation while emphasizing a continuing role for humans, suggest a major challenge to the banning of fully autonomous weapons: A ban on fully autonomous weapons would not necessarily apply to weapons that are nearly autonomous. So militaries could conceivably develop robotic weapons that have a human in the loop, with the option of enabling full autonomy at a moment’s notice in software. “It’s going to be hard to put an arms-control agreement in place for robotics,” concludes Wendell Wallach, an expert on ethics and technology at Yale University. “The difference between an autonomous weapons system and nonautonomous may be just a difference of a line of code,” he said at a recent conference.
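Wallach’s point about a single line of code can be made concrete with a toy sketch. Everything here is invented for illustration — no real weapon system’s software is being described — but it shows how one configuration flag could separate a human-supervised system from a fully autonomous one:

```python
# Illustrative only: a hypothetical flag showing how "a difference of a line
# of code" could separate a supervised weapon from an autonomous one.
FULL_AUTONOMY = False  # flipping this single line changes the system's character


def engagement_authorized(machine_recommends_fire: bool,
                          human_confirmed: bool) -> bool:
    """Decide whether the system may fire on a candidate target."""
    if FULL_AUTONOMY:
        # Fully autonomous mode: the machine's own recommendation suffices.
        return machine_recommends_fire
    # Human-in-the-loop mode: a person must confirm every engagement.
    return machine_recommends_fire and human_confirmed
```

An arms-control inspector looking at the deployed hardware would see no difference between the two configurations, which is exactly why Wallach expects such an agreement to be hard to verify.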
In motion pictures, robots often gain extraordinary levels of autonomy, even sentience, seemingly out of nowhere, and humans are caught by surprise. Here in the real world, though, and despite the recent excitement about advances in machine learning, progress in robot autonomy has been gradual. Autonomous weapons would be expected to evolve in a similar way.
“A lot of times when people hear ‘autonomous weapons,’ they envision the Terminator and they are, like, ‘What have we done?,’ ” says Paul Scharre, who directs a future-of-warfare program at the Center for a New American Security, a policy research group in Washington, D.C. “But that seems like probably the last way that militaries want to employ autonomous weapons.” Much more likely, he adds, will be robotic weapons that target not people but military objects like radars, tanks, ships, submarines, or aircraft.
The challenge of target identification—determining whether or not what you’re looking at is a hostile enemy target—is one of the most critical for AI weapons. Moving targets like aircraft and missiles have a trajectory that can be tracked and used to help decide whether to shoot them down. That’s how the Phalanx autonomous gun on board U.S. Navy ships operates, and also how Israel’s “Iron Dome” antirocket interceptor system works. But when you’re targeting people, the indicators are much more subtle. Even under ideal conditions, object- and scene-recognition tasks that are routine for people can be extremely difficult for robots.
A computer can identify a human figure without much trouble, even if that human is moving furtively. But it’s very hard for an algorithm to understand what people are doing, and what their body language and facial expressions suggest about their intent. Is that person lifting a rifle or a rake? Is that person carrying a bomb or an infant?
Scharre argues that robotic weapons attempting to do their own targeting would wither in the face of too many challenges. He says that devising war-fighting tactics and technologies in which humans and robots collaborate [pdf] will remain the best approach for safety, legal, and ethical reasons. “Militaries could invest in very advanced robotics and automation and still keep a person in the loop for targeting decisions, as a fail-safe,” he says. “Because humans are better at being flexible and adaptable to new situations that maybe we didn’t program for, especially in war when there’s an adversary trying to defeat your systems and trick them and hack them.”
It’s not surprising, then, that DoDAAM, the South Korean maker of sentry robots, imposed restrictions on their lethal autonomy. As currently configured, the robots will not fire until a human confirms the target and commands the turret to shoot. “Our original version had an auto-firing system,” a DoDAAM engineer told the BBC last year. “But all of our customers asked for safeguards to be implemented…. They were concerned the gun might make a mistake.”
For other experts, the only way to ensure that autonomous weapons won’t make deadly mistakes, especially involving civilians, is to deliberately program these weapons accordingly. “If we are foolish enough to continue to kill each other in the battlefield, and if more and more authority is going to be turned over to these machines, can we at least ensure that they are doing it ethically?” says Ronald C. Arkin, a computer scientist at Georgia Tech.
Arkin argues that autonomous weapons, just like human soldiers, should have to follow the rules of engagement as well as the laws of war, including international humanitarian laws that seek to protect civilians and limit the amount of force and types of weapons that are allowed. That means we should program them with some kind of moral reasoning to help them navigate different situations and fundamentally distinguish right from wrong. They will need to have, embodied deep in their software, some sort of ethical compass.
For the past decade, Arkin has been working on such a compass. Using mathematical and logic tools from the field of machine ethics, he began translating the highly conceptual laws of war and rules of engagement into variables and operations that computers can understand. For example, one variable specified how confident the ethical controller was that a target was an enemy. Another was a Boolean variable that was either true or false: lethal force was either permitted or prohibited. Eventually, Arkin arrived at a set of algorithms, and using computer simulations and very simplified combat scenarios—an unmanned aircraft engaging a group of people in an open field, for example—he was able to test his methodology.
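The kind of variables and Boolean checks described above can be sketched in a few lines. This is a minimal illustration under my own assumptions — the names, thresholds, and rules here are hypothetical, not Arkin’s actual controller:

```python
# Hypothetical sketch of an "ethical governor" style check: a confidence
# variable plus Boolean constraints yielding a permitted/prohibited answer.
# All names and thresholds are illustrative assumptions.

def lethal_force_permitted(target_confidence: float,
                           civilians_nearby: bool,
                           taking_fire: bool,
                           confidence_threshold: float = 0.95) -> bool:
    """Boolean output: lethal force is either permitted or prohibited."""
    if civilians_nearby:
        return False  # hold fire entirely when civilians are present
    if not taking_fire:
        return False  # restraint: return fire only when shot at first
    # Engage only when the controller is highly confident the target is hostile.
    return target_confidence >= confidence_threshold
```

The interesting design question, which Arkin’s simulations probe, is where thresholds like the confidence cutoff should sit, and who is accountable for choosing them.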
Arkin acknowledges that the project, which was funded by the U.S. military, was a proof of concept, not an actual control-system implementation. Nevertheless, he believes the results showed that combat robots not only could follow the same rules that humans have to follow but also that they could do better. For example, the robots could use lethal force with more restraint than could human fighters, returning fire only when shot at first. Or, if civilians are nearby, they could completely hold their fire, even if that means being destroyed. Robots also don’t suffer from stress, frustration, anger, or fear, all of which can lead to impaired judgment in humans. So in theory, at least, robot soldiers could outperform human ones, who often and sometimes unavoidably make mistakes in the heat of battle.
“And the net effect of that could be a saving of human lives, especially the innocent that are trapped in the battle space,” Arkin says. “And if these robots can do that, to me there’s a driving moral imperative to use them.”
The U.N. has been holding discussions on lethal autonomous robots for close to five years, but its member countries have been unable to draw up an agreement. In 2013, Christof Heyns, a U.N. special rapporteur for human rights, wrote an influential report noting that the world’s nations had a rare opportunity to discuss the risks of autonomous weapons before such weapons were already fully developed. Today, after participating in several U.N. meetings, Heyns says that “if I look back, to some extent I’m encouraged, but if I look forward, then I think we’re going to have a problem unless we start acting much faster.”
This coming December, the U.N.’s Convention on Certain Conventional Weapons will hold a five-year review conference, and the topic of lethal autonomous robots will be on the agenda. However, it’s unlikely that a ban will be approved at that meeting. Such a decision would require the consensus of all participating countries, and these still have fundamental disagreements on how to deal with the broad spectrum of autonomous weapons expected to emerge in the future.
In the end, the “killer robots” debate seems to be more about us humans than about robots. Autonomous weapons will be like any technology, at least at first: They could be deployed carefully and judiciously, or chaotically and disastrously. Human beings will have to take the credit or the blame. So the question, “Are autonomous combat robots a good idea?” probably isn’t the best one. A better one is, “Do we trust ourselves enough to trust robots with our lives?”
This article appears in the June 2016 print issue as “When Robots Decide to Kill.”