I’ve been thinking about this for a while now. I’ve started a new sci-fi story I’ve entitled The Sthencist. It will take place in the future and, for about the first two-thirds, it will read like an interesting (but not spectacular) Mundane Science Fiction story.
I play the Ogre in the sun
Battles in the unreal Wastes
Tracks upon the injured Earth
Men lost at bootless, useless tasks
Desperate in their race
To halt the Great Machine
Before it plows their graves
Shells fired true at noon
Scream and then erupt
Melting steel and flesh
Can any still be saved?
A flash and searing light
Mushroom clouds rise
High in Death –
Men are vaporized, but
Shielded from the shock
The Ogre shakes it off
The Ogre lumbers on,
All else is crushed like dust
Reduced to crumbled rock
In what can men now trust?
In what do they take stock?
Replacements rush the line
Their missiles roar in flight
The battle rages gold and red
Until the fall of night,
The ruins smolder long
The slag it runs like blood
The dark is still and cold
Men buried in the mud
No movement shows on scopes
No sound is heard to churn
The ground no longer shakes
No giant Beast comes forth
There is no steam or quake,
The moon spies no Mighty Monster
Neath the clouds of man-made smoke
As it runs the sky in haste –
It seems as if there’s Victory
That doom has been revoked
Yet come the dawn
Come the morning’s rise
Men in dread do hear
The distant turning wheel and clank
Of scored and tempered metal
Once more in motion on the Earth,
They feel the tremors, re-sight the Beast
As it lumbers on uncaring
Atomic piles of superheated waste
A mind of carbon frozen cold
An alloyed soul of silicates
That knows no mercy, rest, or wear
That shed no tears for corpses piled
Higher than its mighty head
With its massive maw of cannon red
And unblinking Cyclopean Eye
That watches all and sees all things
Far and wide
Except, of course, for what it does…
An Ogre plays beneath the burning sun
Battles rage in fresh ruined wastes
Tracks marred hard upon the dying Earth
Men lost in their hellish tasks
Desperate in their race
To kill the Great Machine
Before it plows their countless graves
Yet come the dusk before the dawn
Come the night before the fight
The monster knows not, cares not
Asks not, hears not the wounded
Cries and screams of men
For the Ogre has no doubt
It dreams no dream
Except the dream of what it does –
War unchecked, and winning that
The End of All,
Himself as well
To die alone
Beneath a burning sun
Unneeded anymore…
Ogre is actually based on the old Keith Laumer series Bolo. Some of my favorite sci-fi tales to read when I was a kid. I recommend both the game and the books.
Today I played a particularly superb scenario that I had also written for myself. It was a “Total War Scenario.”
By the end of the scenario the human forces had been completely annihilated and the Ogre could not move having also been almost entirely destroyed.
I’ve decided to call it my War Unchecked Scenario. Or the End-All scenario.
Rarely have I fought or devised a wargames scenario that resulted in such near-perfect annihilation of both sides.
After ruminating on what it implied for a little while I wrote the poem above, The End of All.
I was surprised at how well the poem came out, especially since it was based upon nothing more than a wargames scenario. And it has been very well received and critiqued by my family, friends, and the followers of my poetry and writings. Then again, I always try to envision any wargame (or role-play game, or real-life training scenario) as realistically as possible in my head, in order to execute it properly and to see what real-life lessons might be learned.
So this poem will go into my file for Espionage and Military and Survival Poetry. Later I will seek to have it published. You are welcome to make your own comments on it as well.
A lot of my buddies have military and law enforcement backgrounds.
Because of that one of my friends brought this article to my attention and a few of us discussed it since it is of more than passing interest to many of us.
It gave me an idea for a new science fiction short story about the same subject matter which I’m going to call Jihadology. (For the Jihad of Technology.)
I’m going to completely avoid the whole Terminator, tech-gone-rogue approach of modern sci-fi, though, and instead take a particular variation on the Keith Laumer BOLO theme, even though there will be nothing about BOLOs or other such machines in the story. Those stories were as under-rated and prophetic as Laumer himself.
Anyway I want to avoid the whole world ending, unrealistic bullcrap kind of story (both from the scientific and military standpoints) and focus more on a very tight interpretation of what might actually happen if technologies such as those listed or projected in the article below were employed against an alien species in the future.
What would be both the operational and eventual ramifications, good and bad, of such technologies, and how could such technologies get out of hand or evolve beyond specified tasks and design parameters to become something completely new in function and focus?
I’ve already got the first few paragraphs to a page written which is based loosely upon this observation I made about what the article implied:
“I’m not saying there are any easy answers, there aren’t when it comes to technology, but technology can at least potentially do two related and diametrically opposed things at once: make a task so easy and efficient and risk-free for the operator that he is never truly in danger for himself, and secondly make a task so easy and efficient and risk-free for the operator that he is never truly in danger of understanding the danger others are in.
And if you can just remove the operator altogether, and just set the tech free to do as it is programmed, well then, there ya go…”
If the stories work well then I’ll add them to my overall science fiction universe of The Curae and The Frontiersmen.
By the way, as a sort of pop-culture primer on the very early stages of these developments (though they are at least a decade old now as far as wide-scale operations go) I recommend the film, Good Kill.
Anyway here is the very interesting and good article that spurred all of this. Any ideas of your own about these subjects? Feel free to comment. If your ideas and observations are good and interesting I might even adapt them in some way and incorporate them into the short story series.
Czech writer Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots), which famously introduced the word robot to the world, begins with synthetic humans—the robots from the title—toiling in factories to produce low-cost goods. It ends with those same robots killing off the human race. Thus was born an enduring plot line in science fiction: robots spiraling out of control and turning into unstoppable killing machines. Twentieth-century literature and film would go on to bring us many more examples of robots wreaking havoc on the world, with Hollywood notably turning the theme into blockbuster franchises like The Matrix, Transformers, and The Terminator.
Lately, fears of fiction turning to fact have been stoked by a confluence of developments, including important advances in artificial intelligence and robotics, along with the widespread use of combat drones and ground robots in Iraq and Afghanistan. The world’s most powerful militaries are now developing ever more intelligent weapons, with varying degrees of autonomy and lethality. The vast majority will, in the near term, be remotely controlled by human operators, who will be “in the loop” to pull the trigger. But it’s likely, and some say inevitable, that future AI-powered weapons will eventually be able to operate with complete autonomy, leading to a watershed moment in the history of warfare: For the first time, a collection of microchips and software will decide whether a human being lives or dies.
Not surprisingly, the threat of “killer robots,” as they’ve been dubbed, has triggered an impassioned debate. The poles of the debate are represented by those who fear that robotic weapons could start a world war and destroy civilization and others who argue that these weapons are essentially a new class of precision-guided munitions that will reduce, not increase, casualties. In December, more than a hundred countries are expected to discuss the issue as part of a United Nations disarmament meeting in Geneva.
Last year, the debate made news after a group of leading researchers in artificial intelligence called for a ban on “offensive autonomous weapons beyond meaningful human control.” In an open letter presented at a major AI conference, the group argued that these weapons would lead to a “global AI arms race” and be used for “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”
The letter’s authors added that “autonomous weapons are potentially weapons of mass destruction. While some nations might not choose to use them for such purposes, other nations and certainly terrorists might find them irresistible.”
It’s hard to argue that a new arms race culminating in the creation of intelligent, autonomous, and highly mobile killing machines would well serve humanity’s best interests. And yet, regardless of the argument, the AI arms race is already under way.
Autonomous weapons have existed for decades, though the relatively few that are out there have been used almost exclusively for defensive purposes. One example is the Phalanx, a computer-controlled, radar-guided gun system installed on many U.S. Navy ships that can automatically detect, track, evaluate, and fire at incoming missiles and aircraft that it judges to be a threat. When it’s in fully autonomous mode, no human intervention is necessary.
More recently, military suppliers have developed what may be considered the first offensive autonomous weapons. Israel Aerospace Industries’ Harpy and Harop drones are designed to home in on the radio emissions of enemy air-defense systems and destroy them by crashing into them. The company says the drones “have been sold extensively worldwide.”
In South Korea, DoDAAM Systems, a defense contractor, has developed a sentry robot called the Super aEgis II. Equipped with a machine gun, it uses computer vision to autonomously detect and fire at human targets out to a range of 3 kilometers. South Korea’s military has reportedly conducted tests with these armed robots in the demilitarized zone along its border with North Korea. DoDAAM says it has sold more than 30 units to other governments, including several in the Middle East.
Today, such highly autonomous systems are vastly outnumbered by robotic weapons such as drones, which are under the control of human operators almost all of the time, especially when firing at targets. But some analysts believe that as warfare evolves in coming years, weapons will have higher and higher degrees of autonomy.
“War will be very different, and automation will play a role where speed is key,” says Peter W. Singer, a robotic warfare expert at New America, a nonpartisan research group in Washington, D.C. He predicts that in future combat scenarios—like a dogfight between drones or an encounter between a robotic boat and an enemy submarine—weapons that offer a split-second advantage will make all the difference. “It might be a high-intensity straight-on conflict when there’s no time for humans to be in the loop, because it’s going to play out in a matter of seconds.”
The U.S. military has detailed some of its plans for this new kind of war in a road map [pdf] for unmanned systems, but its intentions on weaponizing such systems are vague. During a Washington Post forum this past March, U.S. deputy secretary of defense Robert Work, whose job is in part making sure that the Pentagon is keeping up with the latest technologies, stressed the need to invest in AI and robotics. The increasing presence of autonomous systems on the battlefield “is inexorable,” he declared.
Asked about autonomous weapons, Work insisted that the U.S. military “will not delegate lethal authority to a machine to make a decision.” But when pressed on the issue, he added that if confronted by a “competitor that is more willing to delegate authority to machines than we are…we’ll have to make decisions on how we can best compete. It’s not something that we’ve fully figured out, but we spend a lot of time thinking about it.”
Russia and China are following a similar strategy of developing unmanned combat systems for land, sea, and air that are weaponized but, at least for now, rely on human operators. Russia’s Platform-M is a small remote-controlled robot equipped with a Kalashnikov rifle and grenade launchers, a type of system similar to the United States’ Talon SWORDS, a ground robot that can carry an M16 and other weapons (it was tested by the U.S. Army in Iraq). Russia has also built a larger unmanned vehicle, the Uran-9, armed with a 30-millimeter cannon and antitank guided missiles. And last year, the Russians demonstrated a humanoid military robot to a seemingly nonplussed Vladimir Putin. (In video released after the demonstration, the robot is shown riding an ATV at a speed only slightly faster than a child on a tricycle.)
China’s growing robotic arsenal includes numerous attack and reconnaissance drones. The CH-4 is a long-endurance unmanned aircraft that resembles the Predator used by the U.S. military. The Divine Eagle is a high-altitude drone designed to hunt stealth bombers. China has also publicly displayed a few machine-gun-equipped robots, similar to Platform-M and Talon SWORDS, at military trade shows.
The three countries’ approaches to robotic weapons, introducing increasing automation while emphasizing a continuing role for humans, suggest a major challenge to the banning of fully autonomous weapons: A ban on fully autonomous weapons would not necessarily apply to weapons that are nearly autonomous. So militaries could conceivably develop robotic weapons that have a human in the loop, with the option of enabling full autonomy at a moment’s notice in software. “It’s going to be hard to put an arms-control agreement in place for robotics,” concludes Wendell Wallach, an expert on ethics and technology at Yale University. “The difference between an autonomous weapons system and nonautonomous may be just a difference of a line of code,” he said at a recent conference.
In motion pictures, robots often gain extraordinary levels of autonomy, even sentience, seemingly out of nowhere, and humans are caught by surprise. Here in the real world, though, and despite the recent excitement about advances in machine learning, progress in robot autonomy has been gradual. Autonomous weapons would be expected to evolve in a similar way.
“A lot of times when people hear ‘autonomous weapons,’ they envision the Terminator and they are, like, ‘What have we done?,’ ” says Paul Scharre, who directs a future-of-warfare program at the Center for a New American Security, a policy research group in Washington, D.C. “But that seems like probably the last way that militaries want to employ autonomous weapons.” Much more likely, he adds, will be robotic weapons that target not people but military objects like radars, tanks, ships, submarines, or aircraft.
The challenge of target identification—determining whether or not what you’re looking at is a hostile enemy target—is one of the most critical for AI weapons. Moving targets like aircraft and missiles have a trajectory that can be tracked and used to help decide whether to shoot them down. That’s how the Phalanx autonomous gun on board U.S. Navy ships operates, and also how Israel’s “Iron Dome” antirocket interceptor system works. But when you’re targeting people, the indicators are much more subtle. Even under ideal conditions, object- and scene-recognition tasks that are routine for people can be extremely difficult for robots.
A computer can identify a human figure without much trouble, even if that human is moving furtively. But it’s very hard for an algorithm to understand what people are doing, and what their body language and facial expressions suggest about their intent. Is that person lifting a rifle or a rake? Is that person carrying a bomb or an infant?
Scharre argues that robotic weapons attempting to do their own targeting would wither in the face of too many challenges. He says that devising war-fighting tactics and technologies in which humans and robots collaborate [pdf] will remain the best approach for safety, legal, and ethical reasons. “Militaries could invest in very advanced robotics and automation and still keep a person in the loop for targeting decisions, as a fail-safe,” he says. “Because humans are better at being flexible and adaptable to new situations that maybe we didn’t program for, especially in war when there’s an adversary trying to defeat your systems and trick them and hack them.”
It’s not surprising, then, that DoDAAM, the South Korean maker of sentry robots, imposed restrictions on their lethal autonomy. As currently configured, the robots will not fire until a human confirms the target and commands the turret to shoot. “Our original version had an auto-firing system,” a DoDAAM engineer told the BBC last year. “But all of our customers asked for safeguards to be implemented…. They were concerned the gun might make a mistake.”
For other experts, the only way to ensure that autonomous weapons won’t make deadly mistakes, especially involving civilians, is to deliberately program these weapons accordingly. “If we are foolish enough to continue to kill each other in the battlefield, and if more and more authority is going to be turned over to these machines, can we at least ensure that they are doing it ethically?” says Ronald C. Arkin, a computer scientist at Georgia Tech.
Arkin argues that autonomous weapons, just like human soldiers, should have to follow the rules of engagement as well as the laws of war, including international humanitarian laws that seek to protect civilians and limit the amount of force and types of weapons that are allowed. That means we should program them with some kind of moral reasoning to help them navigate different situations and fundamentally distinguish right from wrong. They will need to have, embodied deep in their software, some sort of ethical compass.
For the past decade, Arkin has been working on such a compass. Using mathematical and logic tools from the field of machine ethics, he began translating the highly conceptual laws of war and rules of engagement into variables and operations that computers can understand. For example, one variable specified how confident the ethical controller was that a target was an enemy. Another was a Boolean variable that was either true or false: lethal force was either permitted or prohibited. Eventually, Arkin arrived at a set of algorithms, and using computer simulations and very simplified combat scenarios—an unmanned aircraft engaging a group of people in an open field, for example—he was able to test his methodology.
Arkin acknowledges that the project, which was funded by the U.S. military, was a proof of concept, not an actual control-system implementation. Nevertheless, he believes the results showed that combat robots not only could follow the same rules that humans have to follow but also that they could do better. For example, the robots could use lethal force with more restraint than could human fighters, returning fire only when shot at first. Or, if civilians are nearby, they could completely hold their fire, even if that means being destroyed. Robots also don’t suffer from stress, frustration, anger, or fear, all of which can lead to impaired judgment in humans. So in theory, at least, robot soldiers could outperform human ones, who often and sometimes unavoidably make mistakes in the heat of battle.
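To make the idea concrete, here is a rough, purely illustrative sketch of the kind of gating logic the two paragraphs above describe: a confidence variable for target identification, a Boolean lethal-force permission, a hold-fire rule near civilians, and a return-fire-only rule. The function name, parameters, and threshold are all hypothetical, not Arkin’s actual formalism or code.

```python
# Hypothetical sketch of an "ethical governor" gate. All names and the
# 0.95 threshold are invented for illustration only.

def lethal_force_permitted(enemy_confidence: float,
                           civilians_nearby: bool,
                           under_fire: bool,
                           confidence_threshold: float = 0.95) -> bool:
    """Return True only if every ethical constraint allows engagement."""
    if civilians_nearby:      # hold fire entirely when civilians are present,
        return False          # even at the cost of being destroyed
    if not under_fire:        # return fire only when shot at first
        return False
    # engage only when identification confidence clears the threshold
    return enemy_confidence >= confidence_threshold

# Example checks
print(lethal_force_permitted(0.99, civilians_nearby=False, under_fire=True))
print(lethal_force_permitted(0.99, civilians_nearby=True, under_fire=True))
```

The point of such a structure is that every constraint is a hard veto: lethal force is prohibited by default and permitted only when all conditions pass, which is how the robots in Arkin’s simulations could show more restraint than human fighters.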
“And the net effect of that could be a saving of human lives, especially the innocent that are trapped in the battle space,” Arkin says. “And if these robots can do that, to me there’s a driving moral imperative to use them.”
The U.N. has been holding discussions on lethal autonomous robots for close to five years, but its member countries have been unable to draw up an agreement. In 2013, Christof Heyns, a U.N. special rapporteur for human rights, wrote an influential report noting that the world’s nations had a rare opportunity to discuss the risks of autonomous weapons before such weapons were already fully developed. Today, after participating in several U.N. meetings, Heyns says that “if I look back, to some extent I’m encouraged, but if I look forward, then I think we’re going to have a problem unless we start acting much faster.”
This coming December, the U.N.’s Convention on Certain Conventional Weapons will hold a five-year review conference, and the topic of lethal autonomous robots will be on the agenda. However, it’s unlikely that a ban will be approved at that meeting. Such a decision would require the consensus of all participating countries, and these still have fundamental disagreements on how to deal with the broad spectrum of autonomous weapons expected to emerge in the future.
In the end, the “killer robots” debate seems to be more about us humans than about robots. Autonomous weapons will be like any technology, at least at first: They could be deployed carefully and judiciously, or chaotically and disastrously. Human beings will have to take the credit or the blame. So the question, “Are autonomous combat robots a good idea?” probably isn’t the best one. A better one is, “Do we trust ourselves enough to trust robots with our lives?”
This article appears in the June 2016 print issue as “When Robots Decide to Kill.”
Well, they might very well convince me anyway. Since I was a kid I’ve wanted to be an astronaut and I’m more than ready to blow this rock. Too many damned backwards, insane, and evil people populating Earth at the moment.
Of course I reckon to some extent it’s always been that way, and maybe we’d just take our twisted bullshit with us. But at least it’d be a chance at a fresh start…
Okay, poster. You make a compelling argument—sign us up!
True, there will be obstacles: For one, the Martian corps that these recruitment posters from Kennedy Space Center are attempting to enlist us in does not exist. Also, as of yet, no human has ever set foot on the surface of the red planet, much less worked some kind of shadowy night-watch position that (rather terrifyingly) appears to require the constant use of a space harpoon.
But, no matter! The can-do spirit of these WWI- and WWII-influenced posters has already inspired us. We will be teachers, and welders, and farmers, and satellite technicians, and guards against the Martian night-octopuses that presumably overrun its lunar plains. Just let us know when those enlistment rolls open up.
Full resolutions, suitable for printing on your own, are also publicly available right here.
I finally have the ultimate titles for my set of mythic/high-fantasy novels. They shall be called Kal-Kithariune (Or, The Fall of Kitharia). Originally the series was to be called The Other World but I was never really pleased with that. It was only a preliminary and place-holder title anyway.
The Kal-Kithariune shall link back to another myth/history or time epoch called the Kol-Kithariad (or the Rebirth or the Establishment of Kitharia). I have not really decided if the Kithariad will refer to a period of time 300 years prior to the Kithariune (when Kitharia undergoes a Rebirth or Renaissance) or to a period 3000 years prior when Kitharia is first established and founded.
Ideally I’d like to work it out so that the Kithariad refers to the Rebirth of Kitharia, 300 years before its Fall, but realistically I’m having real trouble making that fit and so it may have to refer to the Founding. It may be better to use the Founding as the other reference point anyway, to contrast the Genesis with the Armageddon and End. But I’d prefer the Rebirth. Though that might be impossible.
Kitharia is both an analogy and a metaphor for America, and all of the Eldeven lands for the West, even though the events take place in what would in our world be the Orient (near our real-world Samarkand).
The individual novels in the series will be entitled:
The Basilegate (The Emperor’s Legate)
The Caerkara (The Expeditionary Force)
The Wyrding Road
The Other World (or perhaps Lurial and Iÿarlðma)
The novels will be a tetralogy. Now that I finally have all of the titles, know the plots and endings of all four books, have the languages developed, many of the poems and songs written, some of the maps and illustrations drawn, hundreds of entries in my Plot Machine, thousands of notes, and about 200 pages of each of the first two books written, I suspect I can complete the entire tetralogy in under two years.
This is by far the most complicated thing I have ever constructed (to date), at least as far as writing goes, and that includes a couple of epic poems I’ve written. I first conceived it in 2007 as a single book, and I’m sure I have thousands and thousands of hours sunk into it since then, despite my other workloads.
Eventually I plan to write a set of children’s short stories connected to it and to at least plan out or begin the Kithariad though that will likely have to be passed on to others.
Before I start either of those though I just want to complete the Kithariune and then move on to my other novels, such as my sci-fi series The Curae (which will be every bit as big as the Kithariune), my detective novels, and my Frontiers novels, such as The Regulator and the Lettermen. And I want to complete my literary novels such as Modern Man and The Cache of Saint Andrew. Plus I want to finish my epic poem America. And I want to write some scripts. Not just TV scripts but movie scripts. So once I finish the Kithariune it may be a long while before I return to myth and fantasy, such as after my “retirement” (though I don’t plan to ever really retire).
I have however learned much by writing the Kithariune. I now know exactly how to plot out both long, complex novels and series, and much simpler single books. So the learning and research and study period was worth it alone in that respect. And it should both add to the richness of the Kithariune and to all of the other novels I write thereafter.
AN ACCOUNTING SO FAR AND A BIT OF ADVICE FOR NATIONAL NOVEL WRITING MONTH
My Word Count output for the first day of NaNoWriMo 2015 and my novel The Old Man was 2373 words plus (I lost count after that because I wrote another scene right before bed). Today, since it is raining so hard and I can’t go help my daughter look for a new car, I plan to have an output of 3000 or more words.
I have also been using the Writing Tools I received in my NNWM writing packet along with my own Tools.
This morning I wrote what I thought was a superb introduction and set of first lines for the science-fiction part of the novel. But I still have a lot of work to do today.
Rather than writing in linear or chronological order, I seem to be writing the book out in independent scene-sections as they occur to me, which I’m assuming my mind will knit together in proper order later on.
I am very much enjoying working “sans editing,” that is, avoiding the editing process altogether as I go. This has made the writing process itself much, much easier. And this may be a better and faster way for me to write in the future, though it takes some mental effort on my part to get used to. Old habits die hard.
Also I am not typing anything myself but rather producing the manuscript in long-hand at my kitchen table or in bed. The way I used to write as a kid. Before I got my first typewriter in High School or my first personal computer. I very much recommend this (recently rediscovered) method. It not only produces a superior thought and plot flow, it is much more psychically comfortable than typing or dictating at my computer or office chair, both of which I detest.
Plus as I go back to hand-writing I am once again becoming very quick at it.
Tomorrow I plan to conduct a test to see how fast I am at each method: composing at my computer versus writing by hand. I suspect I am faster at hand-writing. Certainly I enjoy it more, and it is far easier for me to write that way.