Wyrdwend

The Filidhic Literary Blog of Jack Günter

THE OBSERVATION OF FAILURE

Failure is the one thing that modern men are almost always willing to excuse and yet are almost never willing to learn from. No wonder it does them so little good.

from The Business, Career, and Work of Man

SPOOKED

If you are a young black male and you don’t understand that police are sometimes spooked by you, especially if you live in a high-crime urban area, then you aren’t thinking this out very far. Now as a black kid or man is that necessarily your fault? If you are law abiding and peaceful and doing the best you can, then no, it is not your fault. But at the moment anyway, it is the way it is. And no one can argue the way things actually are. You might not like it, and in this case you shouldn’t like it, but you can’t argue it’s not true.
If you are a police officer and you don’t understand that a lot of young black males (or others) are sometimes spooked by you, especially if you react to them with automatic suspicion or an assumption of guilt, then you aren’t thinking this out very far. Now as a cop is this your fault? If you are a good cop and doing the best you can, then no, it is not your fault. But at this moment, anyway, this is the way it is. And no one can argue the way things actually are. You might not like it, and in this case you shouldn’t like it, but you can’t argue it’s not true.
Everyone is spooked. Sometimes for entirely legitimate reasons and sometimes for assumptively dubious and entirely erroneous reasons. And when people are spooked, then rightly or wrongly, bad things tend to happen. People react instead of carefully observe, people are triggered by instinct rather than reason, people’s emotions become actively paramount rather than their common sense. The results of those habits are often very bad (certainly stupid and unnecessary), even murderous things.
But no one but criminals and terrorists and very bad men will benefit if young law abiding citizens and young men and the police are spooked of each other, and are reflexively hostile towards and automatically dubious of each other.
What’s the answer? Hell, I wish I could tell you the answer. The one that will work in every case. But no answer will work in every case. That’s just not real life. Not the way real people are. People are people. They will at times revert to their worst instincts or their most illogical and counter-productive habits. Or even to bad or incomplete or misguided training.
However I can tell you this much: When you are angry at each other, and vengeful towards each other, and automatically suspicious of each other, and spooked by each other then no real good can come of that. And no solutions either. Sometimes though, just really thinking and dwelling on the problem can give you an understanding of how to start.
However I can tell you what ought to be happening. What ought to be happening is that young black men, the law abiding and decent and good ones, should be working with the police to take down criminals and thugs and terrorists in their own neighborhoods and to straighten out those neighborhoods for everyone else. (Including for the benefit and safety of their own children and women.) What ought to be happening is that cops should not be automatically suspicious of all young black men who live in a dangerous area (and yes, they have every right to own personal firearms and maybe even more reason than most – because, well, think about it, they live in a bad or violent or high crime neighborhood) and instead the police ought to be conscripting the young, decent, good ones as allies and informants and friends to help clean up bad neighborhoods. (And good cops cannot stand beside or defend bad ones, or even wrong ones.) There should be an alliance and a true friendship and a partnership between citizen and police, but that has to run in both directions at once, and respect and protection and cooperation and trust have to also run in both directions at once, and keep running in both directions at all times and as much as humanly possible.
Now I fully understand human beings and their true natures. I’m not fooled by how things will have to go or will go, or are even likely to go. And I’m not gonna try and deceive you with a bunch of feel good, talk-show, pop-psychology, fairy dust and glitterized bullshit. Mistakes will be made and will continue to be made. That’s human nature. Humans are imperfect. But no one should defend wrongdoing in either direction and over time the mistakes should become fewer and fewer, and even less and less egregious.
But this shit has got to stop people. My nation is already entirely fucked up enough as it is. Manslaughter and mass murder and unending suspicion and chaos and innocents being slaughtered and riots in cities and snipers on rooftops and kids shot dead out of suspicion is not the way. We’ve nowhere else to go from here but straight down to hell.
Being spooked all of the time will make spooks of far too many of us. Dead men in a dying land. It is a false hope to live as ghosts in a ghostland, to be half-men in a dead land, when we could be a Great Thing in a Great Land.
We should all be living and thriving and growing and developing, and at and about worthwhile, profitable enterprises.
What we’re doing right now ain’t working, and it can’t work. And, in the end, because it cannot hope to succeed, for anyone, it will have to be abandoned anyway. Or to stubborn self-ruin we go.
I hate even mentioning shit like this because I despise politics being interjected into life and death matters and matters of Right and Wrong. Right and Wrong should always stand on its own because, well hell, it’s fucking Right and Wrong. If you don’t get that then I can’t help you. Truth is you should never have to interject race or class or sex or any other far lesser considerations into Right and Wrong. But my wife is black, and my kids are half-black, and a lot of my good friends are black. And I grew up around cops and I’ve worked crime and tracked murderers and rapists and thieves (and I know exactly how it works, I’m not in the least naïve or misguided about how criminals and terrorists are) and a lot of my good friends are cops and God damn it all to hell this ain’t fucking working.
I’m sitting here about to cry just thinking about all of the totally useless, murderous, violent shit I’ve seen over the years and I don’t fucking cry. And I keep thinking, Christ in Heaven, damn this mindless, habitual shit, don’t they ever, ever, ever fucking get it? How useless this shit is? How utterly unnecessary most of it is!!?
And if they don’t get it by now then what will it actually take?
Look, I’m under no illusion that most criminals are gonna get what I’m saying. Nor are they gonna care. But by God, why can’t the rest of us? Get it?
So start now. For God’s sake. For your own sakes… Start doing things differently. Start treating each other differently. What in the fuck do any of us have to lose if we all do this differently?
Otherwise this shit is all you’re gonna have and this cycle of idiocy and death is all you’re going to have to hand down to your children and grandchildren.
You’ve already bankrupted them. Do you want to hand them down this useless shit too?
So man the fuck up already people, throw in together, and stop being so fucking spooked when you don’t need to be. And stop giving out reasons for others to be spooked by you too.
Because what we’re doing right now can’t possibly work over time.
And we’re running the hell out of time.
Pray for your nation folks. Pray for your own understanding. But just as importantly, if not more so, start doing things differently.
This shit is all on us. The solution will be on us too.
Or the doom and the fucking damnation will be.
And I for one have had fucking enough of the doom and the damnation.
I want to see things the way they ought to be. I want to see all men behaving as they should.
For God’s sake, for your own sakes, don’t you?

MY WIFE’S BODY

My wife arrived home from a trip to the beach on July the First. The next morning we had get-home sex. She went to sleep afterwards but I got up and wrote the following poem.

I like the poem a lot but I am having difficulty naming it. I like these potential names/titles: Rich Everafter, Those Treasures Within, The Labors of Love, The Harvest of Human Labors, or Sweat of Our Love. 

If you have a preference among those or would like to make your own suggestion then feel free to do so. I look forward to reading your ideas.

Here is the poem. Let me know what you think, and what you would call it.

My wife’s body is naked and soft like broken ground
My wife’s body smells rich like fertile soil
My wife’s body is dark and moist like morning loam the restless Meander has watered at sunrise

I think that I shall plow her deeply again when she wakes and see what treasures within us both lie hid

Like the open fields of tended Pharos or the silty banks of the flooded Nile we shall suddenly sprout silver and salt and bare fecund Earth overflowing with milk and honey and blood dark wine and rampant wild oats and thus shall we feed ourselves a lifetime on the harvest of our human labors and the sweat of our love

My wife’s body is naked

My body is naked

Now shall we again labor in earnest, produce in abundance, and be rich everafter…

UNTOLD LAYERS

Untold layers of a man, say I

But three most vital and prime: Body, Mind, and Soul.

[Image: Vitruvian Man]

Of Body – movement, grace, strength, and sensation
Of Mind – craft, thought, apprehension, and creation
Of Soul – his inmost Self, Endurance, Honor, Truth, and Love

Untold layers of a man, say I

On Three All Other Things Depend

TO PORT OUR HOME, TO STARBOARD STILL UNKNOWN

I began this poem around noon as a response to today’s Daily Post prompt on Voyage. I got two stanzas in, and then my daughters needed my help, and then someone working with me on one of my start-ups demanded my attention, and so I have had to leave it at this point. I apologize, but that kinda thing happens in life.

I intend to finish it but cannot do so at the moment. I hope you enjoy it nonetheless, and have a good day folks…

 

TO PORT OUR HOME, TO STARBOARD STILL UNKNOWN

To port was home, to starboard unknown foreign seas, and
Lands bespoken of in dream, where endless roam great beasts
Not seen since man was in the cradle of his mother’s shore
The stars still young and uncertain in their unfixed course
Across the skies of night still bright with constellated myth
Prodigious like the unseen figures which grappled in the dark
Around the moon’s white lantern in desperate search of a world
So new, so full of wonder, that no other home would do,
Not, at least, to the Daring

To port is home but on every other course the waves break
Upon a soil unsown with the tares and tears that common habit
Bestrew along the Earth we know so well by mundane states
Unchallenged in their broad decay and rush to ruin
Across the fields of ancient countries whose ground is salted
With the misery of crawling empires and rotting kingdoms
Made of man beneath the shadow of what is most foul within him
So old, so full of apathy, that no such home can seem true
Not, at least, to the Wise…

EMPTY

I was working on a short story when I happened across the Daily Post whose prompt-subject matter was Empty. Now I’ve had a lot of personal experience with Empty over the course of my life, both the good kind, and the bad kind. So I thought I’d make a post about that and turned out this poem at lunch. Hope you enjoy it.

Have a good day folks.

 

EMPTY

I once was empty, full of naught
By calculation, mind and thought

I once was empty, hollowed out
Melancholy, heart in doubt

I once was empty, fearless, cold
My fury made me endless bold

I once was empty, cast alone
It sharpened me so I was honed

I once was empty, bleak despair
My atmosphere a poisoned air

I once was empty, of myself
That was joy I could regale

I once was empty, God was gone
Why had He left me all alone?

I know of empty, made and true
I know of empty, me and you
I know of empty, blessed, good
I know of empty, as I should

For Empty is a Friend of mine
That gives me all, and then sometimes
Relieves me of all I have known
So I am ever forced to roam

In search of what is not…

So empty anymore.

THE FUTURE OF THE WAR MANCHINE

A lot of my buddies have military and law enforcement backgrounds.

Because of that one of my friends brought this article to my attention and a few of us discussed it since it is of more than passing interest to many of us.

It gave me an idea for a new science fiction short story about the same subject matter which I’m going to call Jihadology. (For the Jihad of Technology.)

I’m going to completely avoid the whole Terminator, tech-gone-rogue approach of modern sci-fi, though, and rather take a particular variation on the Keith Laumer BOLO theme, though there will be nothing about BOLOs or other such machines in the story. Those stories were as under-rated and prophetic as Laumer himself.

Anyway I want to avoid the whole world-ending, unrealistic bullcrap kind of story (both from the scientific and military standpoints) and focus more on a very tight interpretation of what might actually happen if technologies such as those listed or projected in the article below were employed against an alien species in the future.

What would be both the operational and eventual ramifications, good and bad, of such technologies, and how could such technologies get out of hand or evolve beyond specified tasks and design parameters to become something completely new in function and focus?

I’ve already got the first few paragraphs to a page written which is based loosely upon this observation I made about what the article implied:

“I’m not saying there are any easy answers, there aren’t when it comes to technology, but technology can at least potentially do two related and diametrically opposed things at once: make a task so easy and efficient and risk-free for the operator that he is never truly in danger for himself, and secondly make a task so easy and efficient and risk-free for the operator that he is never truly in danger of understanding the danger others are in.

And if you can just remove the operator altogether, and just set the tech free to do as it is programmed, well then, there ya go…”

 

If the stories work well then I’ll add them to my overall science fiction universe of The Curae and The Frontiersmen.

By the way, as a sort of pop-culture primer on the very early stages of these developments (though they are at least a decade old now as far as wide-scale operations go) I recommend the film, Good Kill.

Anyway here is the very interesting and good article that spurred all of this. Any ideas of your own about these subjects? Feel free to comment. If your ideas and observations are good and interesting I might even adapt them in some way and incorporate them into the short story series.

 

Do We Want Robot Warriors to Decide Who Lives or Dies?

As artificial intelligence in military robots advances, the meaning of warfare is being redefined

Illustration: Carl De Torres

Czech writer Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots), which famously introduced the word robot to the world, begins with synthetic humans—the robots from the title—toiling in factories to produce low-cost goods. It ends with those same robots killing off the human race. Thus was born an enduring plot line in science fiction: robots spiraling out of control and turning into unstoppable killing machines. Twentieth-century literature and film would go on to bring us many more examples of robots wreaking havoc on the world, with Hollywood notably turning the theme into blockbuster franchises like The Matrix, Transformers, and The Terminator.

Lately, fears of fiction turning to fact have been stoked by a confluence of developments, including important advances in artificial intelligence and robotics, along with the widespread use of combat drones and ground robots in Iraq and Afghanistan. The world’s most powerful militaries are now developing ever more intelligent weapons, with varying degrees of autonomy and lethality. The vast majority will, in the near term, be remotely controlled by human operators, who will be “in the loop” to pull the trigger. But it’s likely, and some say inevitable, that future AI-powered weapons will eventually be able to operate with complete autonomy, leading to a watershed moment in the history of warfare: For the first time, a collection of microchips and software will decide whether a human being lives or dies.

Not surprisingly, the threat of “killer robots,” as they’ve been dubbed, has triggered an impassioned debate. The poles of the debate are represented by those who fear that robotic weapons could start a world war and destroy civilization and others who argue that these weapons are essentially a new class of precision-guided munitions that will reduce, not increase, casualties. In December, more than a hundred countries are expected to discuss the issue as part of a United Nations disarmament meeting in Geneva.

Photos, Top: Isaac Brekken/Getty Images; Bottom: Mass Communication Specialist 2nd Class Jose Jaen/U.S. Navy
Mortal Combat: While drones like the MQ-9 Reaper [top], used by the U.S. military, are remotely controlled by human operators, a few robotic weapons, like the Phalanx gun [bottom] on U.S. Navy ships, can engage targets all on their own.

Last year, the debate made news after a group of leading researchers in artificial intelligence called for a ban on “offensive autonomous weapons beyond meaningful human control.” In an open letter presented at a major AI conference, the group argued that these weapons would lead to a “global AI arms race” and be used for “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”

The letter was signed by more than 20,000 people, including such luminaries as physicist Stephen Hawking and Tesla CEO Elon Musk, who last year donated US $10 million to a Boston-based institute whose mission is “safeguarding life” against the hypothesized emergence of malevolent AIs. The academics who organized the letter—Stuart Russell from the University of California, Berkeley; Max Tegmark from MIT; and Toby Walsh from the University of New South Wales, Australia—expanded on their arguments in an online article for IEEE Spectrum, envisioning, in one scenario, the emergence “on the black market of mass quantities of low-cost, antipersonnel microrobots that can be deployed by one person to anonymously kill thousands or millions of people who meet the user’s targeting criteria.”

The three added that “autonomous weapons are potentially weapons of mass destruction. While some nations might not choose to use them for such purposes, other nations and certainly terrorists might find them irresistible.”

It’s hard to argue that a new arms race culminating in the creation of intelligent, autonomous, and highly mobile killing machines would well serve humanity’s best interests. And yet, regardless of the argument, the AI arms race is already under way.

Autonomous weapons have existed for decades, though the relatively few that are out there have been used almost exclusively for defensive purposes. One example is the Phalanx, a computer-controlled, radar-guided gun system installed on many U.S. Navy ships that can automatically detect, track, evaluate, and fire at incoming missiles and aircraft that it judges to be a threat. When it’s in fully autonomous mode, no human intervention is necessary.

More recently, military suppliers have developed what may be considered the first offensive autonomous weapons. Israel Aerospace Industries’ Harpy and Harop drones are designed to home in on the radio emissions of enemy air-defense systems and destroy them by crashing into them. The company says the drones “have been sold extensively worldwide.”

In South Korea, DoDAAM Systems, a defense contractor, has developed a sentry robot called the Super aEgis II. Equipped with a machine gun, it uses computer vision to autonomously detect and fire at human targets out to a range of 3 kilometers. South Korea’s military has reportedly conducted tests with these armed robots in the demilitarized zone along its border with North Korea. DoDAAM says it has sold more than 30 units to other governments, including several in the Middle East.

Today, such highly autonomous systems are vastly outnumbered by robotic weapons such as drones, which are under the control of human operators almost all of the time, especially when firing at targets. But some analysts believe that as warfare evolves in coming years, weapons will have higher and higher degrees of autonomy.

“War will be very different, and automation will play a role where speed is key,” says Peter W. Singer, a robotic warfare expert at New America, a nonpartisan research group in Washington, D.C. He predicts that in future combat scenarios—like a dogfight between drones or an encounter between a robotic boat and an enemy submarine—weapons that offer a split-second advantage will make all the difference. “It might be a high-intensity straight-on conflict when there’s no time for humans to be in the loop, because it’s going to play out in a matter of seconds.”

The U.S. military has detailed some of its plans for this new kind of war in a road map [pdf] for unmanned systems, but its intentions on weaponizing such systems are vague. During a Washington Post forum this past March, U.S. deputy secretary of defense Robert Work, whose job is in part making sure that the Pentagon is keeping up with the latest technologies, stressed the need to invest in AI and robotics. The increasing presence of autonomous systems on the battlefield “is inexorable,” he declared.

Asked about autonomous weapons, Work insisted that the U.S. military “will not delegate lethal authority to a machine to make a decision.” But when pressed on the issue, he added that if confronted by a “competitor that is more willing to delegate authority to machines than we are…we’ll have to make decisions on how we can best compete. It’s not something that we’ve fully figured out, but we spend a lot of time thinking about it.”

Russia and China are following a similar strategy of developing unmanned combat systems for land, sea, and air that are weaponized but, at least for now, rely on human operators. Russia’s Platform-M is a small remote-controlled robot equipped with a Kalashnikov rifle and grenade launchers, a type of system similar to the United States’ Talon SWORDS, a ground robot that can carry an M16 and other weapons (it was tested by the U.S. Army in Iraq). Russia has also built a larger unmanned vehicle, the Uran-9, armed with a 30-millimeter cannon and antitank guided missiles. And last year, the Russians demonstrated a humanoid military robot to a seemingly nonplussed Vladimir Putin. (In video released after the demonstration, the robot is shown riding an ATV at a speed only slightly faster than a child on a tricycle.)

China’s growing robotic arsenal includes numerous attack and reconnaissance drones. The CH-4 is a long-endurance unmanned aircraft that resembles the Predator used by the U.S. military. The Divine Eagle is a high-altitude drone designed to hunt stealth bombers. China has also publicly displayed a few machine-gun-equipped robots, similar to Platform-M and Talon SWORDS, at military trade shows.

The three countries’ approaches to robotic weapons, introducing increasing automation while emphasizing a continuing role for humans, suggest a major challenge to the banning of fully autonomous weapons: A ban on fully autonomous weapons would not necessarily apply to weapons that are nearly autonomous. So militaries could conceivably develop robotic weapons that have a human in the loop, with the option of enabling full autonomy at a moment’s notice in software. “It’s going to be hard to put an arms-control agreement in place for robotics,” concludes Wendell Wallach, an expert on ethics and technology at Yale University. “The difference between an autonomous weapons system and nonautonomous may be just a difference of a line of code,” he said at a recent conference.
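Wallach’s “line of code” point can be made concrete with a deliberately toy sketch. Nothing below corresponds to any real weapons system; every name and the control flow are invented purely to illustrate how thin the boundary between a supervised weapon and a fully autonomous one could be in software:

```python
from dataclasses import dataclass

@dataclass
class Target:
    identified_hostile: bool  # output of some upstream (imaginary) sensor/classifier

def engage(target, human_confirms, full_autonomy=False):
    """Decide whether the (imaginary) system fires on a target."""
    if not target.identified_hostile:
        return False
    if full_autonomy:
        # The "one line of code" difference: skip the human entirely.
        return True
    # Supervised mode: a human operator must approve the shot.
    return human_confirms(target)

# Same code, same target; the only change is one keyword argument.
print(engage(Target(True), human_confirms=lambda t: False))                      # False
print(engage(Target(True), human_confirms=lambda t: False, full_autonomy=True))  # True
```

The arms-control difficulty Wallach describes falls out directly: inspecting the hardware tells you nothing about which branch the software will take.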

In motion pictures, robots often gain extraordinary levels of autonomy, even sentience, seemingly out of nowhere, and humans are caught by surprise. Here in the real world, though, and despite the recent excitement about advances in machine learning, progress in robot autonomy has been gradual. Autonomous weapons would be expected to evolve in a similar way.

“A lot of times when people hear ‘autonomous weapons,’ they envision the Terminator and they are, like, ‘What have we done?,’ ” says Paul Scharre, who directs a future-of-warfare program at the Center for a New American Security, a policy research group in Washington, D.C. “But that seems like probably the last way that militaries want to employ autonomous weapons.” Much more likely, he adds, will be robotic weapons that target not people but military objects like radars, tanks, ships, submarines, or aircraft.

The challenge of target identification—determining whether or not what you’re looking at is a hostile enemy target—is one of the most critical for AI weapons. Moving targets like aircraft and missiles have a trajectory that can be tracked and used to help decide whether to shoot them down. That’s how the Phalanx autonomous gun on board U.S. Navy ships operates, and also how Israel’s “Iron Dome” antirocket interceptor system works. But when you’re targeting people, the indicators are much more subtle. Even under ideal conditions, object- and scene-recognition tasks that are routine for people can be extremely difficult for robots.

A computer can identify a human figure without much trouble, even if that human is moving furtively. But it’s very hard for an algorithm to understand what people are doing, and what their body language and facial expressions suggest about their intent. Is that person lifting a rifle or a rake? Is that person carrying a bomb or an infant?

Scharre argues that robotic weapons attempting to do their own targeting would wither in the face of too many challenges. He says that devising war-fighting tactics and technologies in which humans and robots collaborate [pdf] will remain the best approach for safety, legal, and ethical reasons. “Militaries could invest in very advanced robotics and automation and still keep a person in the loop for targeting decisions, as a fail-safe,” he says. “Because humans are better at being flexible and adaptable to new situations that maybe we didn’t program for, especially in war when there’s an adversary trying to defeat your systems and trick them and hack them.”

It’s not surprising, then, that DoDAAM, the South Korean maker of sentry robots, imposed restrictions on their lethal autonomy. As currently configured, the robots will not fire until a human confirms the target and commands the turret to shoot. “Our original version had an auto-firing system,” a DoDAAM engineer told the BBC last year. “But all of our customers asked for safeguards to be implemented…. They were concerned the gun might make a mistake.”

For other experts, the only way to ensure that autonomous weapons won’t make deadly mistakes, especially involving civilians, is to deliberately program these weapons accordingly. “If we are foolish enough to continue to kill each other in the battlefield, and if more and more authority is going to be turned over to these machines, can we at least ensure that they are doing it ethically?” says Ronald C. Arkin, a computer scientist at Georgia Tech.

Arkin argues that autonomous weapons, just like human soldiers, should have to follow the rules of engagement as well as the laws of war, includinginternational humanitarian laws that seek to protect civilians and limit the amount of force and types of weapons that are allowed. That means we should program them with some kind of moral reasoning to help them navigate different situations and fundamentally distinguish right from wrong. They will need to have, embodied deep in their software, some sort of ethical compass.

For the past decade, Arkin has been working on such a compass. Using mathematical and logic tools from the field of machine ethics, he began translating the highly conceptual laws of war and rules of engagement into variables and operations that computers can understand. For example, one variable specified how confident the ethical controller was that a target was an enemy. Another was a Boolean variable that was either true or false: lethal force was either permitted or prohibited. Eventually, Arkin arrived at a set of algorithms, and using computer simulations and very simplified combat scenarios—an unmanned aircraft engaging a group of people in an open field, for example—he was able to test his methodology.
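To show the shape (and only the shape) of what the article describes, here is a minimal sketch of such an ethical controller as a Boolean gate. The function name, the 0.95 threshold, and the specific rules are invented for illustration; Arkin’s actual work is far more elaborate than this.

```python
def lethal_force_permitted(enemy_confidence, civilians_nearby, being_fired_upon,
                           confidence_threshold=0.95):
    """Boolean governor output: is lethal force permitted right now?

    enemy_confidence: how confident the controller is (0.0-1.0) that the
    target is an enemy, in the spirit of Arkin's confidence variable.
    """
    if civilians_nearby:
        return False  # hold fire entirely when civilians are present
    if enemy_confidence < confidence_threshold:
        return False  # not confident enough in target identification
    # Use force with restraint: return fire only when shot at first.
    return being_fired_upon

print(lethal_force_permitted(0.99, civilians_nearby=False, being_fired_upon=True))  # True
print(lethal_force_permitted(0.99, civilians_nearby=True, being_fired_upon=True))   # False
```

Each rule here maps a legal or ethical constraint onto a variable a computer can evaluate, which is the core translation problem Arkin’s project set out to study.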

Arkin acknowledges that the project, which was funded by the U.S. military, was a proof of concept, not an actual control-system implementation. Nevertheless, he believes the results showed that combat robots not only could follow the same rules that humans have to follow but also that they could do better. For example, the robots could use lethal force with more restraint than could human fighters, returning fire only when shot at first. Or, if civilians are nearby, they could completely hold their fire, even if that means being destroyed. Robots also don’t suffer from stress, frustration, anger, or fear, all of which can lead to impaired judgment in humans. So in theory, at least, robot soldiers could outperform human ones, who often and sometimes unavoidably make mistakes in the heat of battle.

“And the net effect of that could be a saving of human lives, especially the innocent that are trapped in the battle space,” Arkin says. “And if these robots can do that, to me there’s a driving moral imperative to use them.”

Needless to say, that’s not at all a consensus view. Critics of autonomous weapons insist that only a preemptive ban makes sense given the insidious way these weapons are coming into existence. “There’s no one single weapon system that we’re going to point to and say, ‘Aha, here’s the killer robot,’ ” says Mary Wareham, an advocacy director at Human Rights Watch and global coordinator of the Campaign to Stop Killer Robots, a coalition of various humanitarian groups. “Because, really, we’re talking about multiple weapons systems, which will function in different ways. But the one thing that concerns us that they all seem to have in common is the lack of human control over their targeting and attack functions.”

The U.N. has been holding discussions on lethal autonomous robots for close to five years, but its member countries have been unable to draw up an agreement. In 2013, Christof Heyns, a U.N. special rapporteur for human rights, wrote an influential report noting that the world’s nations had a rare opportunity to discuss the risks of autonomous weapons before such weapons were already fully developed. Today, after participating in several U.N. meetings, Heyns says that “if I look back, to some extent I’m encouraged, but if I look forward, then I think we’re going to have a problem unless we start acting much faster.”

This coming December, the U.N.’s Convention on Certain Conventional Weapons will hold a five-year review conference, and the topic of lethal autonomous robots will be on the agenda. However, it’s unlikely that a ban will be approved at that meeting. Such a decision would require the consensus of all participating countries, and these still have fundamental disagreements on how to deal with the broad spectrum of autonomous weapons expected to emerge in the future.

In the end, the “killer robots” debate seems to be more about us humans than about robots. Autonomous weapons will be like any technology, at least at first: They could be deployed carefully and judiciously, or chaotically and disastrously. Human beings will have to take the credit or the blame. So the question, “Are autonomous combat robots a good idea?” probably isn’t the best one. A better one is, “Do we trust ourselves enough to trust robots with our lives?”

This article appears in the June 2016 print issue as “When Robots Decide to Kill.”

 

 
