“Desperate to dwell forever upon the perfidies of the past they endlessly recreate the very same within themselves and so never breed a worthwhile future, or an offspring fit for life”
Scene: The Sad and Superficial Countenance of Modern Man, from my play, Modern Man
A lot of my buddies have military and law enforcement backgrounds.
Because of that, one of my friends brought this article to my attention, and a few of us discussed it, since it is of more than passing interest to many of us.
It gave me an idea for a new science fiction short story about the same subject matter which I’m going to call Jihadology. (For the Jihad of Technology.)
I’m going to completely avoid the whole Terminator, tech-gone-rogue approach of modern sci-fi, though, and rather take a particular variation on the Keith Laumer BOLO theme, although there will be nothing about BOLOs or other such machines in the story. Those stories were as under-rated and prophetic as Laumer himself.
Anyway I want to avoid the whole world ending, unrealistic bullcrap kind of story (both from the scientific and military standpoints) and focus more on a very tight interpretation of what might actually happen if technologies such as those listed or projected in the article below were employed against an alien species in the future.
What would be both the operational and the eventual ramifications, good and bad, of such technologies, and how could such technologies get out of hand or evolve beyond specified tasks and design parameters to become something completely new in function and focus?
I’ve already got the first few paragraphs to a page written which is based loosely upon this observation I made about what the article implied:
“I’m not saying there are any easy answers, there aren’t when it comes to technology, but technology can at least potentially do two related and diametrically opposed things at once: make a task so easy and efficient and risk-free for the operator that he is never truly in danger for himself, and secondly make a task so easy and efficient and risk-free for the operator that he is never truly in danger of understanding the danger others are in.
And if you can just remove the operator altogether, and just set the tech free to do as it is programmed, well then, there ya go…”
If the stories work well then I’ll add them to my overall science fiction universe of The Curae and The Frontiersmen.
By the way, as a sort of pop-culture primer on the very early stages of these developments (though they are at least a decade old now as far as wide-scale operations go) I recommend the film, Good Kill.
Anyway here is the very interesting and good article that spurred all of this. Any ideas of your own about these subjects? Feel free to comment. If your ideas and observations are good and interesting I might even adapt them in some way and incorporate them into the short story series.
Czech writer Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots), which famously introduced the word robot to the world, begins with synthetic humans—the robots from the title—toiling in factories to produce low-cost goods. It ends with those same robots killing off the human race. Thus was born an enduring plot line in science fiction: robots spiraling out of control and turning into unstoppable killing machines. Twentieth-century literature and film would go on to bring us many more examples of robots wreaking havoc on the world, with Hollywood notably turning the theme into blockbuster franchises like The Matrix, Transformers, and The Terminator.
Lately, fears of fiction turning to fact have been stoked by a confluence of developments, including important advances in artificial intelligence and robotics, along with the widespread use of combat drones and ground robots in Iraq and Afghanistan. The world’s most powerful militaries are now developing ever more intelligent weapons, with varying degrees of autonomy and lethality. The vast majority will, in the near term, be remotely controlled by human operators, who will be “in the loop” to pull the trigger. But it’s likely, and some say inevitable, that future AI-powered weapons will eventually be able to operate with complete autonomy, leading to a watershed moment in the history of warfare: For the first time, a collection of microchips and software will decide whether a human being lives or dies.
Not surprisingly, the threat of “killer robots,” as they’ve been dubbed, has triggered an impassioned debate. The poles of the debate are represented by those who fear that robotic weapons could start a world war and destroy civilization and others who argue that these weapons are essentially a new class of precision-guided munitions that will reduce, not increase, casualties. In December, more than a hundred countries are expected to discuss the issue as part of a United Nations disarmament meeting in Geneva.
Photos, top: Isaac Brekken/Getty Images; bottom: Mass Communication Specialist 2nd Class Jose Jaen/U.S. Navy. Mortal Combat: While drones like the MQ-9 Reaper [top], used by the U.S. military, are remotely controlled by human operators, a few robotic weapons, like the Phalanx gun [bottom] on U.S. Navy ships, can engage targets all on their own.
Last year, the debate made news after a group of leading researchers in artificial intelligence called for a ban on “offensive autonomous weapons beyond meaningful human control.” In an open letter presented at a major AI conference, the group argued that these weapons would lead to a “global AI arms race” and be used for “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.”
The letter was signed by more than 20,000 people, including such luminaries as physicist Stephen Hawking and Tesla CEO Elon Musk, who last year donated US $10 million to a Boston-based institute whose mission is “safeguarding life” against the hypothesized emergence of malevolent AIs. The academics who organized the letter—Stuart Russell from the University of California, Berkeley; Max Tegmark from MIT; and Toby Walsh from the University of New South Wales, Australia—expanded on their arguments in an online article for IEEE Spectrum, envisioning, in one scenario, the emergence “on the black market of mass quantities of low-cost, antipersonnel microrobots that can be deployed by one person to anonymously kill thousands or millions of people who meet the user’s targeting criteria.”
The three added that “autonomous weapons are potentially weapons of mass destruction. While some nations might not choose to use them for such purposes, other nations and certainly terrorists might find them irresistible.”
It’s hard to argue that a new arms race culminating in the creation of intelligent, autonomous, and highly mobile killing machines would well serve humanity’s best interests. And yet, regardless of the argument, the AI arms race is already under way.
Autonomous weapons have existed for decades, though the relatively few that are out there have been used almost exclusively for defensive purposes. One example is the Phalanx, a computer-controlled, radar-guided gun system installed on many U.S. Navy ships that can automatically detect, track, evaluate, and fire at incoming missiles and aircraft that it judges to be a threat. When it’s in fully autonomous mode, no human intervention is necessary.
More recently, military suppliers have developed what may be considered the first offensive autonomous weapons. Israel Aerospace Industries’ Harpy and Harop drones are designed to home in on the radio emissions of enemy air-defense systems and destroy them by crashing into them. The company says the drones “have been sold extensively worldwide.”
In South Korea, DoDAAM Systems, a defense contractor, has developed a sentry robot called the Super aEgis II. Equipped with a machine gun, it uses computer vision to autonomously detect and fire at human targets out to a range of 3 kilometers. South Korea’s military has reportedly conducted tests with these armed robots in the demilitarized zone along its border with North Korea. DoDAAM says it has sold more than 30 units to other governments, including several in the Middle East.
Today, such highly autonomous systems are vastly outnumbered by robotic weapons such as drones, which are under the control of human operators almost all of the time, especially when firing at targets. But some analysts believe that as warfare evolves in coming years, weapons will have higher and higher degrees of autonomy.
“War will be very different, and automation will play a role where speed is key,” says Peter W. Singer, a robotic warfare expert at New America, a nonpartisan research group in Washington, D.C. He predicts that in future combat scenarios—like a dogfight between drones or an encounter between a robotic boat and an enemy submarine—weapons that offer a split-second advantage will make all the difference. “It might be a high-intensity straight-on conflict when there’s no time for humans to be in the loop, because it’s going to play out in a matter of seconds.”
The U.S. military has detailed some of its plans for this new kind of war in a road map [pdf] for unmanned systems, but its intentions on weaponizing such systems are vague. During a Washington Post forum this past March, U.S. deputy secretary of defense Robert Work, whose job is in part making sure that the Pentagon is keeping up with the latest technologies, stressed the need to invest in AI and robotics. The increasing presence of autonomous systems on the battlefield “is inexorable,” he declared.
Asked about autonomous weapons, Work insisted that the U.S. military “will not delegate lethal authority to a machine to make a decision.” But when pressed on the issue, he added that if confronted by a “competitor that is more willing to delegate authority to machines than we are…we’ll have to make decisions on how we can best compete. It’s not something that we’ve fully figured out, but we spend a lot of time thinking about it.”
Russia and China are following a similar strategy of developing unmanned combat systems for land, sea, and air that are weaponized but, at least for now, rely on human operators. Russia’s Platform-M is a small remote-controlled robot equipped with a Kalashnikov rifle and grenade launchers, a type of system similar to the United States’ Talon SWORDS, a ground robot that can carry an M16 and other weapons (it was tested by the U.S. Army in Iraq). Russia has also built a larger unmanned vehicle, the Uran-9, armed with a 30-millimeter cannon and antitank guided missiles. And last year, the Russians demonstrated a humanoid military robot to a seemingly nonplussed Vladimir Putin. (In video released after the demonstration, the robot is shown riding an ATV at a speed only slightly faster than a child on a tricycle.)
China’s growing robotic arsenal includes numerous attack and reconnaissance drones. The CH-4 is a long-endurance unmanned aircraft that resembles the Predator used by the U.S. military. The Divine Eagle is a high-altitude drone designed to hunt stealth bombers. China has also publicly displayed a few machine-gun-equipped robots, similar to Platform-M and Talon SWORDS, at military trade shows.
The three countries’ approaches to robotic weapons, introducing increasing automation while emphasizing a continuing role for humans, suggest a major challenge to the banning of fully autonomous weapons: A ban on fully autonomous weapons would not necessarily apply to weapons that are nearly autonomous. So militaries could conceivably develop robotic weapons that have a human in the loop, with the option of enabling full autonomy at a moment’s notice in software. “It’s going to be hard to put an arms-control agreement in place for robotics,” concludes Wendell Wallach, an expert on ethics and technology at Yale University. “The difference between an autonomous weapons system and nonautonomous may be just a difference of a line of code,” he said at a recent conference.
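Wallach’s “one line of code” point can be made concrete with a toy sketch (purely hypothetical; all names and logic here are invented for illustration and do not describe any real system): the same engagement loop serves both modes, and full autonomy is nothing more than a flag.

```python
# Toy illustration of the "one line of code" problem: an identical
# engagement loop runs in both modes, and flipping a single flag
# removes the human veto point entirely. Hypothetical example only.

def engagement_loop(detected_targets, human_in_loop=True, confirm=None):
    """Return the list of targets the system would engage.

    With human_in_loop=True, every engagement requires the operator's
    confirm() callback to approve; with human_in_loop=False, the same
    code engages everything the sensors flag, with no veto point.
    """
    engaged = []
    for target in detected_targets:
        if human_in_loop:
            if confirm is None or not confirm(target):
                continue  # operator absent or vetoed: hold fire
        engaged.append(target)  # stand-in for actually firing
    return engaged
```

An inspector examining such a system would see identical hardware and nearly identical software in either mode, which is exactly why verifying a ban on “fully autonomous” weapons would be so hard.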
In motion pictures, robots often gain extraordinary levels of autonomy, even sentience, seemingly out of nowhere, and humans are caught by surprise. Here in the real world, though, and despite the recent excitement about advances in machine learning, progress in robot autonomy has been gradual. Autonomous weapons would be expected to evolve in a similar way.
“A lot of times when people hear ‘autonomous weapons,’ they envision the Terminator and they are, like, ‘What have we done?,’ ” says Paul Scharre, who directs a future-of-warfare program at the Center for a New American Security, a policy research group in Washington, D.C. “But that seems like probably the last way that militaries want to employ autonomous weapons.” Much more likely, he adds, will be robotic weapons that target not people but military objects like radars, tanks, ships, submarines, or aircraft.
The challenge of target identification—determining whether or not what you’re looking at is a hostile enemy target—is one of the most critical for AI weapons. Moving targets like aircraft and missiles have a trajectory that can be tracked and used to help decide whether to shoot them down. That’s how the Phalanx autonomous gun on board U.S. Navy ships operates, and also how Israel’s “Iron Dome” antirocket interceptor system works. But when you’re targeting people, the indicators are much more subtle. Even under ideal conditions, object- and scene-recognition tasks that are routine for people can be extremely difficult for robots.
A computer can identify a human figure without much trouble, even if that human is moving furtively. But it’s very hard for an algorithm to understand what people are doing, and what their body language and facial expressions suggest about their intent. Is that person lifting a rifle or a rake? Is that person carrying a bomb or an infant?
Scharre argues that robotic weapons attempting to do their own targeting would wither in the face of too many challenges. He says that devising war-fighting tactics and technologies in which humans and robots collaborate [pdf] will remain the best approach for safety, legal, and ethical reasons. “Militaries could invest in very advanced robotics and automation and still keep a person in the loop for targeting decisions, as a fail-safe,” he says. “Because humans are better at being flexible and adaptable to new situations that maybe we didn’t program for, especially in war when there’s an adversary trying to defeat your systems and trick them and hack them.”
It’s not surprising, then, that DoDAAM, the South Korean maker of sentry robots, imposed restrictions on their lethal autonomy. As currently configured, the robots will not fire until a human confirms the target and commands the turret to shoot. “Our original version had an auto-firing system,” a DoDAAM engineer told the BBC last year. “But all of our customers asked for safeguards to be implemented…. They were concerned the gun might make a mistake.”
For other experts, the only way to ensure that autonomous weapons won’t make deadly mistakes, especially involving civilians, is to deliberately program these weapons accordingly. “If we are foolish enough to continue to kill each other in the battlefield, and if more and more authority is going to be turned over to these machines, can we at least ensure that they are doing it ethically?” says Ronald C. Arkin, a computer scientist at Georgia Tech.
Arkin argues that autonomous weapons, just like human soldiers, should have to follow the rules of engagement as well as the laws of war, including international humanitarian laws that seek to protect civilians and limit the amount of force and types of weapons that are allowed. That means we should program them with some kind of moral reasoning to help them navigate different situations and fundamentally distinguish right from wrong. They will need to have, embodied deep in their software, some sort of ethical compass.
For the past decade, Arkin has been working on such a compass. Using mathematical and logic tools from the field of machine ethics, he began translating the highly conceptual laws of war and rules of engagement into variables and operations that computers can understand. For example, one variable specified how confident the ethical controller was that a target was an enemy. Another was a Boolean variable that was either true or false: lethal force was either permitted or prohibited. Eventually, Arkin arrived at a set of algorithms, and using computer simulations and very simplified combat scenarios—an unmanned aircraft engaging a group of people in an open field, for example—he was able to test his methodology.
Arkin acknowledges that the project, which was funded by the U.S. military, was a proof of concept, not an actual control-system implementation. Nevertheless, he believes the results showed that combat robots not only could follow the same rules that humans have to follow but also that they could do better. For example, the robots could use lethal force with more restraint than could human fighters, returning fire only when shot at first. Or, if civilians are nearby, they could completely hold their fire, even if that means being destroyed. Robots also don’t suffer from stress, frustration, anger, or fear, all of which can lead to impaired judgment in humans. So in theory, at least, robot soldiers could outperform human ones, who often and sometimes unavoidably make mistakes in the heat of battle.
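The restraint rules Arkin tested lend themselves to exactly the Boolean gating he describes. A minimal sketch, under invented names and an assumed confidence threshold (this is not Arkin’s actual controller, just an illustration of how a confidence variable plus hard constraints can collapse into a single permitted/prohibited Boolean):

```python
# Hypothetical sketch of an "ethical governor": hard constraints and a
# confidence estimate gate one Boolean, lethal force permitted or not.
# The threshold and rule set are invented for illustration.

ENEMY_CONFIDENCE_THRESHOLD = 0.9  # assumed value, not from Arkin's work

def lethal_force_permitted(enemy_confidence, civilians_nearby, being_fired_upon):
    # Hard constraint: hold fire entirely if civilians are nearby,
    # even at the cost of the robot's own destruction.
    if civilians_nearby:
        return False
    # Restraint rule: return fire only when fired upon first.
    if not being_fired_upon:
        return False
    # Require high confidence that the target is actually an enemy.
    return enemy_confidence >= ENEMY_CONFIDENCE_THRESHOLD
```

Note how each branch implements one of the behaviors described above: holding fire near civilians, firing only when shot at first, and demanding high confidence in target identification.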
“And the net effect of that could be a saving of human lives, especially the innocent that are trapped in the battle space,” Arkin says. “And if these robots can do that, to me there’s a driving moral imperative to use them.”
Needless to say, that’s not at all a consensus view. Critics of autonomous weapons insist that only a preemptive ban makes sense given the insidious way these weapons are coming into existence. “There’s no one single weapon system that we’re going to point to and say, ‘Aha, here’s the killer robot,’ ” says Mary Wareham, an advocacy director at Human Rights Watch and global coordinator of the Campaign to Stop Killer Robots, a coalition of various humanitarian groups. “Because, really, we’re talking about multiple weapons systems, which will function in different ways. But the one thing that concerns us that they all seem to have in common is the lack of human control over their targeting and attack functions.”
The U.N. has been holding discussions on lethal autonomous robots for close to five years, but its member countries have been unable to draw up an agreement. In 2013, Christof Heyns, a U.N. special rapporteur for human rights, wrote an influential report noting that the world’s nations had a rare opportunity to discuss the risks of autonomous weapons before such weapons were already fully developed. Today, after participating in several U.N. meetings, Heyns says that “if I look back, to some extent I’m encouraged, but if I look forward, then I think we’re going to have a problem unless we start acting much faster.”
This coming December, the U.N.’s Convention on Certain Conventional Weapons will hold a five-year review conference, and the topic of lethal autonomous robots will be on the agenda. However, it’s unlikely that a ban will be approved at that meeting. Such a decision would require the consensus of all participating countries, and these still have fundamental disagreements on how to deal with the broad spectrum of autonomous weapons expected to emerge in the future.
In the end, the “killer robots” debate seems to be more about us humans than about robots. Autonomous weapons will be like any technology, at least at first: They could be deployed carefully and judiciously, or chaotically and disastrously. Human beings will have to take the credit or the blame. So the question, “Are autonomous combat robots a good idea?” probably isn’t the best one. A better one is, “Do we trust ourselves enough to trust robots with our lives?”
This article appears in the June 2016 print issue as “When Robots Decide to Kill.”
The other day someone asked my advice on how to conduct myself as a writer. Or actually, to be more accurate, my advice on how they might better conduct themselves as a writer, based on my prior experiences, since writing is basically a “lonesome occupation” requiring a great deal of commitment, isolation (to a degree I’ll explain momentarily), focus, determination, self-discipline, and real work. They were having trouble dealing with the “lonesome” part of the occupation.
I repeat my advice to them here in case it assists anyone else. Of course this advice could just as easily apply to artists, inventors, poets, songwriters, and even (to some extent) entrepreneurs of all kinds (all of which I am) with but a few minor modifications. So this is my Highmoot for this Wednesday.
THIS IS MY ADVICE
This is my advice after having worked for myself for decades. I’m about evenly matched between being an introvert and being an extrovert, and I do my very best work alone. However I prime myself by going out and observing people – going to places that are active, like labs, industrial complexes, malls, museums, libraries, city streets, performances, college campuses, shops, and theaters, or Vadding and exploring other towns.
I do this for a day or two about once every two to three weeks, although depending on my work schedule I may not be able to do it more than once a month. Nevertheless I do this as much as I can and regularly schedule such things.
(Aside: One place, though, I never go to is coffee shops. Everyone there is on their computers or cell phones, the interactions are limited, and about all you see anyone doing is staring at a screen. Coffee shops are, for the most part, horrible and pretentious work environments, with people tending to merely congregate together in order to appear to be working, when in fact they are not truly working – they are seeking to socially escape real work by the public appearance of a displayed but primarily unreal act of “business.” On this point I entirely agree with Hemingway: coffee shops and cafes are the very worst places to do any actual and real work, though they give the plastic social facade of appearing to be busy.
The very same can be said of coffee shops as “observation posts” on true human behavior. The types of human behavior evidenced in most coffee shops are unnatural, artificial, pretentious, deceptive, and rehearsed. People in coffee shops and cafes are extremely aware that they are being observed; indeed this is one reason so many go there, to observe and be observed (in a sort of pre-approved, socially accepted and promoted play-act), in the place of actually working. I almost never trust the close observations of human behavior I make of people in such environments. Such behaviors tend to be no more “real” than the work supposedly occurring in such places, and just as artificial as the plastic illuminated screens they seem so utterly devoted to, and the technological implements they are eagerly seen to be worshiping. My advice is to skip such places entirely if you can and go rather to where real work can be done and you can make true observations about actual behaviors, be those human or animal. Places like I mentioned above. End Aside.)
Then I come home and my mind and soul are primed with observations and ideas and stories and poetry and songs and invention concepts and business proposals.
When I’m at home and working, and I tire or grow bored, then I go outside and clear land, hike in the woods, explore the nearby lands (I live out in the country), go fishing, track and observe animals, climb trees, cut down trees, cut the grass, etc. I said I do my best work alone, but actually I do my best work alone while doing something physical, working in my head as I labor. Both because it is excellent practice to work in your head as you labor (the bodily labor frees the mind to wander and work) and because working while you labor is an excellent Mnemonic Technique. Sometimes I’ll write entire poems, songs, scenes from my novels, sections of business plans, create prototype inventions in my head, etc., then memorize the same and store them in Agapolis, my Memory City, as I am physically laboring, and only after I quit and go back into the house will I write down what I created.
I know modern people are not big on memory or Mnemonic Techniques (so much the shame for them), but I learned such things from the Ancients and the Medievals and if you ask me a superb memory and good control over your own memory is a far better set of skills and capabilities for a writer (or most anyone) to possess than a thousand cell phones or a hundred laptops or tablets or even a dozen internets. A good memory increases not only your overall intelligence but is fundamental to establishing, developing, and properly employing an excellent vocabulary. So practice writing or creating first in your head (after all you can do such things even when you have no access to even pen and paper), then fully memorize what you do, and only then write it down. Such exercises are not only important to do (because of what I mentioned above), but will pay many dividends in any of your creative endeavours and enterprises. Rely not just upon mere technology for your best creations and for your most important works, but rather upon what you most deeply impress upon your own mind and soul. That is both where creation begins and where it will be properly shaped and forged and worked into worthwhile and well-crafted final products.
I don’t know if this helps you any in your own creative enterprises but my advice is go out at least once a month, or as often as you need it, and do nothing but observe and generate new ideas. Then let them ruminate and percolate through you and within you.
If you thereafter feel all cramped up and unable to work smoothly then do something strenuous and physical outside. The labor will do you good and also set your mind free to wander. Then when you are primed and relaxed go to work.
To simplify to a very basic formula: Prime + Observe + Labor + Work + Memorize = High End and Valuable End Product.
After the necessary revisions for proper refinement, of course.
REWRITE OFTEN.
But just because you work alone doesn’t mean you are a prisoner of your environment and just because you work alone doesn’t mean you always have to be alone.
Go wander, go labor, go explore, go meet new people, go people watch, memorize, and then actually Work. Don’t just wade into crowds and pretend to work.
Actually Work.
Be extremely good for ya. And it will probably make you a helluvah lot better writer than you’ve ever been before. No matter what you’re writing. And it is awful hard to be lonely, or a slack-ass, when you are actually doing Good Work.
A poem I began this weekend. I am usually not much for modern poetry but in this case I thought the juxtaposition of that kind of poetry against the subject matter fit very well. I am also not posting the entire poem here as I intend to publish it.
THE RED TRACTOR
“Love demands infinitely less than friendship.”
George Jean Nathan
If I could admire a machine
As I admire a man it would be you
You were tough and you were strong, you did not relent
You were fit for tasks for which you were not
Designed and yet you mastered
* * *
One day I shall show my grandchildren the green and open
Fields we cleared and I shall say,
“Here we pushed back the frontier together,
he and I, my old friend. Here we hauled
and carried and labored and strained
and cut and leveled. This is because
we were as we were, and he was stalwart
and faltered not. I wish you could have
known him before the end, on the
border of what was wild forever we finally
made tame. What was chaos we ordered.
What was savage we made fit.”
And they will say,
“papa! it was only a tractor!”
And I shall remember fully the fields we made, ground we claimed, land we
Tamed. Uncertain Earth we took foot by foot from the long dry summers and the
Shower soaked springs with restless toil. The memories shall return to me of how
I repaired you endlessly and with great effort and frustration,
And yet did you always abide…
I kinda wish the internet did not so much exist
And then I wouldn’t Work on it or in its ranks enlist
But it does, oh how it does, and so I plod along
Wasting all this Living time with silly, versey songs
I wait and wait the web to thread its way to where I go
So I can make it larger still, these spiders all aglow
I often wonder where it ends – I know it’s pointless though
To kick so hard against these pricks – just dinner and a show
These monkeys screeching, slinging shit, getting nothing else
Yet if you sling it back at them whatever does it help?
The cages rattle, shake, and roll, and still what does it change?
A sinking ship’s a sinking ship, the deck chairs rearranged,
Oh look, it’s here, the site I seek, aren’t I a lucky lad?
Now I go to Work on this, I guess I should be glad!
Though it’s not real, I know that see, the world outside awaits
Then why am I, still at this place, just to cut my bait?
I don’t know, we’ve made this world, now getting out’s too late
But still I dream of better things and one day I’ll escape,
And come that day, that brilliant day, the dead webs all dispersed
I’ll be free to Live again and roam the wand’ring Earth…
I disagreed with him on many things. I thought him an outright fool on more than one issue and occasion.
(Yes, yes, I know how literary “icons” and the modern intelligentsia and the types of men who believe politics to be the answer to all existence – human or otherwise – like to cluster around each other to breathlessly and mutually glorify their own supposed genius. But I am far more skeptical of “modern genius” in all its many fictional forms. As a matter of fact I rarely see any real evidence of the supposed “modern genius” of self-styled “modern geniuses,” and their numberless cohorts, ever, or at all.)
But narrowing his views down to the strictly literary disciplines I did often agree with him on these scores: there is a new illiteracy (not in the inability to read and write, but in the poverty of having ever read or written anything of any real value at all), and letter writing is dead and with it much of higher human writing.
Otherwise the Grass is dead too. I doubt it will ever green again.
Author of The Tin Drum and figure of enduring controversy
Günter Grass
Monday 13 April 2015 05.38 EDT
The writer Günter Grass, who broke the silences of the past for a generation of Germans, has died in hospital in Lübeck at the age of 87.
German president Joachim Gauck led the tributes, offering his condolences to the writer’s widow Ute Grass. “Günter Grass moved, enthralled, and made the people of our country think with his literature and his art,” he said in a statement. “His literary work won him recognition early across the world, as witnessed not least by his Nobel prize.”
“His novels, short stories, and his poetry reflect the great hopes and fallacies, the fears and desires of whole generations,” the statement continued.
Tributes began to appear within minutes of the announcement of Grass’ death on Twitter by his publisher, Steidl.
In the UK, Salman Rushdie was one of the first authors to respond, tweeting:
The Turkish Nobel laureate Orhan Pamuk had warm personal memories: “Grass learned a lot from Rabelais and Celine and was influential in development of ‘magic realism’ and Marquez. He taught us to base the story on the inventiveness of the writer no matter how cruel, harsh and political the story is,” he said.
He added: “In April 2010 when there was a mushroom cloud over Europe he was in Istanbul and stayed more than he planned. We went to restaurants and drank and drank and talked and talked … A generous, curious and a very warm friend who also wanted to be a painter at first!”
Grass found success in every artistic form he explored – from poetry to drama and from sculpture to graphic art – but it wasn’t until publication of his first novel, The Tin Drum, in 1959 that he found the international reputation which brought him the Nobel prize for literature 40 years later. A speechwriter for the German chancellor Willy Brandt, Grass was never afraid to use the platform his fame afforded, campaigning for peace and the environment and speaking out against German reunification, which he compared to Hitler’s “annexation” of Austria.
Grass was born in the Free City of Danzig – now Gdansk – in 1927, “almost late enough”, as he said, to avoid involvement with the Nazi regime. Conscripted into the army in 1944 at the age of 16, he served as a tank gunner in the Waffen SS, bringing accusations of betrayal, hypocrisy and opportunism when he wrote about it in his 2006 autobiography, Peeling the Onion.
The writer was surprised by the strength of the reaction, arguing that he thought at the time that the SS was merely “an elite unit”, that he had spoken openly about his wartime record in the 1960s, and that he had spent a lifetime “working through” the unquestioning beliefs of his youth in his writing. His war came to an end six months later having “never fired a shot”, when he was wounded in Cottbus and captured in a military hospital by the US army. That he avoided committing war crimes was “not by merit”, he insisted. “If I had been born three or four years earlier I would, surely, have seen myself caught up in those crimes.”
Instead he trained as a stonemason, studied art in Düsseldorf and Berlin, and joined Hans Werner Richter’s Group 47 alongside writers such as Ingeborg Bachmann and Heinrich Böll. After moving to Paris in 1956 he began working on a novel which told the story of Germany in the first half of the 20th century through the life of a boy who refuses to grow.
A sprawling mixture of fantasy, family saga, bildungsroman and political fable, The Tin Drum was attacked by critics, denied the Bremen literature prize by outraged senators, burned in Düsseldorf and became a global bestseller.
Speaking to the Swedish Academy in 1999, Grass explained that the reaction taught him “that books can cause offence, stir up fury, even hatred, that what is undertaken out of love for one’s country can be taken as soiling one’s nest. From then on I have been controversial.”
A steady stream of provocative interventions in debates around social justice, peace and the environment followed, alongside poetry, drama, drawings and novels. In 1977 Grass tackled sexual politics, hunger and the rise of civilisation with a 500-page version of the Grimm brothers’ fairytale The Fisherman and His Wife. The Rat (1986) explored the apocalypse, as a man dreams of a talking rat who tells him of the end of the human race, while 1995’s Too Far Afield explored reunification through east German eyes – prompting Germany’s foremost literary critic, Marcel Reich-Ranicki, to brand the novel a “complete and utter failure” and to appear on the cover of Der Spiegel ripping a copy in half.
His last novel, 2002’s Crabwalk, dived into the sinking of the German liner Wilhelm Gustloff in 1945, while three volumes of memoir – Peeling the Onion, The Box and Grimms’ Words – boldly ventured into troubled waters.
Germany’s political establishment responded immediately to the news of Grass’s death. The head of the German Green party, Katrin Göring-Eckardt, called Grass a “great author, a critical spirit. A contemporary who had the ambition to put himself against the Zeitgeist.”
“Günter Grass was a contentious intellectual – his literary work remains formidable,” tweeted the head of the opposition Free Democratic party, Christian Lindner.
The foreign minister Frank-Walter Steinmeier was “deeply dismayed” at the news of the author’s death, a tweet from his ministry said.
Steinmeier is a member of the Social Democratic party, with which Grass had a fraught relationship – after campaigning for the party in the 1960s and 70s, he became a member in 1982, only to leave ten years later in protest at its asylum policies.
“Günter Grass was a contentious intellectual who interfered. We sometimes miss that today,” SPD chairwoman Andrea Nahles said.
While there were plenty of tributes recognising Grass as one of Germany’s most important post-war writers, social media users swiftly revived many of the controversies of his divisive career, bringing up his membership of the SS and his alleged anti-Semitism.
Speaking to the Paris Review in 1991, Grass made no apology for his abiding focus on Germany’s difficult past. “If I had been a Swedish or a Swiss author I might have played around much more, told a few jokes and all that,” he said. “That hasn’t been possible; given my background, I have had no other choice.”
The controversy flared up again following the publication of his 2012 poem What Must Be Said, in which he criticised Israeli policy. Published simultaneously in the Süddeutsche Zeitung, the Italian La Repubblica and the Spanish El País, the poem brought an angry response from the Israeli ambassador to Germany, Shimon Stein, who saw in it “a disturbed relationship to his own past, the Jews, and Israel”.
Despite his advanced age, Grass still led an active public life, and made vigorous public appearances in recent weeks. In a typically opinionated interview for state broadcaster WDR, which he gave in February after a live reading from Grimms’ Words, Grass called his last book a “declaration of love to the German language”.
He also talked about how the internet and the loss of the art of letter-writing had led to a “new illiteracy”. “Of course that has consequences,” he said. “It leads to a poverty of language and allows everything to be forgotten that the Grimm brothers created with their glorious work.”
He also remained critical of western policy in the Middle East (“now we see the chaos we make in those countries with our western values”), and talked about how his age had done nothing to soften his political engagement.
“I have children and grandchildren, I ask myself every day: ‘what are we leaving behind for them?’ When I was 17, at the end of the war, everything was in ruins, but our generation, whether for good reason or not, had hope, we wanted to shape the future. That’s very difficult for young people today, because the future is virtually fixed for them.”
Excellent little article on a simple mnemonic technique. As many of you know this is a subject which has fascinated and interested me for decades. So I’m gonna recreate my response to the article here:
I first became familiar with ancient and Medieval mnemonic techniques after reading the book, The Memory Palace of Matteo Ricci, which still has a favorite place in my personal library. I highly recommend the book. I was in college at the time. After that I spent about ten years researching ancient and Medieval mnemonic techniques.
After that I built a memory palace in my own mind, and eventually that led me to build a Memory City in my own mind called Agapolis. Complete with maps and buildings and parks and so forth. I might have already mentioned Agapolis here, I think I might have. The design I adapted from the City of Constantinople (New Rome).
Eventually after reading some of the works of Archimedes (on mind-laboratories) I turned Agapolis into a real city (still just in my mind) with laboratories, churches, temples, stadiums, banks, hospitals, parks, places I can live, study, write, etc. This kind of city I am sometimes tempted to call a Civis Imaginaria, but I still have yet to develop a term I’m really satisfied with.
Now if I’m sick I visit the hospital in Agapolis to help with my illness or injury. If I want to write I go to one of my writing retreats in Agapolis and write in my head if I can’t on my computer, and thereby store the story or poem or song there for later retrieval. I do that a lot while working outside, then retrieve whatever it is later on from my head.
If I’m working on a scientific project or a math problem or an invention I go to the Museus (the original Greek museums, such as the one in Alexandria, were not artifact storehouses but invention laboratories) in Agapolis and work the project there.
If I want to work on a business project then I go to one of the offices there.
If I want to talk or hang out with God I go to one of the churches or temples or to the countryside outside of the city.
Yes, I still use the buildings and objects and people (I populate the city with famous people from history as well as fictional characters I’d like to hang out with or talk to) as memory storage and retrieval tools but I also use all of those things for much wider applications as well.
The “Temple of Time” is a three-dimensional projection of historical chronography. In the temple, the vertical columns represent centuries, with those on the right showing names of important figures from the Old World while those on the left show figures from the New World. The floor shows a historical stream chart. The ceiling functions as a chart of biography.
The “Temple of Time,” created in 1846 by the pioneering American girls’ educator Emma Willard, draws on the tradition of Renaissance “memory theaters,” mnemonic devices that allowed people to memorize information by imagining it as architectural details in a three-dimensional mental space.