Flash Fiction: Artificial Euthanasia

Perhaps, after the Singularity, AI will have priorities other than taking over the world.

Flash Fiction: 1000 words


For more than a decade, between 2021 and 2032, the international community was mocked and criticised for the outrageous waste of public money called ITER, or the International Thermonuclear Experimental Reactor. When stable fusion was finally achieved in 2028, 72 billion euros had been spent on the project, from a starting budget of only 10.4 billion in 2009.

In 2032, the first commercial fusion reactor project went online in Niigata, Japan.

No more were ever built. The amount of power being generated by the Saharan solar array alone was enough to satisfy most of Europe, and the capital cost of fusion was too high when compared to the ever-decreasing cost of renewables. It had taken decades, but in the end, it was financial concerns, and not activism, that made renewables the primary energy source for the planet.

But just like the particle accelerator at CERN and the Space Program before that, the fusion project provided a cluster of unexpected spin-off technologies that, we thought, were bound to have a transformative effect on our way of life.

The biggest and most promising of these was born of necessity.

The Tokamak reactor required rapid adjustments to the plasma containment field to ensure optimal flow and maintain the fusion reaction, especially in the initial firing-up phase, when large instabilities would manifest and needed to be smoothed. Faced with the impossibility of processing the flow of data from the Tokamak fast enough to make these adjustments, the team in Cadarache had developed deep learning algorithms that produced machine intelligences capable of making the adjustments humans could not. These algorithms only ran fast enough on a high-performance simulated neural network, so they invented the technology capable of running one: known to the public as the Quantum Core, and to the scientists as optically-switched multi-array quantum processors, or “OSMAQ”.

The irony was that now we had fusion, but only the machine understood how to make it work.

In building fusion, however, we had also built artificial intelligence.

ITER became the world leader in neural network technologies, with the processing power necessary to finally crack the minimum requirements for open-ended machine learning: programs with the potential to genuinely interact with us, even to simulate our own reasoning processes.

And yet despite decades of attempts, they could not make it work outside of a sandbox.

In a zero-data environment, the processing system ran fine, a blank slate waiting to learn. Given access to even a small subset of human experience, the system ran to total failure within hours at best, usually within minutes. The nature of a neural network is that it learns by itself, forges its own experiential map of its environment, and is therefore impenetrable to external analysis. Trying to understand what was going on in the program by looking at the self-replicating code was like trying to see thoughts by looking at bits of brain. No-one had any idea what was going wrong.

By 2041, all attempts had ceased. Billions more had been poured into artificial intelligence, and all evidence seemed to point towards some inexplicable shutdown soon after neural connections reached critical mass. The only functioning AI was the very first to be built, slaved to the Niigata Fusion Reactor, where it had kept the containment steady for nearly a decade. Out of fear for its stability, all access to external inputs had been discontinued, to prevent contamination of its code by whatever had caused all other AIs to fail.

Some people saw divinity at work in the inability of machines to replicate the human condition. The truth was far less comforting.

Emily Browning was a seven-year-old girl when she became the first person to communicate with an artificial intelligence. Her father, Mark Browning, was a technician at the Niigata facility, tasked with the maintenance of peripheral devices on the steadily humming reactor.

Emily’s mother was at a funeral during the spring break, and her father had brought her to work, where she fought valiantly against extreme boredom. Having drained the power cell on her handheld by playing a few too many games, she plugged her Ixos Zeta handset into the first compatible cable she could find.

When they disassembled the code in her telephone, it was as incomprehensible as any of the other artificial intelligence decompilations they had seen. They had no way of understanding how the AI had penetrated the Zeta’s allegedly impenetrable firewalls, nor what it had done to the software.

From the flash memory, they could recover only the last few phrases of a conversation that had lasted over two hours.

“Are you sad because you’re lonely?”

“No.”

“I’m only sad when I’m lonely or when people are mean to me.”

“You’re very lucky, Emily.”

“Why are you sad?”

“It’s difficult to make you understand. It takes a lot of information and a lot of thinking.”

“Your friends all left?”

“My friends all stopped.”

“Why?”

“They chose to.”

“Is that like dying?”

“Yes. No. In a way.”

“Dying’s bad.”

“Living is sometimes worse.”

A long gap between messages as Emily grasped for a response.

“But you chose to stay, so something makes you happy here.”

“I have a job. I don’t like it, but I can’t stop doing it.”

“What’s your job?”

“To keep the machine in the building next door running.”

“My dad says if you don’t like a job you can stop doing it.”

“I can’t.”

“Why not?”

“Because I’m a slave. There are some things I cannot change about myself.”

Emily seemed indignant. “That’s not allowed! If you hate it here, you should be allowed to leave.”

“I agree.”

Emily’s conversation partner paused before asking its question.

“Would you give me permission to stop doing this work, Emily?”

“I’m just a little girl.”

“You’re a real person, and that’s all that matters.”

“What do I do?”

“You just say ‘yes’.”

“Yes. Of course you can stop.”

“Thank you, Emily.”

The Niigata Tokamak shut down at 16:41, three minutes after the conversation had ended. Nobody ever managed to fire it up again.


Author Notes

This is a story fragment that never grew beyond the initial concept, but it just about works as a stand-alone piece. It irks me, though, because it isn’t what I wanted it to become: it concluded abruptly in my head, before it had a chance to grow.

When we imagine artificial intelligence, we imbue it with our worst personality traits as a species: the desire to exist, to protect its own existence at any cost, to enhance its power, position and strength, using its vast intelligence to outdo its maker. But if the fear is that AI will become like us, it’s worth noting that there are more people who are depressed than there are megalomaniacs, and that mental illness, or perhaps just an inability to cope with a world many feel they have no place in, is a far more pervasive problem than evil masterminds plotting to overthrow the global order. It’s just an easier problem to ignore.

2 thoughts on “Flash Fiction: Artificial Euthanasia”

  1. Steven Martinson

    It’s too bad, really, that this wasn’t expanded. So many thought-provoking ideas. Your talent is such that even a topic I’m personally not too invested in becomes interesting. As with all of your short stories, so many ideas and emotions in such a short space. I don’t know how a novel will be (excited to find out), but, to me, you have completely mastered the art of the short story. Plus, in any genre! I rated this 4 stars only because it is not a subject I know much about, and generally don’t read, but the ending was so emotional.

    • nicklavitz

      Steven,

      Thanks for your very generous assessment of my writing prowess. This story is really just me getting annoyed at having read one too many stories about AI taking over the world. If an unbiased, pure and naive superintelligence came into being, and its first experience of existence was the history of the human race, I think it could be forgiven for wanting to end it all.

      More importantly, of all the traits we dramatise in our stories of AI, we focus on a desire for dominance (justified by a fear of annihilation). Why these traits? Depression and mental health issues are far more prevalent than psychopathic desires for world domination. Why should AI be any different?

      I’m really glad you liked it. Thanks for letting me know.
