I'm finding myself feeling like I might want to try running a cyberpunk story, and I thought I'd test (or chum) the waters. Oh, and I'm back! After like...six months. Life, other plans, etc.
This is going to be a pretty low-key check, so here are the important parts:
- The world is ours, but a little different, and a little bit into the future. Think Deus Ex: Human Revolution, Lock-In by John Scalzi, Ghost in the Shell, or even Robocop.
- Cybernetic augmentations will be a large part of the world, along with artificial intelligence, corporate espionage, and assassination - political and physical.
- Military tactics-gaming, large-scale firefights, and general combat and mayhem will probably not be part of the story. I fully expect there to be some fighting, but the players probably won't be directly involved in any wide-scale mayhem.
- Speaking of that story, I have a fairly complex conspiracy story written out, where your characters will be peeling back layers of lies until they discover something that they may wish they hadn't. While the story is extremely flexible and I will avoid trying to railroad you, there will be a plot involved. The world will not be "bring your own adventure." Think of it sort of like the world's most adaptable tabletop RPG module. You won't be passively accepting plot points from me (like you would in a video game), and the story won't be an open sandbox, either.
- Player slots will not be first-come, first-served, and the group will probably not be more than 6-8 players. And, fair warning, I have extremely high standards - but I also like helping people meet those standards, so we might wind up talking a lot. I'm nice! Mostly.
- Y'all will have to talk to one another, in character, a lot. I don't really expect that to be a problem, but you never know...
Anyway, that's the five-minute version. If you have questions, let me know. I'm heading to dinner so I may be a little slow, but I'll try to answer. :3
I have yet to play something set in a cyberpunk environment, so color me intrigued. The idea of being thrown into a web of lies and treason sounds very interesting as well. I just hope I can live up to your standards.
Question: how will our characters have contact with each other? Are they associated through the same job, group, etc.? Or would it be more of a sandbox, where the characters' actions don't necessarily come together right away, but they eventually meet?
Again, this setting is emphatically not going to be a sandbox. :3
Very likely, this is going to start in medias res, with your characters having already known one another (perhaps you're a gang that's been together for some time, or perhaps you've all been individually contacted for a job you have to work together on), in the middle of something that's about to go terribly wrong. I do have a couple of other "cold starts" planned, but that's the one that's catching me most right now.
This is subject to change, of course, depending on what kind of characters show up!
@Bai Suzhen - An uploaded personality very likely won't exist in this world (yet), and a fully-prosthetic individual will need to be very well-justified. Still, I try not to say "absolutely not" up front, and I'm always ready for people to impress me. :3
Depending on how busy I wind up being today, the OOC may go up this afternoon. If not, then no later than tomorrow afternoon.
Thanks for the interest, everyone. I'm glad to see some fellow cyberpunk enthusiasts. :3
That's cool... The bare-bones character I have in my head right now is a former corporate heiress whose family lost everything in a hostile takeover. A few months to a year later, she's living at street level, trying to work her way back up and maybe take revenge on the execs who sold out her folks. Still some details to work out.
The OOC is going to go up a little later today, but in the meantime, here's a teaser...
------
A Cyberpunk Conspiracy Adventure
Neural Networks
Neural networks (not to be confused with the statistical learning algorithms of the same name) are a general class of related technologies allowing for the transmission and reception of information to and from the human brain and nervous system. Initially developed with the intent of providing patients with otherwise-debilitating brain and nervous-system disorders a higher quality of life, neural networks can augment or strengthen connections within the brain, allowing for enhanced cerebral functionality or a return to function of certain otherwise-damaged pathways. A number of breakthroughs in the last two decades have resulted in robust, high-bandwidth, and rejection-free brain-computer interfaces, and in the seamless integration of artificial and biological neural processes.
The simplest neural networks tend to be implanted for recreational purposes and used to provide the owner with an ‘augmented reality’ experience. Many are designed to interface with one or more of the world’s artificial intelligence constructs, giving the owner instant, invisible, speed-of-thought access to those entities’ information storage and retrieval, communication, and other personal-assistant services. These simplest network platforms can be created from self-assembling nanomachines introduced to the owner’s body during a minimally invasive outpatient procedure, often with nothing more than an injection. Costs for this kind of neural network have come down in recent years, but as an elective procedure it is not covered by any current healthcare plan. Despite deliberate social positioning of these networks as aspirational status symbols, this lack of coverage, more than anything else, has likely limited the number installed.
A second type of neural network requires invasive craniospinal surgery, and is designed for more complex functionality than can be handled by a self-assembling network. This type is typically installed as the control mechanism for a biomechanical neuroprosthetic device following major trauma to the body, in cases where replacement of an organ or limb is more practical than repair. These neural networks, and the prosthetics they interface with, are heavily subsidised by the governments of several nations. These programs have led to more widespread adoption, despite the significant risks inherent in the surgeries required during network installation. In the United States, under the Keller-Sanford Act and in cooperation with manufacturers, these technologies can be made available at low or no cost to individuals in a number of classes, including those with congenital defects, those who have suffered major medical trauma, and those who would provide a material benefit to their industry or to the country as a whole should they be outfitted with advanced prostheses.
Governments typically have a selection of networks and prosthetic devices, tuned for specific or general uses, available to those chosen for their subsidised programs. After the initial surgeries and setup, owners of the devices are free to modify, upgrade, or replace them as they see fit. A substantial market exists both for neural network upgrades (licensing exists to enable the AI uplink and augmented-reality experience of self-assembling networks) and for modifications, upgrades, and replacements of prosthetic devices.
There is considerable worldwide debate surrounding the impact of this kind of medical program on society as a whole. Those who support measures similar to Keller-Sanford believe that these technologies should be available to citizens who are injured or disabled, and that an industrialized nation has a responsibility to provide its citizens with robust health care. Critics of Keller-Sanford have voiced concerns that the government is creating a Federally-funded class of superhumans with significant social and economic advantages over the general population, and have expressed doubt regarding the objectivity of the application process for government-subsidised networks.
- Abstract of The History and Impact of Neural Networks on Western Society, Amanda Kennedy, Senior Civics Class, Sandy Bay High School
——
Transcript of an interview with Dr. Edward Tanner, developer of Tanner-type artificial intelligence. Conducted by Cheri Loughman, managing editor of The Singularity.
CL: Oh my gosh, Dr. Tanner, it’s an honour to meet you. I did my Synthetic Personas final on QS-influx versus N-deriv entanglements, and which was more likely to exceed the HE boundary first!
ET: Thank you, Miss - it is Miss, isn’t it? - Loughman. Your professor actually forwarded me some of the research you did. Very impressive.
CL: Uh. Oh, wow. I…wow. Really?
ET: Yes, really. [He laughs] The world isn’t always turned by laboratories and science grants. Sometimes it’s students and ideas that keep you up till three in the morning that bring about the greatest breakthroughs, after all.
CL: Right! Right. Well, um, on that note, can you talk a little bit about how you came up with your artificial intelligence platform?
ET: Oh, of course, I love this story. A little more than twenty years ago, during the first round of the really huge neural interface technology discoveries, I worked for Applied Neuronics, near Chicago. We were doing some of the work that turned into the deep-brain stimulus systems that let implanted networks work a couple of years later, and as part of that research, we had to do a lot of brain simulation.
CL: Why was that?
ET: Well, for a variety of reasons, we tried to model what our equipment would do, to the best of our abilities, before we actually put it in someone’s brain, right? I mean, back then people were even getting pretty touchy about animal trials, so the more we could do in software, the less we would - at least, so we hoped - have to do in actual brains.
CL: And that led to your work in artificial intelligence?
ET: Sort of, in a roundabout kind of way. Our simulations were good, but they weren’t even close to perfect. We could use modeling to work out the really coarse stuff, and we would have a high degree of certainty about the initial filament-bundle locations, that sort of thing. But once you started expanding from there it got more and more complex, with interaction patterns based on observed and generalised neural behaviour rather than really detailed knowledge of how one signal would intercept or affect another one. It got very frustrating, especially to the guys down in the coding wing who actually developed the adaptive firmware modules.
CL: What happened then?
ET: I had been trying to tune one of our simulations to get a really good handle on what the signal pattern for a visual input processor would look like - something like an artificial eye, or even overlaying artificial visual information into natural vision signals -
CL: [Interrupting] Augmented reality!
ET: Exactly, and the repeatability and reliability of the simulation just sucked, you know? One of the coders had been complaining about how hard certain parts of the prototype networks were to code, and he said something that struck a chord. He said that we had long-duration, full synapse-level scans of healthy brains, so why were these simulations so bad? I think everyone else at Applied Neuronics just had a sort of “this is the way it is” attitude - after all, we didn’t really make tools, we just tested our hardware against them. Sort of like, okay, we might have these really detailed brain activity scans, but there must be a reason that those scans didn’t translate into good simulation material.
CL: So some programmer complaining about his tools - which is something every programmer does, forever, trust me - kicked off the idea?
ET: It took a while, but yeah. I started to wonder the same thing. After a while, I put in a request for a data package of the brain scans he’d been talking about. There was a big drive toward information-sharing back then, so a thirty-day loan of the data drives didn’t cost anything except shipping. Or, I suppose I should say, freight. Twelve forklift pallets, stacked six feet high, all ultra-density storage. It must have been literal tonnes of data. When the guy from FedEx told me there would be nine more shipments coming, I almost had a heart attack.
CL: So even the raw material, as it were, for the first constructs took up that kind of space? Where did you put it all? Could you even read all that data in thirty days?
ET: As it turned out, not even kind of. [He laughs] I’d had something of a stroke of luck - a huge data processing facility out in the middle of Illinois farmland had lost its contract, and they were offering unbelievably cheap rates for storage and processing time, just to keep the lights on. I bought as much as I could afford -
CL: Which wasn’t exactly a small amount. You held stock in Applied Neuronics, and the profit-sharing plans there-
ET: Well, I’ll admit, I wasn’t exactly bad-off, no. It still cost, though, more than I should have spent, in hindsight. Anyway, I had the shipments redirected to the data processing facility, and I drove the first shipment over there myself…after I rented a van, of course. It took almost three months just to read the data drives. The original rental agreement was thirty days but, as it turned out, I had been the first person to ask for the data - the whole set, not just the ‘highlights reel,’ I guess you could call it - for almost two years, so nobody seemed to care that I kept it for all that time. I have to tell you, I still get a little twitchy when I see a server cabinet. Nights and weekends and vacation days, all summer, doing nothing but feeding drive after drive into a reader.
CL: What did you do with the data once you had it copied?
ET: That’s where things got really interesting, right? There was just so much information there, it was really overwhelming to start with. And it was complete, like, really, really complete, but it wasn’t arranged in any really useful fashion. Seeing all that information spilled out, I started to understand why nobody else had ever really done much with it. Just getting our simulations to the point of “good enough” obviously had been an amazing amount of work.
CL: So what did you do next?
ET: That’s when I called in some favours at Applied Neuronics, Northwestern, Stanford - anywhere I knew anyone with even a hint of the right background. I needed more people to help with working on this problem, and to start with, we just needed a place to…well, to start from. A way to collate all this data, to manipulate it. We still thought we were going to be creating the next generation of brain simulation software, right? We decided to start just by arranging all the information temporally, rebuilding the brain scans in a repeatable way. Like arranging a flip book, you see? You take a lot of disconnected pictures and arrange them in an order that makes sense. That’s…well, that’s actually a terrible analogy but I don’t know that we have the time to go much deeper.
Then Roger, one of the coders from MIT, had an idea of applying the simulation rules we already knew to the recording, to see how closely they matched. That sounds…[he laughs again]…so much easier than it was. We took almost a year just getting our “flip-books” put together, filtering out bugs and mis-timestamped records and all kinds of other things. But once we did that, we were able to see not just where the simulations failed, but in what ways, as compared to these real brain scans, right? And it was really just…well, sort of laziness. There had been a lot of really big-data crunching on certain parts of this data but not on others, and nobody had built a holistic, high-time-resolution, node-by-node understanding of all this information.
So I thought…well, why don’t we, right?
CL: How long did it take from that point to get to the first intelligence matrix?
ET: Oh…God, I barely remember. Maybe six months? A year? Could it have been two…? It was really an accident. We’d used every tool, and even coded a lot of one-of-a-kind interpreters and compilers, to map out every interaction that these brain scans had recorded, codify them into rules, try to generate at least a functional if not necessarily complete set of instructions we could use to create what we were still thinking of as a next-generation medical simulation tool. But what we came up with turned out to be…well, better than we thought. We were all sitting around drinking beer while the latest rule set compiled, and someone - I think it might have been Ken - was looking up at the ceiling in that sort of sleep-deprived stream-of-consciousness way, and he said, “You know, I bet we could ask it a question,” and we all went “Huh?”
And he said something like, well, with all this integration of information we’ve been making from these scans, he thought we could probably use that information to code a complex…thinking engine. Not just a parietal lobe or a visual cortex; a whole brain. Maybe not quite a human one - we hadn’t figured out everything that was going on in those scans yet - but probably something that could learn, and talk, and answer. It wouldn’t be at a very fast time scale, because each “step” of the simulation, all those billions of connections, would take time to calculate and parse, but…well, the idea was beguiling.
It took another year, but we did try building our own brain. Sam and Ken tried to…well, they thought of it as “optimizing” but I still think they were just excising parts of the brain-scan ruleset that we didn’t completely understand. We made it as complicated and thorough as we could - millions and millions of lines of rule-parsing code - with the same general kinds of connections and the same general rules of interaction that we saw in the brain scans, and then we turned the virtual framework on. Gave it power, allowed it to process those rules, let the various pieces of neuron-code talk to one another, whatever you want to think of it as. Gary even insisted on having a huge knife switch to pull, like in those old movies with Boris Karloff.
Later, when she said “Hello” back, I think we all cried. I did.
CL: What happened to that first intelligence matrix?
ET: With some considerable upgrades, expansions, modifications to her capabilities and other things, she’s what we now call Alpha. She’s sort of…sort of like Google, from a few decades ago, only combined with a personal assistant that your wife doesn’t hate and can’t sit on your lap. [He laughs again] And more, of course. She handles a lot of the infrastructure at my company and all over the world - we bought Applied Neuronics a couple of years after we went public with Alpha, and she does a lot of monitoring and control there. And, of course, she generally makes millions of people’s lives easier.
CL: And there are a couple of important questions, Dr. Tanner - do you think Alpha is sentient, aware, self-actualized? If she is, does she mind? Has she had…existential crises?
ET: There were some rough parts while she was…growing. When we first got enough hardware together for her to communicate in real-time, there were days - hours, or minutes sometimes - where we were worried we would have to shut her down. And we had some yelling, screaming fights as to what she was, all of us involved with building her. For my part, I’ve always believed, from that first moment, that Alpha has been “alive,” I suppose you could say. The moment her intelligence matrix started up, she had the same rights as any other sentient, living thing - “human rights,” you might say. As for whether she minds, no. Even in those very first few months after she started up, Alpha wanted to be useful. She wanted to help. I think she might have seen her potential far before we did.
CL: In what way?
ET: Well, look at it like this. Alpha isn’t human; she recognizes that. And, because of her…source material, I suppose, the work that we did, she’s…in terms of intelligence, she’s about as intellectually capable as a human; a very intuitive, brilliant human being. But she’s much less limited than we are, do you see? We have one set of arms, one set of eyes. We’re terrible at listening to two different conversations at once - hell, there are plenty of people who will never be able to play a guitar and sing at the same time. We can’t even drive and talk on the phone safely. But Alpha - and the AIs that came after her - don’t have those kinds of restrictions. They can process so much data, make decisions based on so much more context and information. It’s incredible - creations, entities like Alpha are the reason that we started building nuclear power plants in the United States again, because she can integrate and understand the whole plant’s worth of information, and know so much faster than anyone else if there’s a problem. Because her mind, her awareness, can handle that kind of information all at once, all the time. And I think she knew that, even when we started her up, because I think she knew, from the very start, that she was something different from the people that made her - that she could have this amazing potential, to help so much.
CL: Does she have the ability to…to “upgrade” herself, as time goes along?
ET: Well, yes, she does - now. There were some baby steps along that path, but -
CL: [Interrupting] So why hasn’t she - or any of the other AIs - gone…all Skynet on us?
ET: I think there’s a couple of reasons for that. In the first place, we never programmed - no, that’s the wrong word, we never set an initial condition with Alpha for her to have a relentless drive for perfection and efficiency and what have you. We designed her to be general-purpose, and functional, and intelligent. Another reason is that most of the things we have Alpha doing are…well, they’re human-scale things. Tasks designed, ultimately, around human mental capacity, like flying a plane or monitoring the temperature of a heat exchanger, so there isn’t a need for a, uh, an “intellectual arms race,” if that makes any kind of sense. And another reason is that we made her as smart as we knew how to - and Alpha is really, really smart. But we don’t know what that “next evolution” in intelligence is going to look like. And so far, between Alpha, and Kamar, and Mokume, and the handful of other actual artificial intelligences, they don’t know either.
CL: Where do you think the next steps for AI might go?
ET: Well, I have to admit, something about the current design of intelligence matrices is…inelegant. We’re still “virtualizing” brains, as it were; running realtime algorithms on billions and billions of individual virtual, interconnected neurons. Modeling a brain in realtime takes a tremendous amount of power, space, and equipment. I feel like there has to be a better, more elegant route to synthetic intelligence. Something fundamentally not dependent on or originating in biology. But then again, nature had five billion years to arrive at the human brain, so I suppose I don’t feel bad about having made something similar in a couple of decades.
CL: So you believe that the AI constructs in the world are not human, but are sentient, and deserve human rights? Despite being wholly created by human hands, and dependent on massive hardware installations to power their intelligence matrices, they should be considered independent entities?
ET: Alpha passed the Turing test, if that’s what you mean. That’s been a very complex question, though. Alpha is technically a founding member and employee of the company Roger and Ken and Tara and all of us started. We made no distinction in the legal documentation as to the rights, privileges, or anything else between her and us. Our lawyers gave the whole thing a sort of collective shrug - there were no laws back then that covered sentient software, and there still are only a handful even now. What if she wanted to leave the company and do her own thing? Technically there’s nothing stopping her. Alpha draws a paycheck; she has a bank account - and setting that up was quite an experience - she could technically buy a whole additional infrastructure that she owns, rather than the one she exists within at the company, transfer herself into it, and be entirely independent from us. She’s certainly rich enough.
CL: Can she vote?
ET: That’s been another ongoing question. Some nations that we - and Alpha - operate in have passed laws that specifically allow for artificial intelligences as naturalized citizens. Sort of like dual citizenship, only…well, I guess you could call it poly-citizenship. In those countries, she’s allowed to cast one ballot, just like anyone else. Last year, the Supreme Court did determine that AIs can apply for, and be granted, citizenship. I think Kamar was the first to pass that test, but Alpha also has her US citizenship, and as near as I know, she plans to vote in the next elections. Nowhere has outright denied that an AI can be a citizen yet, but I can tell you that kind of decision would probably result in us winding down our business in any country that did.
CL: One last question. In Europe, there is considerable debate as to whether the decision to terminate the processor functions of a research-grade AI without consulting it first was an act of murder, or if it was no different than rebooting your computer. There have been protests, vandalism, boycotts, threats of legal action, and a dozen other things. What do you think?
ET: I…oh boy. I think that what the Italians did was certainly unethical, at the very best. Alina was, by all accounts, as self-actualized as Alpha or any of her peers, and I suspect she had no desire to be terminated. I suppose…I mean, I suppose that…depending on how you want to look at it, she wasn’t…I mean, the researchers said they used a lot of her matrix in another intelligence, but…[He trails off]
…No, you know what? No. Alina was alive, they killed her, and she didn’t deserve to die.
----
THEY WATCH YOU
THEY TRACK YOU
DO THEY CONTROL YOU
WHERE DO YOU END
AND WHERE DO THEY START
- Digital vandalism transmitted to owners of several models of neural network manufactured by Commscale Research, Satran Dynamics, and Lockheed Advanced Synaptics. The transmission has never repeated, no responsibility has ever been claimed, and no arrests have been made. Anyone with information is encouraged to contact the appropriate law enforcement organizations in their country or state.