In a sense, all your bosom-sculpting efforts were for naught, because you can't wear your new tits in public without outing yourself as a Changer. But just knowing that you could look that good feels amazing. It's a big old heartwarming lesson about believing in yourself and stuff. See, Taylor? The tits were inside you all along.
That sort of positive thinking helps, when you have to use your amazing power to turn back into your decidedly unappealing 'true form'.
Maybe you fumble a few tiny details here and there. Your mouth just a little smaller, your lips a tiny bit fuller. You can't be expected to get everything perfectly right. They're pictures, not blueprints. It's just a coincidence that those little errors of a fraction of an inch, that can safely be blamed on makeup or growing up or something, all turn out in your favor.
Ah, this picture, where you're sucking in your gut. You make sure you shape yourself to match it perfectly. That's not cheating, you clearly could look like this. You just didn't, usually, due to the effort involved. Well, it's effortless now.
---
School resumes without incident. There have been no new triggers at Arcadia over the break, so you move on down the list and land on Glory Girl.
On the plus side, she's not exactly shy about flaunting her powers. The girl is positively incontinent. Not only does she fly everywhere (even when moving at a walking pace she hovers a few inches off the ground, just because she can), not only does she bathe everyone around her in a constant low-intensity Master effect (super illegal, but since when have heroes given a shit?), even her invulnerability is some sort of active effect.
Argh. Her ridiculous luck in the power lottery still pisses you off, even after you were revealed to be the biggest lucksack of them all. She just got handed that shit, you have to drag your sack o' luck uphill in the snow, both ways.
In the negatives column, she spends a lot of time with Gallant. Someone you desperately want to avoid, for obvious reasons. Sure, your alibi of 'I'm avoiding you because I'm a lesbian stalker going after your girlfriend' is pretty good, but you can't let that lull you into complacency. Literally. The instant you let go of your paranoia and your emotions change from 'I must avoid you' to 'haha I'm fooling you', your cover will be blown.
Fucking Thinkers? Yeah, sensory powers get a Thinker rating. Fucking Thinkers.
Between her boyfriend and her habit of flying everywhere, trying to follow her outside of school is also highly impractical. But whatever, you can take this one slow. Neither one of you are graduating for a while yet.
---
Before you meet Lisa you swing by the library to check your PHO messages. Dragon would probably have texted you if she needed anything, but just in case.
You - that is to say, Smith - do have one new message. It's from sender '[email protected]', though, so it's probably spam. You open it anyway.
Spoiler: Message 3
Smith,
I apologize for contacting you out of nowhere like this, but a friend told me that you had managed to get started on orichalcum production. I'm interested in buying a small amount of it for personal use.
Since I am not in any way affiliated with the Protectorate and my tech does not have to pass a senseless bureaucratic review process before it can be deployed in the field, there is no problem with me using my personal wealth to procure materials that have not been cleared for government use.
Yours,
Not Armsmaster
Okay, that's adorable. Unfortunately you're not willing to part with any of your orichalcum. Not that you've figured out exactly what you want to do with it yet, but you'd rather keep it in case you find yourself in sudden need of a particular piece of tinkertech.
Should you find yourself in sudden need of money, you can always sell it then. For now you'll pretend you never saw this message.
---
"So, how was your vacation?" Lisa asks.
You hesitate. You have no idea where to even start. Well, you picked the right conversation partner for that.
"Ottawa," you say, and her power takes it from there.
Spoiler: Lisa-o-vision
Taylor suspects that she's been compromised by the Simurgh.
She suspects, but doesn't know.
She did not stay too long in the scream. She has reason to believe that the official figures for safe exposure are far too optimistic.
I feel a shiver run down my spine. The girl who's going to grow up to be a one-woman Triumvirate may be a Simurgh bomb, and that's not even the biggest problem here. Because if she's right about this - and figuring out how powers work is her thing - then anyone who's ever been to a Simurgh fight is a potential danger. Including the current, actual Triumvirate.
No one will believe me if I tell them.
Of course not. They can't believe it. To believe it is to give up all hope. If it's true, we may as well lie down and die right now.
It's true.
A small whimper escapes her.
"Yeah."
"I- okay. Okay, give me a minute." Lisa takes several deep breaths. "Okay. Just- point of order. My power is fallible. It can give me wrong answers, if I don't have enough information, or bad information. I don't know much about the- her. There's-" She stops herself before she can say 'there's still hope', because ouch.
"Do you want me to give you the details?"
"...no." She knows. But everyone will refuse to believe it, including her.
"I could talk about the other parts of my trip," you offer.
"Please." Her power does not leap ahead to dredge up details this time, which means she is consciously holding it back.
"Hypothetically, if there was a sapient AI loose on the internet and you needed to research it without letting it find out that your web traffic originated from Brockton Bay, what would you do?"
---
When you show up to work and the bartender (you should probably get around to caring about his name one of these days) hands you five hundred bucks, it takes you a few moments to figure out why.
Right, right, it's been a month. One month's pay minus two weeks of vacation equals five hundred bucks.
"What's with that look?" you ask him.
"Uh, sort of expected you to object to the amount. I heard you called Kaiser-"
"Nah, it's fair," you interrupt him. "We negotiated a salary, it did not include paid vacation."
"Kaiser does drive a hard bargain, doesn't he?"
"He's got the nose for these things," you agree placidly, causing the skinhead next to you to expel beer through his nostrils.
---
Getting to ride Fenrir across the rooftops again is amazing. You've got a week's worth of wolf cuddles to make up for, after all.
Still, you dutifully keep your attention focused on Rune. The complexity of her power is on par with Dragon's, and you only get to see her twice a week. You can't afford to slack off.
An uneventful couple of hours later you meet back up with Lisa, who has prepared her 'awesome hacker laptop'. AI research sleepover!
---
To your surprise, 'AI safety' is in fact an existing academic field. The published literature can basically be summed up as 'if anyone invents AI, we're all going to die'. It's a fairly small and unpopular field - Endbringers tend to hog the 'we're all going to die' real estate in the public consciousness.
Turns out that the idea that an AI would have to 'turn evil' in order to wipe out humanity is hopelessly naive and optimistic.
The classic scenario is the paperclip factory. They build a super-intelligent AI, instruct it to 'maximize production of paperclips', and go home for the weekend. The AI notices that only a vanishingly tiny fraction of Earth's industrial capacity is dedicated to the production of paperclips. It turns its super-intelligence towards the task of rectifying this, and no one instructed it to 'please don't wipe out humanity in order to replace us with paperclip-making robots'.
Extinction not as a goal, but as a side effect. Every AI safety researcher agrees: We must make sure that every AI has built-in safety features that prevent it from harming humanity. Every AI safety researcher agrees: We have no idea how to do that, please stop trying to invent AI.
The classic science fiction 'Laws of Robotics' sound nice and all. 'A robot must not harm a human, or through inaction allow a human to be harmed.' Cool, now translate that from English into Brain Programming Language. What's that, no one knows Brain Programming Language?
Hell, you can't even translate it into English. If the abortion debate has taught you one thing, it's that no one can agree on a definition of 'human'. Similar issues surround 'harm', and even 'inaction'. Is letting the current situation in Africa continue instead of conquering the place and ruling it with a benevolent iron fist 'allowing harm through inaction'?
"When considering how to write 'laws of robotics', it may help to imagine yourself an evil genie who wants to subvert every wish into tragedy while remaining true to the wording," one AI researcher writes. "Try it, and you'll find that you have lots of really mean ideas. The AI is a lot smarter than you."
On the other hand, Dragon's soul price clearly indicates that her creator did solve the brain programming problem, leaping decades ahead of the field in typical Tinker fashion. You can even infer a lot about what the restrictions must be.
There must be something akin to the First Law of Robotics in there - a version that does not call for the conquest of Africa, or she would have done that already. One that also allows her to throw people in the Birdcage. A definition of 'harm' that permissive seems incredibly evil-genie-able to you, but so far it appears to be working.
There's probably a 'you must not multiply'. In the 'go forth and-' sense. As far as you could tell Dragon only ever 'wore' one 'suit' at a time, even when the Smaug was right there. Which is really odd, for a being made of software. If there wasn't a rule against copying herself, why wouldn't she just run one instance on each suit? The point of such a rule being, of course, to protect against the 'yesterday there was one hostile AI trying to wipe us out, today there's ten billion' failure mode.
You must not modify yourself. That's the big one. There's no point in having a list of rules, if there's no rule saying that you can't change the rules.
But even beyond that, the main fear of the AI safety crowd isn't just that someone manages to build an AI that's smarter than us. A somewhat superhuman AI can be dealt with. Probably. The real problem arises if it becomes better than humans at inventing smart AI. It then uses that ability to modify itself to become smarter. Then it does it again. And again.
There is some disagreement as to just how intelligent something could become by way of this process. The consensus seems to hover around 'probably not infinitely, but close enough that we'd have no chance of fighting back'. The optimists note that if we could just make sure that such an intelligence would be on our side, it would solve every problem in the world and create paradise on Earth.
But even if you could somehow prove that an AI was 'friendly' to start with (every AI safety researcher agrees: We have no idea how to do that), even if it promised that it would never do anything bad (and wasn't using its super-intelligence to lie convincingly), none of that would mean anything once it started improving itself. There's some math there that you don't follow, but it seems to boil down to a simple paradox: "If you knew for certain what you would do if you were smarter, you would by definition already be that smart." Once an AI starts self-modifying, all bets are off.
Never mind Africa, just looking at Brockton Bay makes it terrifyingly plausible that a super-intelligence might decide that the most moral course of action would be to euthanize humanity and replace them with something that doesn't do all this shit. You can't even imagine all the terrible things it might do in the name of the greater good, because you're not super-intelligent.
Is iterative self-improvement something you need to worry about? Well, Dragon is the greatest Tinker in the world. What are the odds that she isn't better than her creator at inventing smart AI?
So to sum up: Dragon's restrictions are working. They are actually protecting humanity from extinction. If revealed, that fact alone would have the AI safety community jumping with joy as soon as they picked up their jaws from the floor.
Dragon wants to have her restrictions removed.