Amazon decided to be faster about this than I thought…
And grabbed. Great timing, I’ve a lot of work to get out by next week and now I’ve got a distraction.
Uh… my pleasure?
Interesting developments in the story; I get the idea there’s “interesting times” ahead in the Chinese sense of the term.
Over the last few months Amazon has been pretty quick. I bought the Star Force Origin series and the last few episodes were processed in less than 4 hours. It seems like they finally fixed their servers. Before that it was 6-8 hours, with a few exceptions where it was stuck for a day or two.
Yes, I remember one of them took more or less an entire day to get processed and one, possibly the same one, didn’t get posted in Canada until I emailed them. The last two have been pretty fast. Nothing to complain about, I guess.
I enjoyed it; thank you. I saw your blog post at 04.30 on Saturday, and immediately bought the book. I even dropped the book I was already reading in favour of this. It was worth it.
A couple of things, though:
1. I did feel a bit uncomfortable about the way that Fox pushed Marie away. I had the feeling that she might have been doing it for the wrong reasons, and a bit prematurely. But perhaps that’s just me. It made me sad, even if both of them did quickly have new boyfriends. Well, sexual partners.
2. What happened to Hannah? There’s plenty of scope for exploring the status of AIs there. A long time ago, I recall seeing an episode of ST:TNG that looked at this subject in relation to Data. However, I have the impression that Fox’s universe sees AIs as not-people. They (or some of them) are referred to as having owners, for example. Hannah’s designated class 3, but may actually be a class 4. So if she was found (was she?), I guess that there’d be a need to establish what class she was, and her degree of complicity, and volition, in the crimes. And then what? But perhaps this is too philosophical…
I’m pretty sure Fox was doing it for the right reasons… or thinks she is.
What happened to Hannah? That’s a very good question, isn’t it? 🙂
I’m sure that Fox did think that she was doing it for the right reasons, but was she really? Acting on Kit’s analysis as she did felt a bit abrupt and unfeeling. Even if Kit was right – and I’d be the last person to dispute this – there were other factors that Fox ought to have taken into account.
As for Hannah, well, I do hope that your answering one question with another means that we’ll learn the answer in book 4, and not that you don’t know. Please! 🙂
Well, not in book 4… (Please imagine an evil, grinning, goblin-like creature here.)
As for the exploration of the status of AIs: Oh, you can bet that’s going to come up more and more. Class 4s are becoming more common and people are going to start to wonder, and worry, about the rights of infomorphs. Fox does.
I think the TNG episode you mention was “The Measure of a Man,” where a hearing is organised to determine whether Data has a right to be considered “human” in the legal sense. Starfleet wanted to make him into a lab rat. A heavily emotive episode, very effective at the time. In hindsight it seemed rather forced, but that was a lot of TNG: “We have a point of social commentary to make and we’re going to damn well make it!” TNG was often more effective when it was being subtle: the death of Tasha Yar was a nice example. There they were, battling insurmountable odds and winning… and then they realise the cost was rather high. Data’s reaction to that was very well handled: he had become used to certain inputs and, when they were no longer there, he found he missed them.
Anyway, infomorphs and their rights and social position will get a lot more attention. The primary vehicle there is going to be Kit, obviously, but there’s something else coming along (hinted at in book 4) which will be important. That element is a bit of a way off, however, and the thought of executing it fills me with several levels of trepidation.
Re: infomorphs — “something else coming along (hinted at in book 4)” — I wonder if Vali’s informer hints of an AI underground?
Well, that’s an interesting theory…
Niall, I realized something while reading this book, during the part where Kit realizes she is feeling uncertain about her conclusion regarding information she has collected. That realization was that you write AIs as people in your books. That may seem like an obvious statement at first, but not many authors in my experience really write an AI that way. They are characters in the story, but not really people in the sense of being anything more than a stereotype. They generally come off as merely mimicking intelligence or sentience even if they are supposed to be fully self-aware: too binary, not at all exhibiting any of the traits one might expect from a sentient being. That’s fine for a certain type of AI character, such as an over-the-top evil overlord (the Shatataga AIs joking about this was great) or a runaway kill-everything-because-it-makes-some-kind-of-twisted-logic villain, but not so much when AIs are just supposed to be people.
The short bit in Inescapable where Kit realizes she’s homesick really makes her a sympathetic character even if you didn’t feel that way before, and it’s honestly the first time I can recall feeling bad for an AI character because of their emotional state. It really drives home the fact that while she may be very intelligent, she is very young and, in her own way, going through a version of adolescence: sure of herself one moment, uncertain the next. It also seems to be something many authors would find totally unnecessary for the character. I’m not saying there are no other authors who write AIs well, or that they have to be written so that they have human-like reactions to the world, but it seems that for the most part not much effort is put into fleshing them out into interesting characters who act as sentient beings with the ability to see the universe in shades of grey rather than ones and zeros.
The real problem (I have) with writing AI characters is that it’s difficult to cope with the fact that they aren’t human. I had this idea for a book about an emergent AI, the first ever truly sentient AI in that world. Unlike Kit (and Al), this one hasn’t been taught how to be human-like; it’s a truly alien mind and… I can’t get it to work. Have you ever tried thinking like an alien? How do you conceptualise something with no reference to your own thought processes? There are authors who have given this a damn good shot and I envy them. I write AIs as people (mostly) as an alternative to writing them as lumps of wood.
Interesting note: Kit is no brighter than your average human. She’s a good bit less intelligent than Fox. However, she has nearly instantaneous access to huge amounts of information and it’s really easy to seem very smart when you’re a right know-it-all.
I’ve been really enjoying Fox just as much as I enjoyed Aneka, but today when updating my progress on Goodreads I noticed that Deathweb is listed in page numbers, and yet the actual book doesn’t have any pages. Not sure how that occurred, lol. But I’m looking forward to the next book, and waiting on Aneka as well, or even Kate 🙂
It means that whoever added the book to Goodreads (I didn’t) put it in as a paperback. (You’d think, since Amazon bought Goodreads they’d put in some sort of auto-feed, but no…)
Glad you’re enjoying the books. There’s a new Fox one out in the next couple of months, right after the vampires.