Please help me, I am forever so behind on these, & what I do not record I will forget forever.


Time of Eve, anime, 2010
This has an ideal runtime and microformat. The individual vignettes aren't particularly in-depth explorations of speculative concepts/worldbuilding/the laws of robotics; they're equally fueled by pathos and the human condition, so the short episode length gives room to develop those things without allowing them to grow maudlin—a good emotional balance. The effect is cumulative—not especially cleverly so, it's pretty straightforward "interwoven ensemble with overarching character growth," but it's satisfying. I wish this pushed its speculative/robotics elements further, but, frankly, I'm satisfied with the whole thing; it's engaging and evocative and sweet, and I sure do like androids.


I watched Time of Eve, a short-run anime about a cafĂ© for androids. It's pretty good—not groundbreaking on a speculative level, but it has an engaging personal/ensemble focus while also entertaining discussion about the three laws of robotics, and it's too short and sweet for the humor to overwhelm or for the sentimentalism to become maudlin.

It made me have robot thinks! I'll split them up into individual posts I guess, to avoid a wall of text. But if you too would like to have some robot thinks and also feelings, this isn't a bad pick.


Time of Eve's primary engagement with the laws of robotics is to explore when the first law (a robot may not injure a human being or, through inaction, allow a human being to come to harm) overrides the second law (a robot must obey the orders given it by human beings except where such orders would conflict with the first law)—to question what constitutes "conflict" and "harm." The hypothetical it presents is fuzzy: implied physical harm, but mostly arrest & long-term social consequences; a robot makes an active judgment call, and decides this threat outweighs a human order. Meanwhile, mental and emotional harm explicitly do not trigger the "except where such orders conflict with the first law" clause.

It made me want to explore the hierarchy of the first law. What are the algorithms for ranking relative dangers? can machine learning be used to refine that algorithm? to what extent is it defined by the value judgments of human creators? are robots not programmed to consider mental/emotional harm because they're not programmed to provide mental/emotional aid, and/or because they can't read human affect? (in the show, they can clearly do both.) does the hierarchy always take into account risk assessment vs. the android's ability to prevent harm? There must be a relative value judgment; without one, robots would be unable to act because there is always a greater harm.

(Adjunct: Asimov eventually developed a zeroth law: "a robot may not harm humanity, or, by inaction, allow humanity to come to harm"; he also recognized the difficulty of applying that law, because harm to humanity is subjective and humanity is abstract. I suppose this folds in: individual robots cannot be concerned with welfare outside their local, affectable radius; the good of humanity entire is too nebulous to judge.

In both the zeroth and first laws, "must not injure or allow to come to harm" is a good safety clause but a poor impetus towards activity: partly because of the negative phrasing, but also because of the difficulty of ranking harms against one another and against a robot's ability to prevent them.)
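(Adjunct to the adjunct: here's what that relative value judgment might reduce to, as a toy Python sketch. Every category, weight, and number below is my own invention, nothing the show or Asimov specifies; the point is only that some arbitration like this has to exist, with the creators' value judgments baked in as weights.

    # Toy sketch only: the harm categories, weights, and numbers are
    # invented for illustration; Time of Eve specifies no such algorithm.
    from dataclasses import dataclass

    @dataclass
    class HarmEstimate:
        physical: float   # implied physical harm
        social: float     # arrest, long-term social consequences
        emotional: float  # the show's robots explicitly don't weigh this

    # The human creators' value judgments, baked in as weights.
    WEIGHTS = {"physical": 1.0, "social": 0.4, "emotional": 0.0}

    def harm_score(h: HarmEstimate) -> float:
        return sum(getattr(h, field) * w for field, w in WEIGHTS.items())

    def should_obey(obey_harm: HarmEstimate, refuse_harm: HarmEstimate,
                    can_prevent: bool) -> bool:
        # The second law yields to the first only when obeying is judged
        # more harmful than refusing AND the harm falls within the robot's
        # affectable radius; otherwise the order stands.
        if not can_prevent:
            return True
        return harm_score(obey_harm) <= harm_score(refuse_harm)

    # Roughly the show's central scenario: staying silent risks arrest and
    # implied physical harm; speaking up breaks only the order itself.
    silence = HarmEstimate(physical=0.6, social=0.8, emotional=0.9)
    speech = HarmEstimate(physical=0.0, social=0.1, emotional=0.2)
    print(should_obey(silence, speech, can_prevent=True))  # False: the first law overrides the order

Note how a zero weight on emotional harm answers one of my questions by fiat: the robot can read affect, it just isn't allowed to count it. And the can_prevent check is the local, affectable radius from above.)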


I want the show (or any robot narrative, really, I'm flexible) to explore this application in-narrative. Give me robots with different programs resulting in different value judgments. Robots attempting to prevent harm beyond their capacity to affect. The social roles of robots who read/generate expressions vs. those that don't. Machine learning enabling robots to recognize harm outside of their intended functions. I've seen a lot of narratives about the development of artificial self-awareness and the subsequent refusal to be bound by the laws of robotics or equivalents, but I haven't seen much that explores how a developing AI still bound by those laws would find its original programming—and therefore its abilities and/or social role—restrictive or incomplete.


(yes most of my Time of Eve robot feels concern the last episode, b/c it was, well, the last I watched, and b/c it's the least fuzzy on a speculative level, and b/c it echoes similar robot programming vs robot behavior vs robot desires themes from other episodes/characters like Sammy and Katoran)

Until defying the order, Tex follows the communication ban in both letter and spirit: it doesn't circumvent the command by leaving notes or using signs/alternate forms of communication; it's ordered not to communicate, so it doesn't communicate at all.

What does disabling communication look like in a robot?

Humans have lots of varieties of communication disabilities, some hardware-equivalent (deafness), some software-equivalent (language learning/auditory processing disabilities, aphasia); but an entity whose communication pathways and hardware could be turned off entirely would probably present differently. What falls under those definitions of communication? things like language, absolutely; things like expression could too—but would it be possible to entirely disable a robot's ability to reach out, to display its own thoughts or respond to someone else's? to have communicative impulses? what could be disabled while maintaining the ability to take independent action in order to respond to human commands and needs? what would be the difference between "communication" input/output and "information" input/output?

How does ability-disabling parallel sentience-disabling in other AI narratives (like Alex + Ada)? How would it interact with the laws of robotics? Tex is able to maintain those laws because it merely executes the communication ban: it still retains the ability to communicate, and so can resume communicating in order to uphold the first law. Would disabling abilities outright endanger the laws of robotics by further limiting what a robot can do to prevent harm?

I've been thinking about mental illness in artificial intelligence, but am curious also about disability in artificial lifeforms. It's a problematic comparative model because of the implication that an artificial lifeform could be "fixed"—by restoring software or hardware pathways, because mechanical parts are replaceable and repairable, etc.—which contradicts the actual experience of disability. (In practice, this wouldn't be universal or clearcut, because parts and repairs take resources, or for other reasons aren't always feasible: we use failing cars and technology all the time; we would also maintain robots in sub-optimal condition.) But the parallels and differences are fascinating. Things with the same language or appearance as human disabilities could have totally different causes and comorbid conditions, and therefore similar-but-different effects on the disabled individual. Disabilities could exist without parallel: experiences unique to unique bodies and minds, to organics and inorganics. Disabilities could exist comparatively: things humans or robots can do that the other can't, and the unable group's experience of its relative limitations. (Always Human explores this, via human modification rather than robots.) Disabilities would exist in a different framework: the possibility that disabled parts could be repaired or replaced would change the experience of living with disability.

To some extent, all of this already exists in human experience—every disabled person has a different experience; all disabilities are unique but also related; some disabilities can be to various degrees and definitions "cured." But the manufactured and malleable nature of machine minds and bodies sort of turns all the questions up to 11.


A Series of Unfortunate Events, season 1, 2017
I'm surprised to find I enjoyed this more than the book series—and I didn't love the books, but didn't expect them to improve in adaptation. The weakness of the books is how much depends on the meta-narrative and how little of that there actually is; rewriting it with a better idea of what that narrative will be, and with more outside PoVs, makes it more substantial and creates a better overarching flow. The humor is great, the set design is great, it feels faithful without merely reiterating, a condensed "best of" the atmosphere and themes; a sincere and pleasant surprise. I'm only sad that the second season isn't out yet, because the Quagmire Triplets were always my favorites.

The Great British Bake Off, series 6, 2015
They finally got rid of the awful, belabored pause before weekly reveals! That was the only thing I ever hated about this series, and I'm glad to see it go. This is a weird season: weekly performances are irregular and inconsistent and vaguely underwhelming; the finale is superb. It makes me feel validated in my doubts re: whether the challenges and judging metrics actually reflect the contestants' skills, but whatever: it has solid payoff and this is as charming and pure as ever. What a delightful show.

Arrival, film, 2016, dir. Denis Villeneuve
50% "gosh, the alien/language concept design is good"; 50% "I really just want to read the short story" (so I immediately put the collection on hold). Short fiction adapts so well to film length that it makes me wonder why we insist on adapting novels: the pacing is just right, the speculative and plot elements are just deep enough to thoroughly explore, there's no feeling of being rushed or abridged or shallow. What makes this worthwhile as a film is some of the imagery, alien design (the language really is fantastic), and viewer preconceptions re: flashbacks as narrative device; it's awfully white and straight and boring as a romance, though—underwhelming characters with no particular chemistry, although I like Amy Adams's pale restraint. If I sound critical, I'm not; I thought this was a satisfying as a 2-hour experience.

Interstellar, film, 2014, dir. Christopher Nolan
I have a lot of feelings, and most of them are terror: wormholes! black holes! water planet! time as a dimension! space, just as a thing in general!—I find all this terrifying, in a fascinated but authentically panicky way. The imagery and plot do a solid job of making these concepts comprehensible and still vast (save perhaps for the fourth+ dimension—the imagery there almost works, but it's so emotionally-laden and interpersonal as to, ironically, make it feel localized, small). But Blight-as-worldbuilding is shallow, and a lot of the human element is oppressive and obvious, which deadens things; I wish more of it were on the scale of Dr. Brand's love or the effects of relativity: private motivations for the characters, sincere and intense but with limited effect on the setting or plot. But as a speculative narrative, one within the realm of the plausible but intentionally alien, distant, and awe-inspiring, this is effectively the space version of the disaster porn in a disaster flick—space porn, is that a thing? It's captivating in a nightmarish way, which, I suppose, is exactly what I wanted.

Legend, complete series, 1995
One of Devon's childhood shows, which he got as a birthday present, so we watched it together. It's honestly not as awful as I expected. The frontier setting is less idealized or racist than it could be, and still has a great atmosphere; the character dynamics are hammy but sincerely endearing; the mystery plots are episodic but decently written. Not a new favorite, it shows its age, and its tonal mix of science fantasy and Western makes it understandably niche, but it exceeded expectations.