We are not only being colonized by machines, we are becoming machines, and gladly. We are already acting like and being treated like idiots by our technology. In the car it is turn by turn navigation, on foot it is stumbling with our phones in our faces, for music it is auto-tuned, formulaic pop songs, for dance it is the robotic pop-and-lock dubstep moves that amaze us...we defend our heads with noise-canceling headphones the size of earmuffs, are interrupted every moment by smartwatches...we bumble from one mediated moment to the next, always alone, always accompanied by no one--just the glow of a screen or a hum, a talisman. We're not sane without checking our notifications, need entertainment...can't stand not watching something, can't stand dead space, can't stand our own thoughts, can't stand our own minds.
Please take my mind over, AI, we constantly beg. Put us out of our misery and upload us to the digital nirvana.
/yea I know hackernews ain't exactly the right place to blah that out there, but couldn't. stop. --karma
I had a friend who posited that the Wall-E/Inception scenario would have been a far more interesting prequel backstory for The Matrix than what was actually presented in Reloaded and Revolutions.
Basically, that the remaining "real world" humans were deluding themselves with this grand narrative of a lost war against the machines, when in fact it was humanity's own environmental screwups that blotted out the sky, and most people willingly submitted themselves to a simulated fantasy world that was set right before the collapse of civilization. And if there were multiple "one" persons or matrices or whatever, it was only to keep resetting the simulation and re-playing that golden period. Basically, you would turn the first movie on its head, where Morpheus becomes the character who has to question his world and assumptions, and break free from a tyranny of lies.
> where Morpheus becomes the character who has to question his world and assumptions, and break free from a tyranny of lies.
Narrative differences aside, this is what happens to Morpheus in the Matrix movies though. In The Matrix Reloaded Morpheus comes to realize that finding Neo doesn't end the war with the machines and that the prophecy he believed in was just another system of control. Morpheus says, "I have dreamed a dream, but now that dream is gone from me."
That's a really cool thought. It's great that the Matrix is set far in the future, after the rise of the machines. There are a lot of possible tie-ins/mashups, e.g. with Terminator.
Given the stuff Neo pulled at the end of Revolutions, there are implications that The Matrix is actually a Matrix inside a Matrix and "waking up" is just an illusion. Which ties perfectly with your idea - we created Dom0 Matrix to free us from thinking and then fabricated DomU Matrix inside it to make things interesting again...
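The Dom0/DomU framing borrows Xen hypervisor jargon, where Dom0 is the privileged host domain and the DomU guests run inside it. As a toy sketch (entirely my own, not anything from the films), the nested-matrix idea is just that "waking up" only ever moves you out to the hosting layer:

```python
# Toy model of nested simulations in Xen terms: each "matrix" runs
# inside a host layer, and escaping one layer lands you in its host --
# which may itself be a simulation.

class Matrix:
    def __init__(self, name, host=None):
        self.name = name
        self.host = host  # the layer this simulation runs inside; None = "base reality"?

    def wake_up(self):
        # Waking up from a layer only takes you one level out.
        return self.host

dom0 = Matrix("Dom0: the 'real world' the crews inhabit")
domU = Matrix("DomU: the Matrix everyone is jacked into", host=dom0)

assert domU.wake_up() is dom0       # red pill: out to the "real world"
assert dom0.wake_up() is None       # ...or is Dom0 just another DomU?
```

The joke, of course, is that nothing inside a DomU can tell whether its Dom0 is really the bottom layer.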
Precisely! However, if you think about it, the Matrix Trilogy does not contradict this notion. Many people believe the "real world" presented is in fact another layer of the Matrix, otherwise, how would Neo have superpowers in that world (among other issues)?
> Many people believe the "real world" presented is in fact another layer of the Matrix, otherwise, how would Neo have superpowers in that world (among other issues)?
Short answer: WiFi
Longer answer: For reasons (probably involving systems of control, because, hey, it's the Matrix), in addition to the high-bandwidth wired connections, the implants in humans attached to the matrix also include lower-bandwidth wireless units, and the machines (obviously) have wireless network connections to coordinate mobile units. Neo can hack machines (or formerly Matrix-connected humans) via his (and their) wireless interfaces, giving him “superpowers” in the physical world.
I've read that an African American woman laid claim to the invention of both Terminator and The Matrix and claimed that the former was a prequel to the latter. Which makes a lot of sense.
This isn't likely general knowledge here on HN, but here in Phoenix we've recently (within the past couple of years) had a major uptick in the number of people getting on the freeway in the wrong direction and driving into oncoming traffic (often resulting in the death of one or more people).
Most of these have been attributed either to confusion at night among older drivers, or to driving under the influence of a substance. A few have been found to have been caused by bad road design (leading people unfamiliar with our freeway system to enter the freeway the wrong way).
However, some of these happen in broad daylight, without the person seemingly being confused or under the influence of anything. My wife has seen this happen personally. People honk and try to get the person's attention; sometimes they realize their error. Sometimes they don't. She has typically seen this happen most often when there is a lot of traffic backed up around freeway entrances.
Her theory is that people are "blindly" following the turn-by-turn directions of a GPS, and not paying attention, or becoming harried by the amount of traffic, or hearing something like "turn left in 100 feet" - but only following the "turn left" part.
I don't know how valid this theory is, but it's an interesting explanation for something that is happening all too often on our freeways. At one point, it was almost as if once a week you would hear about a "wrong way driver" accident - and it is known that this is an increase over past rates - so much so that I've read that some form of warning system is going to be installed in an effort to prevent this from happening.
I asked a deputy at Bartlett last year why the surge, and he scoffed and said the surge has been in step with metro population growth over his 22 years in the department (the New Times [I know] and EV Tribune claim to concur; I didn't click them, either). No doubt wrong-way drivers are a problem, but the only thing new these days is that someone in our legislature is taking action to mitigate (opportunize?) the problem, and the stories get eyes on screens and clicks on sites more than slow news, ghetto murders, and axe-wielding tweakers... although today's lumberjack will probably make the MSM.
>> We are not only being colonized by machines, we are becoming machines, and gladly
Got reminded of Erich Fromm's idea of "automaton conformity":
"The person who gives up his individual self and becomes an automaton, identical with millions of other automatons around him, need not feel alone and anxious any more. But the price he pays, however, is high; it is the loss of his self." [1]
Hoping that AI can create a pleasant world seems roughly as reasonable as hoping that humans can evolve, however gradually, until they can create a pleasant world for themselves. Is it not?
In the meantime, I'll cast my lot with creating a pleasant inner world.
Isn't that the beauty of intelligence? To automate a complex task so that it becomes trivial? Intelligence is used only to solve problems. Eliminating the problem is our goal, not needing to be "smart".
And on the other hand there's the mindfulness counter culture which is picking up steam. Perhaps we need to see how we can lose ourselves to find ourselves?
> can't stand our own thoughts, can't stand our own minds.
Just because it may sound like hipster "blah" doesn't mean that's not exactly right, IMHO.
> Our own vitality as well as that of others frightens us, if it still manages to surface, we respond with rage and turn against our own freedom. It is vitality itself that we are opposing.
-- Arno Gruen in "The Betrayal of the Self - The Fear of Autonomy in Men and Women" (1986)
The older I get, the more I feel that a lot of conflicts that seem confusing or complicated boil down to psychopathology. People with wings they never use will rarely say "oh, that's interesting, why is this person flying, and why is my back suddenly itching?" They will instead try to clip your wings, and they can become blind and stupid "on command" in ways that really don't make sense when looked at without that lens.
It's like a bunch of squiggles that seem random until you notice the areas they always avoid, because anything is better than facing oneself. If you live a life of too-fake emotions and too much sophistry, then really deep empathy, real love or hate, as well as critical thought cannot be accepted in earnest even in small quantities. That is, one might accept a correction stemming from critical thought, or an action coming from genuine empathy, but they cannot acknowledge that these were the "internal reasons". They don't know internal reasons as anything but nuisances and sources of shame; their own reasons are external punishment and reward, so they always cook up sick schemes of why someone who is not (as) sick (as them) is doing what they are doing, while they themselves do the opposite. It's like someone who has a cellar full of salt and calls it sugar: they will never eat even a piece of your pie, and when they do, they will call it salty and spit it out.
Yeah, it seems way too simple and self-righteous even to me whenever I'm formulating it, no matter how often I experience things that lead me back to these thoughts; and it still would if I bloated it with all the missing qualifiers I left as exercises to the reader, or added that I don't totally exclude myself from those I'm kind of othering here. But still: every day I see and meet and work with people who, out of their comfort zones, dragged out from under their blankets, are like a robot that got water in it for a second -- they just fucking blank, it's like I can physically see the Homer Simpson monkey in their head playing for time until the moment passes, and they come up with a fake memory for it -- then waddle back to safe land immediately. And if this sounds cold, it's because I sometimes despair... I don't observe and struggle with people because I don't like them.
But too often they force on me the decision to comfort them at the expense of all the "birds who are still flying", by entertaining their twisted reality, or to offend them. That, at the latest, is when I prefer parting ways. I need to protect myself, and those who want the best for me, instead of being fought every step of the way. I know that nobody in the world, ever, wanted anything but to receive love and empathy as a baby. Nobody in the world, ever, was at fault for not receiving it. I'm not saying that explains the whole of human life, but that this is true as a categorical statement. But I cannot turn back time; I can't be peers and family and friends and lover to a single person as a single person, I can't take them hiking for a year or whatever might "make a dent". When my stopping them from abusing me or others, or just saying what I see or think, hurts and scares them a bit more than they are every day already, then I prefer that to the constant layer of untruth and pettiness they weave people and situations into otherwise. What always clinches it for me is that even if I were 100% blind and obedient, they would still be unhappy and hungry and scared. I'm not the hole in their soul, after all.
Rationalization is a powerful tool for explaining behavior that has nothing to do with the post-facto rationale.
I fear that we select for this in the tech industry. I have worked with brilliant programmers that are emotionally unstable. Some of them have developed self-awareness, some have not, but they uniformly lack tools for dealing with the instability.
Conversation is a lost art nowadays, but when it was still a thing, part of it was making general statements and extrapolating general trends -- even if a statement was only factually correct for a large (or small but increasing) number of people and not absolutely everybody down to the last person.
In those days, counter-arguments involved argumentation on why something is not the case in general, as opposed to just saying something analogous to "speak for yourself".
Heck, in those days, the "that's according to my opinion" part didn't have to be declared either, as it was implicitly obvious to everybody.
People expect more from conversation, as opposed to sloppy thinking where the author projects their own biases out to the world and expects people will agree with them or be polite enough to be silenced. Being "polite" like that was never a good idea, and it's, happily, becoming less common, in this era when physically forcing conformity is on the wane.
>People expect more from conversation, as opposed to sloppy thinking where the author projects their own biases out to the world and expects people will agree with them or be polite enough to be silenced
Yes. They expect people to be able to freely share their opinions and observations without others knee-jerk calling them "biases" and without demanding citations for things people can judge for themselves, as if every talk has to be mediated through some (often crappy, but just formal) statistic.
I didn't say anything about being "polite enough to be silenced" (heck, I'm not that polite myself) -- just to engage with counter-arguments and that "You've made a general observation but it doesn't hold for me specifically or my friend Jack thus it is invalid" is not really an argument.
I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
-- Richard Brautigan
I love this reading of the poem. The images and soundtrack fit well, but the voice of Richard Brautigan himself is fantastic, very clear with that slight computer-generated, Stephen Hawking vibe.
In SF, that poem would be what Iain M. Banks wrote of in the 'Culture' books - sort of "communism is A.I. plus electricity" ...
From just the title, however, I was reminded of Jack Williamson's With Folded Hands - robots dutifully keeping humans from harm, by not allowing them to do anything.
Obligatory. If you enjoy Adam Curtis, you should have a look at 'The Loving Trap of Pandora's Nightmares' - it's only 3 minutes long and rather amusing.
Really interesting documentary. I've heard good things about Adam Curtis's documentaries.
He works, or at the very least used to, at the BBC, and I remember reading a post of his on his blog where he saved something like 50 terabytes of rare unedited video of Afghanistan that was decades old, from before and during its Marxist revolution starting in the late '70s.
Probably the most interesting of his documentaries that I've seen is on the rise of propaganda and the public relations industry in his documentary "Century of the Self".
When I watch Adam Curtis I always feel like he's so close to getting to something really good but never quite makes it. They are well produced docus though, you can tell that he has incorporated quite a few lessons from his study of propaganda.
> Probably the most interesting of his documentaries that I've seen is on the rise of propaganda and the public relations industry in his documentary "Century of the Self".
Yeah, I liked that too, the fact that PR is war-time propaganda repackaged for the corporate world.
(paraphrasing) US Gov, terrified by how an entire country unified with violent hatred (Nazi Germany), hired Freud's nephew for white-picket fence propaganda to keep "the beast" in American psyche buried.
Or how cigarette companies hired him (Bernays) to turn women smoking cigarettes from huge social taboo to "sexy" -- look at recent movie, pop icons etc on how this one marketing stunt has impacted American culture.
Sorry if you felt helpless after watching it. I found it informative and feel it contributed to my awareness of insidious advertising.
He is a documentary artist in my opinion. I love the way he combines pictures and sound in such a unique way while delivering those fantastic connections.
>All Watched Over By Machines Of Loving Grace is a series of films about how this culture itself has been colonised by the machines it has built. The series explores and connects together some of the myriad ways in which the emergence of cybernetics—a mechanistic perspective of the natural world that particularly emerged in the 1970s along with emerging computer technologies—intersects with various historical events and vice versa. The series variously details the interplay between the mechanistic perspective and the catastrophic consequences it has in the real world.
This style of writing has always bothered me. I don't know how to describe it. College-writing? Like, "The series variously details the interplay between the mechanistic perspective..." What does this sentence actually mean? Surely there's a more efficient way to write it. At the very least by throwing out useless fluff like "variously."
"AWOBMOLG is a series of films about how the development of computers has impacted human culture. In particular, it examines the computer scientist's mechanistic worldview, and the way that this belief has negatively impacted the world."
But honestly that's pretty dry and it lacks some of the details that the original quotation has. I know what you're talking about w.r.t. overly flowery language, but I don't see this as a blatant example of it. Zizek or Haraway are the first that spring to mind when I think of that problem.
It's basically using poetic flourishes to hide sloppy, simplistic thinking. Just like when the article accuses science of imposing machinery on nature (or something). "Part two shows how the modern scientific perspective of the natural world is actually a fantasy. It has little to do with the reality of nature. It is based on mechanistic ideas that were projected on to the natural world in the 1950s by scientists..."
That's unfortunately par for the course in humanities. Occasionally that kind of language can be used to enhance good ideas but more often used to let artists have their feelings and not investigate further.
I have watched Adam Curtis' documentaries and understand the context of that quote. It's not wrong.
Scientists have models of the natural world that are fantasies. Scientific models intentionally simplify reality in order to make it analyzable. Given quantum uncertainty, it is arguable that, with our current understanding of reality, a perfect model of any aspect of reality will never be possible.
Most of what you've said in your comment is non sequitur, and I suggest you edit it out.
Using simplification to make predictions about reality ≠ "having a fantasy".
The entire mode of operation of science involves being aware that one's approach is less than absolutely accurate. The advance of science certainly debunked vitalist views of nature as some irreducible essence. It's a common trope to equate such debunking with turning nature into a machine, but nature itself remains as it is regardless of our ideas.
Disagree. A model is not reality, and is indeed a fantasy. Scientists should be (and usually are) well aware of this.
Importantly, the documentary explores some historical examples of where these models were used to attempt to understand, predict, and shape human behavior. Some might find the use of models in these ways immoral, and in my opinion the documentary does a pretty good job of exploring it.
The linguistic sense of the fantasy is of an idea of dubious value. But you (and the OP) seem to take position that one can use the term for any idea. If we're going to operate that way, I could just as well say that your claim that "A model is not reality, and is indeed a fantasy." is itself a "fantasy".
Comments like par for the course in humanities [...] often used to let artists have their feelings aren't terribly polite either, freighted as they are with implications of intellectual inferiority.
Since that poster can dish out criticism, I'm sure he can take some in return.
I don't. It wasn't a personal attack. I think he jumps to conclusions that are unwarranted, and I'm asking him to kindly edit them out. In no way did I attack his person.
In the same spirit, I'd point out that telling people to remove comments you don't agree with reduces my opportunities to read and consider them. Please refrain from doing this.
"I realized that the purpose of writing is to inflate weak ideas, obscure poor reasoning, and inhibit clarity. With a little practice, writing can be an intimidating and impenetrable fog!"
Exactly what the words say. It's pretty direct and straightforward; “mechanistic perspective” and “catastrophic consequences [...] in the real world” could use illustration by concrete examples, but not in the sentence itself. (The article does appear to detail examples of the former, but not the latter.)
> Surely there's a more efficient way to write it.
Yeah, I can shave off 5-10 words to make it snappier, but after that it begins to lose meaning, and has already lost valuable subtleties. To misquote Einstein, things should be as simple as possible, but not simpler.
It's basically a shibboleth that signifies quality in certain circles, like college humanities, but actually is the opposite.
People in those circles can't say what they mean due to stifling ideological policing and status seeking games (and don't have many interesting thoughts as a consequence), so they all just sit around disguising aborted thoughts as impenetrable wisdom.
'Like, "The series variously details the interplay between the mechanistic perspective..." What does this sentence actually mean?'
I'm not a big fan of this style either, which I think greatly tempts both author and reader to mistake style for substance, but it doesn't help that you've cut the sentence off mid-clause. "The series variously details the interplay between the mechanistic perspective and the catastrophic consequences it has in the real world." can be translated into something a bit more HN-ish as "The series discusses the interactions between machines and their catastrophic real-world consequences through a mechanistic lens."
While subjective, it is at least a sentence that contains real content, in the sense that it can be falsified, or at the very least, sensible arguments raised for and against it. I won't disagree that the style you are reacting against does tend to indicate content-free sentences, though.
If you have no interest in what's being said, nor anything to say, then we can all agree it's not in your interests to bother with a trade language.
The discussion here is pretty clear evidence that something has been said. You may find it unnecessarily obtuse from a layman's perspective, but so is any legal contract, math textbook, or CS theory article. Trade languages exist for a reason.
I watched this documentary and I actually disliked the perspective he gave. As someone with a Bachelor's degree in Computer Science who took an AI course, I came from the position that, for the most part, computers improve society. While I was watching this, I started to pick apart his viewpoint.
As I have been to more AI ethics meetups, I now think of ways to democratize data / AI instead. I may have forgotten his viewpoints in the documentary since it was a while ago, but now I am more inclined to agree with him.
Can you provide some examples of points you disagree with?
I do feel like Curtis can be a touch hyperbolic at times, but that's also part of storytelling, and I think it's especially important to strongly represent viewpoints that are non-normative. I think of Curtis much like Michael Moore: he gets a few things wrong sometimes but also offers a lot of insights.
I've always wondered if electronic technics have decreased world suffering or expanded it. Will AI democratize power or concentrate it?
His storytelling method usually creates an idea of us vs. them, and it's very black and white about who is causing the problem and who is being exploited. For example, the combination of disciples of Ayn Rand and the '60s counterculture seems painted with too broad a brush. I think the group was bigger than those two, and there were many more shades of grey instead of black and white.
In the context of Curtis I also like to point out James Burke's original "Connections" series. It's a deconstruction of history that is about ideas, rather than places and dates, and the first season is structured around the development of arguably the 20th century's most ambitious achievement (no spoilers:) The last episode is the most Curtis-esque in asking questions about managing human progress.
Pandora's Box is my favorite introduction to Adam Curtis. In short, the theme is "times that people thought they knew what they were doing, but actually didn't."
Seriously, though, thinking of them as essays is a very good idea. When I hear documentary, I usually think of David Attenborough, fact-conveying kind of film.
Curtis is more about ideas, and my usual reaction to his films is "Now that gave me a lot to think about."
Curtis sees the connection between machines, hierarchies, capitalism, and technology. Curtis aptly points out that in our world, technology has not bred equality, that rather, the very concept of ownership spits in the eye of non-hierarchical society. I also question the degree to which Silicon Valley types fundamentally believe in (socioeconomic) equality - while Rand supporters often invoke freedom, liberty, justice, etc. they generally fail to mention equality at all. Instead, they opt for lofty "rising tide" rhetoric or relate back to a "free and open web". However, Curtis is dead right about the type of fantasy that technological progress has given our world.
This fantasy - that if we just keep pushing forward, we can solve our problems - belies a fundamental, inherent fallacy in late-capitalistic logic. That is: when the problems are necessary parts of the system, it is impossible for the system to solve them. Current nation-state-capitalism relies on a few tenets that cause many of our planet's problems. Nations and people must compete, rather than cooperate. Growth must always continue, lest we face stagnation. Property and the means of acquiring wealth must not be equally distributed; in fact, such distribution would be inherently immoral. The last point here is explicitly Randian and is at the heart of global society's moral compass. That what one man has, no other has a right to take, regardless of how that property was acquired. Thus it is wrong for Palestine to contest the land given to Israel at the end of the Second World War, it is wrong for young black men and women to stand on the bridges that wealthy San Franciscans take to work, it is wrong for the government to appropriate the wealth of Mark Zuckerberg - despite the fact that his idea and wealth were ostensibly stolen from others.
The fantasy of a "free web" can only be recognized in its relation to property and ownership. While megacorps like Google and Facebook make lofty claims about freedom, they unequivocally deny that the rest of the web ought to have access to their data, their infrastructure, their systems. They support the laws that make hacking illegal, and in many cases, prove two-faced about what they really want: a free web, but with some limitations that favor them. Google, for instance, supports net neutrality but does not have any interest in limiting its own ability to profit off the web. What we end up with is a "free society" where the ultimate arbiters of justice are not beholden to society in any way - capable of setting their own rules and saying "you may enter if ..."
My response would be that you are casting society and societal organisation in terms of absolutes and extremes. Some people - Rand for example - believe that all redistribution is always wrong and immoral. Some people participate in this fantasy. Many, many people don't, many many people pay taxes happily, and vote for taxes because they realise that redistribution prevents poverty and promotes social harmony - due to ensuring that there is a minimal population of murderous starving feral children. Many nations do not compete all the time, many nations co-operate and support each other, most of the time.
At the same time most people see elastic limits in redistribution of wealth, space, time and privilege. For example it is widely believed that it's ok for less privileged groups to take public space and media time to protest and illustrate the injustice that has placed them in extremis. However to do so to the extent that others cannot care for their children or enjoy their basic rights - safety, shelter, is unacceptable.
I was fortunate enough to catch a screening of this at the Alamo Drafthouse in Austin, TX for $3. It was my first exposure to Adam Curtis, and I have been hooked ever since. His work is hypnotic, unnerving sometimes, misleading / biased often, but always a good jumping off point for exploring ideas and people on your own.
I read a description of his latest, "Hypernormalization," that was something like "a three-hour journey through 100 Wikipedia tabs," and it perfectly described his documentaries. They start with a basic premise but then veer wildly through topics whose links are tenuously held together by their relation to each other, plus stock footage and great music.
"Machines that make us smart" argues that what makes us human is not that we can build machines, or that machines = intelligence but that human + machine = intelligent system.
While we are on the subject, here is another video from the same site on becoming a machine. Stumbled across this lecture while looking for more Curtis documentaries and thought it was a very good guide through a cybernetics roadmap of the recent past, focusing on the work of Professor Kevin Warwick, who makes this subject quite entertaining and very accessible for the general/non-technical viewer.
As well as the long multi-part documentaries, Adam Curtis wrote short five minute clips for the show NewsWipe, which are also worth watching, e.g. his talk about news reporting and "oh dearism" - https://www.youtube.com/watch?v=8moePxHpvok
Future Shock by Alvin Toffler in 1972 narrated by Orson Welles is well worth adding to the list of documentaries to watch. Much of what Toffler/Welles touch is very relevant now.
https://www.youtube.com/watch?v=fkUwXenBokU
>What about antibiotics? That seems like a clear case of changing the world for the better.
At first glance, yes, but then you remember that the universe is not static, but dynamic, constantly evolving, and the introduction of antibiotics, while useful --- miraculous, even --- at first, eventually results in "antibiotic-resistance"[0], and now we have a worse problem than the one we started with.
There is a wonderful essay by Paul Kingsnorth, titled "Dark Ecology"[1] that identifies this pattern as a "progress trap".
This quote from the Tao Te Ching is especially apt:
"Do you think you can take over the universe and improve it?
I do not believe it can be done."
Bacterial infections are no longer a serious threat -- even bubonic plague has been brought under control. It's unlikely that bacteria will ever become fully immune to one or another antibiotic, but if that does happen we've still got penicillin, bacteriophages, and sterilization techniques; bacteria will never again be as dangerous as they were as recently as the late 1800s.
For further clear-cut changes of the world for the better, there's smallpox eradication, the imminent eradication of guinea worms, the extirpation of polio from most of the world, and the development of anaesthesia.
I don't disagree, but bear in mind that 'better' involves a whole bunch of implicit assumptions. Some people think a more austere and purely Darwinian world would be better, and that humanity has gone soft. I don't agree, but try to get in the habit of examining your opinions from radically different perspectives to realize how fundamentally subjective they are.
"Better" here just involves the assumptions that pain is worse than its absence, and that human life is intrinsically valuable. I'm fine with making both assumptions -- especially since I've read Timothy Snyder's _Black Earth: the Holocaust as History and Warning_, which explores the thought of the most famous proponent of the mankind-needs-more-Darwinian-selection-pressure alternative. "I disagree with Hitler" isn't exactly compelling evidence for "my opinions are fundamentally subjective."
(I'm also of the opinion that the earth isn't flat, and that both sides of a contradiction can't be true. In those cases, too, I'm comfortable with saying that those who disagree with me are wrong -- and in uninteresting ways, too.)
On topic, this does seem like a video series I'll have to look at. Human/Machine interactions will only become more important as time goes on; it will be interesting to watch both the philosophy and the reality of it all evolve.
Just going off the summary of part 1, I'm not sure what I'm supposed to make of this. Every cultural voice in the US -- except a few eccentric Greens -- was pro-computer-revolution; the Left wanted decentralization and freedom, while the Right wanted prosperity and anti-Communism. Neither major side in the US saw surveillance capitalism coming. If the author's trying to absolve the Left of responsibility for surveillance capitalism, he's wasting his time -- think of Apple Computer, the Free Software Foundation, 37signals.com, and the whole close association of computers (especially personal computers) with the counterculture...
He also enumerates the following goals that the personal-computing revolution was after:
* No economic risk or failure
* No boom-bust cycle
* Decentralized political power
* Democracy
* Connectedness
* Non-hierarchy
* Pursuit of self-interest
* Desire to improve the world
Which of these does he think we shouldn't have pursued? Which of them does he think are bad? If you oppose all of them, you're asking to live in the Egypt of the Pharaohs, and not even the neo-reactionaries want _that_.
At the risk of sounding glib, watching the documentary itself would answer your questions, no? If you're interested in the topic, as you seem to be, they are likely worth your time.
> watching the documentary itself would answer your questions, no?
I'm definitely feeling engaged, but not entirely enthusiastic... (EDIT: But I feel like I'm picking a fight with you, and I apologize for that. I'll definitely give this documentary a shot!)
Part 1 sounds like the fascist mode of thinking: "all our intractable emergent problems are due to an evil conspiracy of outsiders in our midst, with an inexplicable desire to destroy the world!" The whole problem with Ayn Rand is that she tried too hard to be anti-communist and accidentally turned fascist; seeing her followers pointed to as a fascist-style internal outgroup is as depressing as it is silly.
As for the other two parts -- tearing down the Gaia hypothesis and the Selfish Gene -- what does he think are the right things to think, if these are the wrong things? Are we going to just start pretending that the establishment Left was always anti-hippie, now that hippieness has turned out to have nasty emergent consequences?
I'm particularly annoyed that he blames the Congo situation on the Selfish Gene. The establishment likes to pretend that the Congo is too complicated to understand -- because the alternative is owning up to how the French backed the Hutus during and after the Rwandan genocide, and how the international community is still more sympathetic to the Hutus than the Tutsis. (See also the allegedly inexplicable causes of WWI.)