Now that I have seemingly taken on managing DNS at my current company, I have seen several inadequacies of DNS that I was not aware of before. The main one: if an upstream DNS server returns SERVFAIL, there is really no way to distinguish whether the server you are querying has failed or the actual authoritative server upstream is broken (I am aware of EDEs, but they don't really solve this). So clients querying a broken domain will retry each of their configured DNS servers, and our caching layer (Unbound) will also retry each of its upstreams, etc. The result is a bunch of pointless upstream queries, like an amplification attack. There is also the issue of the search path generating stupid NXDOMAIN queries like badname.company.com, badname.company.othername.com, etc.
> So clients querying a broken domain will retry each of their configured DNS servers, and our caching layer (Unbound) will also retry each of its upstreams, etc.
I expect this is why BIND 9 has the 'servfail-ttl' option. [0]
Turns out that there's a standards-track RFC from 1998 that explicitly permits caching SERVFAIL responses. [1] Section 8 of that document suggests that this behavior was permitted by RFC 1034 (published back in 1987).
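For anyone who wants to turn that on, here is a minimal named.conf fragment (a sketch only: 'servfail-ttl' is the real BIND 9 option, but the value of 5 is just an example; BIND caps it at 30 seconds):

    options {
        // Cache SERVFAIL responses briefly so repeated client retries
        // don't all turn into fresh upstream queries (default 1s, max 30s).
        servfail-ttl 5;
    };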
Re: your SERVFAIL observation, oh man did I run into this exact issue about a year or so ago when it came up for a particular zone. All I was doing was troubleshooting it on the caching server. It took me a day or two to actually look at the auth server and find out that the issue actually originated there.
My parents were tricked the other day by a fake YouTube video of a "racist cop" doing something bad, and they got outraged by it. I watched part of the video, and even though it felt off, I couldn't immediately tell for sure whether it was fake. Nevertheless, I googled the names and details and found nothing but repostings of the video. Then I looked at the YouTube channel info, and there it said it uses AI for "some" of the videos to recreate "real" events. I really doubt that; it all looks fake. I am just worried about how much divisiveness this kind of stuff will create, all so someone can profit off of YouTube ads. It's sad.
I'm pretty sure we're already decades into the world of "has created".
Everyone I know has strong opinions on every little thing, based exclusively on their emotional reactions and feed consumption. Basically no one has the requisite expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.
And who can blame them (us)? It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite. And each little snippet worms its way into your brain (and well-being) one way or the other.
It's just been too much for too long and you can tell.
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite
It's odd to me to still use "luddite" disparagingly while implying that avoiding certain tech would actually have some high-impact benefits. At that point I can't help but think the only real issue with being a luddite is not following the crowd and fitting in.
Hipster used to mean that, but the meaning changed to someone who “doesn’t fit in” only for performative reasons, not really “for real” but just to project an image of how cool they are.
> Its odd to me to still use "luddite" disparagingly while implying that avoiding certain tech would actually have some high impact benefits
They didn't say to avoid certain tech. They said to avoid takes and news headlines.
Your conflation of those two is like someone saying "injecting bleach into your skin is bad" and you responding with "oh, so you oppose cleaning bathrooms [with bleach]?"
How so? The OP referenced how difficult it is to avoid said takes and news without being a complete luddite. That certainly implies avoiding certain tech; I have to assume they meant much of the digital tech we use today rather than the power looms the luddites were pushing back on.
Your bleach scenario is confusing to me; it's also arguing against something completely unrelated to the discussion here.
it's malware in the mind. it was happening before deep fakes were possible. news outlets and journalists have always had incentive to present extreme takes to get people angry, cause that sells. now we have tools that pretty much just accelerate and automate that process. it's interesting. it would be helpful to figure out how to prevent people (especially upcoming generations) from getting swept away by all this.
I think fatigue will set in and the next generation will 'tock' back from this 'tick.' Getting outraged by things is already feeling antiquated to me, and I'm in my 30's.
There's a massive industry built around this on YT, exemplified by the OP's post about his parents. To a first-order approximation, every story with a theme of "X does sexist/racist/ageist/abusive thing to Y and then gets their comeuppance" on YouTube is AI-generated clickbait. The majority of the "X does nice thing for Y and gets a reward or surprise" videos dating from the last year or two are also AI-generated clickbait, but there are far more of the former. Outrage gets a lot more clicks than compassion.
> news outlets and journalists have always had incentive to present extreme takes to get people angry, cause that sells.
As someone who’s read a newspaper daily for 30+ years, that is definitely not true. The news has always tried to capture your attention but doing so using anger and outrage, and using those exclusively, is a newer development. Newspapers and broadcast news used to use humor, suspense, and other things to provoke curiosity. When the news went online, it became focused on provoking anger and outrage. Even print edition headlines tend to be tamer than what’s in the online edition.
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite
It really isn't that hard, going by my experience. Maybe a little stuff on here counts. I get my news from the FT; it's relatively benign by all accounts. I'm not sure that opting out of classical social media is particularly luddite-y; I suspect it's closer to becoming in vogue than not.
Being led around by the nose is a choice still, for now at least.
I think the comment you're replying to isn't necessarily a question of opting out of such news, it's the fact that it's so hard to escape it. I swipe on my home screen and there I am, in my Google news feed with the constant barrage of nonsense.
I mostly get gaming and entertainment news for shows I watch, but even between those I get CNN and Fox News, both of which I view as "opinion masquerading as news" outlets.
My mom shares so many articles from her FB feed that are both mainstream (CNN, etc) nonsense and "influencer" nonsense.
Right, and my point is how easy opting out actually is.
I have no news feed on my phone, and I doubt it is any harder to evade on Android. Social media itself is gone. The closest I get to click-bait is when my mother spouts something gleaned from the Daily Mail. That vector is harder to shift, I concede!
Fair points on both fronts! Though I think you may be conflating simple with easy. Removing social media from one's life is certainly simple (just uninstall the app!), but it's not that easy for some people because it's their only method of communication with some folks. I mostly don't use SM but I log onto Instagram because some of my friends only chat there, same with Facebook.
I honestly think it might be downstream of individualized mass-market democracy; each person is tasked with fully understanding the world as it is so they can make the correct decisions at all levels of voting, but ain't nobody got time for that.
So we emotionally convince ourselves that we have solved the problem so we can act appropriately and continue doing things that are important to us.
The founders recognized this problem and attempted to set up a Republic as an answer to it, so that each voter didn't have to ask "do I know everything about everything so I can select the best person" and instead was asked "of this finite, smaller group, who do I think is best to represent me at the next level?" We've basically bypassed that; every voter knows who ran for President last election, but hardly anyone can identify their party's local representative in the party itself (which is where candidates are selected, after all).
Completely agree, but at the same time I can't bring myself to believe that reinforcing systems like the electoral college or reinstating a state-legislature-chosen Senate would yield better outcomes.
Most people I know who have strong political opinions (as well as those who don't) can't name their own city council members or state assemblyman, and that's a real problem for functioning representative democracy. Not only for their direct influence on local policy, but also because these levels of government also serve as the farm team or proving grounds for higher levels of office.
By the time candidates are running with the money and media of a national campaign, in some sense it's too late to evaluate them on matters of their specific policies and temperaments, and you kind of just have to assume they're going to follow the general contours of their party. By and large, it seems the entrenched political parties (and, perhaps, parties in general) are impediments to good governance.
I think it's an inherent problem with democracy in itself, and something that will have to be worked out at some time, somewhere.
The accidents that let it occur may no longer be present - there are arguments that "democracy" as we understand it was impossible before rapid communication, and perhaps it won't survive the modern world.
We're living in a world where a swing voter in Ohio may have more effect/impact on Iran than a person living there - or even more effect on Europe than a citizen of Germany.
The issue is the disconnect between professed principles and action. And the fact that nowadays there are not many ways to pick and choose principles except two big preset options.
It's easier to focus on fewer representatives, and because the federal government has so much power (and then state governments), life-changing policies mainly come top-down. Power should instead flow bottom-up, with the top being the linchpin, but alas.
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite.
It’s quite easy actually. Like the OP, I have no social media accounts other than HN (which he rightfully asserts isn’t social media but is the inheritor of the old school internet forum). I don’t see the mess everyone complains about because I choose to remove myself from it. At the same time, I still write code every day, I spend way too much time in front of a screen, and I manage to stay abreast of what’s new in tech and in the world in general.
Too many people conflate social media with technology more broadly and thus make the mistake of thinking that turning away from social media means becoming a luddite. You can escape the barrage of trolls and hottakes by turning off social media while still participating in the much smaller but saner tech landscape that remains.
I feel like you people are intentionally misconstruing what "Luddite" means. It doesn't mean "avoids specific new tech." It means "avoiding ALL new tech because new things are bad."
A luddite would refuse the covid vaccine. They'd refuse improved trains. They'd refuse EVs, etc. This is because Luddism is the blanket opposition to technological improvements.
you have completely misunderstood what it means to be a luddite.
the luddites were a labor movement opposed to the negative externalities imposed by the rapid industrialization of formerly craft/artisanal markets. it was a movement that stood for the protection of workers' rights and the quality of goods produced; it was not opposed to new technologies. what it did oppose was the irresponsible use of those technologies at the expense of workers and consumers.
what you're referring to is probably more accurately described as primitivism.
> I feel like you people are intentionally misconstruing what "Luddite" means.
That’s a very unfair accusation to throw at someone off the cuff. Anyway, what you wrote is not what a Luddite is at all, especially not the anti-vaccine accusation. I don’t think you’re being deliberately deceptive here, I think you just don’t know what a Luddite is (was).
For starters: They were not anti-science/medicine/all technology. They did not have “blanket opposition to all technological improvement.” You’re expressing a common and simplistic misunderstanding of the movement and likely conflating it with (an also flawed understanding of) the Amish.
They were, at their core, a response against industrialization that didn’t account for the human cost. This was at the start of the 19th century. They wanted better working conditions and more thoughtful consideration as industrialization took place. They were not anti-technology and certainly not anti-vaccine.
The technology they were talking about was mostly related to automation in factories which, coupled with anti-collective-bargaining initiatives, led to further dehumanization of the workforce as well as all sorts of novel and horrific workplace accidents for adults and children alike. Their calls for “common sense laws” and “guardrails” are echoed today in how many of us talk about AI/LLMs.
> It is almost impossible to escape the constant barrage of takes and news headlines these days without being a total luddite.
Then I am very proudly one. I don't do TikTok, FB, IG, LinkedIn, or any of this crap. I do a bit of HN here and there. I follow a curated list of RSS feeds. And twice a day I look at a curated/grouped list of headlines from around the world, built from a multitude of sources.
Whenever I see a yellow press headline from the German bullshit print medium "BILD" when paying for gas or out shopping, I can't help but smile. That people pay money for that shit is - nowadays - beyond me.
To be fair, this was a long process. And I still regress sometimes. I started my working life on an editorial team for an email portal. Our job was to generate headlines that would stop people who were logging in to read their mail and get them to read our crap instead - because ads embedded within content were paid way better than ads around emails.
So I actually learned the trade. And learned that outrage (or sex) sells. This was some 18 or so years ago - the world changed since then. It became even more flammable. And more people seem to be playing with their matches. I changed - and changed jobs and industries a few times.
So over time I reduced my news intake. And during the pandemic I learned to decisively reduce my social media usage - it is just not healthy for my state of mind. Because I am way too easily dopamine-addicted and triggerable. I am a classic xkcd.com/386 case.
> Everyone I know has strong opinions on every little thing, based exclusively on their emotional reactions and feed consumption. Basically no one has the requisite expertise commensurate with their conviction, but being informed is not required to be opinionated or exasperated.
Case in point: if you ask for expertise verification on HN you get downvoted. People would rather argue their point, regardless of validity. This site’s culture is part of the problem and it predates AI.
Just twenty minutes ago I got a panic call that someone was getting dozens of messages that their virus scanner is not working and they have hundreds of viruses.
Removing Google Chrome's ability to send messages to the Windows notification bar brought everything on the computer back to normal.
Customer asked if reporting these kinds of illegal ads would be the best course. Nope, not by a long shot. As long as Google gets its money, they will not care. Ads have become a cancer of the internet.
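If you want to make that fix stick, and assuming a machine-wide Chrome install with admin rights, Chrome's DefaultNotificationsSetting policy can deny web-notification permission for every site. A sketch using Python's stdlib winreg module (the policy name and registry path are Chrome's documented ones; everything else here is illustrative):

    import winreg

    # Chrome reads Windows policies from HKLM\SOFTWARE\Policies\Google\Chrome.
    # DefaultNotificationsSetting: 1 = allow, 2 = deny all sites, 3 = ask (default).
    key = winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Policies\Google\Chrome",
        0,
        winreg.KEY_SET_VALUE,
    )
    winreg.SetValueEx(key, "DefaultNotificationsSetting", 0, winreg.REG_DWORD, 2)
    winreg.CloseKey(key)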
If there were a GDPR-type law applying to any company above a certain size (so as to only catch the big ad networks) that allowed the propagation of "false" ads claiming security issues, monetary benefits, or governmental services, it could stop the transmission of most of the really problematic ads. Any company the size of Google is also in the risk-minimization business; they would set up a workflow to filter out "illegal" ads to at least a defensible level so they don't get fines that cost more than the ads pay.
Also, can you set Windows not to let ad notifications through to the notification bar? If not, that should also be a point of the law.
Now I bet somebody is going to come along and scold me for trying to solve social problems by suggesting laws be made.
Not scold (that is how we shape social behavior), but only note that Safe Harbor essentially grants the opposite of this (away from the potential default of "By transiting malware you are complicit and liable in the effect of the malware") so it'd have to be a finely-crafted law to have the desired effect without shutting down the ability to do both online advertising and online forums at all.
... which doesn't sound impossible. It's also entirely possible that the value of Section 230 has run its course and it should generally be remarkably curtailed (its intent was to make online forums and user-generated-content networks, of which ad networks are a kind, possible, but one could make the case that it has been demonstrated that operators of online forums have immense responsibility and need to be held accountable for more of the harms done via the online spaces they set up).
People just need a barebones understanding of the computer hardware level, the OS level, the browser level, and how permissions work between the three.
If you have that you will never get scared by a popup in Chrome.
That’s not because it’s decentralized or open, it’s because it doesn’t matter.
If it was larger or more important, it would get run over by bots in weeks.
Any platform that wants to resist bots needs to
- tie personas to real or expensive identities
- force people to add AI flag to AI content
- let readers filter content not marked as AI
- and be absolutely ruthless in permabanning anyone who posts AI content unmarked, one strike and you are dead forever
The issue then becomes that marking someone as “posts unmarked AI content” becomes a weapon. No idea about how to handle it.
It's never going to happen, but I felt we solved all of this with forums and IRC back in the day. I wish we gravitated towards that kind of internet again.
Group sizes were smaller and as such easier to moderate. There could be plenty of similar-interest forums, which meant that even if you pissed off some mods, there were always other forums. Invite-only groups that recruited from larger forums (or even trusted-members-only sections of the same forum) were good at filtering out low-value posters.
There were bots, but they were not as big of a problem. The message amplification was smaller, and it was probably harder to evade bans.
> I wish we gravitated towards that kind of internet again.
So do it. Forums haven't gone away, you just stopped going to them. Search for your special interest followed by "Powered by phpbb" (or Invision Community, or your preferred software) and you'll find plenty of surprisingly active communities out there.
Yeah, you are right! I have started going down that road the last year or so, but mostly in the IRC sphere. I started hanging out on libera.chat, but found a smaller community on irc.inthemansion.com which I really enjoy.
I'm probably just jaded as most of the forums I visited back in the day became ghost towns during the 2010s. I should make more of an effort here
> It's never going to happen, but I felt we solved all of this with forums and IRC back in the day. I wish we gravitated towards that kind of internet again.
IME young people use Discord, and those servers often require permission to even join. Nearly all my fandom communications happen on a few Discord servers, most of which you cannot join without an invitation, and if you're kicked (bad actors will be kicked), you cannot re-join (without permission).
I guess I am kind of describing Discord in some sense. I personally discounted Discord because I've only ever used it as a free voice chat for small groups. But to be fair, I would rather consume social media content through basic HTTP websites than have everything live in that boring Discord client.
>and be absolutely ruthless in permabanning anyone who posts AI content unmarked,
It would certainly be fun to trick people I dislike into posting AI content unknowingly. Maybe it has to be so low-key that they aren't even banned on the first try, but that just seems ripe for abuse.
I want a solution to this problem too, but I don't think this is reasonable or practical. I do wonder what it would mean if, philosophically, there were a way to differentiate between "free speech" and commercial speech such that one could be respected and the other regulated. But if there is such a distinction I've never been able to figure it out well enough to make the argument.
The worst part is... the bots, spam, and ads are still there, even if there is no one left to read them.
Usenet might still be alive (for piracy/binaries at least), and maybe a handful of text groups, but in the text groups I used to read, it's been nothing but a constant flow of spam for 15+ years.
I’m spending way too much time on the RealOrAI subreddits these days. I think it scares me because I get so many wrong, so I keep watching more, hoping to improve my detection skills. I may have to accept that this is just the new reality - never quite knowing the truth.
Those subreddits label content wrong all the time. Some of the top commenters are trolling (I've seen one cooking video where the most upvoted comment is "AI, the sauce stops when it hits the plate"... as thick sauce should do.)
You're training yourself with a very unreliable source of truth.
> Those subreddits label content wrong all the time.
Intentionally, if I might add. Reddit users aren't particularly interested in providing feedback that will inevitably be used to make AI tools more convincing in the future, nobody's really moderating those subs, and that makes them the perfect target for poisoning via shitposting in the comments.
> You're training yourself with a very unreliable source of truth.
I don’t just look at the bot decision or accept every consensus blindly. I read the arguments.
If I watch a video and think it’s real and the comments point to the source, which has a description saying they use AI, how is that unreliable?
Alternatively, I watch a video and think it’s AI but a commenter points to a source like YT where the video was posted 5 years ago, or multiple similar videos/news articles about the weird subject of the video, how is that unreliable?
I don't understand. In the grandparent comment you say you have a problem spending too much time on those subreddits and watching too many of those videos, but then you push back here.
Personally, I don't think that behavior is very healthy, and the other parent comment suggested an easy "get out of jail free" way of not thinking about it anymore while also limiting anxiety: they're unreliable subreddits. I'd say take that advice and move on.
I show my young daughter this stuff and try to role model healthy skepticism. Critical thinking YT like Corridor Crew’s paranormal UFO/bigfoot/ghosts/etc series is great too. Peer pressure might be the deciding factor in what she ultimately chooses to believe, though.
I think the broader response and re-evaluation is going to take a lot longer. Children of today are growing up in an obviously hostile information environment whereas older folk are trying to re-calibrate in an environment that's changing faster than they are.
If the next generation can weather the slop storm, they may have a chance to re-establish new forms of authentic communication, though probably on a completely different scale and in different forms to the Web and current social media platforms.
Yeah this is what I always expected to happen. Cryptographic signing of source material so you can verify who the initial claimant is, and base credibility on the identity of that person.
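A toy sketch of that idea with an Ed25519 key pair (using the pyca/cryptography package; the filename and the trust model are illustrative, not any existing standard):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The initial claimant (e.g. the camera or its owner) holds the private key.
    claimant_key = Ed25519PrivateKey.generate()
    public_key = claimant_key.public_key()

    footage = open("clip.mp4", "rb").read()   # hypothetical source material
    signature = claimant_key.sign(footage)    # attached at publication time

    # Anyone with the public key can check both integrity and origin,
    # then base credibility on who that key belongs to.
    try:
        public_key.verify(signature, footage)  # raises if a single byte changed
        print("matches the claimant's key; judge their credibility")
    except InvalidSignature:
        print("altered, or not from the claimed source")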
It was always easy to fake photos too. Just organize the scene, or selectively frame what you want. There is no such thing as any piece of media you can trust.
The construction workers having lunch on the girder in that famous photo were in fact about four feet above a safety platform; it's a masterpiece of framing and cropping. (Ironically the photographer was standing on a girder out over a hundred stories of nothing).
My favorite theory about those subreddits is that it's the AI companies getting free labeling from (supposed) authentic humans so they can figure out how to best tweak their models to fool more and more people.
a reliable giveaway for AI generated videos is just a quick glance at the account's post history—the videos will look frequent, repetitive, and lack a consistent subject/background—and that's not something that'll go away when AI videos get better
AI is capable of consistent characters now, yes, but the platforms themselves provide little incentive to do so. TikTok/Instagram Reels are designed to serve recommendations, not a user-curated feed of people you follow, so consistency is not needed for virality.
Sort by oldest. If the videos go back more than 3 years watch an old one. So many times the person narrating the old vids is nothing like the new vids and a dead ringer for AI. If the account is less than a year old, 100% AI.
Content farms, whether AI-generated or not, have an incentive to pump out low-quality content at high volume. Most of their content, even when it involves a human narrator, is heavily packed with AI-generated media.
I use them – well, mostly en dashes because that's the custom where I'm from – because I'm a bit of a typography nerd and have grown to dislike the barrenness of ASCII.
In this case Apple has cared about typography since its very beginning. Steve Jobs obsessed over it. The OS also replaces simple quotes with fancier ones.
I do the same on my websites. It's embedded into my static site generator.
Not long ago, a statistical study found that AI almost always has an 'e' in its output. It is a firm indicator of AI slop. If you catch a post with an 'e', pay it no mind: it's probably AI.
Uh-oh. Caught you. Bang to rights! That post is firmly AI. Bad. Nobody should mind your robot posts.
I'm incredibly impressed that you managed to make that whole message without a single usage of the most frequently used letter, except in your quotations.
Such omission is a hobby of many WWW folk. I can, in fact, think back to finding a community on R*ddit known as "AVoid5", which had this trial as its main point.
I did ask G'mini for synonyms. And to do a cursory count of e's in my post. Just as a 2nd opinion. It found only glyphs with quotation marks around it. It graciously put forward a proxy for that: "the fifth letter".
It's not oft that you run into such alluring confirmation of your point.
My first post took around 6 min & a dictionary. This post took 3. It's a quick skill.
No LLMs. Ctrl+f shows you all your 'e's without switching away from this tab. (And why count it? How many is not important, you can simply look if any occur and that's it)
This is not the right thing to take away from this. This isn't about one group of people wanting to be angry. It's about creating engagement (for corporations) and creating division in general (for entities intent on harming liberal societies).
In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.
We have social networks like Facebook that require people to be angry, because anger generates engagement, and engagement generates views, and views generate ad impressions. We have outside actors who benefit from division, so they also fuel that fire by creating bot accounts that post inciting content. This has nothing to do with racism or people on one side. One second, these outside actors post a fake incident of a racist cop to fire up one side, and the next, they post a fake incident about schools with litter boxes for kids who identify as pets to fire up the other side.
Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.
> Until you realize that this is the root of the problem, that the whole system is built to make people angry at each other, you are only contributing to the anger and division.
It's not built to make people angry per se - it's built to optimise for revenue generation - which just so happens to mean content that makes people angry.
People have discovered that creating and posting such content makes them money, and the revenue is split between themselves and the platforms.
In my view, if the platforms can't tackle this problem then the platforms should be shut down - promoting this sort of material should be illegal, and it's not an excuse to say "our business model won't work if we are made responsible for the things we do."
i.e. while it turns out you can easily scale one side of publishing (putting stuff out there and getting paid by ads), you can't so easily scale the other side of publishing, which is being responsible for your actions. If you haven't solved both sides, you don't have a viable business model, in my view.
In social networks, revenue is enhanced by stickiness.
Anger increases stickiness. Once one discovers there are other people on the site, and they are guilty of being wrong on the internet, one is incentivized to correct them. It feels useful because it feels like you're generating content that will help other people.
I suspect the failure of the system that nobody necessarily predicted is that people seem to not only tolerate, but actually like being a little angry online all the time.
Sure - it's a mix - but to be honest I think it's over-emphasized, in that in the US most of that kind of money driving politics operates in plain sight.
For example, the 'Russian interference' in the 2016 US election, was I suspect mostly people trying to make money, and more importantly, was completely dwarfed by US direct political spending.
There is also a potentially equal, if not larger, problem in the politicisation of the 'anti-disinformation' campaigns.
To be honest, I'm not sure there is much of a difference between a grifter being directly paid to promote a certain point of view and somebody being paid indirectly (by ads).
In both cases, neither really believes in the political point they are making; they are just following the money.
> In fact, your comment is part of the problem. You are one of the people who want to be outraged. In your case, outraged at people who think racism is a problem. So you attack one group of people, not realizing that you are making the issue worse by further escalating and blaming actual people, rather than realizing that the problem is systemic.
I don't see anything like outrage in GP, just a vaguely implied sense of superiority (political, not racial!).
I agree with grandparent and think you have cause and effect backwards: people really do want to be outraged so Facebook and the like provide rage bait. Sometimes through algos tuning themselves to that need, sometimes deliberately.
But Facebook cannot "require" people to be angry. Facebook can barely even "require" people to log in; only those locked into the Messenger ecosystem.
I don't use Facebook but I do use TikTok, and Twitter, and YouTube. It's very easy to filter rage bait out of your timeline. I get very little of it, mark it "uninterested"/mute/"don't recommend channel" and the timeline dutifully obeys. My timelines are full of popsci, golden retrievers, sketches, recordings of local trams (nevermind), and when AI makes an appearance it's the narrative kind[1] which I admit I like or old jokes recycled with AI.
The root of the problem is in us. Not on Facebook. Even if it exploits it. Surfers don't cause waves.
No, they do not. Nobody[1] wants to be angry. Nobody wakes up in the morning and thinks to themselves, "today is going to be a good day because I'm going to be angry."
But given the correct input, everyone feels that they must be angry, that it is morally required to be angry. And this anger then requires them to seek out further information about the thing that made them angry. Not because they desire to be angry, but because they feel that there is something happening in the world that is wrong and that they must fight.
I disagree. Why are some of the most popular subreddits things like r/AmITheAsshole, r/JustNoMIL, r/RaisedByNarcissists, r/EntitledPeople, etc.: forums full of (likely fake) stories of people behaving egregiously, with thousands of outraged comments throwing fuel on a burning pile of outrage: "wow, your boyfriend/girlfriend/husband/wife/father/mother/FIL/MIL/neighbor/boss/etc. is such an asshole!" Why are advice/gossip columns that provide outlets for similar stories so popular? Why is reality TV full of the same concocted situations so popular? Why is people's first reaction to outrageous news stories to bring out the torches and pitchforks, rather than trying to first validate the story? Why can an outrageous lie travel halfway around the world while the truth is still getting its boots on?
As someone who used to read some of these subreddits before they became swamped in AI slop, I did not go there to be angry but to be amused and/or find like-minded people.
I suppose the subtlety is that people want to be angry if (and only if) reality demands it.
My uneducated feeling is that, in a small society, like a pre-civilisation tribal one where maybe human emotions evolved, this is useful because it helps enact change when and where it's needed.
But that doesn't mean that people want to be angry in general, in the sense that if there's nothing in reality to be angry about then that's even better. But if someone is presented with something to be angry about, then that ship has sailed so the typical reaction is to feel the need to engage.
>in a small society, like a pre-civilisation tribal one where maybe human emotions evolved, this is useful because it helps enact change when and where it's needed
Yes, I think this is exactly it. A reaction that may be reasonable in a personal, real-world context can become extremely problematic in a highly connected context.
It's both that, as an individual, you can be inundated with things that feel like you have a moral obligation to react. On the other side of the equation, if you say something stupid online, you can suddenly have thousands of people attacking you for it.
Every single action seems reasonable, or even necessary, to each individual person, but because everything is scaled up by all the connections, things immediately escalate.
The issue right now is that the only things you can do to protect yourself from certain kinds of predators are literally the things that will get you blown up on social media when taken out of context.
There's a difference between wanting to be angry and feeling that anger is the correct response to an outside stimulus.
I don't wake up thinking "today I want to be angry", but if I go outside and see somebody kicking a cat, I feel that anger is the correct response.
The problem is that social media is a cat-kicking machine that drags people into a vicious circle of anger-inducing stimuli. If people think that every day people are kicking cats on the Internet, they feel that they need to do something to stop the cat-kicking; given their agency, that "something" is usually angry responses and attacks, which feeds the machine.
Again, they do not do that because they want to be angry; most people would rather be happy than angry. They do it because they feel that cats are being kicked, and anger is the required moral response.
And if you seek out (and push ‘give me more’ buttons on) cat kicking videos?
At some point, I think it’s important to recognize the difference between revealed preferences and stated preferences. Social media seems adept at exposing revealed preferences.
If people seek out the thing that makes them angry, how can we not say that they want to be angry? Regardless of what words they use.
And for example, I never heard anyone who was a big Fox News, Rush Limbaugh, or Alex Jones fan say they wanted to be angry or paranoid (to be fair, this was pre-Trump and a while ago), yet every single one of them I saw got angry and paranoid after watching, if you paid any attention at all.
>If people seek out the thing that makes them angry, how can we not say that they want to be angry?
Because their purpose in seeking it out is not to get angry, it's to stop something from happening that they perceive as harmful.
I doubt most people watch Alex Jones because they love being angry. They watch him because they believe a global cabal of evildoers is attacking them. Anger is the logical consequence, not the desired outcome. The desired outcome is that the perceived problem is solved, i.e. that people stop kicking cats.
The reason they feel that way (more) is because of those videos. Just like most people who watch Alex Jones probably didn’t start by believing all the crazy things.
We can chicken/egg about it all day, but at some point if people didn’t want it - they wouldn’t be doing it.
Depending on the definition of ‘want’ of course. But what else can we use?
I don’t think anyone would disagree that smokers want cigarettes, eh? Or gamblers want to gamble?
I think most people have experienced relatives of theirs falling down these rabbit holes. They didn't seek out a reason to be angry; they watched one or two episodes of these shows because they were on Fox, or because a friend sent it, or because they saw it recommended on Facebook. Then they became angry, which made them go back because now it became a moral imperative to learn more about how the government is making frogs gay.
None of these people said to themselves, "I want to be angry today, and I heard that Alex Jones makes people angry, therefore I will watch Alex Jones."
A lot of people really do, and it predates any sort of media too. When they don't have outrage media they form gossip networks so they can tell each other embellished stories about mundane matters to be outraged and scandalized about.
> When they don't have outrage media they form gossip networks so they can tell each other embellished stories about mundane matters to be outraged and scandalized about.
But again in this situation the goal is not to be angry.
This sort of behaviour emerges as a consequence of unhealthy group dynamics (and to a lesser extent, plain boredom). By gossiping, a person expresses understanding of, and reinforces, their in-group’s values. This maintains their position in the in-group. By embellishing, the person attempts to actually increase their status within the group by being the holder of some “secret truth” which they feel makes them important, and therefore more essential, and therefore more secure in their position. The goal is not anger. The goal is security.
The emotion of anger is a high-intensity fear. So what you are perceiving as “seeking out a reason to be angry” is more a hypervigilant scanning for threats. Those threats may be to the dominance of the person’s in-group among wider society (Prohibition is a well-studied historical example), or the threats may be to the individual’s standing within the in-group.
In the latter case, the threat is frequently some forbidden internal desire, and so the would-be transgressor externalises that desire onto some out-group and then attacks them as a proxy for their own self-denial. But most often it is simply the threat of being wrong, and the subsequent perceived loss of safety, that leads people to feel angry, and then to double down. And in the world we live in today, that doubling down is more often than not rewarded with upvotes and algorithmic amplification.
I disagree. In these gossip circles they brush off anything that doesn't make them upset, eager to get to the outrageous stuff. They really do seek to be upset. It's a pattern of behavior which old people in particular commonly fall into, even in the absence of commercialized media dynamics.
> In these gossip circles they brush off anything that doesn't make them upset
Things that they have no fear about, and so do not register as warranting brain time.
> eager to get to the outrageous stuff.
The things which are creating a feeling of fear.
It’s not necessary for the source of a fear to exist in the present moment, nor for it to even be a thing that is real. For as long as humans have communicated, we have told tales about things that go bump in the dark. Tales of people who, through their apparent ignorance of the rules of the group, caused the wrath of some spirits who then punished the group.
It needn’t matter whether a person’s actions actually caused a problem, or whether it caused the spirits to be upset, or indeed whether the spirits actually ever existed at all. What matters is that there is a fear, and there is a story about that fear, and the story reinforces some shared group value.
> It's a pattern of behavior which old people in particular commonly fall into,
Here is the fundamental fear of many people: the fear of obsolescence, irrelevance, abandonment, and loss of control. We must adapt to change, but also often have either an inability or unwillingness to do so. And so the story becomes it is everyone else who is wrong. Sometimes there is wisdom in the story that should not be dismissed. But most often it is just an expression of fear (and, again, sometimes boredom).
What makes this hypothesis seem so unbelievable? Why does it need to be people seeking anger? What would need to be true for you to change your opinion? This discussion thread is old, so no need to spend your energy on answering if you don’t feel strongly about it. Just some parting questions to mull over in the bath, perhaps.
Thank you for raising this idea originally, and for engaging with me on it.
The opposite question - why so insistent that people wouldn’t seek it out, when behavior pretty strongly shows it?
Why are you so insistent that people don’t do what they clearly seem to do?
Why is that hypothesis so unbelievable?
Is it the apparent lack of (actual) agency for many people? Or the concerning worry that we all could be steering ourselves to our own dooms, while convincing ourself we aren’t?
> Why are you so insistent that people don’t do what they clearly seem to do?
I’m not rejecting the idea that people fixate on stimuli that produce anger. The question is why they do that, and the answer is unlikely to be “people just want to be angry”.
> Why is that hypothesis so unbelievable?
Because it runs counter to the best available literature I am aware of and is a conclusion based on a superficial observation which has no underlying theoretical basis, whereas the hypothesis I present is grounded in some amount of actual science and evidence. Even the superficial Wikipedia article on anger emphasises the role of threat response here. Mine isn’t, as far as I can tell, some fringe position; it is very much in line with the research. It is also in line with my personal experience. “People just want to be angry” is not.
It is important to understand that the things people try to avoid through gossip, exaggeration, and expressions of anger are not all mortal threats. They can also be very mundane things like not wanting to eat something that they just think tastes bad. So make sure not to take the word “threat” too narrowly when considering this hypothesis.
I don’t have any skin in the game here other than an interest in the truth of the matter and a willingness to engage since I find this sort of thing both interesting and sociologically very important. If you or anyone have some literature to shove in my face that offers some compelling data in support of the “people love feeling angry” hypothesis, then sure, I would accept that and integrate that into my understanding of human behaviour.
You may be vastly overestimating average media competence. This is one of those things where I'm glad my relatives are so timid about the digital world.
Many people seek being outraged. Many people seek to have awareness of truth. Many people seek getting help for problems. These are not mutually exclusive.
Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.
In various forms, with various levels of harm, and with various levels of evidence available.
(Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)
Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.
Is a video documenting racist behavior a racist or an anti-racist video? Is a faked video documenting racist behavior (that never happened) a racist or an anti-racist video? And is the act of faking a video documenting racist behavior (that never happened) racist or anti-racist behavior?
A video showing racist behavior is racist and anti-racist at the same time. A racist will be happy watching it, and an anti-racist will forward it to further their anti-racist message.
Faking a racist video that never happened is, first of all, faking. Second, it's the same: racist and anti-racist at the same time. Third, it's falsifying the prevalence of occurrence.
If you add to the video a disclaimer: "this video has been AI-generated, but it shows events that happen all across the US daily", then there's no problem. Nobody is being lied to about anything. The video shows the message; it's not faking anything. But when you pass off a fake video as a real occurrence, you're lying, and it's as simple as that.
Can a lie be told in good faith? I'm afraid that not even philosophy can answer that question. But it's really telling that leftists are sure about the answer!
That's not necessarily just a leftist thing. Plenty of politicians are perfectly fine with saying things they know are lies for what they believe are good reasons. We see it daily with the current US administration.
Well yes, that's what he wrote, but that's like saying: stealing can be done for a variety of reasons, including by someone who intends the theft to be discovered? Killing can be done for a variety of reasons, including by someone who intends the killing to be discovered?
I read it as "producing racist videos can sometimes be used in good faith"?
They're saying one example of a reason someone could fake a video is so it would get found out and discredit the position it showed. I read it as them saying that producing the fake video of a cop being racist could have been done to discredit the idea of cops being racist.
There are significant differences between how the information world and the physical world operate.
Creating all kinds of meta-levels of falsity is a real thing, with multiple lines of objective (if nefarious) motivation, in the information arena.
But even physical crimes can have meta information purposes. Putin for instance is fond of instigating crimes in a way that his fingerprints will inevitably be found, because that is an effective form of intimidation and power projection.
I think they’re just saying we should interpret this video in a way that’s consistent with known historical facts. On one hand, it’s not depicting events that are strictly untrue, so we shouldn’t discredit it. On the other hand, since the video itself is literally fake, when we discredit it we shouldn’t accidentally also discredit the events it’s depicting.
So make fake videos of events that never actually happened, because real events surely did that weren’t recorded? Or weren’t viral enough? Or something?
How about this question: Can generating an anti-racist video be justified as a good thing?
I think many here would say "yes!" to this question, so can saying "no" be justified by an anti-racist?
Generally I prefer questions that do not lead to thoughts being terminated. Seek to keep a discussion going, not stop it.
On the subject of this thread, these questions are quite old and are related to propaganda: is it okay to use propaganda if we are the Good Guys, even if, by doing so, we weaken our people and make them more susceptible to propaganda from the Bad Guys? Every single one of our nations and governments thinks yes, it's good to use propaganda.
Because that's explicitly what happened during the rise of Nazi Germany; the USA had an official national programme of propaganda awareness and manipulation resistance which had to be shut down because the country needed to use propaganda on their own citizens and the enemy during WW2.
So back to the first question: it's not the content (whether it's racist or not), it's the effect: would producing fake content reach a desired policy goal?
Philosophically it's truth vs lie, can we lie to do good? Theologically in the majority of religions, this has been answered: lying can never do good.
Game theory tells us that we should lie if someone else is lying, for some time. Then we should try trusting again. But we should generally tell the truth at the beginning; we sometimes lose to those who lie all the time, but we can gain more than the eternal liar if we encounter someone who behaves just like us. Assuming our strategy is in the majority, this works.
But this is game theory, a dead and amoral mechanism that is mostly used by the animal kingdom. I'm sure humanity is better than that?
Propaganda is war, and each time we use war measures, we're getting closer to it.
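To make the game-theory aside concrete, a tiny sketch of that strategy (tit-for-tat in an iterated prisoner's dilemma; the payoff numbers are the conventional textbook ones, purely illustrative):

    def tit_for_tat(opponent_history):
        """Tell the truth first; afterwards mirror the opponent's last move."""
        if not opponent_history:
            return "truth"
        return opponent_history[-1]

    PAYOFF = {  # (my move, their move) -> my score
        ("truth", "truth"): 3,
        ("truth", "lie"): 0,
        ("lie", "truth"): 5,
        ("lie", "lie"): 1,
    }

    def play(strategy_a, strategy_b, rounds=10):
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a = strategy_a(hist_b)
            b = strategy_b(hist_a)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    always_lie = lambda history: "lie"

    print(play(tit_for_tat, always_lie))   # (9, 14): we lose some to the eternal liar
    print(play(tit_for_tat, tit_for_tat))  # (30, 30): honest pairs do best overall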
I like that saying. You can see it all the time on Reddit where, not even counting AI generated content, you see rage bait that is (re)posted literally years after the fact. It's like "yeah, OK this guy sucks, but why are you reposting this 5 years after it went viral?"
Rage sells. Not long after the EBT changes, there was a rash of videos of people playing the welfare recipient that people against welfare imagine in their heads: women, usually black, speaking improperly about how the taxpayers need to take care of their kids.
Not sure how I feel about that, to be honest. On one hand, I admire the hustle for clicks. On the other, too many people fell for it and probably never knew it was a grift, making all recipients look bad. I only happened upon them while researching a bit after my own mom called me raging about it and sent me the link.
Not AI. Not bots. Not Indians or Pakistanis. Not Kremlin or Hasbara agents. All the above might comprise a small percentage of it, but the vast majority of the rage bait and rage bait support we’ve seen over the past year+ on the Internet (including here) is just westerners being (allowed and encouraged by each other to be) racist toward non-whites in various ways.
that sounds like one of the worst heuristics I've ever heard, worse than "em-dash=ai" (em-dash equals ai to the illiterate class, who don't know what they are talking about on any subject and who also don't use em-dashes; literate people do use em-dashes and also know what they are talking about. this is called the Dunning-Em-Dash Effect, where "dunning" refers to the payback of intellectual deficit, whereas the illiterate think it's a name)
The em-dash=LLM thing is so crazy. For many years Microsoft Word has AUTOCORRECTED the typing of a single hyphen to the proper syntax for the context -- whether a hyphen, en-dash, or em-dash.
I would wager good money that the proliferation of em-dashes we see in LLM-generated text is due to the fact that there are so many correctly used em-dashes in publicly-available text, as auto-corrected by Word...
Which would matter, but the entry box in no major browser does this.
The HN text area does not insert em-dashes for you and never has. On my phone keyboard it's a very deliberate action to add one (symbol mode, long-press hyphen, slide my finger over to em-dash).
The entire point is that it's contextual: em-dashes appearing where no accommodations make them likely.
Yeah, I get that. And I'm not saying the author is wrong, just commenting on that one often-commented-upon phenomenon. If text is being input to the field by copy-paste (from another browser tab) anyway, who's to say it's not (hypothetically) being copied and pasted from the word processor in which it's being written?
Well, it's probably lower false positive than en-dash but higher false negative, especially since AI-generated video, even when it has audio, may not have AI-generated audio. (Generation conditioned on a text prompt, starting image, and audio track is among the common modes for AI video generation.)
Thank you for saving me the time writing this. Nothing screams midwit like "Em-dash = AI". If AI detection was this easy, we wouldn't have the issues we have today.
With the right context both are pretty good actually.
I think the emoji one is most pronounced in bullet point lists. AI loves to add an emoji to bullet points. I guess they got it from lists in hip GitHub projects.
The other one is not as strong, but if the "not X but Y" is somewhat nonsensical or unnecessary, that is a very strong indicator it's AI.
Similarly: "The indication for machine-generated text isn't symbolic. It's structural." I always liked this writing device, but I've seen people label it artificial.
When I see emojis in code, especially log statements, it is 100% giveaway AI was involved. Worse, it is an indicator the developer was lazy and didn't even try to clean up the most basic slop.
If nobody used em-dashes, they wouldn’t have featured heavily in the training set for LLMs. It is used somewhat rarely in informal digital prose (some people use it a lot, others not at all), but that’s not the same as being entirely unused generally.
That's the only way I know how to get an em dash. That's how I create them. I sometimes have to re-write something to force the "dash space <word> space" sequence in order for Word to create it, and then I copy and paste the em dash into the thing I'm working on.
Windows 10/11’s clipboard stack lets you pin selections into the clipboard, so — and a variety of other characters live in mine. And on iOS you just hold down -, of course.
Ctrl+Shift+U then 2014 (em dash) or 2013 (en dash) in Linux. Former academic here, and I use the things all the time. You can find them all over my pre-LLM publications.
I didn't know these fancy dashes existed until I read Knuth's first book on typesetting. So probably 1984. Since then I've used them whenever appropriate.
Because I could not stop for Death –
He kindly stopped for me –
The Carriage held but just Ourselves –
And Immortality.
We slowly drove – He knew no haste
And I had put away
My labor and my leisure too,
For His Civility –
Her dashes have been rendered as en dashes in this particular case rather than em dashes, but unless you're a typography enthusiast you might not notice the difference (I certainly didn't and thought they were em dashes at first). I would bet if I hunted I would find some places where her poems have been transcribed with em dashes. (It's what I would have typed if I were transcribing them).
Dickinson's dashes tended to vary over time, and were not typeset during her lifetime (mostly). Also, mid-19th century usage was different—the em-dash was a relatively new thing.
But many have built their writing habits around LaTeX typing, and a – or even an — are hardcoded into their text editors / operating systems, much like other correct diacritics and ligatures may be.
It's a band-aid solution, given that eventually AI content will be indistinguishable from real-world content. Maybe we'll even see a net of fake videos citing fake news articles, etc.
Of course there are still "trusted" mainstream sources, except they can inadvertently (or for other reasons) misstate facts as well. I believe it will get harder and harder to reason about what's real.
It's not really any different than stopping the sale of counterfeit goods on a platform. Which is a challenge, but hardly insurmountable, and the payoff from AI videos won't be nearly as good. You can make a few thousand a day selling knock-offs to a small number of people and get reliably paid within 72 hours. To make the same off of "content" you would have to get millions of views, and the payout timeframe is weeks if not months. YouTube doesn't pay you out unless you are verified, so ban people posting AI without disclosing it and the well will run dry quickly.
Well then email spam will never have an incentive. That is a relief! I was going to predict that someday people would start sending millions of misleading emails or texts!
It's not a band-aid at all. In fact, recognition is nearly always algorithmically easier than creation, which would mean detectors of AI fakes could have an inherent advantage over their creators.
I have no insight, but I assume they are doing it because they can use AI to make a few variations of a video and then automatically A/B test them to see which ones get more engagement, and then use that to make videos that are more engaging than what the author actually uploaded.
This is "innocent" if you accept that the author's goal is simplify to maximize engagement and YouTube is helping them do that. It's not if you assume the author wants users to see exactly what they authored.
Eventually it will make everyone say that videos are fake because nobody trusts videos anymore. We will ironically be back to something like the 40s where security cameras didn't exist and photography was rare and relatively expensive. A strange kind of privacy.
The problem’s gonna be when Google as well is plastered with fake news articles about the same thing. There’s very little to no way you will know whether something is real or not.
I fail to understand your worry. This will change nothing regarding some people’s tendency to foster and exploit negative emotions for traction and money. “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness? You worry about what could happen but everything already has happened.
> “AI makes it easier”, was it hard to stumble across out-of-context clips and photoshops that worked well enough to create divisiveness?
Yes. And I think this is what most tech-literate people fail to understand. The issue is scale.
It takes a lot of effort to find the right clip, cut it to remove its context, and even more effort to doctor a clip. Yes, you're still facing Brandolini's law[1], you can see that with the amount of effort Captain Disillusion[2] put in his videos to debunk crap.
But AI makes it 100× worse. First, generating an entirely convincing video only takes a little prompting and waiting; no skill is required. Second, you can do it at massive scale. You can easily make 2 AI videos a day. If you wanted to doctor videos "the old way" at this scale, you'd need a team of VFX artists.
I genuinely think that tech-literate folks, like myself and other hackernews posters, don't understand that significantly lowering the barrier to entry to X doesn't make X equivalent to what it was before. Scale changes everything.
Just have video cameras (mostly phones these days) record a cryptographic hash into the video that the video-sharing platforms read and display. That way we know a video was recorded with the uploader's camera and not just generated in software.
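A minimal sketch of the hashing half of that idea, assuming OpenSSL's EVP API (the device key, the container metadata format, and the platform-side verification are all hand-waved here):

    // Compute the SHA-256 digest a camera could sign and embed in the file.
    #include <openssl/evp.h>
    #include <cstdio>
    #include <vector>

    std::vector<unsigned char> sha256_file(const char* path) {
        std::vector<unsigned char> digest(EVP_MAX_MD_SIZE);
        unsigned int len = 0;
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return {};
        EVP_MD_CTX* ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), nullptr);
        unsigned char buf[4096];
        std::size_t n;
        while ((n = std::fread(buf, 1, sizeof buf, f)) > 0)
            EVP_DigestUpdate(ctx, buf, n);  // stream the video bytes through the hash
        EVP_DigestFinal_ex(ctx, digest.data(), &len);
        digest.resize(len);
        EVP_MD_CTX_free(ctx);
        std::fclose(f);
        return digest;
    }

Note that a bare hash only proves the bytes weren't altered after hashing; for the provenance claim to mean anything, the device would also have to sign the digest with a hardware-protected key, which is roughly the direction the C2PA content-provenance effort takes.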
There aren't that many big tech companies that are responsible for creating the devices people use to record and host the platforms and software that people use to play back the content.
It is that it is increasingly becoming indistinguishable from not-slop.
There is a different bar of believability for each of us. None of us are always right when we make a judgement. But the cues to making good calls without digging are drying up.
And it won’t be long before every fake event has fake support for diggers to find. That will increase the time investment for anyone trying to figure things out.
It isn’t a matter of things staying the same. Nothing has ever stayed the same. “Staying the same” isn’t a thing in nature and hasn’t been the trend in human history.
True for videos, but not true for any type of "text claim", which were already plenty 10 years ago and they were already hard to fight (think: misquoting people, strangely referring to science article, dubiously interpreting facts, etc.).
But I would claim that "trusting blindly" was much more common hundreds of years ago than it is now, so we might make some progress in fact.
If people learn to be more skeptical (because at some point they might get that things can be fake) it might even be a gain. The transition period can be dangerous though, as always.
But today’s text manufacturing isn’t our grand..., well, yesterday’s text manufacturing.
And pretty soon it will be very persuasive models with lots of patience and manufactured personalized credibility and attachment “helping” people figure out reality.
The big problem isn’t the tech getting smarter though.
It’s the legal and social tolerance for conflicts of interest at scale: unwanted (or dark-pattern-permissioned) surveillance, which is all but unavoidable, being used to manipulate feeds controlled by third parties (sitting between us and any organic, intentional contacts) toward influencing us in any way anyone will pay for. AI is just walking through a door that has been left wide open despite a couple decades of hard lessons.
Incentives, as they say, matter.
Misinformation would exist regardless, but we didn’t need it to be a cornerstone business model, with trillions of dollars of market cap unifying its globally coordinated, efficient, effective, and near-unavoidable continual insertion into our and our neighbors’ lives. With shareholders relentlessly demanding double-digit growth.
Doesn’t take any special game theory or economic theory to see the problematic loop there. Or to predict it will continue to get worse, and will be amplified by every AI advance, as long as it isn’t addressed.
How many people were getting quote tweeted on Twitter with deep fake porn of them before Grok could remove the clothes off your person with a simple prompt?
It's sad, yeah. And exhausting. The fact that you felt something was off and took the time to verify already puts you ahead of the curve, but it's depressing that this level of vigilance is becoming the baseline just to consume media safely.
Humans have strong self preservation instincts so at some point they'll start to ignore the free internet content or treat it as no more than fiction. There might come the time when people will demand personal meetings for everything, kind of a return to nature because they won't trust anything that can be machine generated. In way that's positive as current trends of weakening deeper human connections will be reversed thanks to AI :)
I think people will eventually learn to not trust any videos or stories they see online. I think the much bigger issue will be what happens when the LLM providers encode "alignment" into the models to insist on certain worldviews, opinions, or even falsehoods. Trust in LLMs and usage of them is increasing.
"Great question! No, we have always been at war with Eurasia. Can I help with anything else?"
"Eventually" does alot of heavy lifting in your prediction. This is like saying that if you feed poison to panda bears, they will eventually become immune to poison. On what timescale though? 8 million years from now, if the species survived, and if I've been feeding that poison to each and every generation... sure.
If I just feed it to 10 pandas, today, they're all dead.
And I suspect that humanity's position in this analogy is far closer to the latter than the former.
People stopped falling for photoshopped pictures and staged Chinese reels pretty quickly. I think people will pretty quickly decide anything outrageous is probably AI. And by people I mean the right half of the bell curve, which is all you can hope for. The left half will have problems in the world as they always have.
As others have noted, it’s a long-term trend, and I agree it’ll get worse. The Russian psy-ops campaigns from the Internet Research Agency during Trump’s first campaign are a notable entry: for example, they set up both fake far-left and fake far-right protest events on FB and used these as engagement bait on the right/left. (I’m sure the US is doing the same or worse to its adversaries too.)
Whatever fraction bots play overall, it has to be way higher for political content given the power dynamics.
Google is complicit in this sort of content by hosting it, no questions asked. They will happily see society tear itself apart as long as they are getting some ad revenue. Same as the other social media companies.
And yes, I know the argument about YouTube being a platform that can be used for good and bad. But Google controls and creates the algorithm and decides what is pushed to people. Make it a dumb video-hosting site like it used to be and I'll buy the "good and bad" angle.
I have a Samsung Neo G9 57", which is like half of an 8K monitor (or two 4K monitors side by side). That's sweet, since I use picture-by-picture mode to have my work computer on one side and my personal computer on the other.
I have the SAMSUNG 49" Odyssey Neo G9 G95NA - but despite spending literally dozens of hours - I was never able to get text to work clearly on it - either Mac or PC - tried both the DisplayPort, HDMI - tried all the (many) HDMI cables I had at home, and a couple expensive Monoprice cables, firmware updates, monitor resets, every setting I could figure - no luck. Text is just ... fuzzy in a way that it isn't with any other monitor I've ever owned - kind of a deal breaker when I spend all day in tmux.
Is it an OLED display? It’s likely that the subpixel rendering does not match the physical subpixel layout, so the color fringes land in the wrong locations, making the text worse instead of better. (In the case of Macs, Apple removed subpixel rendering entirely some years back, so there is no solution whatsoever; macOS on standard-density displays always looks like shit.)
I think the real issue is that for the powers that be, inflation is seen as either neutral or a good thing. The only people it hurts are the working class, and the blame is nebulous. So it is used as a tool to increase taxes without changing laws, lower the cost of debt, and cut labor costs, since workers don't get pay raises commensurate with inflation. I think it is a trick played upon the working class to screw them over in the long term, while the wealthy are protected because their assets simply go up in value with inflation. I think the target inflation rate should be 0%, not 2%. I simply don't believe the justification for the 2% target.
We're well above 2% anyway, and I doubt they will ever hit that again. They are already having to cut rates because the job market is frozen, and that will increase inflation pressure.
I track my spending each year, and my personal actual inflation rate has averaged about 4.5% over the past 5 years. And I'm pretty low income; my spending is all core stuff.
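Back-of-envelope, taking the 4.5% figure at face value and assuming simple annual compounding:

    1.045^5 ≈ 1.246, i.e. roughly a 24.6% cumulative price increase over the five years.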
> Consumer PCs and laptops spend most of their time idle
Not when Windows gets its grubby mitts on them. I will frequently hear the fans spin up on my Win10 laptop when it should be doing nothing, only to find the Windows Telemetry process or the Search Indexer using an entire fucking CPU core.
It's like with cars: better-performing drivetrains (et al.) are used to increase the power envelope instead of lowering fuel consumption, since that allegedly leads to more sales.
It really isn't. I have a pocket-sized device that would utterly thrash a supercomputer from a couple of decades ago, and it goes a day or two on a 20Wh battery. Going full blast it'll consume maybe 25-30W, which is less than the idle power consumption of far less powerful devices from not all that long ago.
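Rough arithmetic behind that claim, taking the comment's figures at face value:

    20 Wh over ~48 h of mixed use  ->  ~0.4 W average draw
    20 Wh at a sustained 25-30 W   ->  only 40-48 minutes at full blast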
Incidentally, cars are also a lot more fuel efficient these days than they used to be.
Well technically, this is just a philosophical point, but the Bill of Rights is supposed to protect _natural_ rights that apply to everyone regardless of where they live. So in theory the UK can, and does, routinely violate people's rights.
While I am not particularly interested in the design, I am intrigued by the idea of making your own monitor. I have had ideas before about features I would like in a monitor. Are there boards out there that are easy to hack on, to add firmware features etc.?
The whole thing is a joke, but presented in a serious manner. The idea is that if you know your program's input at compile time, you can turn everything into a constexpr that gets evaluated at compile time, so your program is "run" by the compiler instead of at run time. So he built a "runtime" that is actually run by the compiler around this idea, for fun.
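A tiny sketch of the trick (my own illustration, not the author's code):

    // With the "input" fixed at build time, the whole computation folds
    // into a constant; the static_assert forces compile-time evaluation.
    constexpr int input[] = {1, 2, 3, 4};  // the program's "input"

    constexpr int sum(const int (&xs)[4]) {
        int total = 0;
        for (int x : xs) total += x;  // this loop runs inside the compiler
        return total;
    }

    static_assert(sum(input) == 10, "evaluated entirely by the compiler");

    int main() {}  // nothing left to do at run time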
> So he built a "runtime" that is actually run by the compiler around this idea, for fun.
Funnily enough, enough of C++ is constexpr-able that it was the driving force for compile-time reflection in C++[1], which is not unlike what the author has done.
Although the new syntax is much more readable than what the author chose to do with expression templates, it's still annoying, as is much of C++. But I still like it, so I am decidedly Stockholmed.