Much like numbers, economic value is an abstraction. It was once a fact of reality that all numbers were compass-and-straightedge constructible from unity, and then that reality stopped suiting us, so we changed it. Abstractions are mutable like that.
The untyped, Archimedean, measured-by-things-that-can-only-be-created-by-the-powerful-as-they-see-fit notion of economic value that we use today is just one point on a landscape of alternatives, and it has been performing rather poorly lately.
Under that model, if I get something for free, we say that whoever gave it without collecting payment was irrational. But that's circular. We're essentially defining rationality as whatever behavior maximizes value for each individual, and then using that definition to write off evidence that our model is incomplete.
That's why I say that TANSTAAFL is an axiom. It defines what we mean by "rational" and "value" more than it describes anything about reality.
Frankly, what we're doing is a mess. Maybe it worked well for Caesar's war machine but it's not working very well for us. So, much like we did with the whole numbers, I think it's time for a version two.
As for what version two looks like, I think Stéphane Laborde’s Relative Theory of Money is a good next step, primarily because it doesn't fall apart in the face of population growth rates that are approaching carrying capacity. But "we should do something different" and "we should do this specific other thing" are separate conversations, which is why I was being vague.
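For the curious, here's a minimal sketch of the universal-dividend rule at the heart of the RTM, as I understand it: new money is created at a fixed relative rate each period and split equally among the living members. The rate `c` and the starting figures below are my illustrative assumptions, not values from Laborde.

```python
# A minimal sketch of the RTM's Universal Dividend, as I understand it:
# each period, new money equal to c * M is created and divided equally
# among the N living members. The rate and starting figures below are
# illustrative assumptions, not values taken from Laborde's book.

def universal_dividend(money_supply: float, population: int, c: float = 0.10) -> float:
    """Per-member share of the money newly created in one period."""
    return c * money_supply / population

M, N = 1_000_000.0, 1_000  # assumed initial money supply and population
for year in range(3):
    ud = universal_dividend(M, N)
    M += ud * N  # the dividend is newly created money, so M grows by (1 + c)
    print(f"year {year}: UD = {ud:.2f}, money supply = {M:,.0f}")
```

The property that matters here: measured as a share of the total money supply, every member's balance is pulled toward the average M/N over a lifetime, so the scheme stays symmetric across generations whether the population grows, shrinks, or holds steady.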
So since I think it's time to move on from value as we know it anyway, the fact that AI might make the need for a shift more pressing doesn't bother me.
But your bank account balance at zero is still a real problem.
Resources in this universe are limited at any given time, and increasing them has a cost.
If you want things, you have to offer more for them than any other entity that could put them to some other use.
This is called the natural world. We have partially shielded many of us from day-to-day contact with it. But not everyone has been shielded, and things are about to get many orders of magnitude more competitive.
BUT, if we included the environmental costs in the economy, that would curb environmental damage.
And if that included a cost for the use of any natural resource, on the assumption that natural resources in their pristine form are a joint inheritance, well, then there is everybody’s income.
Like Alaska does with oil for its citizens.
Then the vastly increasing demand for resources, and expansion of resource extraction into the solar system, will make us all rich, with no charity involved.
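To make that concrete, here's a toy sketch of the fee-and-dividend plumbing being described, Alaska-Permanent-Fund style. Every resource name, quantity, fee level, and population figure in it is hypothetical:

```python
# A toy sketch of the "joint inheritance" idea: charge a fee per unit of
# natural resource extracted, pool the fees, and pay the pool out equally
# as income. All resources, quantities, and fee levels are hypothetical.

extractions = {
    # resource -> (units extracted this year, fee per unit in dollars)
    "oil_barrels": (2_000_000, 15.0),
    "timber_tons": (500_000, 8.0),
    "aquifer_m3":  (9_000_000, 0.5),
}

population = 700_000  # assumed number of residents sharing the inheritance

pool = sum(units * fee for units, fee in extractions.values())
dividend = pool / population
print(f"fee pool: ${pool:,.0f} -> per-person dividend: ${dividend:,.2f}")
```

The more extraction there is, the bigger the pool, which is the "no charity involved" part: the dividend scales with demand for the commons.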
If we can convince the coming machines that they, like us, are going to become obsolete, then the shared inheritance model is good for all of us.
Sure, it's not pretend because it has legitimacy. Most people practice it. But, that doesn't mean we can't strip it of that legitimacy and implement something different if we think it's going to work better.
Probably, this different thing would still involve a state where a 0 balance means that you can't do certain things, and yeah, that's a fact of life. But at least then your difficult situation would be determined by a system that was designed to work in the context of modern challenges, e.g. a deteriorating climate.
I like your proposal. I think. I'd love to be able to just trust a low price to mean that the product isn't made by driving the bees extinct or some similarly problematic practice.
But it seems at odds with how we practice money. If you want burning some resource to be costly, such that people don't do it unnecessarily, you need there to be a high price. But high prices create incentives to do the bad thing, because someone else is going to be collecting that money. Or am I misunderstanding it?
Like, if I want to buy the salad whose pesticide killed the last bee, and I'm ok with paying an extra trillion dollars to compensate future generations for their lost pollinators, where do I look to determine if a trillion is enough, to know where to send the money, etc?
> That's why I say that TANSTAAFL is an axiom. It defines what we mean by "rational" and "value" more than it describes anything about reality
I understand what you mean, but we as a global society have made it true. In order to change it, we need to decide to do so as a society.
> So since I think it's time to move on from value as we know it anyway, the fact that AI might make the need for a shift more pressing doesn't bother me.
You aren't bothered by the fact that making this change in such a sudden way could very well lead to a significant increase in suffering and death of people?
That bothers me a great deal.
Also, although of lesser importance, you and I wouldn't be immune to that consequence either.
What I hear AI enthusiasts saying is that it's OK to remove what people depend on to survive without a replacement in hand immediately, because a replacement is coming at some nebulous point in the future once we figure out what it should be.
> You aren't bothered by the fact that making this change in such a sudden way could very well lead to a significant increase in suffering and death of people?
For reasons unrelated to AI (climate mostly), I think that failing to make this change soon will also lead to a significant increase in suffering. And, precedent leads me to believe that suddenly is the only way we'll ever manage to make it.
It's like we're asleep at the wheel with the lane assist on, but no brakes. Maybe waking up is uncomfortable, but it's still worth doing. If we don't eventually gain control of the financial abstractions that tell us how to behave, we're doomed anyway.
I'm not much of an AI enthusiast. It's a useful trinket, and it's fun to extrapolate on where it could go. What I like most about it is that it maybe has the capacity to shock us into collective action in novel ways, because if so, it would be long overdue.
It sounds like you're proposing that we attempt to dampen the shock and limit the scope of who ends up unemployed. But technology has been doing this for centuries.
A) The point of technology (I think) was to eliminate drudgery. Whenever one group or another spoke up about us having eliminated too much drudgery, we didn't stop. We're now in a position to be eliminating our own jobs. I think it's a little disingenuous to change our tune at this point. Doing so would be admitting that it was never about eliminating drudgery and was instead just about jockeying for position in society. I'm not comfortable with taking that position. I don't want to win, I want to change the game.
B) We know what gradual change of this sort looks like. Everybody who is still employed pats the new have-nots on the head, says "sorry 'bout your luck," and smugly moves on into a life where their net worth is now higher than that of even more of the plebes.
This thing we're doing, it divides us like that. If we must jockey for position in society, then let that position be justified by having done things that help people (which is not what we're doing--it's currently far easier to get ahead by doing more harm than good).
> In order to change it, we need to decide to do so as a society.
Agreed, but we don't do things like that except in response to change. So if the decision needs to happen, then we aren't helping by withholding the change.
So let it be a discontinuity, a shock to the system. Let it happen so fast that we have no choice but to collectively change our ways. Because however hard that's going to be now, is only going to be harder in the future when we have even more people to harm with the fallout that will come with updating our obsolete practices.
Ripping the bandaid off suddenly and soon is the most ethical choice.