To what end? I think this has become the “feel good that I am doing something about it” approach but it literally has almost zero effect beyond creating rhetoric from the politicians.
You need to hold your political leaders responsible with your vote. Don’t just automatically vote for the politicians that are “saying” the right things. Find out what your representatives are “doing” and hold them responsible for their actions or more importantly, inactions.
Well markets are evaluated on a number of different metrics depending on what you’re trying to determine.
If you want to be pedantic about it and select a single metric, markets are evaluated on their Brier score or some other proper scoring rule, not raw accuracy.
However, I prefer calibration as a high level way to explain prediction market performance to people, as it’s more intuitive.
Yeah it's a good way to introduce the idea. But I don't think someone would really grasp it until they understand why both calibration and "discrimination" are necessary in determining if a prediction market is accurate.
I suspect you are arguing semantics, while the parent and grandparent focus on the nuance of what is ACTUALLY being measured. I say this because, while I have never used prediction markets, I briefly looked into them to see whether I could use them well. The question of accuracy came up, which is why I happen to align with the posters above.
Noob question from me: what’s the difference between accuracy and calibration? A well-calibrated market would be more accurate and vice versa, no?
Edit: just found the answer myself: “accuracy measures the percentage of correct predictions out of total predictions, while calibration assesses whether a prediction market's assigned probabilities align with the actual observed frequency of those outcomes”
Suppose there are 1000 events, 500 of which will have outcome A and 500 outcome B. If you predict a 50% chance of A for every event, you'll be perfectly calibrated. On the other hand, if you predict a 90% chance of a certain outcome for every event and you're right on 800 of them, you're not perfectly calibrated (90% predicted vs. 80% observed), but you have a lower Brier score (lower is better).
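A quick sketch to check the arithmetic in that example. The Brier score is just the mean squared error between the forecast probability and the 0/1 outcome, so both forecasters can be scored in a few lines (the event counts below are the ones from the example, not real data):

```python
# Brier score: mean squared error between forecast probability and
# the binary outcome (1 = event happened, 0 = it didn't). Lower is better.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Forecaster 1: 50% on every one of 1000 events; 500 resolve yes.
# Perfectly calibrated, but maximally uninformative.
f1 = [0.5] * 1000
o1 = [1] * 500 + [0] * 500

# Forecaster 2: 90% on every event; 800 resolve yes.
# Miscalibrated (90% predicted vs. 80% observed), but sharper.
f2 = [0.9] * 1000
o2 = [1] * 800 + [0] * 200

print(round(brier(f1, o1), 3))  # 0.25
print(round(brier(f2, o2), 3))  # 0.17  (= 0.8 * 0.1**2 + 0.2 * 0.9**2)
```

So the miscalibrated-but-sharper forecaster still wins on the Brier score, 0.17 vs. 0.25.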
A forecaster can be calibrated but assign almost all probabilities in the 40–60% range. This is not as useful as one assigning calibrated probabilities across the full range.
We try to measure the increased usefulness of the latter with proper scoring rules.
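To make that concrete, here is a hedged sketch with made-up numbers: two forecasters who are both perfectly calibrated on the same 1000 events, where one hedges at 50% and the other commits to 90%/10%. A proper scoring rule like the Brier score rewards the sharper one:

```python
# Brier score: mean squared error between forecast probability and outcome.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical events: 500 "likely yes" events (450 resolve yes) and
# 500 "likely no" events (50 resolve yes); 500/1000 resolve yes overall.
outcomes = [1] * 450 + [0] * 50 + [1] * 50 + [0] * 450

# Hedged forecaster: 50% on everything. Calibrated (50% predicted,
# 500/1000 = 50% observed), but says nothing about individual events.
hedged = [0.5] * 1000

# Sharp forecaster: 90% on the first group (90% observed), 10% on the
# second (10% observed). Also perfectly calibrated, and far more useful.
sharp = [0.9] * 500 + [0.1] * 500

print(round(brier(hedged, outcomes), 3))  # 0.25
print(round(brier(sharp, outcomes), 3))   # 0.09
```

Both are perfectly calibrated, yet the sharp forecaster scores 0.09 against the hedger's 0.25, which is exactly the extra usefulness the proper scoring rule is capturing.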