In these countries, SIM cards and cell phones are not so strictly linked to personal identity documents, so even if the chats are decrypted it doesn't help much.
What about location? Wasn't there a thing about WhatsApp encryption leaking GPS location or something?
Really sad to see humans being this nasty to each other, with technology as the enabler and enforcer, and also as the means around detection.
These scams are really a good excuse to force WhatsApp to do something about their technology. After all, they patented it (probably), so they own it and should do their best to ensure it isn't abused.
Myanmar has had an ongoing civil war for decades, so location is moot. There is no central authority with the ability to deal with these things. The scam centres can get a lot of freedom just by supplying tinned food and petrol to whichever group they are closest to.
There are 100+ formal languages in Myanmar, at least 100 unique ethnic groups, and over 150 armed combat groups. The ethnic diversity is very abrupt: people living 30 km apart can be so different that they can't communicate with each other at all. Foreign governments have almost zero influence on the ground.
Aeon[1] just published a piece on the topic. It discusses the victim versus villain aspect.
Having read that, it seems the only remedy would be a Chinese government intervention (as it seems to be Chinese criminal gangs that run these facilities). That intervention might be triggered by the loss of international image the government suffers from being associated with these scams.
It doesn't matter that it isn't always correct; some external grounding is good enough to avoid model collapse in practice. Otherwise training coding agents with RL wouldn't work at all.
I mean it in the sense that tokens that pass some external filter (even if that filter isn't perfect) are from a very different probability distribution than those that an LLM generates indiscriminately. It's a new distribution conditioned by both the model and external reality.
Model collapse happens when you train a model indefinitely on its own output, reinforcing the biases the model originally picked up. By repeating this process but adding a "grounding" step, you avoid training repeatedly on the same distribution. Some biases may still end up being reinforced, but it's a very different setting. In fact, we know it's completely different because this is what RL with external rewards fundamentally is: you train only on model output that is "grounded" with a positive reward signal (outputs with low reward get an effectively ~0 learning rate).
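To make that last point concrete, here is a minimal PyTorch sketch of the idea, not anyone's actual training pipeline: sample from the model, score each sample with an external checker, and let zero-reward samples contribute zero gradient. The external_reward function and the tiny linear "model" are placeholders I'm assuming purely for illustration.

    import torch
    import torch.nn.functional as F

    vocab_size, seq_len, batch = 32, 16, 8
    model = torch.nn.Linear(vocab_size, vocab_size)   # toy stand-in for a real LM
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def external_reward(tokens):
        # Hypothetical grounding signal, e.g. "does the generated code pass its tests".
        return float(tokens[-1] % 2 == 0)             # placeholder check, not a real verifier

    # 1. Sample trajectories from the model's own distribution.
    context = F.one_hot(torch.randint(vocab_size, (batch, seq_len)), vocab_size).float()
    logits = model(context)                           # (batch, seq_len, vocab)
    dist = torch.distributions.Categorical(logits=logits)
    samples = dist.sample()                           # (batch, seq_len)

    # 2. Ground each sample with the external filter.
    rewards = torch.tensor([external_reward(s) for s in samples])

    # 3. Train only on grounded samples: zero reward => zero gradient weight,
    #    i.e. low-reward outputs effectively get a ~0 learning rate.
    log_probs = dist.log_prob(samples).sum(-1)        # (batch,)
    loss = -(rewards * log_probs).mean()              # REINFORCE-style objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The point is just that the batch the optimizer actually learns from comes from the filtered distribution (model plus external reality), not from the raw model distribution.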
Oh interesting. I guess that means you need to deliberately select a grounding source with a different distribution. What sort of method would you use to compare distributions for this use case? Is there an equivalent to an F-test for high dimensional bit vectors?
Or they find a new feeding ground? Why does the universe bend to “badass penguins”?
The universe really does not care, in a “badass” way. Major league not caring.
Calling something "badass" is just our interpretation, mainly because our species has negatively affected most parts of the environment.
It’s us that are “badass” and don’t “get it” when it comes to nature and the environment.
As someone else points out, there is no such thing as a nihilist penguin; it's purely us putting a label on behaviour that we - once again - don't understand.
That debt clock looks like a Monopoly board. Let's be honest, the world economy has become one massive game of Monopoly, with big tech and big oil as the banks.
What would the founding fathers, the ghosts of the French Revolution or Plato say to that? Nothing, they’re all dead.
We should be doing the changing, not the long dead past.
There are different kinds of death, and Socrates' story of resolve still greatly affects people's ideas of free speech and due process within a just rule of law.
https://aeon.co/essays/we-cooperate-to-survive-but-if-no-one...
But cooperation only occurs when the entire group is at risk, and that isn't the case currently.