Interesting write-up. Did I miss it, or did the author not talk specifically about the implementation architecture for the redis instances, i.e. single instances vs. high-availability with replication? If you run replication then you at least need persistence for the master, and since any of the replicas can fail over and become master that basically means you need persistence for all of them.
The article didn't go into depth on the server architecture. But in general, all instances I wrote about were set up as a classical master <-> slave connection, sometimes with multiple slaves.
Related to persistence: it depends. It depends on the tooling you have around Redis.
E.g. if you have a master <-> slave setup with persistence enabled, you have multiple options: BGSAVE on the master only, BGSAVE on a slave only, a separate slave without traffic that exists only for BGSAVE (to avoid the forking issue; many companies do this), or BGSAVE on every node.
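For illustration, triggering a snapshot on that dedicated backup slave might look something like this with redis-py (the hostname is made up, adjust to your environment):

```python
import redis

# Assumed address of the backup-only slave that serves no client traffic.
backup_slave = redis.Redis(host="redis-backup-slave", port=6379)

# LASTSAVE reports the time of the last successful RDB save, so you can
# check afterwards that the snapshot actually completed.
before = backup_slave.lastsave()

# BGSAVE forks the process and writes the RDB snapshot in the background;
# running it only here keeps the fork's copy-on-write memory cost off the
# master and off the slaves that serve traffic.
backup_slave.bgsave()
```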
BGSAVE on every node can make sense if you have a running Sentinel that promotes a slave to the new master once the master fails.
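redis-py ships Sentinel support, so clients can follow that promotion automatically. A rough sketch (the Sentinel addresses and the "mymaster" service name are placeholders):

```python
from redis.sentinel import Sentinel

# Assumed Sentinel addresses; in practice you'd list all your sentinels.
sentinel = Sentinel([("sentinel-1", 26379), ("sentinel-2", 26379)],
                    socket_timeout=0.5)

# master_for() resolves whichever node Sentinel currently considers the
# master for the named service; after a failover it transparently points
# at the promoted slave.
master = sentinel.master_for("mymaster", socket_timeout=0.5)
master.set("key", "value")

# slave_for() distributes reads across the known slaves.
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)
print(replica.get("key"))
```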
IMO it depends. We are running master <-> slave environments with persistence enabled on every node.
Where we don't need persistence, we run only single instances that the client combines into a consistent hashing ring.
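A minimal sketch of what the client does there, assuming three standalone instances (real ring implementations are more sophisticated, but the idea is the same):

```python
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Place each node at several points on the ring so keys spread
        # evenly and only ~1/N of keys move when a node is added/removed.
        self._ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        idx = bisect.bisect(self._keys, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["redis-1:6379", "redis-2:6379", "redis-3:6379"])
print(ring.node_for("user:42"))  # -> the single instance that owns this key
```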
Does this answer your question? I'm happy to answer any questions left.
Why would you want all of the replicas to be able to fail over and become master?
A common pattern is to separate writes from reads: writes go to a small pool and are replicated out to everything, while reads are served by the replicas. Your read-only replicas will only ever be slaves, and do not need persistence.
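As a rough sketch with redis-py (hostnames are made up): writes hit the small master pool, reads hit any replica:

```python
import redis

# Writes go only to the (small) master pool.
master = redis.Redis(host="redis-master", port=6379)
master.set("session:abc", "payload")

# Reads go to any read-only replica the data is replicated to. Such a
# replica never needs persistence: if it dies, it just resyncs in full
# from the write pool on restart.
replica = redis.Redis(host="redis-replica-1", port=6379)
print(replica.get("session:abc"))  # may lag slightly behind the master
```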