At a company I work for, they made the same call around the same time. I believe it was the wrong one. Over time, requirements grew and we ended up bolting all kinds of Kafka features on top of the ZeroMQ thing, only much crappier. And in the meantime, Kafka no longer requires ZooKeeper and has become the de facto standard.
Of course, ZeroMQ and Kafka are two very different tools that serve different purposes, and one needs to understand the tradeoffs.
For us, delivering an on-prem commercial off-the-shelf solution, it was untenable to expect the customer's IT team to operate a separate, relatively huge piece of tech (remember, this was 2014). Maybe the heuristics would be different today with K8s and the advances in Kafka. But ZeroMQ as an in-process, distributed messaging layer is dead simple. If your use case requires anything more on top of that, it's on the team to design the right solution for things like resiliency, statefulness, etc.
For a high-throughput, distributed-compute-focused use case, I think ZMQ is still a great choice for coordinating execution. But Kafka and the other options on the market now are great choices for higher-order abstractions.
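To make the "dead simple, in-process" point concrete, here's a minimal sketch of ZeroMQ's PUSH/PULL pipeline pattern using pyzmq. The endpoint name "inproc://jobs" and the job payloads are illustrative, not from any real system discussed here; the same code works over tcp:// between processes with only the endpoint string changed.

```python
import zmq

ctx = zmq.Context.instance()

# In-process transport: no broker, no separate daemon for customer IT to operate.
# For inproc, the bind must happen before any connect.
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://jobs")

push = ctx.socket(zmq.PUSH)
push.connect("inproc://jobs")

# Coordinator side: push work items; PUSH load-balances across connected PULLs.
for i in range(3):
    push.send_string(f"job-{i}")

# Worker side: pull them off in order (single worker here, so FIFO).
jobs = [pull.recv_string() for _ in range(3)]
print(jobs)  # ['job-0', 'job-1', 'job-2']

push.close()
pull.close()
ctx.term()
```

The point being: that's the whole messaging layer. Resiliency, persistence, and replay are exactly the features you'd be bolting on yourself, which is the tradeoff against Kafka.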
I worked on a similar greenfield project around the same time; we evaluated RabbitMQ and Kafka and eventually went the RabbitMQ route. We were also developing an on-prem COTS product, and ZooKeeper played a big role in our decision to go with RMQ, not to mention that at the time CloudAMQP had a very generous free tier (not so much anymore, but it's still okay-bordering-on-decent). No single install would ever hit the scale where Kafka makes sense, so pushing 10 years later, I still think it was a good call.
A company I worked for had the same problem. Messages were being dropped, and no one on the backend team either knew how to investigate or wanted to. I was on the data team and we just had to deal with it.