Use case for deleting a corrupted Kafka topic


A week ago we had a case in which the client could not delete a topic from the cluster (the Kafka version in this case was 1.0.0).
When the topic was listed, there were no leaders assigned for its partitions. It was pretty clear that the delete would not go through until we fixed that.
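For context, the missing leaders can be spotted with the standard topic describe tool (the ZooKeeper address and topic name below are placeholders):

```shell
# Describe the topic; on Kafka 1.0.0 the tooling still talks to ZooKeeper.
# A partition showing "Leader: -1" (or "none") has no broker leading it.
bin/kafka-topics.sh --describe \
  --zookeeper zk-host:2181 \
  --topic my-corrupted-topic
```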
First we tried a reassignment of partitions, on the idea that a leader would be assigned during the process. A JSON file was generated for the topic and executed with kafka-reassign-partitions.sh. After verification, we concluded that the reassignment had failed.
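A minimal sketch of such a reassignment, assuming a single-partition topic and broker IDs 1 and 2 (the topic name, broker IDs, and ZooKeeper address are placeholders):

```shell
# reassignment.json maps each partition to the brokers that should host it.
cat > reassignment.json <<'EOF'
{
  "version": 1,
  "partitions": [
    { "topic": "my-corrupted-topic", "partition": 0, "replicas": [1, 2] }
  ]
}
EOF

# Execute the reassignment, then check its status with the same tool.
bin/kafka-reassign-partitions.sh --zookeeper zk-host:2181 \
  --reassignment-json-file reassignment.json --execute

bin/kafka-reassign-partitions.sh --zookeeper zk-host:2181 \
  --reassignment-json-file reassignment.json --verify
```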
The next step was to delete the topic from the zookeeper meta-data cache.
We came to this conclusion based on the following article:

The command was

rmr /brokers/topics/[topic_name]

run from the ZooKeeper CLI script. Running this fixed our leader problem. It was strange, but very convenient.
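For completeness, the command is issued from inside the ZooKeeper CLI that ships with ZooKeeper (host and topic name are placeholders):

```shell
# Open the ZooKeeper shell...
bin/zkCli.sh -server zk-host:2181

# ...then, at the zkCli prompt, recursively remove the topic's metadata znode.
# (rmr is deprecated in newer ZooKeeper releases in favour of deleteall.)
rmr /brokers/topics/my-corrupted-topic
```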

There was one extra thing we needed to do. Version 1.0.0 has a bug that affects the cluster controller; the error found in the log was: Cached zkVersion [3] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)

We restarted the cluster to fix this, but since there was already an active delete request for the topic, a refresh of it was required.
To do that, you can run

rmr /admin/delete_topics/[topic_name]

After doing so, the topic will no longer appear as marked for deletion, but if you run the delete command again, it will mark it and the controller will actively start the deletion process.
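Re-issuing the delete is done with the usual topic tool (names are placeholders again):

```shell
# Mark the topic for deletion again; the controller picks it up
# and removes the topic from the brokers.
bin/kafka-topics.sh --delete \
  --zookeeper zk-host:2181 \
  --topic my-corrupted-topic
```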

That was also the case for us: after running the delete command again, the topic was removed from the brokers.