I'm using `MirroredStrategy` to perform multi-GPU training, and it doesn't appear to be sharding the data properly. How do you go about manually sharding data?
I know that I could use the `shard` method of a `tf.data.Dataset`, but for that I need access to the worker ID, and I can't figure out how to get it. How do I access the worker IDs?
`MirroredStrategy` runs on a single worker (for multiple workers there is `MultiWorkerMirroredStrategy`). Because it runs on only one worker, `MirroredStrategy` runs a single `Dataset` pipeline without any data sharding. At each step, `MirroredStrategy` requests one dataset element per replica.
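So with `MirroredStrategy` there is no worker ID to look up; manual sharding only becomes relevant under `MultiWorkerMirroredStrategy`, where each worker's index can be read from the `TF_CONFIG` environment variable. Below is a minimal sketch of that pattern, assuming `TF_CONFIG` is set the way `MultiWorkerMirroredStrategy` expects (the helper `get_worker_info` is my own illustration, not a TensorFlow API):

```python
import json
import os

import tensorflow as tf

def get_worker_info():
    # TF_CONFIG is the standard cluster description that
    # MultiWorkerMirroredStrategy reads; "task.index" is this worker's ID.
    tf_config = json.loads(os.environ["TF_CONFIG"])
    num_workers = len(tf_config["cluster"]["worker"])
    worker_id = tf_config["task"]["index"]
    return num_workers, worker_id

strategy = tf.distribute.MultiWorkerMirroredStrategy()
num_workers, worker_id = get_worker_info()

# Each worker keeps only its own disjoint slice of the data.
dataset = (
    tf.data.Dataset.range(1000)
    .shard(num_shards=num_workers, index=worker_id)
    .batch(32)
)
```

Alternatively, `tf.distribute.cluster_resolver.TFConfigClusterResolver` exposes the same information through its `task_id` property. With `MirroredStrategy` itself, passing the dataset through `strategy.experimental_distribute_dataset` is enough: the single input pipeline is split across the GPUs (replicas) automatically.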