I use Docker (version 24.0.2) with multiple running containers, without Kubernetes.
When I try to read logs from some of them using docker logs, I get various errors such as:
Error grabbing logs: invalid character '\\x00' looking for beginning of value
Error grabbing logs: invalid character 'l' after object key:value pair
err="Could not parse timestamp from 'Error': parsing time \"Error\" as \"2006-01-02T15:04:05.999999999Z07:00\": cannot parse \"Error\" as \"2006\""
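For context, the json-file log driver stores one JSON object per line, so a stray NUL byte before an object makes any JSON parser fail the same way. Here is a minimal reproduction using Python's parser as a stand-in for Docker's Go decoder (the exact error text differs, but the failure mode is the same):

```python
import json

# A log line corrupted by a leading NUL byte, like the ones in my files.
line = '\x00{"log":"hello\\n","stream":"stdout"}'
try:
    json.loads(line)
except json.JSONDecodeError as e:
    # The parser gives up at the NUL before the object even begins.
    print(e)
```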
Some of the Docker images we use come directly from Docker Hub, while others are custom-built Python images, and we haven't identified any discernible pattern to this behavior: some containers log fine while others don't.
There are also cases where I can retrieve logs using docker logs --tail XX but not using docker logs --since XX, and vice versa.
docker/daemon.json:
{
  "data-root": "/storage/docker",
  "insecure-registries": [###, ###],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
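To narrow down which containers are affected, I put together a small check (a sketch only; scan_log is my own helper name). It counts NUL bytes and unparseable lines in a container's *-json.log file, which live under the data-root above, i.e. /storage/docker/containers/<id>/:

```python
import json

def scan_log(path):
    """Count NUL bytes and lines that fail to parse as JSON.

    Docker's json-file driver writes one JSON object per line,
    so anything that doesn't parse line-by-line is corruption.
    """
    nul_bytes = 0
    bad_lines = 0
    with open(path, "rb") as f:
        for line in f:
            nul_bytes += line.count(b"\x00")
            try:
                json.loads(line)
            except ValueError:
                bad_lines += 1
    return nul_bytes, bad_lines
```

Running this over every *-json.log file quickly shows which containers' logs are damaged and which are clean.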
The issues I'm facing are:
- Promtail is unable to read logs from containers experiencing these errors.
- Very high CPU usage.
Would anyone happen to have insights or suggestions regarding these issues?
I attempted to remove the problematic characters from the log files with sed -i 's/\x00//g' ./*.log* (run against both the live log file and the rotated ones); however, this rendered the logs unreadable.
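I suspect part of the problem is that sed -i replaces the file (new inode) out from under the daemon's open handle. A safer variant I'm considering (a sketch only; sanitize_log is a hypothetical helper, and writers should be paused first) keeps a full backup of the corrupt file and rewrites only the parseable lines back through the same inode:

```python
import json
import shutil

def sanitize_log(path):
    """Drop corrupt lines from a json-file log, keeping a full backup.

    The salvageable lines are written back over the original *in place*
    (open mode "wb" truncates but keeps the same inode), so an open
    file handle on the log stays valid. Returns the number of lines kept.
    """
    backup = path + ".corrupt.bak"
    shutil.copy2(path, backup)      # keep everything, corrupt or not
    good = []
    with open(backup, "rb") as f:
        for line in f:
            try:
                json.loads(line)    # json-file format: one object per line
                good.append(line)
            except ValueError:
                pass                # corrupt line: survives only in the backup
    with open(path, "wb") as f:     # truncate + rewrite through the same inode
        f.writelines(good)
    return len(good)
```

This way nothing is actually lost: the untouched original stays in the .corrupt.bak file, and the live log only contains lines the JSON decoder can read.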
Many suggested solutions involve deleting either the ~/.docker/ directory or the corrupted log files. Unfortunately, deleting these files isn't an option, as I need to keep the logs.
Thank you!