This article is intended for NextGen.
Introduction
One of the biggest challenges when automating a code delivery process is giving users visibility into it. With that visibility, users can understand what caused a deployment to fail without needing someone (usually a DevOps engineer) to troubleshoot the issue for them.
Fortunately, Harness natively offers many observability tools, covering everything from manifest generation to application rollout. Sometimes, however, you need to access the logs from within the application's containers to find out what caused a crash. Because Harness does not offer this natively, this article covers how to achieve it.
How-To
- Add a Shell Script step after, or in parallel with, the Rolling Deployment step or the rollback section.
- Include the code below in the Shell Script step and customize it as needed:
# Use the Harness-provided kubeconfig so kubectl targets the deployment cluster
export KUBECONFIG="${HARNESS_KUBE_CONFIG_PATH}"

echo "Pods:"
kubectl get pods -n <+infra.namespace>
echo "Deployments:"
kubectl get deployments -n <+infra.namespace>

echo "Logs:"
# Collect the names of the pods matching the selector as a space-separated list
pods=$(kubectl get pods -n <+infra.namespace> --selector=<your-selector> --output=jsonpath='{.items[*].metadata.name}')
for pod in $pods
do
  echo "Logs for pod $pod"
  # Logs from the current containers (last 5 minutes)
  kubectl -n <+infra.namespace> logs "$pod" --all-containers=true --since=5m
  # Logs from previously terminated containers, if any (ignore the error when there are none)
  kubectl -n <+infra.namespace> logs "$pod" --all-containers=true --since=5m --previous || true
done
Replace --selector=<your-selector> in the code with a label selector that matches your application's pods, and that's it. Run your pipeline and review the step's log output.
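To see why the loop needs no extra parsing: the jsonpath query returns the matching pod names as a single space-separated string, and the unquoted $pods variable is word-split into one loop pass per pod. A minimal sketch of that iteration, with the kubectl call simulated and a hypothetical selector app=my-app (the pod names below are made up for illustration):

```shell
#!/bin/sh
# Simulated output of:
#   kubectl get pods -n <namespace> --selector=app=my-app \
#     --output=jsonpath='{.items[*].metadata.name}'
# jsonpath joins the pod names with single spaces.
pods="my-app-6d4f9c7b8-abcde my-app-6d4f9c7b8-fghij"

# Word-splitting on the unquoted variable yields one iteration per pod name
for pod in $pods
do
  echo "Logs for pod $pod"
done
```

In the real step, the two kubectl logs commands run inside this loop for each pod name.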
Conclusion
If you have any suggestions on how to improve this article, or helpful and specific examples that may be of use to others, please leave a comment with the information, as this document is intended to evolve over time.
If this article does not resolve your issue, don't hesitate to contact us at [email protected], or through the Zendesk portal in Harness SaaS.