maxBuffer length exceeded #273
Sounds like a permission issue. The issue below seems to be related.
@lomkju can you enable debug logging? Hopefully that will produce more info.
Adding the below step before the checkout step fixed the problem. It would be a hack until this is fixed or we know why this is happening. Maybe the Docker image checks out the branch as root?
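The step itself did not come through in this copy of the thread; a minimal sketch of a pre-checkout ownership fix of this kind (assuming the runner user has passwordless sudo, and that resetting workspace ownership is the intent) might look like:

```yaml
# Hypothetical sketch; not necessarily the exact step the commenter used.
- name: Fix workspace ownership before checkout
  run: |
    # Hand files left behind by root-owned containers back to the runner
    # user so the checkout step can delete them. Assumes passwordless sudo.
    sudo chown -R "$(id -un):$(id -gn)" "$GITHUB_WORKSPACE"

- uses: actions/checkout@v2
```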
The logs for the same. The list of permission-denied entries is huge, so I'm just pasting the top ones.
In my environment, the runner runs as an "action-runner" user. One of the actions uses a Docker instance where the user inside is root. When the Docker instance generates files, those are owned by root (even from the host's perspective).
We also see this when we try to rerun workflows (where there is a ton of output in some of those jobs). Clearing the logs before retrying helps, but I'm concerned that we might end up getting even more output from those builds and hit that limit again.
Hello, I am having the exact same issue using the game CI action on a self-hosted runner. However, this workaround
did not work for me; the cleanup lasts forever, as if blocked. Has anyone found another solution? (Cleaner, or at least one that would not block on my machine?) Thanks in advance!
I'm facing this error as well on a self-hosted runner, like @pixsaoul.
+1, we are also seeing this issue.
Is there any update on this issue? I am getting it on a self-hosted runner. Thanks!
We have seen this when there are permission issues. Tracking down what generated the files with the wrong permissions and making sure the ownership is correct fixed it for us. (We had some Docker-container-based processes running as root, which left behind some leftover temp files, which in turn prevented the following checkout action from cleaning up. Not quite sure why that shows up as "maxBuffer length exceeded", other than that we had a LOT of files with that problem.) We tracked down which job caused it by going through the _diag/*.log files to identify the problematic job.
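A related in-job check (a sketch of a different diagnostic than the _diag log search described above) is a step that lists everything in the workspace the runner user does not own:

```yaml
# Hypothetical diagnostic step; add it to the failing job before checkout.
- name: List files the runner user does not own
  run: |
    # Anything printed here is what a later checkout cannot delete.
    find "$GITHUB_WORKSPACE" ! -user "$(id -un)" -ls || true
```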
My solution has been to add a step in the pipeline executing an SSH command to delete the project folder in
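As a rough illustration of that approach (a sketch only; the host, user, and workspace path below are placeholders rather than values from this thread, and key-based SSH access to the runner host is assumed):

```yaml
# Hypothetical sketch; replace the host, user, and path with your own.
- name: Delete stale project folder over SSH
  run: |
    ssh -o StrictHostKeyChecking=no runner@build-host.example.com \
      'sudo rm -rf /home/runner/actions-runner/_work/<repo>/<repo>'
```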
Am also facing the "maxBuffer length exceeded" error.
Anyone have an elegant solution to this? |
We tried a few things here:
The only thing that I can think of is that more recent runners have the option to run a script between jobs, IIRC, and that might be your best bet to address the issue on your end. See here for more details: https://docs.github.com/en/actions/hosting-your-own-runners/running-scripts-before-or-after-a-job Ultimately, you will also need to sort out how to elevate permissions so you can change the ownership of the directory/ies where the checkout ends up back to the user that runs the runner.
Another temporary solution I found is to delete the folder and then recreate it; that works for me:

- name: cleanup # https://github.com/actions/checkout/issues/211
  run: |
    echo ${{ secrets.DEPLOY_PASSWORD }} | sudo -S rm -rf ${GITHUB_WORKSPACE}
    mkdir ${GITHUB_WORKSPACE}
I think the runner user should not have admin privileges via sudo; if it does, anyone with permission to modify the GitHub Actions pipeline could damage the self-hosted runner if they wanted to. If the checkout action is not able to remove existing files from the workspace, it could be because those files are not owned by the runner user. Maybe another GitHub Action that ran previously has a defect and creates files in the folder owned by the root user.
FYI: |
I was using the 'kube-tools' action (stefanprodan/kube-tools@v1), and the commands it ran were in fact executed with the root uid/gid outside of the Docker image, which surprised me. I ended up embedding a chown -R <user>:<group> to my GitHub Actions user/group, but it's not pretty for sure. I will investigate another solution now that I have my own self-hosted runners, like installing those tools locally and not depending on this action and the "leaking" root uid/gid, which, as you say, is a defect.
Just give permission with "sudo chown -R $USER:$USER $GITHUB_WORKSPACE" and then add a line to remove node_modules with sudo: "sudo rm -rf node_modules", e.g.:

name: Ci/Cd Pipeline
on:
jobs:
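Filled out a bit (a sketch only; the trigger and job layout are illustrative, not the commenter's actual pipeline):

```yaml
# Sketch: a pre-checkout ownership fix plus node_modules cleanup,
# wrapped in an illustrative workflow skeleton.
name: Ci/Cd Pipeline
on: [push]
jobs:
  build:
    runs-on: self-hosted
    steps:
      - name: Fix workspace ownership
        run: sudo chown -R $USER:$USER $GITHUB_WORKSPACE
      - name: Remove stale node_modules
        run: sudo rm -rf node_modules
      - uses: actions/checkout@v2
```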
Can we get the The goals...
Hi,
I get the below error when running checkout v2.
I always need to delete the contents of the repo for every build.
_work/api