Spark VM autopause issue
I've been taking advantage of the preconfigured Hail runtime for notebooks, which has been lovely. I've noticed the runtime will pause after some amount of idle time, which is great because idle clusters can be expensive. However, I've never successfully recovered from a pause. I get an error that there was a problem with my VM, then I get directed to the runtime config popup window, and then Terra fails to create the new environment because the errored one is still there. So I go to the #clusters path and delete it manually, which can take a long time. The autopause feature isn't useful if it isn't recoverable. Should I just wait for the update that allows users to configure the autopause behavior (and turn it off), or is there a better solution?
Thanks for writing in! This seems like a bug rather than a missing feature, so we have moved your post to the General Discussion section. We are going to try to replicate this error and, if we can reproduce it, file a bug report. We will comment on this post with any updates!
In the meantime, I think this article might be helpful for you: Adjusting autopause for Cloud Environments using Swagger
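For anyone who wants to script this instead of using the Swagger UI by hand, here is a minimal sketch of building the request that updates a runtime's autopause setting. This is an assumption-heavy illustration, not an official recipe: the host, path, and JSON field names (`autopause`, `autopauseThreshold`) are taken on faith from the Leonardo Swagger page the article above points at, so verify them there before use.

```python
import json

# Assumption: the Leonardo host that backs Terra Cloud Environments.
BASE = "https://notebooks.firecloud.org"


def patch_autopause_request(project: str, runtime: str,
                            threshold_minutes: int):
    """Return (url, json_body) for a PATCH that updates autopause.

    The path and field names are assumptions based on the Leonardo
    Swagger docs; a threshold of 0 is assumed to disable autopause.
    """
    url = f"{BASE}/api/google/v1/runtimes/{project}/{runtime}"
    body = json.dumps({
        "autopause": threshold_minutes > 0,
        "autopauseThreshold": threshold_minutes,
    })
    return url, body


# Example: extend the idle timeout to two hours for a hypothetical runtime.
url, body = patch_autopause_request("my-terra-project", "saturn-1234", 120)
print(url)
print(body)
```

You would send this with an authenticated PATCH (e.g. `requests.patch(url, data=body, headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"})`, using a token from `gcloud auth print-access-token`).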
Thank you for bringing this to our attention!