I have been trying to start a cluster through Terra running the default Hail Docker image. I customized my environment during creation, selecting an 8-CPU machine as the Spark master and 5 workers, each also with 8 CPUs. Before starting the cluster, my configuration looked like this:
However, I seem to be running into memory issues when running Hail. When I went to reconfigure and add more workers, I saw that my Spark master had a different configuration than what I had selected when starting the cluster. My configuration looked like this:
Each time I tried to update the Spark master to an 8-CPU machine, nothing happened: I would click "next" and the cluster would never update. I verified through gcloud that this is not just an issue with the UI displaying the wrong configuration:
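In case it helps, this is roughly the check I ran; the cluster name and region below are placeholders for my actual values:

```shell
# Inspect the cluster's actual machine types as Dataproc sees them
# (MY-CLUSTER and us-central1 are placeholders).
gcloud dataproc clusters describe MY-CLUSTER --region us-central1 \
  --format="yaml(config.masterConfig.machineTypeUri, config.workerConfig.machineTypeUri, config.workerConfig.numInstances)"
```

The machine type reported for the master did not match what I selected in the Terra UI.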
Am I doing something wrong here, or am I misinterpreting this? Right now I'm unable to use Hail in a Jupyter notebook with some of my datasets. I'm not sure whether the issues are related, but those notebooks are hitting memory errors, so maybe this is the problem.
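If the underlying problem is just a Spark memory limit, one thing I could try is raising the memory settings when Hail initializes. A minimal sketch, assuming an 8-CPU machine with roughly 30 GB of RAM; the 20g values are my guesses, not anything from Terra's docs:

```python
# Hypothetical Spark overrides for Hail on an 8-CPU / ~30 GB machine.
# The values leave headroom for the OS and the Jupyter kernel.
spark_conf = {
    "spark.driver.memory": "20g",
    "spark.executor.memory": "20g",
}

# In the notebook this would be passed to Hail at startup, e.g.:
#   import hail as hl
#   hl.init(spark_conf=spark_conf)
```

But that feels like a workaround; it doesn't explain why the master's machine type changed out from under me.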