I have a workflow I'm running on a set of ~1500 samples. The workflow failed, but Job Manager won't give me any information; I just get a 500 error. From the Swagger API page, it looks like the metadata is too big to return (>150MB). I'm only asking for includeKey=executionStatus, but apparently the response is still too big. Am I using that endpoint wrong? How can I get the smallest possible metadata? Even just knowing which tasks failed would be helpful.

Crawling through the execution directory in my bucket, everything I've seen has succeeded, and it looks like most if not all of the subworkflows succeeded. Is there a way to get the outputs of the subworkflows? If so, I can manually stitch them together.
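For reference, here's roughly what I'm trying, sketched as shell. The host and workflow IDs are placeholders (not my real values), and I'm assuming the standard Cromwell v1 metadata/outputs endpoints:

```shell
# Placeholders -- replace with your Cromwell host and workflow IDs.
CROMWELL="https://my-cromwell.example.org"
WF_ID="00000000-0000-0000-0000-000000000000"

# Trimmed metadata request: keep only executionStatus, and don't expand
# subworkflows inline (inlined subworkflow metadata is what inflates the payload).
URL="${CROMWELL}/api/workflows/v1/${WF_ID}/metadata?includeKey=executionStatus&expandSubWorkflows=false"
echo "GET ${URL}"
# curl -s "${URL}"

# If a subworkflow's ID is known (with expandSubWorkflows=false it appears as
# subWorkflowId in the parent's call metadata), its outputs can be fetched directly:
SUB_ID="11111111-1111-1111-1111-111111111111"
OUT_URL="${CROMWELL}/api/workflows/v1/${SUB_ID}/outputs"
echo "GET ${OUT_URL}"
# curl -s "${OUT_URL}"
```

The actual curl calls are commented out here since they need auth and a live server; the point is just the query parameters I'm using.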
The weird thing is that this workflow has succeeded before on these samples, so I know it's not the number of samples or the inputs that's causing the problem.