# perf: don't retry getQueryResults as often with ambiguous errors in QueryJob.result() (#1903)
Labels: `api: bigquery`, `priority: p3`
> [!IMPORTANT]
> Do not change the retries for `jobs.getQueryResults` REST API calls in `RowIterator`, only in `QueryJob.result()`, where the HTTP status codes are ambiguous. Once we get to `RowIterator`, we know the job has succeeded and these errors cease to be ambiguous.

**Is your feature request related to a problem? Please describe.**
When a job fails due to quota issues, the `jobs.getQueryResults` BigQuery REST API translates the job's failure status into a failure HTTP code. This means that exceptions like `google.api_core.exceptions.TooManyRequests` are ambiguous: the error could originate at the Google Frontend level, meaning we've hit our API request quota, or it could mean the job itself has failed due to query quota issues.

Because these ambiguous errors are included in the default retry predicate, we end up retrying the `jobs.getQueryResults` request until our retries expire. Only after that do we call `jobs.get` and see that the job has failed for a retriable reason.
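For illustration, here is a minimal sketch of the kind of predicate involved, built directly on `google.api_core.retry`. The actual default in `google.cloud.bigquery.retry` also inspects error reasons, so treat the exception list below as an assumption:

```python
from google.api_core import exceptions
from google.api_core.retry import Retry, if_exception_type

# Sketch of an "ambiguous" predicate: TooManyRequests (HTTP 429) is treated
# as retriable, but during jobs.getQueryResults it may mean either "slow down
# your API calls" (retrying helps) or "the job failed on query quota"
# (retrying the call can never succeed).
ambiguous_predicate = if_exception_type(
    exceptions.TooManyRequests,      # 429: API quota hit, or a failed job?
    exceptions.InternalServerError,  # 500: transient, or a job error?
    exceptions.ServiceUnavailable,   # 503: usually transient
    exceptions.BadGateway,           # 502: usually transient
)

# A retry object built on this predicate keeps re-issuing
# jobs.getQueryResults until its deadline expires; only then does the
# client fall back to jobs.get to learn the job's real status.
illustrative_retry = Retry(predicate=ambiguous_predicate, timeout=600.0)
```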
**Describe the solution you'd like**

Likely we need the default retry object for our calls to `jobs.getQueryResults` (the ones where we know we're waiting for a job to complete, not the ones where we're downloading the results) to be different from the default retry for all other API requests, as sketched below. This may require an additional parameter to `QueryJob.result()` and the methods that call it.
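A sketch of what that could look like. The predicate, the constant name `POLLING_DEFAULT_RETRY`, and the backoff values are all hypothetical, not part of the current public API:

```python
from google.api_core import exceptions
from google.api_core.retry import Retry, if_exception_type

# Hypothetical: a narrower predicate for polling jobs.getQueryResults while
# the job may still be running. Ambiguous codes such as 429 and 500 are
# excluded, so polling gives up quickly, the client checks the job via
# jobs.get, and job_retry decides whether to restart the job.
polling_predicate = if_exception_type(
    exceptions.ServiceUnavailable,  # 503: safe to treat as transient
    exceptions.BadGateway,          # 502: safe to treat as transient
)

# Hypothetical constant; the values mirror typical defaults but are guesses.
POLLING_DEFAULT_RETRY = Retry(
    predicate=polling_predicate,
    initial=1.0,
    maximum=32.0,
    multiplier=2.0,
    timeout=600.0,
)
```

`QueryJob.result()` and the methods that call it could then accept such an object via the new parameter mentioned above.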
**Describe alternatives you've considered**

Note: this issue has been mitigated by #1734, which ensures that the default `job_retry` has a deadline that exceeds the default deadline of `retry`. It means we don't retry the job nearly as quickly as we could, though.
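A usage-level illustration of that mitigation; the specific timeout values here are made up, not the library defaults:

```python
from google.cloud import bigquery
from google.cloud.bigquery.retry import DEFAULT_JOB_RETRY, DEFAULT_RETRY

client = bigquery.Client()

# The #1734 mitigation at a glance: job_retry gets a longer deadline than
# retry, so when retries of the ambiguous jobs.getQueryResults call are
# exhausted, there is still budget left to re-issue the job itself.
# 10 and 20 minutes are illustrative values, not the library defaults.
rows = client.query("SELECT 1").result(
    retry=DEFAULT_RETRY.with_timeout(10 * 60),
    job_retry=DEFAULT_JOB_RETRY.with_timeout(20 * 60),
)
```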
**Additional context**

See the discussion at https://github.com/googleapis/python-bigquery/pull/1900/files#r1565837480. Internal folks can also see similar discussions on issues 311358887 and 312216177 in the Java client.