TimeoutError

Hello,

I keep running into this timeout error when trying to pull the monthly-export table of a project with 300+ wells and two combos.

('Connection aborted.', TimeoutError(10060, 'A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', None, 10060, None))

Can you post the actual call?

It processes around 170,000 records before the timeout error occurs.

url = 'https://api.combocurve.com/v1/projects/62c84707e7867800124d5633/scenarios/62c87d7133c2fb0012223942/econ-runs/62cd8bede7867800125c85ba/monthly-exports/b1215249-c47e-4c3a-9406-bd69c21312f4?take=200'

import requests
import pandas as pd


def get_all2(url, headers):
    # Keep fetching while there are more pages to be returned
    has_more = True
    while has_more:
        response = requests.get(url, headers=headers)
        response.raise_for_status()  # surface HTTP errors instead of a KeyError below
        data = response.json()

        # Yield the records so the caller does the processing
        yield from data["results"]

        url = get_next_page_url(response.headers)
        has_more = url is not None


def get_monthly_records(url, auth_headers=None):
    # Fetch the headers when the function runs, not at definition time
    # (a default argument is evaluated only once, when the module loads)
    if auth_headers is None:
        auth_headers = combocurve_auth.get_auth_headers()

    tmp_list = []
    counter = 0
    for record in get_all2(url, auth_headers):
        tmp_list.append(record)

        if counter % 1000 == 0:
            print(f"{counter} records processed")
        counter += 1

    return pd.DataFrame(tmp_list)
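For completeness, get_all2 relies on a get_next_page_url helper that isn't shown above. A minimal sketch, assuming the API returns a standard Link response header with a rel="next" entry (the helper name matches the call above; the parsing details are an assumption):

```python
import re


def get_next_page_url(headers):
    """Return the rel="next" URL from a Link response header, or None.

    Assumes the server sends a standard Link header such as:
    <https://.../monthly-exports/...?skip=200>; rel="next"
    Adjust the pattern if your API formats pagination links differently.
    """
    link = headers.get("Link", "")
    match = re.search(r'<([^>]+)>;\s*rel="next"', link)
    return match.group(1) if match else None
```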

Is it always exactly the same number of records processed before it times out, or is there a little variance?

I did not log the exact number of records, but it seems to fail at about the same point each time.

OK, we will look into it.

Yero was able to dig into the logs. It appears that it isn't always failing at the same point, and we don't see anything erroring out on our end. We would recommend adding retry functionality to your code, because this looks like it might just be a transient network error.
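A minimal retry sketch along those lines, using requests with a per-request timeout and simple exponential backoff (the function and parameter names here are illustrative, not part of any SDK):

```python
import time

import requests


def get_with_retries(url, headers, max_attempts=5, timeout=60):
    """GET with a request timeout and exponential backoff for transient errors.

    Retries only on connection errors and timeouts; other HTTP errors are
    raised immediately via raise_for_status().
    """
    for attempt in range(max_attempts):
        try:
            response = requests.get(url, headers=headers, timeout=timeout)
            response.raise_for_status()
            return response
        except (requests.ConnectionError, requests.Timeout) as exc:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller see the error
            wait = 2 ** attempt  # back off: 1s, 2s, 4s, ...
            print(f"Transient error ({exc}); retrying in {wait}s")
            time.sleep(wait)
```

You could then swap `requests.get(url, headers=headers)` inside get_all2 for `get_with_retries(url, headers)` so a single dropped connection doesn't abort the whole export.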
