I am trying to figure out the most efficient way of downloading monthly records. I have a working Python script that builds the monthly export URL for a project and then reads each record and stores it. This process seems very slow (15+ minutes for large projects) compared to downloading the CSV files straight from ComboCurve (1-2 minutes for large projects). Is there an option to start a bulk download, or can someone post code showing how they parse through the monthly records more quickly?
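For context, here is roughly the shape of my current loop. This is a minimal sketch: the base URL, auth header, and the paging parameters (`take`/`skip`, `results`) are placeholders for whatever your client actually uses. Bumping the page size so each request returns more rows was the only obvious lever I found:

```python
import requests

BASE = "https://api.combocurve.com/v1"         # illustrative base URL
HEADERS = {"Authorization": "Bearer <token>"}  # real auth setup omitted

def fetch_monthly_records(project_id, export_id, page_size=1000):
    """Walk the paginated export results; bigger pages mean fewer round trips."""
    records, skip = [], 0
    while True:
        resp = requests.get(
            f"{BASE}/projects/{project_id}/monthly-exports/{export_id}/results",
            headers=HEADERS,
            params={"take": page_size, "skip": skip},  # hypothetical paging params
        )
        resp.raise_for_status()
        page = resp.json().get("results", [])
        if not page:
            break
        records.extend(page)
        skip += len(page)
    return records
```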
It is extremely slow. If I were using this for anything other than loading into our database I would have gone a more efficient route, but it takes around 3 minutes to run and we have it syncing nightly alongside the rest of our ETL endpoints.
We can launch an investigation into the endpoint. I think it might just be due to the sheer amount of data, but we should be able to continue optimizing the fetch.
Having a similar issue. It’s taking about 10 minutes to pull in all the data from a single scenario. Not a big deal for a nightly database update, but I’m embedding my script in a Spotfire project to pull data dynamically while running different scenarios.
Maybe adding a “date” parameter to the endpoint would help bring down the volume of data. I typically only care about the next 3-5 years of monthly data. I’m not sure exactly how that would work under the hood, but if it filters the data on your side before sending it over, it could help.
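Something like this is what I have in mind. It is purely hypothetical, since the endpoint may not accept these parameters today; `startDate` and `endDate` are made-up names for whatever filter the API might expose:

```python
import requests
from datetime import date

HEADERS = {"Authorization": "Bearer <token>"}  # real auth setup omitted
url = "<your monthly results URL>"             # same endpoint you already call

today = date.today()
params = {
    "take": 1000,
    # hypothetical filters: only pull the next five years of monthly rows
    "startDate": today.isoformat(),
    "endDate": date(today.year + 5, 12, 31).isoformat(),
}
resp = requests.get(url, headers=HEADERS, params=params)
resp.raise_for_status()
rows = resp.json().get("results", [])
```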
Have you tried using the new monthly-econ-results route? It fetches the same information as the monthly-exports endpoint without having to run an export first.
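A minimal sketch of what calling that route might look like. The exact path and pagination scheme here are assumptions based on the route name, so check the API docs for the real signature:

```python
import requests

BASE = "https://api.combocurve.com/v1"         # illustrative base URL
HEADERS = {"Authorization": "Bearer <token>"}  # real auth setup omitted

def fetch_monthly_econ_results(project_id, scenario_id, econ_run_id, page_size=1000):
    """Read monthly econ results directly, with no export job to create first."""
    url = (f"{BASE}/projects/{project_id}/scenarios/{scenario_id}"
           f"/econ-runs/{econ_run_id}/monthly-econ-results")  # path guessed from the route name
    records, skip = [], 0
    while True:
        resp = requests.get(url, headers=HEADERS,
                            params={"take": page_size, "skip": skip})
        resp.raise_for_status()
        page = resp.json().get("results", [])
        if not page:
            break
        records.extend(page)
        skip += len(page)
    return records
```

Skipping the export-creation step should also remove the wait for the export job itself, which may account for part of the slowdown people are seeing.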