Python: Routine for handling API request limits


I need to make a lot of HTTP requests (> 1000) against a public service that allows only 500 HTTP requests per day. Hence, I have to count the number of executed requests and stop when I reach the maximum daily amount, then continue the next day with the remaining calls. In particular, I iterate over a non-sorted list, so I cannot assume the elements are in any order. My code looks like this:

import requests

request_parameters = {'api_key': api_key}

for user_id in all_user_ids:
    r = requests.get('http://public-api.com/%s' % user_id, params=request_parameters)
    text = r.content
    # do something with text

Is there a package or pattern you can recommend for counting and resuming API calls like this?

I suggest implementing a simple counter that stops when you have hit the limit for the day, along with a local cache of the data you have already received. When you run the process again the next day, check each record against the local cache first and only go on to call the web service if there is no record in the local cache. That way you will eventually have all of the data, as long as you are not generating more new requests per day than the service usage limit.
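A minimal sketch of that pattern (the DAILY_LIMIT constant and fetch_remaining function are names I have made up for illustration):

import requests

DAILY_LIMIT = 500  # requests the service allows per day

def fetch_remaining(all_user_ids, cache, api_key):
    """Fetch users not already cached, stopping at the daily limit."""
    calls_made = 0
    for user_id in all_user_ids:
        if user_id in cache:           # already fetched on a previous day
            continue
        if calls_made >= DAILY_LIMIT:  # stop here; resume tomorrow
            break
        r = requests.get('http://public-api.com/%s' % user_id,
                         params={'api_key': api_key})
        calls_made += 1
        if r.ok:
            cache[user_id] = r.text
    return cache

Run this once per day with the same cache; each run skips everything already cached and spends its 500 calls on the remaining identifiers, so the order of the list does not matter.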

The format of the cache will depend on what the web service returns and how much of the data you need. It may be as simple as a CSV file with the unique identifier to search against plus the other fields you will need to retrieve in the future.
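For example, a CSV cache could be loaded and appended to like this (a sketch; the file name and column layout are my assumptions):

import csv
import os

CACHE_FILE = 'cache.csv'  # hypothetical file name

def load_cache():
    """Return {user_id: other_fields} for every record cached so far."""
    cache = {}
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE, newline='') as f:
            for row in csv.reader(f):
                cache[row[0]] = row[1:]  # first column is the unique id
    return cache

def append_to_cache(user_id, fields):
    """Append one fetched record so it survives across runs."""
    with open(CACHE_FILE, 'a', newline='') as f:
        csv.writer(f).writerow([user_id] + list(fields))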

Another option is to store the whole response of each call (if you need a lot of it) in a dictionary, with the key being the unique identifier and the value being the response. This can be saved as a JSON file and loaded into memory for checking against on future runs.
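A sketch of that JSON variant (the file name is again my assumption), which pairs directly with the counter loop above:

import json
import os

CACHE_FILE = 'responses.json'  # hypothetical file name

def load_responses():
    """Load the {user_id: response_text} dictionary from disk, if present."""
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            return json.load(f)
    return {}

def save_responses(responses):
    """Persist the whole dictionary so the next day's run can resume."""
    with open(CACHE_FILE, 'w') as f:
        json.dump(responses, f)

The JSON approach keeps everything the service returned; the CSV approach is smaller on disk if you only need a few fields per record.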

