2021-05-03 - DynamoDB pagination

[[NPM module - izara-shared]]
= Ideas =
* can use Limit to test pagination (see the Limit sketch after this list):
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html#DDB-Query-request-Limit
* [[NPM module - izara-shared#dynamodbSharedLib.query]] returns queryDetails, which states whether more pages/results exist
* we need to consider every Query request: can it hit the limit and paginate? If so, how do we handle it? E.g.:
# restrict the number of records, e.g. use UserLimits to allow only 1000 NotificationGroups per user, trusting that a Query will never exceed the DynamoDB pagination limit
# have one script keep requesting from DynamoDB until all records are received (see the loop sketch below); danger: the script can exceed its resources if there is no limit on the number of records
# use an async Lambda flow to process parts of the logic in sections (see the chained-invocation sketch below); this is the stronger method for large result sets
* if the data set might change during the async process, maybe keep a timestamp (e.g. in a parent record for the set of data) recording when the data last changed; we send the last LastEvaluatedKey and this timestamp on to the next Lambda invocation, which can check the timestamp to make sure the data is still the same. If it has changed, handle accordingly, e.g. start again (this check is folded into the chained-invocation sketch below)
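
A minimal sketch of using Limit to force pagination in a test, assuming the AWS SDK v2 DocumentClient; the table name, key schema, and values are hypothetical placeholders:

<syntaxhighlight lang="javascript">
const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();

// Force pagination by querying with a tiny Limit; DynamoDB returns
// LastEvaluatedKey whenever more pages/results exist.
async function testPagination() {
  let lastEvaluatedKey;
  let pages = 0;
  do {
    const page = await docClient.query({
      TableName: "NotificationGroups",        // hypothetical table
      KeyConditionExpression: "userId = :u",  // hypothetical key schema
      ExpressionAttributeValues: { ":u": "user-123" },
      Limit: 2,                               // small Limit forces pagination even on test data
      ExclusiveStartKey: lastEvaluatedKey,    // undefined on the first page
    }).promise();
    pages += 1;
    lastEvaluatedKey = page.LastEvaluatedKey;
  } while (lastEvaluatedKey);
  console.log(`query paginated across ${pages} page(s)`);
}
</syntaxhighlight>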
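Option 2 as a sketch: one script keeps requesting until all records are received, using the same hypothetical client as above. The danger noted in the list shows up as the unbounded items array:

<syntaxhighlight lang="javascript">
const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();

// Loop until DynamoDB stops returning LastEvaluatedKey, accumulating
// every item. With no cap on the record count this can exhaust the
// Lambda's memory or run past its timeout.
async function queryAllItems(params) {
  const items = [];
  let lastEvaluatedKey;
  do {
    const page = await docClient.query({
      ...params,
      ExclusiveStartKey: lastEvaluatedKey,
    }).promise();
    items.push(...page.Items);
    lastEvaluatedKey = page.LastEvaluatedKey;
  } while (lastEvaluatedKey);
  return items; // grows without bound on an unbounded result set
}
</syntaxhighlight>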
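Option 3 combined with the staleness check from the last bullet, as a hedged sketch: each invocation processes one page, then asynchronously re-invokes the same Lambda with LastEvaluatedKey plus the parent record's lastChanged timestamp. The parent-record table, key schema, and processPage are hypothetical assumptions, not the izara-shared implementation:

<syntaxhighlight lang="javascript">
const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();
const lambda = new AWS.Lambda();

exports.handler = async (event) => {
  // Read when the data set last changed (hypothetical parent record).
  const parent = await docClient.get({
    TableName: "NotificationGroupParents",
    Key: { userId: event.userId },
  }).promise();
  const lastChanged = parent.Item.lastChanged;

  // Data changed since the previous invocation: handle accordingly,
  // here by starting again from the first page.
  if (event.lastChanged && event.lastChanged !== lastChanged) {
    return invokeNext({ userId: event.userId, lastChanged });
  }

  const page = await docClient.query({
    TableName: "NotificationGroups",           // hypothetical table
    KeyConditionExpression: "userId = :u",
    ExpressionAttributeValues: { ":u": event.userId },
    ExclusiveStartKey: event.lastEvaluatedKey, // undefined on the first run
  }).promise();

  await processPage(page.Items); // hypothetical per-page logic

  // More pages exist: hand the cursor and timestamp to the next invocation.
  if (page.LastEvaluatedKey) {
    return invokeNext({
      userId: event.userId,
      lastEvaluatedKey: page.LastEvaluatedKey,
      lastChanged,
    });
  }
};

function invokeNext(payload) {
  // InvocationType "Event" makes the re-invocation asynchronous.
  return lambda.invoke({
    FunctionName: process.env.AWS_LAMBDA_FUNCTION_NAME, // self-invoke
    InvocationType: "Event",
    Payload: JSON.stringify(payload),
  }).promise();
}

async function processPage(items) { /* hypothetical */ }
</syntaxhighlight>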




[[Category:Working documents| 2021-05-03]]
[[Category:Working documents - izara-shared| 2021-05-03]]
