> I think that iterate through each user will be very computation expensive.
You can preload the data you need in bulk. Let's say you have a query that gives you a mapping of User ID -> Product IDs. You run that query first (for all the users) and cache the result in memory (this is where an ORM will probably be counter-productive; I suggest converting the result to primitive types like a dictionary to save memory). It's a huge query, but it's also a single query, so the database can optimize it internally and it shouldn't be too big a problem.
You repeat this for all the data you think you'll need (it's fine if you fetch a bit extra; the win from fetching in bulk makes up for it).
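A minimal sketch of the preload step, using the stdlib `sqlite3` module as a stand-in for your actual database driver (the `purchases` table and its columns are made-up names for illustration):

```python
import sqlite3
from collections import defaultdict

def preload_user_products(conn):
    """One bulk query, collapsed into a plain {user_id: [product_id, ...]} dict.

    No ORM objects are kept around -- just primitive ints and lists,
    which is much cheaper to hold in memory for a large result set.
    """
    cur = conn.execute(
        "SELECT user_id, product_id FROM purchases ORDER BY user_id, product_id"
    )
    user_products = defaultdict(list)
    for user_id, product_id in cur:  # stream rows instead of materializing objects
        user_products[user_id].append(product_id)
    return dict(user_products)
```

The same shape works with any DB-API driver; only the connection setup and the exact SQL would change.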
You can even reuse existing caches your app might be using. If the data you need is already in Memcached/Redis as a result of another process you could just fetch it from there directly and avoid hitting the database at all.
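The cache-first lookup can be sketched like this. Here `cache` is anything dict-like; with redis-py you would call `r.get(key)` / `r.set(key, value)` on the client instead, and the `"user:{id}:products"` key format is an assumption, not a convention your app necessarily uses:

```python
import json

def get_user_products(user_id, cache, load_from_db):
    """Check an existing cache before touching the database at all."""
    key = f"user:{user_id}:products"  # hypothetical key scheme
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)      # cache hit: no database round trip
    products = load_from_db(user_id)   # fallback: hit the database once
    cache[key] = json.dumps(products)  # warm the cache for the next reader
    return products
```

If another process already keeps this data warm, the fallback branch rarely runs and the bulk job barely touches the database.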
Now that you have all that data in memory, you do the actual processing in code. Compute and memory capacity are relatively cheap compared to the engineering effort of optimizing further (especially if that involves rearchitecting your database layout or denormalizing certain data), and you can go even cheaper by outsourcing this process to a bare-metal server, which is more cost-effective for raw compute than a cloud provider.
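With everything preloaded, the per-user loop is pure in-memory work: no query per user, just dict lookups. The "total spend" metric below is a stand-in for whatever per-user computation you actually need:

```python
def compute_user_totals(user_products, product_prices):
    """Process every user against preloaded plain dicts.

    user_products:  {user_id: [product_id, ...]} from the bulk query
    product_prices: {product_id: price}, also preloaded in bulk
    """
    totals = {}
    for user_id, product_ids in user_products.items():
        # dict lookups only; a missing price defaults to 0
        totals[user_id] = sum(product_prices.get(p, 0) for p in product_ids)
    return totals
```

Iterating a few million users this way is typically seconds to minutes of CPU time, which is exactly the "compute is cheaper than re-architecting" trade-off described above.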