Can you rate my caching strategy?

I have a master table with the most frequently accessed data.

However, the data in that table is constantly changing, since it's supplied by the users in an unpredictable manner. This means I won't be able to cache the queries, so there will be a continuously high load on the db (right?).

So I'm thinking of creating a few additional tables that will function as logs, recording each change to the data as a separate row.

Then once a day, at midnight, a cron job will run that consolidates the logs and writes them into the main table. The main table will thus be updated only once a day, so all the queries during the day can be served from cache.
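The consolidation step could look roughly like this. This is a minimal sketch with plain dicts and lists standing in for the main table and the log table (the `consolidate` function and the record shapes are hypothetical, not from any library); a real version would do the same fold with a bulk write against the database.

```python
from collections import defaultdict

def consolidate(main_table, change_log):
    """Fold each logged change into the main table, then clear the log.

    main_table: dict mapping record id -> {"count": int} record
    change_log: list of {"id": ..., "delta": ...} entries, one per change
    """
    # First sum up all deltas per record id, so each record is written once.
    deltas = defaultdict(int)
    for entry in change_log:
        deltas[entry["id"]] += entry["delta"]
    # Then apply the summed deltas to the main table in one pass.
    for record_id, delta in deltas.items():
        record = main_table.setdefault(record_id, {"count": 0})
        record["count"] += delta
    change_log.clear()  # the log is emptied once it has been folded in

# The midnight cron job would call consolidate(main, log).
main = {"a": {"count": 10}}
log = [{"id": "a", "delta": 2}, {"id": "b", "delta": 5}, {"id": "a", "delta": -1}]
consolidate(main, log)
print(main)  # {'a': {'count': 11}, 'b': {'count': 5}}
```

Summing the deltas before writing keeps the number of writes to the main table proportional to the number of distinct records, not the number of logged changes.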

Is this a good strategy?

There are a number of insightful MongoDB webinars on schema design. If you search online, you should easily find them.


Yes, I watched a few and also completed their course at MongoDB academy. The key takeaways are: 1) identify the most frequently accessed data and try to embed it in one document, 2) create indexes on the most frequently queried keys, 3) if any of the data can grow without bound, separate it into another table and reference it by id (instead of embedding it).
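The three takeaways above can be illustrated with document shapes (plain dicts here; the field names and ids are made up for illustration):

```python
# Takeaway 1 — embedded design: frequently accessed data lives in one
# document, so a single read returns everything.
user = {
    "_id": "u1",
    "name": "Alice",
    "settings": {"theme": "dark", "lang": "en"},  # small and bounded -> embed
}

# Takeaway 3 — referenced design: data that can grow without bound
# (e.g. activity events) goes in a separate collection, linked by id,
# so the user document never grows without limit.
events = [
    {"_id": "e1", "user_id": "u1", "type": "login"},
    {"_id": "e2", "user_id": "u1", "type": "click"},
]

# Takeaway 2 — in pymongo, indexing the lookup key would be
# collection.create_index("user_id"), so the filter below becomes an
# index scan rather than a full scan.
alice_events = [e for e in events if e["user_id"] == user["_id"]]
print([e["type"] for e in alice_events])  # ['login', 'click']
```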

But there's nothing about caching strategies. So this seems like a good one: 1) identify the most frequently accessed data and try to embed it in one document, 2) identify which data is going to change frequently and create a separate “log” table for each, 3) use cron-based jobs to consolidate the “log” tables into the most frequently used table once or twice a day.

This way the most frequently used table will be updated only once or twice a day, which means that all of the queries can be served from the Redis cache, minimizing the number of connections to the db.
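The read path then becomes a read-through cache that is invalidated only when the cron job rewrites the main table. A minimal sketch, with a plain dict standing in for Redis (in production this would be `redis.Redis()` with `get`/`set`, probably plus a TTL as a safety net; all names here are illustrative):

```python
import json

cache = {}                          # stand-in for the Redis cache
main_table = {"a": {"count": 11}}   # stand-in for the main db table

def get_record(record_id):
    """Read-through cache: serve from cache, fall back to the db once."""
    key = f"main:{record_id}"
    if key in cache:
        return json.loads(cache[key])   # cache hit: no db round trip
    record = main_table[record_id]      # cache miss: one db query
    cache[key] = json.dumps(record)     # populate cache for later reads
    return record

def invalidate_after_consolidation():
    """Called by the cron job right after the main table is rewritten."""
    cache.clear()  # the next read repopulates from the fresh table

get_record("a")  # first call hits the db and fills the cache
get_record("a")  # second call is served entirely from the cache
```

Since the table only changes at consolidation time, invalidating right after the cron job keeps the cache consistent for the whole day without any per-write invalidation logic.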