LiveQuery cost over multiple object writes

How expensive are the Live Queries actually?

For example: if users had only 1-to-1 connections with other users, then each user could have a mailbox object where they receive all communications and updates, and would therefore subscribe to just one Live Query on that mailbox.

But if there is an even mix of 1-to-1 and many-to-many connections, it feels better to have a separate Live Query for each connection, because otherwise user A would need to write messages and updates to each recipient's mailbox separately. In that case, though, each user could end up with 20+ Live Query subscriptions.

So there is a trade-off between many object writes and many Live Query subscriptions. Reads should be roughly equal either way, since each user reads the updates and messages separately anyway. My question is therefore:

How expensive are Live Queries compared to database writes?

One approach seems to load MongoDB, the other seems to need many Live Query servers for a large user base. Would anyone like to share their experience? I unfortunately have none, so I don't know whether I should aim for as few Live Queries as possible, or whether it is totally safe for each user to subscribe to 100 queries. I believe the answer is similar no matter how large the user base is, but if it matters, let's assume 1 million users.

Thank you!

I’m not sure if someone has a benchmark on that, and the scalability of Live Queries really depends on the queries you are running and on your data model. I believe you will have to test and compare both scenarios. Anyway, keep in mind that if a single user opens multiple subscriptions, it will only generate a single WebSocket connection. It will make later writes more expensive, though, since for each write the live queries run again to check whether an update needs to be pushed to the client.

Thank you, Antonio, for the valuable advice. At the end of the day a benchmark is a must, I understand. But since my idea is far from production and I'm not sure how to load a benchmark so it generates representative results, I wanted to pick the most reasonable design to start from. Sorry for the long posts; writing things down helps me a lot. If my understanding is correct, then:

  1. there is no impact on the LQ server for maintaining the connection, as only one WebSocket is opened. If nothing else were in play, a client could subscribe to any number of queries.

  2. when one user sends an update/message to n other users, n LQ triggers fire, because each of the n users is subscribed separately. So in that regard there is not much to optimize, since one write always triggers each recipient's LQ separately.

The only trade-off I can think of is between LQ complexity and write operations, with two scenarios:

1-to-1

  • User A has to distribute n messages, so he writes n objects of a certain class, each with a recipient field set to the id of user B, C, D, … respectively. Other fields in the object define the group and any other information passed; there are no nested objects.
  • The same process is followed in every group a user belongs to, so every user always receives updates with recipient set to his own id alone.
  • Users B,C,D,… subscribe a LQ on documents with .equalTo("recipient", myId), which is about the simplest query possible, I believe, and needs only one extra index to be maintained in MongoDB. That way they listen for updates from any group communication.
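The 1-to-1 fan-out above can be sketched as a minimal model (assumptions: a hypothetical Message class name, and plain objects standing in for Parse.Object instances; each recipient's subscription query is shown in a comment):

```javascript
// Model of the 1-to-1 scheme: one logical message becomes n rows,
// each addressed to a single recipient via an indexed "recipient" field.
function fanOutWrites(senderId, groupId, recipientIds, body) {
  return recipientIds.map((recipientId) => ({
    className: "Message",   // hypothetical class name
    recipient: recipientId, // the one indexed field the LQ filters on
    sender: senderId,
    groupId: groupId,
    body: body,
  }));
}

// Each recipient would then subscribe the simple query:
//   new Parse.Query("Message").equalTo("recipient", myId)

const rows = fanOutWrites("userA", "g1", ["userB", "userC", "userD"], "hi");
console.log(rows.length); // 3 writes for one logical message
```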

1-to-n

  • User A writes only one message/update object, with a groupId field set.
  • Users B,C,D,… each keep a list of the groupIds they belong to and subscribe a LQ on documents with .containedIn("groupId", myGroupsIdArray). That is a more complex query, but it still uses only one extra index.
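The 1-to-n scheme can be modeled the same way (again with plain objects; the membership check mirrors what the LiveQuery server has to evaluate per subscription on every write):

```javascript
// Model of the 1-to-n scheme: exactly one row per logical message.
function singleWrite(senderId, groupId, body) {
  return [{ className: "Message", sender: senderId, groupId: groupId, body: body }];
}

// What each subscriber's .containedIn("groupId", myGroupsIdArray) amounts to:
// the LQ server checks the written row's groupId against the subscriber's list.
function matchesSubscription(row, myGroupsIdArray) {
  return myGroupsIdArray.includes(row.groupId);
}

const [row] = singleWrite("userA", "g1", "hi");
console.log(matchesSubscription(row, ["g1", "g7"])); // true
console.log(matchesSubscription(row, ["g2"]));       // false
```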

So it comes down to benchmarking:

  1. Writing n messages to MongoDB, triggering n equalTo LQ evaluations
  2. Writing 1 message to MongoDB, triggering n containedIn LQ evaluations

Assuming users have a myGroupsIdArray of average size 20+ and hardly ever more than 100, the n containedIn evaluations could still be noticeably lower load than n writes combined with n equalTo evaluations, since writes are generally expensive, right?
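A back-of-envelope cost model makes the comparison concrete. All unit costs below are made-up placeholders to be replaced by benchmark numbers; the point is the shape of the formulas, not the constants:

```javascript
// Illustrative unit costs -- assumptions, not measurements.
const WRITE = 10;        // one MongoDB insert
const EQUAL_TO = 1;      // evaluating equalTo against one subscription
const CONTAINED_IN = 3;  // evaluating containedIn over a 20-100 element array

// 1-to-1: n inserts, each checked against every open subscription.
function oneToOneCost(n, openSubscriptions) {
  return n * WRITE + n * openSubscriptions * EQUAL_TO;
}

// 1-to-n: a single insert, checked once against each open subscription.
function oneToManyCost(openSubscriptions) {
  return WRITE + openSubscriptions * CONTAINED_IN;
}

// Example: a 20-member group where all members are subscribed.
console.log(oneToOneCost(20, 20)); // 20*10 + 20*20*1 = 600
console.log(oneToManyCost(20));    // 10 + 20*3 = 70
```

Under these (invented) constants the single write wins by roughly an order of magnitude; only a real benchmark can say whether the actual constants preserve that ordering.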

If not, and 1-to-1 is still something that can perform well at scale, it brings the advantage of an encrypted social graph with well-defined ACLs, as the Signal app uses, which seems to be a welcome feature nowadays.

I guess the single-object approach will perform better. Also, consider using the ACLs and Roles that Parse already provides instead of creating your own group mechanism.


Thank you. Is there any limit on how many Roles I can have? For example, if every connection generated a Role, I could end up with an unbounded number of roles. From what I understand, a Role is in the end just another object class, so there should be no performance hit. Is that correct?

Yes. That’s correct.
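As a back-of-envelope check on that "unbounded number of roles": if every 1-to-1 connection got its own Role, the role count grows with the number of edges in the social graph. A hypothetical sizing sketch (not Parse-specific; assumes undirected connections, hence the division by 2):

```javascript
// Roles live in the _Role class like any other objects, so a
// role-per-connection scheme creates one _Role row per connection (edge).
function rolesForConnections(userCount, avgConnectionsPerUser) {
  // each undirected connection is counted once from each side
  return (userCount * avgConnectionsPerUser) / 2;
}

console.log(rolesForConnections(1_000_000, 20)); // 10000000
```

So with the thread's 1-million-user assumption and ~20 connections each, that is on the order of 10 million Role rows: fine as stored data, as long as the relevant `_Role` fields are indexed.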