How does "await" work with Parse?

I generally use `await` in my Cloud Code functions. Does this block all other operations? For example, if a user runs a cloud function that contains an `await`, can a second user simultaneously run the same cloud function, or a different one that also uses `await`? Or does the second user have to wait until the first user's job finishes?

TLDR: With async/await, many users can run the same cloud function simultaneously.

Await is not a Parse feature. Async/await functions run asynchronously in JS, so they do not block the event loop and allow operations to run in parallel.

`await` acts like a `.then` on a JS Promise.
You can use `await` as much as you want; it will not block the response or other parallel users :slight_smile:
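A minimal sketch of this in plain Node (the `sleep` helper stands in for a real database query; the function and user names are illustrative, not real Parse Cloud Code): while the first call is parked at its `await`, the event loop starts the second call.

```javascript
// Simulate two users calling the same "cloud function" at once.
// Each await yields control back to the event loop, so the second
// call starts before the first one finishes.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function cloudFunction(user, log) {
  log.push(`${user} start`);
  await sleep(50); // stand-in for a slow query or HTTP call
  log.push(`${user} end`);
}

async function main() {
  const log = [];
  // Both calls run concurrently; neither blocks the other.
  await Promise.all([cloudFunction("user1", log), cloudFunction("user2", log)]);
  return log;
}

main().then((log) => console.log(log.join(" | ")));
// → user1 start | user2 start | user1 end | user2 end
```

If `await` blocked the process, you would see `user1 start | user1 end | user2 start | user2 end` instead; the interleaved order shows the second "user" did not have to wait.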

info here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/async_function


Thank you very much. I read somewhere that Node.js can only use one CPU core and cannot do multithreading; that's why I asked. Do you know anything about this? Should I choose a server with one strong core, or multiple normal cores? Which one is best for Node.js? Thanks again

Parse Server and Node.js are super light and fast (Parse can run perfectly under 250MB of RAM). You can start with a mono-core VPS.
To use multiple cores you need to scale Parse Server horizontally; depending on the size of your app, you can do that later.

If you do not want to host it and manage hosting/scaling yourself, Back4App is a good provider :slight_smile:

PS: Scaling Parse horizontally is another topic :slight_smile:


Horizontal scaling means creating multiple Parse Server instances and putting a load balancer in front of them, right?

But my question is more like this:
Assume RAM is equal, and I have two servers:
Server A has 1 core scoring 1000 points.
Server B also scores 1000 points in total, but has 2 cores, so 500 points per core.

Which server is better for Parse Server, Server A or Server B?
Btw, thank you for the answer

TLDR: For a simple scale-out of Parse Server (without a global Redis cache or a shared Redis for Live Query), you only need to start new Parse Server Node instances behind a load balancer like nginx/Docker.
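A hypothetical nginx config for that setup might look like this (the ports, upstream name, and `/parse` mount path are illustrative assumptions, not values from this thread):

```nginx
# Round-robin load balancing across two Parse Server instances
upstream parse_servers {
    server 127.0.0.1:1337;
    server 127.0.0.1:1338;
}

server {
    listen 80;

    location /parse {
        proxy_pass http://parse_servers;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
```

Each upstream entry is a separate Parse Server process (possibly on separate machines), which is how you end up using more than one core.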

A bigger mono-core will be the best solution at the beginning, avoiding some overhead from load balancing, caching, etc.; then, when you reach the limit of vertical hardware scaling, you can switch to horizontal scaling. But it really depends on your needs :slight_smile:

Note: Parse has a non-compressible RAM footprint of 60-90MB per instance, so scaling horizontally will eat some RAM.
