Possibility to set Class Level Permissions via file?

I am wondering if there is a way to hardcode the class level permissions in a file, or even in my index.js where I start the Parse Server, just like the protected fields, or the way Firebase does it?
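
For context, something roughly like this is what I mean, by analogy with the protectedFields option that can already be passed to the server (the classLevelPermissions block is purely hypothetical, just to illustrate the idea):

const { default: ParseServer } = require('parse-server')

const parseServer = new ParseServer({
  // existing option: protected fields declared statically in the config
  protectedFields: {
    _User: {
      '*': ['email'],
    },
  },
  // hypothetical option: CLPs declared statically, the way they appear in the schema API
  classLevelPermissions: {
    Post: {
      find: { '*': true },
      get: { '*': true },
      create: { 'role:Admin': true },
      update: { 'role:Admin': true },
      delete: { 'role:Admin': true },
      addField: {},
    },
  },
  // ...appId, masterKey, databaseURI, etc.
})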

Cheers!

Edit: I misread the context of the question, please disregard this comment.

To my understanding, not at the moment. You can restrict file saves using the file triggers, but as the files are fetched using a GET request, it is not possible to know the user or session that requested the URL.

However, I am working on a PR to create temporary access tokens and provide some ACL support for Parse Files. It wouldn’t be too hard for the feature to evolve to include CLPs.

@dblythy I think we are talking here about a static configuration from a JSON file or an object passed into the Parse Server config.

@mrvn currently it’s not supported. I think this feature will be available in 2021.

However, due to the growing demand for this feature, as a workaround I have exported the script that I use in many projects (production ready) to create/update/migrate Parse schemas from a static JS/TS config file.

Here's the gist: https://gist.github.com/Moumouls/e4f0c6470398efc7a6a74567982185fa
It can be used in a JS or TS project (the script is written in TS; you just need to remove the TS-specific parts like the type annotations).
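
To give an idea of the shape, a class definition in such a static config looks roughly like this (simplified, class and field names are just examples; check the gist for the exact interface):

const PostSchema = {
  className: 'Post',
  fields: {
    title: { type: 'String' },
    author: { type: 'Pointer', targetClass: '_User' },
  },
  indexes: {
    title_index: { title: 1 },
  },
  classLevelPermissions: {
    find: { '*': true },
    get: { '*': true },
    create: { requiresAuthentication: true },
    update: { 'role:Admin': true },
    delete: { 'role:Admin': true },
    addField: {},
  },
}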

Yes, my bad, sorry. I read it as CLPs for Parse.File, not CLPs from a file.

Is this the feature that you drafted previously, that we will be working on next year @Moumouls?

Yes, exactly. I would like to work on it and ship it next year!

Ok, cool. Should I work off your old PR, or start a new branch and take bits and pieces from the gist and the PR? If there’s any way I can help you close out the remaining challenges rather than starting a new approach I would be happy to help. Let me know.

For this feature we have two approaches:

  • Easiest one: use my current script with a little rework and register the script before the serverStartComplete trigger function (not super clean, but it will largely do the job)

  • Most complicated one: what I tried to achieve in my draft PR: hook into the core initialization process of Parse Server and dive deep into the Parse Server schema system

If the first solution looks good to other contributors, the feature could be submitted next month. (I can find some time, since I already have the code and also some tests that are not included in the gist.)

@davimacedo what do you think about that ?

Maybe we can try an implementation in two stages to get the feature into the package ASAP: implement the first, simple solution and then introduce the more complex one for cleaner support.

@dblythy to avoid wasted time and a wrong implementation here, should we wait for some feedback?

@dplewis maybe you are interested in this discussion?

I like the idea of the two-stage implementation. Would the solution work only for setting the CLPs, or for the whole schema?

The solution works for all schema features: indexes, protected fields, fields, CLPs, etc.
Everything supported by Parse.Schema in the JS SDK.
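
As a rough, untested sketch of what that covers, anything you can do with the JS SDK like this would be declarable statically (run server-side with the master key available):

const schema = new Parse.Schema('Post')
schema
  .addString('title')
  .addPointer('author', '_User')
  .addIndex('title_index', { title: 1 })
  .setCLP({
    find: { '*': true },
    get: { '*': true },
    create: { requiresAuthentication: true },
    update: { 'role:Admin': true },
    delete: { 'role:Admin': true },
    addField: {},
  })
// Creates the class (use update() instead if it already exists)
schema.save().then(() => console.log('Post schema saved'))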

That would be great. I am only a little bit afraid of having it in the Parse Server initialization. Any change made to the schema at runtime would be lost when the server restarts. How are you planning to solve that?

@davimacedo we need to avoid a dual source of truth, so when a developer chooses to use the predefined schemas feature, we need to force addField: {} in the CLP of each class (so clients cannot add fields) and also set allowClientClassCreation: false.
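
To make that concrete, here is a rough sketch of what would be enforced (allowClientClassCreation and addField are the real option names; the surrounding wiring is just illustrative):

// Server option: clients can never create classes on the fly
const serverOptions = {
  allowClientClassCreation: false,
  // ...appId, masterKey, databaseURI, etc.
}

// CLP fragment forced onto every class defined in the static schemas:
// an empty addField object means no client can add fields to the class.
const forcedCLP = {
  addField: {},
}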

My current script then supports migration and class creation if the user restarts the server with new/changed schemas.

Security note: using the predefined schemas feature will also increase default security by blocking misconfigured CLPs and Parse Server settings (in order to prevent fraudulent clients from creating classes/fields).

I agree… we can actually “block” all schema write endpoints.

I am very interested in this. My first MongoDB experience was with Mongoose/Meteor and their static schema declaration systems. I think this is a huge contribution to the Parse API (I came for the Auth+GraphQL).

I’d like to contribute. Would you like help with documentation or beta testing? I’m starting a new project and will start working with the linked script today. Thanks @Moumouls !

I plan to push a PR with my script on board at the end of the year, @MichaelJCole, so with feedback from other contributors I think the feature could be available in the package behind a beta flag in January.

@Moumouls Thank you, I’m looking forward to it. To use the script you posted, what’s the best place to initialize this?

Easiest one: use my current script with a little rework and register the script before the serverStartComplete trigger function (not super clean, but it will largely do the job)

Why does buildSchemas() need to run before “serverStartComplete”?

If it's to prevent outside users from using the schema before it has initialized, an Express middleware could respond “500: server initializing” until serverStartComplete sets a flag that it’s ready.

Is there another reason (internal to the Parse framework) to do this before serverStartComplete?

Thank you!

@MichaelJCole, check the gist; you should call buildSchemas inside serverStartComplete.

For anyone curious, here’s how I did this:

// schemaBuilder.js
// Has buildSchemas completed?
let schemasReady = false

// Express middleware to wait for buildSchemas to finish before serving requests
const waitForSchemas = (req, res, next) => {
  // Accept public connections if schema is ready
  if (schemasReady) return next()
  // Accept connections from localhost so the server can initialize. Verify this is secure in your environment.
  if (req.connection.localAddress === req.connection.remoteAddress) return next()
  // If not ready and not localhost, send error
  res.status(503).send({ error: 'Server initializing' })
}

// This function updates, migrates, and creates classes
const buildSchemas = async (localSchemas) => {
  // Note that Parse has already been added to global, and does not need to be imported.
  // ... code in attached link
  schemasReady = true
}

// Export for use in index.js below
module.exports = { waitForSchemas, buildSchemas }

And in my Express server:

// index.js
const express = require('express')
const { default: ParseServer, ParseGraphQLServer } = require('parse-server')

const { waitForSchemas, buildSchemas } = require('./schemaBuilder')
const { mySchemas } = require('./mySchemas')

// Create express app
const app = express()

// Return 503 until schemas have been created.
app.use(waitForSchemas)

// Create a Parse Server Instance
const parseServer = new ParseServer({
  allowClientClassCreation: false,
  ...otherOptions,
  
  // Parse has already been added to global
  async serverStartComplete(error) {
    if (error) throw error
    console.log('==>Build schemas...')
    await buildSchemas(mySchemas)
    console.log('...Build schemas complete')
  }
})
// ... other init code.

About mySchemas: https://docs.parseplatform.org/js/guide/#schema
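
For reference, mySchemas is just a module exporting the schema definitions; a minimal sketch (class and field names are illustrative) could look like:

// mySchemas.js (sketch): exports an array of class definitions in the shape shown earlier in the thread
module.exports = {
  mySchemas: [
    {
      className: 'Post',
      fields: { title: { type: 'String' } },
      classLevelPermissions: { find: { '*': true }, get: { '*': true }, addField: {} },
    },
  ],
}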

@MichaelJCole note that if you read my script, it does NOT delete classes after the diff check, in order to avoid data loss. However, the migration strategy for fields is destructive: if the type of a field changes on a restart (to change the schema structure), the field will be deleted and then re-created. To keep the data, developers MUST write migration scripts or avoid changing the type of a field.

To delete a class already created by the script, you should use the API or the Parse Dashboard.
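
If you really need to change the type of a field, a minimal migration sketch (purely illustrative; the class and field names are made up, run it with the master key before restarting with the new schema) could copy the data into a temporary field first:

// Illustrative only: copy the old String field "age" into a temporary field
// before the restart that redefines "age" as a Number.
const copyAgeToTmp = async () => {
  const query = new Parse.Query('Player')
  query.exists('age')
  await query.each(async (player) => {
    player.set('ageTmp', parseInt(player.get('age'), 10))
    await player.save(null, { useMasterKey: true })
  }, { useMasterKey: true })
}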

@Moumouls Cool, this worked out great. I like how the output from Parse.Schema.all() can be fed into your buildSchemas() function.

This makes me think I can edit the schema in the Parse-Dashboard, save the schema to a file while developing, then apply that file in production (with the migration issues you noted).

This would be nice for iterating on the schema in development. I built this into my serverStartComplete code, but it could also be supported as backup/restore in the Parse-Dashboard.

// Dump the current server schema to a writable stream as a requireable module
const backupSchemas = async (writeStream) => {
  // Fetch every class schema from the server
  const allSchemas = await Parse.Schema.all()
  allSchemas.forEach((backupSchema) => {
    // Clean up the index definitions so they can be re-applied later (helper from lib)
    backupSchema.indexes = lib.fixactualIndexes(backupSchema.indexes)
  })
  writeStream.write('module.exports = ')
  writeStream.write(JSON.stringify(allSchemas, null, 2))
}
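
It can be called like this, for example (assuming Node's fs module and a local output file):

const fs = require('fs')

// Illustrative usage: dump the current server schema to a requireable module
const stream = fs.createWriteStream('./mySchemas.backup.js')
backupSchemas(stream).then(() => stream.end())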

What do you think of this approach? Is there anything important it would miss?

note that if you read my script, it does NOT delete classes after the diff check, in order to avoid data loss. However, the migration strategy for fields is destructive.

Hi @Moumouls, I was looking at how to integrate existing data migration tools with buildSchema().

Whenever buildSchema decides to delete a field, if it instead renamed the backing table/collection and created a new one in its place, then buildSchema would not have to be destructive.

Data migration scripts should run after Parse buildSchema() code has created/modified the tables or collections.

This could also be done in serverStartComplete.

If buildSchema() returned a promise for the number of tables it renamed, the migration code could .then() on that promise.

If serverStartComplete accepted an async function whose promise was used to open the API to outside connections, the whole thing could look like this:

const parseServer = new ParseServer({
  allowClientClassCreation: false,
  ...otherOptions,
  
  // Parse has already been added to global
  async serverStartComplete(error) {
     const numMigrations = await Parse.buildSchema()
     if (numMigrations) await myMigrationCode()  // SQL or Mongojs to migrate data
     return true   // start accepting connections
  }
})

If the Parse JavaScript SDK could query the old data, migrations could be implemented as a stream of rows to be migrated, with migrations declared like this:

rowMigrations: {
  classname: (oldRow, newRow) => { /* modify newRow */ return true /* indicates complete */ },
  otherclass: () => {},
}

This would allow the same migration code to work for Mongo and Postgres. Adding a “version” column to the Parse class would allow for versioned updates.
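
With a version column, a declaration could look like this (still entirely hypothetical, building on the rowMigrations shape above):

rowMigrations: {
  // Only migrate rows that are still on an older schema version
  Player: (oldRow, newRow) => {
    if (oldRow.version >= 2) return true   // already migrated
    newRow.age = parseInt(oldRow.age, 10)  // e.g. String -> Number
    newRow.version = 2
    return true                            // indicates complete
  },
}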

This migration stream may be out of scope for the SDK, but it would be enabled by a Parse API to query the old tables/collections.

What do you think?