fs.chunks in MongoDB eating too much space?

I’m not much of an expert with MongoDB, but it seems that when GridFS is used for storing files, an fs.chunks collection is created. However, it’s eating too much space in the db. I’m storing some pictures for profile pictures…
fs.files is small compared to fs.chunks (6 MB for fs.files versus 700 MB for fs.chunks). Any idea how to reclaim this much-needed space? Is it safe to delete fs.chunks?
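For reference, the sizes above can be inspected from the mongo shell (this sketch assumes the default `fs` GridFS prefix):

```javascript
// mongo shell sketch, assuming the default "fs" GridFS prefix
db.fs.files.stats().size         // metadata size in bytes
db.fs.chunks.stats().size        // total file data in bytes
db.fs.chunks.stats().storageSize // bytes actually allocated on disk
db.fs.chunks.count()             // number of chunks (255 KB each by default)
```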

Although GridFS is very convenient for development, it’s not recommended for production. Try to use S3 or GCS instead. There are easy-to-use adapters that you can find here. S3 and GCS storage are much faster and cheaper than storing files in your MongoDB database.
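For example, with the S3 files adapter the switch is mostly a config change. A minimal sketch, assuming `@parse/s3-files-adapter` — the keys, bucket, and app credentials below are placeholders:

```javascript
// Sketch of a Parse Server config using the S3 files adapter.
// All credentials and names below are placeholders.
var ParseServer = require('parse-server').ParseServer;
var S3Adapter = require('@parse/s3-files-adapter');

var api = new ParseServer({
  databaseURI: 'mongodb://localhost:27017/dev',
  appId: 'myAppId',
  masterKey: 'myMasterKey',
  filesAdapter: new S3Adapter(
    'S3_ACCESS_KEY',
    'S3_SECRET_KEY',
    'my-bucket-name',
    {directAccess: true} // serve files straight from S3 instead of through Parse Server
  )
});
```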


Is it a bird? Is it a plane? No, it’s davimacedo! xD
Thank you for your precious help!
I switched successfully to S3 and now files are stored there. Will fs.chunks and fs.files no longer be used? Can I delete them safely? I also noticed that the old saved files are not found: the app now looks for them in AWS S3, but they aren’t actually there…


You first need to migrate your files from GridFS to S3. Then you can delete the MongoDB fs.files and fs.chunks collections. Is your app already in production? About how many files do you think you have already stored in GridFS?
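Once every file has been verified to load from S3, the two collections can be dropped from the mongo shell. This is irreversible, so back up first:

```javascript
// mongo shell — irreversible, run only after the migration is verified
db.fs.files.drop()
db.fs.chunks.drop()
```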

Yes, it is, and probably around 1200-1300 profile pictures. (I tested it on a development server.)
I wonder how I can migrate them… I can’t do it manually, unfortunately.

I once used the script below. No warranties :slight_smile:

'use strict';

var AWS = require('aws-sdk');
var Promise = require('bluebird');
var mime = require('mime');
var mongodb = require('mongodb');
var MongoClient = mongodb.MongoClient;
var GridStore = mongodb.GridStore;

// setup your mongo connection
var MONGO_URI = 'yourmongourihere';

// setup AWS credentials
AWS.config.update({
  accessKeyId: 'Your AWS access key id',
  secretAccessKey: 'Your AWS secret access key'
});

// setup AWS clients
var s3 = new AWS.S3({
  params: {Bucket: 'yourbucketname'},
  region: 'us-east-1' // change if your bucket is not in North Virginia
});

/*
 * High-level overview of the migration process:
 *    list app files:
 *       ** concurrency: 10 files **
 *       read file content
 *       upload file to S3
 */

var db;

connectToMongoDB()
  .then(migrateAppFiles)
  .then(function () {
    return closeConnections();
  })
  .catch(function (err) {
    logError(err);
    return closeConnections();
  });

function connectToMongoDB() {
  console.log('Opening connection to MongoDB');
  return MongoClient.connect(MONGO_URI)
    .then(function (database) {
      db = database;
    });
}

function migrateAppFiles() {
  return listAppFiles().then(function (filenames) {
    console.log('Copying ' + filenames.length + ' files');
    // migrate 10 files at a time
    return Promise.map(filenames, migrateFile, {concurrency: 10});
  });
}

function listAppFiles() {
  return GridStore.list(db);
}

function migrateFile(filename) {
  console.log('Copying file ' + filename);
  return readFileFromMongo(filename)
    .then(function (data) {
      return writeFileToS3(filename, data);
    });
}

function readFileFromMongo(filename) {
  return GridStore.read(db, filename);
}

function writeFileToS3(filename, data) {
  return new Promise(function (resolve, reject) {
    var params = {
      ACL: 'public-read',
      Key: filename,
      Body: data,
      ContentType: mime.lookup(filename)
    };
    s3.upload(params, function (err, data) {
      if (err) {
        reject(err);
      } else {
        resolve(data);
      }
    });
  });
}

function closeConnections() {
  console.log('Closing connections');
  return db.close();
}

function logError(err) {
  console.error(err);
}
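The `Promise.map(..., {concurrency: 10})` call in the script comes from bluebird. For anyone avoiding the extra dependency, the same "10 files at a time" idea can be sketched with plain promises (`mapWithConcurrency` is an illustrative name, not part of any library):

```javascript
// Plain-Promise sketch of bluebird's Promise.map with a concurrency limit.
// mapWithConcurrency is an illustrative name, not part of any library.
function mapWithConcurrency(items, fn, limit) {
  var results = new Array(items.length);
  var next = 0;
  function worker() {
    if (next >= items.length) return Promise.resolve();
    var i = next++; // claim the next index before the async call
    return Promise.resolve(fn(items[i])).then(function (value) {
      results[i] = value;
      return worker(); // this worker picks up another item when done
    });
  }
  var workers = [];
  for (var w = 0; w < Math.min(limit, items.length); w++) {
    workers.push(worker());
  }
  return Promise.all(workers).then(function () { return results; });
}

// e.g. mapWithConcurrency(filenames, migrateFile, 10)
```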

Woah! Thank you so much! What a legend!
