-
Notifications
You must be signed in to change notification settings - Fork 0
/
params.json
6 lines (6 loc) · 11.7 KB
/
params.json
1
2
3
4
5
6
{
"name": "Nodestream",
"tagline": "Streaming library for binary data transfers",
"body": "[npm-badge]: https://badge.fury.io/js/nodestream.svg\r\n[npm-url]: https://npmjs.org/package/nodestream\r\n[travis-badge]: https://travis-ci.org/nodestream/nodestream.svg\r\n[travis-url]: https://travis-ci.org/nodestream/nodestream\r\n[coveralls-badge]: https://img.shields.io/coveralls/nodestream/nodestream.svg\r\n[coveralls-url]: https://coveralls.io/r/nodestream/nodestream\r\n[inch-badge]: http://inch-ci.org/github/nodestream/nodestream.svg\r\n[inch-url]: http://inch-ci.org/github/nodestream/nodestream\r\n[make-badge]: https://img.shields.io/badge/built%20with-GNU%20Make-brightgreen.svg\r\n[ns-fs]: https://github.com/nodestream/nodestream-filesystem\r\n[fs-icon]: https://cloud.githubusercontent.com/assets/3058150/13901081/d81b824c-ee17-11e5-8fbe-40eff40646f7.png\r\n[ns-s3]: https://github.com/nodestream/nodestream-s3\r\n[s3-icon]: https://cloud.githubusercontent.com/assets/3058150/13901098/80692616-ee18-11e5-98c1-91c35b936c51.png\r\n[ns-gridfs]: https://github.com/nodestream/nodestream-gridfs\r\n[gridfs-icon]: https://cloud.githubusercontent.com/assets/3058150/13901696/59652146-ee2c-11e5-8c7e-3cba5ba9854c.png\r\n[ns-gcs]: https://github.com/nodestream/nodestream-gcs\r\n[gcs-icon]: https://cloud.githubusercontent.com/assets/3058150/13907413/bfb554e0-eeed-11e5-9e51-ce490fad8abd.png\r\n[ns-checksum]: https://github.com/nodestream/nodestream-transform-checksum\r\n[ns-compress]: https://github.com/nodestream/nodestream-transform-compress\r\n\r\n[![NPM Version][npm-badge]][npm-url]\r\n[![Build Status][travis-badge]][travis-url]\r\n[![Coverage Status][coveralls-badge]][coveralls-url]\r\n[![Documentation Status][inch-badge]][inch-url]\r\n![Built with GNU Make][make-badge]\r\n\r\n## Description\r\n\r\nThis library aims to provide a unified API for all the major storage systems out there (filesystem, AWS S3, Google Cloud Storage etc.). It also provides an easy way to manipulate data streams as they are being uploaded/downloaded from those storage systems (compression/ checksum calculation/encryption etc.).\r\n\r\n### Use cases\r\n\r\n- Single API to rule them all\r\n- Easy way to transform incoming/outgoing data\r\n- Work with filesystem storage during development, AWS S3 in production without changing code\r\n- *Insert your idea here*\r\n\r\n## Available adapters\r\n\r\n| [![S3][s3-icon]][ns-s3] | [![GridFS][gridfs-icon]][ns-gridfs] | [![GCS][gcs-icon]][ns-gcs] | [![Filesystem][fs-icon]][ns-fs] |\r\n|:-----------------------:|:-----------------------------------:|:--------------------------:|:-------------------------------:|\r\n| Amazon S3 | GridFS (WIP) | Google Cloud Storage | Local Filesystem |\r\n\r\n\r\n## Available transforms\r\n\r\n> See [Pipelines and Transforms](#pipelines-and-transforms) section for more info.\r\n\r\n| [checksum][ns-checksum] | [compress][ns-compress] | progress (WIP) | crypto (WIP) |\r\n|:-----------------------:|:-----------------------:|:----------------:|:----------------------:|\r\n| Checksum Calculator | Stream (de)compressor | Progress monitor | Stream (en/de)cryption |\r\n\r\n## Usage\r\n\r\n### Installation\r\n\r\nThe first step is to install nodestream into your project:\r\n\r\n`npm install --save nodestream`\r\n\r\nThe next thing is to decide which *adapter* you want to use. An adapter is an interface for nodestream to be able to interact with a particular storage system. Let's use local filesystem for a start:\r\n\r\n`npm install --save nodestream-filesystem`\r\n\r\n### Configuration\r\n\r\nLet's create and configure a nodestream instance with which your application can then interact:\r\n\r\n```js\r\n// Require the main Nodestream class\r\nconst Nodestream = require('nodestream')\r\nconst nodestream = new Nodestream({\r\n // This tells nodestream which storage system it should interact with\r\n // Under the hood, it will try to require `nodestream-filesystem` module\r\n adapter: 'filesystem',\r\n // This object is always specific to your adapter of choice - always check\r\n // the documentation for that adapter for available options\r\n config: {\r\n // The `filesystem` adapter requires a `root` configuration option, so let's provide one\r\n root: [__dirname, '.storage']\r\n }\r\n})\r\n```\r\n\r\nGreat! At this point, nodestream is ready to transfer some bytes!\r\n\r\n### Actions\r\n\r\n#### Uploading\r\n\r\nYou can upload any kind of readable stream. Nodestream does not care where that stream comes from, whether it's an http upload or a file from your filesystem or something totally different.\r\n\r\nFor this example, we will upload a file from our filesystem.\r\n\r\n> We will be uploading the file to our local filesystem as well as reading it from the same filesystem. Normally you would probably use a source different from the target storage, but Nodestream does not really care.\r\n\r\n```js\r\nconst fs = require('fs')\r\n// This is the file we will upload - create a readable stream of that file\r\nconst profilePic = fs.createReadStream('/users/me/pictures/awesome-pic.png')\r\n\r\nnodestream.upload(profilePic, {\r\n // directory and name are supported by all storage adapters, but each\r\n // adapter might have additional options you can use\r\n directory: 'avatars',\r\n name: 'user-123.png'\r\n})\r\n.then(results => {\r\n // results can contain several properties, but the most interesting\r\n // and always-present is `location` - you should definitely save this\r\n // somewhere, you will need it to retrieve this file later!\r\n console.log(results.location)\r\n})\r\n.catch(err => {\r\n // U-oh, something blew up 😱\r\n})\r\n```\r\n\r\nCongratulations, you just uploaded your first file!\r\n\r\n#### Downloading\r\n\r\nDownloading a file is quite straight-forward - all you need is the file's location as returned by the `upload()` method and a destination stream to which you want to send the data. This can be any valid writable stream. Again, Nodestream does not care where you are sending the bytes, be it local filesystem, an http response or even a different Nodestream instance (ie. S3 to GridFS transfer).\r\n\r\n```js\r\n// Let's create a destination for the download\r\nconst fs = require('fs')\r\nconst destination = fs.createWriteStream('/users/me/downloads/picture.png')\r\n\r\n// We are hardcoding the location here, but you will probably want to\r\n// retrieve the file's location from a database\r\nnodestream.download('avatars/user-123.png', destination)\r\n.then(() => {\r\n // All good, destination received all the data!\r\n})\r\n.catch(err => {\r\n // Oh well...\r\n})\r\n```\r\n\r\n#### Removing\r\n\r\nJust pass the file's location to the `.remove()` method.\r\n\r\n```js\r\nnodestream.remove('avatars/user-123.png')\r\n.then(location => {\r\n // The file at this location has just been removed!\r\n})\r\n.catch(err => {\r\n // Oh no!\r\n})\r\n```\r\n\r\n## Pipelines and Transforms\r\n\r\nNodestream supports two features which are meant to be used together - pipelines and transforms.\r\n\r\n- **Transform**: A plugin which takes an input stream and produces an output stream\r\n- **Pipeline**: A re-usable, ordered collection of transforms\r\n\r\nThe real power of pipelines is that you only have to create a single pipeline, tell it which transforms it should use and then you just keep sending files to or retrieving files from it - all files will be processed in exactly the same way.\r\n\r\nHere are some ideas what a transform can be used for. With pipelines, you can combine them to your heart's liking:\r\n\r\n- Calculating checksums\r\n- Compressing/decompressing data\r\n- Modifying the data completely, ie. appending headers/footers and whatnot\r\n\r\n### Registering a transform\r\n\r\nAll transforms must be first registered with your Nodestream instance before you can use them in a pipeline. Registering is easy and is generally recommended to be done immediately after your application is started, because requiring a module is a synchronous, blocking operation, so you want to get it done before you start doing something important.\r\n\r\nOnce you configure your Nodestream instance, you can register a transform using the `.registerTransform()` function.\r\n\r\n```js\r\n// Let's register a compression transform! The following will try to require\r\n// `nodestream-transform-compress` package.\r\nnodestream.registerTransform('compress')\r\n\r\n// You can also register an actual implementation of the transform!\r\nconst compress = require('nodestream-transform-compress')\r\nnodestream.registerTransform(compress)\r\n```\r\n\r\n### Using pipelines\r\n\r\nTo use a pipeline, you must first create one! Once you have your pipeline, you can then go on and tell it to use any of the registered transforms. Pipelines are reusable, so the general practice is to create one pipeline and use it for all uploads/downloads.\r\n\r\nYou may want to create multiple pipelines per project to accommodate different processing needs for your files, ie. you might have one pipeline for image uploads (with a transform plugin to calculate checksums and one to crop the images) and another pipeline for other files (with just the checksum transform). Any combination can be achieved.\r\n\r\n```js\r\n// Let's create our first pipeline\r\nconst pipeline = nodestream.pipeline()\r\n\r\n// Now, we can tell the pipeline to use any of the registered transforms\r\n// The second parameter is specific to each transform so always check the\r\n// transform's docs to see what you can set\r\npipeline\r\n .use('checksum', { algorithm: 'md5' })\r\n .use('compress', { algorithm: 'gzip' })\r\n\r\n// You can use a single pipeline for multiple file uploads/downloads\r\n// Aaand, you can also pass per-file, transform-specific options here\r\npipeline.upload(file, { name: 'pic.png', compress: { mode: 'compress' } })\r\n```\r\n\r\n> **WARNING!**\r\n>\r\n> The **order** in which transforms are added to a pipeline using `.use()` **matters!** Transforms are applied to the stream in the order they were added. For example, if you first add a checksum calculation transform and then a compress transform, the checksum will be calculated from the uncompressed data. Switching the order would cause the checksum to be calculated from the compressed data. There are use cases for both situations, so the choice of ordering is completely up to you.\r\n\r\n\r\n### Accessing transform results\r\n\r\nYou might have noticed that Nodestream returns a Promise which is resolved with a `results` object. In addition to the `location` property, this object also contains results of all applied transformations (if there are any).\r\n\r\nEach transform declares its \"identity\" - a string which will be used as a \"scope\" to publish its results or to provide a mechanism to customise the transform's options. Check each of the transforms' documentation to learn what is its `identity` string.\r\n\r\nFor the `checksum` transform, its identity is, surprisingly, \"checksum\". When used, the calculated checksum can be obtained as follows:\r\n\r\n```js\r\nnodestream\r\n .pipeline()\r\n .use('checksum')\r\n .upload(file, { name: 'pic.png' })\r\n .then(results => {\r\n // The `results` object will have a \"checksum\" property, because that's\r\n // the transform's identity\r\n console.log(results.checksum.value)\r\n })\r\n```\r\n\r\nThere is no limit to the amount of transforms which can be registered per Nodestream instance, although there are some practical limitations which restrict you to only one particular transform type per Nodestream instance (ie. only one checksum transform with only one compress transform).\r\n\r\n## License\r\n\r\nThis software is licensed under the **BSD-3-Clause License**. See the [LICENSE](LICENSE) file for more information.\r\n",
"note": "Don't delete this file! It's used internally to help with page regeneration."
}