This repository has been archived by the owner on Dec 19, 2024. It is now read-only.

0.7.0 - Migrated to new StorageEngine system
Merge pull request #22 from tycrek/storage-engines
Josh Moore authored Jul 6, 2021
2 parents b725384 + 130876c commit bd176a1
Showing 10 changed files with 212 additions and 130 deletions.
1 change: 0 additions & 1 deletion MagicNumbers.json
@@ -3,7 +3,6 @@
"HTTPS": 443,
"CODE_OK": 200,
"CODE_NO_CONTENT": 204,
"CODE_BAD_REQUEST": 400,
"CODE_UNAUTHORIZED": 401,
"CODE_NOT_FOUND": 404,
"CODE_PAYLOAD_TOO_LARGE": 413,
62 changes: 56 additions & 6 deletions README.md
@@ -29,18 +29,22 @@
- ✔️ Thumbnail support
- ✔️ Basic multi-user support
- ✔️ Configurable global upload limit (per-user coming soon!)
- ✔️ Basic macOS/Linux support using other clients including [Flameshot](https://flameshot.org/) ([ass-compatible Flameshot script](https://github.com/tycrek/ass#flameshot-users-linux)) & [MagicCap](https://magiccap.me/)
- ✔️ Local storage *or* block-storage support for [Amazon S3](https://aws.amazon.com/s3/) (including [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces/))
- ✔️ Custom pluggable frontend dashboards using [Git Submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules)
- ✔️ Multiple access types
- **[ZWS](https://zws.im)**
- **Mixed-case alphanumeric**
- **Gfycat**
- **Original**
- ❌ Multiple database types
- **JSON**
- **Mongo** (soon!)
- **MySQL** (soon!)
- **PostgreSQL** (soon!)
- ✔️ Multiple storage methods using [ass StorageEngines](https://github.com/tycrek/ass-storage-engine) (JSON by default)
- **File**
- **JSON**
- **YAML** (soon!)
- **Databases**
- **Mongo** (soon!)
- **MySQL** (soon!)
- **PostgreSQL** (soon!)

### Access types

@@ -117,7 +121,7 @@ If you primarily share media on Discord, you can add these additional (optional)
| **`X-Ass-OG-Author-Url`** | URL to open when the Author is clicked |
| **`X-Ass-OG-Provider`** | Smaller text shown above the author |
| **`X-Ass-OG-Provider-Url`** | URL to open when the Provider is clicked |
| **`X-Ass-OG-Color`** | Colour shown on the left side of the embed. Must be one of `&random`, `&vibrant`, or a hex colour value (for example: `#fe3c29`). Random is a randomly generated hex value and Vibrant is sourced from the image itself |
| **`X-Ass-OG-Color`** | Colour shown on the left side of the embed. Must be one of `&random`, `&vibrant`, or a hex colour value (for example: `#fe3c29`). Random is a randomly generated hex value & Vibrant is sourced from the image itself |

#### Embed placeholders

@@ -178,6 +182,52 @@ Now you should see `My awesome dashboard!` when you navigate to `http://your-ass

**For a detailed walkthrough on developing your first frontend, [consult the wiki](https://github.com/tycrek/ass/wiki/Writing-a-custom-frontend).**

## StorageEngines

[StorageEngines](https://github.com/tycrek/ass-storage-engine) are responsible for managing your data. "Data" has two parts: an identifier & the actual data itself. With ass, the data is a JSON object representing the uploaded resource. The identifier is the unique ID in the URL returned to the user on upload.

ass aims to support these storage methods at a minimum:

- **JSON**
- **Mongo** (soon)

An ass StorageEngine implements support for one type of database (or file format, such as JSON or YAML). This lets ass server hosts pick their database of choice: all they have to do is plug in the connection/authentication details, and ass handles the rest, using the resource ID as the key.

The only storage engine ass comes with by default is **JSON**. Others will be published to [npm](https://www.npmjs.com/) and listed here. If you find (or create!) a StorageEngine you like, you can use it by installing it with `npm i <package-name>` then changing the contents of [`data.js`](https://github.com/tycrek/ass/blob/master/data.js). At this time, a modified `data.js` might look like this:

```js
/**
* Used for global data management
*/

//const { JsonStorageEngine } = require('@tycrek/ass-storage-engine');
const { CustomStorageEngine } = require('my-custom-ass-storage-engine');

//const data = new JsonStorageEngine();

// StorageEngines may take no parameters...
const data1 = new CustomStorageEngine();

// multiple parameters...
const data2 = new CustomStorageEngine('Parameters!!', 420);

// or object-based parameters, depending on what the StorageEngine dev decides on.
const data3 = new CustomStorageEngine({ key1: 'value1', key2: { key3: 44 } });

module.exports = data1;

```

As long as the StorageEngine properly implements `GET`/`PUT`/`DEL`/`HAS` StorageFunctions, replacing the file/database system is just that easy.
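As a rough illustration of that contract, here is a minimal in-memory engine sketch. It is hypothetical: the method names follow the `GET`/`PUT`/`DEL`/`HAS` contract and the `name`/`type`/`size` properties that `ass.js` reads in this commit, not necessarily the exact base-class API of `@tycrek/ass-storage-engine`.

```javascript
// Hypothetical in-memory StorageEngine sketch; consult
// @tycrek/ass-storage-engine for the real base class.
class MemoryStorageEngine {
	constructor() {
		this.name = 'Memory';
		this.type = 'memory';
		this.store = new Map();
	}

	// Resolve with the stored resource data for an ID
	get(resourceId) {
		return this.store.has(resourceId)
			? Promise.resolve(this.store.get(resourceId))
			: Promise.reject(new Error(`No resource: ${resourceId}`));
	}

	// Store (or overwrite) resource data under an ID
	put(resourceId, resourceData) {
		this.store.set(resourceId, resourceData);
		return Promise.resolve();
	}

	// Remove a resource
	del(resourceId) {
		this.store.delete(resourceId);
		return Promise.resolve();
	}

	// Resolve with whether a resource exists
	has(resourceId) {
		return Promise.resolve(this.store.has(resourceId));
	}

	// Mirrors the `data.size` read by ass.js on startup
	get size() {
		return this.store.size;
	}
}

module.exports = MemoryStorageEngine;
```

Because ass only ever awaits these promises, any backend (flat file, SQL, Mongo) can sit behind the same four calls.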

If you develop & publish a StorageEngine, feel free to [open a PR on this README](https://github.com/tycrek/ass/edit/master/README.md) to add it.

- [`npm publish` docs](https://docs.npmjs.com/cli/v7/commands/npm-publish)
- ["How to publish packages to npm (the way the industry does things)"](https://zellwk.com/blog/publish-to-npm/) ([`@tycrek/ass-storage-engine`](https://www.npmjs.com/package/@tycrek/ass-storage-engine) is published using the software this guide recommends, [np](https://github.com/sindresorhus/np))

**A wiki page on writing a custom StorageEngine is coming soon. Once complete, you can find it [here](https://github.com/tycrek/ass/wiki/Writing-a-StorageEngine).**

## Flameshot users (Linux)

Use [this script](https://github.com/tycrek/ass/blob/master/flameshot_example.sh) kindly provided by [@ToxicAven](https://github.com/ToxicAven). For the `KEY`, put your token.
2 changes: 1 addition & 1 deletion ass-x
Submodule ass-x updated from 43c808 to 7e27c2
4 changes: 2 additions & 2 deletions ass.js
@@ -34,7 +34,7 @@ const ROUTERS = {
// Read users and data
const users = require('./auth');
const data = require('./data');
log('Users & data read from filesystem');
log(`StorageEngine: ${data.name} (${data.type})`);
//#endregion

// Create thumbnails directory
@@ -71,4 +71,4 @@ app.use(([err, , res,]) => {
});

// Host the server
app.listen(port, host, () => log(`Server started on [${host}:${port}]\nAuthorized users: ${Object.keys(users).length}\nAvailable files: ${Object.keys(data).length}`));
app.listen(port, host, () => log(`Server started on [${host}:${port}]\nAuthorized users: ${Object.keys(users).length}\nAvailable files: ${data.size}`));
12 changes: 2 additions & 10 deletions data.js
@@ -2,14 +2,6 @@
* Used for global data management
*/

const fs = require('fs-extra');
const { log, path } = require('./utils');

// Make sure data.json exists
if (!fs.existsSync(path('data.json'))) {
fs.writeJsonSync(path('data.json'), {}, { spaces: 4 });
log('File [data.json] created');
} else log('File [data.json] exists');

const data = require('./data.json');
const { JsonStorageEngine } = require('@tycrek/ass-storage-engine');
const data = new JsonStorageEngine();
module.exports = data;
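With `data.js` now exporting a StorageEngine, consumers switch from direct object indexing to awaiting the StorageFunctions, as `routers/resource.js` does in this commit. A sketch of that migration, using a Map-backed stub (hypothetical resource fields) in place of the real `JsonStorageEngine` so it runs standalone:

```javascript
// Stand-in for the exported JsonStorageEngine instance; the has/get
// signatures mirror how routers/resource.js calls them in this commit.
const data = {
	store: new Map([['abc123', { originalname: 'cat.png', mimetype: 'image/png' }]]),
	has(id) { return Promise.resolve(this.store.has(id)); },
	get(id) { return Promise.resolve(this.store.get(id)); },
};

// Old style (0.6.0): synchronous object lookup
// const fileData = data[resourceId];

// New style (0.7.0): promise chain with a 404 fallback
function fetchResource(resourceId) {
	return data.has(resourceId)
		.then((has) => has
			? data.get(resourceId)
			: Promise.reject(new Error('404: Not Found')));
}

fetchResource('abc123').then((fileData) => console.log(fileData.mimetype)); // image/png
```

Rejections bubble to Express via `.catch(next)` in the routers, replacing the old inline `sendStatus` checks.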
54 changes: 52 additions & 2 deletions package-lock.json


3 changes: 2 additions & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "ass",
"version": "0.6.0",
"version": "0.7.0",
"description": "The superior self-hosted ShareX server",
"main": "ass.js",
"engines": {
@@ -34,6 +34,7 @@
"url": "https://patreon.com/tycrek"
},
"dependencies": {
"@tycrek/ass-storage-engine": "0.2.5",
"any-shell-escape": "^0.1.1",
"aws-sdk": "^2.930.0",
"check-node-version": "^4.1.0",
102 changes: 47 additions & 55 deletions routers/resource.js
@@ -3,8 +3,8 @@ const escape = require('escape-html');
const fetch = require('node-fetch');
const { deleteS3 } = require('../storage');
const { diskFilePath, s3enabled } = require('../config.json');
const { path, saveData, log, getTrueHttp, getTrueDomain, formatBytes, formatTimestamp, getS3url, getDirectUrl, getSafeExt, getResourceColor, replaceholder } = require('../utils');
const { CODE_BAD_REQUEST, CODE_UNAUTHORIZED, CODE_NOT_FOUND, } = require('../MagicNumbers.json');
const { path, log, getTrueHttp, getTrueDomain, formatBytes, formatTimestamp, getS3url, getDirectUrl, getSafeExt, getResourceColor, replaceholder } = require('../utils');
const { CODE_UNAUTHORIZED, CODE_NOT_FOUND, } = require('../MagicNumbers.json');
const data = require('../data');
const users = require('../auth');

@@ -14,16 +14,17 @@ const router = express.Router();
// Middleware for parsing the resource ID and handling 404
router.use((req, res, next) => {
// Parse the resource ID
req.ass = { resourceId: escape(req.resourceId).split('.')[0] };
req.ass = { resourceId: escape(req.resourceId || '').split('.')[0] };

// If the ID is invalid, return 404. Otherwise, continue normally // skipcq: JS-0093
(!req.ass.resourceId || !data[req.ass.resourceId]) ? res.sendStatus(CODE_NOT_FOUND) : next();
// If the ID is invalid, return 404. Otherwise, continue normally
data.has(req.ass.resourceId)
.then((has) => has ? next() : res.sendStatus(CODE_NOT_FOUND)) // skipcq: JS-0229
.catch(next);
});

// View file
router.get('/', (req, res) => {
router.get('/', (req, res, next) => data.get(req.ass.resourceId).then((fileData) => {
const { resourceId } = req.ass;
const fileData = data[resourceId];
const isVideo = fileData.mimetype.includes('video');

// Build OpenGraph meta tags
@@ -47,15 +48,12 @@
oembedUrl: `${getTrueHttp()}${getTrueDomain()}/${resourceId}/oembed`,
ogtype: isVideo ? 'video.other' : 'image',
urlType: `og:${isVideo ? 'video' : 'image'}`,
opengraph: replaceholder(ogs.join('\n'), fileData)
opengraph: replaceholder(ogs.join('\n'), fileData.size, fileData.timestamp, fileData.originalname)
});
});
}).catch(next));

// Direct resource
router.get('/direct*', (req, res) => {
const { resourceId } = req.ass;
const fileData = data[resourceId];

router.get('/direct*', (req, res, next) => data.get(req.ass.resourceId).then((fileData) => {
// Send file as an attachment for downloads
if (req.query.download)
res.header('Content-Disposition', `attachment; filename="${fileData.originalname}"`);
@@ -73,58 +71,52 @@
};

uploaders[s3enabled ? 's3' : 'local']();
});
}).catch(next));

// Thumbnail response
router.get('/thumbnail', (req, res) => {
const { resourceId } = req.ass;

// Read the file and send it to the client
fs.readFile(path(diskFilePath, 'thumbnails/', data[resourceId].thumbnail))
router.get('/thumbnail', (req, res, next) =>
data.get(req.ass.resourceId)
.then(({ thumbnail }) => fs.readFile(path(diskFilePath, 'thumbnails/', thumbnail)))
.then((fileData) => res.type('jpg').send(fileData))
.catch(console.error);
});
.catch(next));

// oEmbed response for clickable authors/providers
// https://oembed.com/
// https://old.reddit.com/r/discordapp/comments/82p8i6/a_basic_tutorial_on_how_to_get_the_most_out_of/
router.get('/oembed', (req, res) => {
const { resourceId } = req.ass;

// Build the oEmbed object & send the response
const { opengraph, mimetype } = data[resourceId];
res.type('json').send({
version: '1.0',
type: mimetype.includes('video') ? 'video' : 'photo',
author_url: opengraph.authorUrl,
provider_url: opengraph.providerUrl,
author_name: replaceholder(opengraph.author || '', data[resourceId]),
provider_name: replaceholder(opengraph.provider || '', data[resourceId])
});
});
router.get('/oembed', (req, res, next) =>
data.get(req.ass.resourceId)
.then(({ opengraph, mimetype, size, timestamp, originalname }) =>
res.type('json').send({
version: '1.0',
type: mimetype.includes('video') ? 'video' : 'photo',
author_url: opengraph.authorUrl,
provider_url: opengraph.providerUrl,
author_name: replaceholder(opengraph.author || '', size, timestamp, originalname),
provider_name: replaceholder(opengraph.provider || '', size, timestamp, originalname)
}))
.catch(next));

// Delete file
router.get('/delete/:deleteId', (req, res) => {
const { resourceId } = req.ass;
const deleteId = escape(req.params.deleteId);
const fileData = data[resourceId];

// If the delete ID doesn't match, don't delete the file
if (deleteId !== fileData.deleteId) return res.sendStatus(CODE_UNAUTHORIZED);

// If the ID is invalid, return 400 because we are unable to process the resource
if (!resourceId || !fileData) return res.sendStatus(CODE_BAD_REQUEST);

log(`Deleted: ${fileData.originalname} (${fileData.mimetype})`);

// Save the file information
Promise.all([s3enabled ? deleteS3(fileData) : fs.rmSync(path(fileData.path)), fs.rmSync(path(diskFilePath, 'thumbnails/', fileData.thumbnail))])
.then(() => {
delete data[resourceId];
saveData(data);
res.type('text').send('File has been deleted!');
router.get('/delete/:deleteId', (req, res, next) => {
let oldName, oldType; // skipcq: JS-0119
data.get(req.ass.resourceId)
.then((fileData) => {
// Extract info for logs
oldName = fileData.originalname;
oldType = fileData.mimetype;

// Clean deleteId
const deleteId = escape(req.params.deleteId);

// If the delete ID doesn't match, don't delete the file
if (deleteId !== fileData.deleteId) return res.sendStatus(CODE_UNAUTHORIZED);

// Save the file information
return Promise.all([s3enabled ? deleteS3(fileData) : fs.rmSync(path(fileData.path)), fs.rmSync(path(diskFilePath, 'thumbnails/', fileData.thumbnail))]);
})
.catch(console.error);
.then(() => data.del(req.ass.resourceId))
.then(() => (log(`Deleted: ${oldName} (${oldType})`), res.type('text').send('File has been deleted!'))) // skipcq: JS-0090
.catch(next);
});

module.exports = router;
