Foodle

A licensed kitchen rental service.

Setup Development

Server

  1. cd server
  2. Add a .env.local file to the root of /server; all the environment variables will be sent to you by a teammate over a private Slack message.
  3. yarn to install deps.
  4. yarn db:up to start the database
  5. yarn prisma:migrate:deploy to apply the generated migrations from schema.prisma onto your database
  6. yarn prisma:generate to generate the prisma client
  7. yarn nexus:watch to generate the graphql.schema
  8. yarn dev to start the server in development
  9. navigate to localhost:5000/graphql for the Apollo interface, where you can manually test queries and mutations
  10. yarn db:seed to seed the database.

Logging

The default setting is 'none'. Setting the LOG_LEVEL environment variable to info enables basic logging for the most important tasks; setting it to debug logs SQL queries as well (see the sketch below).
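
As an illustration only (the exact wiring in the server code may differ), LOG_LEVEL could be mapped onto Prisma's log option roughly like this:

```ts
// Hypothetical sketch: mapping LOG_LEVEL onto Prisma's log option.
import { PrismaClient, Prisma } from '@prisma/client';

const level = process.env.LOG_LEVEL ?? 'none';

// 'info'  -> basic logging for the most important tasks
// 'debug' -> additionally log every SQL query
const log: Prisma.LogLevel[] =
  level === 'debug' ? ['query', 'info', 'warn', 'error']
  : level === 'info' ? ['info', 'warn', 'error']
  : [];

export const prisma = new PrismaClient({ log });
```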

Client

  1. Ensure the dev server is running
  2. cd client
  3. yarn to install deps.
  4. Add a .env.local file to the root of /client; all the environment variables will be sent to you by a teammate over a private Slack message.
  5. yarn codegen:generate
  6. yarn next:dev to run the file watcher. The front end should be accessible at localhost:3000. NOTE: when you change front-end queries or mutations to the backend, you need to manually run yarn codegen:generate (step 5) again.

Api Tests

  1. Important: the dev server needs to be stopped.
  2. yarn test:api to run the API tests

Frontend/ E2E - Tests

Frontend and end-to-end tests are currently being developed in the branch "frontend-tests" and will be merged to master shortly.

Add the NEXT_PUBLIC_GOOGLE_REFRESH_TOKEN environment variable to run these. Run yarn cy:run-only to run the tests after starting both the client and the server.

Foodle's Architecture

[Diagram: Foodle architecture]

Repository Structure

  • This repository has a monolithic architecture
  • The web pages can be found under /pages and the reusable components for these pages under /components
  • Global SCSS styles can be found under /styles
  • The Prisma schema and migrations made from it can be found under /prisma
  • The Nexus Generated GraphQL schema can be found under server/generated/schema.graphql
  • Code definitions for GraphQL queries, mutations and types can be found under server/graphql/types
  • Authentication relevant functions can be found under server/passport.ts as well as server/index.ts and utils/forgeJWT
  • AWS-SDK: Currently, AWS S3 CRUD functions for images live under pages/api and are called in the Step4 (and related) components of the Create A Listing flow. (This currently resides in the feat/s3 branch.)

Server Architecture

[Diagram: Foodle server architecture]

Server Tech Stack

  • Prisma: An Object-Relational Mapper that migrates changes in its schema to the SQL database schema on command.
  • NexusJS: A schema generator for GraphQL APIs. It provides type definitions for the GraphQL schema using the code-first approach. On top of that, it offers a Prisma plugin with two APIs for integrating Prisma into Nexus: one to project fields from models defined in the Prisma schema into the GraphQL API, and one to build GraphQL root fields that allow the client to query and mutate data directly on the PostgreSQL database.
  • Apollo-Express Server: A GraphQL server handling CRUD operations called from the frontend. In addition, it serves REST requests for our Google OAuth authentication process, which goes through PassportJS middleware (see the sketch after this list).
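
A minimal sketch of how these pieces can fit together, assuming a Nexus-generated schema served by apollo-server-express on port 5000; the import path and option details are assumptions, not the exact server/index.ts:

```ts
import express from 'express';
import { ApolloServer } from 'apollo-server-express';
import { makeSchema } from 'nexus';
import * as types from './graphql/types'; // assumed path to the Nexus type definitions

async function start() {
  const app = express();
  // REST routes (e.g. the PassportJS Google OAuth callbacks) would be mounted on `app` here.

  // Build the GraphQL schema from the code-first Nexus type definitions.
  const schema = makeSchema({ types });
  const server = new ApolloServer({ schema });

  await server.start();
  server.applyMiddleware({ app, path: '/graphql' });

  app.listen(5000, () =>
    console.log('GraphQL available at http://localhost:5000/graphql')
  );
}

start();
```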

Client Tech Stack

  • Next.js + React: React and the Next.js framework built on top of it are used for routing, state management, server-side rendering, CRUD requests from the Next API to our AWS S3 bucket, and much more.
  • GraphQL CodeGen: Uses raw GraphQL queries to generate types (for our TypeScript code definitions) and React hooks to query our server (see the sketch after this list).
  • Apollo Client: Also used to query our Server (but will be removed soon since it does not offer the same type safety as Codegen hooks)
  • SCSS Modules: For component level styles
  • 7-1 SCSS Architecture: For global styles and utility classes
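
As a rough illustration of the CodeGen workflow, a component would consume a generated hook roughly like this; usePropertiesQuery and its fields are hypothetical names standing in for whatever yarn codegen:generate actually produces:

```tsx
// Illustrative only: the hook and field names are made up, not the real generated code.
import { usePropertiesQuery } from '../generated/graphql';

export function PropertyList() {
  // Fully typed hook generated from a raw .graphql query document.
  const { data, loading, error } = usePropertiesQuery();

  if (loading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong</p>;

  return (
    <ul>
      {data?.properties.map((p) => (
        <li key={p.id}>{p.title}</li>
      ))}
    </ul>
  );
}
```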

Deployment

The client is currently deployed as a Next.js application on Vercel. The backend is deployed on Heroku and the PostgreSQL database is deployed to Render. A continuous integration pipeline on the master branch is implemented with Vercel. On every new pull request, Vercel provides a deployed preview for testing before merging to master.

API Design

Code-first approach with Nexus. Nexus enables writing both the schema and the resolver logic in the same place, using TypeScript. The GraphQL schema is then programmatically generated based on the types defined with Nexus. This approach comes with some benefits compared to the schema-first approach (see the sketch after the list below):

  • Resolver logic and type definitions are not only in one place but also written in the same language. As the schema is auto-generated, we don't have to constantly switch between SDL and TypeScript.
  • More flexibility during development while the schema still grows in complexity and size.
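
A short sketch of the code-first style described above; the Property fields and the context shape are illustrative, not copied from server/graphql/types:

```ts
import { objectType, queryField, list } from 'nexus';

// Object type defined in TypeScript; the SDL in server/generated/schema.graphql
// is generated from definitions like this one.
export const Property = objectType({
  name: 'Property',
  definition(t) {
    t.nonNull.string('id');
    t.nonNull.string('title');
    t.float('pricePerHour'); // illustrative field, not the real model
  },
});

// Type definition and resolver logic live side by side.
export const PropertiesQuery = queryField('properties', {
  type: list('Property'),
  resolve: (_root, _args, ctx) => ctx.prisma.property.findMany(),
});
```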

[Diagram: Foodle API design]

DB-Schema

The Prisma schema file (schema.prisma) is the main configuration file for Prisma. It holds the following configurations:

  • Data sources: We defined a PostgreSQL datasource.
  • Generators: When running prisma generate a typesafe Prisma JavaScript Client (typesafe ORM) is generated.
  • Data model definitions

The following design decisions have been taken:

  • When a listing is created, a property and a propertySlot get saved to the db. For every concrete date of a propertySlot, a concrete DaySlot is created and saved.
  • If a daySlot is not (yet) related to a specific booking, the daySlot is still available. Once the bookingId of a daySlot is set, the DaySlot is no longer available (see the sketch after this list).
  • We allow users to book only part of a daySlot. Therefore, when creating a booking and saving the bookingId to a daySlot, the booked time also needs to be set in order to retrace the price of the booking.
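
A minimal sketch of this availability rule against the generated Prisma client, assuming the model and field names described above (daySlot, propertySlotId, bookingId); the real service code may differ:

```ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

// A daySlot counts as available as long as no booking has claimed it,
// i.e. its bookingId is still null.
async function findAvailableDaySlots(propertySlotId: string) {
  return prisma.daySlot.findMany({
    where: { propertySlotId, bookingId: null },
  });
}
```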

[Diagram: Foodle DB schema]

Authentication

Currently, authentication is done through Google's OAuth process, facilitated by PassportJS and some ExpressJS routes.

[Diagram: Foodle authentication flow]

This flow looks roughly as above (from The Net Ninja), except that we have a PostgreSQL database instead of a MongoDB NoSQL database. Any protected NextJS route checks for a valid JWT cookie and redirects users to the home page if they are not authenticated, roughly as sketched below.
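
A minimal sketch of such a check in getServerSideProps, assuming the JWT lives in a cookie named jwt and is verified with a JWT_SECRET environment variable; the actual cookie name and helpers (utils/forgeJWT) may differ:

```ts
import { GetServerSideProps } from 'next';
import jwt from 'jsonwebtoken';

export const getServerSideProps: GetServerSideProps = async ({ req }) => {
  const token = req.cookies['jwt']; // cookie name is an assumption

  try {
    jwt.verify(token ?? '', process.env.JWT_SECRET as string);
  } catch {
    // No valid JWT: redirect unauthenticated users to the home page.
    return { redirect: { destination: '/', permanent: false } };
  }

  return { props: {} };
};
```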

Threat Model (Alex)

[Diagram: Foodle threat model]

Foodle's Security Protections:

  • Google OAuth login with PassportJs
  • Input Validation in Backend Requests
  • Check for JWT on several Next.Js pages
  • Security Policy for AWS S3 bucket
  • Added CSP for NextJS, ExpressJS
  • Added security headers to NextJS and Express (see the sketch after this list)
  • Made cookies enforce HTTPS and SameSite=Strict
  • Turned off introspection for Apollo Server in production and added csrfPrevention and a CORS config to it
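
As an example of the NextJS side, security headers and a CSP can be attached via the headers() hook in next.config.js; the concrete header list and policy used in the repository may differ:

```js
// next.config.js — illustrative excerpt, not the repository's actual config.
module.exports = {
  async headers() {
    return [
      {
        source: '/(.*)', // apply to every route
        headers: [
          { key: 'Content-Security-Policy', value: "default-src 'self'" },
          { key: 'X-Frame-Options', value: 'DENY' },
          { key: 'X-Content-Type-Options', value: 'nosniff' },
        ],
      },
    ];
  },
};
```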

Deployment

This section covers everything important to know about the deployment.

Infrastructure:

  • Provider: Linode, Vercel
  • Products used (Linode): Shared Compute, Managed Postgresql DB
  • Products used (Vercel): Standard offering

Files & Folders:

A large chunk of the deployment is automated; in fact, everything can run fully automated as long as it is not set up from scratch again. To enable this we use the Vercel integration with GitHub, Terraform for deployment, and our GitHub Actions workflows.

.github/workflows contains all workflows

  • .github/workflows/deploy-backend: Responsible for deploying our backend as a container on the Linode instance
  • .github/workflows/terraform-plan: Shows Terraform changes which might happen if you edit the Terraform files
  • .deployConfig: Contains the docker-compose file which is used to deploy our backend + Traefik, a proxy which enables SSL without much config
  • server/Dockerfile: Contains the Dockerfile to build the image for the server
  • terraform/*: Contains all Terraform files to provision the required infrastructure
  • terraform/linode-compute: Provisions the compute instance; if you need more resources, edit this file
  • terraform/linode-db: Provisions the database; if you need more storage and such, edit this file
  • terraform/linode-firewall: Puts up the firewall between the Linode compute instance and the internet
  • terraform/variables: Variables which need to be supplied via a terraform.tfvars file; they can also be supplied while running terraform apply

How to use Terraform?

You can run it via the official CLI:

  1. Install the CLI
  2. Set up terraform.tfvars
  3. Run terraform init
  4. Run terraform plan to see the resources and configuration that would be created
  5. Run terraform apply if you want to provision these resources

If you want to remove these resources, run terraform destroy.

How does the deployment work:

  • View the comments in the GitHub Actions workflows

Things to keep in mind

  • It's important to run docker image prune at regular intervals to prevent running out of storage (this is done automatically).
  • If you set up the deployment from scratch, you need to change some environment variables as well.
  • If you set up the deployment from scratch, you also need to point the subdomain (server.foodle-kitchens.com) to the new public IPv4 of the Linode instance, or else you won't be able to access it (simply replace the IP in the domain registrar).
  • The Doppler token needs to be supplied manually or via SSH (currently via SSH).
  • If you want to access the DB, either whitelist all IPs in Linode or find out your public IP. AFTER YOU ARE DONE, REMOVE IT!
  • If you change environment variables in Doppler, Node.js won't have the newest values; restart the container for this.
  • The private and public SSH keys can be found in Doppler.
  • There is a StackScript referenced; this is a script which runs the first time the compute instance is created and, in our case, should install docker & docker-compose. If that for whatever reason fails, simply copy the contents, SSH into the compute instance and paste and run it there.
  • Doppler contains all environment variables used in production; no .env file or anything else is used, except one for the Doppler secret!
  • DB migrations need to be run in the backend container and via: doppler run -- yarn prisma:migrate:deploy
  • There seems to be a slight chance that the above step in the backend workflow will fail; you can run it from the container again and it will work.
  • If you run the project from scratch, you might need to change the DB URL and rerun the action.

Improvements:

There are still some things left which can be improved upon.

  1. Copy the docker-compose file over only if there are changes to it, or if you create the environment from scratch
  2. Disable password access for SSH completely
  3. See backend-deployment workflow

Attribution:

Close icons created by ariefstudio - Flaticon