A licensed kitchen rental service.
- `cd server`
- Add a `.env.local` file to the root of `/server`; all the environment variables will be sent to you by one of the teammates over a private Slack message.
- `yarn` to install deps.
- `yarn db:up` to start the database.
- `yarn prisma:migrate:deploy` to project the generated migrations from `prisma.schema` onto your database.
- `yarn prisma:generate` to generate the Prisma client.
- `yarn nexus:watch` to generate the `graphql.schema`.
- `yarn dev` to start the server in development.
- Navigate to `localhost:5000/graphql` for the Apollo interface, where you can manually test queries/mutations.
- `yarn db:seed` to seed the database.
Logging
The default setting is 'none'. By changing the `LOG_LEVEL` environment variable to `info`, basic logging for the most important tasks is provided. By changing it to `debug`, SQL queries are logged too.
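The levels can be thought of as ordered. A minimal sketch of such an ordered log-level gate (illustrative only — the level names come from the description above, the `shouldLog` helper is hypothetical and not the actual server code):

```typescript
// Sketch of LOG_LEVEL gating (illustrative, not the actual server code).
// Levels are ordered: 'none' < 'info' < 'debug'.
type LogLevel = "none" | "info" | "debug";

const order: Record<LogLevel, number> = { none: 0, info: 1, debug: 2 };

// A message at `messageLevel` is emitted only if the configured
// LOG_LEVEL is at least that verbose.
function shouldLog(
  configured: LogLevel,
  messageLevel: Exclude<LogLevel, "none">
): boolean {
  return order[configured] >= order[messageLevel];
}

const configured = (process.env.LOG_LEVEL ?? "none") as LogLevel;
if (shouldLog(configured, "debug")) {
  console.log("SELECT * FROM ..."); // SQL queries are only logged at 'debug'
}
```

With this ordering, `info` also shows nothing extra at `debug` verbosity, and `debug` implies `info`.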
- Ensure the dev server is running.
- `cd client`
- `yarn` to install deps.
- Add a `.env.local` file to the root of `/client`; all the environment variables will be sent to you by one of the teammates over a private Slack message.
- `yarn codegen:generate` to generate types and hooks from the GraphQL schema.
- `yarn next:dev` to run the file watcher. The front end should be accessible at `localhost:3000`. NOTE: when you change front-end queries or mutations to the backend, you need to manually run `yarn codegen:generate` again.
- Important: the dev server needs to be stopped.
- `yarn test:api` to start the tests.
Frontend and end-to-end tests are currently developed in the branch "frontend-tests" and will be merged to master shortly.
Add the NEXT_PUBLIC_GOOGLE_REFRESH_TOKEN environment variable to run these.
- `yarn cy:run-only` to run the tests after starting both the client and the server.
- This repository has a monolithic architecture
- The web pages can be found under /pages and the reusable components for these pages under /components
- Global SCSS styles can be found under /styles
- The Prisma schema and migrations made from it can be found under /prisma
- The Nexus Generated GraphQL schema can be found under server/generated/schema.graphql
- Code definitions for GraphQL queries, mutations and types can be found under server/graphql/types
- Authentication relevant functions can be found under server/passport.ts as well as server/index.ts and utils/forgeJWT
- AWS-SDK: Currently AWS S3 CRUD functions for images are in the pages/api and are being called in the Step4 (and related) components of the Create A Listing flow. (this currently resides in the feat/s3 branch)
- Prisma: An Object-Relational Mapper that migrates changes to its schema to an SQL schema on command.
- NexusJS: A schema generator for GraphQL APIs. It provides type definitions for the GraphQL schema using the code-first approach. On top, it offers a Prisma plugin that provides two APIs to integrate Prisma into Nexus: one to project fields from models defined in the Prisma schema into the GraphQL API, and a second to build GraphQL root fields that allow the client to query and mutate data directly on the PostgreSQL database.
- Apollo-Express Server: A GraphQL Server handling CRUD operations called from the frontend. In addition, this server can handle REST requests to handle our Google OAuth authentication process that goes through PassportJS middleware.
- Next.Js + React.Js: ReactJs and the framework Next.Js built on top of it are used for routing, state management, server-side-rendering, CRUD requests from the Next API to our AWS S3 bucket, and much more.
- GraphQL CodeGen: Uses raw GraphQL queries to generate types (for our TypeScript code definitions) and React hooks to query our server.
- Apollo Client: Also used to query our server (but will be removed soon, since it does not offer the same type safety as the Codegen hooks).
- SCSS Modules: For component level styles
- 7-1 SCSS Architecture: For global styles and utility classes
The client is currently a Next.js application deployed on Vercel. The backend is deployed on Heroku, and the PostgreSQL database is deployed to Render. A continuous integration pipeline on the master branch is implemented with Vercel: on every new pull request, Vercel provides a deployed preview for testing before merging to master.
Code-first approach with Nexus
Nexus enables writing both the schema and the resolver logic in the same spot, using TypeScript. The GraphQL schema is then programmatically generated based on the types defined using Nexus. This approach comes with some benefits compared to the schema-first approach:
- Resolver logic and type definitions are not only in one place but also written in the same language. As the schema is autogenerated, we don't have to switch all the time between SDL and TypeScript.
- More flexibility during development as the schema still grows in complexity and size.
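To illustrate the code-first idea, here is a sketch of deriving SDL text from type definitions written in TypeScript. This is deliberately not the Nexus API — `ObjectDef`, `generateSDL`, and the `Listing` type below are hypothetical stand-ins for what `yarn nexus:watch` does when it regenerates `schema.graphql`:

```typescript
// Illustration of the code-first idea (not the Nexus API itself):
// types and resolvers are declared together in TypeScript, and the
// SDL schema text is derived from them programmatically.
interface FieldDef {
  name: string;
  type: string; // GraphQL type name, e.g. "String!"
  resolve: (root: unknown) => unknown; // resolver lives next to the type
}

interface ObjectDef {
  name: string;
  fields: FieldDef[];
}

// Generate SDL from the in-code definitions.
function generateSDL(objects: ObjectDef[]): string {
  return objects
    .map(
      (o) =>
        `type ${o.name} {\n` +
        o.fields.map((f) => `  ${f.name}: ${f.type}`).join("\n") +
        `\n}`
    )
    .join("\n\n");
}

// Hypothetical example type; the real types live in server/graphql/types.
const Listing: ObjectDef = {
  name: "Listing",
  fields: [
    { name: "id", type: "ID!", resolve: (r) => (r as { id: string }).id },
    { name: "title", type: "String!", resolve: (r) => (r as { title: string }).title },
  ],
};
```

Because the SDL is derived from code, the type definitions and resolvers can never drift apart — which is exactly the benefit listed above.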
The Prisma schema file is the main configuration file for Prisma. It holds the following configurations:
- Data sources: We defined a PostgreSQL datasource.
- Generators: When running `prisma generate`, a typesafe Prisma JavaScript client (a typesafe ORM) is generated.
- Data model definitions
The following design decisions have been taken:
- When a listing is created, a property and a propertySlot are saved to the DB. For every concrete date of a propertySlot, a concrete DaySlot is created and saved.
- If a daySlot is not (yet) related to a specific booking, the daySlot is still available. Once the bookingId of a daySlot is set, the daySlot is no longer available.
- We allow users to make a booking for only a part of a daySlot. Therefore, when creating a booking and saving the bookingId to a daySlot, the booked time also needs to be set, in order to retrace the price of the booking.
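A sketch of the availability and partial-booking rules above, with hypothetical TypeScript types (the real models live in `prisma.schema`; the `bookedHours` field and `hourlyRate` parameter are assumptions for illustration):

```typescript
// Hypothetical in-memory model of the daySlot rules described above.
interface DaySlot {
  id: string;
  bookingId: string | null; // null => not yet related to a booking
  bookedHours: number;      // the "booked time", set when a booking is made
}

// A daySlot is available exactly as long as no bookingId is set.
function isAvailable(slot: DaySlot): boolean {
  return slot.bookingId === null;
}

// Setting the bookingId marks the slot unavailable; the booked time
// must be recorded at the same moment.
function book(slot: DaySlot, bookingId: string, hours: number): DaySlot {
  if (!isAvailable(slot)) throw new Error("daySlot already booked");
  return { ...slot, bookingId, bookedHours: hours };
}

// The price of a booking is retraced from the booked time, since a
// booking may cover only part of a daySlot.
function bookingPrice(slot: DaySlot, hourlyRate: number): number {
  return slot.bookedHours * hourlyRate;
}
```

Storing the booked time on the slot is what makes partial bookings priceable after the fact.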
Currently, authentication is done through Google's OAuth process, facilitated by PassportJS and some ExpressJS routes.
This flow looks roughly as above (from The Net Ninja), except that we have a PostgreSQL database instead of a MongoDB NoSQL database. Any protected NextJS route checks for a valid JWT cookie and redirects users to the home page if they are not authenticated.
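A sketch of such a protected-route guard, assuming a cookie named `jwt` and an injected verification function (both hypothetical — the real app verifies JWTs via utils/forgeJWT):

```typescript
// Sketch of the protected-route check described above (not the actual
// Foodle code). `verify` is injected so the sketch stays library-agnostic;
// in the app this would be real JWT verification.
type Verify = (token: string) => boolean;

// Mirrors the Next.js pattern of returning a redirect for
// unauthenticated users and rendering the page otherwise.
function guardRoute(
  cookies: Record<string, string | undefined>,
  verify: Verify
): { redirect?: { destination: string } } {
  const token = cookies["jwt"]; // hypothetical cookie name
  if (!token || !verify(token)) {
    return { redirect: { destination: "/" } }; // back to the home page
  }
  return {}; // authenticated: render the protected page
}
```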
Foodle's Security Protections:
- Google OAuth login with PassportJs
- Input Validation in Backend Requests
- Check for JWT on several Next.Js pages
- Security Policy for AWS S3 bucket
- Added CSP for NextJS, ExpressJS
- Added Security Headers to NextJS and Express
- Made cookies enforce HTTPS and SameSite=Strict
- Turned off introspection for Apollo Server in production and added csrfPrevention and a CORS config to it
This section covers everything important to know about the deployment.
- Provider: Linode, Vercel
- Products used (Linode): Shared Compute, Managed Postgresql DB
- Products used (Vercel): Standard offering provided
A large chunk of the deployment is automated; in fact, everything can run fully automated as long as it is not set up from scratch again. To enable this, we use the Vercel integration with GitHub, Terraform for deployment, and our GitHub Actions workflows.
- `.github/workflows` contains all workflows.
- `.github/workflows/deploy-backend`: Responsible for deploying our backend as a container on the Linode instance.
- `.github/workflows/terraform-plan`: Shows Terraform changes which might happen if you edit the Terraform files.
- `deployConfig`: Contains the docker-compose file which is used to deploy our backend + Traefik, a proxy which enables SSL without much config.
- `server/Dockerfile`: Contains the Dockerfile to build the image for the server.
- `terraform/*`: Contains all Terraform files to provision the required infrastructure.
- `terraform/linode-compute`: Provisions the compute instance; if you need more resources, edit this file.
- `terraform/linode-db`: Will be used to provision the database; if you need more storage and such, edit this file.
- `terraform/linode-firewall`: Puts up the firewall between the Linode compute instance and the internet.
- `terraform/variables`: Variables which will need to be supplied via a `terraform.tfvars` file; they can also be supplied while running `terraform apply`.
You can run it via the official CLI:
- Install the CLI.
- Set up `terraform.tfvars`.
- Run `terraform init`.
- Run `terraform plan` to show the resources and configuration.
- Run `terraform apply` if you want to provision these resources.

If you want to remove these resources, run `terraform destroy`.
- View the comments in the GitHub Actions workflow.
- It's important to run `docker image prune` at regular intervals, to prevent running out of storage (this will be done automatically).
- If you set up the deployment from scratch, you need to change some environment variables as well.
- If you set up the deployment from scratch, you also need to change the pointer of the subdomain (server.foodle-kitchens.com) to the new public IPv4 of the Linode instance, or else you won't be able to access it (simply replace the IP in the domain registrar).
- The Doppler token needs to be supplied manually or via SSH (currently via SSH).
- If you want to access the DB, either whitelist all IPs in Linode or find out your public IP. AFTER YOU ARE DONE, REMOVE IT!
- If you change environment variables in Doppler, Node.js won't have the newest ones; restart the container for this.
- The private and public SSH keys can be found in Doppler.
- There is a StackScript referenced; this is a script which runs the first time the compute instance is created, and in our case should install Docker & docker-compose. If that for whatever reason fails, simply copy the contents, SSH into the compute instance, and run it there.
- Doppler contains all environment variables which will be used in production; no .env file or anything else is used, except one for the Doppler secret!
- DB migrations need to be run in the backend container, via: `doppler run -- yarn prisma:migrate:deploy`
- There seems to be a slight chance that the above step in the backend workflow will fail; you can run it from the container again and it will work.
- If you run the project from scratch, you might need to change the DB URL and rerun the action.
There are still some things left which can be improved upon.
- Copy the docker-compose file over only if there are changes to it, or if you create the environment from scratch.
- Disable password access for SSH completely.
- See backend-deployment workflow