[IT-3984] Update for Schematic app #4
Conversation
* Set up container deployment for Schematic
* Update docs
* The OpenChallenges container uses a list of shared secrets, but Schematic uses different secrets in each environment, so we need to refactor the secrets handling to fetch secrets per environment.

The Schematic app runs on the HTTPS port in the container, so we need to update the load balancer's HTTPS listener to forward to the same port.

There's a new deployment for the Schematic app at Sage-Bionetworks-IT/schematic-infra-v2. Reconfigure the GitHub Actions CI to deploy from there. Depends on Sage-Bionetworks-IT/schematic-infra-v2#4
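As a rough sketch of the per-environment secrets lookup described above (the naming scheme and helper names here are hypothetical, not the actual code in this PR):

```python
def secret_id(environment: str, app: str = "schematic") -> str:
    # Hypothetical naming scheme: one Secrets Manager secret per environment,
    # instead of a single list of shared secrets.
    return f"{app}/{environment}/app-secrets"

def fetch_secret(environment: str) -> str:
    # boto3 Secrets Manager call shown for illustration; running it for real
    # requires AWS credentials and an existing secret.
    import boto3
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=secret_id(environment))["SecretString"]
```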
```python
app_service_props = ServiceProps(
    "schematic-app",
    443,
    1024,
    "ghcr.io/sage-bionetworks/schematic:v0.1.90-beta",
    {},
    f"{stack_name_prefix}-DockerFargateStack/{environment}/ecs",
)
```
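For reference, a minimal sketch of what `ServiceProps` might hold, inferred from the positional arguments at the call site above (the field names and types are assumptions; the real definition lives in the schematic-infra-v2 repo):

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProps:
    # Field names are guesses from the call site, not the real definition.
    container_name: str                 # e.g. "schematic-app"
    container_port: int                 # e.g. 443 (HTTPS inside the container)
    container_memory: int               # hard memory limit in MiB, e.g. 1024
    container_image: str                # e.g. "ghcr.io/sage-bionetworks/schematic:v0.1.90-beta"
    env_vars: dict = field(default_factory=dict)
    secret_prefix: str = ""             # e.g. "<prefix>-DockerFargateStack/<env>/ecs"
```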
Some questions on the currently deployed infra compared to this new logic:
- How are containers horizontally scaled in this logic? Currently the Schematic API containers scale between 3 and 5 instances based on a 50% CPU/memory target.
- Currently Schematic runs 3 tasks, each with 4 vCPU and 8 GB of memory, and no requests are set on the containers. My concern is that the Schematic container here has a memory hard limit of 1 GB, yet from what we pulled up in CloudWatch the container was regularly reaching close to 40% (~4 GB) of memory. Can the memory limits be bumped up to address this concern? No concerns about the CPU drop.
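The scaling behavior described above (3 to 5 tasks, 50% CPU/memory target tracking) can be expressed in CDK roughly as follows. This is a hedged fragment, not code from this PR: it assumes `service` is an existing `aws_cdk.aws_ecs.FargateService` and that the construct IDs are free to choose.

```python
# Fragment: attach target-tracking autoscaling to an existing
# ecs.FargateService named `service` (aws-cdk-lib, Python).
scaling = service.auto_scale_task_count(min_capacity=3, max_capacity=5)
scaling.scale_on_cpu_utilization("CpuScaling", target_utilization_percent=50)
scaling.scale_on_memory_utilization("MemoryScaling", target_utilization_percent=50)
```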
This PR already has a lot going on, so to make reviewing easier I was planning to address scaling parameters in a follow-on PR. I can add it in this PR if you want, though. In any case, we don't need to set production scale until we deploy to production.
> I was planning to address scaling parameters in a follow-on PR. I can add it in this PR if you want, though.

As long as it's not going to change the structure a lot, I'd be fine with this being completed in a separate PR. To note: in parallel I am creating a branch off your changes here so I can get the ball rolling on deploying the OpenTelemetry collector. I can handle large structural changes, but try to keep them to a minimum.

> Anyways we don't need to set to production scale until we deploy to production.
Should the configuration of resources be extracted into environment-specific settings if this is the case? That way we can set dev to the lower values (for now) and stage/prod to the higher values; then, if/when we need to, we can always copy the values down to dev. At least that way the values are set up ahead of time.
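As a sketch of that suggestion, a per-environment resource table could look like the following. All numbers here are illustrative placeholders, not agreed values; the only constraint taken from the discussion is that dev gets a lower footprint while stage/prod carry the production scale.

```python
# Hypothetical per-environment resource table; values are placeholders.
RESOURCES = {
    "dev":   {"cpu": 1024, "memory_mib": 2048, "min_tasks": 1, "max_tasks": 2},
    "stage": {"cpu": 4096, "memory_mib": 8192, "min_tasks": 3, "max_tasks": 5},
    "prod":  {"cpu": 4096, "memory_mib": 8192, "min_tasks": 3, "max_tasks": 5},
}

def resource_config(environment: str) -> dict:
    """Look up the resource settings for one environment."""
    return RESOURCES[environment]
```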
> Should the configuration of resources be extracted into environment-specific settings if this is the case?

The pattern established for OC assumes that deployments to each environment should be the same, because we use the promotion workflow dev -> stage -> prod. That workflow assumes that we want each environment to be identical so that when we promote the apps/containers we get the same result. The Schematic deployment can work differently if you prefer; it's really up to you and the workflow you want to establish.
> The pattern established for OC assumes that deployments to each environment should be the same, because we use the promotion workflow dev -> stage -> prod. That workflow assumes that we want each environment to be identical so that when we promote the apps/containers we get the same result.

I am on board with this pattern.
LGTM!
There's a new deployment for the Schematic app at Sage-Bionetworks-IT/schematic-infra-v2. Set up the GitHub Actions CI to deploy from there. Depends on Sage-Bionetworks-IT/schematic-infra-v2#4

The new infra stores secrets with Secrets Manager, so we update the code to get secrets from Secrets Manager instead of the Parameter Store.