Specify the task CPU and memory (IT-4056) #83
base: dev
Conversation
I realize that each task must share its resources with two more containers.
One solution is to increase the memory made available to the task. The increment is usually 1 GB, which costs $3.20 / month (see Fargate pricing). So for 13 tasks, that's about $40 / month / environment, or a total of $1440 / year (presumably across three environments: 3 × 12 × $40 ≈ $1440). Alternatively, we could keep the memory allocated to the tasks as defined in this PR, but reduce the memory allocated to the OC container. Unlike the task definition, containers added to a task can have their memory set freely (no fixed increments) as long as the value is not larger than the memory allocated to the task.
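A minimal sketch of that second option, using boto3 with hypothetical family/container names, images, and sizes (none of these values come from this PR): the task-level CPU and memory stay on the fixed Fargate increments, while the per-container memory values are chosen freely within the task allocation.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="example-task",                  # hypothetical family name
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",     # task-level CPU: fixed Fargate increment (1 vCPU)
    memory="2048",  # task-level memory: fixed Fargate increment (2 GB)
    containerDefinitions=[
        # Container-level memory is not constrained to increments; any values
        # are valid as long as they fit within the task-level allocation
        # (here 1536 + 512 = 2048 MiB).
        {"name": "app", "image": "example/app:latest",
         "memory": 1536, "essential": True},
        {"name": "oc", "image": "example/oc:latest",
         "memory": 512, "essential": False},  # OC container, trimmed to fit
    ],
)
```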
These are the numbers they recommend: "For task definitions, you must set the CPU and memory parameters. For the service, you set the log configuration in the Service Connect configuration."
I like this idea as the first approach. We can then use CloudWatch metrics and adjust from there if we need to bump up to the next valid memory/CPU config.
from enum import Enum

class FargateCpuMemory(Enum):
I'm not a fan of this because these combinations are already kept in AWS and they could change over time; keeping a copy here would introduce a maintenance burden of keeping it up to date. Instead, how about we just query AWS for the correct combination and, if the user doesn't provide a valid one, throw an exception with a link for the user to look up valid combos in AWS?
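One way to realize that suggestion without keeping a local copy of the table (a sketch, not code from this PR; the function name and doc URL are assumptions): let AWS itself reject an invalid pair when the task definition is registered, and re-raise with a link the user can follow.

```python
import boto3
from botocore.exceptions import ClientError

# Assumption: this AWS docs page describes valid Fargate CPU/memory combinations.
FARGATE_COMBOS_DOC = (
    "https://docs.aws.amazon.com/AmazonECS/latest/developerguide/"
    "task-cpu-memory-error.html"
)

def register_fargate_task(ecs, family, cpu, memory, container_defs):
    """Register a Fargate task definition; if AWS rejects the CPU/memory
    pair, raise a clearer error pointing at the documented valid combos."""
    try:
        return ecs.register_task_definition(
            family=family,
            requiresCompatibilities=["FARGATE"],
            networkMode="awsvpc",
            cpu=cpu,        # e.g. "1024" (1 vCPU)
            memory=memory,  # e.g. "2048" (MiB)
            containerDefinitions=container_defs,
        )
    except ClientError as e:
        if e.response["Error"]["Code"] == "ClientException":
            raise ValueError(
                f"Invalid Fargate cpu/memory combination ({cpu}, {memory}); "
                f"see {FARGATE_COMBOS_DOC} for valid values"
            ) from e
        raise
```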
I'm not a fan of either of these solutions because both would require managing both task and container memory. I suggest we change to only set the task CPU and memory and not set the container memory at all. This would allow all of the containers in an ECS task to share the CPU and memory defined at the task level, which seems like the easiest solution. This article on how ECS memory and CPU settings work helped me understand those settings, particularly the section "Scenarios for different memory configurations".
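A sketch of that approach, again with hypothetical names and values: CPU and memory are set once at the task level, and the per-container memory keys are omitted entirely, so every container in the task draws from the shared pool.

```python
import boto3

ecs = boto3.client("ecs")

# No "memory" or "memoryReservation" on the containers: all of them share
# the 1 vCPU / 2 GB defined at the task level. Names/images are hypothetical.
ecs.register_task_definition(
    family="example-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    containerDefinitions=[
        {"name": "app", "image": "example/app:latest", "essential": True},
        {"name": "oc", "image": "example/oc:latest", "essential": False},
    ],
)
```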
Closes https://sagebionetworks.jira.com/browse/IT-4056
Changelog