[docs] Spanish translation of model_memory_anatomy.md #30885
Conversation
cc: @tadeodonegana, @gisturiz. Hello guys, I would appreciate it if you could review the translation; it is very technical documentation, so I am open to any feedback. Thank you for your help.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Amazing work as always, @aaronjimv! I just left some minimal comments.
Thanks for your work 🥇
```
GPU memory occupied: 0 MB.
```
Parece estar bien: la memoria de la GPU no está ocupada como esperaríamos antes de cargar cualquier modelo. Si no es el caso en tu máquina, asegúrate de detener todos los procesos que estén utilizando la memoria de la GPU. Sin embargo, no toda la memoria libre de la GPU puede ser utilizada por el usuario. Cuando se carga un modelo en la GPU, también se cargan los kernels, lo que puede ocupar 1-2GB de memoria. Para ver cuánta es, carguemos un tensor diminuto en la GPU, lo que también desencadena la carga de los kernels.
I suggest changing:
"Para ver cuánta es"
to
"Para ver cuánta memoria será ocupada por defecto",
to make it clear that we are referring to the memory that will be taken up by the kernels.
Thanks for your feedback! I really appreciate it.
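For context, the snippet under discussion corresponds to code along these lines (a minimal sketch that mirrors the English original; it assumes `pynvml` is installed and a CUDA GPU is available):

```python
import torch
from pynvml import nvmlDeviceGetHandleByIndex, nvmlDeviceGetMemoryInfo, nvmlInit


def print_gpu_utilization():
    # Query the used memory of the first GPU via NVML.
    nvmlInit()
    handle = nvmlDeviceGetHandleByIndex(0)
    info = nvmlDeviceGetMemoryInfo(handle)
    print(f"GPU memory occupied: {info.used // 1024**2} MB.")


# Moving even a tiny tensor to the GPU triggers the CUDA kernel load,
# which accounts for the 1-2GB baseline mentioned in the paragraph above.
torch.ones((1, 1)).to("cuda")
print_gpu_utilization()
```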
## Utilización de la memoria en el entrenamiento
Vamos a utilizar el [`Trainer`] y entrenar el modelo sin utilizar ninguna técnica de optimización del rendimiento de la GPU y un tamaño de lote de 4:
I suggest referencing the Trainer docs page, as done in the original page.
Ok, that seems fine to me. Thanks.
LGTM
Thanks! Hello @stevhliu, let me know if there is anything else to change. Thanks 🤗
Nice work! 👏
```
>>> ds.set_format("pt")
```
Para imprimir estadísticas resumidas para la utilización de la GPU y la ejecución del entrenamiento con [`Trainer`](https://huggingface.co/docs/transformers/v4.41.0/en/main_classes/trainer#transformers.Trainer), definimos dos funciones auxiliares:
Let's use this link so it's not tied to any one version of the docs :)
Suggested change:
Before: Para imprimir estadísticas resumidas para la utilización de la GPU y la ejecución del entrenamiento con [`Trainer`](https://huggingface.co/docs/transformers/v4.41.0/en/main_classes/trainer#transformers.Trainer), definimos dos funciones auxiliares:
After: Para imprimir estadísticas resumidas para la utilización de la GPU y la ejecución del entrenamiento con [`Trainer`](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.Trainer), definimos dos funciones auxiliares:
Ok.
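For reference, the two helper functions mentioned in this diff line look roughly like the following (a sketch following the English original; `print_gpu_utilization` is the NVML helper sketched earlier in this thread):

```python
def print_summary(result):
    # `result` is the TrainOutput returned by trainer.train(); its metrics
    # dict carries the runtime and throughput of the finished run.
    print(f"Time: {result.metrics['train_runtime']:.2f}")
    print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}")
    print_gpu_utilization()
```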
## Utilización de la memoria en el entrenamiento
Vamos a utilizar el [`Trainer`](https://huggingface.co/docs/transformers/v4.41.0/en/main_classes/trainer#transformers.Trainer) y entrenar el modelo sin utilizar ninguna técnica de optimización del rendimiento de la GPU y un tamaño de lote de 4:
Suggested change:
Before: Vamos a utilizar el [`Trainer`](https://huggingface.co/docs/transformers/v4.41.0/en/main_classes/trainer#transformers.Trainer) y entrenar el modelo sin utilizar ninguna técnica de optimización del rendimiento de la GPU y un tamaño de lote de 4:
After: Vamos a utilizar el [`Trainer`](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.Trainer) y entrenar el modelo sin utilizar ninguna técnica de optimización del rendimiento de la GPU y un tamaño de lote de 4:
Thanks!
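Putting the discussed pieces together, the training run this diff line describes looks roughly like this (a sketch under the assumptions that `ds` is the dataset formatted with `ds.set_format("pt")` above and that the helpers sketched earlier are defined; the checkpoint name is illustrative):

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# No GPU performance optimizations enabled; batch size of 4 as discussed.
training_args = TrainingArguments(
    output_dir="tmp",               # hypothetical scratch directory
    per_device_train_batch_size=4,
)

# Illustrative checkpoint; any sequence-classification model works here.
model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased")
trainer = Trainer(model=model, args=training_args, train_dataset=ds)

result = trainer.train()
print_summary(result)
```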
* add model_memory_anatomy to es/_toctree.yml
* copy model_memory_anatomy.md to es/
* translate first section
* translate doc
* change forward activations
* fix sentence and link to Trainer
* fix Trainer link
What does this PR do?
Adds the Spanish version of `model_memory_anatomy.md` to `transformers/docs/source/es`.
#28936
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu