
Adding context lengths. #182

Merged — 2 commits merged into main from release-finetuning-command-r on Oct 3, 2024
Conversation

trentfowlercohere (Contributor) commented on Oct 3, 2024

This pull request introduces new chat fine-tuning features for Command R 08-2024.

Changes:

  • Support for longer context lengths: The PR increases the context length for fine-tuning training and MultiLoRA from 8192 to 16384, enabling the model to handle more extensive input sequences during fine-tuning.
  • Integration with Weights & Biases: It integrates Weights & Biases for real-time tracking of fine-tuning experiments, giving users insight into the fine-tuning process (see the sketch after this list).
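For illustration, below is a minimal sketch of how these two features might be exercised through the Cohere Python SDK. The class and field names (FinetunedModel, Settings, WandbConfig, create_finetuned_model) and the example IDs are assumptions based on the PR description, not verbatim from this PR, and may differ from the published API.

```python
import cohere
from cohere.finetuning import (
    BaseModel,
    FinetunedModel,
    Hyperparameters,
    Settings,
    WandbConfig,
)

# Assumed client setup; replace with your own API keys and IDs.
co = cohere.ClientV2(api_key="YOUR_COHERE_API_KEY")

# Weights & Biases integration: stream fine-tuning metrics to a W&B project
# for real-time experiment tracking.
wandb_config = WandbConfig(
    project="command-r-finetuning",      # hypothetical project name
    api_key="YOUR_WANDB_API_KEY",
    entity="your-wandb-entity",          # hypothetical entity
)

# Per this PR, chat fine-tuning for Command R 08-2024 accepts training
# examples up to 16384 tokens (previously 8192); the longer context is
# handled by the service, not set as a parameter here.
finetuned_model = co.finetuning.create_finetuned_model(
    request=FinetunedModel(
        name="command-r-08-2024-chat-ft",
        settings=Settings(
            base_model=BaseModel(base_type="BASE_TYPE_CHAT"),
            dataset_id="my-chat-dataset-id",  # hypothetical dataset ID
            hyperparameters=Hyperparameters(
                train_epochs=1,
                learning_rate=0.01,
            ),
            wandb=wandb_config,
        ),
    ),
)
print(finetuned_model.finetuned_model.id)
```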

trentfowlercohere requested a review from a team as a code owner on October 3, 2024 at 16:17
trentfowlercohere merged commit 34fe84e into main on Oct 3, 2024 — 2 checks passed
trentfowlercohere deleted the release-finetuning-command-r branch on October 3, 2024 at 16:48