MegatronLMClient expects cache usage #106

Open
janEbert opened this issue Jan 22, 2024 · 0 comments
When trying to issue a length-exceeded warning while --no_cache is given, the MegatronLMClient errors out with the stack trace below.
The reason is that cache_key is None, which is never checked. The easiest solution would probably be to omit printing the text strings if cache_key is None (i.e., when caching is disabled); a sketch of that guard follows the traceback.

Traceback (most recent call last):
  File "/p/project/opengptx-elm/ebert1/opengpt/lm-evaluation-harness/main.py", line 101, in <module>
    main()
  File "/p/project/opengptx-elm/ebert1/opengpt/lm-evaluation-harness/main.py", line 67, in main
    results = evaluator.simple_evaluate(
  File "/p/project/opengptx-elm/ebert1/opengpt/lm-evaluation-harness/lm_eval/utils.py", line 242, in _wrapper
    return fn(*args, **kwargs)
  File "/p/project/opengptx-elm/ebert1/opengpt/lm-evaluation-harness/lm_eval/evaluator.py", line 103, in simple_evaluate
    results = evaluate(
  File "/p/project/opengptx-elm/ebert1/opengpt/lm-evaluation-harness/lm_eval/utils.py", line 242, in _wrapper
    return fn(*args, **kwargs)
  File "/p/project/opengptx-elm/ebert1/opengpt/lm-evaluation-harness/lm_eval/evaluator.py", line 297, in evaluate
    resps = getattr(lm, reqtype)([req.args for req in reqs])
  File "/p/project/opengptx-elm/ebert1/opengpt/lm-evaluation-harness/lm_eval/base.py", line 210, in loglikelihood_rolling
    string_nll = self._loglikelihood_tokens(
  File "/p/project/opengptx-elm/ebert1/opengpt/lm-evaluation-harness/lm_eval/models/megatronlm.py", line 187, in _loglikelihood_tokens
    f"WARNING: Length of concatenated context ...{repr(cache_key[0][-20:])} and continuation {repr(cache_key[1])} exceeds max length {self.max_length + 1}"
TypeError: 'NoneType' object is not subscriptable
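
For illustration, a minimal sketch of such a guard, assuming the warning is emitted with a plain print inside _loglikelihood_tokens (variable names are taken from the traceback; the surrounding code is not reproduced here):

# Hypothetical guard: only dereference cache_key when caching is enabled.
if cache_key is not None:
    print(
        f"WARNING: Length of concatenated context ...{repr(cache_key[0][-20:])}"
        f" and continuation {repr(cache_key[1])} exceeds max length"
        f" {self.max_length + 1}"
    )
else:
    # With --no_cache there is no cache_key, so skip the text snippets.
    print(
        "WARNING: Length of concatenated context and continuation"
        f" exceeds max length {self.max_length + 1}"
    )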