Commit c91a15a

[hf parser] Fix bug where we're not properly appending outputs for multiple return sequences (#411)

Minor bug fix: I noticed that I messed this up when adding streaming support
in #380 (after I had already supported multiple inputs in #345). The reason I
missed it is that streaming only supports a single return sequence, so I never
ended up testing the non-streaming path with multiple return sequences. We
should have automated tests; I added a GH issue for this.


Tested by rebasing onto #412.
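The fix boils down to appending one output per returned sequence in the non-streaming branch. Here is a minimal, runnable sketch of that pattern; `fake_generator` and `construct_regular_output` are hypothetical stand-ins for the Hugging Face text-generation pipeline and aiconfig's output constructor, not the real implementations:

```python
from typing import Any, Dict, List


def construct_regular_output(result: Dict[str, Any], index: int) -> Dict[str, Any]:
    # Stand-in for aiconfig's helper: wraps one generated sequence
    # with its index so callers can tell the sequences apart.
    return {"execution_count": index, "data": result["generated_text"]}


def fake_generator(**completion_data: Any) -> List[Dict[str, Any]]:
    # Stub for a transformers text-generation pipeline call: with
    # num_return_sequences=n it returns a list of n results.
    n = completion_data.get("num_return_sequences", 1)
    return [{"generated_text": f"sequence {i}"} for i in range(n)]


def run_inference(completion_data: Dict[str, Any]) -> List[Dict[str, Any]]:
    outputs: List[Dict[str, Any]] = []
    response = fake_generator(**completion_data)
    for count, result in enumerate(response):
        output = construct_regular_output(result, count)
        # The line this commit adds: without it, `outputs` stayed empty
        # and all return sequences were silently dropped.
        outputs.append(output)
    return outputs


print(len(run_inference({"num_return_sequences": 3})))  # one output per sequence
```

With `num_return_sequences=3` the stub pipeline returns three results, and each one now lands in `outputs`.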

---
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/lastmile-ai/aiconfig/pull/411).
* #414
* __->__ #411
* #410
rossdanlm authored Dec 6, 2023
2 parents 7ff8d88 + 49da477 commit c91a15a
Showing 1 changed file with 3 additions and 2 deletions.
@@ -251,6 +251,7 @@ async def run_inference(
             response: List[Any] = generator(**completion_data)
             for count, result in enumerate(response):
                 output = construct_regular_output(result, count)
+                outputs.append(output)
         else:
             if completion_data.get("num_return_sequences", 1) > 1:
                 raise ValueError("Sorry, TextIteratorStreamer does not support multiple return sequences, please set `num_return_sequences` to 1")
@@ -261,8 +262,8 @@ async def run_inference(
             thread = threading.Thread(target=generator, kwargs=completion_data)
             thread.start()
             output = construct_stream_output(streamer, options)
-                if output is not None:
-                    outputs.append(output)
+            if output is not None:
+                outputs.append(output)

         prompt.outputs = outputs
         return prompt.outputs
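The second hunk also shows why streaming and multiple return sequences are mutually exclusive: `TextIteratorStreamer` yields tokens for a single sequence only, so the code guards against the combination up front. A distilled sketch of that guard (the function name `validate_streaming_config` is hypothetical; the check and message mirror the diff above):

```python
from typing import Any, Dict


def validate_streaming_config(completion_data: Dict[str, Any]) -> None:
    # TextIteratorStreamer streams tokens for one sequence at a time,
    # so streaming with num_return_sequences > 1 is rejected early
    # rather than silently dropping the extra sequences.
    if completion_data.get("num_return_sequences", 1) > 1:
        raise ValueError(
            "Sorry, TextIteratorStreamer does not support multiple return "
            "sequences, please set `num_return_sequences` to 1"
        )


validate_streaming_config({"num_return_sequences": 1})  # OK
validate_streaming_config({})                           # OK: defaults to 1
```

Calling it with `num_return_sequences=2` raises `ValueError` before any generation thread is started.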