From 4ea172255c3b25627711a99d28d5cb2ef7b87cdd Mon Sep 17 00:00:00 2001
From: Joe Runde
Date: Thu, 11 Apr 2024 04:47:10 -0600
Subject: [PATCH] :loud_sound: Add TGIS response logs (#15)

This PR updates our grpc_server to add TGIS-style logs, similar to
https://github.com/IBM/text-generation-inference/blob/main/router/src/grpc_server.rs#L504-L512

This also disables the vllm per-request logging so that we don't double-log each request.

The timing info collected here is pretty rough: it doesn't plumb into the LLMEngine, it just times the generators to get the total time spent in the engine. We could do better, but this is a start.

Example logs:

```
INFO 04-09 21:51:01 logs.py:43] generate_stream{input=[b'This is the story of Obama ridin...'] prefix_id= input_chars=[70] params=sampling { } stopping { max_new_tokens: 200 min_new_tokens: 16 } response { } decoding { } tokenization_time=0.45ms queue_and_inference_time=1096.67ms time_per_token=5.48ms total_time=1097.12ms input_toks=16}: Streaming response generated 200 tokens before NOT_FINISHED, output 848 chars: b' California. The story is told i...'
INFO 04-09 21:51:08 logs.py:43] generate{input=[b'Lorem ipsum dolor sit amet, cons...', b'foooood man where is it'] prefix_id= input_chars=[469] params=sampling { } stopping { max_new_tokens: 20 min_new_tokens: 16 } response { } decoding { } tokenization_time=2.03ms queue_and_inference_time=122.23ms time_per_token=6.11ms total_time=124.26ms input_toks=124}: Sub-request 0 from batch of 2 generated 20 tokens before MAX_TOKENS, output 25 chars: b'?\\n\\n
```
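
For reference, the new log line is built roughly like the sketch below. This is an illustrative outline rather than the actual logs.py code; the helper name and signature are made up for the example, and only the field names mirror the log lines above.

```python
import logging

logger = logging.getLogger(__name__)


def log_response(method_str: str, inputs: list[bytes], input_toks: int,
                 output_toks: int, output_text: bytes, stop_reason: str,
                 tokenization_time: float, inference_time: float,
                 total_time: float) -> None:
    """Emit one TGIS-style summary line per (sub-)request."""
    # Truncate long inputs/outputs so the log line stays readable.
    short_inputs = [i[:32] for i in inputs]
    time_per_token = inference_time / max(output_toks, 1)
    logger.info(
        "%s{input=%s input_chars=[%d] tokenization_time=%.2fms "
        "queue_and_inference_time=%.2fms time_per_token=%.2fms "
        "total_time=%.2fms input_toks=%d}: generated %d tokens before %s, "
        "output %d chars: %s",
        method_str, short_inputs, sum(len(i) for i in inputs),
        tokenization_time * 1000, inference_time * 1000,
        time_per_token * 1000, total_time * 1000, input_toks,
        output_toks, stop_reason, len(output_text), output_text[:32],
    )
```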
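
The double-logging is avoided by turning off vllm's own per-request logs when the engine is constructed. Assuming the stock `AsyncEngineArgs` knob for this (the model name below is just a placeholder), that looks roughly like:

```python
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine

# Suppress vllm's "Received request ..." / "Finished request ..." log lines
# so each request is only logged once, by the TGIS-style logger above.
engine_args = AsyncEngineArgs(
    model="facebook/opt-125m",  # placeholder model
    disable_log_requests=True,
)
engine = AsyncLLMEngine.from_engine_args(engine_args)
```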
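
And this is roughly how the timing works: the response generator returned by the engine is wrapped and timed end to end, so `queue_and_inference_time` is just the wall-clock time spent consuming it. A minimal sketch, with the wrapper name and the `timings` dict being illustrative:

```python
import time
from typing import AsyncIterator, TypeVar

T = TypeVar("T")


async def timed(results: AsyncIterator[T], timings: dict) -> AsyncIterator[T]:
    """Re-yield engine results while recording coarse end-to-end timing.

    This measures wall-clock time spent consuming the generator, which covers
    both queueing and inference without plumbing timers into the LLMEngine.
    """
    start = time.monotonic()
    async for result in results:
        yield result
    timings["queue_and_inference_time"] = time.monotonic() - start
```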