
Releases: HolyWu/vs-dpir

v4.2.0

01 Dec 07:27
  • Add num_batches and trt_static_shape parameters (see the sketch after this list).
  • Remove trt_int8, trt_int8_sample_step and trt_int8_batch_size parameters.
  • Improve performance by using separate streams for transferring tensors between CPU and GPU.
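A minimal usage sketch of the new parameters, assuming the package imports as vsdpir with the lowercase dpir function introduced in v3.0.0; the trt toggle and all values shown are assumptions, while num_batches and trt_static_shape come from this release:

```python
# Hedged sketch: vsdpir, dpir and trt are assumed names; num_batches and
# trt_static_shape are the parameters added in v4.2.0.
import vapoursynth as vs
from vsdpir import dpir

core = vs.core
clip = core.std.BlankClip(format=vs.RGBS, width=1920, height=1080)

# trt_static_shape=True builds the TensorRT engine for a fixed input shape
# instead of the dynamic shapes added in v4.0.0; num_batches batches several
# frames per inference call.
out = dpir(clip, trt=True, trt_static_shape=True, num_batches=2)
```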

v4.1.0

19 May 06:37
  • Lower the default trt_min_shape.
  • Slightly reduce TRT engine build time.
  • Bump PyTorch to 2.4.0.dev.
  • Remove vstools dependency.

v4.0.0

12 May 14:42
  • Add support for TensorRT dynamic shapes (see the sketch after this list).
  • Add support for TensorRT INT8 mode using Post-Training Quantization (PTQ), giving a 2x performance increase over FP16 mode.
  • Bump PyTorch to 2.3.
  • Bump VapourSynth to R66.
  • Bump TensorRT to 10.0.1.
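A sketch of the dynamic-shape setup, assuming the parameters follow TensorRT's min/opt/max optimization-profile convention: trt_min_shape is confirmed by the v4.1.0 notes above, while trt_opt_shape and trt_max_shape are assumed names.

```python
# Hedged sketch: trt_opt_shape and trt_max_shape are assumed parameter names
# mirroring a TensorRT optimization profile; trt_min_shape appears in the
# v4.1.0 notes.
import vapoursynth as vs
from vsdpir import dpir

core = vs.core
clip = core.std.BlankClip(format=vs.RGBS, width=1920, height=1080)

out = dpir(
    clip,
    trt=True,
    trt_min_shape=[128, 128],    # smallest width/height the engine accepts
    trt_opt_shape=[1920, 1080],  # shape the engine is tuned for (assumed name)
    trt_max_shape=[1920, 1080],  # largest width/height accepted (assumed name)
)
```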

v3.1.1

21 May 07:03
  • Remove nvfuser and cuda_graphs parameters.
  • Bump PyTorch to 2.0.1.
  • Bump TensorRT to 8.6.1.
  • Bump VapourSynth to R60.

v3.0.1

12 Mar 07:46
  • Allow the strength clip to be in any GRAY format (see the sketch after this list).
  • Don't globally set the default floating-point tensor type when the input is in RGBH format.
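A sketch of a per-pixel strength clip, which any GRAY format can now supply; the module and function names are assumptions as above, and the mask value is illustrative only:

```python
# Hedged sketch: a GRAY16 clip drives the strength per pixel instead of a
# single scalar. vsdpir and dpir are assumed names; 32768 is an arbitrary
# illustrative value.
import vapoursynth as vs
from vsdpir import dpir

core = vs.core
clip = core.std.BlankClip(format=vs.RGBS, width=1280, height=720)

strength = core.std.BlankClip(clip, format=vs.GRAY16, color=32768)
out = dpir(clip, strength=strength)
```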

v3.0.0

12 Feb 16:08
  • Switch back to PyTorch for inference.
  • Change the function name to lowercase (see the migration sketch after this list).
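A minimal migration sketch, assuming the package imports as vsdpir and that the pre-3.0 name was the uppercase DPIR:

```python
# Hedged migration sketch: the uppercase DPIR name for v2.x is an assumption
# inferred from "change the function name to lowercase".
import vapoursynth as vs
from vsdpir import dpir

core = vs.core
clip = core.std.BlankClip(format=vs.RGBS, width=1280, height=720)

# v2.x (ONNX Runtime):  out = DPIR(clip)
# v3.0.0+ (PyTorch):
out = dpir(clip)
```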

v2.3.0

25 Jun 16:30
  • Fix the strength clip not being properly normalized.
  • Allow GRAY8 format for the strength clip.
  • Add dual parameter to run inference in two threads for better performance; mostly useful for TensorRT, less so for CUDA, and not supported with DirectML (see the sketch after this list).
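A sketch for the v2.x-era API (the uppercase DPIR name is an assumption, as the function was only lowercased in v3.0.0); dual and the GRAY8 strength clip come from this release:

```python
# Hedged sketch for the v2.x ONNX Runtime API; DPIR is the assumed pre-3.0
# function name. dual and GRAY8 strength support are from v2.3.0.
import vapoursynth as vs
from vsdpir import DPIR

core = vs.core
clip = core.std.BlankClip(format=vs.RGBS, width=1280, height=720)

# Per-pixel strength mask in GRAY8; the value is illustrative only.
strength = core.std.BlankClip(clip, format=vs.GRAY8, color=50)

# dual=True runs inference in two threads; mainly worthwhile with TensorRT.
out = DPIR(clip, strength=strength, dual=True)
```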

v2.2.0

19 Jun 05:25
  • Add trt_max_workspace_size parameter.
  • Allow specifying a GRAYS clip for the strength parameter (see the sketch after this list).
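A sketch combining both additions, again under the v2.x DPIR assumption; treating trt_max_workspace_size as a byte count is also an assumption:

```python
# Hedged sketch: DPIR is the assumed v2.x function name, and the byte unit
# for trt_max_workspace_size is an assumption.
import vapoursynth as vs
from vsdpir import DPIR

core = vs.core
clip = core.std.BlankClip(format=vs.RGBS, width=1280, height=720)

# GRAYS (32-bit float) per-pixel strength clip, allowed as of this release.
strength = core.std.BlankClip(clip, format=vs.GRAYS, color=15.0)

# Cap the TensorRT builder workspace at 1 GiB; only relevant when the
# TensorRT provider is in use.
out = DPIR(clip, strength=strength, trt_max_workspace_size=1 << 30)
```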

v2.1.0

25 Mar 16:37
  • Add AMD MIGraphX provider.

v2.0.0

19 Mar 15:55
  • Rename the tile_x and tile_y parameters to tile_w and tile_h.
  • Change the default of tile_pad to 8.
  • Switch to ONNX Runtime for inference (see the tiling sketch after this list).
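A tiling sketch under the v2.x DPIR assumption; treating tile_w/tile_h as the tile size in pixels is also an assumption:

```python
# Hedged sketch: DPIR and pixel-sized tiles are assumptions. Tiling splits
# the frame into overlapping pieces so large inputs fit in GPU memory.
import vapoursynth as vs
from vsdpir import DPIR

core = vs.core
clip = core.std.BlankClip(format=vs.RGBS, width=3840, height=2160)

# v1.x used tile_x/tile_y; v2.0.0 renames them, and tile_pad (the overlap
# around each tile) now defaults to 8.
out = DPIR(clip, tile_w=1920, tile_h=1080, tile_pad=8)
```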