
2.13.0

Released by @xenova on 27 Dec

What's new?

🎄 7 new architectures!

This release adds support for 7 new architectures spanning text, vision, and audio, bringing the total number of supported architectures to 80! 🤯

1. VITS for multilingual text-to-speech across over 1000 languages! (#466)

import { pipeline } from '@xenova/transformers';

// Create English text-to-speech pipeline
const synthesizer = await pipeline('text-to-speech', 'Xenova/mms-tts-eng');

// Generate speech
const output = await synthesizer('I love transformers');
// {
//   audio: Float32Array(26112) [...],
//   sampling_rate: 16000
// }
[Audio sample: mms-tts-eng.mp4]
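
The audio property is a raw Float32Array of samples. In Node.js, one way to write it out is with the wavefile package (a minimal sketch; the wavefile dependency and the out.wav filename are assumptions, not part of the library):

import wavefile from 'wavefile';
import fs from 'fs';

// Wrap the raw float samples in a mono, 32-bit float WAV container
const wav = new wavefile.WaveFile();
wav.fromScratch(1, output.sampling_rate, '32f', output.audio);
fs.writeFileSync('out.wav', wav.toBuffer());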

See here for the list of available models. To start, we've converted 12 of the ~1140 models on the Hugging Face Hub. If we haven't added the one you wish to use, you can make it web-ready using our conversion script.

2. CLIPSeg for zero-shot image segmentation. (#478)

import { AutoTokenizer, AutoProcessor, CLIPSegForImageSegmentation, RawImage } from '@xenova/transformers';

// Load tokenizer, processor, and model
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clipseg-rd64-refined');
const processor = await AutoProcessor.from_pretrained('Xenova/clipseg-rd64-refined');
const model = await CLIPSegForImageSegmentation.from_pretrained('Xenova/clipseg-rd64-refined');

// Run tokenization
const texts = ['a glass', 'something to fill', 'wood', 'a jar'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });

// Read image and run processor
const image = await RawImage.read('https://github.com/timojl/clipseg/blob/master/example_image.jpg?raw=true');
const image_inputs = await processor(image);

// Run model with both text and pixel inputs
const { logits } = await model({ ...text_inputs, ...image_inputs });
// logits: Tensor {
//   dims: [4, 352, 352],
//   type: 'float32',
//   data: Float32Array(495616) [...],
//   size: 495616
// }

You can visualize the predictions as follows:

// Add a channel dimension, map logits to probabilities, and scale to
// the [0, 255] pixel range so each prediction can be saved as an image
const preds = logits
  .unsqueeze_(1)  // [4, 352, 352] -> [4, 1, 352, 352]
  .sigmoid_()     // logits -> probabilities
  .mul_(255)
  .round_()
  .to('uint8');

// Save one greyscale mask per text prompt
for (let i = 0; i < preds.dims[0]; ++i) {
  const img = RawImage.fromTensor(preds[i]);
  img.save(`prediction_${i}.png`);
}
Original "a glass" "something to fill" "wood" "a jar"
image prediction_0 prediction_1 prediction_2 prediction_3

See here for the list of available models.

3. SegFormer for semantic segmentation and image classification. (#480)

import { pipeline } from '@xenova/transformers';

// Create an image segmentation pipeline
const segmenter = await pipeline('image-segmentation', 'Xenova/segformer_b2_clothes');

// Segment an image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/young-man-standing-and-leaning-on-car.jpg';
const output = await segmenter(url);

[Image: example segmentation output rendered over the input photo]

Example output:
[
  {
    score: null,
    label: 'Background',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Hair',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Upper-clothes',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Pants',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Left-shoe',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Right-shoe',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Face',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Left-leg',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Right-leg',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Left-arm',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Right-arm',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  }
]
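
Since each entry pairs a label with a single-channel RawImage mask, writing the masks to disk is a short loop (a sketch; the per-label filenames are an assumption):

// Save each predicted mask as a greyscale PNG named after its label
for (const { label, mask } of output) {
  mask.save(`${label}.png`);
}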

See here for the list of available models.

4. Table Transformer for table extraction from unstructured documents. (#477)

import { pipeline } from '@xenova/transformers';

// Create an object detection pipeline
const detector = await pipeline('object-detection', 'Xenova/table-transformer-detection', { quantized: false });

// Detect tables in an image
const img = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/invoice-with-table.png';
const output = await detector(img);
// [{ score: 0.9967531561851501, label: 'table', box: { xmin: 52, ymin: 322, xmax: 546, ymax: 525 } }]

[Image: example detection visualized on the invoice]
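
Each detection carries a confidence score and a pixel-coordinate bounding box. A minimal sketch of consuming the result (the 0.9 threshold is an assumption):

// Keep only confident detections and log their bounding boxes
const tables = output.filter(({ score }) => score > 0.9);
for (const { label, box } of tables) {
  console.log(`${label}: (${box.xmin}, ${box.ymin}) to (${box.xmax}, ${box.ymax})`);
}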

See here for the list of available models.

5. DiT for document image classification. (#474)

import { pipeline } from '@xenova/transformers';

// Create an image classification pipeline
const classifier = await pipeline('image-classification', 'Xenova/dit-base-finetuned-rvlcdip');

// Classify an image 
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/coca_cola_advertisement.png';
const output = await classifier(url);
// [{ label: 'advertisement', score: 0.9035086035728455 }]

See here for the list of available models.

6. SigLIP for zero-shot image classification. (#473)

import { pipeline } from '@xenova/transformers';

// Create a zero-shot image classification pipeline
const classifier = await pipeline('zero-shot-image-classification', 'Xenova/siglip-base-patch16-224');

// Classify images according to provided labels
const url = 'http://images.cocodataset.org/val2017/000000039769.jpg';
const output = await classifier(url, ['2 cats', '2 dogs'], {
    hypothesis_template: 'a photo of {}',
});
// [
//   { score: 0.16770583391189575, label: '2 cats' },
//   { score: 0.000022096000975579955, label: '2 dogs' }
// ]

Note that, unlike CLIP, SigLIP scores each image-label pair independently with a sigmoid, so the scores are not forced to sum to 1.

See here for the list of available models.

7. RoFormer for masked language modelling, sequence classification, token classification, and question answering. (#464)

import { pipeline } from '@xenova/transformers';

// Create a masked language modelling pipeline
const pipe = await pipeline('fill-mask', 'Xenova/antiberta2');

// Predict missing token
const output = await pipe('Ḣ Q V Q ... C A [MASK] D ... T V S S');
Example output:
[
  {
    score: 0.48774364590644836,
    token: 19,
    token_str: 'R',
    sequence: 'Ḣ Q V Q C A R D T V S S'
  },
  {
    score: 0.2768442928791046,
    token: 18,
    token_str: 'Q',
    sequence: 'Ḣ Q V Q C A Q D T V S S'
  },
  {
    score: 0.0890476182103157,
    token: 13,
    token_str: 'K',
    sequence: 'Ḣ Q V Q C A K D T V S S'
  },
  {
    score: 0.05106702819466591,
    token: 14,
    token_str: 'L',
    sequence: 'Ḣ Q V Q C A L D T V S S'
  },
  {
    score: 0.021606773138046265,
    token: 8,
    token_str: 'E',
    sequence: 'Ḣ Q V Q C A E D T V S S'
  }
]
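
Predictions arrive sorted by score (as the example output above shows), so picking the most likely completion is a one-liner:

// The top prediction fills [MASK] with 'R'
const best = output[0];
console.log(best.token_str, best.score); // 'R' 0.48774364590644836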

See here for the list of available models.

🛠️ Misc. improvements

  • Fix Next.js Dockerfile HOSTNAME by @Lian1230 in #461
  • Add spaces template link to README in #467

🤗 New Contributors

  • @Lian1230 made their first contribution in #461

Full Changelog: 2.12.1...2.13.0