Add support for pix2struct #523

Open · wants to merge 4 commits into base: main
1 change: 1 addition & 0 deletions README.md
@@ -324,6 +324,7 @@ You can refine your search by selecting the task you're interested in (e.g., [te
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[Phi](https://huggingface.co/docs/transformers/main/model_doc/phi)** (from Microsoft) released with the papers - [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li, [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463) by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee.
1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (from Google) released with the paper [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
1 change: 1 addition & 0 deletions docs/snippets/6_supported-models.snippet
@@ -59,6 +59,7 @@
1. **[OPT](https://huggingface.co/docs/transformers/master/model_doc/opt)** (from Meta AI) released with the paper [OPT: Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
1. **[OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit)** (from Google AI) released with the paper [Simple Open-Vocabulary Object Detection with Vision Transformers](https://arxiv.org/abs/2205.06230) by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
1. **[Phi](https://huggingface.co/docs/transformers/main/model_doc/phi)** (from Microsoft) released with the papers - [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li, [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463) by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee.
1. **[Pix2Struct](https://huggingface.co/docs/transformers/model_doc/pix2struct)** (from Google) released with the paper [Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding](https://arxiv.org/abs/2210.03347) by Kenton Lee, Mandar Joshi, Iulia Turc, Hexiang Hu, Fangyu Liu, Julian Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, Kristina Toutanova.
1. **[ResNet](https://huggingface.co/docs/transformers/model_doc/resnet)** (from Microsoft Research) released with the paper [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
1. **[RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta)** (from Facebook), released together with the paper [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
1. **[RoFormer](https://huggingface.co/docs/transformers/model_doc/roformer)** (from ZhuiyiTechnology), released together with the paper [RoFormer: Enhanced Transformer with Rotary Position Embedding](https://arxiv.org/abs/2104.09864) by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
18 changes: 18 additions & 0 deletions scripts/supported_models.py
@@ -722,6 +722,24 @@
'susnato/phi-1_5_dev',
],
},
'pix2struct': {
# Image-to-text
'image-to-text': [
'fxmarty/pix2struct-tiny-random',
'google/pix2struct-textcaps-base',
],

# Visual Question Answering (VQA)
'visual-question-answering': [
'google/deplot',
'google/pix2struct-docvqa-base',
'google/pix2struct-widget-captioning-base',
'google/pix2struct-ai2d-base',
'google/pix2struct-chartqa-base',
'google/pix2struct-screen2words-base',
'google/pix2struct-infographics-vqa-base',
],
},
'roberta': {
# Feature extraction
'feature-extraction': [
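Note (not part of this diff): once converted to ONNX, the image-to-text checkpoints listed above run through the existing `image-to-text` pipeline. A minimal usage sketch, where the model ID and image URL are placeholders and the checkpoint is assumed to have been converted:

```js
import { pipeline } from '@xenova/transformers';

// Placeholder ID: in practice this would point at an ONNX-converted copy of
// google/pix2struct-textcaps-base (or another image-to-text checkpoint above).
const captioner = await pipeline('image-to-text', 'google/pix2struct-textcaps-base');

// Placeholder URL; any image/screenshot the pipeline can load works here.
const output = await captioner('https://example.com/screenshot.png');

// Expected shape: [{ generated_text: '...' }]
console.log(output);
```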
77 changes: 72 additions & 5 deletions src/models.js
@@ -374,6 +374,15 @@ async function seq2seqForward(self, model_inputs) {
decoderFeeds.encoder_attention_mask = model_inputs.attention_mask
}

if (self.decoder_merged_session.inputNames.includes('decoder_attention_mask')) {
// TODO: When we perform parallelism, we must adjust attention mask depending on
// location of pad token
decoderFeeds.decoder_attention_mask = new Tensor(
'int64',
new BigInt64Array(model_inputs.decoder_input_ids.data.length).fill(1n),
model_inputs.decoder_input_ids.dims,
)
}
preparePositionIds(self.decoder_merged_session, decoderFeeds, use_cache_branch);
self.addPastKeyValues(decoderFeeds, past_key_values);

@@ -437,7 +446,9 @@ function seq2seqStartBeams(self, inputTokenIds, generation_config, numOutputToke
}

if (requires_attention_mask) {
start.attention_mask = prepareAttentionMask(self, tokens);
start.attention_mask =
generation_config.attention_mask
?? prepareAttentionMask(self, tokens);
}

beams.push(start);
@@ -981,7 +992,7 @@ export class PreTrainedModel extends Callable {
* @typedef {Object} DecoderOutput
*
* Generates text based on the given inputs and generation configuration using the model.
* @param {Tensor|Array|TypedArray} inputs An array of input token IDs.
* @param {Tensor|Array|TypedArray|Object} inputs An array of input token IDs.
* @param {Object|GenerationConfig|null} generation_config The generation configuration to use. If null, default configuration will be used.
* @param {Object|null} logits_processor An optional logits processor to use. If null, a new LogitsProcessorList instance will be created.
* @param {Object} options options
@@ -1006,8 +1017,8 @@ export class PreTrainedModel extends Callable {
MODEL_WITH_LM_HEAD_MAPPING_NAMES.get(modelType)
?? MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING_NAMES.get(modelType)
?? MODEL_FOR_SPEECH_SEQ_2_SEQ_MAPPING_NAMES.get(modelType)
// ?? MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING_NAMES.get(modelType) // TODO
?? MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES.get(modelType);
?? MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES.get(modelType)
?? MODEL_FOR_TEXT_TO_SPECTROGRAM_MAPPING_NAMES.get(modelType);

if (possibleInfo) {
// TODO: support multiple possible classes
@@ -1017,7 +1028,7 @@ export class PreTrainedModel extends Callable {
}

if (!(inputs instanceof Tensor) && !isTypedArray(inputs) && !Array.isArray(inputs)) {
throw Error(`\`inputs\` must be a Tensor, TypedArray, or Array, but is "${inputs.constructor.name}".`);
throw Error(`\`inputs\` must be a Tensor, TypedArray, or Array, but is "${inputs?.constructor?.name}".`);
}

let input_ids_seq_length;
@@ -3044,6 +3055,61 @@ export class VisionEncoderDecoderModel extends PreTrainedModel {
}
//////////////////////////////////////////////////

export class Pix2StructPreTrainedModel extends PreTrainedModel { }

/**
* A conditional generation model with a language modeling head. Can be used for sequence generation tasks.
*/
export class Pix2StructForConditionalGeneration extends Pix2StructPreTrainedModel {
main_input_name = 'flattened_patches';

/**
* Creates a new instance of the `Pix2StructForConditionalGeneration` class.
* @param {Object} config The configuration object specifying the hyperparameters and other model settings.
* @param {Object} session The ONNX session containing the encoder model.
* @param {any} decoder_merged_session The ONNX session containing the merged decoder model.
* @param {Object} generation_config Configuration object for the generation process.
*/
constructor(config, session, decoder_merged_session, generation_config) {
super(config, session);
this.decoder_merged_session = decoder_merged_session;
this.generation_config = generation_config;

const textConfig = this.config.text_config;
this.num_encoder_layers = this.num_decoder_layers = textConfig.num_layers;
this.num_encoder_heads = this.num_decoder_heads = textConfig.num_heads;
this.encoder_dim_kv = this.decoder_dim_kv = textConfig.d_kv;
}

/**
* Generates outputs based on input and generation configuration.
* @param {Object} inputs Input data for the model.
* @param {Object} generation_config Configuration object for the generation process.
* @param {Object} logits_processor Optional logits processor object.
* @returns {Promise<Object>} Promise object represents the generated outputs.
*/
async generate(
inputs,
generation_config = null,
logits_processor = null,
) {
const { flattened_patches, attention_mask } = inputs;

// Create generation config object
generation_config = this._get_generation_config(generation_config);

// Run encoder-decoder generation, feeding the flattened image patches to the encoder
const outputs = await super.generate(flattened_patches, {
...generation_config,
decoder_input_ids: [this.config.pad_token_id],
attention_mask,
}, logits_processor);

return outputs;
}

}

//////////////////////////////////////////////////
// CLIP models
export class CLIPPreTrainedModel extends PreTrainedModel { }
@@ -5326,6 +5392,7 @@ const MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES = new Map([

const MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES = new Map([
['vision-encoder-decoder', ['VisionEncoderDecoderModel', VisionEncoderDecoderModel]],
['pix2struct', ['Pix2StructForConditionalGeneration', Pix2StructForConditionalGeneration]],
]);

const MODEL_FOR_DOCUMENT_QUESTION_ANSWERING_MAPPING_NAMES = new Map([
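Note (not part of this diff): a lower-level sketch of how the new `generate` override above expects to be called, i.e. with the object produced by the processor rather than a bare tensor. It assumes the Pix2Struct processor added alongside this PR returns `{ flattened_patches, attention_mask }` and that the checkpoint has been converted to ONNX; the model ID and image URL are placeholders.

```js
import {
    AutoProcessor,
    AutoTokenizer,
    Pix2StructForConditionalGeneration,
    RawImage,
} from '@xenova/transformers';

const model_id = 'google/pix2struct-textcaps-base'; // placeholder for a converted checkpoint

const processor = await AutoProcessor.from_pretrained(model_id);
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const model = await Pix2StructForConditionalGeneration.from_pretrained(model_id);

const image = await RawImage.fromURL('https://example.com/screenshot.png'); // placeholder

// Assumed to return { flattened_patches, attention_mask }
const inputs = await processor(image);

// The override above unpacks this object and forwards attention_mask
// through generation_config down to seq2seqStartBeams.
const output_ids = await model.generate(inputs);

const [text] = tokenizer.batch_decode(output_ids, { skip_special_tokens: true });
console.log(text);
```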
23 changes: 19 additions & 4 deletions src/pipelines.js
@@ -1743,15 +1743,30 @@ export class ImageToTextPipeline extends (/** @type {new (options: TextImagePipe
const isBatched = Array.isArray(images);
const preparedImages = await prepareImages(images);

const { pixel_values } = await this.processor(preparedImages);
const inputs = await this.processor(preparedImages);

let batchedInputs = [];

const main_input = inputs[this.model.main_input_name];
if (this.model.config.model_type === 'pix2struct') {
const batch_size = main_input.dims[0];
for (let i = 0; i < batch_size; ++i) {
const items = {};
for (const key in inputs) {
items[key] = inputs[key][i].unsqueeze(0);
}
batchedInputs.push(items);
}
} else {
batchedInputs = main_input.unsqueeze(1);
}

const toReturn = [];
for (const batch of pixel_values) {
batch.dims = [1, ...batch.dims]
for (const batch of batchedInputs) {
const output = await this.model.generate(batch, generate_kwargs);
const decoded = this.tokenizer.batch_decode(output, {
skip_special_tokens: true,
}).map(x => ({ generated_text: x.trim() }))
}).map(generated_text => ({ generated_text }))
toReturn.push(decoded);
}

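Note (not part of this diff): with the per-example slicing above, each `pix2struct` example keeps its `flattened_patches` and `attention_mask` paired, so batched calls go through the same pipeline path. A sketch of the batched case, with placeholder model ID and URLs:

```js
import { pipeline } from '@xenova/transformers';

const captioner = await pipeline('image-to-text', 'google/pix2struct-textcaps-base'); // placeholder

const outputs = await captioner([
    'https://example.com/screenshot-1.png', // placeholder URLs
    'https://example.com/screenshot-2.png',
]);

// One array of { generated_text } objects per input image, e.g.
// [ [ { generated_text: '...' } ], [ { generated_text: '...' } ] ]
console.log(outputs);
```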