This repository has been archived by the owner on Jan 3, 2023. It is now read-only.
* Fix broadcast v1 reference (#3880)
  * Added reproducer for issue with broadcast v1
  * Make reference broadcast work with V1 broadcast
  * Deprecate runtime::Tensor::copy_from
* Force Gelu decompose on CPU (#3887)
* Round the right bit with denorms (#3885)
  * Round the right bit with denorms
  * Rounding to inf
* Attribute visitor (#3579)
  * Sketch of attribute walker
  * Review comments
  * merge error?
  * Remove unused method
  * simplify, make some ser tests work
  * Don't look for keys that aren't there
  * Factory registry, more ops visited, generic ser/dser start
  * More merge
  * cleanup
  * Adapter for enums
  * Compiler error
  * Test of user-defined op
  * Simplify enum name pairing
  * Update distributed.hpp
  * Review comments
  * compiler error
  * Direct access to non-primitive types from adapters
  * Define and export type info
  * attr enums, AvgPool*, vectors
  * Cleanup
  * some comments
  * Allow type info to be used as a key.
  * Don't leave output serialization shapes set.
  * Auto adapter
  * More ops, adapters
  * Missing symbol
  * Remove PartialShape and element::Type methods from visitor
  * Fix type info
  * Remove unused variable
  * Simplify
  * namespace error
  * exports
  * Uniform names
  * Some better names
  * More name cleanup, simplify visitor implementation
  * Fix template, add test
  * Revert serializer
  * Add instantiations
  * Work-around gcc issue
  * VS exports
  * VS exports
  * windows export
  * vs
  * vs
  * vs
  * vs
  * Simplify
  * vs
  * vs
  * Add some missing attributes
  * Missing factories
  * Merge error
  * Fix Add factories
  * Missed type
* [FUSED] Add new LogSoftmax fused op (#3867)
  * LogSoftmax introduced
  * Added LogSoftmax to serializer
  * Fixed style
  * Fixed CMakeLists style
  * code review remarks introduced
  * Code review remarks introduced
* [ONNX] Importer should use fused op for MatMul (#3842)
  * [ONNX] Importer should use fused op for MatMul
  * Fix a bug in fused matmul op
  * Don't reshape matmul inputs to at least 2D any more
* [SPEC] Add auto_broadcast parameter to SquaredDifference (#3856)
  * [SPEC] Add auto_broadcast parameter to SquaredDifference
  * Rename set_autobroadcast -> set_autob
* [Spec][FusedOp] Adjust SpaceToDepth fused op to specification (#3862)
  * Added support mode for SpaceToDepth
  * Added unit tests
  * Fixed styles
  * Revert changes in prototxt files
* Force AutoBroadcast defaults (#3878)
  * Force AutoBroadcast to be specified at the op level since no default is correct for all ops.
  * exports
* Added constant folding for binary ops (#3895)
  * Modify Gather constant folding to support v1 op.
  * Address PR feedback.
* Update fused ops groupconvolution, gelu and layernorm to be dynamic friendly (#3876)
  * set output et
  * set output et
  * overwrote validate and infer
* Add full path to gtest for build via ninja (#3882)
* [FUSED] Add reciprocal op (#3851)
  * [FUSED] Add reciprocal op
  * Review Fix #1
  * Move operator op::v1 -> op
  * Fix serializer
  * Review Fix I
* [SPEC] Add new v1::FloorMod operator (#3852)
  * [SPEC] Add new v1::FloorMod operator
  * Review Fix I
* [MLIR] Fix MLIR build on mac OS (#3896)
  * Fix MLIR build on mac OS
  * Style
  * Style
* [MLIR] Bump MLIR commit to c61db4bb (#3879)
  * WIP
  * WIP
  * WIP
  * WIP
  * style
  * WIP
  * WIP
  * Add err msg
  * Fix headers and cleanup
* Bug fix: incorrect shape validation logic (#3897)
* Allow for overriding functions in visualization (#3900)
* Add ReplaceSlice to ZeroDimTensorElimination pass (#3899) (#3910)
  * Add ReplaceSlice to ZeroDimTensorElimination pass
  * style
* Default constructor needs to init autob (#3913)
* Implementation of CrossEntropy and CrossEntropyBackprop as fused Ops (#3818)
  * Implementation of CrossEntropy and CrossEntropyBackprop as fused Ops
  * unit test case for CE fprop; fix bug in decompose_op
  * WIP debug PDPD unit test failure
  * fixed broadcasting issue
  * fix bdcast issue for multi dim tensor
  * utilities to restore the original tensor shape
  * i) style fix ii) rename variables
  * i) unit test for multiple dimensions ii) refactor create_mask into a separate function
  * fixed unit tests
  * fix style
  * set output element type to dynamic in pre_validate and infer shape
  * disable CE with one hot unit test on PlaidML
  * add CE op to fused_op_tbl
  * add serializer support for CE and CE Backprop
* Update ToC to better match docplan spreadsheet (#3846)
  * New ToC
  * Working on docplan
  * Clean up for toc
  * Link to existing APIs on quantization doc
  * Better align topics with docplan ToC; add section for dyn shapes
  * Title casing to be consistent
  * PR reviews
  * New build preview
  * Add default opset version, new versioning schema
  * Remove duplicate file causing doc build warning
* Fix CSS rendering issues (#3921)
* Fix for the bug with as_type_ptr for TensorIterator::Input/Output desc (#3906)
  * Updated unit test to reproduce a bug
  * Code style
  * Add exports
  * Added missed export
* Bug fix in conv v1 shape inference (#3912)
* [SPEC] Add new v1::VariadicSplit operator (#3868)
  * [SPEC] Add new v1::VariadicSplit operator
  * Add missing namespace, fix a typo in doc
  * Apply suggestions from code review (Co-Authored-By: Michał Karzyński <[email protected]>)
  * Style fix
  * Set all of the inputs to be relevant to output shape
  * Set output type if number of outputs is known
  * Add node validation for known input
* Fix for windows ninja (#3917)
  * Fix for windows ninja
  * Fix for centos build
  * Remove fix for centos
* Update ONNX importer to use v1 version of Softmax (#3894)
  * Added downgrade pass for Softmax.
  * Updated Softmax op to v1.
  * Created vector with the right capacity.
  * Include numeric header to enable std::iota function
  * Removed unused numeric header from the old file
  * Fix includes style
* Fix shape inference of TensorIterator body (#3922)
  * fix for shape inference of tensor iterator body
  * updated unit test for case end = -2
  * indexes in unit tests
  * Updated formula for num_iterations
* resolve compiler warning (#3923)
* Added u1 precision for binary weights (#3914)
  * Added U1 precision for binary weights
  * Handle switch cases with u1 type
  * Fixed code style
  * Added convert_to_string support for u1 type
  * Use real C type for u1 type. (Co-Authored-By: Robert Kimball <[email protected]>)
* Fused_op: BatchMatMulTranspose (#3871)
  * Initial commit
  * Add decompose_op and unit-test
  * Style fix
  * Fix CI error
  * Address review comments
  * Remove CPUBatchFusion
  * Address review feedback
  * Address review feedback
  * Added type_prop tests
  * Moved 1 test from cpu to core to keep together
  * Address PR comments
  * Fix style
* Change repository addresses to use SSH (#3889)
* Move CPU only unit tests to the cpu test file (#3919)
* Cyphers/uop (#3903)
  * Address op_tbl issues
  * fix
  * fix
  * fix
  * Cleanup
  * cleanup
  * cleanup
  * More fixes
  * Revert ser changes
  * Compiles
  * opset conversion fixed
  * Fix opset conversion tests
  * Deal with Reciprocal and FloorMod movement
  * Cleanup
  * Remove duplicate enums
  * Experiment
  * experiment
  * Types
  * Reorg around clang 3.9 bug
* Add default constructor to some ops missing them (#3924)
* [SPEC] HardSigmoid adjustments (#3857)
  * Construct HardSigmoid with alpha and beta as inputs
  * Switch to the new HardSigmoid constructor entirely
  * Broadcast with numpy style in hard sigmoid
  * Python bindings adjustment to the new constructor
  * Different way of creating constants
  * Accept scalars instead of 1D vectors for alpha and beta
  * Adjust the python tests to the new HardSigmoid constructor
  * Use v1 ops in fused HardSigmoid
  * Relax the static shape requirement for alpha and beta
  * Fix merge
* CropAndResize op (#3893) (#3925)
  * Stub for CropAndResize
  * Cut and paste
  * Need a cast
* Put all the op header includes in one header file, ops.hpp (#3929)
  * Put all the op header includes in one header file, ops.hpp
  * Update ops.hpp
* Fix compilation issues for default constructors (#3928)
* Make Node's type_info mandatory (#3891)
* Add ReplaceSlice to ZeroDimTensorElimination pass (#3899)
  * Add ReplaceSlice to ZeroDimTensorElimination pass
  * style
* Force Gelu decompose on CPU (#3902)
* Copy rt info (#3934)
* Matmul float type test case for UEP (#3877)
  * Matmul float type test case for UEP (Signed-off-by: suryasidd <[email protected]>)
  * Removed microsoft ops domains and ran clang-format (Signed-off-by: suryasidd <[email protected]>)
* [SPEC] Add OneHot:v1 (#3884)
  * Moved OneHot to v0
  * Introduced OneHot:v1
  * Added shape calculation for OneHot:v1
  * Added element types checking
  * Added output shape tests
  * Added tests to check if inputs are scalars
  * Updated OneHot:v1 doc
  * Implemented OneHot:v1 downgrade pass
  * Using OneHot:v1 in onnx_importer
  * Implemented OneHot:v0 upgrade
  * Fixed OneHot onnx_importer
  * Refactored normalize_axis
  * Added OneHot:v1 serializer
  * Code review remarks introduced
  * Added doc to normalize_axis
* Enable pipelining in CPU Backend (#3916)
  * Enable pipelining in CPU Backend
  * Applying clang-format to my previous commit
  * Changing CPU backend test; executable_can_create_tensor will now return true
* [SPEC] Add support for string as AutoBroadcastSpec (#3909)
  * Support string casting to AutoBroadcastSpec
  * Make string values consistent
* Adding default ctor for Constant (#3938)
  * Adding default ctor
  * Address PR feedback
* Cumulative Sum (#3873)
  * Op definition for cumulative sum
  * WIP reference kernel for cumulative sum
  * unit test case for default cum_sum; additional ctor for cumsum to accept axis as an integer instead of Node type; style fix
  * add serializer support; fix failing unit test case; update Op in the interpreter dispatcher
  * CPU builder and DEX support for CumSum
  * implemented mapping tensor elements to corresponding axis
  * unit test for multiple dims; fix axis in the op definition; support for reference kernel to compute across all axes
  * added support for exclusive and reverse modes; more unit test cases for all modes
  * codegen support for CumSum; disable CumSum unit test for PlaidML
  * Add missing header to codegen stream writer
  * fixed codegen writer
  * change return type of exclusive and reverse to bool
  * support for dynamic shape; support to handle all tensor types in CPU builder
  * add support for interpreter to handle different axis types
  * Style fix
* Fix incorrect uses of `description()` (#3946)
  * Fix incorrect uses of `description()`
  * typo/namespace
* Move non-primitive attribute adapters to adaptee's files (#3949)
  * Move non-primitive attribute adapters to adaptee's files
  * Cast in copy
* Update ONNX importer Gemm to produce MatMul op (#3927)
  * Update ONNX importer Gemm to produce MatMul op
  * Address opset3 bug
* [SPEC][FusedOp] Add Mod operator (#3908)
  * Mod operator introduced
  * Introduced onnx importer, fixed implementation
  * styles applied
  * Refactored assert comment for mod
  * Add failing mod test to plaidml manifest
  * Code review remarks introduced
  * Changed ops used in decompose to v1
  * Moved Mod to op_v1_tbl
* Partially fixed visibility for symbols (Ops, Nodes, Transformations, Matchers) (#3767)
  * Partially fixed visibility for symbols
  * Resolved issues with RTTI and AppleClang
  * style
  * review fixes
  * fixed compilation with msvc 2019
  * Export extra API which is used in other public classes
  * CMAKE: MSVS -> MSVC
  * Fixed template export
  * Fixed compilation flags
  * Fixed default args
  * removed self-inclusion
  * export
  * shape
  * export strides
  * Export all symbols needed for OpenVINO
  * Export
  * disable cpu
  * AxisSet
  * disable warning
  * fix
  * removed second declaration
  * fixed runtime exports
  * Reverted some changes
  * Fixed LNK2005 error on Windows
  * Fixed code style check
  * Fixed EnumAttributeAdapterBase
  * Remove export of template classes
  * Fixed code style for EnumAttributeAdapterBase
  * Fixed for protobuf
* Test cleanups (#3942)
* Documentation for Dynamic Shapes and additional graph construction options (#3930)
  * Initial dynamic shapes doc
  * Basics on dynamic shapes, with example code
  * Add glossary defs and dynamic shapes example
  * Slightly better organization
  * Address make style check failure, maybe
  * Test dynamic shapes doc with 0.27.0-rc.0+9aa81d9
  * Resolve doc build error with new opset versioning
  * Review comments addressed
  * Add theme-relevant revised illustrations from collab_ngai
  * style
  * Style fixes
  * Run make style-apply with clang-format-3.9
* [ONNX] Add CumSum to ONNX importer (#3918)
  * Register CumSum operator in onnx importer
  * Missing whitespace
  * Update CMakeLists.txt
  * ONNX importer - CumSum op init
  * Simple CumSum onnx model
  * ONNX CumSum model simple test
  * Default axis
  * Axis input test
  * Inputs variable
  * Style apply
  * Test 3d exclusive reverse
  * Apply style
  * Add memory header and std namespace
  * Add model_cum_sum tests to plsidml unit_test.manifest
  * Add model_cum_sum tests to plaidml unit_test.manifest
  * Changed default axis type
  * Test model update
  * Style apply
  * Add test for dynamic axis input
* [MLIR] Fused Ops dialect declaration (#3860)
  * WIP
  * WIP
  * WIP
  * All ops
  * Fix layernorm backprop op name
  * WIP: Adding tests
  * WIP: Adding LIT parsing/printing tests
  * WIP
  * Added LSTM cells. Fixed some ops
  * All builder tests
  * PR fixes
  * Fix spacing. Add missing setter to SpaceToDepth
  * Update spaceToDepth lit test
  * PR fixes
  * Build fix
  * Another fix
  * Fixed optional args
* [MLIR] Enable ViewOp in Affine Lowerer (#3911)
  * Map each ng tensor to a linear buffer and a view
  * fix comment
  * Create views only when a value is assigned a buffer id
  * style
  * Fix lit test
* ConstantFolding for v1::StridedSlice operation (#3955)
  * constant folding for strided slice
  * code style
  * Refactoring
  * fix for warning: deleting an unused variable
* Opset1 Definition (#3813)
  * Opset1
  * Added opset1.hpp
  * Added more ops to opset0 and opset1
  * Move opset1.hpp up and remove opset0.hpp
  * Add versioning to more ops
  * Revert to older pass names to keep compatibility for external components
  * Fix compilation errors with codegen
  * merge
  * Added compile-time check for opset
  * Added opset1 tbl
  * Add op_version table of all ops
  * Create factories from op_version_tbl
  * reorg unsupported ops in int backend
  * Added temporary alias for GreaterEqual
  * Add missing case to interpreter enumeration
* Finish opset serializer cleanup (#3939)
* Opset-based opset conversion (#3937)
  * Opset-based opset conversion
  * Add other opset conversion
  * Use ops.hpp
  * Update opset0_tbl.hpp
* Switch interpreter to opset0 + a few extras (#3941)
  * Switch interpreter, gcpu to opset0
  * Remove unused files
  * Give interpreter its own opset
  * style
  * Fix namespace
  * Fix rounding type conversion
  * Work-around for bad clang3.9 bug
  * Work-around
* [SPEC] Add negative axes support for ReverseSequence (#3926)
  * Added negative axes support for ReverseSequence
  * code review remarks introduced
  * Disable reverse sequence for PlaidML tests
  * Fixed styles
  * Fixed axes assignment
  * Fixed normalized axes assignment
* [SPEC] Adjust ConvolutionBackpropData op. (#3935)
  * [SPEC] Adjust ConvolutionBackpropData op:

    ```
    inputs:
      1. filters ---------+
      2. output_delta     | ->  1. data
                          +-->  2. filters
      3. data_batch_shape   ->  3. output_shape (+optional)
    attributes:
      1. strides            ->  1. strides
      2. dilations ------+
      3. pads_begin      |  ->  2. pads_begin
      4. pads_end        |  ->  3. pads_end
                         +-->  4. dilations
                            ->  5. +auto_pad (optional) [PadType::EXPLICIT]
                            ->  6. +output_padding (optional) [zeros]
    ```
  * Review fix I
* [SPEC] ConvertLike op (#3944)
* [Spec] Add 3-input constructor to DetectionOutput (#3966)
  * Add 3-input constructor to DetectionOutput
  * Review comments
* v1::Reshape zero_flag renamed. Default value unset (#3945)
* Add groupconvolution bprop (#3940)
  * add placeholder for conv bprop
  * add constructor, api, serializer and can compile
  * implement decompose_op
  * fix arg num
  * fix and update
  * address comment, clean up and add ut placeholder
  * update ut
  * address comment on groups
* Added explicit dependencies between buildable target and external project (#3962)
* Relax check on LRN for rank requirement to be >=3 (#3952)
  * relax LRN check for the requirement that rank be >= 3
  * rename unit test names
  * Disable lrn unit test with axes for CPU backend
  * remove outdated unit test on rank requirement from type_prop
  * disable newly added lrn unit test in PlaidML
* [SPEC] ReduceLogicalAnd & ReduceLogicalOr (#3874)
  * ReduceLogicalAnd op implementation
  * ReduceLogicalOr op implementation
  * Add basic constant folding support
  * Fix typo
  * Revert "Add basic constant folding support" (this reverts commit 5d14a18)
  * Introduce and use a new base class for logical reductions
  * Constant folding for v1::ReduceLogicalAnd
  * Constant folding for v1::ReduceLogicalOr
  * Obsolete cout removal
* [SPEC] Adjust Split (#3943)
  * Changed axis to Node
  * Added using normalize from validation util
  * refactored split
  * Added type_prop tests to Split
  * Added set_input_is_relevant_to_shape for Split
  * clang style applied
  * Fixed var name
  * Code refactor
  * merge from master, part 2
  * Constructor to provide CI compatibility
  * CI compatibility
  * CI compatibility
  * Updated get_outputs
  * CI compatibility
  * Fixed get_outputs function
* [SPEC] Add DeformablePSROIPooling v1 (#3954)
  * Initial commit
  * Moved DeformablePSROIPooling to v1
  * Moved DeformablePSROIPooling to v1, part 2
  * Added missing fields
  * Added inference shape
  * Added type_prop UT
  * Added serialization
  * Doc + styles applied
  * Revert incorrect changes
  * Revert incorrect changes, part 2
  * Moved to NGRAPH_API
  * integration with master
  * Code review remarks introduced
  * DeformablePSROIPooling updated to new spec
* Add v1 version of Subtract with Numpy broadcasting as default (#3957)
  * V1 version of Subtract with default Numpy autobcast
  * Update op_v1_tbl.hpp with v1 version of Subtract
  * Use v1 of Subtract in ONNX importer
  * Add v1 namespace
  * Update namespace
  * Missing punctuation
  * Add Subtract to opset0 downgrade
  * Add Subtract to opset1 upgrade
  * Add Subtract header to cpu emitter
  * Update serializer
  * Add Subtract to opset_pass tests
  * Use downgrade method
  * Add get_version method
  * Style apply
  * Add v1 Subtract to check opset1
  * Add NGRAPH_API before class name
  * Removed get_version method
  * Separate cases for Subtract and Subtract_v1 in serializer
  * Update op_version_tbl with v1 Subtract
  * NUMPY autobcast for no args constructor
  * Add Subtract_v1 to serializer
* [SPEC] Add constant folding for LogicalNot:v1 (#3961)
  * Added constant folding for LogicalNot
  * Fixed alphabetical order
* Update the tolerance on auto_broadcast_test (#3959)
* Copy RT info for parameters (#3969)
* [SPEC] Add GatherTree:v1 (#3967)
  * GatherTree introduced
  * Added GatherTree type_prop tests
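The Cumulative Sum PR (#3873) above mentions `exclusive` and `reverse` modes. As an illustration (not the nGraph reference kernel itself), here is a minimal Python sketch of the standard 1-D semantics of those two flags, following the usual ONNX-style CumSum definition: `exclusive` shifts the sum so each output excludes its own element, and `reverse` accumulates from the end of the axis.

```python
def cumsum_1d(values, exclusive=False, reverse=False):
    """1-D cumulative sum with ONNX-style exclusive/reverse flags (illustrative sketch)."""
    # reverse: accumulate from the back by flipping, summing, then flipping again
    data = list(reversed(values)) if reverse else list(values)
    out, running = [], 0
    for v in data:
        if exclusive:
            # exclusive: emit the sum of all *preceding* elements
            out.append(running)
            running += v
        else:
            running += v
            out.append(running)
    return list(reversed(out)) if reverse else out
```

For example, `cumsum_1d([1, 2, 3])` gives `[1, 3, 6]`, while the exclusive variant gives `[0, 1, 3]` and the reverse variant `[6, 5, 3]`.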
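The v1::FloorMod operator introduced in #3852 computes a floored-division remainder. As a rough sketch of that semantics (not the nGraph implementation), the result takes the sign of the divisor, i.e. `x - floor(x / y) * y`; this is the same behavior as Python's `%` operator, and differs from C/C++ `%`, which truncates toward zero.

```python
import math

def floor_mod(x, y):
    # Floored modulo: remainder has the sign of the divisor.
    # Equivalent to Python's built-in % for ints and floats.
    return x - math.floor(x / y) * y
```

For example, `floor_mod(-7, 3)` is `2` (since `floor(-7/3) = -3`), whereas a truncated modulo would give `-1`.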
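The OneHot:v1 PR (#3884) adds shape calculation for the op. Conceptually, one-hot encoding inserts a new dimension of size `depth` into the indices shape at the given `axis` (with negative axes counting positions in the *output* shape). The helper below is a hypothetical sketch of that shape rule for illustration only; the name `one_hot_output_shape` is not an nGraph API.

```python
def one_hot_output_shape(indices_shape, depth, axis):
    """Sketch of OneHot output-shape inference: insert a depth-sized dim at axis.

    Negative axis values index positions in the output shape, so axis=-1
    appends the new dimension at the end.
    """
    out = list(indices_shape)
    pos = axis if axis >= 0 else len(out) + axis + 1
    out.insert(pos, depth)
    return out
```

For example, rank-1 indices of shape `[3]` with `depth=4` and `axis=-1` produce an output of shape `[3, 4]`.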