v0.10.0
What's Changed
Breaking Changes
- Binary ops (`add`, `sub`, `div`, `mul`, `maximum`, `minimum`) now take ownership of rhs by @coreylowman in #268 (example below)
- `backward()` only allows 0d tensors now by @coreylowman in #206
- `Clone` now keeps the same id, removing `Tensor::duplicate` by @coreylowman in #249
- Multi axis reductions (example below)
  - See docs
  - #189, #190, #194
  - Reduction functions now can reduce across any axis/axes: `mean`, `sum`, `max`, `min`, `stddev`, `var`, `softmax`, `log_softmax`, and `logsumexp`
  - Remove `-1` from valid axes, add `trait HasLastAxis` to use in generic functions instead
  - Adding `normalize` function that normalizes across any axis
  - Removing single axis reduction functions `fn *_axis()`: `mean_axis`, `sum_axis`, `max_axis`, `min_axis`, `normalize_axis`, `std_axis`, `var_axis`
  - Rename `HasAxis` to `HasAxes`
  - Add `trait BroadcastTo` (example below)
    - Remove `trait Broadcast1`, `trait Broadcast2`, `trait Broadcast3`, `trait Broadcast4`
  - Add `trait Reduce`/`trait ReduceTo`
    - Remove `trait Reduce1`
- Batched select & select consistency (example below)
  - See docs
  - Renaming `SelectTo`, using `SelectTo` for batched select by @coreylowman in #217
  - Add Batched Select for devices and tensor_ops by @coreylowman in #182
- Reduce things in prelude by @coreylowman in #209
- Renaming `FlattenImage` to `Flatten2D` by @coreylowman in #243
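
A minimal sketch of the binary-op ownership change, using the free functions named above and the `tensor()` helper added in this release. Since rhs is now moved, cloning (which keeps the same id, per #249) is the way to reuse it:

```rust
use dfdx::prelude::*;

fn main() {
    let a: Tensor1D<3> = tensor([1.0, 2.0, 3.0]);
    let b: Tensor1D<3> = tensor([4.0, 5.0, 6.0]);

    // `add` now takes its rhs by value; cloning first keeps `b` usable.
    let c = add(a, b.clone());
    // `b` was cloned above, so it can still be moved into `mul` here.
    let _d = mul(c, b);
}
```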
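A sketch of the reworked reductions. It assumes the reduced axes are inferred from the annotated output type, in the style the new `Reduce`/`ReduceTo` traits suggest; see the linked docs for the exact syntax:

```rust
use dfdx::prelude::*;

fn main() {
    let t: Tensor2D<2, 3> = tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]);

    // Reduce one axis: the annotated output shape picks which axis goes away.
    let _cols: Tensor1D<3> = t.clone().sum();
    // Reduce every axis at once, down to a scalar.
    let _total: Tensor0D = t.sum();
}
```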
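`BroadcastTo` replaces the numbered `Broadcast1`..`Broadcast4` traits with a single output-type-driven method. A minimal sketch, assuming the method is named `broadcast()`:

```rust
use dfdx::prelude::*;

fn main() {
    let a: Tensor1D<3> = tensor([1.0, 2.0, 3.0]);
    // The annotated output type determines which axes are added.
    let _b: Tensor2D<5, 3> = a.broadcast();
}
```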
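And a sketch of the unified select: the same `SelectTo` machinery covers both a single index and a batch of indices along the first axis. The exact index argument types are assumptions based on the PRs above:

```rust
use dfdx::prelude::*;

fn main() {
    let t: Tensor2D<3, 4> = tensor([[0.0; 4]; 3]);

    // A single index removes the selected axis...
    let _row: Tensor1D<4> = t.clone().select(&1);
    // ...while an array of indices produces a batched result.
    let _rows: Tensor2D<2, 4> = t.select(&[0, 2]);
}
```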
New features
- `Arc` in Tensors instead of `Rc` by @caelunshun in #236
- `powi()` and `powf()` functions by @coreylowman in #167 (example below)
- `no_std` support
  - See feature flags docs
  - Remove num-traits, no default features on dependencies by @coreylowman in #200
  - Adding intel-mkl feature and removing the 4 `mkl-*` features by @coreylowman in #239
  - Adding module that has docs for feature flags by @coreylowman in #240
  - Adding "numpy" feature to make numpy & npz optional by @coreylowman in #241
  - Adding `#![no_std]` support via `no_std_compat` by @coreylowman in #244
  - Adding `default-features = false` to dependencies by @coreylowman in #257
- Adding Axis permutations via `trait PermuteTo` (example below)
- Adding `trait ModuleMut` (example below)
  - See docs
  - #225
  - Removing Module super traits by @coreylowman in #223
  - Rework Dropout/DropoutOneIn to use ModuleMut by @coreylowman in #226
- Adding decoupled/L2 weight decay in optimizers (example below)
  - See docs
  - add HasArrayData to GradientProvider by @cBournhonesque in #261
  - Add weight decay to SGD by @cBournhonesque in #258
  - Adding weight_decay to Adam by @coreylowman in #275
  - Adding weight decay to RMSprop by @coreylowman in #276
- Adding `nn::Transformer` in #175, #173, #180
  - See docs
- Adding `nn::MinPool2D`, `nn::MaxPool2D`, `nn::AvgPool2D` by @coreylowman in #214 (example below)
  - See docs
- Adding `nn::MinPoolGlobal`, `nn::MaxPoolGlobal`, `nn::AvgPoolGlobal` by @coreylowman in #216
  - See docs
- Adding `nn::BatchNorm2D` by @coreylowman in #228 (example below)
  - See docs
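
A quick sketch of the new power functions from #167 (method-call form assumed):

```rust
use dfdx::prelude::*;

fn main() {
    let x: Tensor1D<3> = tensor([1.0, 2.0, 3.0]);
    let _squared = x.clone().powi(2); // integer exponent
    let _scaled = x.powf(0.5);        // floating point exponent
}
```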
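`PermuteTo` follows the same output-type-driven pattern as `BroadcastTo`; a minimal sketch, assuming the method is named `permute()`:

```rust
use dfdx::prelude::*;

fn main() {
    let a: Tensor2D<2, 3> = tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]);
    // The annotated output type selects the axis permutation (here, a transpose).
    let _b: Tensor2D<3, 2> = a.permute();
}
```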
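How `ModuleMut` splits training from inference, sketched with `Dropout`, which needs `&mut self` to advance its rng. The tuple model and `Default` construction follow dfdx's existing conventions; the exact layer list is illustrative, and the immutable forward being a no-op for Dropout is assumed from the rework in #226:

```rust
use dfdx::prelude::*;

// A small illustrative MLP with a dropout layer in the middle.
type Mlp = (Linear<4, 8>, ReLU, Dropout, Linear<8, 2>);

fn main() {
    let mut model: Mlp = Default::default();
    let x: Tensor1D<4> = tensor([1.0, 2.0, 3.0, 4.0]);

    // Training: forward_mut lets Dropout mutate its rng state.
    let _train_out = model.forward_mut(x.clone());
    // Inference: the immutable forward treats Dropout as a no-op.
    let _eval_out = model.forward(x);
}
```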
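Configuring the new weight decay, sketched for SGD. The `WeightDecay` enum with `L2` and `Decoupled` variants, and the `SgdConfig` field names, are assumptions based on the PRs above:

```rust
use dfdx::prelude::*;

// Hypothetical model type just to parameterize the optimizer.
type Model = Linear<4, 2>;

fn main() {
    // L2 folds the penalty into the gradients; Decoupled subtracts it from
    // the weights directly during the update (AdamW-style).
    let _opt: Sgd<Model> = Sgd::new(SgdConfig {
        lr: 1e-2,
        momentum: Some(Momentum::Nesterov(0.9)),
        weight_decay: Some(WeightDecay::L2(1e-4)),
    });
}
```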
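A combined sketch of the new pooling and batch norm layers in a small conv stack. The pool generic parameters are assumed to be (kernel, stride, padding), and the shapes in the comment are illustrative; `forward_mut` is used because `BatchNorm2D` updates its running statistics during training:

```rust
use dfdx::prelude::*;

// Conv stack: conv + batch norm + activation, strided max pool, global average.
type Net = (
    (Conv2D<3, 8, 3>, BatchNorm2D<8>, ReLU),
    MaxPool2D<2, 2, 0>,
    AvgPoolGlobal,
);

fn main() {
    let mut net: Net = Default::default();
    let x: Tensor3D<3, 28, 28> = TensorCreator::zeros();
    // Assumed shapes: (3, 28, 28) -> (8, 26, 26) -> (8, 13, 13) -> (8,)
    let _y = net.forward_mut(x);
}
```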
Misc changes
- Add `tensor()` function as a convenient way to make tensors from arrays by @coreylowman in #161 (example below)
  - See docs
- Remove allocation in dropout implementation by @coreylowman in #164
- Removing `Tensor::OwnedTape` by @coreylowman in #197
- Revamping examples/ by @coreylowman in #205
- Conv cleanup
  - Moving conv into device and cleaning up a bit by @coreylowman in #212
  - Minifying conv impls by @coreylowman in #213
  - Changing conv2d and conv2d_batched to methods of tensors by @coreylowman in #221
  - Replacing conv2d implementation with matmuls by @coreylowman in #237
- Fix typos by @cBournhonesque in #235
- Combining multiple where clauses with const generics into a single one by @coreylowman in #264
- Checking for null ptr in AllocateZeros by @coreylowman in #271
- Reducing allocations in `map_df_uses_fx` by @coreylowman in #272
- Adding `with_empty_tape` and `with_diff_tape` by @coreylowman in #274
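
The `tensor()` helper from #161 infers the tensor type from the array literal; a quick sketch:

```rust
use dfdx::prelude::*;

fn main() {
    // The array's shape determines the tensor type.
    let _a: Tensor1D<3> = tensor([1.0, 2.0, 3.0]);
    let _b: Tensor2D<2, 2> = tensor([[1.0, 2.0], [3.0, 4.0]]);
}
```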
New Contributors
- @cBournhonesque made their first contribution in #235
- @caelunshun made their first contribution in #236
Full Changelog: v0.9.0...v0.10.0