I'm not sure what exactly the right behavior should be, but I've been running into a number of surprises around Distributed with MKL and threading. Ideally, whatever the default threading behavior is for OpenBLAS should also apply to MKL.
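For context, the BLAS thread count can be inspected and overridden through `LinearAlgebra.BLAS`; this is a minimal sketch of the knobs involved (with MKL.jl loaded, these same calls dispatch to the MKL backend instead of OpenBLAS):

```julia
using LinearAlgebra

# Query how many threads the currently loaded BLAS backend uses.
# With the default OpenBLAS this is typically sized to the machine;
# loading MKL.jl swaps the backend behind the same API.
before = BLAS.get_num_threads()

# Pin BLAS to one thread, e.g. before adding Distributed workers,
# so the worker processes don't oversubscribe the machine's cores.
BLAS.set_num_threads(1)
after = BLAS.get_num_threads()
```

The surprise described above is that the effective default here can differ between OpenBLAS and MKL, so code relying on one backend's default may behave differently under the other.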
According to a stacktrace from a hung DistributedNext CI job this task was
causing the process to hang before exiting:
```julia
InterruptException()
_jl_mutex_unlock at C:/workdir/src\threading.c:1012
jl_mutex_unlock at C:/workdir/src\julia_locks.h:80 [inlined]
ijl_task_get_next at C:/workdir/src\scheduler.c:458
poptask at .\task.jl:1163
wait at .\task.jl:1172
task_done_hook at .\task.jl:839
jfptr_task_done_hook_98752.1 at C:\hostedtoolcache\windows\julia\nightly\x64\lib\julia\sys.dll (unknown line)
jl_apply at C:/workdir/src\julia.h:2233 [inlined]
jl_finish_task at C:/workdir/src\task.c:338
start_task at C:/workdir/src\task.c:1274
From worker 82: fatal: error thrown and no exception handler available.
Unhandled Task ERROR: InterruptException:
Stacktrace:
[1] poptask(W::Base.IntrusiveLinkedListSynchronized{Task})
@ Base .\task.jl:1163
[2] wait()
@ Base .\task.jl:1172
[3] wait(c::Base.GenericCondition{ReentrantLock}; first::Bool)
@ Base .\condition.jl:141
[4] wait
@ .\condition.jl:136 [inlined]
[5] put_buffered(c::Channel{Any}, v::Int64)
@ Base .\channels.jl:420
[6] put!(c::Channel{Any}, v::Int64)
@ Base .\channels.jl:398
[7] put!(rv::DistributedNext.RemoteValue, args::Int64)
@ DistributedNext D:\a\DistributedNext.jl\DistributedNext.jl\src\remotecall.jl:703
[8] (::DistributedNext.var"#create_worker##11#create_worker##12"{DistributedNext.RemoteValue, Float64})()
@ DistributedNext D:\a\DistributedNext.jl\DistributedNext.jl\src\cluster.jl:721
```
I replaced it with a call to `timedwait()`, which is much simpler than spawning an extra task.
I also wanted to report the bug I described over at SciML/LinearSolve.jl#427 (comment) here so it can be discussed.