Make vector times matrix faster #1937
Conversation
It has to be 8*i, as this is not C: the offset needs to be + i*sizeof(...)
…On Thu, 31 Oct 2024, 23:17 Tommy Hofmann, ***@***.***> wrote:
@fieker It is working well for vector * matrix, but I am having trouble in the other case. I guess he does not like my
bb.rows = reinterpret(Ptr{Ptr{ZZRingElem}}, pointer([pointer(be) + i for i in 0:length(b) - 1]))
but I am not sure what I am doing wrong.
julia> A = matrix(ZZ, 2, 2, [1, 2, 3, 4]); b = ZZRingElem[10, 0];
julia> A * b
2-element Vector{ZZRingElem}:
7690
0
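The wrong result above comes from the pointer arithmetic in the quoted snippet: the offset is in bytes, not elements, as the first reply points out. A hedged sketch of the corrected row-pointer construction, reusing the names `bb`, `be`, and `b` from the snippet (this is an illustration, not the code that was actually merged):

```julia
# Sketch only: each entry must advance the data pointer by
# sizeof(ZZRingElem) (8 bytes on 64-bit), not by 1 byte.
row_ptrs = [pointer(be) + i*sizeof(ZZRingElem) for i in 0:length(b) - 1]
bb.rows = reinterpret(Ptr{Ptr{ZZRingElem}}, pointer(row_ptrs))
# Note: row_ptrs must stay GC-rooted for as long as bb is in use.
```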
|
ah right, thanks |
I am surprised that this is faster than just calling flint in these cases. Can you (once it works) post some benchmarks here? |
It is still calling flint, but eventually |
Timings for
Here is the same for the new version versus
Here is the same for
So we might have to do some tuning for small dimensions. |
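Such tuning usually takes the form of a dimension cutoff below which the new method is skipped (a cutoff is in fact added later in the thread). A hypothetical sketch, where the threshold value and the helper names are made up for illustration:

```julia
# Made-up threshold; the real value would come from benchmarking.
const NEW_METHOD_CUTOFF = 50

# Sketch: dispatch to the existing generic path for small inputs,
# where the conversion overhead of the new method dominates.
function mul_vec(A, b)
  if max(size(A)...) < NEW_METHOD_CUTOFF
    return _old_mul(A, b)       # hypothetical name for the existing path
  end
  return _new_unsafe_mul(A, b)  # hypothetical name for the new path
end
```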
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
##           master    #1937      +/-   ##
==========================================
- Coverage   88.00%   87.92%    -0.09%
==========================================
  Files          99       99
  Lines       36361    36402      +41
==========================================
+ Hits        31999    32005       +6
- Misses       4362     4397      +35
|
@@ -1784,13 +1784,69 @@ end
addmul!(z::ZZMatrixOrPtr, a::ZZMatrixOrPtr, b::Integer) = addmul!(z, a, flintify(b))
addmul!(z::ZZMatrixOrPtr, a::IntegerUnionOrPtr, b::ZZMatrixOrPtr) = addmul!(z, b, a)
function _very_unsafe_convert(::Type{ZZMatrix}, a::Vector{ZZRingElem}, row = true) |
function _very_unsafe_convert(::Type{ZZMatrix}, a::Vector{ZZRingElem}, row = true)
function _very_unsafe_convert(::Type{ZZMatrix}, a::Vector{ZZRingElem}, ::Val{row} = Val(true)) where {row}
this should push performance ever so slightly further, as it eliminates the runtime branch on `if row`
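To see why this removes the branch: with `Val`, the flag becomes a type parameter, so Julia compiles a separate specialization for each value and the conditional is resolved at compile time. A minimal, self-contained illustration (not the actual Nemo code):

```julia
# Runtime branch: `row` is an ordinary Bool checked on every call.
shape_dyn(n, row::Bool = true) = row ? (1, n) : (n, 1)

# Compile-time branch: `row` is a type parameter, so each of
# shape_val(n, Val(true)) and shape_val(n, Val(false)) compiles to
# straight-line code with no conditional.
shape_val(n, ::Val{row} = Val(true)) where {row} = row ? (1, n) : (n, 1)
```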
I doubt this is measurable
@thofma I am unsure how to read your tables. What are the floating point numbers in it: times? ratios? Since you write "The bigger the number, the faster the new method." it sounds like perhaps "old time divided by new time", i.e. "by which factor did we get faster"? And then a value below 1 means the new method is slower? If that guess is right I think I understand the first table. For the second table you write
What does that mean? What is being compared here exactly? |
The guess is correct.
Hopefully clarified it to
|
I added a cutoff that decides when the new method is used. Maybe @fieker can have a quick glance? |
@thofma could you move the |
done |
Another maybe stupid question: wouldn't it be even faster to just do a similar conversion from vectors to matrices inside of flint? Aka leave everything in Nemo the same and just move everything from here to flint? (I know that this needs c code and stuff, but could be considered before you do this here for all of the matrix types) |
On Wed, Nov 20, 2024 at 02:39:16PM -0800, Lars Göttgens wrote:
Another maybe stupid question: wouldn't it be even faster to just do a
similar conversion from vectors to matrices inside of flint? Aka leave
everything in Nemo the same and just move everything from here to
flint? (I know that this needs c code and stuff, but could be
considered before you do this here for all of the matrix types)
Don't think so: this would require, in c, to deal with a julia array of
fmpz which are a julia struct. In julia, it is fast and easy (and
possibly dangerous) to just extract the .d entry which is the fmpz.
My guess is you might be able to do this for Vector{Int} and so on, but I do
not believe it will be (seriously) faster than in julia.
|
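The `.d` extraction mentioned above can be sketched as follows. This is a hedged illustration, not Nemo's actual code: it assumes only what the comment states, namely that each `ZZRingElem` is a Julia struct whose `.d` field holds the fmpz word, which is what makes the Julia-side conversion cheap and a C-side one awkward.

```julia
# Sketch: ZZRingElem is a mutable struct, so the vector stores
# references; pulling out the .d field of each element yields the raw
# fmpz words that flint expects. Fast, but dangerous: the original
# elements must stay alive (GC-rooted) while the words are in use.
fmpz_words(v::Vector{ZZRingElem}) = Int[x.d for x in v]
```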
Sure, anyone should feel free to implement this directly in C. But unless that happens this week and there is a new flint release this week, I will move ahead with the approach here. @fieker are you happy with the changes here? |