
Tag v1.0 #719

Open
dlfivefifty opened this issue Nov 12, 2024 · 24 comments · May be fixed by #729

Comments

@dlfivefifty
Contributor

Given this package has not had a breaking change in the last 6 years, perhaps it would be worth just tagging v1.0?

@KristofferC
Collaborator

#481 was argued to be breaking. But I think we should release 1.0 to finally get that into a release.

@devmotion
Member

It broke downstream packages (and probably still does) since it was initially put into a semver-nonbreaking release (which was then reverted). But IMO this change should either be released (in a semver-breaking release) or reverted; the current situation of backporting fixes and tagging releases only on the release-0.10 branch is not a good long-term solution.

Personally I'm in favour of releasing this change, it seems reasonable to me. If it turns out to be not feasible/too problematic in practice, it could be reverted in another (breaking?) release; but since it would be opt-in rather than opt-out this time, downstream packages and users would have more time to adapt to this change.

IIRC there was some discussion about including other breaking changes in a 1.0 release, which is probably why the development version on the master branch is 0.11.0-DEV instead of 1.0.0-DEV.

@KristofferC
Collaborator

IIRC there was some discussion about including other breaking changes in a 1.0 release

That hasn't seemed to happen in two years, so at some point you just have to move forward. #481 fixed a lot of issues and seems intuitively right, so I say let's just tag the current state of master as 1.0.

@devmotion
Member

I think one question was whether NaN-safe mode should be the default.

@KristofferC
Collaborator

That seemed to be prohibitively expensive if I recall correctly.

@dlfivefifty
Contributor Author

Why would NaN-safe mode be the default? I think in Julia speed is almost always preferable to safety so there has to be a very strong argument.

IIRC there was some discussion about including other breaking changes in a 1.0 release

There can always be a 2.0 release

@devmotion
Member

Why would NaN-safe mode be the default?

Because IMO the default setting should be safe and not give incorrect results. It's also very difficult for users to realize what the problem is; I've seen many users struggle with this. IMO users should have to opt in if they want to sacrifice correctness/safety for performance, as with e.g. @fastmath or @inbounds. And I'm not the only one who thinks NaN-safe would be a better default, e.g. #451 (comment).

I think in Julia speed is almost always preferable to safety

I strongly disagree. At least without NaN-safe mode you usually realize very clearly that there's a problem, so at least it's not failing silently; one main issue is that many users do not realize that NaN-safe mode would fix their issue (and many don't even know that it exists).

@ajwheeler

ajwheeler commented Nov 13, 2024

I want to strongly second the position that NaN-safe mode should be the default. In my performance-critical code, I leave it off, but I've run into a couple of instances where new users are very turned off by the default behavior and come away with the impression that using ForwardDiff/Julia requires arcane knowledge.

I would add that given its place in Julia's autodiff ecosystem as the most tried-and-true, almost-always-works autodiff package (this is my experience at least), it's especially important that ForwardDiff be safe by default.

@KristofferC
Collaborator

What performance regressions are the people arguing for NaN-safe mode willing to accept? Just so I get an understanding: 2x, 5x, 10x, 50x, 100x, 1000x, infinity?

@ajwheeler

Not speaking for others, but 2x would be acceptable, imho. The number I have in my head (from reading #179) is 10%, but that might not be realistic.

@dlfivefifty
Contributor Author

Maybe I'm not understanding but literally nothing in Julia is "NaN-safe". All of LinearAlgebra.jl just propagates NaNs. I don't understand how a "user" can be confused by this just in ForwardDiff.jl but fine with it in every other package...

@dlfivefifty
Contributor Author

And for every novice turned off by "unexpected" behaviour, there's a non-novice who's going to walk away thinking Julia is just slow if it's 2x slower than it should be...

@KristofferC
Collaborator

That is not really what NaN-safe is referring to here, though.

@dlfivefifty
Contributor Author

OK, can it be clarified what NaN-safe is referring to? I assumed it meant extra checks to throw errors instead of just letting NaNs propagate…

@ajwheeler

I have confirmed that NaN-safe mode incurs a minimal performance penalty when autodiffing through my science case (computing a stellar spectrum). I didn't do a super-principled benchmark, but the @btime numbers are actually (weirdly) slightly better with NaN-safe mode on (10.6 s vs 10.5 s).

Not claiming that this will generalize, but it's one data point to consider.

@ajwheeler

#179 and https://juliadiff.org/ForwardDiff.jl/stable/user/advanced/#Fixing-NaN/Inf-Issues have a good description of what is meant by nan-safe mode. But in short it's dealing with a class of cases where the partials have a mathematically-motivated non-NaN value, but end up NaN for computational/implementation reasons. If you have a mental model of how ForwardDiff works, it's easy to avoid/troubleshoot these situations, but if you don't it could be frustrating.
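To make that concrete, here's a minimal sketch of my own (exact outputs are illustrative and depend on which mode is enabled): a dual whose partials are all zero, i.e. a value that is constant with respect to the differentiated inputs, passed through a function whose derivative blows up at that point.

julia> using ForwardDiff: Dual

julia> d = Dual(0.0, 0.0)   # all-zero partials: constant w.r.t. the inputs
Dual{Nothing}(0.0,0.0)

julia> log(d)               # chain rule multiplies the zero partial by 1/0.0 == Inf
Dual{Nothing}(-Inf,NaN)     # default mode: 0.0 * Inf == NaN

The -Inf value is fine, but mathematically the partial of a constant is still zero; NaN-safe mode is the check that keeps that partial at 0.0 instead of letting the NaN poison everything downstream.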

@KristofferC
Collaborator

In any case, switching on NaN-safe mode by default is not breaking, no? So does this discussion really have to be here?

@devmotion
Member

No, I agree. And if others agree as well, then that's not blocking for a 1.0 release.

A PR for tagging 1.0 has been open for quite some time: #467. And it seems the breaking change on the master branch would also resolve the main criticism in that PR (a breaking release without a breaking change).

@dlfivefifty
Contributor Author

OK, thanks for the clarification, and sorry about the misunderstanding. I think NaN-safe by default is probably the right choice, as it's claimed to be only a 5-10% degradation and it's easy to switch to the fast mode.

@thomvet
Contributor

thomvet commented Nov 18, 2024

I would be in favor of a 1.0 release at this point in time. It's a very stable package (i.e., 1.0 level), and with the inclusion of #481 the breaking release is also justified rather than needless.

I do think that having NaN-safe mode turned on makes sense for this package, given its position as a "ground truth" within the Julia autodiff ecosystem. However, I would appreciate strong signposting in the documentation (and maybe even upon use, unless opted out) that a more performant mode is available. Maybe even with a write-up of #179?

@KristofferC
Collaborator

KristofferC commented Nov 19, 2024

From my memory, the perf impact was much worse, since it turns core operations (like multiplying the partials) from a few SIMD instructions into a whole slew of branchy code:

Non-NaN-safe:

julia> a = ForwardDiff.Dual(1.0,2.0,3.0,4.0,5.0)
Dual{Nothing}(1.0,2.0,3.0,4.0,5.0)

julia> @code_native debuginfo=:none ForwardDiff._mul_partials(a.partials, a.partials, 2.0, 1.0)
	push	rbp
	vbroadcastsd	ymm0, xmm0
	vbroadcastsd	ymm1, xmm1
	vmulpd	ymm0, ymm0, ymmword ptr [rsi]
	vmulpd	ymm1, ymm1, ymmword ptr [rdx]
	mov	rbp, rsp
	mov	rax, rdi
	vaddpd	ymm0, ymm0, ymm1
	vmovupd	ymmword ptr [rdi], ymm0
	pop	rbp
	vzeroupper
	ret

NaN-safe:

julia> a = ForwardDiff.Dual(1.0,2.0,3.0,4.0,5.0)
Dual{Nothing}(1.0,2.0,3.0,4.0,5.0)

julia> @code_native debuginfo=:none ForwardDiff._mul_partials(a.partials, a.partials, 2.0, 1.0)
	push	rbp
	vsubsd	xmm2, xmm1, xmm1
	vmovupd	ymm3, ymmword ptr [rsi]
	movabs	rcx, offset .LCPI0_0
	mov	rbp, rsp
	mov	rax, rdi
	vucomisd	xmm2, xmm2
	vmovsd	xmm2, qword ptr [rdx]           # xmm2 = mem[0],zero
	jp	.LBB0_1
.LBB0_5:                                # %L61
	vxorpd	xmm4, xmm4, xmm4
	vcmpneqpd	ymm4, ymm3, ymm4
	vmovmskpd	esi, ymm4
	test	esi, esi
	jne	.LBB0_7
# %bb.6:                                # %L61
	vmovapd	xmm5, xmmword ptr [rcx]
	vsubsd	xmm4, xmm0, xmm0
	vcmpordsd	xmm4, xmm4, xmm4
	vblendvpd	xmm0, xmm5, xmm0, xmm4
.LBB0_7:                                # %L61
	vmovupd	xmm4, xmmword ptr [rdx + 8]
	vbroadcastsd	ymm0, xmm0
	vbroadcastsd	ymm1, xmm1
	vmulpd	ymm0, ymm0, ymm3
	vpermpd	ymm3, ymm4, 208                 # ymm3 = ymm4[0,0,1,3]
	vbroadcastsd	ymm4, qword ptr [rdx + 24]
	vblendpd	ymm2, ymm3, ymm2, 1             # ymm2 = ymm2[0],ymm3[1,2,3]
	vblendpd	ymm2, ymm2, ymm4, 8             # ymm2 = ymm2[0,1,2],ymm4[3]
	vmulpd	ymm1, ymm1, ymm2
	vaddpd	ymm0, ymm0, ymm1
	vmovupd	ymmword ptr [rax], ymm0
	pop	rbp
	vzeroupper
	ret
.LBB0_1:                                # %L37
	vxorpd	xmm4, xmm4, xmm4
	vucomisd	xmm2, xmm4
	jne	.LBB0_5
	jp	.LBB0_5
# %bb.2:                                # %L37
	vmovsd	xmm5, qword ptr [rdx + 8]       # xmm5 = mem[0],zero
	vucomisd	xmm5, xmm4
	jne	.LBB0_5
	jp	.LBB0_5
# %bb.3:                                # %L37
	vmovsd	xmm5, qword ptr [rdx + 16]      # xmm5 = mem[0],zero
	vucomisd	xmm5, xmm4
	jne	.LBB0_5
	jp	.LBB0_5
# %bb.4:                                # %L50
	vcmpneqsd	xmm4, xmm4, qword ptr [rdx + 24]
	vmovapd	xmm5, xmmword ptr [rcx]
	vblendvpd	xmm1, xmm5, xmm1, xmm4
	jmp	.LBB0_5

But I guess it depends on what code you benchmark, and it's possible newer LLVM versions have improved this since I last measured.
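For anyone who wants to re-measure this on a newer Julia/LLVM, a minimal end-to-end sketch (my own; it assumes BenchmarkTools and uses an arbitrary Rosenbrock-style test function, not the code I measured above) is to time the same gradient once with the default build and once with NaN-safe mode enabled (the "Fixing NaN/Inf Issues" section of the docs linked above covers how to enable it) and compare:

julia> using ForwardDiff, BenchmarkTools

julia> rosenbrock(x) = sum(100 .* (x[2:end] .- x[1:end-1] .^ 2) .^ 2 .+ (1 .- x[1:end-1]) .^ 2);

julia> x = rand(100);

julia> cfg = ForwardDiff.GradientConfig(rosenbrock, x);  # reuse the config so its construction isn't timed

julia> @btime ForwardDiff.gradient($rosenbrock, $x, $cfg);  # run once per mode and compare the timings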

@antoine-levitt
Contributor

It'd be nice to do this, as the new version includes a fix for silently wrong answers (e.g. #653, #727).

@KristofferC
Collaborator

So let's just do it?

@dlfivefifty
Contributor Author

I made a PR: #728

So when people are happy, just merge and register?

@devmotion linked a pull request on Dec 6, 2024 that will close this issue.