
add heavy compilation benchmarks #313

Open · wants to merge 6 commits into master from avi/heavy-comp-benchmark
Conversation

aviatesk (Member) commented Apr 6, 2023

This commit adds more real-world, heavy compilation benchmarks, using DataFrames.jl, CSV.jl, Plots.jl and OrdinaryDiffEq.jl as examples.

replaces #252
closes #251

aviatesk (Member, Author) commented Apr 7, 2023

@vtjnash, what do you think of this?

vtjnash (Member) commented Apr 7, 2023

We need to think about some way to keep the code stable as a target. Maybe we should vendor copies of it?

aviatesk (Member, Author) commented Apr 7, 2023

Well, wouldn't it be a huge amount of code if we vendored them with all their dependencies? CSV might be small, but Plots and OrdinaryDiffEq come with lots of dependencies.

vtjnash (Member) commented Apr 7, 2023

Yeah, that is the trouble. Either there is too much variability to know whether something has changed, or there is too much code to maintain. And the code needs to support a very wide range of Julia versions to be a useful comparison, too.

KristofferC (Contributor):

We have a manifest for that?

aviatesk (Member, Author):

Yes, a Manifest.toml has been checked into BaseBenchmarks.jl, so we will be benchmarking against a stable dependency graph as long as we don't change it.

> And the code needs to support a very wide range of Julia versions to be a useful comparison too.

Just a heads up: the "inference" benchmark only works on Julia 1.8 or higher. Also, the CI test on Windows machines sometimes goes haywire, but I haven't been able to replicate the issue on macOS or Linux machines yet.
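
For illustration, here is a minimal sketch of how the checked-in environment gets reproduced, assuming the Project.toml/Manifest.toml live next to the benchmark file; the VERSION guard mirrors the 1.8 requirement mentioned above:

using Pkg
if VERSION >= v"1.8"            # the "inference" benchmark needs Julia 1.8+
    Pkg.activate(@__DIR__)      # environment with the checked-in Manifest.toml
    Pkg.instantiate()           # installs exactly the versions recorded there
end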

aviatesk (Member, Author):

So, all things considered, I think this PR is ready to go as is?

aviatesk closed this on Apr 10, 2023
aviatesk reopened this on Apr 10, 2023
KristofferC (Contributor):

Wouldn't it make sense to check in a separate manifest for these specific tests? Otherwise it becomes hard to upgrade only the dependencies of BaseBenchmarks.jl itself.
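
For illustration, a minimal sketch of that idea; the directory name is hypothetical and the package list is taken from this PR:

using Pkg
# Give the heavy benchmarks their own Project.toml/Manifest.toml so that
# upgrading BaseBenchmarks.jl's top-level dependencies never touches them.
old = dirname(Base.active_project())              # remember the main environment
infenv = joinpath(@__DIR__, "src", "inference")   # hypothetical location
Pkg.activate(infenv)
Pkg.add(["DataFrames", "CSV", "Plots", "OrdinaryDiffEq"])
Pkg.activate(old)                                 # back to the main environment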

aviatesk (Member, Author) commented Apr 10, 2023

> Wouldn't it make sense to check in a separate manifest for these specific tests? Otherwise it becomes hard to upgrade only the dependencies of BaseBenchmarks.jl itself.

Done. EDIT: It doesn't seem to work.

Comment on lines 278 to 290
using Pkg
let old = Pkg.project().path
    infbenchmarkenv = @__DIR__
    try
        Pkg.activate(infbenchmarkenv)
        Pkg.instantiate()
        Pkg.precompile()

        using DataFrames, CSV, Plots, OrdinaryDiffEq
    finally
        Pkg.activate(old)
    end
end
aviatesk (Member, Author):

@KristofferC Do you know how we can achieve something like this? Currently we get:

  Activating project at `/private/var/folders/xh/6zzly9vx71v05_y67nm_s9_c0000gn/T/jl_pat9yZ`
ERROR: LoadError: ArgumentError: Package BaseBenchmarks does not have DataFrames in its dependencies:
- You may have a partially installed environment. Try `Pkg.instantiate()`
  to ensure all packages in the environment are installed.
- Or, if you have BaseBenchmarks checked out for development and have
  added DataFrames as a dependency but haven't updated your primary
  environment's manifest file, try `Pkg.resolve()`.
- Otherwise you may need to report an issue with BaseBenchmarks
Stacktrace:
  [1] macro expansion
    @ ./loading.jl:1594 [inlined]
  [2] macro expansion
    @ ./lock.jl:267 [inlined]
  [3] require(into::Module, mod::Symbol)
    @ Base ./loading.jl:1571
  [4] top-level scope
    @ ~/julia/packages/BaseBenchmarks/src/inference/InferenceBenchmarks.jl:286
  [5] include(mod::Module, _path::String)
    @ Base ./Base.jl:457
  [6] include(x::String)
    @ BaseBenchmarks ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:1
  [7] top-level scope
    @ none:1
  [8] eval
    @ ./boot.jl:370 [inlined]
  [9] load!(group::BenchmarkGroup, id::String; tune::Bool)
    @ BaseBenchmarks ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:42
 [10] load!
    @ ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:39 [inlined]
 [11] macro expansion
    @ ./timing.jl:393 [inlined]
 [12] loadall!(group::BenchmarkGroup; verbose::Bool, tune::Bool)
    @ BaseBenchmarks ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:58
 [13] loadall!
    @ ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:54 [inlined]
 [14] #loadall!#3
    @ ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:52 [inlined]
 [15] loadall!()
    @ BaseBenchmarks ~/julia/packages/BaseBenchmarks/src/BaseBenchmarks.jl:52
 [16] top-level scope
    @ ~/julia/packages/BaseBenchmarks/test/runtests.jl:8
 [17] include(fname::String)
    @ Base.MainInclude ./client.jl:478
 [18] top-level scope
    @ none:6
in expression starting at /Users/aviatesk/julia/packages/BaseBenchmarks/src/inference/InferenceBenchmarks.jl:1
in expression starting at /Users/aviatesk/julia/packages/BaseBenchmarks/test/runtests.jl:8
ERROR: Package BaseBenchmarks errored during testing

when running Pkg.test().

KristofferC (Contributor):

Can you try pushing this directory to LOAD_PATH instead of using activate?

aviatesk (Member, Author) commented Apr 10, 2023

Hmm, this does not work either:

using Pkg
let
    push!(LOAD_PATH, @__DIR__)
    try
        Pkg.instantiate()
        using DataFrames, CSV, Plots, OrdinaryDiffEq
    finally
        pop!(LOAD_PATH)
    end
end

(The error is the same, and it persists even if we do not call Pkg.instantiate().)
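
A likely explanation, inferred from the fix later in this thread rather than stated here: with an explicit Manifest.toml, code included inside the BaseBenchmarks module may only `using` packages declared in BaseBenchmarks' own Project.toml, so neither activate nor LOAD_PATH helps. Evaluating the `using` in Main sidesteps that check; a hedged sketch:

using Pkg
let old = dirname(Pkg.project().path)   # remember the current environment
    Pkg.activate(@__DIR__)              # environment that lists the heavy packages
    try
        Pkg.instantiate()
        # `Main` resolves `using` against the active environment, whereas code
        # loaded inside the BaseBenchmarks module can only load packages
        # declared in BaseBenchmarks' own Project.toml.
        Core.eval(Main, :(using DataFrames, CSV, Plots, OrdinaryDiffEq))
    finally
        Pkg.activate(old)
    end
end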

aviatesk (Member, Author):

Can we move this forward without setting up a separate environment for the "inference" benchmark, since I couldn't find a workaround for #313 (comment)?

KristofferC (Contributor):

Sure, I can look at fixing that up later.

staticfloat (Member):

I pushed a commit that I think fixes things locally, but it fails on CI; feel free to drop that commit and go back to what Shuhei was doing originally.

aviatesk force-pushed the avi/heavy-comp-benchmark branch 4 times, most recently from 9ddf5b0 to 78d1114 on April 14, 2023 at 09:21
Comment on lines 60 to 48
Pkg.activate(project_path) do
    # If you're running into dependency problems when loading a benchmark,
    # try uncommenting this, it can help you to understand what's going on.
    # Pkg.status()
    needs_instantiate && Pkg.instantiate()

    Core.eval(Main, :(include($modpath)))
    modsuite = Core.eval(Main, modsym).SUITE
    group[id] = modsuite
    if tune
        results = BenchmarkTools.load(PARAMS_PATH)[1]
        haskey(results, id) && loadparams!(modsuite, results[id], :evals)
    end
end
aviatesk (Member, Author):

@staticfloat @KristofferC: any ideas for instantiating the separate environment on the distributed workers? The test suite uses Distributed, and the worker processes error because the separate environment isn't instantiated there: https://github.com/JuliaCI/BaseBenchmarks.jl/actions/runs/4733690961/jobs/8401569811?pr=313#step:5:611

KristofferC (Contributor):

Not sure... It seems something is wrong with the LOAD_PATH on the worker? There shouldn't be a need to instantiate again, because once the environment is instantiated everything should be downloaded. Maybe for now we just go with one manifest and pin all the packages that are used for testing?
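
For reference, one conventional way to hand an environment to the workers, an assumption on my part rather than what this PR ended up doing, is to start them with that project active:

using Distributed
# Hypothetical: `benchenv` is the directory holding the separate environment.
# Workers launched with `--project` resolve packages against the same
# instantiated Manifest.toml as the master process.
benchenv = joinpath(@__DIR__, "inference")
addprocs(2; exeflags = "--project=$(benchenv)")
@everywhere using DataFrames   # now loads from the benchmark environment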

staticfloat and others added 3 commits August 1, 2023 12:41
This changes the `load!()` machinery to eval into `Main` instead of
`BaseBenchmarks`, which actually allows for a runtime-changeable
dependency list.
aviatesk (Member, Author) commented Aug 1, 2023

Okay. I was unsure how to dynamically activate the environment at runtime in the distributed processes, so I returned to the original single-manifest approach. With this, the tests should pass without problems. The packages used in the inference benchmark are pinned, so there should be no issues when updating other dependent packages. It works with Julia 1.9 and later, so comparisons across different Julia versions should be possible.
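
A minimal sketch of what the pinning looks like in the single-manifest setup; the package list is from this PR, and the exact invocation is an assumption:

using Pkg
# Pin the inference-benchmark dependencies in the shared Manifest.toml so a
# later `Pkg.update()` on BaseBenchmarks.jl leaves their versions untouched.
Pkg.pin(["DataFrames", "CSV", "Plots", "OrdinaryDiffEq"])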

Labels: none yet
Projects: none yet
Successfully merging this pull request may close these issues: Compilation benchmarks
4 participants