Commit

Source-to-source (#90)
* Implement BBCode

* Add IR optimisation functionality

* Only use type of ctx in TInterp

* Factor out ir lookup

* Remove redundant invoke-to-call functionality

* Remove redundant return node creation code

* Improve comment

* Import more things during testing

* Initial work on statement-specific transformations

* Include s2s files

* Add more cases in statement translation

* Remove tests associated to removed code

* Fix docstring

* Improve comment

* Improve docstrings

* Improve comments

* Improve comment

* Improve comment

* Throw informative error message for PhiCNodes

* Throw error for UpsilonNode

* Disable tests for interpreter during development

* Initial pass over call transformation

* More work

* Basics working

* Fix tests

* Print debug info

* Print debug info

* Ignore code coverage statements

* Restrict to 1.10

* Fix CI and comment out IR inspection

* Enable all tests

* Improve performance result display

* Make const work

* Enable more tests

* Tidy up throw_if_not_def

* Fix up throw_undef_if_not

* Improve throw_undef_if_not test

* Add additional GlobalRef test

* Improve BBCode documentation

* Loosen perf tolerance on handwritten sum

* Clean up register transformation

* Remove redundant code

* Tidy up formatting

* More register tests

* Tidy up output check

* Use new tidier register functionality

* Reactivate more tests

* Unhandle feature exception and globalref typeof

* Improve handling of GlobalRefs

* Update literals and QuoteNodes to have stacks when differentiable

* Handle PiNode in BBCode

* Update register types

* Add helper functionality to augmented register

* Import ad_stmt_info in front_matter

* Reactivate more tests

* Refactor registers and implement PiNode

* Move inc_arg_numbers to utils and unit test

* Move around include order

* Refactor captures handling

* Enable more tests and mostly fix PhiNode

* Fix up phinode problem

* Fix comment

* Fix typos

* Fix vector elementtype in bbcode

* Make varargs + splatting work

* Add ReverseDiff to test deps

* Move to testing s2s

* Fix some lingering bugs

* Fix comment and disable bad test case

* Use s2s in benchmarks

* Helper function

* Fix comment

* Some minor improvements to shared data and compiler

* Additional test case

* Do not inline stack pushes and pops

* Improve compile times

* Fix bug

* Fix union of registers bug

* Readability improvements

* Cache oc compilation

* Strip code coverage lines

* Ignore thing with inlining problem

* Do not store input ref stacks if singleton type

* Formatting

* NoTangent for Tuples

* NamedTuple NoTangent

* Improve predecessor compute times

* Stop printing

* Inline tuple constructor

* Ignore Base check_reducedims

* Make _getter constant

* Move tuple_map and extend it

* Refine DerivedRule construction

* Fix new for NoTangent result types

* Fix new

* Remove redundant commented lines

* Add non-differentiable function to test utils

* Add comments, tidy up, rename some things

* Add non-differentiable const tester

* Rename my_tangent_stack to ret_tangent_stack

* Remove redundant line

* Support non-constant global refs

* Don't verify IR after passes

* Support copyast

* Use full benchmarking

* Tidy up some abstract type edge cases

* Align registers with OpaqueClosure type inference

* Tidy up types

* Inline stuff again

* Safer implementation of ipiv

* Make memory in Stack constant

* Formatting and lgetfield tests

* Formatting

* lsetfield tests

* Move TestResources import around

* NoTangent for composites with NoTangent fields

* GC preserve stuff

* Add NoTangent path to ifelse rule

* Enable multiple lines for reverse-pass

* Construct arg tangent stack refs in function

* Fix basic tests

* Use fixed stacks

* Exclude some rrules from DiffRules

* Use inbounds

* Add additional test and tighten performance req

* Use fixed-location tangent stacks, and 32-bit block numbering

* Display which benchmark is running

* Fix typo

* Fix low_level_rules tests

* Fix caching

* Functionality to reset global ID counter

* Try not inlining block pushes

* Improve bbcode documentation

* Remove redundant code

* Formatting

* Formatting

* Document unhandled_feature

* Use type information in BBCode and update s2s to reflect this

* Fix PhiNode inference

* Update PhiNode transform unit tests

* Improve formatting of test_utils

* Ignore local scratch file

* Fix Turing hanging bug

* Fix derived rule tester

* Fix up tests

* Use fixed tangent stack for PiNode

* Tweak bounds for test utils

* Fix Distributions deprecations

* Revert PiNode fixed stack update

* Try more stuff to fix getrf pullback

* Fix test tolerances

* Fix typo in comment

* Revert attempted LAPACK fix

* Restrict primal evals to 1

* Extend preservation to cover ccall

* Make copy of ipiv after calculations are run

* Improve tuple tangent_type

* Inline getfield rules

* Force uninit_codual to inline

* Force-inline uninit_tangent

* NoTangentStack for DataType

* Revert change to tangent stack type

* Avoid recompilation in dynamic dispatch

* Tighten performance bounds

* Tighten performance bounds on naive mat mul

* Enable all Turing.jl tests

* Remove redundant comment

* Remove comment and add blank line

* Formatting

* Improve docstrings and comments in s2s

* Remove interpreter timings from Turing integration tests

* Remove redundant import in benchmarks

* Move value_and_gradient to interface file

* Remove redundant arg in benchmarking

* Improve interface

* Update README

* Formatting

* Improve README

* Do not export increment_field
willtebbutt authored Mar 22, 2024
1 parent 26cc134 commit d3d32c0
Showing 41 changed files with 2,737 additions and 371 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -4,3 +4,4 @@ bench/Manifest.toml
analysis_results
.vscode
profile.pb.gz
+scratch.jl
4 changes: 3 additions & 1 deletion Project.toml
@@ -9,6 +9,7 @@ ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
DiffRules = "b552c78f-8df3-52c6-915a-8e097449b14b"
DiffTests = "de460e47-3fe3-5279-bb4a-814414816d5d"
ExprTools = "e2ba6199-217a-4e67-a87a-7c52f15ade04"
+Graphs = "86223c79-3864-5bf0-83f7-82e725a168b6"
InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
JET = "c3a54625-cd67-489e-a8e7-0a5a0ff4e31b"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
@@ -46,10 +47,11 @@ Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
FillArrays = "1a297f60-69ca-5386-bcde-b61e274b549b"
KernelFunctions = "ec8451be-7e33-11e9-00cf-bbf324bd1392"
PDMats = "90014a1f-27ba-587c-ab20-58faa44d9150"
+ReverseDiff = "37e2e3b7-166d-5795-8a7a-e32c996b4267"
SpecialFunctions = "276daf66-3868-5448-9aa4-cd146d93841b"
StableRNGs = "860ef19b-820b-49d6-a774-d7a799459cd3"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
Turing = "fce5fe82-541a-59a6-adf8-730c64b5f9a0"

[targets]
-test = ["AbstractGPs", "BenchmarkTools", "DiffTests", "Distributions", "FillArrays", "KernelFunctions", "PDMats", "SpecialFunctions", "StableRNGs", "Test", "Turing"]
+test = ["AbstractGPs", "BenchmarkTools", "DiffTests", "Distributions", "FillArrays", "KernelFunctions", "PDMats", "ReverseDiff", "SpecialFunctions", "StableRNGs", "Test", "Turing"]
36 changes: 22 additions & 14 deletions README.md
@@ -8,7 +8,7 @@ The goal of the `Taped.jl` project is to produce a reverse-mode AD package which

# How it works

-`Taped.jl` is based around a single function `rrule!!`, which computes vector-Jacobian products (VJPs).
+`Taped.jl` is based around a function `rrule!!` (which computes vector-Jacobian products (VJPs)) and a related function `build_rrule` (which builds functions which are semantically identical to `rrule!!`).
These VJPs can, for example, be used to compute gradients.
`rrule!!` is similar to ChainRules' `rrule` and Zygote's `_pullback`, but supports functions which mutate (modify) their arguments, in addition to those that do not, and immediately increments (co)tangents.
It has, perhaps unsurprisingly, wound up looking quite similar to the rule system in Enzyme.
@@ -18,7 +18,7 @@ For a given function and arguments, it is roughly speaking the case that either
2. no hand-written method of `rrule!!` is applicable.

In the first case, we run the `rrule!!`.
-In the second, we create an `rrule!!` by "doing AD" -- we decompose the function into a composition of functions which _do_ have hand-written `rrule!!`s.
+In the second, we use `build_rrule` to create a function with the same semantics as `rrule!!` by "doing AD" -- we decompose the function into a composition of functions which _do_ have hand-written `rrule!!`s.
In general, the goal is to write as few hand-written `rrule!!`s as is necessary, and to "do AD" for the vast majority of functions.
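To make this calling convention concrete, here is a minimal illustrative sketch. The `MyCoDual` struct and `my_rrule` function below are our own stand-ins, not Taped.jl's actual types or its rule for `sin`; they only mirror the shape suggested by this commit's interface code: codual-wrapped arguments in, a codual output plus a pullback out, with the pullback receiving the output cotangent together with the argument tangents and returning incremented argument tangents.

```julia
# Illustrative sketch only -- not Taped.jl's actual CoDual type or rrule!!.
# A "codual" pairs a primal value with its (co)tangent.
struct MyCoDual{P, T}
    primal::P
    tangent::T
end

function my_rrule(f::MyCoDual, x::MyCoDual{Float64})
    y = MyCoDual(sin(x.primal), 0.0)
    # The pullback receives the output cotangent and the argument tangents,
    # and returns the incremented argument tangents.
    pb!!(ȳ, df, dx) = (df, dx + ȳ * cos(x.primal))
    return y, pb!!
end
```

Note how the forward pass captures `x.primal` so that the reverse pass can reuse it; mutating rules additionally restore or increment state stored in the tangents.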


@@ -48,17 +48,19 @@ All of our testing is implemented via this (or via another function which calls

This contrasts with `Zygote.jl` / `ChainRules.jl`, where the permissive (co)tangent type system complicates both composition of `rrule`s and testing.

-Additionally, our approach to AD naturally handles control flow which differs between calls of a function. This contrasts with e.g. `ReverseDiff.jl`'s compiled tape, which can give silent numerical errors if control flow ought to differ between gradient evaluations at different arguments.
-~~Additionally, we augment the tape that we construct with additional instructions which throw an error if control flow differs from when the tape was constructed.
-This contrasts with `ReverseDiff.jl`, which silently fails in this scenario.~~
+Additionally, our approach to AD naturally handles control flow which differs between multiple calls to the same function.
+This contrasts with e.g. `ReverseDiff.jl`'s compiled tape, which can give silent numerical errors if control flow ought to differ between gradient evaluations at different arguments.
+~~Additionally, we augment the tape that we construct with additional instructions which throw an error if control flow differs from when the tape was constructed.~~
+~~This contrasts with `ReverseDiff.jl`, which silently fails in this scenario.~~
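A concrete (hypothetical, our own) example of the kind of function at issue: the branch taken depends on the values in `x`, so a tape compiled at one input records an operation sequence that is silently wrong for inputs that take the other branch.

```julia
function branchy_sum(x)
    s = 0.0
    for xi in x
        # The branch depends on the *value* of xi, so the sequence of recorded
        # operations differs between calls with different inputs -- exactly the
        # situation in which a compiled tape can be silently incorrect.
        s += xi > 0 ? sin(xi) : cos(xi)
    end
    return s
end
```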

### Performance

Hand-written `rrule!!`s have excellent performance, provided that they have been written well (most of the hand-written rules in Taped have excellent performance; a few still require optimisation, which is just a matter of investing some time).
Consequently, whether or not the overall AD system has good performance is largely a question of how much overhead is associated with the mechanism by which hand-written `rrule!!`s are algorithmically composed.

~~At present (11/2023), we do _not_ do this in a performant way, but this will change.~~
-At present (01/2024), we do this in a _moderately_ performant way.
+~~At present (01/2024), we do this in a _moderately_ performant way.~~
+At present (03/2024), we do this in a _moderately_ performant way (but better than the previous way!).
See [Project Status](#project-status) below for more info.

Additionally, the strategy of immediately incrementing (co)tangents resolves long-standing performance issues associated with indexing arrays.
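A rough sketch of the idea (our own illustration, not Taped.jl code): a Zygote-style `getindex` pullback must allocate a full-size one-hot array for every index read, whereas immediately incrementing a preallocated (co)tangent touches a single entry.

```julia
# Zygote-style: the pullback of getindex allocates a full-size one-hot array.
pullback_getindex_allocating(x, i, ȳ) = (dx = zeros(length(x)); dx[i] = ȳ; dx)

# Taped-style: increment one entry of a preallocated tangent in place.
pullback_getindex_inplace!(dx, i, ȳ) = (dx[i] += ȳ; dx)

x = randn(1_000)
dx = zeros(length(x))
# Reverse pass of "sum x via repeated indexing": no per-index allocation.
for i in eachindex(x)
    pullback_getindex_inplace!(dx, i, 1.0)
end
dx == ones(length(x))  # true
```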
Expand All @@ -85,8 +87,14 @@ The plan is to proceed in three phases:
You should take this with a pinch of salt, as it seems highly likely that we will have to revisit some design choices when optimising performance -- we do not, however, anticipate requiring major re-writes to the design as part of performance optimisation.
We aim to reach the maintenance phase of the project before 01/06/2024.

+*Update: (22/03/2024)*
+Phase 2 is now further along.
+`Taped.jl` now uses something which could reasonably be described as a source-to-source system to perform AD.
+At present the performance of this system is not as good as that of Enzyme, but often beats compiled ReverseDiff, and comfortably beats Zygote in any situations involving dynamic control flow.
+The present focus is on dealing with some remaining performance limitations that should make `Taped.jl`'s performance much closer to that of Enzyme, and consistently beat ReverseDiff on a range of benchmarks.

*Update: (16/01/2024)*
-Phase 2 is now well underway. We now make use of a much faster approach to interpreting / executing Julia code, which yields performance that is comparable with ReverseDiff (when things go well). The current focus is on ironing out performance issues, and simplifying the implementation.
+~~Phase 2 is now well underway. We now make use of a much faster approach to interpreting / executing Julia code, which yields performance that is comparable with ReverseDiff (when things go well). The current focus is on ironing out performance issues, and simplifying the implementation.~~

*Update: (06/11/2023)*
~~We are mostly through the first phase.~~
@@ -99,9 +107,8 @@ Phase 2 is now well underway. We now make use of a much faster approach to inter

# Trying it out

-There is not presently a high-level interface to which we are commiting, but if you want to
-compute the gradient of a function, take a look at
-`Taped.TestUtils.set_up_gradient_problem` and `Taped.TestUtils.value_and_gradient!!`.
+There is not presently a high-level interface to which we are committing, but if you want to compute the gradient of a function, take a look at `value_and_pullback!!` / `value_and_gradient!!`.
+They both provide a high-level interface which will let you differentiate things, and their implementation demonstrates how an `rrule!!` / rrule-like function should be used.
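Based on the `src/interface.jl` code added in this commit, usage looks roughly as follows. Treat this as a sketch rather than a committed API; in particular, the destructuring of the returned tangents into `(df, dx)` is an assumption on our part.

```julia
using Taped  # the package this commit modifies

f(x) = sum(abs2, x)
x = randn(10)

# Build a rule once, then reuse it for repeated evaluations at the same types.
rule = build_rrule(f, x)

# Returns the primal value and a tuple of tangents (one per argument,
# including the function itself). Assumes f returns a Float64.
v, (df, dx) = value_and_gradient!!(rule, f, x)
# v == f(x); dx holds the gradient of f with respect to x.
```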

*Note:* I have found using a mixture of `PProf` and the `@profview` functionality from Julia's `VSCode` extension essential when profiling code generated by `Taped.jl`.
`PProf` provides complete type information on its flame graphs, which is important for figuring out what is getting called, but it doesn't highlight type-instabilities.
@@ -115,13 +122,14 @@ Noteworthy things which should work and be performant include:
1. value-dependent control flow
1. mutation of arrays and mutable structs

-These are noteworthy in the sense that they are different from ReverseDiff / Zygote. Enzyme is also able to do these things.
+These are noteworthy in the sense that they are different from ReverseDiff / Zygote.
+Enzyme is also able to do these things.

-Please be aware that by "performant" we mean similar performance to ReverseDiff with a compiled tape.
+Please be aware that by "performant" we mean similar or better performance than ReverseDiff with a compiled tape, but not as good performance as Enzyme.

### What won't work

-While Taped should now work on a very large subset of the language, there remain things that you should expect not to work. A non-exhaustive list of things to bear in mind includes:
+While `Taped.jl` should now work on a very large subset of the language, there remain things that you should expect not to work. A non-exhaustive list of things to bear in mind includes:
1. It is always necessary to produce hand-written rules for `ccall`s (and, more generally, foreigncall nodes). We have rules for many `ccall`s, but not all. If you encounter a foreigncall without a hand-written rule, you should get an informative error message which tells you what is going on and how to deal with it.
-1. Builtins which require rules. The vast majority of them have rules now, but some don't. Notably, `apply_iterate` does not have a rule, so Taped cannot currently AD through type-unstable splatting -- someone should resolve this.
+1. Builtins which require rules. The vast majority of them have rules now, but some don't. Notably, `apply_iterate` does not have a rule, so `Taped.jl` cannot currently AD through type-unstable splatting -- someone should resolve this.
1. Anything involving tasks / threading -- we have no thread safety guarantees and, at the time of writing, I'm not entirely sure what error you will find if you attempt to AD through code which uses Julia's task / thread system. The same applies to distributed computing. These limitations ought to be possible to resolve.
45 changes: 21 additions & 24 deletions bench/run_benchmarks.jl
@@ -27,7 +27,7 @@ using Taped:
TInterp,
_typeof

-using Taped.TestUtils: _deepcopy, to_benchmark, set_up_gradient_problem
+using Taped.TestUtils: _deepcopy, to_benchmark

function zygote_to_benchmark(ctx, x::Vararg{Any, N}) where {N}
out, pb = Zygote._pullback(ctx, x...)
@@ -154,66 +154,63 @@ function generate_inter_framework_tests()
]
end

-function benchmark_rules!!(
-    test_case_data,
-    default_ratios,
-    include_other_frameworks::Bool,
-    tune_benchmarks::Bool,
-)
+function benchmark_rules!!(test_case_data, default_ratios, include_other_frameworks::Bool)

test_cases = reduce(vcat, map(first, test_case_data))
memory = map(x -> x[2], test_case_data)
ranges = reduce(vcat, map(x -> x[3], test_case_data))
tags = reduce(vcat, map(x -> x[4], test_case_data))
GC.@preserve memory begin
results = map(enumerate(test_cases)) do (n, args)
@info "$n / $(length(test_cases))", _typeof(args)
-suite = BenchmarkGroup()
+suite = Dict()

# Benchmark primal.
primals = map(x -> x isa CoDual ? primal(x) : x, args)
-suite["primal"] = @benchmarkable(
+@info "primal"
+suite["primal"] = @benchmark(
(a[1][])((a[2][])...);
setup=(a = (Ref($primals[1]), Ref(_deepcopy($primals[2:end])))),
evals=1,
)

# Benchmark AD via Taped.
-rule, in_f = set_up_gradient_problem(args...)
+@info "taped"
+rule = Taped.build_rrule(args...)
coduals = map(x -> x isa CoDual ? x : zero_codual(x), args)
-suite["taped"] = @benchmarkable(
-    to_benchmark($rule, zero_codual($in_f), $coduals...);
-)
+to_benchmark(rule, coduals...)
+suite["taped"] = @benchmark(to_benchmark($rule, $coduals...))

if include_other_frameworks

if should_run_benchmark(Val(:zygote), args...)
-suite["zygote"] = @benchmarkable(
+@info "zygote"
+suite["zygote"] = @benchmark(
zygote_to_benchmark($(Zygote.Context()), $primals...)
)
end

if should_run_benchmark(Val(:reverse_diff), args...)
+@info "reversediff"
tape = ReverseDiff.GradientTape(primals[1], primals[2:end])
compiled_tape = ReverseDiff.compile(tape)
result = map(x -> randn(size(x)), primals[2:end])
-suite["rd"] = @benchmarkable(
+suite["rd"] = @benchmark(
rd_to_benchmark!($result, $compiled_tape, $primals[2:end])
)
end

if should_run_benchmark(Val(:enzyme), args...)
+@info "enzyme"
dup_args = map(x -> Duplicated(x, randn(size(x))), primals[2:end])
-suite["enzyme"] = @benchmarkable(
+suite["enzyme"] = @benchmark(
autodiff(Reverse, $primals[1], Active, $dup_args...)
)
end
end

-if tune_benchmarks
-    @info "tuning"
-    tune!(suite)
-end
-@info "running"
-return (args, run(suite; verbose=true))
+return (args, suite)
end
end
return combine_results.(results, tags, ranges, Ref(default_ratios))
@@ -259,7 +256,7 @@ function benchmark_hand_written_rrules!!(rng_ctor)
tags = fill(nothing, length(test_cases))
return map(x -> x[4:end], test_cases), memory, ranges, tags
end
-return benchmark_rules!!(test_case_data, (lb=1e-3, ub=25.0), false, false)
+return benchmark_rules!!(test_case_data, (lb=1e-3, ub=25.0), false)
end

function benchmark_derived_rrules!!(rng_ctor)
@@ -271,7 +268,7 @@ function benchmark_derived_rrules!!(rng_ctor)
tags = fill(nothing, length(test_cases))
return map(x -> x[4:end], test_cases), memory, ranges, tags
end
-return benchmark_rules!!(test_case_data, (lb=1e-3, ub=150), false, false)
+return benchmark_rules!!(test_case_data, (lb=1e-3, ub=150), false)
end

function benchmark_inter_framework_rules()
@@ -280,7 +277,7 @@ function benchmark_inter_framework_rules()
test_cases = map(last, test_case_data)
memory = []
ranges = fill(nothing, length(test_cases))
-return benchmark_rules!!([(test_cases, memory, ranges, tags)], (lb=0.1, ub=150), true, true)
+return benchmark_rules!!([(test_cases, memory, ranges, tags)], (lb=0.1, ub=150), true)
end

function flag_concerning_performance(ratios)
17 changes: 12 additions & 5 deletions src/Taped.jl
@@ -6,6 +6,7 @@ using
BenchmarkTools,
DiffRules,
ExprTools,
Graphs,
InteractiveUtils,
LinearAlgebra,
Random,
@@ -20,8 +21,8 @@ using Base.Experimental: @opaque
using Base.Iterators: product
using Core:
Intrinsics, bitcast, SimpleVector, svec, ReturnNode, GotoNode, GotoIfNot, PhiNode,
-    PiNode, SSAValue, Argument
-using Core.Compiler: IRCode
+    PiNode, SSAValue, Argument, OpaqueClosure
+using Core.Compiler: IRCode, NewInstruction
using Core.Intrinsics: pointerref, pointerset
using LinearAlgebra.BLAS: @blasfunc, BlasInt, trsm!
using LinearAlgebra.LAPACK: getrf!, getrs!, getri!, trtrs!, potrf!, potrs!
@@ -35,11 +36,14 @@ include("codual.jl")
include("stack.jl")

include(joinpath("interpreter", "contexts.jl"))
-include(joinpath("interpreter", "abstract_interpretation.jl"))
+include(joinpath("interpreter", "bbcode.jl"))
include(joinpath("interpreter", "ir_utils.jl"))
include(joinpath("interpreter", "ir_normalisation.jl"))
+include(joinpath("interpreter", "abstract_interpretation.jl"))
+include(joinpath("interpreter", "registers.jl"))
include(joinpath("interpreter", "interpreted_function.jl"))
include(joinpath("interpreter", "reverse_mode_ad.jl"))
+include(joinpath("interpreter", "s2s_reverse_mode_ad.jl"))

include("test_utils.jl")

@@ -54,13 +58,13 @@ include(joinpath("rrules", "misc.jl"))
include(joinpath("rrules", "new.jl"))

include("chain_rules_macro.jl")
+include("interface.jl")

export
primal,
tangent,
randn_tangent,
increment!!,
-    increment_field!!,
NoTangent,
Tangent,
MutableTangent,
Expand All @@ -74,6 +78,9 @@ export
_dot,
zero_codual,
codual_type,
-    rrule!!
+    rrule!!,
+    build_rrule,
+    value_and_gradient!!,
+    value_and_pullback!!

end
2 changes: 1 addition & 1 deletion src/codual.jl
@@ -24,7 +24,7 @@ zero_codual(x) = CoDual(x, zero_tangent(x))
See implementation for details, as this function is subject to change.
"""
-uninit_codual(x) = CoDual(x, uninit_tangent(x))
+@inline uninit_codual(x::P) where {P} = CoDual(x, uninit_tangent(x))

"""
codual_type(P::Type)
60 changes: 60 additions & 0 deletions src/interface.jl
@@ -0,0 +1,60 @@
"""
value_and_pullback!!(rule, ȳ, f::CoDual, x::CoDual...)
In-place version of `value_and_pullback!!` in which the arguments have been wrapped in
`CoDual`s. Note that any mutable data in `f` and `x` will be incremented in-place. As such,
if calling this function multiple times with different values of `x`, you should be careful to
ensure that you zero-out the tangent fields of `x` each time.
"""
function value_and_pullback!!(rule::R, ȳ::T, fx::Vararg{CoDual, N}) where {R, N, T}
out, pb!! = rule(fx...)
@assert _typeof(tangent(out)) == T
ty = increment!!(tangent(out), ȳ)
return primal(out), pb!!(ty, map(tangent, fx)...)
end

"""
value_and_gradient!!(rule, f::CoDual, x::CoDual...)
Equivalent to `value_and_pullback!!(rule, 1.0, f, x...)` -- assumes `f` returns a `Float64`.
"""
function value_and_gradient!!(rule::R, fx::Vararg{CoDual, N}) where {R, N}
return value_and_pullback!!(rule, 1.0, fx...)
end

"""
value_and_pullback!!(rule, ȳ, f, x...)
Compute the value and pullback of `f(x...)`.
`rule` should be constructed using `build_rrule`.
*Note:* If calling `value_and_pullback!!` multiple times for various values of `x`, you
should use the same instance of `rule` each time.
*Note:* It is your responsibility to ensure that there is no aliasing in `f` and `x`.
For example,
```julia
X = randn(5, 5)
rule = build_rrule(dot, X, X)
value_and_pullback!!(rule, 1.0, dot, X, X)
```
will yield the wrong result.
*Note:* This method of `value_and_pullback!!` has to first call `zero_codual` on all of its
arguments. This may cause some additional allocations. If this is a problem in your
use-case, consider pre-allocating the `CoDual`s and calling the other method of this
function.
"""
function value_and_pullback!!(rule::R, ȳ, fx::Vararg{Any, N}) where {R, N}
return value_and_pullback!!(rule, ȳ, map(zero_codual, fx)...)
end

"""
value_and_gradient!!(rule, f, x...)
Equivalent to `value_and_pullback!!(rule, 1.0, f, x...)` -- assumes `f` returns a `Float64`.
"""
function value_and_gradient!!(rule::R, fx::Vararg{Any, N}) where {R, N}
return value_and_gradient!!(rule, map(zero_codual, fx)...)
end
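Tying the two methods of `value_and_pullback!!` together: the convenience method wraps every argument in `zero_codual` on each call, so when differentiating repeatedly it may be preferable to pre-wrap the arguments once and call the `CoDual` method directly. A hedged sketch based on the code in this diff; the exact shape of the returned tangents is an assumption.

```julia
using Taped

f(x) = x[1] * x[2]
x = [2.0, 3.0]
rule = build_rrule(f, x)

# Pre-allocate the CoDuals once...
fx = map(zero_codual, (f, x))
# ...then call the in-place method repeatedly, remembering (per the docstring
# above) to zero the tangent fields of x between calls with different values.
v, grads = value_and_pullback!!(rule, 1.0, fx...)
```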