Duplicate tproxy MG test in integration tests #1262
base: main
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

@@           Coverage Diff           @@
##             main    #1262   +/-  ##
=======================================
  Coverage   19.30%   19.30%
=======================================
  Files         164      164
  Lines       10849    10849
=======================================
  Hits         2094     2094
  Misses       8755     8755

Flags with carried forward coverage won't be shown.
Bencher Report (benchmark results table collapsed)
Bencher Report: 🚨 1 Alert (benchmark results table collapsed)
Bencher Report: 🚨 5 Alerts (benchmark results table collapsed)
I tested it many times, and in some of the runs I got this error; I don't know if you've already noticed it:
running 2 tests
thread 'translation_proxy' panicked at tests-integration/tests/common/mod.rs:162:14:
called `Result::unwrap()` on an `Err` value: JsonRpc(Transport(SocketError(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })))
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
test translation_proxy ... FAILED
thread 'success_pool_template_provider_connection' panicked at tests-integration/tests/common/mod.rs:162:14:
called `Result::unwrap()` on an `Err` value: JsonRpc(Transport(SocketError(Os { code: 35, kind: WouldBlock, message: "Resource temporarily unavailable" })))
test success_pool_template_provider_connection ... FAILED
failures:
failures:
success_pool_template_provider_connection
translation_proxy
test result: FAILED. 0 passed; 2 failed; 0 ignored; 0 measured; 0 filtered out; finished in 37.93s
error: test failed, to rerun pass `-p integration-test --test pool_integration`
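For context, and purely as an illustration that is not part of this PR: os error 35 is EAGAIN/WouldBlock from the RPC socket, so the call at tests-integration/tests/common/mod.rs:162 could tolerate it with a small retry wrapper instead of a bare `unwrap()`. The helper name and parameters below are hypothetical.

```rust
// Purely illustrative, not part of this PR: a generic retry wrapper so a
// transient os error 35 (EAGAIN / WouldBlock) does not fail the whole run.
use std::{thread, time::Duration};

fn retry_transient<T, E: std::fmt::Debug>(
    mut attempts: u32,
    delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    loop {
        match op() {
            Ok(value) => return Ok(value),
            Err(err) if attempts > 1 => {
                // Back off briefly and try again on any error while retries remain.
                eprintln!("transient error, retrying in {delay:?}: {err:?}");
                attempts -= 1;
                thread::sleep(delay);
            }
            Err(err) => return Err(err),
        }
    }
}

// usage sketch (rpc_call_that_may_eagain is hypothetical):
//   retry_transient(5, Duration::from_millis(200), || rpc_call_that_may_eagain())
```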
// covers
// https://github.com/stratum-mining/stratum/blob/main/test/message-generator/test/translation-proxy/translation-proxy.json
Can you add the explanation here like you did for the other test?
@@ -396,9 +396,9 @@ pub async fn start_sv2_translator(upstream: SocketAddr) -> SocketAddr {
         .expect("failed");
     let listening_address = get_available_address();
     let listening_port = listening_address.port();
-    let hashrate = measure_hashrate(3) as f32 / 100.0;
+    let hashrate = measure_hashrate(3) as f32 / 20.0;
I tested it many times, and in some of the runs it failed. So I tried changing this to:
-    let hashrate = measure_hashrate(3) as f32 / 20.0;
+    let hashrate = measure_hashrate(1) as f32 / 20.0;
With that, it never failed again. Don't know if it's random or not btw :)
Joking, I just got it again:
test translation_proxy ... FAILED
failures:
failures:
translation_proxy
test result: FAILED. 1 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 33.83s
tbh I still don't understand why we are dividing the measured hashrate.
That will make tProxy set a very low difficulty and make the CPU miner submit way more shares than necessary.
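To illustrate that point, here is a rough sketch (not the actual translator code) of the usual shares-per-minute difficulty calculation; the function name, the 2^32-hashes-per-difficulty-1 constant, and the 6 shares/min figure are assumptions for illustration only. It shows why reporting a divided hashrate lowers the per-share difficulty and multiplies the number of shares the CPU miner submits.

```rust
// Sketch only: how a translator might derive difficulty from a reported hashrate.
fn difficulty_from_hashrate(hashrate_hs: f64, shares_per_minute: f64) -> f64 {
    // Expected number of hashes needed per share at the target share rate.
    let hashes_per_share = hashrate_hs * 60.0 / shares_per_minute;
    // Difficulty 1 corresponds to roughly 2^32 hashes on average.
    hashes_per_share / 2f64.powi(32)
}

fn main() {
    let measured = 1_000_000.0_f64; // H/s, e.g. what measure_hashrate would report
    println!("full hashrate: {:.6}", difficulty_from_hashrate(measured, 6.0));
    // Reporting measured / 20.0 cuts the derived difficulty by 20x,
    // so the miner finds and submits roughly 20x more shares.
    println!("hashrate / 20: {:.6}", difficulty_from_hashrate(measured / 20.0, 6.0));
}
```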
Force-pushed from 7ee9303 to 8389498
// covers
// https://github.com/stratum-mining/stratum/blob/main/test/message-generator/test/translation-proxy/translation-proxy.json
#[tokio::test]
async fn translation_proxy() {
I don't think it makes sense to add this to a file called pool_integration.rs
Most importantly, as we already discussed in the past with @Shourya742, by having multiple tests in the same file we end up executing all of them in parallel, and this can cause problems with the CPU miners.
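As an alternative to splitting the tests into separate files, one possible way to keep CPU-miner tests from competing within one binary is sketched below; this assumes the serial_test crate is added as a dev-dependency and is only an illustration, not what this PR does.

```rust
// Sketch: serialize CPU-heavy tests within the same test binary.
use serial_test::serial;

#[tokio::test]
#[serial] // tests annotated #[serial] in this binary run one at a time
async fn translation_proxy() {
    // test body unchanged
}

#[tokio::test]
#[serial]
async fn success_pool_template_provider_connection() {
    // test body unchanged
}

// Or run the whole binary on a single test thread:
//   cargo test -p integration-test --test pool_integration -- --test-threads=1
```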
    )
    .await;
    let _tp = common::start_template_provider(tp_addr.port()).await;
    let _pool_1 = common::start_pool(Some(pool_addr), Some(tp_addr)).await;
why is this called `_pool_1`?
    let jds_addr = common::start_jds(tp_addr).await;
    let jdc_addr = common::start_jdc(pool_jdc_sniffer_addr, tp_addr, jds_addr).await;
why do we have JDC + JDS in this test?
This should be JD-less: tProxy should simply connect to the Pool.
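A rough sketch of that JD-less wiring, reusing only the helpers visible in this diff (start_template_provider, start_pool, start_sv2_translator, get_available_address); their visibility from the test file and exact signatures are assumptions, and the sniffer plus assertions are omitted for brevity.

```rust
// Sketch only: JD-less setup, tProxy connects straight to the Pool.
async fn jd_less_setup() {
    let tp_addr = common::get_available_address();
    let pool_addr = common::get_available_address();
    let _tp = common::start_template_provider(tp_addr.port()).await;
    let _pool = common::start_pool(Some(pool_addr), Some(tp_addr)).await;
    // No JDC or JDS in the path: the translator's upstream is the pool itself.
    let _tproxy_addr = common::start_sv2_translator(pool_addr).await;
}
```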
    assert_common_message!(
        &pool_jdc_sniffer.next_message_from_downstream(),
        SetupConnection
    );
    assert_common_message!(
        &pool_jdc_sniffer.next_message_from_upstream(),
        SetupConnectionSuccess
    );
    assert_mining_message!(
        &pool_jdc_sniffer.next_message_from_downstream(),
        OpenExtendedMiningChannel
    );
    assert_mining_message!(
        &pool_jdc_sniffer.next_message_from_upstream(),
        OpenExtendedMiningChannelSuccess
    );
    assert_mining_message!(
        &pool_jdc_sniffer.next_message_from_upstream(),
        NewExtendedMiningJob
    );
    assert_mining_message!(
        &pool_jdc_sniffer.next_message_from_downstream(),
        SetCustomMiningJob
    );
    assert_mining_message!(
        &pool_jdc_sniffer.next_message_from_upstream(),
        SetNewPrevHash
    );
    assert_mining_message!(
        &pool_jdc_sniffer.next_message_from_downstream(),
        SubmitSharesExtended
    );
I don't understand these assertions
what's the point of asserting messages between JDC and Pool?
the original MG test doesn't even have JDC + JDS
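For illustration only, roughly what the assertions could look like if the sniffer sat between tProxy and the Pool instead; the sniffer variable name and the exact message sequence are assumptions built from the macros this test already uses.

```rust
// Sketch: assertions over the tProxy <-> Pool connection, no JD messages.
assert_common_message!(
    &pool_translator_sniffer.next_message_from_downstream(),
    SetupConnection
);
assert_common_message!(
    &pool_translator_sniffer.next_message_from_upstream(),
    SetupConnectionSuccess
);
assert_mining_message!(
    &pool_translator_sniffer.next_message_from_downstream(),
    OpenExtendedMiningChannel
);
assert_mining_message!(
    &pool_translator_sniffer.next_message_from_upstream(),
    OpenExtendedMiningChannelSuccess
);
assert_mining_message!(
    &pool_translator_sniffer.next_message_from_downstream(),
    SubmitSharesExtended
);
```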
resolves #1208