
VSTest process being killed/failing often for UWP and WASDK test hosts #7937

Closed
michael-hawker opened this issue Jul 19, 2023 · 15 comments


@michael-hawker

Description

It's not 100% consistent, but it's happening frequently, and re-running our pipeline isn't a guaranteed fix. It really kicked up this past week with the change to the environment, so that seems to be a factor, especially since it's occurring for both UWP and WASDK based test projects. We haven't changed anything in our test infrastructure code.

We hadn't been having any troubles like this up until the environment changeover.

Bad runs:
https://github.com/CommunityToolkit/Windows/actions/runs/5537722051/jobs/10106880349
https://github.com/CommunityToolkit/Windows/actions/runs/5544940047/jobs/10124367762
https://github.com/CommunityToolkit/Windows/actions/runs/5514189775/jobs/10053173416
https://github.com/CommunityToolkit/Labs-Windows/actions/runs/5602439673/jobs/10247729757

Sometimes it runs fine:
https://github.com/CommunityToolkit/Windows/actions/runs/5523957535/jobs/10075613144

Maybe it affects both 20230630.1.0 and 20230706.1.0 and is just getting worse with the newer one?

Going back more than a couple of weeks in our builds, I don't see this issue at all on 20230612.1:
https://github.com/CommunityToolkit/Windows/actions/runs/5310225807/jobs/9611943142
https://github.com/CommunityToolkit/Labs-Windows/actions/runs/5336453990/jobs/9671187354

We've made a few changes, like upgrading from the .NET 6 SDK to the .NET 7 SDK, but we see the errors consistently both before and after that change. We didn't see any failures on 20230612.1 VMs, and we haven't changed any of the tests being run in this time.

Platforms affected

  • Azure DevOps
  • GitHub Actions - Standard Runners
  • GitHub Actions - Larger Runners

Runner images affected

  • Ubuntu 20.04
  • Ubuntu 22.04
  • macOS 11
  • macOS 12
  • macOS 13
  • Windows Server 2019
  • Windows Server 2022

Image version and build link

We're seeing this across repos but they have the same image:

Current runner version: '2.306.0'
Operating System
  Microsoft Windows Server 2022
  10.0.20348
  Datacenter
Runner Image
  Image: windows-2022
  Version: 20230706.1.0
  Included Software: https://github.com/actions/runner-images/blob/win22/20230706.1/images/win/Windows2022-Readme.md
  Image Release: https://github.com/actions/runner-images/releases/tag/win22%2F20230706.1
Runner Image Provisioner
  2.0.238.1
Current runner version: '2.305.0'
Operating System
  Microsoft Windows Server 2022
  10.0.20348
  Datacenter
Runner Image
  Image: windows-2022
  Version: 20230630.1.0
  Included Software: https://github.com/actions/runner-images/blob/win22/20230630.1/images/win/Windows2022-Readme.md
  Image Release: https://github.com/actions/runner-images/releases/tag/win22%2F20230630.1
Runner Image Provisioner
  2.0.238.1

Is it a regression?

Yes, the last working image version was 20230612.1.

Expected behavior

The VSTest process is able to finish tests normally.

Actual behavior

We're seeing the test process fail for both our UWP and WASDK test runs:

https://github.com/CommunityToolkit/Labs-Windows/actions/runs/5602439673/jobs/10249333579?pr=418#step:21:477

The active test run was aborted. Reason: Unable to communicate with test host process.
Closing app with package full name '7af355f7-0c20-4a39-9b71-8cf779ccfa82_1.0.0.0_x64__1v6rh0sdhj24c'.
Results File: D:\a\Labs-Windows\Labs-Windows\TestResults\UWP.trx

Test Run Aborted with error System.AggregateException: One or more errors occurred. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
Total tests: Unknown
   --- End of inner exception stack trace ---
     Passed: 36
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
   at System.IO.Stream.ReadByte()
   at System.IO.BinaryReader.ReadByte()
   at System.IO.BinaryReader.Read7BitEncodedInt()
   at System.IO.BinaryReader.ReadString()
   at Microsoft.VisualStudio.TestPlatform.CommunicationUtilities.LengthPrefixCommunicationChannel.NotifyDataAvailable()
   at Microsoft.VisualStudio.TestPlatform.CommunicationUtilities.TcpClientExtensions.MessageLoopAsync(TcpClient client, ICommunicationChannel channel, Action`1 errorHandler, CancellationToken cancellationToken)
   --- End of inner exception stack trace ---.

https://github.com/CommunityToolkit/Labs-Windows/actions/runs/5602439673/jobs/10249333759?pr=418#step:21:558

The active test run was aborted. Reason: Unable to communicate with test host process.
Terminating app with process ID '2792'.
Results File: D:\a\Labs-Windows\Labs-Windows\TestResults\WinAppSdk.trx

Test Run Aborted with error System.AggregateException: One or more errors occurred. ---> System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
Total tests: Unknown
   --- End of inner exception stack trace ---
     Passed: 36
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
   at System.IO.Stream.ReadByte()
   at System.IO.BinaryReader.ReadByte()
   at System.IO.BinaryReader.Read7BitEncodedInt()
   at System.IO.BinaryReader.ReadString()
   at Microsoft.VisualStudio.TestPlatform.CommunicationUtilities.LengthPrefixCommunicationChannel.NotifyDataAvailable()
   at Microsoft.VisualStudio.TestPlatform.CommunicationUtilities.TcpClientExtensions.MessageLoopAsync(TcpClient client, ICommunicationChannel channel, Action`1 errorHandler, CancellationToken cancellationToken)

I have one PR, which is on 20230716.1.0, that's just refusing to get past testing: https://github.com/CommunityToolkit/Labs-Windows/actions/runs/5602439673/attempts/1 (on its 4th attempt at the moment)

Repro steps

Attempt to build either repo on a GitHub Actions runner using the main workflow:
https://github.com/CommunityToolkit/Windows
https://github.com/CommunityToolkit/Labs-Windows

@vpolikarpov-akvelon
Contributor

Hi @michael-hawker. Thank you for reporting; we will investigate this.

@vpolikarpov-akvelon
Contributor

Hey @michael-hawker. The behavior you have described may sometimes be caused by out-of-memory (OOM) errors. May I ask you to extend your workflow with a step that always runs at the end and outputs system events? E.g. using PowerShell: Get-EventLog -LogName System -EntryType Error | Format-List.
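
A sketch of such a step (assuming a Windows runner; Get-EventLog needs Windows PowerShell, hence shell: powershell, and the step name is just illustrative):

- name: Dump System event log errors
  if: ${{ always() }}
  shell: powershell
  run: Get-EventLog -LogName System -EntryType Error | Format-List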

Also, it would be very useful for us to try to reproduce this in a fork. Is that possible? What credentials or secrets have to be defined?

@michael-hawker
Author

@vpolikarpov-akvelon I'm not familiar with the OOM term, but I can look at adding a step. The only credential is for pushing packages to the feed when integrating into main, so nothing should be needed in the standard case of just running the workflow.

I was able to run into the issue locally in Release mode in one case, so there may be something going on with .NET Native; I'm investigating a bit more on my side too. Or that may be an orthogonal issue...

@michael-hawker
Author

We've been tracking down other known halts to our test process, but here is definitely an intermittent case where the build eventually succeeded on this run: https://github.com/CommunityToolkit/Windows/actions/runs/5684332115/job/15413161632?pr=157#step:18:895

Where it failed previously here:
https://github.com/CommunityToolkit/Windows/actions/runs/5684332115/job/15412342108?pr=157#step:18:894

@vpolikarpov-akvelon
Contributor

I tried collecting a crash dump. The test suite fails on the test FrameworkElementExtension_RelativeAncestor_FreePageNavigation (link) due to an exception with the text "Security check failure or stack buffer overrun" that occurs within Microsoft.ui.xaml.dll. It doesn't seem like something is wrong with the runner image, but rather with some software.

If you want to inspect the dump yourself, you should add a couple of steps to your workflow. Add this after checkout to enable dump collection:

- name: Enable User-Mode Dump collection
  shell: powershell
  run: |
    # Create a dump folder inside the workspace and point Windows Error Reporting's LocalDumps at it
    New-Item '${{ github.workspace }}\CrashDumps' -Type Directory
    Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps' -Name 'DumpFolder' -Type ExpandString -Value '${{ github.workspace }}\CrashDumps'
    # Keep up to 10 dumps; DumpType 2 = full user-mode dump
    Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps' -Name 'DumpCount' -Type DWord -Value '10'
    Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps' -Name 'DumpType' -Type DWord -Value '2'

And this at the end of the workflow to upload any created dumps as artifacts:

- name: Artifact - CrashDumps
  uses: actions/upload-artifact@v3
  if: ${{ always() }}
  with:
    name: CrashDumps-${{ matrix.platform }}
    path: './CrashDumps'
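
The if: ${{ always() }} condition on the upload step is what makes the dumps get published even when an earlier test step fails, which is exactly when they are needed.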

@vpolikarpov-akvelon
Contributor

Hey @michael-hawker. Do you have any news on this issue? Are you still experiencing these problems? Do you have any new information that can help with the investigation? We are at a dead end currently.

@michael-hawker
Author

Thanks for checking in @vpolikarpov-akvelon, we're still noticing issues. I've added the crash dump collection to our CI in one of my PRs currently facing issues, though I'm having trouble digging into the UWP minidump as my .NET Core CLR version doesn't match the one from the CLI, so I'm not sure how to install/resolve the right one to dig deeper - any suggestions? It seems like the dump used Microsoft.NET.CoreRuntime.2.2_2.2.29713.2_x64__8wekyb3d8bbwe, but I have Microsoft.NET.CoreRuntime.2.2_2.2.31331.1_x64__8wekyb3d8bbwe. I wanted to compare the exceptions across runtimes. (Though the WindowsAppSDK run highlighted a potential issue with the test in that PR, so I've fixed that and am re-running to recollect a clean setup again.)
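
In the meantime, for the native side of the crash, a minimal triage sketch with cdb from the Debugging Tools for Windows (the install path and dump file name below are illustrative, not taken from the actual artifact):

& "C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\cdb.exe" `
    -z .\CrashDumps\TestHost.dmp `
    -c "!analyze -v; .ecxr; k; q"

!analyze -v summarizes the faulting module and exception, .ecxr switches to the exception context, and k prints the native stack; resolving the managed frames still needs the matching CoreCLR/DAC.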

I added a comment on microsoft/vstest#2952 (comment) to hopefully poke that along; it's about the test process failing more gracefully and providing more information about failures from the test process itself when it does crash.

I also raised the issue with the platform team since the error you pointed out seemed to come from within the system's UI DLL, but it didn't seem familiar to them, so it will need more investigation. I'm hoping that if I get two dumps failing in the same place it'll help to pass them along, though I'd like to crack open the managed stack on the UWP one first.

@michael-hawker
Author

I was able to get dumps/stacks again on both the WinUI 2 and WinUI 3 pipelines even after ignoring that test (not sure how the executing test was determined from the stack before, though). However, I think a different test (based on the last test that passed) gets the same overflow message in the IScrollViewer, from DispatcherQueueHelper_FuncOfTaskOfT_Exception_DispatcherQueueExtensionTests_Test in this run - WinUI 2 Dump - WinUI 3 Dump

The test here doesn't really do much beyond what the previously passing test does; I think it just happens to be the one running when whatever buffer overflows, based on the number of tests run, considering it's the same stack and exception we saw earlier.

Do builds from merges to main receive more runner resources than those in a PR? We do tend to see this issue more in PRs than with merges to main (though we still see it there randomly too). I don't think there's anything a particular test is doing here; it seems more related to either something we're manipulating in the test app harness or something bubbling up from the platform... I'll pass these new stacks to the platform team.

@vpolikarpov-akvelon
Contributor

There is no difference between runners that handle on-push and on-PR builds.

If you suspect that failures depend on the amount of resources available, then you may try running your build on larger runners.
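
For example, something like this in the workflow (the label below is hypothetical; larger runners are addressed by whatever runner label or group name is configured for your organization, while standard hosted runners keep using windows-latest):

jobs:
  test:
    runs-on: windows-8-core   # hypothetical larger-runner label configured by the org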

@michael-hawker
Author

@vpolikarpov-akvelon good to know. Yes, we've been waiting a while for the .NET Foundation to be approved for larger runners so we can try that. FYI @ChrisSfanos

@ChrisSfanos

Hi @michael-hawker - I think we can move you to the DNF GitHub Enterprise to allow access. We can investigate.

@michael-hawker
Author

@ChrisSfanos I believe we should already be connected to that? I'll ping you offline so we can follow-up.

michael-hawker added commits to CommunityToolkit/Windows that referenced this issue on Aug 16 and Aug 17, 2023
… it's just random based on number of tests run...)

Need to comment out as Ignore ignored... see CommunityToolkit/Tooling-Windows-Submodule#121
Related to investigation, see info in actions/runner-images#7937
@vpolikarpov-akvelon
Contributor

Hey @michael-hawker. Do you have any updates on this? Did you test your workflow on larger runners?

@vpolikarpov-akvelon
Contributor

Well, I'm closing this issue for now due to inactivity. Feel free to contact our team through an internal channel (e.g. Teams) if you still need help with this issue.

vpolikarpov-akvelon closed this as not planned on Aug 28, 2023
vpolikarpov-akvelon added the no-issue-activity label and removed the investigate label on Aug 28, 2023
@michael-hawker
Author

@vpolikarpov-akvelon I was just about to respond: the larger runners didn't help (though it took longer to see it hit again; it did just happen this morning since we flipped over to them). It happened in another random test (a string converter test). It really seems like a hiccup in the test process or the platform or something. I'll follow up with the platform team and the dumps I provided them.
