
Segfault running tapioca dsl #2018

Closed
natematykiewicz opened this issue Sep 11, 2024 · 4 comments


It segfaults processing the pg gem.

$ bundle exec tapioca dsl

Output attached as a file (output.txt) because it's 1.9 MB.

Morriar (Collaborator) commented Sep 12, 2024

👋 Hey @natematykiewicz,

It looks like you're encountering a segfault in Ruby 3.3.5 itself. I'd suggest sending your crash report to https://bugs.ruby-lang.org/ instead.

Thanks.

Morriar closed this as not planned on Sep 12, 2024
natematykiewicz (Author) commented Sep 12, 2024

@Morriar, I tried skipping the PG::Connection constant with bundle exec tapioca dsl --skip-constant=PG::Connection, and that didn't prevent the crash. I don't use the PG gem directly, ActiveRecord uses it, so I don't need any type definitions for PG at all. Is there a way to just have Tapioca skip this file?

natematykiewicz (Author)

I looked at the .ips file in ~/Library/Logs/DiagnosticReports and saw this:

  "vmRegionInfo" : "0x110c3cb06 is not in any region.  Bytes after previous region: 2823  Bytes before following region: 13562\n      REGION TYPE                    START - END         [ VSIZE] PRT\/MAX SHRMOD  REGION DETAIL\n      __LINKEDIT                  110c2c000-110c3c000    [   64K] r--\/rwx SM=COW  ...socket.bundle\n--->  GAP OF 0x4000 BYTES\n      VM_ALLOCATE                 110c40000-110c50000    [   64K] rw-\/rwx SM=COW  ",
  "exception" : {"codes":"0x0000000000000001, 0x0000000110c3cb06","rawCodes":[1,4576234246],"type":"EXC_BAD_ACCESS","signal":"SIGABRT","subtype":"KERN_INVALID_ADDRESS at 0x0000000110c3cb06"},
  "termination" : {"flags":0,"code":6,"namespace":"SIGNAL","indicator":"Abort trap: 6","byProc":"ruby","byPid":33576},
  "vmregioninfo" : "0x110c3cb06 is not in any region.  Bytes after previous region: 2823  Bytes before following region: 13562\n      REGION TYPE                    START - END         [ VSIZE] PRT\/MAX SHRMOD  REGION DETAIL\n      __LINKEDIT                  110c2c000-110c3c000    [   64K] r--\/rwx SM=COW  ...socket.bundle\n--->  GAP OF 0x4000 BYTES\n      VM_ALLOCATE                 110c40000-110c50000    [   64K] rw-\/rwx SM=COW  ",
  "asi" : {"CoreFoundation":["*** multi-threaded process forked ***"],"libsystem_c.dylib":["crashed on child side of fork pre-exec"]},
  "extMods" : {"caller":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"system":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"targeted":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"warnings":0},
  "faultingThread" : 0,

"crashed on child side of fork pre-exec" and "*** multi-threaded process forked ***" stuck out to me. In my sorbet/tapioca/config.yml I set:

dsl:
  workers: 1

With that change it no longer crashes. I noticed that this gem uses the parallel gem for process forking:

def run_in_parallel(&block)
  # To have the parallel gem run jobs in the parent process, you must pass 0 as the number of processes
  number_of_processes = @number_of_workers == 1 ? 0 : @number_of_workers
  Parallel.map(@queue, { in_processes: number_of_processes }, &block)
end
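
For context, here's a minimal sketch of how that option behaves in the parallel gem (the array and block below are made up for illustration): with in_processes: 0 everything runs in the current process and no fork happens, while any positive count forks worker processes, which is the path that crashes here.

require "parallel"

# Hypothetical workload, just to show the two modes.
# in_processes: 0 => the block runs in the current process, no fork.
Parallel.map([1, 2, 3], in_processes: 0) { |n| n * 2 } # => [2, 4, 6]

# in_processes: 2 => two worker processes are forked; this is the code path
# where the "multi-threaded process forked" crash shows up.
Parallel.map([1, 2, 3], in_processes: 2) { |n| n * 2 } # => [2, 4, 6]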

And the parallel gem has had an open issue about segfaults for a year now: grosser/parallel#337

I've tried it on Ruby 3.2.4, 3.2.5, and 3.3.5. All segfault with more than 1 process. That linked issue lists Ruby 3.2.2.

Do you still think it's a Ruby issue, or is it a Parallel issue?

natematykiewicz (Author)

I had asked around about this as well, and someone showed me this: ged/ruby-pg#538 (comment)

Adding ENV['PGGSSENCMODE'] = 'disable' to my bin/tapioca fixes the issue and allows me to use the default worker count.
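
For anyone else who hits this, here's roughly where the line goes. The binstub below is only a sketch of a Bundler-style bin/tapioca; the ENV line is the actual change:

#!/usr/bin/env ruby
# frozen_string_literal: true

# Disable libpq's GSSAPI encryption negotiation before pg is loaded.
# The GSS handshake can start a background thread, which makes the later
# fork() calls from the parallel gem unsafe on macOS.
ENV["PGGSSENCMODE"] = "disable"

require "bundler/setup"
load Gem.bin_path("tapioca", "tapioca")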
