
Reusable buffers #174

Closed
tempusfrangit wants to merge 4 commits into main from reusable-buffers

Conversation

Contributor

@tempusfrangit tempusfrangit commented Mar 4, 2024

In an effort to limit OOM scenarios within PGET, this PR implements fully reusable buffers. This is done with a sync.Pool and builds upon the change to use a fixed buffer size for any given invocation of PGET.

This PR does not limit overall memory use of PGET.

Partial: #2
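
For illustration, a minimal sketch of the idea, assuming hypothetical names (chunkSize, bufPool, getBuffer, putBuffer are not pget's actual identifiers): a sync.Pool whose New always allocates a buffer of the same fixed capacity, so reused buffers never need to grow.

    package main

    import (
        "bytes"
        "sync"
    )

    // chunkSize is fixed for a given invocation (hypothetical value).
    var chunkSize = 16 * 1024 * 1024

    // bufPool hands out buffers that all share the same capacity.
    var bufPool = sync.Pool{
        New: func() any {
            return bytes.NewBuffer(make([]byte, 0, chunkSize))
        },
    }

    func getBuffer() *bytes.Buffer { return bufPool.Get().(*bytes.Buffer) }

    func putBuffer(b *bytes.Buffer) {
        b.Reset() // drop contents but keep capacity for the next chunk
        bufPool.Put(b)
    }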

@tempusfrangit tempusfrangit requested review from philandstuff and a team March 4, 2024 23:08
@tempusfrangit tempusfrangit force-pushed the reusable-buffers branch 2 times, most recently from 730e35b to 2677b32 March 5, 2024 23:16
Base automatically changed from fixed-chunk-size to main March 6, 2024 14:48
To clean up the use of the signal channel and enable easy re-use of the
BufferedReader, this commit converts the channel to a sync.Cond. This
allows a reset to simply call:

b.ready = false
b.buf.Reset()

It also eliminates the need to re-create the channel when resetting.
Supporting reusable readers will help us reduce the memory needs of pget.
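
A rough sketch of what that can look like, using hypothetical names rather than pget's exact types: the sync.Cond guards a ready flag, Read blocks until the writer signals completion, and reset flips the flag and clears the buffer without recreating anything.

    package main

    import (
        "bytes"
        "sync"
    )

    type bufferedReader struct {
        mu    sync.Mutex
        cond  *sync.Cond
        ready bool
        buf   *bytes.Buffer
    }

    func newBufferedReader(capacity int) *bufferedReader {
        br := &bufferedReader{buf: bytes.NewBuffer(make([]byte, 0, capacity))}
        br.cond = sync.NewCond(&br.mu)
        return br
    }

    // done marks the buffer as filled and wakes any blocked readers.
    func (b *bufferedReader) done() {
        b.mu.Lock()
        b.ready = true
        b.mu.Unlock()
        b.cond.Broadcast()
    }

    func (b *bufferedReader) Read(p []byte) (int, error) {
        b.mu.Lock()
        defer b.mu.Unlock()
        for !b.ready {
            b.cond.Wait()
        }
        return b.buf.Read(p)
    }

    // reset makes the reader reusable; no channel has to be recreated.
    func (b *bufferedReader) reset() {
        b.mu.Lock()
        b.ready = false
        b.buf.Reset()
        b.mu.Unlock()
    }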

This commit implements the buffered reader pool and wires up CH and
buffer mode to use it to acquire the buffered readers. This lightens the
logic even further, guaranteeing that we always use the same capacity for
all readers, as it is defined in the `New` function used by the
sync.Pool.

This commit does not return buffers to the pool.
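
Continuing the hypothetical sketch above, the pool's New closure is the one place the capacity is defined, so every reader handed out has the same fixed capacity; since nothing is returned to the pool yet at this stage, Get still falls through to New each time.

    // newReaderPool captures the single chunk capacity used for the whole
    // invocation; every reader it creates has that same fixed capacity.
    func newReaderPool(capacity int) *sync.Pool {
        return &sync.Pool{
            New: func() any { return newBufferedReader(capacity) },
        }
    }

    // acquire hands out a reader. Until readers are returned to the pool,
    // Get always ends up calling New.
    func acquire(pool *sync.Pool) *bufferedReader {
        return pool.Get().(*bufferedReader)
    }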
When we're done with a buffered reader, call `.Close()`, which resets it
and re-adds it to the pool. This change also pivots the BufferedReader to
a ReadCloser-style interface.

This change follows through on the promise of allowing reuse of buffers,
minimizing allocations, and reducing overall memory requirements of
pget. This will help further limit OOM situations and lead towards a
future of also limiting maximum memory utilization of pget.
Move the reset logic to .Close, as that makes the bufferedReader
responsible for managing its state, not the pool. The pool is just a
broker to hand out readers.
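
Sketched against the same hypothetical types (and assuming the io package is imported), Close does the reset and then hands the reader back, and with Read plus Close the type satisfies io.ReadCloser:

    // readerPool is a package-level pool as sketched earlier; chunkSize is
    // the hypothetical fixed per-invocation size from the first sketch.
    var readerPool = newReaderPool(chunkSize)

    // Close makes the reader responsible for its own state: reset, then
    // return to the pool, which stays a plain broker.
    func (b *bufferedReader) Close() error {
        b.reset()
        readerPool.Put(b)
        return nil
    }

    // With Read and Close, *bufferedReader satisfies io.ReadCloser.
    var _ io.ReadCloser = (*bufferedReader)(nil)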
philandstuff added a commit that referenced this pull request Mar 6, 2024
This is an alternative to #174.  The key differences are:

- we use sync.Pool to store *bytes.Buffer instances rather than bufferedReader
  instances
- we keep the ready channel instead of using sync.Cond
- we return the buffer to the pool in Read() rather than requiring an explicit
  Close()

Overall this changes fewer parts of the codebase and is (IMHO) simpler and
easier to understand.
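
A hedged sketch of that alternative shape, again with hypothetical names (the actual change is in #177): the pool stores *bytes.Buffer values, the reader keeps its ready channel, and the drained buffer is put back from inside Read, so no explicit Close is needed.

    package main

    import (
        "bytes"
        "io"
        "sync"
    )

    var bufPool = sync.Pool{
        New: func() any { return new(bytes.Buffer) },
    }

    type bufferedReader struct {
        ready chan struct{} // closed once the buffer has been filled
        buf   *bytes.Buffer
        err   error
    }

    func newBufferedReader() *bufferedReader {
        return &bufferedReader{
            ready: make(chan struct{}),
            buf:   bufPool.Get().(*bytes.Buffer),
        }
    }

    // done records the download result and unblocks readers.
    func (b *bufferedReader) done(err error) {
        b.err = err
        close(b.ready)
    }

    func (b *bufferedReader) Read(p []byte) (int, error) {
        <-b.ready
        if b.err != nil {
            return 0, b.err
        }
        if b.buf == nil {
            return 0, io.EOF
        }
        n, err := b.buf.Read(p)
        if err == io.EOF {
            // Drained: reset and return the buffer to the pool here,
            // so callers never need an explicit Close.
            b.buf.Reset()
            bufPool.Put(b.buf)
            b.buf = nil
        }
        return n, err
    }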
@philandstuff
Contributor

I've looked at this and implemented an alternative in #177.

@tempusfrangit tempusfrangit deleted the reusable-buffers branch March 6, 2024 16:47
@tempusfrangit
Contributor Author

> I've looked at this and implemented an alternative in #177.

WFM, let's use #177.

philandstuff added a commit that referenced this pull request Mar 6, 2024
* Reuse buffers used by bufferedReader

@tempusfrangit tempusfrangit restored the reusable-buffers branch March 6, 2024 18:54
@tempusfrangit tempusfrangit deleted the reusable-buffers branch October 10, 2024 15:43