None of the existing Type, Pre, Post field logic needs to change. The compacted changes from AddChange(change Change) should take the most recent values of the new fields if there is already an existing change in c.cache[ledgerKeyString]:
```go
switch existingChange.LedgerEntryChangeType() {
case xdr.LedgerEntryChangeTypeLedgerEntryCreated:
	// If existing type is created it means that this entry does not
	// exist in a DB so we update entry change.
	c.cache[ledgerKeyString] = Change{
		Type:           key.Type,
		Pre:            existingChange.Pre, // = nil
		Post:           change.Post,
		Reason:         change.Reason,
		OperationIndex: change.OperationIndex,
		Transaction:    change.Transaction,
		Ledger:         change.Ledger,
		LedgerUpgrade:  change.LedgerUpgrade,
	}
	// ... remaining cases unchanged
}
```
Alternatively, instead of propagating the latest change.Reason/OperationIndex/etc. into c.cache[ledgerKeyString], it is possible to keep existingChange.Reason/OperationIndex/etc. From stellar-etl's perspective this would be the less preferred option.
@chowbao It would be helpful for my understanding if you could explain how these new fields can be used (with an example?). Thanks!
Of course
So in stellar-etl we would use the changes to add new columns to some of our tables, for example the contract_data state table, which records the compacted ledger entry changes for contract_data. New columns that would be helpful to add are the transaction_hash or the ledger_sequence in which the compacted change occurred. This is nice because we would no longer have to manually pass the LedgerCloseMeta around as we process changes (the Ledger is already in the change).
stellar-etl currently has this flow in the code, if that helps clear things up as well:
- Output and upload to a data lake to ingest into BigQuery (Hubble)
By also passing new fields like OperationIndex and Transaction, we could reasonably add new columns to the contract_data table, like a transaction_hash or operation_id that is tied to the change.
Related to #5535

The change_compactor should be updated to use the new fields in the Change struct (Reason, OperationIndex, Transaction, Ledger, and LedgerUpgrade). For example, see https://github.com/stellar/go/blob/master/ingest/change_compactor.go#L152-L166.