This repository has been archived by the owner on Jul 16, 2020. It is now read-only.

[Batch - PART2] Batch operations and custom data #80

Open

wants to merge 8 commits into base: feat/control-updates

Conversation


@Mobius5150 Mobius5150 commented Jun 9, 2018

Stacked PR on #81

This PR exposes a new batch operations API that allows you to batch updates to
multiple objects (e.g. controls, participants) and multiple property
updates on a single object. The API also supports custom data, including
nested objects and arrays.

Sample setting custom data on a control object:

interactive_batch batch;
interactive_control_batch_begin(session, &batch, "default");

interactive_batch_entry entry;
interactive_control_batch_add(batch, &entry, "GiveHealth");
interactive_batch_add_param_str(batch, entry, "foo", "bar");
interactive_batch_add_param_uint(batch, entry, "number", 42);
interactive_batch_add_param_array(batch, entry, "array", [](interactive_batch batch, interactive_batch_array arrayItem)
{
    interactive_batch_array_push_object(batch, arrayItem, [](interactive_batch batch, interactive_batch_entry objectEntry)
    {
        interactive_batch_add_param_str(batch, objectEntry, "foo", "bar");
        interactive_batch_add_param_uint(batch, objectEntry, "number", 42);
    });
    interactive_batch_array_push_str(batch, arrayItem, "bar");
    interactive_batch_array_push_uint(batch, arrayItem, 42);
});
interactive_batch_add_param_object(batch, entry, "object", [](interactive_batch batch, interactive_batch_entry objectEntry)
{
    interactive_batch_add_param_str(batch, objectEntry, "foo", "bar");
    interactive_batch_add_param_uint(batch, objectEntry, "number", 42);
});
interactive_control_batch_end(batch);

Results in:

{
   "id":4,
   "method":"updateControls",
   "discard":true,
   "seq":5,
   "params":{
      "priority":0,
      "controls":[
         {
            "controlID":"GiveHealth",
            "foo":"bar",
            "number":42,
            "array":[
               {
                  "foo":"bar",
                  "number":42
               },
               "bar",
               42
            ],
            "object":{
               "foo":"bar",
               "number":42
            }
         }
      ],
      "sceneID":"default"
   }
}

Batch operations are supported for the following object types (a participant sketch follows the list):

  • Controls
  • Participants
  • Scenes
  • Groups
  • World
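
For illustration, the same pattern applied to a participant might look like this. The begin/add calls below appear elsewhere in this PR; the end call, the participant ID, and the "score" property are assumptions for the sketch:

interactive_batch batch;
interactive_participant_batch_begin(session, &batch);

interactive_batch_entry entry;
interactive_participant_batch_add(batch, &entry, "participant-id");
interactive_batch_add_param_uint(batch, entry, "score", 100);
interactive_participant_batch_end(batch);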

Tests are not yet complete: we have yet to expose handlers for control
update events, and the current implementation does not update internal
state on all events, so it is not yet testable. We also need to invest in
proper teardown methods to kill the interactive connection when using
asserts.

DO NOT MERGE

@Mobius5150 Mobius5150 changed the title from "WIP: feat(batch): Initial commit for batch operations" to "WIP: feat(batch): Batch operations and custom data" on Jun 9, 2018
Currently this only exposes batch methods for `controls` - other objects will
come in a subsequent commit.
Fix build issues and warnings and have working tests for control updates
@Mobius5150 Mobius5150 changed the base branch from master to feat/control-updates June 15, 2018 20:56
@Mobius5150 Mobius5150 changed the title from "WIP: feat(batch): Batch operations and custom data" to "[Batch - PART2] Batch operations and custom data" on Jun 15, 2018
/// <summary>
/// Opaque handle for a batch update object
/// </summary>
struct interactive_batch_public {};
Contributor

Since everything in interactivity.h is public, can we leave off the _public suffix? The existing paradigm is to mark internal structs with an _internal suffix to differentiate.

Contributor

Also, what do you think about changing just this structure name to interactive_batch_op? It's a little more self-documenting, as it indicates what you would do with the object (begin, commit).

Contributor Author

That depends - there's already a typedef below for interactive_batch, which is the type that external developers will use. Adding _op is cool, but let me know what you want to do re: _public. This struct is something they'll likely not reference - it's just there for type safety.
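
For context, a minimal sketch of the opaque-handle pattern being discussed (the public struct and typedef match the header excerpts in this thread; the internal struct members mirror the diff excerpt shown further down and are assumed to live in the implementation file):

// Public, empty struct gives callers a distinct pointer type purely for type safety.
struct interactive_batch_public {};
typedef interactive_batch_public* interactive_batch;

// Internal definition; callers never see these members.
struct interactive_batch_internal
{
	interactive_session_internal* session;
	const char* method;
};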

Contributor
@JoshSnider JoshSnider Jun 19, 2018

Gotcha - sorry, I misread this the first time. _public is fine.


typedef interactive_batch_array_public* interactive_batch_array;

typedef void(*interactive_batch_object_callback)(interactive_batch, interactive_batch_entry);
Contributor

interactive_batch_entry_callback?

Contributor Author

👍

/// <para>The Mixer interactive protocol implements the JSON Merge Patch algorithm (RFC 7386) for updates on protocol objects - meaning that to delete an object property you need to set it to null rather than just omit it in an update.</para>
/// </remarks>
/// </summary>
int interactive_batch_add_param_null(interactive_batch batch, interactive_batch_entry entry, const char* name);
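
A quick illustration of the RFC 7386 behavior described in the doc comment above (the "cooldown" property name is purely illustrative):

// Setting a property to null in the batch entry...
interactive_batch_add_param_null(batch, entry, "cooldown");
// ...emits "cooldown": null in the update payload, which per JSON Merge Patch
// deletes the property on the service; omitting it would leave it unchanged.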
Contributor

Can we simplify this to just one handle (interactive_batch_entry entry, const char* name)? Ideally the batch and the entry are decoupled except for when they are linked with the interactive_*_batch_add functions.

Contributor Author

Sure - it means another pointer in the struct, but that's not the end of the world.

Contributor

Yeah, the struct gets a little larger, but it makes the calling pattern much cleaner in that you only have to pass the handle to the function that deals with that handle (begin/commit for batch_op, add_param for entry).
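
A sketch of the signature change being proposed here; the simplified form is hypothetical, not part of the current diff:

// Current: both handles are passed on every parameter call.
int interactive_batch_add_param_null(interactive_batch batch, interactive_batch_entry entry, const char* name);

// Proposed: the entry carries a pointer back to its batch, so only the entry is needed.
int interactive_batch_add_param_null(interactive_batch_entry entry, const char* name);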

/// <para>Note that a batch request should only contain a single entry per participant ID.</para>
/// </remarks>
/// </summary>
int interactive_participant_batch_add(interactive_batch batch, interactive_batch_entry* entry, const char* participantId);
Contributor

What error do I get if I do this?

interactive_batch batch;
interactive_participant_batch_begin(session, &batch);
interactive_batch_entry entry;
int err = interactive_control_batch_add(batch, &entry, "GiveHealth");

Contributor Author

Good catch - I missed that. But let's decide on a direction for #80 (comment) before I fix it.
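
One possible shape for the missing check, sketched inside interactive_control_batch_add; this is hypothetical, and both the handle cast and the error constant name are assumptions rather than code from this diff:

// Reject entries added to a batch that was begun for a different method,
// instead of silently emitting a malformed request.
interactive_batch_internal* batchInternal = reinterpret_cast<interactive_batch_internal*>(batch);
if (0 != strcmp("updateControls", batchInternal->method))
{
	return MIXER_ERROR_INVALID_OPERATION; // assumed error code name
}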

/// <summary>
/// Adds an update to the participant batch request for the participant with the given id.
/// <remarks>
/// <para>Note that a batch request should only contain a single entry per participant ID.</para>
Contributor

Is this required or just a suggestion?

Contributor Author

Required right now. We could work around it with a decent amount of work, but it would increase the footprint of an entry object by 2 * size_t for each protocol object: world, group, scene, participant, control.

Doing this could simplify calling patterns too. Thoughts?

Contributor Author

Actually... Nevermind the above comment. This wouldn't be a good idea.

Right now the protocol only supports updates for a single type of object in a single method call. We could abstract this away BUT it would mean that locally we would turn this into as many participantUpdate, controlUpdate, etc... calls as were needed. But the problem is that the calling pattern suggests the update would be performed as a single atomic operation when it would actually be performed as many independent operations, potentially out of order, each of which could fail.

So let's not do that and call this a requirement for now.

Contributor

Yeah, totally agreed. I meant: if you add an entry for an already-existing ID, does that break? Can we enforce it with an error message, or just clean it up and do a merge?

Would love to see test cases that also call these in odd ways and assert errors.
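
A sketch of the kind of test being asked for, assuming a gtest-style harness, MIXER_OK as the success code, and the current function names; the participant ID is illustrative:

// Adding the same participant twice in one batch should surface an error
// (or be merged) rather than silently producing two entries.
interactive_batch batch;
ASSERT_EQ(MIXER_OK, interactive_participant_batch_begin(session, &batch));

interactive_batch_entry first;
ASSERT_EQ(MIXER_OK, interactive_participant_batch_add(batch, &first, "participant-1"));

interactive_batch_entry duplicate;
ASSERT_NE(MIXER_OK, interactive_participant_batch_add(batch, &duplicate, "participant-1"));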

struct interactive_batch_internal
{
	interactive_session_internal* session;
	const char* method;
Contributor

Who owns this and param below?

Contributor Author

The respective batch begin method, e.g. https://github.com/mixer/interactive-cpp/pull/80/files/2337d46dacb92bc2621421564b7f6a5988c80a61#diff-426a8659af90d0fe57bec68619f83941R20

See comment below about modifying the document directly rather than needing to keep param.

Both strings need to be valid until _commit is called, which is why they aren't copied.

Contributor
@JoshSnider JoshSnider Jun 19, 2018

That will probably manifest in runtime errors where the caller cleans up the string before calling commit. Can we do a copy here and use std::string? It's a known runtime impact (2 string copies per batch) and shouldn't be a perf issue.

@@ -10,8 +10,11 @@ typedef std::function<void(rapidjson::Document::AllocatorType& allocator, rapidj

int create_method_json(interactive_session_internal& session, const std::string& method, on_get_params getParams, bool discard, unsigned int* id, std::shared_ptr<rapidjson::Document>& methodDoc)
{
-	std::shared_ptr<rapidjson::Document> doc(std::make_shared<rapidjson::Document>());
-	doc->SetObject();
+	std::shared_ptr<rapidjson::Document> doc(nullptr == methodDoc ? std::make_shared<rapidjson::Document>() : methodDoc);
Contributor

The reason this was not written this way initially was to preserve output parameter state until we are ready to return: if an error state occurs you would not want to have modified methodDoc. This seems okay since there are no error states below, but it would have to be rewritten if there were (unlikely). However, I'm more interested in the reason for the change: is there a need to re-use the incoming pointer? It would seem that you would always expect a clean JSON object, but if you pass in a non-null object this would add to it, somewhat violating the 'create_method_json' name for the function.

Contributor Author

Good points. Two reasons:

  • Efficiency on the calling thread (already have the document object on hand)
  • I was working towards having the batch begin function be able to customize the document before getParams is called so that we don't have to keep state in the batch entry in the future.

Also, is there a better way to get the rapidjson allocator than creating a document? Would there ever be concern with creating the JSON objects with an allocator from document A and then using them in document B?

Could always split this call into another function.

Contributor

The allocator is bound to the document, definitely can't mix and match them. rapidjson has functions to move objects around without copies which might solve this.
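
For reference, a minimal sketch of the safe cross-document pattern (generic rapidjson usage, not code from this PR): a value built with document A's allocator is deep-copied into document B via CopyFrom with B's allocator, since a plain assignment is a move that leaves the memory owned by the wrong allocator.

rapidjson::Document docA;
docA.SetObject();
docA.AddMember("foo", "bar", docA.GetAllocator());

rapidjson::Document docB;
docB.SetObject();

// Deep-copy docA's contents using docB's allocator, then attach to docB.
rapidjson::Value copied;
copied.CopyFrom(docA, docB.GetAllocator());
docB.AddMember("params", copied, docB.GetAllocator());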

freePtr = next;
next = next->next;
delete freePtr;
}
Contributor

If there's a failure between interactive_batch_begin and interactive_batch_commit, will all created entries leak?

Contributor Author

Good point. I'll make interactive_batch_free public so you can call it in an error state. We'd then also want to make _commit not free the object, so you consistently call _free every time.

That said, what do you think about having interactive_batch_commit free the entries (so you don't iterate the linked list again) but not the batch op object? This detail shouldn't be exposed to the user at all.

Contributor

I think we can get away with not having the caller own anything but interactive_batch_op which would greatly simplify memory management. See my comment below and let's chat.
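
A sketch of how the calling pattern could look if _commit stops freeing the batch and a public cleanup call is added; interactive_batch_free is the hypothetical cleanup discussed above, and the other names follow the control sample earlier in the PR:

// Caller owns only the batch op handle; _free is safe on both success and error paths.
interactive_batch batch;
int err = interactive_control_batch_begin(session, &batch, "default");
if (!err)
{
	interactive_batch_entry entry;
	err = interactive_control_batch_add(batch, &entry, "GiveHealth");
	if (!err)
	{
		err = interactive_control_batch_end(batch);
	}
	interactive_batch_free(batch);
}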

{
}

int interactive_batch_begin(interactive_session session, interactive_batch* batchPtr, char *methodName, on_get_batch_params getParams)
Contributor

By convention output parameters are always the last parameters so this should be (interactive_session session, char* methodName, on_get_batch_params getParams, interactive_batch* batchPtr)

@JoshSnider
Contributor

What do you think about this?

interactive_batch_op batchOp;
interactive_batch_begin(session, "default", interactive_batch_type_control, interactive_batch_op_type_update, &batchOp);

interactive_batch_entry entry;
interactive_batch_add(batchOp, "GiveHealth", &entry);

interactive_batch_add_param_str(entry, "foo", "bar");
interactive_batch_add_param_uint(entry, "number", 42);

interactive_batch_array arrayItem;
interactive_batch_add_param_array(entry, "array", 3, &arrayItem);

interactive_batch_entry arrObject;
interactive_batch_array_push_object(arrayItem, "object", &arrObject);
interactive_batch_add_param_str(arrObject, "foo", "bar");
interactive_batch_add_param_uint(arrObject, "number", 42);    

interactive_batch_array_push_str(arrayItem, "array", "bar");
interactive_batch_array_set_uint(arrayItem, 2, 42); // Set index '2' to 42

interactive_batch_entry objectEntry;
interactive_batch_add_param_object(entry, "object", &objectEntry);
interactive_batch_add_param_str(objectEntry, "foo", "bar");
interactive_batch_add_param_uint(objectEntry, "number", 42);

interactive_batch_commit(batchOp);
interactive_batch_close(batchOp);

@Mobius5150
Contributor Author

@JoshSnider Let me know if we're looking good now, and I'll do the rest of the object types.
