Custom apps workflow discussion #4037
With the described concept, the users would get access to Kafka and would connect and interact with Kafka from their own code. One more approach, in the direction of facilitating building real-time systems on top of Kafka, would be to do the Kafka integration ourselves and call the custom app once a new record is consumed, to execute the custom code. In this approach the user would only write the code that handles a new record when it arrives. We can call the custom function asynchronously (written in the language the user prefers) with the key/value arguments from the consumed Kafka record. When the job completes successfully, we commit the offset in Kafka and continue to the next record. That way we would handle the committing of the offsets and all the hard Kafka work, and it would be up to the users to write the code that processes the record whenever a new record comes into a topic. This is not a solution for systems receiving thousands of new events per second, but it works for a system that is not receiving that many events.

From a technical implementation perspective, we can use Kubernetes jobs (to run the custom apps' code) or persistent workers that wait and execute the code. The advantage of the first approach is that all the scheduling/restarting/outcome logic is already there, while with the second approach the system would be faster, as the workers will be constantly there (saving the time of creating the job and cleaning it up).
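The consume-then-dispatch loop described above could be sketched roughly as follows. This is a minimal illustration, not an existing Airy API: `run_app`, `Record`, and the injected `commit` callback are all hypothetical names, and in practice `handler` might be a Kubernetes job rather than an in-process function.

```python
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Record:
    """Simplified stand-in for a consumed Kafka record."""
    key: str
    value: str
    offset: int


def run_app(records: Iterable[Record],
            handler: Callable[[str, str], None],
            commit: Callable[[int], None]) -> None:
    """Call the user-supplied handler for each consumed record, and
    commit the offset only after the handler completes successfully."""
    for rec in records:
        # User code: may raise, in which case the offset is not committed
        # and the record would be retried.
        handler(rec.key, rec.value)
        # Kafka convention: commit the offset of the *next* record to read.
        commit(rec.offset + 1)
```

The key property is that the platform owns the commit, so a failing handler never advances the consumer past an unprocessed record.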
Thanks, @ljupcovangelski. Interesting idea as an alternative to using the standard Kafka libraries. We also still have the webhooks as a way to call the relevant apps, to the extent they provide an endpoint. Then again, queueing the jobs inside the custom apps feels like overkill, but some people might opt to do so. In general I think the approach should be:
For the standard clients I would suggest offering code snippets, including parsing config env variables to connect to Kafka (in case Kafka runs externally, or the internal envs for the embedded Kafka). For the deployment itself I think using git with local folders or connecting the apps to remote repos might work best. I would assume people have existing CI/CD pipelines, so working as closely as possible to those would avoid a change in behavior.
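Such a snippet for parsing the connection settings from env variables could look like the sketch below. The variable names (`KAFKA_BROKERS`, `KAFKA_GROUP_ID`) and the fallback in-cluster broker address are assumptions for illustration, not Airy's actual conventions.

```python
import os


def kafka_config() -> dict:
    """Assemble a Kafka client config from environment variables,
    falling back to an assumed embedded-broker address."""
    return {
        # External Kafka if configured, otherwise the embedded broker
        # (service name below is an assumption).
        "bootstrap.servers": os.environ.get("KAFKA_BROKERS",
                                            "kafka-headless:9092"),
        "group.id": os.environ.get("KAFKA_GROUP_ID", "custom-app"),
        "auto.offset.reset": "earliest",
    }
```

The resulting dict matches the config shape that standard clients such as `confluent-kafka` accept, so the same snippet works whether Kafka is embedded or external.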
Thanks for the answer 👍
Do you mean that we give templates and tell people to build their own images/Helm chart, and then we tell them to install the Helm chart? In that case, we don't actually need any functionality in the UI/backend and it is just docs.
At the moment we want to support `custom apps` in Python and Typescript. In all of the cases, from an infrastructure perspective, we would need the `airy.yaml` file and the `cluster.update` endpoint.

In terms of installing and running the `custom apps` from the UI, there are a few approaches that we can take:

**1. Create the app offline, install via the endpoint**
The user can use the `airy CLI` or a `bash script` to bootstrap a component directory in the workspace. Then the component should be built on top of predefined Docker images for the different languages and pushed to a Docker repository. As a last step, the Helm chart should also be pushed to a Helm repository.

The repository should then be added to the `repositories` ConfigMap inside the Airy instance, and the new component can then be installed via the `components.install` endpoint. It will not be visible in the Catalog, however, as the Catalog reads from the `catalog` repo. If we also want to show the component in the Catalog, we need to modify the backend code for `components.list` to retrieve all the components that exist in the defined repositories.

This approach requires the least modification in the backend and the frontend, but has lots of manual steps.
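For concreteness, an entry in the `repositories` ConfigMap might look roughly like the fragment below. This is an illustrative sketch only: the key names, data layout, and repository URL are assumptions, not the actual schema used by Airy.

```yaml
# Hypothetical shape of the `repositories` ConfigMap; the real schema
# may differ.
apiVersion: v1
kind: ConfigMap
metadata:
  name: repositories
data:
  repositories.json: |
    {
      "my-custom-apps": {
        "url": "https://my-org.github.io/helm-charts"
      }
    }
```

Registering the Helm repository here is what would let `components.install` resolve the chart pushed in the last manual step.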
**2. Add App through the `Apps` panel**

The user will log into the `Control Center` and then navigate to the `Apps` panel, where they will be able to create a new App. A zip package with the code will be uploaded there. The code will be packaged on a predefined Docker image depending on the language, pushed to a Docker repository, and a Helm chart will also be pushed to a Helm repository.

The App will only be visible on the `Apps` panel and not in the `Catalog`. We would need to create endpoints in the backend to manage the apps: `apps.list`, `apps.get`, `apps.install`, `apps.uninstall`.

Not sure that this is in line with our convention, as we can say that the `custom apps` can also be treated as components. This approach would also require the most modification, both in the backend and in the frontend.

**3. A template component that can be "installed" multiple times**
The user will navigate to the Catalog page in the `Control Center`, where we will have an existing component, `Custom app`, that can be instantiated (installed) multiple times under different names. When the user clicks on this component, they will be able to see all the currently running `custom apps` and remove some of them. There they will also have the possibility to create a new app by adding some configuration parameters as well as the code. The app should then show up under the `Apps` page, but we should see only one component (with multiple instances) in the `Catalog`.

A bit strange might be the actions that we have around the components (`install/uninstall/enable/disable`) and how those will impact the "Custom apps".

A combination of `1` and `3` is also possible, where the Docker image is created manually and the installation happens through the `Custom app` component.

**4. Create a new component through the Catalog**
Probably the most straightforward approach would be that a user navigates to the Catalog page in the `Control Center` and then creates a completely new component. The code will be packaged, the Docker image and Helm chart pushed, and it should appear as a completely new component. The user can then Install/Uninstall/Enable/Disable it just like any regular component. It will be available under Connectors or Apps, depending on the metadata of the component.

For this to work we would need to modify the backend of the Catalog to display not only what is in the `catalog` repo. We can add an additional `catalog` ConfigMap which can be amended to the existing components defined in the catalog repo. Also maybe a workflow around the repositories, as it makes sense that new repositories are created/attached (both Docker and Helm) where the artifacts will be published.

A good way to start designing the feature is by looking at how the experience should be from the UI perspective. Also not to be neglected: the `developer experience`, i.e. writing and testing code as well as tooling.