This benchmark suite is designed for evaluating the scheduler in a Function-as-a-Service platform. It consists of functions from various application domains so as to simulate the mixed workloads seen in production.
To deploy and run the functions, you need to:

- have an Alibaba Cloud account, because the functions are implemented for Function Compute;
- know how to use the `fun` CLI tool. If not, you can check out this Get Started guide.
The benchmark suite has five applications, each with two functions. You can follow the README in each application's directory to deploy and invoke the functions.
| Application | Function | Programming Language | Dependencies |
|---|---|---|---|
| Smart Parking | Query Vacancy | JavaScript | Redis |
| Smart Parking | Reserve Spot | JavaScript | Redis, Kafka |
| Log Processing | Anonymize Log | Rust | Kafka |
| Log Processing | Filter Log | Rust | Kafka |
| Computer Vision | Detect Object | Python | TensorFlow |
| Computer Vision | Classify Image | Python | TensorFlow |
| Media Processing | Get Media Meta | Python | OSS |
| Media Processing | Convert Audio | Python | OSS |
| Smart Manufacturing | Ingest Data | C++ | MySQL |
| Smart Manufacturing | Detect Anomaly | C++ | MySQL |
The benchmark suite only supports Function Compute. To port it to other FaaS platforms, you have to change how arguments are passed into the functions and how the functions are deployed.
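To illustrate the platform-specific argument passing mentioned above, here is a minimal sketch of how a function receives its input in Function Compute's Python runtime: the handler is given the event payload as raw bytes and parses it itself. The `spot_id` parameter is a hypothetical example, not an actual argument used by the suite; porting to another platform mainly means rewriting this entry point (e.g., AWS Lambda hands the handler an already-parsed `dict` instead of bytes).

```python
import json


def handler(event, context):
    # In Function Compute's Python runtime, `event` arrives as raw bytes;
    # each function parses it into the arguments it expects.
    args = json.loads(event)
    spot_id = args.get("spot_id")  # hypothetical parameter, for illustration only
    # Return a JSON string, as the platform serializes the response as-is.
    return json.dumps({"spot_id": spot_id, "status": "ok"})
```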
If you use this benchmark in your research project, please cite the following paper:
```bibtex
@inproceedings{tian2022owl,
  title={Owl: Performance-Aware Scheduling for Resource-Efficient Function-as-a-Service Cloud},
  author={Tian, Huangshi and Li, Suyi and Wang, Ao and Wang, Wei and Wu, Tianlong and Yang, Haoran},
  booktitle={Proceedings of the ACM Symposium on Cloud Computing 2022},
  year={2022}
}
```
@SimonZYC has contributed to this benchmark.