performance_framework
Chapters:
- Measurements
- How to run a performance test
- How to create a performance test
- Workflow of performance test
- How to extend
Every performance test will create a log file in CSV format with the following data:
- minimal time -- the minimal time of a call.
- maximal time -- the maximal time of a call.
- average time -- the average time of all calls.
- timeout -- the percentage of calls that took longer than the configured timeout.
- error -- the percentage of calls that failed completely.
The output will look like:
# Results of get_Group
# The results of this test will show how long the system needed to get a group.
# This test is based on: 0 users 896 groups
serial*parallel;min;max;avg;timeout;error
1x1-Group.read;96;96;96;0%;0%
1x2-Group.read;41;44;42;0%;0%
1x3-Group.read;38;51;43;0%;0%
1x4-Group.read;28;56;43;0%;0%
1x5-Group.read;38;54;44;0%;0%
1x6-Group.read;32;50;41;0%;0%
1x7-Group.read;26;79;42;0%;0%
1x8-Group.read;25;77;45;0%;0%
1x9-Group.read;28;51;41;0%;0%
1x10-Group.read;25;58;39;0%;0%
2x1-Group.read;39;41;40;0%;0%
2x2-Group.read;32;74;46;0%;0%
2x3-Group.read;22;40;29;0%;0%
2x4-Group.read;26;42;37;0%;0%
2x5-Group.read;39;62;46;0%;0%
2x6-Group.read;26;51;41;0%;0%
2x7-Group.read;27;76;43;0%;0%
2x8-Group.read;20;56;37;0%;0%
2x9-Group.read;33;82;48;0%;0%
2x10-Group.read;26;82;41;0%;0%
3x1-Group.read;26;47;37;0%;0%
3x2-Group.read;35;48;41;0%;0%
3x3-Group.read;18;58;39;0%;0%
3x4-Group.read;18;77;42;0%;0%
3x5-Group.read;23;53;39;0%;0%
3x6-Group.read;19;81;44;0%;0%
3x7-Group.read;20;73;41;0%;0%
3x8-Group.read;20;87;42;0%;0%
3x9-Group.read;24;60;39;0%;0%
3x10-Group.read;29;53;39;0%;0%
Read the first column like this: "1x2-Group.read" means that in the first iteration, two parallel calls were made to the read method of the Group module. The second column is the minimal time in ms, the third the maximal time in ms, avg the average time in ms, timeout the percentage of timeouts and error the percentage of errors.
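If you want to post-process a log, it can be read with Python's standard csv module. A minimal sketch, assuming a log file written to /tmp (the file name is made up; comment lines in the log start with '#'):

import csv

# Skip the comment lines, then read the semicolon-separated rows.
with open('/tmp/get_Group.csv') as f:
    rows = [line for line in f if not line.startswith('#')]

reader = csv.DictReader(rows, delimiter=';')
for row in reader:
    print('%s avg: %s ms' % (row['serial*parallel'], row['avg']))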
To run one or more performance tests you need to have:
- a running osiam-server instance
- a running osiam-example-client instance
Then go to:
python/osiam/performance
and run
python runner.py <test1.py> <test2.py> ...
There are also some configuration parameters for the runner script:
- --server -- the server host name (default: localhost)
- --client -- the client host name (default: localhost)
- --client_id -- the client ID (default: example-client)
- --iterations -- the number of repeated runs (default: 5)
- -p, --parallel -- the number of parallel runs (default: 10)
- -t, --timeout -- if this timeout is reached, a request is considered unsuccessful (default: 500)
- -l, --log-directory -- the directory to store the log output (default: /tmp)
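For example, to run a test against a remote installation with a longer timeout and more parallel calls (the host names and log directory here are made up):

python runner.py --server osiam.example.org --client client.example.org -p 20 -t 1000 -l /var/log/performance get_Group.py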
To create a performance test you need to write a Python test script, which is built like this:
name = 'delete_Group'
description = 'The results of this test will show how long the system needed to delete Group.'
configuration = {'create': {'Group': 'per_call'}}
tests = [{'resource': 'Group', 'method': 'delete'}]
- name is a mandatory string that identifies the test
- description is an optional string that describes the test
- configuration is an optional block.
- With create you can create either Users or Groups. It accepts either per_call, which means that for every call (calculated from iterations and parallel calls) a new user or group will be created, or an integer value representing the amount to create (see the example after this list)
- tests represents the test calls; it is a list, so you can describe more than one method call
- resource in a test is either Group or User
- method is either delete, create, update, replace, get or search
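For example, a test definition that prefills a fixed number of groups and then exercises two methods could look like this (the name, description and values are illustrative):

name = 'search_Group'
description = 'The results of this test will show how long the system needed to search and update groups.'
configuration = {'create': {'Group': 500}}
tests = [{'resource': 'Group', 'method': 'search'},
         {'resource': 'Group', 'method': 'update'}]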
The performance test scripts are separated into:
- runner.py -- the script to start the tests
- measuring.py -- contains the functionality to measure the time of the calls
- obtain_access_token.py -- used to get an access_token from the osiam-server and example-client
- prefill_osiam.py -- used to insert a large amount of data before the actual execution of the test cases
- utils.py -- some helper functionality
- user.py -- contains the methods read, delete, update, replace, create and search for the user resource
- group.py -- contains the methods read, delete, update, replace, create and search for the group resource
To extend the performance scripts you just need to create a new method annotated with @measuring.measure in user.py or group.py, e.g.:
@measuring.measure
def __update_group__(group):
    return scim.update_group(group_ids.pop(), create_dynamic_group())
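For reference, a timing decorator of this kind can be as simple as the following sketch; the real implementation in measuring.py collects the results for the CSV log and will differ in detail:

import functools
import time

def measure(func):
    # Wrap a call and report its duration in milliseconds.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        try:
            return func(*args, **kwargs)
        finally:
            duration_ms = (time.time() - start) * 1000
            print('%s took %.0f ms' % (func.__name__, duration_ms))
    return wrapper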
If you want to add a new resource, create a new module for it (e.g. client.py) and use it in your test definitions with the correct notation.
If you want your new module to be prefilled, you need to extend the insert_data method in runner.py.
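Such a new module would follow the same pattern as user.py and group.py: one measured function per method. A hypothetical skeleton for client.py (the connector call is a placeholder, not a confirmed API):

import measuring

@measuring.measure
def __read_client__(client_id):
    # Placeholder: call the connector's read operation for the
    # hypothetical client resource here.
    pass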