Diagnostics API
Use the Diagnostics API to run system and performance tests and get the status and results of test runs.
See Instabase API authorization and response conventions for authorization, success response, and error response convention details.
For the Diagnostics API, api_root defines where to route API requests for your Instabase instance:
import json, requests
api_root = "https://instabase.com/api/v1/diagnostics"
Run system test
Use this API to begin running system tests.
Request
import json, requests
headers = {'Authorization': 'Bearer {0}'.format(token)}
args = {
    'api_key': 'ABCDEFGHIJKLMOPQRS',
    'url_base': 'https://instabase.com',
    'root_test_folder_path': 'admin/my-repo/fs/Instabase Drive/test_root'
}
data = json.dumps(args)
resp = requests.post(api_root + '/system_test', headers=headers, data=data).json()
The body of the request must be a JSON object with the following fields:

api_key
: the OAuth token that will be used to make API calls during the tests.

url_base
: the base URL that will be used for API calls during testing.

root_test_folder_path
: the path of the folder where a temporary subfolder will be generated for the test. All files used for testing will be housed in the subfolder, and the subfolder may be deleted after test completion (check the test results to confirm the cleanup succeeded).
Response
If testing began successfully:
HTTP STATUS CODE 200
{
  "status": "OK",
  "test_id": "system-test-run-identifier"
}
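The test_id in the response identifies the run and is the value the status endpoint below expects. Continuing from the request above:

test_id = resp['test_id']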
Get system test status
Use this API to retrieve the status of a system test run, and the results of that run if the tests have completed.
Request
import json, requests
headers = {'Authorization': 'Bearer {0}'.format(token)}
resp = requests.get(api_root + '/system_test/result/<test_id>', headers=headers).json()
Response
If successful:
{
  "state": "DONE",
  "status": "OK",
  "test_result": {
    "status": "OK",
    "msg": null,
    "was_successful": false,
    "errors": {
      "test_that_encountered_error": "traceback of error"
    },
    "failures": {
      "test_that_failed": "cause of failure"
    }
  }
}
The body of the response is a JSON dictionary with the following fields:

state
: "PENDING" | "DONE". Indicates if the test run has finished.

status
: "OK" | "ERROR". Indicates if the test status API call encountered an error.

msg
: If status is "ERROR", contains information about the error.

test_result
: A dictionary object that contains detailed information about the test run after state is "DONE". It contains the following fields:

status
: "OK" | "ERROR". Indicates whether the test run was able to be completed.

msg
: If test_result[status] is "ERROR", contains information about the error that occurred.

was_successful
: Indicates whether all tests succeeded.

errors
: A dictionary that maps each test that failed due to an uncaught exception to a traceback of the exception.

failures
: A dictionary that maps each test that failed gracefully to a description of the failure.

The difference between a test error and a test failure is that an error results from an uncaught exception, while a failure results from output that did not match the results the test expected. In either case, was_successful will be false, but subsequent tests will still be run.
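Because state stays "PENDING" until the run finishes, a simple polling loop is a natural way to consume this endpoint. Below is a minimal sketch, assuming token, api_root, and test_id are defined as in the examples above (the 10-second interval is an arbitrary choice):

import time, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
url = api_root + '/system_test/result/' + test_id

# Poll until the run leaves the PENDING state.
while True:
    resp = requests.get(url, headers=headers).json()
    if resp['state'] == 'DONE':
        break
    time.sleep(10)

result = resp['test_result']
if result['was_successful']:
    print('All system tests passed')
else:
    # errors hold uncaught exceptions; failures hold unmatched expected results
    for test, tb in result.get('errors', {}).items():
        print('ERROR in {0}:\n{1}'.format(test, tb))
    for test, cause in result.get('failures', {}).items():
        print('FAILURE in {0}: {1}'.format(test, cause))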
Supported tests
Currently, system tests cover the following scenarios.

- Drive tests:
  - test_mkdir_and_create_file: tests creating directories and files.
  - test_copy_and_move_file: tests copying and moving files.
  - test_copy_and_move_folder: tests copying and moving folders.
  - test_list_dir: tests listing directories' contents.
  - test_write_and_read_large_files: tests reading and writing files up to 100 MB in size.
  - test_consecutive_append: tests consecutively appending to the ends of files.
  - test_unzip_and_extract: tests unzipping and extracting operations.
Run filesystem performance tests
Use this API to discover available filesystem performance tests and their descriptions, and to begin running filesystem performance tests.
You must enable the Marketplace before using this API.
Discover available filesystem performance tests
Request
import json, requests
headers = {'Authorization': 'Bearer {0}'.format(token)}
resp = requests.get(api_root + '/performance_test/filesystem', headers=headers).json()
Response
If successful:
HTTP STATUS CODE 200
{
  "status": "OK",
  "tests": [{"test_name": "perf_test_read_file_api", "test_description": "Invokes 'Read-File HTTP API' calls and meters performance"}, ...]
}
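The tests array can be fed directly into the start request that follows. For example, a small sketch (assuming the discovery call above succeeded) that prints each test and collects the names to pass as the tests field:

# Print each available test, then collect all test names.
for t in resp['tests']:
    print('{0}: {1}'.format(t['test_name'], t['test_description']))
available_tests = [t['test_name'] for t in resp['tests']]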
Start test
Request
import json, requests
headers = {'Authorization': 'Bearer {0}'.format(token)}
args = {
    'api_key': 'ABCDEFGHIJKLMOPQRS',
    'root_test_folder_path': 'admin/my-repo/fs/Instabase Drive/test_root',
    'duration_seconds': 1,
    'num_threads': [1],
    'file_sizes_kb': [1],
    'tests': ['perf_test_read_file_api']
}
data = json.dumps(args)
resp = requests.post(api_root + '/performance_test/filesystem', headers=headers, data=data).json()
The body of the request must be a JSON object with the following fields:

api_key
[str]: OAuth token used to make API calls during the tests.

root_test_folder_path
[str]: Path of the folder where a temporary subfolder will be generated for the test. All files used in the test will be housed in the subfolder. The subfolder may be deleted after test completion; check the 'teardown_test_status' field in the test results to see if the temporary subfolder was successfully deleted.

duration_seconds
[int] (optional): Length of time each test runs, in seconds. The default is 10 seconds (capped at 1800).

num_threads
[list of ints] (optional): Specifies the concurrency levels at which the tests run file operations (values capped at 100).

file_sizes_kb
[list of ints] (optional): File sizes used for running each test (values capped at 200 MB / max(num_threads)).

tests
[list of str] (optional): Specifies the test cases to run. Valid test names are: 'perf_test_read_file_rpc', 'perf_test_read_file_api', 'perf_test_write_file_api', 'perf_test_write_file_rpc', 'perf_test_write_file_multipart_rpc'.

verify_ssl_certs
[bool] (optional): Enables/disables SSL certificate verification.
These tests produce a substantial number of temporary files. The test runner attempts to clean up generated files at the end of each test, but with more storage-heavy configurations, such as larger file sizes in file_sizes_kb, you can expect to see temporary storage bloat. If your storage bucket is versioned, this storage bloat might persist after the test has finished.
Certain parameters have their maximum value capped to reduce the risk of creating configurations that cause your test to crash. Affected parameters have the value limit noted in their description.
Response
If testing began successfully:
HTTP STATUS CODE 200
{
  "status": "OK",
  "test_id": "<unique-test-ID>"
}
Get filesystem performance test status
Use this API to retrieve the status of a performance test run, and the results of that run if the tests have completed. You can poll this endpoint until the test status is no longer "PENDING". Note that if the test crashes, it might be left permanently in the "PENDING" state; in this case, we recommend running the test again.
Request
import json, requests
headers = {'Authorization': 'Bearer {0}'.format(token)}
resp = requests.get(api_root + '/performance_test/result/<test_id>', headers=headers).json()
Response
If successful, you should expect a result similar to the sample below:
{
  "status": "OK",
  "test_status": "DONE",
  "test_result": {
    "suite_name": "filesystem_test_suite",
    "test_params": {
      "num_threads": [1],
      "duration_seconds": 1,
      "tests": ["perf_test_read_file_api"],
      "test_file_sizes_kb": [1],
      "test_root_path": "tester/tests/fs/Instabase Drive/performance_tests"
    },
    "version": 1,
    "suite_status": {
      "status_code": "OK",
      "msg": ""
    },
    "teardown_status": {
      "status_code": "OK",
      "msg": ""
    },
    "start_time": "2021-12-07T20:51:25.792180",
    "end_time": "2021-12-07T20:52:58.948332",
    "results": [
      {
        "test_name": "Read-File HTTP API 1kb",
        "test_description": "Invokes 'Read-File HTTP API' calls and meters performance",
        "request_type": "HTTP",
        "file_info": {
          "file_size": 1,
          "file_paths": [
            "<paths where file(s) existed during this test>"
          ],
          "file_type": "<the file type used for the test>"
        },
        "start_time": "<Start time of this test as a datetime string>",
        "thread_count": 1,
        "errors": ["<list of observed errors during this test>"],
        "statistics": {
          "num_successes": "<number of successful executions of this operation>",
          "num_errors": "<number of errors during executions of this operation>",
          "num_total": "<total recorded executions of this operation>",
          "requests_per_second": "<operations successfully executed per second>",
          "latencies_stats_seconds": {
            "mean": "<mean latency of successfully executed operations>",
            "median": "<median latency of successfully executed operations>",
            "90th_percentile": "<90th percentile latency of successfully executed operations>",
            "99th_percentile": "<99th percentile latency of successfully executed operations>",
            "75th_percentile": "<75th percentile latency of successfully executed operations>",
            "max": "<max latency of successfully executed operations>",
            "min": "<min latency of successfully executed operations>"
          }
        },
        "test_status": {
          "status_code": "OK",
          "msg": ""
        },
        "end_time": "<End time of this test as a datetime string>",
        "teardown_test_status": {
          "status_code": "OK",
          "msg": ""
        }
      }
    ]
  }
}
The body of the response is a JSON object with the following fields:

status
: "PENDING" | "DONE" | "ERROR". Indicates the status of the performance test.

msg
: If status is "ERROR", contains information about the error.

test_result
: A dictionary object that contains detailed information about the test run after status is "DONE".

The test_result object contains the following fields:

suite_name
: The name of the performance test that was run, such as filesystem_test_suite.

test_params
: Object containing the parameters the filesystem performance tests ran over.

version
: Version of the result schema for the performance test; the expected value is 1.

suite_status
: Status of the suite of tests. If suite_status.status_code is "ERROR", then suite_status.msg should contain information about the error.

teardown_status
: Status of cleanup after tests. If teardown_status.status_code is "ERROR", then there may be leftover artifacts from the performance test. To clean these artifacts up, navigate to the test_root_path specified in the test_params field and delete the folder path /performance_tests/<test_id>.

start_time
: Starting time of the performance test, represented as a timestamp string.

end_time
: Ending time of the performance test, represented as a timestamp string.

results
: List containing results for each test run.
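Putting the pieces together, the sketch below polls the result endpoint until the run is no longer pending, then prints throughput and latency figures for each test. It assumes token, api_root, and the test_id returned by the start request are in scope, and uses the field names from the sample response above:

import time, requests

headers = {'Authorization': 'Bearer {0}'.format(token)}
url = api_root + '/performance_test/result/' + test_id

# The sample response reports the run state in test_status, while the field
# list documents status; checking both is a conservative choice.
resp = requests.get(url, headers=headers).json()
while 'PENDING' in (resp.get('status'), resp.get('test_status')):
    time.sleep(30)  # arbitrary polling interval
    resp = requests.get(url, headers=headers).json()

result = resp['test_result']
if result['suite_status']['status_code'] != 'OK':
    print('Suite failed: {0}'.format(result['suite_status']['msg']))
else:
    for test in result['results']:
        stats = test['statistics']
        print('{0}: {1} requests/s, median latency {2}s'.format(
            test['test_name'],
            stats['requests_per_second'],
            stats['latencies_stats_seconds']['median']))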
Run database performance tests
Use this API to discover available database performance tests and their descriptions, and to trigger database performance tests.
Discover available database performance tests
Request
import json, requests
headers = {'Authorization': 'Bearer {0}'.format(token)}
resp = requests.get(api_root + '/performance_test/database', headers=headers).json()
Response
If successful:
HTTP STATUS CODE 200
{
  "status": "OK",
  "tests": [
    {"test_name": "perf_test_delete_database_rpc", "test_description": "Invokes a query to delete one row from testing table and meters performance. The query will be executed 'iterations' times"},
    ...,
    {"test_name": "perf_test_update_database_rpc", "test_description": "Invokes a query to update one row into testing table and meters performance. The query will be executed 'iterations' times"}
  ]
}
Start test
Request
import json, requests
headers = {'Authorization': 'Bearer {0}'.format(token)}
args = {
    'api_key': 'ABCDEFGHIJKLMOPQRS',
    'iterations': 1,
    'scan_test_row_count': 100,
    'row_count': 10,
    'tests': ['perf_test_ping_database_rpc'],
}
data = json.dumps(args)
resp = requests.post(api_root + '/performance_test/database', headers=headers, data=data).json()
The body of the request is a JSON object with the following fields:

api_key
[str]: OAuth token used to make API calls during the tests.

iterations
[int]: Number of repetitions for each test case (max: 1,000,000).

scan_test_row_count
[int] (optional): Number of rows inserted during the setup process for SCAN-related tests (default: 1,000; max: 10,000).

row_count
[int] (optional): Number of rows inserted during the setup process for READ/UPDATE-related tests (default: 100; max: 1,000).

tests
[list of str] (optional): Specifies the test cases to run. Valid test names are: 'perf_test_delete_database_rpc', 'perf_test_insert_database_rpc', 'perf_test_insert_large_text_database_rpc', 'perf_test_join_database_rpc', 'perf_test_ping_database_rpc', 'perf_test_read_database_rpc', 'perf_test_read_large_text_database_rpc', 'perf_test_scan_database_rpc', 'perf_test_scan_index_database_rpc', 'perf_test_scan_sorted_database_rpc', 'perf_test_update_database_rpc'.

verify_ssl_certs
[bool] (optional): Enables/disables SSL certificate verification.
It's possible to increase the configuration variables to a level that causes your test to fail. The upper limits noted in the field descriptions might not apply to all configurations and environments; for example, the speed of database operations can vary across database dialects. A smaller number of iterations (<= 1000) is recommended for time-consuming tests such as SCAN or JOIN, while a larger number of iterations can be applied to lightweight tests such as PING or READ, as in the sketch below.
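For example, two request bodies along these lines reflect that guidance. The values are hypothetical and only the fields documented above are used:

# Hypothetical configuration for lightweight tests: a high iteration count
# is acceptable for PING/READ.
light_args = {
    'api_key': 'ABCDEFGHIJKLMOPQRS',
    'iterations': 100000,
    'tests': ['perf_test_ping_database_rpc', 'perf_test_read_database_rpc'],
}

# Hypothetical configuration for time-consuming SCAN/JOIN tests: keep
# iterations at or below 1000.
heavy_args = {
    'api_key': 'ABCDEFGHIJKLMOPQRS',
    'iterations': 1000,
    'scan_test_row_count': 5000,
    'tests': ['perf_test_scan_database_rpc', 'perf_test_join_database_rpc'],
}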
Response
If testing began successfully:
HTTP STATUS CODE 200
{
  "status": "OK",
  "test_id": "perf-test-db-<unique-test-ID>"
}
Get database performance test status
The API used for retrieving the status of a database performance test is the same as the one used for filesystem performance tests.
Request
import json, requests
headers = {'Authorization': 'Bearer {0}'.format(token)}
resp = requests.get(api_root + '/performance_test/result/<test_id>', headers=headers).json()
Response
If successful, the result is similar to the sample below:
{
  "status": "OK",
  "test_status": "DONE",
  "test_result": {
    "suite_name": "database_test_suite",
    "test_params": {
      "tests": ["perf_test_ping_database_rpc"],
      "username": "<user_name>",
      "iterations": 1,
      "scan_test_row_count": 1000,
      "row_count": 100
    },
    "version": 1,
    "suite_status": {
      "status_code": "OK",
      "msg": ""
    },
    "teardown_status": {
      "status_code": "OK",
      "msg": ""
    },
    "start_time": 1667954291.771355,
    "end_time": 1667954294.275358,
    "results": [
      {
        "test_name": "PING RPC",
        "test_description": "Invokes a simple database query and meters performance. The query will be executed 'iterations' times",
        "request_type": "RPC",
        "start_time": "<Start time of this test as a string with ISO 8601 format (UTC timezone)>",
        "test_status": {
          "status_code": "OK",
          "msg": ""
        },
        "teardown_test_status": {
          "status_code": "OK",
          "msg": ""
        },
        "errors": ["<list of observed errors during this test>"],
        "statistics": {
          "num_successes": "<number of successful executions of this operation>",
          "num_errors": "<number of errors during executions of this operation>",
          "num_total": "<total recorded executions of this operation>",
          "requests_per_second": "<operations successfully executed per second>",
          "latencies_stats_seconds": {
            "mean": "<mean latency of successfully executed operations>",
            "median": "<median latency of successfully executed operations>",
            "90th_percentile": "<90th percentile latency of successfully executed operations>",
            "99th_percentile": "<99th percentile latency of successfully executed operations>",
            "75th_percentile": "<75th percentile latency of successfully executed operations>",
            "max": "<max latency of successfully executed operations>",
            "min": "<min latency of successfully executed operations>"
          }
        },
        "end_time": "<End time of this test as a string with ISO 8601 format (UTC timezone)>"
      }
    ],
    "successes": [
      {
        "test": "perf_test_ping_database_rpc"
      }
    ],
    "total": "<total number of tests>",
    "username": "<user_name>"
  }
}
The body of the response is a JSON object with the following fields:

status
: "PENDING" | "DONE" | "ERROR". Indicates the status of the performance test.

msg
: If status is "ERROR", contains information about the error.

test_result
: A dictionary object that contains detailed information about the test run when status is "DONE".

The test_result object contains the following fields:

suite_name
: The name of the performance test that was run, such as database_test_suite.

test_params
: Object containing the parameters of the database performance tests, including tests, iterations, scan_test_row_count, and row_count.

version
: Version of the result schema for the performance test; the expected value is 1.

suite_status
: Status of the suite of tests. If suite_status.status_code is "ERROR", then suite_status.msg contains information about the error.

teardown_status
: Status of cleanup after tests. In database performance tests, teardown_status.status_code will always be "OK".

start_time
: Starting time of the performance test, represented as a Unix timestamp.

end_time
: Ending time of the performance test, represented as a Unix timestamp.

results
: List containing results for each test run.
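Once status is "DONE", the per-test entries under results carry the statistics. A short sketch that summarizes a finished run, assuming resp holds a response shaped like the sample above:

result = resp['test_result']
print('Suite {0}: {1} tests total'.format(result['suite_name'], result['total']))

# Print success/error counts and mean latency for each test that ran.
for test in result['results']:
    stats = test['statistics']
    print('{0}: {1} ok, {2} errors, mean latency {3}s'.format(
        test['test_name'],
        stats['num_successes'],
        stats['num_errors'],
        stats['latencies_stats_seconds']['mean']))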