TESTed is mainly focused on dynamic analysis: it runs a unit test on the submitted code (a function or method) or executes the entire submission, and compares the generated output with the corresponding expected output. However, many judges also support static analysis, which only inspects the submitted source code without running it. Currently, TESTed only supports two built-in forms of static analysis:
compilation: compilation errors are reported either globally or per unit
linting: linting output is reported as source code annotations (see the sketch below)
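To make the linting route concrete, here is a minimal sketch of one generic way to turn linter output into annotation-like records. This is not TESTed's actual implementation: the use of pylint, the `lint_to_annotations` helper and the `(line, severity, message)` tuple shape are assumptions for illustration only.

```python
import json
import subprocess

def lint_to_annotations(path: str) -> list[tuple[int, str, str]]:
    """Run pylint on a submission and map its JSON output to
    (line, severity, message) records, roughly the kind of data a judge
    could attach to the source code as annotations.

    Hypothetical helper; TESTed's real linting pipeline may differ.
    """
    # pylint uses a bit-encoded exit status, so a non-zero code is not an error here
    result = subprocess.run(
        ["pylint", "--output-format=json", path],
        capture_output=True, text=True,
    )
    messages = json.loads(result.stdout or "[]")
    return [(m["line"], m["type"], m["message"]) for m in messages]
```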
Since there are many other forms of static analysis, we might add support for custom static analysis to TESTed. Here are some issues that need to be resolved to support this:
perform custom static analysis globally (once per submission), per unit (reported per unit) and/or per context (reported per context); there seems to be no value in performing custom static analysis below the context level (individual testcases or individual tests)
language-independent vs language-specific static analysis: since the input is the source code of the submission, we could initially provide language-specific analysis; language-independent analysis would require a generic API that needs to be implemented for each supported programming language and currently seems out of scope
report results of static analysis in feedback: static analysis could be a simple binary check (e.g. whether a function uses recursion; see the sketch after this list) that could be reported as a separate test, or have more elaborate output (software quality metrics, test coverage (#339, "Add support for coverage information"), a full HTML report); the results of static analysis can also be reported as source code annotations (i.e. as an extension to the standard linter)
support custom static analysis from the DSL
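As an illustration of the binary check mentioned above, the sketch below shows a language-specific check for Python submissions. The function name `uses_recursion` and the idea of reporting its boolean result as a separate test are assumptions; nothing like this exists in TESTed yet.

```python
import ast

def uses_recursion(source: str) -> bool:
    """Return True if any function in the submission calls itself by name.

    Hypothetical check for illustration: it only detects direct self-calls,
    not mutual recursion or calls through aliases. The boolean result could
    be reported as a single pass/fail test in the feedback.
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for inner in ast.walk(node):
                if (isinstance(inner, ast.Call)
                        and isinstance(inner.func, ast.Name)
                        and inner.func.id == node.name):
                    return True
    return False
```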