
Support custom static analysis #392

Open
pdawyndt opened this issue Jul 14, 2023 · 2 comments
Labels
core Core of TESTed (including test suite specification) enhancement New feature or request

Comments

@pdawyndt
Contributor

pdawyndt commented Jul 14, 2023

TESTed is mainly focused on dynamic analysis: it runs a unit test on the submitted code (a function or method), or executes the entire submission, and compares the generated outputs with the corresponding expected outputs. However, many judges also support static analysis, which only inspects the submitted source code without running it. Currently, TESTed supports only two built-in forms of static analysis:

  • compilation: compilation errors are reported either globally or per unit
  • linting: linting output is reported as source code annotations

Since there are many other forms of static analysis, we might add support for custom static analysis to TESTed. Here are some issues that need to be resolved to support custom static analysis in TESTed:

  • perform custom static analysis globally (once per submission), per unit (reported per unit) and/or per context (reported per context); there seems to be no value in performing custom static analysis below the context level (per individual testcase or individual test)
  • language-independent vs. language-specific static analysis: since the input is the source code of the submission, we could initially provide language-specific analysis; language-independent analysis would require a generic API implemented for each supported programming language, which currently seems out of scope
  • report the results of static analysis in the feedback: static analysis could be a simple binary check (e.g. the function uses recursion) that is reported as a separate test, or produce more elaborate output (software quality metrics, test coverage (Add support for coverage information #339), a full HTML report); the results of static analysis can also be reported as source code annotations (i.e. as an extension of the standard linter)
  • support custom static analysis from DSL
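As a concrete illustration of the "simple binary check" mentioned above, a recursion check can be written against Python's standard `ast` module. This is a hypothetical helper sketched for this issue, not part of TESTed; the name `uses_recursion` and the sample submission are assumptions.

```python
import ast

def uses_recursion(source: str, function_name: str) -> bool:
    """Return True if the named function calls itself anywhere in its body.

    A minimal sketch: it only detects direct recursion via a plain
    name call (not mutual recursion or calls through attributes).
    """
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name == function_name:
            return any(
                isinstance(call, ast.Call)
                and isinstance(call.func, ast.Name)
                and call.func.id == function_name
                for call in ast.walk(node)
            )
    return False

submission = """
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)
"""
print(uses_recursion(submission, "factorial"))  # True
```

Such a check never executes the submission, so it fits the static-analysis model; its boolean result could be reported as a separate test, as suggested above.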
@pdawyndt pdawyndt added the enhancement New feature or request label Jul 14, 2023
@pdawyndt pdawyndt changed the title Support static analysis Support custom static analysis Jul 14, 2023
@niknetniko niknetniko added the core Core of TESTed (including test suite specification) label Jul 24, 2023
@niknetniko
Member

Some use cases for this (that are currently done with the Python judge) are:

  • checking type hints
  • checking the use of a specific variable
  • checking whether a dictionary is used anywhere
  • checking whether a function is used inside another function
  • checking that no list is used
  • checking whether a list comprehension is used
  • checking whether the math library is used
  • counting how many functions are used
  • counting how many if/elif statements are used
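Several of these use cases are countable AST walks. A minimal sketch, assuming a hypothetical helper name `count_constructs` (not an existing TESTed or Python-judge API), using only the standard `ast` module:

```python
import ast

def count_constructs(source: str) -> dict:
    """Count a few constructs in a submission without executing it.

    Note: an `elif` appears in the AST as a nested ast.If node,
    so it is counted together with plain `if` statements.
    """
    tree = ast.parse(source)
    counts = {"functions": 0, "if_elif": 0, "list_comprehensions": 0}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            counts["functions"] += 1
        elif isinstance(node, ast.If):
            counts["if_elif"] += 1
        elif isinstance(node, ast.ListComp):
            counts["list_comprehensions"] += 1
    return counts

submission = """
def f(x):
    if x > 0:
        return [i * i for i in range(x)]
    elif x < 0:
        return []
    return None
"""
print(count_constructs(submission))
```

The same walk pattern extends to the other checks (dictionary usage via `ast.Dict`, imports of `math` via `ast.Import`/`ast.ImportFrom`, and so on).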

@niknetniko niknetniko removed their assignment May 2, 2024
@BTWS2

BTWS2 commented May 3, 2024

Some more use cases:

  • checking how many loops are used
  • checking whether a while loop is used
  • checking whether a for loop is used
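These loop checks follow the same pattern; a minimal sketch with the standard `ast` module (the helper name `loop_usage` is an assumption, not an existing API):

```python
import ast

def loop_usage(source: str) -> dict:
    """Count for- and while-loops in a submission without executing it."""
    tree = ast.parse(source)
    return {
        "for": sum(isinstance(n, (ast.For, ast.AsyncFor)) for n in ast.walk(tree)),
        "while": sum(isinstance(n, ast.While) for n in ast.walk(tree)),
    }

submission = """
total = 0
for i in range(10):
    total += i
while total > 0:
    total -= 1
"""
print(loop_usage(submission))  # {'for': 1, 'while': 1}
```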
