Class SarifToLLMForMultiOutcomeCodemod

java.lang.Object
    io.codemodder.RawFileChanger
        io.codemodder.SarifPluginRawFileChanger
            io.codemodder.plugins.llm.SarifPluginLLMCodemod
                io.codemodder.plugins.llm.SarifToLLMForMultiOutcomeCodemod

All Implemented Interfaces:
io.codemodder.CodeChanger

public abstract class SarifToLLMForMultiOutcomeCodemod extends SarifPluginLLMCodemod
An extension of SarifPluginRawFileChanger that uses large language models (LLMs) to perform analysis and categorize findings in order to drive different potential code changes.

The inspiration for this type was the "remediate something found by tool X" use case. For example, if a tool cites a vulnerability on a given line, we may want to take any of the following actions:

  • Fix the identified issue by doing A
  • Fix the identified issue by doing B
  • Add a suppression comment to the given line since it's likely a false positive
  • Refactor the code so it doesn't trip the rule anymore, without actually "fixing" it
  • Do nothing, since the LLM can't determine which of the above cases applies

To accomplish that, we need the analysis to "bucket" the code into one of the above categories.
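
For illustration, here is a minimal sketch of a subclass. It is hypothetical: the class name and prompt text are invented for a notional SQL injection rule, and the constructor is elided because its required super(...) arguments are documented under Constructor Details.

    import io.codemodder.plugins.llm.SarifToLLMForMultiOutcomeCodemod;

    // Hypothetical subclass; the name and prompt text are illustrative only.
    public final class SqlInjectionMultiOutcomeCodemod
        extends SarifToLLMForMultiOutcomeCodemod {

      // Constructor elided: see Constructor Details for the required
      // super(...) arguments.

      @Override
      protected String getThreatPrompt() {
        // One clause per bucket so the model can categorize the finding
        // (see the getThreatPrompt section below for a fuller example).
        return "A possible SQL injection was reported on the cited line; "
            + "fix it, suppress it as a likely false positive, refactor "
            + "around the rule, or do nothing if you cannot tell which "
            + "case applies.";
      }
    }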

  • Constructor Details

  • Method Details

    • onFileFound

      public io.codemodder.CodemodFileScanningResult onFileFound(io.codemodder.CodemodInvocationContext context, List<com.contrastsecurity.sarif.Result> results)
      Specified by:
      onFileFound in class io.codemodder.SarifPluginRawFileChanger
    • createCodemodChange

      protected io.codemodder.CodemodChange createCodemodChange(com.contrastsecurity.sarif.Result result, int line, String fixDescription)
      Creates a CodemodChange from the given code change data.
      Parameters:
      result - the SARIF result that prompted the change
      line - the line number of the change
      fixDescription - the description of the change
      Returns:
      the resulting CodemodChange
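
      For example, a subclass might override this hook to decorate the description before delegating to the default behavior. This sketch is hypothetical; it relies only on the signature documented above and the standard SARIF Result.getRuleId() accessor.

          @Override
          protected CodemodChange createCodemodChange(
              final Result result, final int line, final String fixDescription) {
            // Prefix the description with the rule that produced the finding,
            // then let the superclass build the change as usual.
            return super.createCodemodChange(
                result, line, "[" + result.getRuleId() + "] " + fixDescription);
          }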
    • getThreatPrompt

      protected abstract String getThreatPrompt()
      Instructs the LLM on how to assess the risk of the threat.
      Returns:
      The prompt.
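
      As a hedged example, a prompt for a hypothetical SQL injection rule might spell out the criteria that separate the outcome buckets:

          @Override
          protected String getThreatPrompt() {
            // Hypothetical prompt text; the key is to give the model concrete,
            // mutually exclusive criteria for each outcome bucket.
            return """
                The tool reports a possible SQL injection where input is
                concatenated into a query. If untrusted input reaches the
                query, fix it by parameterizing. If every concatenated value
                is a compile-time constant, suppress it as a likely false
                positive. If you cannot establish either, do nothing.
                """;
          }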