From 9197552aac613a8e139f0b16733abce55d23885d Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Mon, 18 Sep 2023 15:49:15 +0000 Subject: [PATCH] build based on 68a0e39 --- dev/config/index.html | 10 +++--- dev/generated-plugin-api/index.html | 16 +++++----- .../dispatch_analysis/index.html | 2 +- .../find_unstable_api/index.html | 2 +- dev/index.html | 4 +-- dev/internals/index.html | 8 ++--- dev/jetanalysis/index.html | 32 +++++++++---------- dev/optanalysis/index.html | 10 +++--- dev/search/index.html | 2 +- dev/tutorial/index.html | 2 +- 10 files changed, 44 insertions(+), 44 deletions(-) diff --git a/dev/config/index.html b/dev/config/index.html index cf3bd1f38..0b10a9044 100644 --- a/dev/config/index.html +++ b/dev/config/index.html @@ -49,7 +49,7 @@ ═════ 1 possible error found ═════ ┌ foo(a::String) @ Main ./REPL[14]:3 │ `Main.undefsum` is not defined: r2 = undefsum(a::String) -└────────────────────

source

Configurations for Top-level Analysis

JET.ToplevelConfigType

Configurations for top-level analysis. These configurations will be active for all the top-level entries explained in the top-level analysis entry points section.


  • context::Module = Main
    The module context in which the top-level execution will be simulated.

    This configuration can be useful when you just want to analyze a submodule, without starting the entire analysis from the root module. For example, we can analyze Base.Math as follows:

    julia> report_file(JET.fullbasepath("math.jl");
                        context = Base,                  # `Base.Math`'s root module
                        analyze_from_definitions = true, # there're only definitions in `Base`
                        )

    Note that this module context will be virtualized by default so that JET can repeat analysis in the same session without running into "invalid redefinition of constant ..." errors etc. In other words, JET virtualizes the module context given as context and makes sure the original module context isn't polluted by JET.


  • target_defined_modules::Bool = false
    If true, automatically sets the target_modules configuration so that JET filters out errors that are reported within modules that JET doesn't analyze directly.

  • analyze_from_definitions::Bool = false
    If true, JET will start analysis using signatures of top-level definitions (e.g. method signatures), after the top-level interpretation has been done (unless a serious top-level error has happened, such as an error within a macro expansion).

    This can be handy when you want to analyze a package, which usually contains only definitions but not their usages (i.e. top-level callsites). With this option, JET can enter analysis just with method or type definitions, and we don't need to pass a file that uses the target package.

    Warning

    This feature is very experimental at this point, and you may face lots of false positive errors, especially when trying to analyze a big package with lots of dependencies. If a file that contains top-level callsites (e.g. test/runtests.jl) is available, JET analysis using the file is generally preferred, since analysis entered from concrete call sites will produce more accurate results than analysis entered from (maybe not concrete-typed) method signatures.

    Also see: report_file, watch_file


  • concretization_patterns::Vector{Any} = Any[]
    Specifies a customized top-level code concretization strategy.

    When analyzing top-level code, JET first splits the entire code into appropriate units of code (i.e. "code blocks"), and then iterates a virtual top-level code execution process on each code block in order to simulate Julia's top-level code execution. In this virtual code execution, JET selectively interprets "top-level definitions" (like a function definition), while it tries to avoid executing any other parts of the code, including function calls that typically do the main computational work, leaving them to be analyzed by the succeeding abstract-interpretation-based analysis.

    However, JET currently doesn't track "inter-block" code dependencies, and therefore the selective interpretation of top-level definitions may fail when it needs to use global bindings defined in other code blocks that have not been selected and actually interpreted (i.e. "concretized") but were left for abstract interpretation (i.e. "abstracted").

    For example, the issue would happen if the expansion of a macro uses a global variable, e.g.:

    test/fixtures/concretization_patterns.jl

    # JET doesn't concretize this by default, but just analyzes its type
    @@ -108,7 +108,7 @@
     [toplevel-debug] concretization plan at test/fixtures/concretization_patterns.jl:13:
     1 f 1 ─ %1 = foo(10)
     2 f └──      return %1
 [toplevel-debug]  exited from test/fixtures/concretization_patterns.jl (took 0.032 sec)

Also see: the toplevel_logger section below, virtual_process.

Note

report_package automatically sets this configuration as

concretization_patterns = [:(x_)]

meaning that it will concretize all top-level code included in a package being analyzed.


  • toplevel_logger::Union{Nothing,IO} = nothing
    If an IO object is given, JET's top-level analysis will be logged to it. The logging level can be specified with the :JET_LOGGER_LEVEL IO property. Currently supported logging levels are 0 ("info" level, the default) and 1 ("debug" level).

    Examples:

    • logs into stdout
    julia> report_file(filename; toplevel_logger = stdout)
    • logs into io::IOBuffer with "debug" logger level
    julia> report_file(filename; toplevel_logger = IOContext(io, :JET_LOGGER_LEVEL => 1));

  • virtualize::Bool = true
    When true, JET will virtualize the given root module context.

    This configuration is supposed to be used only for testing or debugging. See virtualize_module_context for the internal.
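
As a combined, hypothetical example ("src/MyPkg.jl" is a placeholder path), several of the configurations above can be passed together as keyword arguments to any of the top-level entry points:

    julia> report_file("src/MyPkg.jl";
                       analyze_from_definitions = true,
                       target_defined_modules = true,
                       toplevel_logger = stdout)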


source

Configurations for Abstract Interpretation

JET.JETInferenceParamsFunction

Configurations for abstract interpretation performed by JET. These configurations will be active for all the entries.

You can configure any of the keyword parameters that Core.Compiler.InferenceParams or Core.Compiler.OptimizationParams can take, e.g. max_methods:

julia> methods(==, (Any,Nothing))
 # 3 methods for generic function "==" from Base:
  [1] ==(::Missing, ::Any)
      @ missing.jl:75
@@ -132,8 +132,8 @@
            # and thus we won't get any error report
            x == nothing ? :nothing : :some
        end
No errors detected

See also Core.Compiler.InferenceParams and Core.Compiler.OptimizationParams.

Listed below is a selection of those parameters that can have a potent influence on JET analysis.


  • ipo_constant_propagation::Bool = true
    Enables constant propagation in abstract interpretation. It is highly recommended that you keep this configuration true to get reasonable analysis results, because constant propagation can cut off lots of false-positive erroneous code paths and thus produce more accurate and useful analysis results.

  • aggressive_constant_propagation::Bool = true
    If true, JET will try to do constant propagation more "aggressively". It can lead to more accurate analysis as explained above, but it may also incur a performance cost. JET enables this configuration by default to get more accurate analysis results.

  • unoptimize_throw_blocks::Bool = false
    Turn this on to skip analysis of code blocks that will eventually lead to a throw call. This configuration improves analysis performance, but it's better left off to get a "proper" analysis result, since there may be other errors even in those "throw blocks".
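
Any of these parameters can be passed directly as keyword arguments to an analysis entry point. A minimal sketch, where myfunc is a placeholder function and the keywords are the configurations documented above:

    julia> report_call(myfunc, (Vector{Any},);
                       aggressive_constant_propagation = true,
                       unoptimize_throw_blocks = false)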

source
Core.Compiler.InferenceParamsType
inf_params::InferenceParams

Parameters that control abstract interpretation-based type inference operation.


  • inf_params.max_methods::Int = 3
    Type inference gives up analysis on a call when there are more than max_methods matching methods. This trades off between compiler latency and generated code performance. Typically, considering many methods means spending lots of time obtaining poor type information, so this option should be kept low. Base.Experimental.@max_methods allows more fine-grained control over this configuration on a per-module or per-method basis.

  • inf_params.max_union_splitting::Int = 4
    Specifies the maximum number of union-tuples to swap or expand before computing the set of matching methods or conditional types.

  • inf_params.max_apply_union_enum::Int = 8
    Specifies the maximum number of union-tuples to swap or expand when inferring a call to Core._apply_iterate.

  • inf_params.max_tuple_splat::Int = 32
    When attempting to infer a call to Core._apply_iterate, abort the analysis if the tuple contains more than this many elements.

  • inf_params.tuple_complexity_limit_depth::Int = 3
    Specifies the maximum depth of large tuple type that can appear as specialized method signature when inferring a recursive call graph.

  • inf_params.ipo_constant_propagation::Bool = true
    If false, disables analysis with extended lattice information, i.e. disables concrete evaluation, semi-concrete interpretation and constant propagation entirely. Base.@constprop :none allows more fine-grained control over this configuration on a per-method basis.

  • inf_params.aggressive_constant_propagation::Bool = false
    If true, forces constant propagation on any methods when any extended lattice information is available. Base.@constprop :aggressive allows more fine-grained control over this configuration on a per-method basis.

  • inf_params.unoptimize_throw_blocks::Bool = true
    If true, skips inferring calls that are in a block that is known to throw. It may improve the compiler latency without sacrificing the runtime performance in common situations.

  • inf_params.assume_bindings_static::Bool = false
    If true, assumes that no new bindings will be added, i.e. a non-existing binding at inference time can be assumed to always not exist at runtime (and thus e.g. any access to it will throw). Defaults to false since this assumption does not hold in Julia's semantics for native code execution.

source
Core.Compiler.OptimizationParamsType
opt_params::OptimizationParams

Parameters that control optimizer operation.


  • opt_params.inlining::Bool = inlining_enabled()
    Controls whether or not inlining is enabled.

  • opt_params.inline_cost_threshold::Int = 100
    Specifies the number of CPU cycles beyond which it's not worth inlining.

  • opt_params.inline_nonleaf_penalty::Int = 1000
    Specifies the penalty cost for a dynamic dispatch.

  • opt_params.inline_tupleret_bonus::Int = 250
    Specifies the extra inlining willingness for a method specialization with non-concrete tuple return types (in hopes of splitting it up). opt_params.inline_tupleret_bonus will be added to opt_params.inline_cost_threshold when making an inlining decision.


  • opt_params.max_tuple_splat::Int = 32
    When attempting to inline Core._apply_iterate, abort the optimization if the tuple contains more than this many elements.

  • opt_params.compilesig_invokes::Bool = true
    If true, gives the inliner license to change which MethodInstance to invoke when generating :invoke expression based on the @nospecialize annotation, in order to avoid over-specialization.

  • opt_params.assume_fatal_throw::Bool = false
    If true, gives the optimizer license to assume that any throw is fatal and thus the state after a throw is not externally observable. In particular, this gives the optimizer license to move side effects (that are proven not observed within a particular code path) across a throwing call. Defaults to false.

source
JET.PrintConfigType

Configurations for report printing. The configurations below will be active whenever showing JET's analysis result within REPL.


  • fullpath::Bool = false
    Controls whether or not to expand a file path to a full path when printing the analyzed call stack. Note that paths of Julia's Base files will also be expanded when set to true.

  • print_toplevel_success::Bool = false
    If true, prints a message when no top-level errors are found.

  • print_inference_success::Bool = true
    If true, prints a message when no errors are found in the abstract-interpretation-based analysis pass.
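
These printing configurations are passed as keyword arguments like any other configuration; in this sketch "myfile.jl" is a placeholder file name:

    julia> report_file("myfile.jl"; fullpath = true, print_toplevel_success = true)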

source

Configurations for VSCode Integration

JET.VSCode.VSCodeConfigType

Configurations for the VSCode integration. These configurations are active only when used in the integrated Julia REPL.


  • vscode_console_output::Union{Nothing,IO} = stdout
    JET will show the analysis result in VSCode's "PROBLEMS" pane and inline annotations. If vscode_console_output::IO is specified, JET will also print the result to the specified output stream in addition to showing it in the integrated views. When nothing, the result will only be shown in the integrated views.
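
For example, a sketch with a placeholder file name (assuming the integrated Julia REPL): passing nothing suppresses the extra console printing, so the result appears only in the integrated views.

    julia> report_file("myfile.jl"; vscode_console_output = nothing)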

source

Watch Configurations

JET.WatchConfigType

Configurations for "watch" mode. The configurations will only be active when used with watch_file.


  • revise_all::Bool = true
    Redirected to Revise.entr's all keyword argument. When set to true, JET will retrigger analysis as soon as code updates are detected in any module tracked by Revise. Currently, when encountering import/using statements, JET won't analyze them, but will just load the modules as in usual execution (this also means Revise will track those modules). So if you're editing both files analyzed by JET and modules that are used within those files, this configuration should be enabled.

  • revise_modules = nothing
    Redirected to Revise.entr's modules positional argument. If an iterator of Modules is given, JET will retrigger analysis whenever code in those modules is updated.

    Tip

    This configuration is useful when you're also editing files that are not tracked by Revise, e.g. editing functions defined in Base:

    # re-perform analysis when you make a change to `Base`
    julia> watch_file(yourfile; revise_modules = [Base])
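
    A related sketch with a placeholder file name: keeping the default revise_all = true retriggers analysis whenever any Revise-tracked module changes.

    julia> watch_file("src/MyPkg.jl"; revise_all = true)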

source

Configuration File

JET.parse_config_fileFunction

JET.jl offers .prettierrc-style configuration file support. This means you can use a .JET.toml configuration file to specify any of the configurations explained above and share it with others.

When report_file or watch_file is called, it will look for .JET.toml in the directory of the given file, and search up the file tree until a JET configuration file is (or isn't) found. When found, the configurations specified in the file will be applied.

A configuration file can specify configurations like:

aggressive_constant_propagation = false # turn off aggressive constant propagation
 ... # other configurations

Note that the following configurations should be string(s) of valid Julia code:

  • context: a string of Julia code, which can be parsed and evaluated into a Module
  • concretization_patterns: a vector of strings of Julia code, each of which can be parsed into a Julia expression pattern expected by the MacroTools.@capture macro.
  • toplevel_logger: a string of Julia code, which can be parsed and evaluated into a Union{IO,Nothing}

E.g. the configurations below are equivalent:

  • configurations via keyword arguments
    report_file(somefile;
                 concretization_patterns = [:(const GLOBAL_CODE_STORE = x_)],
                 toplevel_logger = IOContext(open("toplevel.txt", "w"), :JET_LOGGER_LEVEL => 1))
  • configurations via a configuration file
    # supposed to concretize `const GLOBAL_CODE_STORE = Dict()` in test/fixtures/concretization_patterns.jl
    @@ -141,4 +141,4 @@
     
     # logs toplevel analysis into toplevel.txt with debug logging level
     toplevel_logger = """IOContext(open("toplevel.txt", "w"), :JET_LOGGER_LEVEL => 1)"""
Note

Configurations specified as keyword arguments have precedence over those specified via a configuration file.

source diff --git a/dev/generated-plugin-api/index.html b/dev/generated-plugin-api/index.html index a73ea2e35..81d06f714 100644 --- a/dev/generated-plugin-api/index.html +++ b/dev/generated-plugin-api/index.html @@ -1,5 +1,5 @@ -API · JET.jl


AbstractAnalyzer Framework

JET offers an infrastructure to implement a "plugin" code analyzer. Actually, JET's default error analyzer is one specific instance of such a plugin analyzer built on top of the framework.

In this documentation we will try to elaborate on the framework APIs and showcase example analyzers.

Warning

The APIs described on this page are very experimental and subject to change, and this documentation is still very much a work in progress.

Interfaces

JET.AbstractAnalyzerType
abstract type AbstractAnalyzer <: AbstractInterpreter end

An interface type of analyzers that are built on top of JET's analyzer framework.

When a new type NewAnalyzer implements the AbstractAnalyzer interface, it should be declared as a subtype of AbstractAnalyzer, and is expected to implement the following interfaces:


  1. AnalyzerState(analyzer::NewAnalyzer) -> AnalyzerState:
    Returns the AnalyzerState for analyzer::NewAnalyzer.

  2. AbstractAnalyzer(analyzer::NewAnalyzer, state::AnalyzerState) -> NewAnalyzer:
    Constructs a new NewAnalyzer instance in the middle of JET's top-level analysis or abstract interpretation, given the previous analyzer::NewAnalyzer and state::AnalyzerState.

  3. ReportPass(analyzer::NewAnalyzer) -> ReportPass:
    Returns ReportPass used for analyzer::NewAnalyzer.

  4. AnalysisCache(analyzer::NewAnalyzer) -> analysis_cache::AnalysisCache:
    Returns code cache used for analyzer::NewAnalyzer.

See also AnalyzerState, ReportPass and AnalysisCache.

Example

JET.jl defines its default error analyzer JETAnalyzer <: AbstractAnalyzer as the following (modified a bit for the sake of simplicity):

# the default error analyzer for JET.jl
 struct JETAnalyzer{RP<:ReportPass} <: AbstractAnalyzer
     state::AnalyzerState
     analysis_cache::AnalysisCache
@@ -10,14 +10,14 @@
 AnalyzerState(analyzer::JETAnalyzer) = analyzer.state
 AbstractAnalyzer(analyzer::JETAnalyzer, state::AnalyzerState) = JETAnalyzer(ReportPass(analyzer), state)
 ReportPass(analyzer::JETAnalyzer) = analyzer.report_pass
 AnalysisCache(analyzer::JETAnalyzer) = analyzer.analysis_cache
source
JET.AnalyzerStateType
mutable struct AnalyzerState
     ...
 end

The mutable object that holds various states that are consumed by all AbstractAnalyzers.


AnalyzerState(analyzer::AbstractAnalyzer) -> AnalyzerState

If NewAnalyzer implements the AbstractAnalyzer interface, NewAnalyzer should implement this AnalyzerState(analyzer::NewAnalyzer) -> AnalyzerState interface.

A new AnalyzerState is supposed to be constructed using the general configurations passed as keyword arguments jetconfigs of the NewAnalyzer(; jetconfigs...) constructor, and the constructed AnalyzerState is usually kept within NewAnalyzer itself:

function NewAnalyzer(world::UInt=Base.get_world_counter(); jetconfigs...)
     ...
     state = AnalyzerState(world; jetconfigs...)
     return NewAnalyzer(..., state)
 end
AnalyzerState(analyzer::NewAnalyzer) = analyzer.state
source
JET.ReportPassType
abstract type ReportPass end

An interface type that represents AbstractAnalyzer's report pass. analyzer::AbstractAnalyzer injects report passes using the (::ReportPass)(::Type{InferenceErrorReport}, ::AbstractAnalyzer, state, ...) interface, which provides a flexible and efficient layer to configure the analysis done by AbstractAnalyzer.


ReportPass(analyzer::AbstractAnalyzer) -> ReportPass

If NewAnalyzer implements the AbstractAnalyzer interface, NewAnalyzer should implement this ReportPass(analyzer::NewAnalyzer) -> ReportPass interface.

ReportPass allows NewAnalyzer to provide a very flexible configuration layer for NewAnalyzer's analysis; a user can define their own ReportPass to control how NewAnalyzer collects error reports while still using the analysis routine implemented by NewAnalyzer.

Example

For example, JETAnalyzer accepts a custom ReportPass passed as part of the general configurations (see the documentation of AbstractAnalyzer for an example implementation). And we can set up a custom report pass IgnoreAllExceptGlobalUndefVar that ignores all the reports otherwise collected by JETAnalyzer except UndefVarErrorReport:

# custom report pass that ignores all the reports except `UndefVarErrorReport`
 struct IgnoreAllExceptGlobalUndefVar <: ReportPass end
 
 # ignores all the reports analyzed by `JETAnalyzer`
@@ -36,13 +36,13 @@
     else
         return undef_global_error() # "`undefvar` is not defined" error report will be reported
     end
end
source
JET.AnalysisCacheType
AnalysisCache

JET's internal representation of a global analysis cache.


AnalysisCache(analyzer::AbstractAnalyzer) -> analysis_cache::AnalysisCache

Returns AnalysisCache for this analyzer::AbstractAnalyzer. AbstractAnalyzer instances can share the same cache if they perform the same analysis, otherwise their cache should be separated.

source
JET.valid_configurationsFunction
valid_configurations(analyzer::AbstractAnalyzer) -> names or nothing

Returns a set of names that are valid as a configuration for analyzer. names should be an iterator of Symbol. No validations are performed if nothing is returned.
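
A minimal sketch, assuming a hypothetical MyAnalyzer <: AbstractAnalyzer that only accepts two configuration names (a tuple of Symbols is a valid iterator of Symbol):

    JET.valid_configurations(::MyAnalyzer) = (:report_pass, :target_modules)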

source
JET.aggregation_policyFunction
aggregation_policy(analyzer::AbstractAnalyzer)

Defines how analyzer aggregates InferenceErrorReports. Defaults to default_aggregation_policy.


default_aggregation_policy(report::InferenceErrorReport) -> DefaultReportIdentity

Returns the default identity of report::InferenceErrorReport, where DefaultReportIdentity aggregates reports based on the "error location" of each report. DefaultReportIdentity aggregates InferenceErrorReports aggressively in the sense that it ignores the identity of the error point's MethodInstance, under the assumption that errors are identical as long as they're collected at the same file and line.

source
JET.VSCode.vscode_diagnostics_orderFunction
vscode_diagnostics_order(analyzer::AbstractAnalyzer) -> Bool

If true (the default), a diagnostic will be reported at the entry site. Otherwise it's reported at the error point.

source
JET.InferenceErrorReportType
InferenceErrorReport

An interface type of error reports that JET collects by abstract interpretation. In order for R<:InferenceErrorReport to implement the interface, it should satisfy the following requirements:

R<:InferenceErrorReport is supposed to be constructed using the following constructor

R(::AbstractAnalyzer, state, spec_args...) -> R

where state can be either of:

  • state::Tuple{Union{Core.Compiler.InferenceState, Core.Compiler.OptimizationState}, Int64}: a state with the current program counter specified
  • state::InferenceState: a state with the current program counter set to state.currpc
  • state::InferenceResult: a state with the current program counter unknown
  • state::MethodInstance: a state with the current program counter unknown

See also: @jetreport, VirtualStackTrace, VirtualFrame

source
JET.copy_reportFunction
copy_report(report::R) where R<:InferenceErrorReport -> new::R

Returns a new report new::R, which should be identical to the original report::R, except that new.vst is copied from report.vst so that further modifications to report.vst that may happen during later abstract interpretation don't affect new.vst.

source
JET.print_report_messageFunction
print_report_message(io::IO, report::R) where R<:InferenceErrorReport

Prints to io and describes why report is reported.

source
JET.print_signatureFunction
print_signature(::R) where R<:InferenceErrorReport -> Bool

Configures whether or not to print the report signature when printing R (defaults to true).

source
JET.report_colorFunction
report_color(::R) where R<:InferenceErrorReport -> Symbol

Configures the color for R (defaults to :red).
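
A hedged sketch for a hypothetical report type MyReport <: InferenceErrorReport, overloading the two display hooks documented above:

    JET.print_signature(::MyReport) = false  # don't print the report signature
    JET.report_color(::MyReport) = :yellow   # print this report in yellow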

source
JET.analyze_and_report_call!Function
analyze_and_report_call!(analyzer::AbstractAnalyzer, f, [types]; jetconfigs...) -> JETCallResult
 analyze_and_report_call!(analyzer::AbstractAnalyzer, tt::Type{<:Tuple}; jetconfigs...) -> JETCallResult
 analyze_and_report_call!(analyzer::AbstractAnalyzer, mi::MethodInstance; jetconfigs...) -> JETCallResult

A generic entry point to analyze a function call with AbstractAnalyzer. Finally returns the analysis result as JETCallResult. Note that this is intended to be used by developers of AbstractAnalyzer only. General users should use high-level entry points like report_call and report_opt.
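
A typical (hypothetical) wrapper: a plugin defines its own user-facing entry point that constructs its analyzer from the given configurations and forwards to this function. MyAnalyzer and report_my_analysis are placeholders; the NewAnalyzer(; jetconfigs...) construction pattern is described in the AnalyzerState documentation above.

    function report_my_analysis(f, argtypes = Tuple{}; jetconfigs...)
        analyzer = MyAnalyzer(; jetconfigs...)  # hypothetical AbstractAnalyzer
        return analyze_and_report_call!(analyzer, f, argtypes)
    end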

source
JET.call_test_exFunction
call_test_ex(funcname::Symbol, testname::Symbol, ex0, __module__, __source__)

An internal utility function to implement a @test_call-like macro. See the implementation of @test_call.

source
JET.func_testFunction
func_test(func, testname::Symbol, args...; jetconfigs...)

An internal utility function to implement a test_call-like function. See the implementation of test_call.

source
JET.analyze_and_report_file!Function
analyze_and_report_file!(analyzer::AbstractAnalyzer, filename::AbstractString; jetconfigs...) -> JETToplevelResult

A generic entry point to analyze a file with AbstractAnalyzer. Finally returns the analysis result as JETToplevelResult. Note that this is intended to be used by developers of AbstractAnalyzer only. General users should use high-level entry points like report_file.

source
JET.analyze_and_report_package!Function
analyze_and_report_package!(analyzer::AbstractAnalyzer,
                             package::Union{AbstractString,Module,Nothing} = nothing;
                            jetconfigs...) -> JETToplevelResult

A generic entry point to analyze a package with AbstractAnalyzer. Finally returns the analysis result as JETToplevelResult. Note that this is intended to be used by developers of AbstractAnalyzer only. General users should use high-level entry points like report_package.

source
JET.analyze_and_report_text!Function
analyze_and_report_text!(analyzer::AbstractAnalyzer, text::AbstractString,
                          filename::AbstractString = "top-level";
                         jetconfigs...) -> JETToplevelResult

A generic entry point to analyze a top-level code with AbstractAnalyzer. Finally returns the analysis result as JETToplevelResult. Note that this is intended to be used by developers of AbstractAnalyzer only. General users should use high-level entry points like report_text.

source
JET.@jetreportMacro
@jetreport struct T <: InferenceErrorReport
     ...
 end

A utility macro to define InferenceErrorReport. It can be very tedious to manually satisfy the InferenceErrorReport interfaces. JET internally uses this @jetreport utility macro, which takes a struct definition of InferenceErrorReport without the required fields specified, and automatically defines the struct as well as constructor definitions. If the report T <: InferenceErrorReport is defined using @jetreport, then T just needs to implement the print_report_message interface.

For example, JETAnalyzer's MethodErrorReport is defined as follows:

@jetreport struct MethodErrorReport <: InferenceErrorReport
     @nospecialize t # ::Union{Type, Vector{Type}}
@@ -61,4 +61,4 @@
         end
         print(io, " (", nts, '/', union_split, " union split)")
     end
end

and is constructed like MethodErrorReport(sv::InferenceState, atype::Any, 0).

source

Examples

diff --git a/dev/generated-plugin-examples/dispatch_analysis/index.html b/dev/generated-plugin-examples/dispatch_analysis/index.html index 6f02f903e..e3f025411 100644 --- a/dev/generated-plugin-examples/dispatch_analysis/index.html +++ b/dev/generated-plugin-examples/dispatch_analysis/index.html @@ -210,4 +210,4 @@ compute(x::Int64) @ Main ./timing.jl:289 │ runtime dispatch detected: Core.kwcall([quote]::@NamedTuple{msg::Nothing}, Base.time_print, %106::IO, %61::UInt64, %87::Int64, %102::Int64, %105::Int64, %67::UInt64, %70::UInt64, true)::Any └──────────────────── -


This page was generated using Literate.jl.

diff --git a/dev/generated-plugin-examples/find_unstable_api/index.html b/dev/generated-plugin-examples/find_unstable_api/index.html index 345d5ac8e..a1c703cd9 100644 --- a/dev/generated-plugin-examples/find_unstable_api/index.html +++ b/dev/generated-plugin-examples/find_unstable_api/index.html @@ -162,4 +162,4 @@ │││││ usage of unstable API `Base.uncompressed_ast` found ││││└───────────────────────────────────────────────────────────────────────────────── ││││┌ @ /Users/aviatesk/.julia/packages/IRTools/aSVI5/src/reflection/reflection.jl:54 -... # many other "unstable API"s detected


This page was generated using Literate.jl.

diff --git a/dev/index.html b/dev/index.html index 2e6ec2198..871c2b784 100644 --- a/dev/index.html +++ b/dev/index.html @@ -33,7 +33,7 @@ reduce_empty(op::Base.BottomRF{typeof(+)}, ::Type{Char}) @ Base ./reduce.jl:360 reduce_empty(::typeof(+), ::Type{Char}) @ Base ./reduce.jl:343 │ no matching method found `zero(::Type{Char})`: zero(T::Type{Char}) -└────────────────────


Analyze packages with report_package

This looks for all method definitions and analyzes function calls based on their signatures. Note that this is less accurate than @report_call, because the actual input types cannot be known for generic methods.

julia> using Pkg; Pkg.activate(; temp=true, io=devnull); Pkg.add("AbstractTrees"; io=devnull);
julia> Pkg.status()Status `/tmp/jl_MAc3Vj/Project.toml` [1520ce14] AbstractTrees v0.4.4
julia> report_package("AbstractTrees"; toplevel_logger=nothing)═════ 9 possible errors found ═════ isroot(root::Any, x::Any) @ AbstractTrees /home/runner/.julia/packages/AbstractTrees/EUx8s/src/base.jl:102 │ no matching method found `parent(::Any, ::Any)`: AbstractTrees.parent(root::Any, x::Any) @@ -66,4 +66,4 @@ └──────────────────── (::AbstractTrees.var"#17#18")(n::Any) @ AbstractTrees /home/runner/.julia/packages/AbstractTrees/EUx8s/src/iteration.jl:323 │ no matching method found `parent(::Any, ::Any)`: AbstractTrees.parent(getfield(#self#::AbstractTrees.var"#17#18", :tree)::Any, n::Any) -└────────────────────


Limitations

JET explores the functions you call directly as well as their inferable callees. However, if the argument types for a call cannot be inferred, JET does not analyze the callee. Consequently, a report of No errors detected does not imply that your entire codebase is free of errors. To increase confidence in JET's results, use @report_opt to make sure your code is inferable.
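
For example (output omitted), a concrete call can be checked for inferability and optimization issues with:

    julia> @report_opt sum(Any[1, 2, 3])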

JET integrates with SnoopCompile, and you can sometimes use SnoopCompile to collect the data to perform more comprehensive analyses. SnoopCompile's limitation is that it only collects data for calls that have not been previously inferred, so you must perform this type of analysis in a fresh session.

See SnoopCompile's JET-integration documentation for further details.

Acknowledgement

This project started as my undergrad thesis project at Kyoto University, supervised by Prof. Takashi Sakuragawa. We were heavily inspired by ruby/typeprof, an experimental type understanding/checking tool for Ruby. The grad thesis about this project is published at https://github.com/aviatesk/grad-thesis, but currently, it's only available in Japanese.

diff --git a/dev/internals/index.html b/dev/internals/index.html index ae2a53253..567b9e123 100644 --- a/dev/internals/index.html +++ b/dev/internals/index.html @@ -1,14 +1,14 @@ -Internals · JET.jl


Internals of JET.jl

Abstract Interpretation

In order to perform type-level program analysis, JET.jl uses the Core.Compiler.AbstractInterpreter interface, and customizes its abstract interpretation by overloading a subset of Core.Compiler functions that were originally developed for the Julia compiler's type inference and optimizations, which aim at generating efficient native code for CPU execution.

JET.AbstractAnalyzer overloads a set of Core.Compiler functions to implement the "core" functionalities of JET's analysis, including inter-procedural error report propagation and caching of the analysis result. Each plugin analyzer (e.g. JET.JETAnalyzer) then overloads more Core.Compiler functions so that it can perform its own program analysis on top of the core AbstractAnalyzer infrastructure.

Most overloads use the invoke reflection, which allows AbstractAnalyzer to dispatch to the original AbstractInterpreter's abstract interpretation methods while still passing AbstractAnalyzer to the subsequent (maybe overloaded) callees.

JET.islineageFunction
islineage(parent::MethodInstance, current::MethodInstance) ->
     (report::InferenceErrorReport) -> Bool

Returns a function that checks if a given InferenceErrorReport

  • is generated from current, and
  • is "lineage" of parent (i.e. entered from it).

This function is supposed to be used when additional analysis with extended lattice information happens in order to filter out reports collected from current by analysis without using that extended information. When a report should be filtered out, the first virtual stack frame represents parent and the second does current.

Example:

entry
 └─ linfo1 (report1: linfo1->linfo2)
    ├─ linfo2 (report1: linfo2)
    ├─ linfo3 (report2: linfo3->linfo2)
    │  └─ linfo2 (report2: linfo2)
    └─ linfo3′ (~~report2: linfo3->linfo2~~)

In the example analysis above, report2 should be filtered out on re-entering linfo3′ (i.e. when we're analyzing linfo3 with constant arguments); nevertheless, report1 shouldn't be, because it is not detected within linfo3 but within linfo1 (so it's not a "lineage" of linfo3):

  • islineage(linfo1, linfo3)(report2) === true
  • islineage(linfo1, linfo3)(report1) === false
source
Core.Compiler.bail_out_toplevel_callFunction

By default AbstractInterpreter implements the following inference bail out logic:

  • bail_out_toplevel_call(::AbstractInterpreter, sig, ::InferenceState): bail out from inter-procedural inference when inferring top-level and non-concrete call site callsig
  • bail_out_call(::AbstractInterpreter, rt, ::InferenceState): bail out from inter-procedural inference when return type rt grows up to Any
  • bail_out_apply(::AbstractInterpreter, rt, ::InferenceState): bail out from _apply_iterate inference when return type rt grows up to Any

It also bails out from local statement/frame inference when any lattice element gets down to Bottom, but AbstractInterpreter doesn't provide a specific interface for configuring it.

source
Core.Compiler.bail_out_callFunction

By default AbstractInterpreter implements the following inference bail out logic:

  • bail_out_toplevel_call(::AbstractInterpreter, sig, ::InferenceState): bail out from inter-procedural inference when inferring top-level and non-concrete call site callsig
  • bail_out_call(::AbstractInterpreter, rt, ::InferenceState): bail out from inter-procedural inference when return type rt grows up to Any
  • bail_out_apply(::AbstractInterpreter, rt, ::InferenceState): bail out from _apply_iterate inference when return type rt grows up to Any

It also bails out from local statement/frame inference when any lattice element gets down to Bottom, but AbstractInterpreter doesn't provide a specific interface for configuring it.

source
Core.Compiler.add_call_backedges!Function
add_call_backedges!(analyzer::JETAnalyzer, ...)

An overload of abstract_call_gf_by_type(analyzer::JETAnalyzer, ...) that always adds backedges (even if a new method definition can't refine the return type that has already grown up to Any). This is because a new method definition always has the potential to change JETAnalyzer's analysis result.

source
Core.Compiler.const_prop_entry_heuristicFunction
const_prop_entry_heuristic(analyzer::JETAnalyzer, result::MethodCallResult, sv::InferenceState)

This overload forces constant prop' even if an inference result can't be improved anymore with respect to the return type, e.g. when result.rt is already Const. In particular, this overload implements a heuristic to force constant prop' when any error points have been reported during the previous abstract method call without constant arguments. The reason we want much more aggressive constant propagation with this heuristic is that constant prop' is highly likely to produce a more accurate analysis result, e.g. by discarding false positive error reports after cutting off unreachable control flow or detecting must-reachable throw calls.

source
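As a rough, hypothetical illustration of why this aggressive constant propagation pays off (the functions below are made up):

function maybe_undef(flag::Bool)
    if flag
        return some_undefined_name  # erroneous only when `flag` is `true`
    end
    return 0
end
call_it() = maybe_undef(false)

# Analyzing `maybe_undef(::Bool)` on its own would flag `some_undefined_name`, but when
# constant prop' re-analyzes the call with `flag = false`, the erroneous branch is proven
# unreachable, so `@report_call call_it()` may end up reporting nothing.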
JET.analyze_task_parallel_code!Function
analyze_task_parallel_code!(analyzer::AbstractAnalyzer, arginfo::ArgInfo, sv::InferenceState)

Adds a special-cased analysis pass for task parallelism. In Julia's task parallelism implementation, parallel code is represented as a closure wrapped in a Task object. Core.Compiler.NativeInterpreter doesn't infer nor optimize the bodies of those closures when compiling code that creates parallel tasks, but JET will try to run an additional analysis pass by recursing into the closures.

See also: https://github.com/aviatesk/JET.jl/issues/114

Note

JET won't do anything other than performing its analysis; e.g. it won't annotate the return type of the wrapped code block, in order not to confuse the original AbstractInterpreter routine. Track https://github.com/JuliaLang/julia/pull/39773 for the changes in the native abstract interpretation routine.

source
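A small, hypothetical example of the kind of code this pass targets; the closure wrapped in the Task is what JET recurses into:

function spawn_work()
    t = Threads.@spawn begin
        undefined_global + 1  # an error hidden inside the task's closure body
    end
    return fetch(t)
end

# `@report_call spawn_work()` may report `undefined_global` thanks to this pass, even
# though the native compiler pipeline wouldn't look inside the spawned closure.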

How AbstractAnalyzer manages caches

Missing docstring.

Missing docstring for JET.AnalysisCache. Check Documenter's build log for details.

JET.CachedAnalysisResultType
CachedAnalysisResult

AnalysisResult is transformed into CachedAnalysisResult when it is cached into a global cache maintained by AbstractAnalyzer. That means, codeinf::CodeInstance = Core.Compiler.code_cache(analyzer::AbstractAnalyzer)[mi::MethodInstance] is expected to have its field codeinf.inferred::CachedAnalysisResult.

InferenceErrorReports found within already-analyzed result::InferenceResult can be accessed with get_cached_reports(analyzer, result).

source
Core.Compiler.inlining_policyFunction
inlining_policy(analyzer::AbstractAnalyzer, @nospecialize(src), ...) -> source::Any

Implements inlining policy for AbstractAnalyzer. Since AbstractAnalyzer works on InferenceResult whose src field keeps AnalysisResult or CachedAnalysisResult, this implementation needs to forward their wrapped source to inlining_policy(::AbstractInterpreter, ::Any, ::UInt8).

source

Top-level Analysis

JET.virtual_processFunction
virtual_process(s::AbstractString,
                 filename::AbstractString,
                 analyzer::AbstractAnalyzer,
                config::ToplevelConfig) -> res::VirtualProcessResult

Simulates Julia's top-level execution, collects error points, and finally returns res::VirtualProcessResult:

  • res.included_files::Set{String}: files that have been analyzed
  • res.defined_modules::Set{Module}: module contexts created during this top-level analysis
  • res.toplevel_error_reports::Vector{ToplevelErrorReport}: toplevel errors found during the text parsing or partial (actual) interpretation; these reports are "critical" and should have precedence over inference_error_reports
  • res.inference_error_reports::Vector{InferenceErrorReport}: possible error reports found by AbstractAnalyzer
  • res.toplevel_signatures: signatures of methods defined within the analyzed files
  • res.actual2virtual::Pair{Module, Module}: keeps the actual and virtual modules

This function first parses s::AbstractString into toplevelex::Expr and then iterates the following steps on each code block (blk) of toplevelex:

  1. if blk is a :module expression, recursively enters analysis into a newly defined virtual module
  2. lowers blk into a :thunk expression lwr (macros are also expanded in this step)
  3. if the context module is virtualized, replaces self-references to the original context module with the virtualized one: see fix_self_references
  4. ConcreteInterpreter partially interprets some statements in lwr that should not be abstracted away (e.g. a :method definition); see also partially_interpret!
  5. finally, AbstractAnalyzer analyzes the remaining statements by abstract interpretation
Warning

In order to process the top-level code sequentially as the Julia runtime does, virtual_process splits the entire code and then iterates a simulation process on each code block. With this approach, we can't track inter-code-block dependencies, and so a partial interpretation of toplevel definitions will fail if it needs access to global variables defined in other code blocks that are not interpreted but just abstracted. We can circumvent this issue using JET's concretization_patterns configuration, which allows us to customize JET's concretization strategy. See ToplevelConfig for more details.

source
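As a hypothetical illustration of that limitation and its workaround (the file content and the concretization pattern below are made up):

# pkg_src.jl -- `N` is assigned in one top-level block, but its *value* is needed
# when the later `struct` definition is interpreted:
#     const N = 4
#     struct Buf
#         data::NTuple{N,Float64}
#     end

# telling JET to concretize that assignment lets the later definition see it:
using JET
report_file("pkg_src.jl"; concretization_patterns = [:(const N = x_)])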
JET.VirtualProcessResultType
res::VirtualProcessResult
  • res.included_files::Set{String}: files that have been analyzed
  • res.defined_modules::Set{Module}: module contexts created during this top-level analysis
  • res.toplevel_error_reports::Vector{ToplevelErrorReport}: toplevel errors found during the text parsing or partial (actual) interpretation; these reports are "critical" and should have precedence over inference_error_reports
  • res.inference_error_reports::Vector{InferenceErrorReport}: possible error reports found by AbstractAnalyzer
  • res.toplevel_signatures: signatures of methods defined within the analyzed files
  • res.actual2virtual::Pair{Module, Module}: keeps the actual and virtual modules
source
JET.virtualize_module_contextFunction
virtualize_module_context(actual::Module)

HACK to return a module where the context of actual is virtualized.

The virtualization is done in the two steps below:

  1. loads the module context of actual into a sandbox module, and exports the whole context from there
  2. then uses names exported from the sandbox

This way, JET's runtime simulation in the virtual module context will be able to define a name that is already defined in actual without causing a "cannot assign a value to variable ... from module ..." error, etc. It allows JET to virtualize the context of an already-existing module other than Main.

TODO

Currently this function relies on Base.names, and thus it can't restore the names introduced by using.

source
JET.ConcreteInterpreterType
ConcreteInterpreter

The trait to inject code into JuliaInterpreter's interpretation process; JET.jl overloads:

  • JuliaInterpreter.step_expr! to add error report pass for module usage expressions and support package analysis
  • JuliaInterpreter.evaluate_call_recurse! to special-case include calls
  • JuliaInterpreter.handle_err to wrap an error that happened during interpretation into ActualErrorWrapped
source
JET.partially_interpret!Function
partially_interpret!(interp::ConcreteInterpreter, mod::Module, src::CodeInfo)

Partially interprets statements in src using JuliaInterpreter.jl:

  • concretizes "toplevel definitions", i.e. :method, :struct_type, :abstract_type and :primitive_type expressions and their dependencies
  • concretizes user-specified toplevel code (see ToplevelConfig)
  • directly evaluates module usage expressions and reports errors for invalid module usages (TODO: enter into the loaded module and keep JET analysis)
  • special-cases include calls so that top-level analysis recursively enters the included file
source

How top-level analysis is bridged to AbstractAnalyzer

JET.AbstractGlobalType
mutable struct AbstractGlobal
     t::Any     # analyzed type
     isconst::Bool # is this abstract global variable declared as constant or not
end

Wraps a global variable whose type is analyzed by abstract interpretation. An AbstractGlobal object will actually be evaluated into the context module, and a later analysis may refer to or alter its type on future load and store operations.

Note

The type of the wrapped global variable will be propagated only when in a toplevel frame, and thus we don't care about the analysis cache invalidation on a refinement of the wrapped global variable, since JET doesn't cache the toplevel frame.

source

Analysis Result

JET.JETToplevelResultType
res::JETToplevelResult

Represents the result of JET's analysis on a top-level script.

  • res.analyzer::AbstractAnalyzer: AbstractAnalyzer used for this analysis
  • res.res::VirtualProcessResult: VirtualProcessResult collected from this analysis
  • res.source::AbstractString: the identity key of this analysis
  • res.jetconfigs: configurations used for this analysis

JETToplevelResult implements show methods for each different frontend. An appropriate show method will be chosen automatically to render the analysis result.

source
JET.JETCallResultType
res::JETCallResult

Represents the result of JET's analysis on a function call.

  • res.result::InferenceResult: the result of this analysis
  • res.analyzer::AbstractAnalyzer: AbstractAnalyzer used for this analysis
  • res.source::AbstractString: the identity key of this analysis
  • res.jetconfigs: configurations used for this analysis

JETCallResult implements show methods for each different frontend. An appropriate show method will be chosen automatically to render the analysis result.

source

Error Report Interface

JET.VirtualFrameType
VirtualFrame

Stack information representing virtual execution context:

  • file::Symbol: the path to the file containing the virtual execution context
  • line::Int: the line number in the file containing the virtual execution context
  • sig::Signature: a signature of this frame
  • linfo::MethodInstance: The MethodInstance containing the execution context

This type is very similar to Base.StackTraces.StackFrame, but its execution context is collected during abstract interpretation, not from actual execution.

source
JET.VirtualStackTraceType
VirtualStackTrace

Represents a virtual stack trace in the form of a vector of VirtualFrames. The vector holds VirtualFrames in order of "from entry call site to error point", i.e. the first element is the VirtualFrame of the entry call site, and the last element is the frame that contains the error.

source
JET.SignatureType
Signature

Represents an expression signature. print_signature implements a frontend functionality to show this type.

source
Missing docstring.

Missing docstring for JET.InferenceErrorReport. Check Documenter's build log for details.

JET.ToplevelErrorReportType
ToplevelErrorReport

An interface type for error reports that JET collects during top-level concrete interpretation. All ToplevelErrorReport subtypes should have the following fields:

  • file::String: the path to the file containing the interpretation context
  • line::Int: the line number in the file containing the interpretation context

See also: virtual_process, ConcreteInterpreter

source
Closest candidates are:
  +(::Any, ::Any, ::Any, ::Any...)
   @ Base operators.jl:587
  +(::T, ::Integer) where T<:AbstractChar
   @ Base char.jl:237
  +(::Integer, ::AbstractChar)
   @ Base char.jl:247
julia> sum("") # will lead to `MethodError: zero(Type{Char})`ERROR: MethodError: no method matching zero(::Type{Char}) + @ Base char.jl:237 + +(::Integer, ::AbstractChar) + @ Base char.jl:247
julia> sum("") # will lead to `MethodError: zero(Type{Char})`ERROR: MethodError: no method matching zero(::Type{Char}) Closest candidates are: zero(::Type{Union{}}, Any...) @ Base number.jl:310 - zero(::Type{Pkg.Resolve.FieldValue}) - @ Pkg /opt/hostedtoolcache/julia/nightly/x64/share/julia/stdlib/v1.11/Pkg/src/Resolve/fieldvalues.jl:38 + zero(::Type{Dates.Date}) + @ Dates /opt/hostedtoolcache/julia/nightly/x64/share/julia/stdlib/v1.11/Dates/src/types.jl:459 zero(::Type{LibGit2.GitHash}) @ LibGit2 /opt/hostedtoolcache/julia/nightly/x64/share/julia/stdlib/v1.11/LibGit2/src/oid.jl:221 ...

We should note that @report_call sum("julia") could detect both of those two different errors that can happen at runtime. This is because @report_call performs static analysis: it analyzes the function call in a way that does not rely on any single runtime execution, but rather reasons about all the possible executions. This is one of the biggest advantages of static analysis, because other approaches to checking software quality, like testing, usually rely on some runtime execution and can only cover a subset of all the possible executions.

As mentioned above, JET is designed to work with just a normal Julia program. Let's define a new, arbitrary function and run JET on it:

julia> function foo(s0)
   └────────────────────
   
 Test Summary: | Pass  Fail  Broken  Total  Time
JET testset   |    2     1       1      4  2.4s
 ERROR: Some tests did not pass: 2 passed, 1 failed, 0 errored, 1 broken.

JET uses JET itself in its test pipeline: JET's static analysis has proven to be very useful and has helped its development a lot. If interested, take a peek at JET's "self check" testset.

Lastly, let's see an example that demonstrates JET can analyze a "top-level" program. The top-level analysis should be considered a somewhat experimental feature, and at this moment you may need additional configurations to run it correctly. Please read the descriptions of top-level entry points and choose an appropriate entry point for your use case. Here we run report_file on demo.jl. It automatically extracts and loads "definitions" of functions, structs and such, and then analyzes their "usages" statically:

julia> report_file(normpath(Base.pkgdir(JET), "demo.jl"))
[toplevel-info] virtualized the context of Main (took 0.017 sec)
 [toplevel-info] entered into /home/runner/work/JET.jl/JET.jl/demo.jl
[toplevel-info]  exited from /home/runner/work/JET.jl/JET.jl/demo.jl (took 0.196 sec)
 ═════ 7 possible errors found ═════
 Toplevel MethodInstance thunk @ Main /home/runner/work/JET.jl/JET.jl/demo.jl:9
 │ `m` is not defined: fib(m)
 f() @ Main ./REPL[2]:3
 g() @ Main ./REPL[1]:1
 │ may throw: Main.throw()
└────────────────────

Entry Points

Interactive Entry Points

JET offers interactive analysis entry points that can be used similarly to code_typed and its family:

JET.@report_callMacro
@report_call [jetconfigs...] f(args...)

Evaluates the arguments to a function call, determines their types, and then calls report_call on the resulting expression. This macro works in a similar way as the @code_typed macro.

The general configurations and the error analysis specific configurations can be specified as an optional argument.

source
JET.report_callFunction
report_call(f, [types]; jetconfigs...) -> JETCallResult
 report_call(tt::Type{<:Tuple}; jetconfigs...) -> JETCallResult
report_call(mi::Core.MethodInstance; jetconfigs...) -> JETCallResult

Analyzes a function call with the given type signature to find type-level errors and returns back detected problems.

The general configurations and the error analysis specific configurations can be specified as a keyword argument.

See the documentation of the error analysis for more details.

source
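For example (the analyzed calls are arbitrary), both of the following should report the MethodError possibilities discussed earlier on this page:

julia> report_call(sum, (String,))  # analyze `sum(::String)` from its type signature

julia> @report_call sum("julia")    # or analyze a concrete call with the macro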

Top-level Entry Points

JET can also analyze your "top-level" program: it can just take your Julia script or package and will report possible errors.

Note that JET will analyze your top-level program "half-statically": JET will selectively interpret and load "definitions" (like function or struct definitions) and try to simulate Julia's top-level code execution process, while it tries to avoid executing any other parts of the code, such as function calls, and analyzes them by abstract interpretation instead (this is the part where JET statically analyzes your code). If you're interested in how JET selects "top-level definitions", please see JET.virtual_process.

Warning

Because JET will interpret the "definitions" in your code, that part of the top-level analysis actually runs your code. So we should note that JET can trigger side effects of your code; for example, JET will try to expand all the macros used in your code, and so any side effects involved in macro expansion will also happen during JET's analysis.

JET.report_fileFunction
report_file(file::AbstractString; jetconfigs...) -> JETToplevelResult

Analyzes file to find type-level errors and returns back detected problems.

This function looks for a .JET.toml configuration file in the directory of file, and searches upward in the file tree until a .JET.toml is found (or the search reaches the filesystem root). When found, the configurations specified in the file are applied. See JET's configuration file specification for more details.

The general configurations and the error analysis specific configurations can be specified as a keyword argument, and if given, they are preferred over the configurations specified by a .JET.toml configuration file.

Tip

When you want to analyze your package but no files that actually use its functions are available, the analyze_from_definitions option may be useful since it allows JET to analyze methods based on their declared signatures. For example, JET can analyze JET itself in this way:

# from the root directory of JET.jl
 julia> report_file("src/JET.jl";
                    analyze_from_definitions = true)

See also report_package.

Note

This function enables the toplevel_logger configuration with the default logging level by default. You can still explicitly specify and configure it:

report_file(args...;
             toplevel_logger = nothing, # suppress the toplevel logger
            jetconfigs...) # other configurations

See JET's top-level analysis configurations for more details.

source
JET.watch_fileFunction
watch_file(file::AbstractString; jetconfigs...)

Watches file and keeps re-triggering analysis with report_file on code update. JET will try to analyze all the included files reachable from file, and it will re-trigger analysis if there is code update detected in any of the included files.

This function internally uses Revise.jl to track code updates. Revise also offers possibilities to track changes in files that are not directly analyzed by JET, or even changes in Base files. See watch configurations for more details.

Warning

This interface is very experimental and likely to be subject to change or removal without notice.

See also report_file.

source
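A possible invocation (the script path is hypothetical):

julia> watch_file("script.jl")  # blocks and re-analyzes whenever `script.jl` or any file it includes changes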
JET.report_packageFunction
report_package(package::Module; jetconfigs...) -> JETToplevelResult
report_package(package::AbstractString; jetconfigs...) -> JETToplevelResult

Analyzes package in the same way as report_file and returns back type-level errors with the special default configurations, which are especially tuned for analyzing a package (see below for details). The package argument can be either a Module or an AbstractString. In the latter case it must be the name of a package in your current environment.

The error analysis performed by this function is configured as follows by default:

  • analyze_from_definitions = true: This allows JET to start analysis without top-level call sites. This is useful for analyzing a package since a package itself usually only contains definitions of types and methods but not their usages (i.e. call sites).
  • concretization_patterns = [:(x_)]: Concretizes every top-level code block in a given package. Concretization is generally preferred for successful analysis as long as it can be performed cheaply. In most cases it is indeed cheap to interpret and concretize top-level code written in a package, since it usually only defines types and methods.
  • ignore_missing_comparison = true: JET ignores the possibility of a poorly-inferred comparison operator call (e.g. ==) returning missing. This is useful because report_package often relies on poor input argument type information at the beginning of analysis, leading to noisy error reports from branching on the potential missing return value of such a comparison operator call. If a target package needs to handle missing, this configuration should be turned off since it hides the possibility of errors that may actually happen at runtime.

See ToplevelConfig and JETAnalyzer for more details.

Still the general configurations and the error analysis specific configurations can be specified as a keyword argument, and if given, they are preferred over the default configurations described above.


report_package(; jetconfigs...) -> JETToplevelResult

Like above but analyzes the package of the current project.

See also report_file.

source
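For example, one might run (the package name is hypothetical):

julia> report_package("MyPackage")                                     # use the tuned defaults

julia> report_package("MyPackage"; ignore_missing_comparison = false)  # but keep `missing`-related reports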
JET.report_textFunction
report_text(text::AbstractString; jetconfigs...) -> JETToplevelResult
report_text(text::AbstractString, filename::AbstractString; jetconfigs...) -> JETToplevelResult

Analyzes top-level text and returns back type-level errors.

source
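For example, analyzing a snippet given as a string (the snippet is arbitrary and the undefined name is intentional):

julia> report_text("""
           foo(x) = undefined_name + x
           foo(1)
           """, "snippet.jl")  # may report that `undefined_name` is not defined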

Test Integration

JET also exports entries that are fully integrated with Test standard library's unit-testing infrastructure. It can be used in your test suite to assert your program is free from errors that JET can detect:

JET.@test_callMacro
@test_call [jetconfigs...] [broken=false] [skip=false] f(args...)

Runs @report_call jetconfigs... f(args...) and tests that the function call f(args...) is free from problems that @report_call can detect. Returns a Pass result if the test is successful, a Fail result if any problems are detected, or an Error result if the test encounters an unexpected error. When the test Fails, abstract call stack to each problem location will be printed to stdout.

julia> @test_call sincos(10)
 Test Passed
   Expression: #= none:1 =# JET.@test_call sincos(10)

As with @report_call, the general configurations and the error analysis specific configurations can be specified as an optional argument:

julia> cond = false
 
 
 Test Summary: | Pass  Fail  Broken  Total  Time
 check errors  |    1     1       1      3  0.2s
ERROR: Some tests did not pass: 1 passed, 1 failed, 0 errored, 1 broken.
source
JET.test_callFunction
test_call(f, [types]; broken::Bool = false, skip::Bool = false, jetconfigs...)
test_call(tt::Type{<:Tuple}; broken::Bool = false, skip::Bool = false, jetconfigs...)

Runs report_call on a function call with the given type signature and tests that it is free from problems that report_call can detect. Except that it takes a type signature rather than a call expression, this function works in the same way as @test_call.

source
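For example, inside a test suite (the tested signatures are arbitrary):

using JET, Test
@testset "JET checks" begin
    test_call(sum, (Vector{Int},))             # the `f, types` form
    test_call(Tuple{typeof(sum),Vector{Int}})  # the equivalent `tt::Type{<:Tuple}` form
end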
JET.test_fileFunction
test_file(file::AbstractString; jetconfigs...)

Runs report_file and tests that there are no problems detected.

As with report_file, the general configurations and the error analysis specific configurations can be specified as an optional argument.

Like @test_call, test_file is fully integrated with the Test standard library. See @test_call for the details.

source
JET.test_packageFunction
test_package(package::Module; jetconfigs...)
 test_package(package::AbstractString; jetconfigs...)
 test_package(; jetconfigs...)

Runs report_package and tests that there are no problems detected.

As with report_package, the general configurations and the error analysis specific configurations can be specified as an optional argument.

Like @test_call, test_package is fully integrated with the Test standard library. See @test_call for the details.

julia> @testset "test_package" begin
            test_package("Example"; toplevel_logger=nothing)
        end;
 Test Summary: | Pass  Total  Time
test_package  |    1      1  0.0s
source
JET.test_textFunction
test_text(text::AbstractString; jetconfigs...)
test_text(text::AbstractString, filename::AbstractString; jetconfigs...)

Runs report_text and tests that there are no problems detected.

As with report_text, the general configurations and the error analysis specific configurations can be specified as an optional argument.

Like @test_call, test_text is fully integrated with the Test standard library. See @test_call for the details.

source

Configurations

In addition to the general configurations, the error analysis can take the following specific configurations:

JET.JETAnalyzerType

Every entry point of error analysis can accept any of the general configurations as well as the following additional configurations that are specific to the error analysis.


  • mode::Symbol = :basic:
    Switches the error analysis pass. Each analysis pass reports errors according to their own "error" definition. JET by default offers the following modes:

    • mode = :basic: the default error analysis pass. This analysis pass is tuned to be useful for general Julia development by reporting common problems, but also note that it is not strict enough to guarantee that your program never throws runtime errors.
      See BasicPass for more details.
    • mode = :sound: the sound error analysis pass. If this pass doesn't report any errors, then your program is assured to run without any runtime errors (unless JET's error definition is not accurate and/or there is an implementation flaw).
      See SoundPass for more details.
    • mode = :typo: a typo detection pass, i.e. a simple analysis pass to detect "typo"s in your program. This analysis pass is essentially a subset of the default basic pass (BasicPass), and it only reports undefined global references and undefined field accesses. This might be useful especially for a very complex code base, because even the basic pass tends to be too noisy (spammed with too many errors) in such a case.
      See TypoPass for more details.
    Note

    You can also set up your own analysis using JET's AbstractAnalyzer-Framework.


  • ignore_missing_comparison::Bool = false:
    If true, JET will ignore the possibility of a poorly-inferred comparison operator call (e.g. ==) returning missing, in order to hide error reports that come from branching on the potential missing return value of such a comparison call. This is turned off by default because, when a comparison call results in a Union{Bool,Missing} possibility, it likely signifies an inferrability issue or the missing possibility should be handled in some way. But this is useful for reducing noisy error reports in situations where specific input argument types are not available at the beginning of the analysis, as with report_package.

source
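For example, these configurations can be passed to any of the error analysis entry points (the analyzed targets are arbitrary, and the package name is hypothetical):

julia> @report_call mode=:typo sum("julia")      # run only the typo-detection pass

julia> report_package("MyPackage"; mode=:sound)  # sound analysis of a package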
JET.BasicPassType

The basic error analysis pass. This is used by default.

source
JET.SoundPassType

The sound error analysis pass.

source
JET.TypoPassType

A typo detection pass.

source
Test Summary:          | Pass  Fail  Broken  Total  Time
check type-stabilities |    2     1       1      4  0.0s
ERROR: Some tests did not pass: 2 passed, 1 failed, 0 errored, 1 broken.

Entry Points

Interactive Entry Points

The optimization analysis offers interactive entry points that can be used in the same way as @report_call and report_call:

JET.@report_optMacro
@report_opt [jetconfigs...] f(args...)

Evaluates the arguments to a function call, determines their types, and then calls report_opt on the resulting expression.

The general configurations and the optimization analysis specific configurations can be specified as an optional argument.

source
JET.report_optFunction
report_opt(f, [types]; jetconfigs...) -> JETCallResult
 report_opt(tt::Type{<:Tuple}; jetconfigs...) -> JETCallResult
report_opt(mi::Core.MethodInstance; jetconfigs...) -> JETCallResult

Analyzes a function call with the given type signature to detect optimization failures and unresolved method dispatches.

The general configurations and the optimization analysis specific configurations can be specified as a keyword argument.

See the documentation of the optimization analysis for more details.

source

Test Integration

As with the default error analysis, the optimization analysis also offers the integration with Test standard library:

JET.@test_optMacro
@test_opt [jetconfigs...] [broken=false] [skip=false] f(args...)

Runs @report_opt jetconfigs... f(args...) and tests that the function call f(args...) is free from optimization failures and unresolved method dispatches that @report_opt can detect.

As with @report_opt, the general configurations and optimization analysis specific configurations can be specified as an optional argument:

julia> function f(n)
             r = sincos(n)
             # `println` is full of runtime dispatches,
             # but we can ignore the corresponding reports from `Base`
 
 julia> @test_opt target_modules=(@__MODULE__,) f(10)
 Test Passed
  Expression: #= REPL[3]:1 =# JET.@test_call analyzer = JET.OptAnalyzer target_modules = (#= REPL[3]:1 =# @__MODULE__(),) f(10)

Like @test_call, @test_opt is fully integrated with the Test standard library. See @test_call for the details.

source
JET.test_optFunction
test_opt(f, [types]; broken::Bool = false, skip::Bool = false, jetconfigs...)
test_opt(tt::Type{<:Tuple}; broken::Bool = false, skip::Bool = false, jetconfigs...)

Runs report_opt on a function call with the given type signature and tests that it is free from optimization failures and unresolved method dispatches that report_opt can detect. Except that it takes a type signature rather than a call expression, this function works in the same way as @test_opt.

source
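For example, inside a test suite (the tested signature is arbitrary):

using JET, Test
@testset "type stabilities" begin
    test_opt(sum, (Vector{Float64},))  # fails if `sum(::Vector{Float64})` has optimization failures or runtime dispatches
end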

Top-level Entry Points

By default, JET doesn't offer top-level entry points for the optimization analysis, because it's usually applied to only a selective portion of your program. But if you want, you can just use report_file or similar top-level entry points with the analyzer = OptAnalyzer configuration in order to apply the optimization analysis to a top-level script, e.g. report_file("path/to/file.jl"; analyzer = OptAnalyzer).

Configurations

In addition to the general configurations, the optimization analysis can take the following specific configurations:

JET.OptAnalyzerType

Every entry point of optimization analysis can accept any of the general configurations as well as the following additional configurations that are specific to the optimization analysis.


  • skip_noncompileable_calls::Bool = true:
    Julia's runtime dispatch is "powerful" because it can always compile code with concrete runtime arguments so that a "kernel" function runs very effectively even if it's called from a type-unstable call site. This means we (really) often accept that some parts of our code are not inferred statically, and rather we want to just rely on information that is only available at runtime. To model this programming style, the optimization analyzer by default does NOT report any optimization failures or runtime dispatches detected within non-concrete calls (more correctly, "non-compileable" calls are ignored: see also the note below). We can turn off this skip_noncompileable_calls configuration to report type instabilities within those calls as well.

    # the following examples are adapted from https://docs.julialang.org/en/v1/manual/performance-tips/#kernel-functions
     julia> function fill_twos!(a)
                for i = eachindex(a)
                    a[i] = 2
                end
            end;

     # we can also turn off the heuristic itself
     julia> @test_opt unoptimize_throw_blocks=false skip_unoptimized_throw_blocks=false sin(10)
     Test Passed
       Expression: #= REPL[7]:1 =# JET.@test_call analyzer = JET.OptAnalyzer unoptimize_throw_blocks = false skip_unoptimized_throw_blocks = false sin(10)

source
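The following is a self-contained sketch (not taken from JET's documentation; fill_kernel! and strange_fill are illustrative names) of the kernel-function pattern that skip_noncompileable_calls is designed around: the container's element type is only decided at runtime, so the call into fill_kernel! is a runtime dispatch, while fill_kernel! itself compiles efficiently for the concrete runtime type.

using JET

function fill_kernel!(a)
    for i in eachindex(a)
        a[i] = 2
    end
    return a
end

function strange_fill(n)
    # element type decided at runtime => type-unstable container
    a = Vector{rand(Bool) ? Int64 : Float64}(undef, n)
    fill_kernel!(a)  # runtime dispatch at this call site
    return a
end

# default: reports from inside the non-concrete `fill_kernel!` frame are skipped
@report_opt strange_fill(10)

# report type instabilities within non-compileable calls as well
@report_opt skip_noncompileable_calls=false strange_fill(10)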

# Now let's filter out the ones that pass without issue
badmis = filter(mis) do mi
    !isempty(JET.get_reports(report_call(mi)))
end

Then you can inspect the method instances in badmis individually with report_call(mi).

There are two caveats to note:
