Merge pull request #819 from EMResearch/master
Updating branch before requesting a code review.
onurd86 authored Oct 8, 2023
2 parents 3e612e6 + aacc1b1 commit 2d5ef05
Showing 8 changed files with 230 additions and 34 deletions.
11 changes: 6 additions & 5 deletions .github/workflows/ci.yml
@@ -50,12 +50,13 @@ jobs:
path: core/target/evomaster.jar
retention-days: ${{env.retention-days}}
if-no-files-found: error
### TODO disabled due to bug. See https://github.com/mikepenz/action-junit-report/issues/952
# Make test report accessible from GitHub Actions (as Maven logs are long)
- name: Publish Test Report
if: success() || failure()
uses: mikepenz/action-junit-report@v3
with:
report_paths: '**/target/surefire-reports/TEST-*.xml'
# - name: Publish Test Report
# if: success() || failure()
# uses: mikepenz/action-junit-report@v4
# with:
# report_paths: '**/target/surefire-reports/TEST-*.xml'
# Upload coverage results
- name: Upload coverage to CodeCov
run: curl -s https://codecov.io/bash | bash
8 changes: 5 additions & 3 deletions README.md
@@ -32,7 +32,7 @@ building on decades of research in the field of [Search-Based Software Testing](

__Key features__:

* _Web APIs_: At the moment, _EvoMaster_ can generate test cases for __REST__ and __GraphQL__ APIs.
* _Web APIs_: At the moment, _EvoMaster_ can generate test cases for __REST__, __GraphQL__ and __RPC__ (e.g., __gRPC__ and __Thrift__) APIs.

* _Blackbox_ testing mode: can run on any API (regardless of its programming language, e.g., Python and Go).
However, results for blackbox testing will be worse than whitebox testing (e.g., due to lack of code analysis).
@@ -41,7 +41,7 @@ __Key features__:
JVM (e.g., Java and Kotlin). _EvoMaster_ analyses the bytecode of the tested applications, and uses
several heuristics such as _testability transformations_ and _taint analysis_ to be able to generate
more effective test cases. We support JDK __8__ and the major LTS versions after that (currently JDK __17__). Might work on other JVM versions, but we provide __NO__ support for them.
Note: there is initial support for other languages as well, like for example JavaScript/TypeScript, but they are not in a stable, feature-complete state yet.
Note: there is initial support for other languages as well, like for example JavaScript/TypeScript and C#, but they are not in a stable, feature-complete state yet.

* _Installation_: we provide installers for the main operating systems: Windows (`.msi`),
OSX (`.dmg`) and Linux (`.deb`). We also provide an uber-fat JAR file.
@@ -85,6 +85,8 @@ __Known limitations__:
But, then, you should run _EvoMaster_ for somewhere between 1 and 24 hours (the longer the better, but
it is unlikely to get better results after 24 hours).

* _RPC APIs_: for the moment, we do not directly support RPC schema definitions. Fuzzing RPC APIs requires writing a driver that uses the client library of the API to make the calls.

* _External services_: (e.g., other RESTful APIs) currently there is no support for them (e.g., to automatically mock them).
It is work in progress.

@@ -121,7 +123,7 @@ __Known limitations__:
Depending on the year, we might have funding for _postdoc_ and _PhD student_ positions to work on this project (in Oslo, Norway).

Current positions:
* 2023: PhD student positions. No new calls scheduled for the moment.
* 2023: PhD student positions, [1 position available](https://www.kristiania.no/en/about-kristiania/vacant-positions/?rmpage=job&rmjob=679&rmlang=UK).
* 2023: Postdoc positions. No new calls scheduled for the moment.

For questions on these positions, please contact Prof. Andrea Arcuri.
31 changes: 30 additions & 1 deletion core/src/main/kotlin/org/evomaster/core/Main.kt
@@ -220,9 +220,38 @@ class Main {
val totalLines = unitsInfo.numberOfLines
val percentage = String.format("%.0f", (linesInfo.total / totalLines.toDouble()) * 100)

/*
This is a quite tricky case...
the number of covered lines X should be less than or equal to the total T, i.e., X<=T.
However, we end up with cases like X > T where T=0.
Should never happen in practice, but it does for E2E tests.
This is because we could have different test suites working on same SUTs.
Once one is finished, it would reset all data.
Such data would not then be recomputed in the next test suite execution, as
the classes are already loaded...
Not sure if there is any clean solution for this...
executing these tests in their own process might be possible with Failsafe/Surefire.
Having a check for totalLines == 0 was not a good solution. If the assertion fails,
and test is re-executed on same JVM with classes already loaded, then we would get
totalLines == 0 after the reset... and so the test cases will always pass :(
*/
//assert(totalLines == 0 || linesInfo.total <= totalLines){ "${linesInfo.total} > $totalLines"}
/*
Having this assertion is way too problematic... not only issues when more than one E2E test suite uses
the same SUT, but also when flaky tests are re-run (both in our scaffolding, and in Maven)
*/
//assert(linesInfo.total <= totalLines){ "WRONG COVERAGE: ${linesInfo.total} > $totalLines"}

info("Covered targets (lines, branches, faults, etc.): ${targetsInfo.total}")
info("Potential faults: ${faults.size}")
info("Bytecode line coverage: $percentage% (${linesInfo.total} out of $totalLines in $units units/classes)")

if(totalLines == 0 || units == 0){
logError("Detected $totalLines lines to cover, for a total of $units units/classes." +
" Are you sure you did setup getPackagePrefixesToCover() correctly?")
} else {
info("Bytecode line coverage: $percentage% (${linesInfo.total} out of $totalLines in $units units/classes)")
}
} else {
warn("Failed to retrieve SUT info")
}
16 changes: 7 additions & 9 deletions core/src/main/kotlin/org/evomaster/core/search/FitnessValue.kt
@@ -254,15 +254,13 @@ class FitnessValue(
var seedingTime = 0
var searchTime = 0

targets.entries.forEach { e ->
(e.value.distance == MAX_VALUE && (prefix == null || idMapper.getDescriptiveId(e.key).startsWith(prefix))).apply {
if (coveredTargetsDuringSeeding.contains(e.key))
seedingTime++
else
searchTime++
if (this && bootTime.any { it.descriptiveId == idMapper.getDescriptiveId(e.key) })
duplicatedcounter++
}
targets.entries.filter { e -> (e.value.distance == MAX_VALUE && (prefix == null || idMapper.getDescriptiveId(e.key).startsWith(prefix))) }.forEach { e ->
if (coveredTargetsDuringSeeding.contains(e.key))
seedingTime++
else
searchTime++
if (bootTime.any { it.descriptiveId == idMapper.getDescriptiveId(e.key) })
duplicatedcounter++
}

/*
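The `FitnessValue` change above fixes a subtle scoping bug: in the original `Boolean.apply { ... }` block, only the duplicate check used the receiver boolean, so the seeding/search counters were incremented for *every* target, covered or not. The rewrite filters first, then counts. A minimal Java sketch of the corrected filter-then-count logic (the names and signature here are hypothetical, not the real `FitnessValue` API):

```java
import java.util.Map;
import java.util.Set;

// Simplified model of the corrected counting logic: first keep only the
// covered targets, then classify each one as covered during seeding or
// during search. Uncovered targets must not touch either counter.
public class CoveredTargetCount {

    static int[] count(Map<Integer, Boolean> coveredByTarget, Set<Integer> coveredDuringSeeding) {
        int seedingTime = 0;
        int searchTime = 0;
        for (Map.Entry<Integer, Boolean> e : coveredByTarget.entrySet()) {
            if (!e.getValue()) {
                continue; // filter first, mirroring the fix in the diff
            }
            if (coveredDuringSeeding.contains(e.getKey())) {
                seedingTime++;
            } else {
                searchTime++;
            }
        }
        return new int[]{seedingTime, searchTime};
    }
}
```

In the buggy version, the increments lived inside the `apply` block without checking the receiver, so uncovered targets inflated both counters.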
126 changes: 126 additions & 0 deletions core/src/test/kotlin/org/evomaster/core/search/FitnessValueTest.kt
@@ -0,0 +1,126 @@
package org.evomaster.core.search


import org.evomaster.client.java.controller.api.dto.BootTimeInfoDto
import org.evomaster.client.java.controller.api.dto.TargetInfoDto
import org.evomaster.client.java.instrumentation.shared.ObjectiveNaming
import org.evomaster.core.search.service.IdMapper
import org.junit.jupiter.api.Assertions.*
import org.junit.jupiter.api.Test

class FitnessValueTest {


@Test
fun testUnionWithBootTimeCoveredTargets(){

val idMapper = IdMapper().apply {
addMapping(0, "Line_at_com.foo.rest.examples.spring.postcollection.CreateDto_00007")
addMapping(1, "Class_com.foo.rest.examples.spring.postcollection.CreateDto")
addMapping(2, "Success_Call_at_com.foo.rest.examples.spring.postcollection.CreateDto_00007_0")
addMapping(3, "Line_at_com.foo.rest.examples.spring.postcollection.CreateDto_00008")
addMapping(4, "Line_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00025")
addMapping(5, "Branch_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_at_line_00025_position_0_falseBranch")
addMapping(6, "Branch_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_at_line_00025_position_0_trueBranch")
addMapping(7, "Line_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00026")
addMapping(8, "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00026_0")
addMapping(9, "Line_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00029")
addMapping(10, "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00029_0")
addMapping(11, "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00029_1")
addMapping(-2, "201:POST:/api/pc")
addMapping(-3, "HTTP_SUCCESS:POST:/api/pc")
addMapping(-4, "HTTP_FAULT:POST:/api/pc")
addMapping(15, "Line_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00039")
addMapping(16, "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00039_0")
addMapping(17, "Branch_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_at_line_00039_position_0_falseBranch")
addMapping(18, "Branch_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_at_line_00039_position_0_trueBranch")
addMapping(19, "Line_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00040")
addMapping(20, "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00040_0")
addMapping(21, "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00040_1")
addMapping(-5, "400:GET:/api/pc")
addMapping(-6, "HTTP_SUCCESS:GET:/api/pc")
addMapping(-7, "HTTP_FAULT:GET:/api/pc")
addMapping(25, "PotentialFault_PartialOracle_CodeOracle GET:/api/pc")
addMapping(26, "Line_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00043")
addMapping(27, "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00043_0")
addMapping(28, "Line_at_com.foo.rest.examples.spring.postcollection.ValuesDto_00006")
addMapping(29, "Class_com.foo.rest.examples.spring.postcollection.ValuesDto")
addMapping(30, "Success_Call_at_com.foo.rest.examples.spring.postcollection.ValuesDto_00006_0")
addMapping(31, "Line_at_com.foo.rest.examples.spring.postcollection.ValuesDto_00008")
addMapping(32, "Success_Call_at_com.foo.rest.examples.spring.postcollection.ValuesDto_00008_0")
addMapping(33, "Line_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00044")
addMapping(34, "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00044_0")
addMapping(35, "Line_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00046")
addMapping(36, "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00046_0")
addMapping(-8, "200:GET:/api/pc")
addMapping(-9, "200:GET:/v2/api-docs")
addMapping(-10, "HTTP_SUCCESS:GET:/v2/api-docs")
addMapping(-11, "HTTP_FAULT:GET:/v2/api-docs")
addMapping(-12, "PotentialFault_PartialOracle_CodeOracle GET:/v2/api-docs")
}

val fv = FitnessValue(1.0)
fv.coverTarget(0) //line
fv.coverTarget(1)
fv.coverTarget(2)
fv.coverTarget(3) //line
fv.coverTarget(4) //line
fv.coverTarget(-12)

assertEquals(6, fv.coveredTargets())

var linesInfo = fv.unionWithBootTimeCoveredTargets(ObjectiveNaming.LINE, idMapper, null)
assertEquals(3, linesInfo.total)

var bootTimeInfoDto = BootTimeInfoDto().apply {
targets = listOf(
TargetInfoDto().apply{//new
id = 35
descriptiveId = "Line_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00046"
value = 1.0
actionIndex = -1
},
TargetInfoDto().apply{//other
id = 36
descriptiveId = "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00046_0"
value = 1.0
actionIndex = -1
}
)
}

linesInfo = fv.unionWithBootTimeCoveredTargets(ObjectiveNaming.LINE, idMapper, bootTimeInfoDto)
assertEquals(4, linesInfo.total)
assertEquals(1, linesInfo.bootTime)
assertEquals(3, linesInfo.searchTime)

bootTimeInfoDto = BootTimeInfoDto().apply {
targets = listOf(
TargetInfoDto().apply{//duplicate
id = 0
descriptiveId = "Line_at_com.foo.rest.examples.spring.postcollection.CreateDto_00007"
value = 1.0
actionIndex = -1
},
TargetInfoDto().apply{//new
id = 35
descriptiveId = "Line_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00046"
value = 1.0
actionIndex = -1
},
TargetInfoDto().apply{//other
id = 36
descriptiveId = "Success_Call_at_com.foo.rest.examples.spring.postcollection.PostCollectionRest_00046_0"
value = 1.0
actionIndex = -1
}
)
}

linesInfo = fv.unionWithBootTimeCoveredTargets(ObjectiveNaming.LINE, idMapper, bootTimeInfoDto)
assertEquals(4, linesInfo.total)
assertEquals(2, linesInfo.bootTime)
assertEquals(3, linesInfo.searchTime)
}

}
32 changes: 16 additions & 16 deletions docs/whitebox.md
@@ -3,52 +3,52 @@

In white-box testing, the internal details of the _system under test_ (SUT) are known.
One needs to be able to access the source code (or bytecode, for JVM languages) of the SUT.
This is usually not a problem when testing is done by the developers of the SUT themselves.
This is usually not a problem when testing is done by the developers of the SUT themselves.


A white-box test approach can aim at maximizing the code coverage of the SUT.
A white-box test approach can aim at maximizing the code coverage of the SUT.
This is helpful in at least two ways:

* _Fault Detection_: the higher the code coverage, the more likely it is to find a bug in the SUT.
A bug can only manifest itself if the faulty statements are executed at least once.
* _Regression Testing_: even if no fault is found, the generated tests can still be useful to check
later on for regression faults. And for fault detection, the higher code coverage the better.
later on for regression faults. And for fault detection, the higher code coverage the better.


To measure code coverage, the SUT needs to be _instrumented_, by putting probes in it.
In JVM languages, this can be done automatically by intercepting the class loaders, and then
use libraries like ASM to manipulate the bytecode of SUT classes at runtime.
use libraries like ASM to manipulate the bytecode of SUT classes at runtime.
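To make the idea of "probes" concrete, here is a hedged sketch of the kind of runtime registry such instrumentation targets: the bytecode manipulation (e.g., via ASM) would inject a call like `CoverageProbes.hit(className, line)` at the start of each source line. The class and method names are illustrative, not EvoMaster's actual agent API.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Illustrative probe registry: instrumented bytecode would call hit(...) for
// every executed line; the set then yields the covered-line count.
public class CoverageProbes {

    private static final Set<String> coveredLines =
            Collections.synchronizedSet(new HashSet<>());

    // Conceptually inserted by the bytecode instrumentation at each line.
    public static void hit(String className, int line) {
        coveredLines.add(className + ":" + line);
    }

    public static int coveredLineCount() {
        return coveredLines.size();
    }
}
```

Because probes are keyed by class and line, re-executing the same line does not inflate the coverage count.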


But measuring code coverage alone is not enough to generate high coverage test cases.
Consider this trivial code snippet:

`if(x==42){//...`
`if(x==42){//...`

Using a black-box approach in which inputs are randomly generated, a test input would have only
1 chance out of 2 to the power of 32 (i.e., around 4 billion possibilities for a 32-bit number)
to cover the `then` branch of that `if` statement.
to cover the `then` branch of that `if` statement.
But, a static/dynamic analysis of the code would simply point out to use the value `42` for `x`.
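One classic heuristic used in search-based testing for exactly this situation is the *branch distance*: instead of treating `x == 42` as a flat pass/fail, the fitness |x - 42| tells the search how close an input is, giving a gradient that random generation lacks. A toy sketch (_EvoMaster_ uses an evolutionary algorithm, not this simple hill climber):

```java
// Toy illustration of the branch-distance heuristic for "if (x == 42)":
// the distance |x - 42| is zero exactly when the then-branch is taken,
// so a search algorithm can minimise it step by step.
public class BranchDistance {

    static int distance(int x) {
        return Math.abs(x - 42);
    }

    // Trivial hill climber: move x in whichever direction lowers the distance.
    static int searchInput(int start) {
        int x = start;
        while (distance(x) > 0) {
            x = (distance(x + 1) < distance(x)) ? x + 1 : x - 1;
        }
        return x;
    }
}
```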

This is a trivial example, but predicates in the source code can be arbitrarily complex, for
example involving regular expressions and the results of accesses to SQL databases.
_EvoMaster_ uses several different heuristics and code analysis techniques to maximize code coverage
_EvoMaster_ uses several different heuristics and code analysis techniques to maximize code coverage
using an evolutionary algorithm.
In the academic literature, this is referred to as _Search-Based Software Testing_.
The interested reader is encouraged to look at our [academic papers](publications.md)
to learn more about these technical details.
The interested reader is encouraged to look at our [academic papers](publications.md)
to learn more about these technical details.


These static and dynamic code analyses do require accessing the source code, and instrumenting it
before the SUT is started.
before the SUT is started.
But this can be done together with the instrumentation of the SUT to measure its code coverage.
All the instrumentations and code analyses are automatically performed by _EvoMaster_ with a
All the instrumentations and code analyses are automatically performed by _EvoMaster_ with a
library we provide (e.g., on Maven Central for JVM languages).

A user needs to provide a script/class (called _driver_) in which the SUT is _started_, with instrumentations provided
by our library.
This must be done manually, as each framework (e.g., Spring and DropWizard) has its
own way to start and package an application.
Once a user has to provide a driver to _start_ the SUT, adding the options to _stop_ and _reset_
the SUT should not be much extra work.
Once this is done, the test cases automatically generated by _EvoMaster_ become _self-contained_,
@@ -58,23 +58,23 @@ them independent, and then finally _stop_ the SUT after all tests are completed.

We explain [how to write such script/class in this other document](write_driver.md).
To check it out before spending time writing one, you can look at the
[EMB repository](https://github.com/EMResearch/EMB) and search for classes called
[EMB repository](https://github.com/EMResearch/EMB) and search for classes called
`EmbeddedEvoMasterController`.
Start one of those directly from your IDE.
This will start the controller server (binding by default on port `40100`) for one of the SUTs in the
EMB collection.
The controller server is responsible for handling the start/reset/stop of the SUT.
Once it is up and running, you can generate test cases for it by running _EvoMaster_ from
command-line with:
command-line with:

```
java -jar evomaster.jar
```

By default, _EvoMaster_ will try to connect to a controller server that is listening on port 40100.
Its first step will be to tell it to start the SUT with all the required instrumentations.
Then, it will finally start an evolutionary algorithm to evolve test cases, and measure their fitness
when executed against the SUT.
when executed against the SUT.
To see which options to use when running _EvoMaster_ (e.g., for how long to run the evolution),
see the [main options](options.md).
