
Meeting Minutes and Working Plan (made on 2019.01.26)


Meeting minutes

  1. The most common definition of a performance bug is:

    • 'We refer to performance bugs as software defects where relatively simple source-code changes can significantly speed up software, while preserving functionality. These defects cannot be optimized away by state-of-practice compilers, thus bothering end users.' -- PLDI'12, Shan Lu.

    We consider performance in terms of the end user's experience (rather than, say, some component hanging while the system as a whole does not). In that view, a 'hang' means the user cannot get service at all, which is inconsistent with 'preserving functionality' in the definition above. Further, a memory leak is already a well-known class of bug that threatens correctness, so we do not necessarily treat it as a performance bug.

    The performance bugs I am targeting in my project cannot simply reuse the definition above. Instead, my working definition is: counter-intuitive performance behavior caused by configurations.

    I am not sure whether my understanding here is correct; please leave comments.

  2. It is not surprising that performance bugs are rare in file systems (specifically, in Bugzilla). When users notice performance degradation, the first thing that comes to mind is that something is wrong with the application they are running, so they tend to report the bug in that application's bug repository rather than the file system's.

  3. It is doubtful whether most users actually change file-system configurations in their deployments (as shown in today's slides). If they rarely do, it may not be possible to find many cases of configuration-related performance bugs in file systems.

  4. Although the paper [ESEM'16] An Empirical Study on Performance Bugs for Highly Configurable Software Systems shows that about 60% of performance bugs are related to configurations, [OOPSLA'14] Statistical Debugging for Real-World Performance Problems shows that only 10 out of 65 performance bugs were reported by comparing performance under different configurations.

    The two results are not actually inconsistent: the first paper studies whether performance bugs are related to configurations, not whether they can be exposed by varying configurations, while the second paper only counts the performance bugs that were reported by comparing performance under different configurations.

    So I have to collect enough cases to show that performance problems can be exposed by configuration variation (the sketch after this list illustrates what such a comparison looks like).
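As a purely illustrative picture of what "exposed by configuration variation" means, the sketch below runs the same workload under a baseline and a variant configuration and flags a large runtime gap as a candidate configuration-related performance problem. The option name, threshold, and workload are assumptions for illustration, not taken from any studied system.

```python
# Minimal sketch (hypothetical names): expose a performance problem by
# varying one configuration value and comparing the same workload's runtime.
import statistics
import time


def run_workload(config: dict) -> float:
    """Run a fixed workload under `config` and return elapsed seconds.

    Placeholder: a real harness would mount the file system with `config`
    and replay a recorded I/O trace instead of sleeping.
    """
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for real work under `config`
    return time.perf_counter() - start


def compare_configs(baseline: dict, variant: dict,
                    repeats: int = 5, threshold: float = 2.0) -> bool:
    """Return True if the variant is at least `threshold`x slower than baseline."""
    base = statistics.median(run_workload(baseline) for _ in range(repeats))
    var = statistics.median(run_workload(variant) for _ in range(repeats))
    print(f"baseline={base:.4f}s variant={var:.4f}s ratio={var / base:.2f}")
    return var / base >= threshold


if __name__ == "__main__":
    # Hypothetical option name; only the comparison methodology matters here.
    compare_configs({"commit_interval": 5}, {"commit_interval": 300})
```

The point is only the methodology: fix the workload, vary one option, and treat a large, reproducible gap as a case worth reporting.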

Plans

  1. According to points 2 and 3 above, I am going to verify whether the 8% of performance patches in the six file systems can be exposed by configuration variation.

    In addition, I am going to look for more cases in the mailing lists to further verify whether points 2 and 3 really hold.

  2. Summarize patterns of the counter-intuitive bugs I have already studied (e.g., giving more resources results in performance degradation; see the thread sketch after this list).

  3. Inspect whether the bugs in Shan Lu's paper (which studies ~200 performance bugs of Ruby-on-Rails applications) can be exposed by configuration variation.
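To make the "more resources can mean worse performance" pattern in plan 2 concrete, here is a minimal, self-contained Python sketch (not taken from any of the studied bugs; all names and numbers are illustrative). It splits a fixed amount of lock-protected work across a configurable number of threads; on CPython, raising the thread count typically does not speed this up and often slows it down, because each added thread only adds GIL and lock contention.

```python
# Minimal sketch: "more resources -> worse performance".
# Fixed total work is divided among N worker threads that all contend on
# one lock; adding threads adds contention instead of throughput.
import threading
import time

TOTAL_INCREMENTS = 200_000


def run_with_threads(num_threads: int) -> float:
    counter = 0
    lock = threading.Lock()

    def worker(n: int) -> None:
        nonlocal counter
        for _ in range(n):
            with lock:  # shared resource: every increment contends here
                counter += 1

    per_thread = TOTAL_INCREMENTS // num_threads
    threads = [threading.Thread(target=worker, args=(per_thread,))
               for _ in range(num_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start


if __name__ == "__main__":
    for n in (1, 4, 16, 64):
        print(f"{n:>3} threads: {run_with_threads(n):.3f}s")
```

The real cases will of course come from file-system configurations rather than a toy counter, but the shape of the pattern is the same: a "bigger" setting buys contention instead of throughput.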
