old-cross-binutils/gdb/testsuite/analyze-racy-logs.py


Improve analysis of racy testcases

This is an initial attempt to introduce some mechanisms to identify racy testcases present in our testsuite. As can be seen in previous discussions, racy tests are really bothersome and cause our BuildBot to pollute the gdb-testers mailing list with hundreds of false-positive messages every month. Hopefully, identifying these racy tests in advance (and automatically) will contribute to reducing the noise traffic to gdb-testers, maybe to the point where we will be able to send the failure messages directly to the authors of the commits.

I spent some time trying to decide the best way to tackle this problem, and decided that there is no silver bullet. Racy tests are tricky and difficult to catch, so the best solution I could find (for now?) is to run our testsuite a number of times in a row, and then compare the results (i.e., the gdb.sum files generated during each run). The more times you run the tests, the more racy tests you are likely to detect (at the expense of waiting longer and longer). You can also run the tests in parallel, which makes things faster (and contributes to catching more racy tests, because your machine will have fewer resources for each test and some of them are likely to fail when this happens). I did some tests on my machine (8-core i7, 16GB RAM), and running the whole GDB testsuite 5 times using -j6 took 23 minutes. Not bad.

In order to run the racy test machinery, you need to specify the RACY_ITER environment variable. You will assign a number to this variable, which represents the number of times you want to run the tests. So, for example, if you want to run the whole testsuite 3 times in parallel (using 2 cores), you will do:

  make check RACY_ITER=3 -j2

It is also possible to use the TESTS variable and specify which tests you want to run:

  make check TESTS='gdb.base/default.exp' RACY_ITER=3 -j2

And so on. The output files will be put in the directory gdb/testsuite/racy_outputs/.

After make invokes the necessary rules to run the tests, it finally runs a Python script that will analyze the resulting gdb.sum files. This Python script will read each file and construct a series of sets based on the results of the tests (one set for FAILs, one for PASSes, one for KFAILs, etc.). It will then do some set operations and come up with a list of unique, sorted testcases that are racy. The algorithm behind this is:

  for state in PASS, FAIL, XFAIL, XPASS...; do
    if a test's state in every sumfile is $state; then
      it is not racy
    else
      it is racy

(The algorithm is actually a bit more complex than that, because it takes into account other things in order to decide whether the test should be ignored or not.) IOW, a test must have the same state in every sumfile.

After processing everything, the script prints the racy tests it could identify on stdout. I am redirecting this to a file named racy.sum.

Something else that I wasn't sure how to deal with was non-unique messages in our testsuite. I decided to do the same thing I do on our BuildBot: include a unique identifier at the end of the message, like:

  gdb.base/xyz.exp: non-unique message
  gdb.base/xyz.exp: non-unique message <<2>>

This means that you will have to be careful about them when you use the racy.sum file.

I ran the script several times here, and it did a good job catching some well-known racy tests. Overall, I am satisfied with this approach and I think it will be helpful to have it upstreamed.

I also intend to extend our BuildBot and create new, specialized builders that will be responsible for detecting the racy tests every X number of days.

2016-03-05  Sergio Durigan Junior  <sergiodj@redhat.com>

	* Makefile.in (DEFAULT_RACY_ITER): New variable.
	(CHECK_TARGET_TMP): Likewise.
	(check-single-racy): New rule.
	(check-parallel-racy): Likewise.
	(TEST_TARGETS): Adjust rule to account for RACY_ITER.
	(do-check-parallel-racy): New rule.
	(check-racy/%.exp): Likewise.
	* README (Racy testcases): New section.
	* analyze-racy-logs.py: New file.
2016-03-06 01:37:11 +00:00
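
For reference, the analysis step can also be run by hand on two or more of the generated gdb.sum files and redirected to racy.sum, as the commit message describes. The exact layout produced under gdb/testsuite/racy_outputs/ is determined by the Makefile rules, so the numbered subdirectories below are only illustrative:

  ./analyze-racy-logs.py racy_outputs/1/gdb.sum racy_outputs/2/gdb.sum > racy.sum
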
#!/usr/bin/env python
# Copyright (C) 2016 Free Software Foundation, Inc.
#
# This file is part of GDB.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.

# This program is used to analyze the test results (i.e., *.sum files)
# generated by GDB's testsuite, and print the testcases that are found
# to be racy.
#
# Racy testcases are testcases which can intermittently FAIL (or PASS)
# when run two or more times consecutively, i.e., tests whose results
# are not deterministic.
#
# This program is invoked when the user runs "make check" and
# specifies the RACY_ITER environment variable.

import sys
import os
import re

# The (global) dictionary that stores the associations between a *.sum
# file and its results.  The data inside it will be stored as:
#
#   files_and_tests = { 'file1.sum' : { 'PASS' : { 'test1', 'test2' ... },
#                                       'FAIL' : { 'test5', 'test6' ... },
#                                       ...
#                                     },
#                       'file2.sum' : { 'PASS' : { 'test1', 'test3' ... },
#                                       ...
#                                     },
#                       ...
#                     }

files_and_tests = dict ()

# The relationships between different states of the same test that
# should be ignored.  For example, if the same test PASSes on one
# run but KFAILs on another, it should not be considered racy,
# because a known-failure is... known.

ignore_relations = { 'PASS' : 'KFAIL' }

# We are interested in lines that start with '.?(PASS|FAIL)'.  In
# other words, we don't process errors (maybe we should).

sum_matcher = re.compile ('^(.?(PASS|FAIL)): (.*)$')
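
# For example, the matcher above accepts sumfile lines such as the
# following (the test names here are illustrative only):
#
#   PASS: gdb.base/foo.exp: some test message
#   KFAIL: gdb.base/bar.exp: another test message (PRMS: gdb/1234)
#
# Group 1 captures the result ('PASS', 'KFAIL', ...), and group 3
# captures the rest of the line, i.e. the test name and its message.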

def parse_sum_line (line, dic):
    """Parse a single LINE from a sumfile, and store the results in the
    dictionary referenced by DIC."""
    global sum_matcher

    line = line.rstrip ()
    m = re.match (sum_matcher, line)
    if m:
        result = m.group (1)
        test_name = m.group (3)
        # Remove tail parentheses.  These are likely to be '(timeout)'
        # and other extra information that will only confuse us.
        test_name = re.sub (r'(\s+)?\(.*$', '', test_name)
        if result not in dic.keys ():
            dic[result] = set ()
        if test_name in dic[result]:
            # If the line is already present in the dictionary, then
            # we include a unique identifier at the end of it, in the
            # form of '<<N>>' (where N is a number >= 2).  This is
            # useful because the GDB testsuite is full of non-unique
            # test messages; however, if you process the racy summary
            # file you will also need to perform this same operation
            # in order to identify the racy test.
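            #
            # For instance (illustrative name only), three identical
            # "PASS: gdb.base/foo.exp: msg" lines end up stored as:
            #
            #   gdb.base/foo.exp: msg
            #   gdb.base/foo.exp: msg <<2>>
            #   gdb.base/foo.exp: msg <<3>>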
            i = 2
            while True:
                nname = test_name + ' <<' + str (i) + '>>'
                if nname not in dic[result]:
                    break
                i += 1
            test_name = nname
        dic[result].add (test_name)

def read_sum_files (files):
    """Read the sumfiles (passed as a list in the FILES variable), and
    process each one, filling the FILES_AND_TESTS global dictionary with
    information about them."""
    global files_and_tests

    for x in files:
        with open (x, 'r') as f:
            files_and_tests[x] = dict ()
            for line in f.readlines ():
                parse_sum_line (line, files_and_tests[x])

def identify_racy_tests ():
    """Identify and print the racy tests.  This function basically works
    on sets, and the idea behind it is simple.  It takes all the sets that
    refer to the same result (for example, all the sets that contain PASS
    tests) and compares them.  If a test is present in all PASS sets, then
    it is not racy.  Otherwise, it is.

    This function does that for all sets (PASS, FAIL, KPASS, KFAIL, etc.),
    and then prints a sorted list (without duplicates) of all the tests
    that were found to be racy."""
    global files_and_tests

    # First, construct two dictionaries that will hold one set of
    # testcases for each state (PASS, FAIL, etc.).
    #
    # Each set in NONRACY_TESTS will contain only the non-racy
    # testcases for that state.  A non-racy testcase is a testcase
    # that has the same state in all test runs.
    #
    # Each set in ALL_TESTS will contain all tests, racy or not, for
    # that state.
    nonracy_tests = dict ()
    all_tests = dict ()
    for f in files_and_tests:
        for state in files_and_tests[f]:
            try:
                nonracy_tests[state] &= files_and_tests[f][state].copy ()
            except KeyError:
                nonracy_tests[state] = files_and_tests[f][state].copy ()
            try:
                all_tests[state] |= files_and_tests[f][state].copy ()
            except KeyError:
                all_tests[state] = files_and_tests[f][state].copy ()
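
    # As a concrete illustration (made-up test names): if run 1 has
    # PASS = { 'A', 'B' } and run 2 has PASS = { 'A' }, then after this
    # loop nonracy_tests['PASS'] is { 'A' } (the intersection) and
    # all_tests['PASS'] is { 'A', 'B' } (the union), so 'B' will later
    # be reported as racy (unless it falls under IGNORE_RELATIONS).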

    # Now, we eliminate the tests that are present in states that need
    # to be ignored.  For example, tests both in the PASS and KFAIL
    # states should not be considered racy.
    ignored_tests = set ()
    for s1, s2 in ignore_relations.iteritems ():
        try:
            ignored_tests |= (all_tests[s1] & all_tests[s2])
        except:
            continue

    racy_tests = set ()
    for f in files_and_tests:
        for state in files_and_tests[f]:
            racy_tests |= files_and_tests[f][state] - nonracy_tests[state]

    racy_tests = racy_tests - ignored_tests

    # Print the header.
    print "\t\t=== gdb racy tests ===\n"

    # Print each test.
    for line in sorted (racy_tests):
        print line

    # Print the summary.
    print "\n"
    print "\t\t=== gdb Summary ===\n"
    print "# of racy tests:\t\t%d" % len (racy_tests)

if __name__ == '__main__':
    if len (sys.argv) < 3:
        # It only makes sense to invoke this program if you pass two
        # or more files to be analyzed.
        sys.exit ("Usage: %s [FILE] [FILE] ..." % sys.argv[0])

    read_sum_files (sys.argv[1:])
    identify_racy_tests ()

    exit (0)