This is the mail archive of the
gdb-patches@sourceware.org
mailing list for the GDB project.
PING [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari directory
- From: "Pierre Muller" <pierre dot muller at ics-cnrs dot unistra dot fr>
- To: <gdb-patches at sourceware dot org>
- Date: Fri, 25 May 2012 10:08:24 +0200
- Subject: PING [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari directory
- References: <001201cd3547$377188b0$a6549a10$@muller@ics-cnrs.unistra.fr>
Nobody reacted to that RFA...
Should I change anything more before including the
ARI scripts into gdb/contrib?
I would really like to start working on this,
but I feel stalled...
Pierre
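For readers skimming the large gdb_ari.sh script quoted below, its checks all follow one pattern: a BEGIN block registers a doc[] message and a category[], and a regex rule calls fail() on offending lines. This is a minimal, self-contained sketch of that pattern; the fail() body is simplified and the input line is made up for illustration:

```shell
printf 'x = bcmp (a, b, n);\n' | awk '
BEGIN {
    # Each check registers its explanation and category up front.
    doc["bcmp"] = "Do not use bcmp(), ISO C 90 implies memcmp()"
    category["bcmp"] = "regression"
}
# Simplified stand-in for the real fail(): print file:line: category: bug: doc.
function fail(bug) {
    print FILENAME ":" FNR ": " category[bug] ": " bug ": " doc[bug]
}
# The check itself: a regex guard that fires on offending source lines.
/(^|[^_[:alnum:]])bcmp[[:space:]]*\(/ {
    fail("bcmp")
}'
```

The real script adds machinery on top of this (per-file skip counts via fix(), error vs. warning categories, comment and string stripping), but each individual check reduces to this shape.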
> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On behalf of Pierre Muller
> Sent: Saturday, May 19, 2012 00:40
> To: gdb-patches@sourceware.org
> Subject: [RFA] Add scripts to generate ARI web pages to gdb/contrib/ari
> directory
>
> Here is a RFA for inclusion of scripts to gdb/contrib/ari.
>
> The only changes since RFC-v2 are:
> 1) directory moved from gdb/ari to gdb/contrib/ari
> 2) create-web-ari-in-src.sh adapted to the new directory
> 3) This script now outputs the location of the generated
> web page (with a different message depending on
> the existence of this file).
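
Change 3) can be sketched in isolation; this standalone snippet mirrors the final message logic from the patch, with an illustrative webdir value rather than the script's real default:

```shell
# Standalone sketch of the new final message (change 3 above): report where
# index.html was generated, or that generation failed.
webdir="/tmp/ari-demo"
mkdir -p "${webdir}" && touch "${webdir}/index.html"
if [ -f "${webdir}/index.html" ] ; then
    echo "ARI output can be viewed in file \"${webdir}/index.html\""
else
    echo "ARI script failed to generate file \"${webdir}/index.html\""
fi
```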
>
>
>
> Pierre Muller
> GDB pascal language maintainer
>
>
> 2012-05-19 Pierre Muller <muller@ics.u-strasbg.fr>
>
> * contrib/ari/create-web-ari-in-src.sh: New file.
> * contrib/ari/gdb_ari.sh: New file.
> * contrib/ari/gdb_find.sh: New file.
> * contrib/ari/update-web-ari.sh: New file.
>
> Index: contrib/ari/create-web-ari-in-src.sh
> ===================================================================
> RCS file: contrib/ari/create-web-ari-in-src.sh
> diff -N contrib/ari/create-web-ari-in-src.sh
> --- /dev/null 1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/create-web-ari-in-src.sh 18 May 2012 22:31:42 -0000
> @@ -0,0 +1,68 @@
> +#! /bin/sh
> +
> +# GDB script to create web ARI page directly from within gdb/contrib/ari directory.
> +#
> +# Copyright (C) 2012 Free Software Foundation, Inc.
> +#
> +# This file is part of GDB.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +set -x
> +
> +# Determine directory of current script.
> +scriptpath=`dirname $0`
> +# If "scriptpath" is a relative path, then convert it to absolute.
> +if [ "`echo ${scriptpath} | cut -b1`" != '/' ] ; then
> + scriptpath="`pwd`/${scriptpath}"
> +fi
> +
> +# update-web-ari.sh script wants four parameters
> +# 1: directory of checkout src or gdb-RELEASE for release sources.
> +# 2: a temp directory.
> +# 3: a directory for generated web page.
> +# 4: The name of the current package, must be gdb here.
> +# Here we provide default values for these 4 parameters
> +
> +# srcdir parameter
> +if [ -z "${srcdir}" ] ; then
> + srcdir=${scriptpath}/../../..
> +fi
> +
> +# Determine location of a temporary directory to be used by
> +# update-web-ari.sh script.
> +if [ -z "${tempdir}" ] ; then
> + if [ ! -z "$TMP" ] ; then
> + tempdir=$TMP/create-ari
> + elif [ ! -z "$TEMP" ] ; then
> + tempdir=$TEMP/create-ari
> + else
> + tempdir=/tmp/create-ari
> + fi
> +fi
> +
> +# Default location of the generated index.html web page.
> +if [ -z "${webdir}" ] ; then
> + webdir=~/htdocs/www/local/ari
> +fi
> +
> +# Launch update-web-ari.sh in same directory as current script.
> +${scriptpath}/update-web-ari.sh ${srcdir} ${tempdir} ${webdir} gdb
> +
> +if [ -f "${webdir}/index.html" ] ; then
> + echo "ARI output can be viewed in file \"${webdir}/index.html\""
> +else
> + echo "ARI script failed to generate file \"${webdir}/index.html\""
> +fi
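
The relative-to-absolute path conversion near the top of the script above can be exercised on its own; this standalone copy of that logic uses an example relative path (the `contrib/ari` value is just for illustration):

```shell
# Standalone copy of the conversion logic from create-web-ari-in-src.sh:
# if the first character of the path is not '/', prefix the current directory.
scriptpath="contrib/ari"
if [ "`echo ${scriptpath} | cut -b1`" != '/' ] ; then
    scriptpath="`pwd`/${scriptpath}"
fi
echo "${scriptpath}"
```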
> +
> Index: contrib/ari/gdb_ari.sh
> ===================================================================
> RCS file: contrib/ari/gdb_ari.sh
> diff -N contrib/ari/gdb_ari.sh
> --- /dev/null 1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/gdb_ari.sh 18 May 2012 22:31:42 -0000
> @@ -0,0 +1,1347 @@
> +#!/bin/sh
> +
> +# GDB script to create a list of problems using awk.
> +#
> +# Copyright (C) 2002-2012 Free Software Foundation, Inc.
> +#
> +# This file is part of GDB.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +# Make certain that the script is not running in an internationalized
> +# environment.
> +
> +LANG=c ; export LANG
> +LC_ALL=c ; export LC_ALL
> +
> +# Permanent checks take the form:
> +
> +# Do not use XXXX, ISO C 90 implies YYYY
> +# Do not use XXXX, instead use YYYY''.
> +
> +# and should never be removed.
> +
> +# Temporary checks take the form:
> +
> +# Replace XXXX with YYYY
> +
> +# and once they reach zero, can be eliminated.
> +
> +# FIXME: It should be possible to override this on the command line.
> +error="regression"
> +warning="regression"
> +ari="regression eol code comment deprecated legacy obsolete gettext"
> +all="regression eol code comment deprecated legacy obsolete gettext
> deprecate internal gdbarch macro"
> +print_doc=0
> +print_idx=0
> +
> +usage ()
> +{
> + cat <<EOF 1>&2
> +Error: $1
> +
> +Usage:
> + $0 --print-doc --print-idx -Wall -Werror -W<category> <file> ...
> +Options:
> + --print-doc Print a list of all potential problems, then exit.
> + --print-idx Include the problem's IDX (index or key) in every message.
> + --src=file Write source lines to file.
> + -Werror Treat all problems as errors.
> + -Wall Report all problems.
> + -Wari Report problems that should be fixed in new code.
> + -W<category> Report problems in the specified category. Valid categories
> + are: ${all}
> +EOF
> + exit 1
> +}
> +
> +
> +# Parse the various options
> +Woptions=
> +srclines=""
> +while test $# -gt 0
> +do
> + case "$1" in
> + -Wall ) Woptions="${all}" ;;
> + -Wari ) Woptions="${ari}" ;;
> + -Werror ) Werror=1 ;;
> + -W* ) Woptions="${Woptions} `echo x$1 | sed -e 's/x-W//'`" ;;
> + --print-doc ) print_doc=1 ;;
> + --print-idx ) print_idx=1 ;;
> + --src=* ) srclines="`echo $1 | sed -e 's/--src=/srclines=\"/'`\"" ;;
> + -- ) shift ; break ;;
> + - ) break ;;
> + -* ) usage "$1: unknown option" ;;
> + * ) break ;;
> + esac
> + shift
> +done
> +if test -n "$Woptions" ; then
> + warning="$Woptions"
> + error=
> +fi
> +
> +
> +# -Werror implies treating all warnings as errors.
> +if test -n "${Werror}" ; then
> + error="${error} ${warning}"
> +fi
> +
> +
> +# Validate all errors and warnings.
> +for w in ${warning} ${error}
> +do
> + case " ${all} " in
> + *" ${w} "* ) ;;
> + * ) usage "Unknown option -W${w}" ;;
> + esac
> +done
> +
> +
> +# make certain that there is at least one file.
> +if test $# -eq 0 -a ${print_doc} = 0
> +then
> + usage "Missing file."
> +fi
> +
> +
> +# Convert the errors/warnings into corresponding array entries.
> +for a in ${all}
> +do
> + aris="${aris} ari_${a} = \"${a}\";"
> +done
> +for w in ${warning}
> +do
> + warnings="${warnings} warning[ari_${w}] = 1;"
> +done
> +for e in ${error}
> +do
> + errors="${errors} error[ari_${e}] = 1;"
> +done
> +
> +awk -- '
> +BEGIN {
> + # NOTE, for a per-file begin use "FNR == 1".
> + '"${aris}"'
> + '"${errors}"'
> + '"${warnings}"'
> + '"${srclines}"'
> + print_doc = '$print_doc'
> + print_idx = '$print_idx'
> + PWD = "'`pwd`'"
> +}
> +
> +# Print the error message for BUG. Append SUPPLEMENT if non-empty.
> +function print_bug(file,line,prefix,category,bug,doc,supplement, suffix,idx) {
> + if (print_idx) {
> + idx = bug ": "
> + } else {
> + idx = ""
> + }
> + if (supplement) {
> + suffix = " (" supplement ")"
> + } else {
> + suffix = ""
> + }
> + # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> + print file ":" line ": " prefix category ": " idx doc suffix
> + if (srclines != "") {
> + print file ":" line ":" $0 >> srclines
> + }
> +}
> +
> +function fix(bug,file,count) {
> + skip[bug, file] = count
> + skipped[bug, file] = 0
> +}
> +
> +function fail(bug,supplement) {
> + if (doc[bug] == "") {
> +    print_bug("", 0, "internal: ", "internal", "internal", "Missing doc for bug " bug)
> + exit
> + }
> + if (category[bug] == "") {
> +    print_bug("", 0, "internal: ", "internal", "internal", "Missing category for bug " bug)
> + exit
> + }
> +
> + if (ARI_OK == bug) {
> + return
> + }
> + # Trim the filename down to just DIRECTORY/FILE so that it can be
> + # robustly used by the FIX code.
> +
> + if (FILENAME ~ /^\//) {
> + canonicalname = FILENAME
> + } else {
> + canonicalname = PWD "/" FILENAME
> + }
> +  shortname = gensub (/^.*\/([^\\]*\/[^\\]*)$/, "\\1", 1, canonicalname)
> +
> + skipped[bug, shortname]++
> + if (skip[bug, shortname] >= skipped[bug, shortname]) {
> +    # print FILENAME, FNR, skip[bug, FILENAME], skipped[bug, FILENAME], bug
> + # Do nothing
> + } else if (error[category[bug]]) {
> + # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +    print_bug(FILENAME, FNR, "", category[bug], bug, doc[bug], supplement)
> + } else if (warning[category[bug]]) {
> + # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +    print_bug(FILENAME, FNR, "warning: ", category[bug], bug, doc[bug], supplement)
> + }
> +}
> +
> +FNR == 1 {
> + seen[FILENAME] = 1
> + if (match(FILENAME, "\\.[ly]$")) {
> + # FILENAME is a lex or yacc source
> + is_yacc_or_lex = 1
> + }
> + else {
> + is_yacc_or_lex = 0
> + }
> +}
> +END {
> + if (print_idx) {
> + idx = bug ": "
> + } else {
> + idx = ""
> + }
> + # Did we do only a partial skip?
> + for (bug_n_file in skip) {
> + split (bug_n_file, a, SUBSEP)
> + bug = a[1]
> + file = a[2]
> + if (seen[file] && (skipped[bug_n_file] < skip[bug_n_file])) {
> + # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> + b = file " missing " bug
> +      print_bug(file, 0, "", "internal", file " missing " bug, "Expecting " skip[bug_n_file] " occurrences of bug " bug " in file " file ", only found " skipped[bug_n_file])
> + }
> + }
> +}
> +
> +
> +# Skip OBSOLETE lines
> +/(^|[^_[:alnum:]])OBSOLETE([^_[:alnum:]]|$)/ { next; }
> +
> +# Skip ARI lines
> +
> +BEGIN {
> + ARI_OK = ""
> +}
> +
> +/\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
> +  ARI_OK = gensub(/^.*\/\* ARI:[[:space:]]*(.*[^[:space:]])[[:space:]]*\*\/.*$/, "\\1", 1, $0)
> + # print "ARI line found \"" $0 "\""
> + # print "ARI_OK \"" ARI_OK "\""
> +}
> +! /\/\* ARI:[[:space:]]*(.*)[[:space:]]*\*\// {
> + ARI_OK = ""
> +}
> +
> +
> +# Things in comments
> +
> +BEGIN { doc["GNU/Linux"] = "\
> +Do not use `Linux'\'', instead use `Linux kernel'\'' or `GNU/Linux system'\'';\
> + comments should clearly differentiate between the two (this test assumes that\
> + word `Linux'\'' appears on the same line as the word `GNU'\'' or `kernel'\''\
> + or a kernel version)"
> + category["GNU/Linux"] = ari_comment
> +}
> +/(^|[^_[:alnum:]])Linux([^_[:alnum:]]|$)/ \
> +&& !/(^|[^_[:alnum:]])Linux\[sic\]([^_[:alnum:]]|$)/ \
> +&& !/(^|[^_[:alnum:]])GNU\/Linux([^_[:alnum:]]|$)/ \
> +&& !/(^|[^_[:alnum:]])Linux kernel([^_[:alnum:]]|$)/ \
> +&& !/(^|[^_[:alnum:]])Linux [[:digit:]]\.[[:digit:]]+/ {
> + fail("GNU/Linux")
> +}
> +
> +BEGIN { doc["ARGSUSED"] = "\
> +Do not use ARGSUSED, unnecessary"
> + category["ARGSUSED"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])ARGSUSED([^_[:alnum:]]|$)/ {
> + fail("ARGSUSED")
> +}
> +
> +
> +# SNIP - Strip out comments - SNIP
> +
> +FNR == 1 {
> + comment_p = 0
> +}
> +comment_p && /\*\// { gsub (/^([^\*]|\*+[^\/\*])*\*+\//, " "); comment_p = 0; }
> +comment_p { next; }
> +!comment_p { gsub (/\/\*([^\*]|\*+[^\/\*])*\*+\//, " "); }
> +!comment_p && /(^|[^"])\/\*/ { gsub (/\/\*.*$/, " "); comment_p = 1; }
> +
> +
> +BEGIN { doc["_ markup"] = "\
> +All messages should be marked up with _."
> + category["_ markup"] = ari_gettext
> +}
> +/^[^"]*[[:space:]](warning|error|error_no_arg|query|perror_with_name)[[:space:]]*\([^_\(a-z]/ {
> + if (! /\("%s"/) {
> + fail("_ markup")
> + }
> +}
> +
> +BEGIN { doc["trailing new line"] = "\
> +A message should not have a trailing new line"
> + category["trailing new line"] = ari_gettext
> +}
> +/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(_\(".*\\n"\)[\),]/ {
> + fail("trailing new line")
> +}
> +
> +# Include files for which GDB has a custom version.
> +
> +BEGIN { doc["assert.h"] = "\
> +Do not include assert.h, instead include \"gdb_assert.h\"";
> + category["assert.h"] = ari_regression
> + fix("assert.h", "gdb/gdb_assert.h", 0) # it does not use it
> +}
> +/^#[[:space:]]*include[[:space:]]+.assert\.h./ {
> + fail("assert.h")
> +}
> +
> +BEGIN { doc["dirent.h"] = "\
> +Do not include dirent.h, instead include gdb_dirent.h"
> + category["dirent.h"] = ari_regression
> +}
> +/^#[[:space:]]*include[[:space:]]*.dirent\.h./ {
> + fail("dirent.h")
> +}
> +
> +BEGIN { doc["regex.h"] = "\
> +Do not include regex.h, instead include gdb_regex.h"
> + category["regex.h"] = ari_regression
> + fix("regex.h", "gdb/gdb_regex.h", 1)
> +}
> +/^#[[:space:]]*include[[:space:]]*.regex\.h./ {
> + fail("regex.h")
> +}
> +
> +BEGIN { doc["xregex.h"] = "\
> +Do not include xregex.h, instead include gdb_regex.h"
> + category["xregex.h"] = ari_regression
> + fix("xregex.h", "gdb/gdb_regex.h", 1)
> +}
> +/^#[[:space:]]*include[[:space:]]*.xregex\.h./ {
> + fail("xregex.h")
> +}
> +
> +BEGIN { doc["gnu-regex.h"] = "\
> +Do not include gnu-regex.h, instead include gdb_regex.h"
> + category["gnu-regex.h"] = ari_regression
> +}
> +/^#[[:space:]]*include[[:space:]]*.gnu-regex\.h./ {
> + fail("gnu regex.h")
> +}
> +
> +BEGIN { doc["stat.h"] = "\
> +Do not include stat.h or sys/stat.h, instead include gdb_stat.h"
> + category["stat.h"] = ari_regression
> + fix("stat.h", "gdb/gdb_stat.h", 1)
> +}
> +/^#[[:space:]]*include[[:space:]]*.stat\.h./ \
> +|| /^#[[:space:]]*include[[:space:]]*.sys\/stat\.h./ {
> + fail("stat.h")
> +}
> +
> +BEGIN { doc["wait.h"] = "\
> +Do not include wait.h or sys/wait.h, instead include gdb_wait.h"
> + fix("wait.h", "gdb/gdb_wait.h", 2);
> + category["wait.h"] = ari_regression
> +}
> +/^#[[:space:]]*include[[:space:]]*.wait\.h./ \
> +|| /^#[[:space:]]*include[[:space:]]*.sys\/wait\.h./ {
> + fail("wait.h")
> +}
> +
> +BEGIN { doc["vfork.h"] = "\
> +Do not include vfork.h, instead include gdb_vfork.h"
> + fix("vfork.h", "gdb/gdb_vfork.h", 1);
> + category["vfork.h"] = ari_regression
> +}
> +/^#[[:space:]]*include[[:space:]]*.vfork\.h./ {
> + fail("vfork.h")
> +}
> +
> +BEGIN { doc["error not internal-warning"] = "\
> +Do not use error(\"internal-warning\"), instead use internal_warning"
> + category["error not internal-warning"] = ari_regression
> +}
> +/error.*\"[Ii]nternal.warning/ {
> + fail("error not internal-warning")
> +}
> +
> +BEGIN { doc["%p"] = "\
> +Do not use printf(\"%p\"), instead use printf(\"%s\",paddr()) to dump a \
> +target address, or host_address_to_string() for a host address"
> + category["%p"] = ari_code
> +}
> +/%p/ && !/%prec/ {
> + fail("%p")
> +}
> +
> +BEGIN { doc["%ll"] = "\
> +Do not use printf(\"%ll\"), instead use printf(\"%s\",phex()) to dump a \
> +`long long'\'' value"
> + category["%ll"] = ari_code
> +}
> +# Allow %ll in scanf
> +/%[0-9]*ll/ && !/scanf \(.*%[0-9]*ll/ {
> + fail("%ll")
> +}
> +
> +
> +# SNIP - Strip out strings - SNIP
> +
> +# Test on top.c, scm-valprint.c, remote-rdi.c, ada-lang.c
> +FNR == 1 {
> + string_p = 0
> + trace_string = 0
> +}
> +# Strip escaped characters.
> +{ gsub(/\\./, "."); }
> +# Strip quoted quotes.
> +{ gsub(/'\''.'\''/, "'\''.'\''"); }
> +# End of multi-line string
> +string_p && /\"/ {
> + if (trace_string) print "EOS:" FNR, $0;
> + gsub (/^[^\"]*\"/, "'\''");
> + string_p = 0;
> +}
> +# Middle of multi-line string, discard line.
> +string_p {
> + if (trace_string) print "MOS:" FNR, $0;
> + $0 = ""
> +}
> +# Strip complete strings from the middle of the line
> +!string_p && /\"[^\"]*\"/ {
> + if (trace_string) print "COS:" FNR, $0;
> + gsub (/\"[^\"]*\"/, "'\''");
> +}
> +# Start of multi-line string
> +BEGIN { doc["multi-line string"] = "\
> +Multi-line string must have the newline escaped"
> + category["multi-line string"] = ari_regression
> +}
> +!string_p && /\"/ {
> + if (trace_string) print "SOS:" FNR, $0;
> + if (/[^\\]$/) {
> + fail("multi-line string")
> + }
> + gsub (/\"[^\"]*$/, "'\''");
> + string_p = 1;
> +}
> +# { print }
> +
> +# Multi-line string
> +string_p &&
> +
> +# Accumulate continuation lines
> +FNR == 1 {
> + cont_p = 0
> +}
> +!cont_p { full_line = ""; }
> +/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
> +cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
> +
> +
> +# GDB uses ISO C 90. Check for any non pure ISO C 90 code
> +
> +BEGIN { doc["PARAMS"] = "\
> +Do not use PARAMS(), ISO C 90 implies prototypes"
> + category["PARAMS"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])PARAMS([^_[:alnum:]]|$)/ {
> + fail("PARAMS")
> +}
> +
> +BEGIN { doc["__func__"] = "\
> +Do not use __func__, ISO C 90 does not support this macro"
> + category["__func__"] = ari_regression
> + fix("__func__", "gdb/gdb_assert.h", 1)
> +}
> +/(^|[^_[:alnum:]])__func__([^_[:alnum:]]|$)/ {
> + fail("__func__")
> +}
> +
> +BEGIN { doc["__FUNCTION__"] = "\
> +Do not use __FUNCTION__, ISO C 90 does not support this macro"
> + category["__FUNCTION__"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])__FUNCTION__([^_[:alnum:]]|$)/ {
> + fail("__FUNCTION__")
> +}
> +
> +BEGIN { doc["__CYGWIN32__"] = "\
> +Do not use __CYGWIN32__, instead use __CYGWIN__ or, better, an explicit \
> +autoconf test"
> + category["__CYGWIN32__"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])__CYGWIN32__([^_[:alnum:]]|$)/ {
> + fail("__CYGWIN32__")
> +}
> +
> +BEGIN { doc["PTR"] = "\
> +Do not use PTR, ISO C 90 implies `void *'\''"
> + category["PTR"] = ari_regression
> + #fix("PTR", "gdb/utils.c", 6)
> +}
> +/(^|[^_[:alnum:]])PTR([^_[:alnum:]]|$)/ {
> + fail("PTR")
> +}
> +
> +BEGIN { doc["UCASE function"] = "\
> +Function name is uppercase."
> + category["UCASE function"] = ari_code
> + possible_UCASE = 0
> + UCASE_full_line = ""
> +}
> +(possible_UCASE) {
> + if (ARI_OK == "UCASE function") {
> + possible_UCASE = 0
> + }
> + # Closing brace found?
> + else if (UCASE_full_line ~ \
> + /^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\).*$/) {
> + if ((UCASE_full_line ~ \
> + /^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
> + && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
> + store_FNR = FNR
> + FNR = possible_FNR
> + store_0 = $0;
> + $0 = UCASE_full_line;
> + fail("UCASE function")
> + FNR = store_FNR
> + $0 = store_0;
> + }
> + possible_UCASE = 0
> + UCASE_full_line = ""
> + } else {
> + UCASE_full_line = UCASE_full_line $0;
> + }
> +}
> +/^[A-Z][[:alnum:]_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
> + possible_UCASE = 1
> + if (ARI_OK == "UCASE function") {
> + possible_UCASE = 0
> + }
> + possible_FNR = FNR
> + UCASE_full_line = $0
> +}
> +
> +
> +BEGIN { doc["editCase function"] = "\
> +Function name starts lower case but has uppercased letters."
> + category["editCase function"] = ari_code
> + possible_editCase = 0
> + editCase_full_line = ""
> +}
> +(possible_editCase) {
> +  if (ARI_OK == "editCase function") {
> + possible_editCase = 0
> + }
> + # Closing brace found?
> + else if (editCase_full_line ~ \
> +/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\).*$/) {
> + if ((editCase_full_line ~ \
> +/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*\)[[:space:]]*$/) \
> + && ($0 ~ /^\{/) && (is_yacc_or_lex == 0)) {
> + store_FNR = FNR
> + FNR = possible_FNR
> + store_0 = $0;
> + $0 = editCase_full_line;
> + fail("editCase function")
> + FNR = store_FNR
> + $0 = store_0;
> + }
> + possible_editCase = 0
> + editCase_full_line = ""
> + } else {
> + editCase_full_line = editCase_full_line $0;
> + }
> +}
> +/^[a-z][a-z0-9_]*[A-Z][a-z0-9A-Z_]*[[:space:]]*\([^()]*(|\))[[:space:]]*$/ {
> + possible_editCase = 1
> + if (ARI_OK == "editCase function") {
> + possible_editCase = 0
> + }
> + possible_FNR = FNR
> + editCase_full_line = $0
> +}
> +
> +# Only function implementation should be on first column
> +BEGIN { doc["function call in first column"] = "\
> +Function name in first column should be restricted to function implementation"
> + category["function call in first column"] = ari_code
> +}
> +/^[a-z][a-z0-9_]*[[:space:]]*\((|[^*][^()]*)\)[[:space:]]*[^ \t]+/ {
> + fail("function call in first column")
> +}
> +
> +
> +# Functions without any parameter should have (void)
> +# after their name not simply ().
> +BEGIN { doc["no parameter function"] = "\
> +Function having no parameter should be declared with funcname (void)."
> + category["no parameter function"] = ari_code
> +}
> +/^[a-zA-Z][a-z0-9A-Z_]*[[:space:]]*\(\)/ {
> + fail("no parameter function")
> +}
> +
> +BEGIN { doc["hash"] = "\
> +Do not use ` #...'\'', instead use `#...'\'' (some compilers only correctly \
> +parse a C preprocessor directive when `#'\'' is the first character on \
> +the line)"
> + category["hash"] = ari_regression
> +}
> +/^[[:space:]]+#/ {
> + fail("hash")
> +}
> +
> +BEGIN { doc["OP eol"] = "\
> +Do not use &&, or || at the end of a line"
> + category["OP eol"] = ari_code
> +}
> +/(\|\||\&\&|==|!=)[[:space:]]*$/ {
> + fail("OP eol")
> +}
> +
> +BEGIN { doc["strerror"] = "\
> +Do not use strerror(), instead use safe_strerror()"
> + category["strerror"] = ari_regression
> + fix("strerror", "gdb/gdb_string.h", 1)
> + fix("strerror", "gdb/mingw-hdep.c", 1)
> + fix("strerror", "gdb/posix-hdep.c", 1)
> +}
> +/(^|[^_[:alnum:]])strerror[[:space:]]*\(/ {
> + fail("strerror")
> +}
> +
> +BEGIN { doc["long long"] = "\
> +Do not use `long long'\'', instead use LONGEST"
> + category["long long"] = ari_code
> + # defs.h needs two such patterns for LONGEST and ULONGEST definitions
> + fix("long long", "gdb/defs.h", 2)
> +}
> +/(^|[^_[:alnum:]])long[[:space:]]+long([^_[:alnum:]]|$)/ {
> + fail("long long")
> +}
> +
> +BEGIN { doc["ATTRIBUTE_UNUSED"] = "\
> +Do not use ATTRIBUTE_UNUSED, do not bother (GDB is compiled with -Werror and, \
> +consequently, is not able to tolerate false warnings.  Since -Wunused-param \
> +produces such warnings, neither that warning flag nor ATTRIBUTE_UNUSED \
> +are used by GDB)"
> + category["ATTRIBUTE_UNUSED"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])ATTRIBUTE_UNUSED([^_[:alnum:]]|$)/ {
> + fail("ATTRIBUTE_UNUSED")
> +}
> +
> +BEGIN { doc["ATTR_FORMAT"] = "\
> +Do not use ATTR_FORMAT, use ATTRIBUTE_PRINTF instead"
> + category["ATTR_FORMAT"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])ATTR_FORMAT([^_[:alnum:]]|$)/ {
> + fail("ATTR_FORMAT")
> +}
> +
> +BEGIN { doc["ATTR_NORETURN"] = "\
> +Do not use ATTR_NORETURN, use ATTRIBUTE_NORETURN instead"
> + category["ATTR_NORETURN"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])ATTR_NORETURN([^_[:alnum:]]|$)/ {
> + fail("ATTR_NORETURN")
> +}
> +
> +BEGIN { doc["NORETURN"] = "\
> +Do not use NORETURN, use ATTRIBUTE_NORETURN instead"
> + category["NORETURN"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])NORETURN([^_[:alnum:]]|$)/ {
> + fail("NORETURN")
> +}
> +
> +
> +# General problems
> +
> +BEGIN { doc["multiple messages"] = "\
> +Do not use multiple calls to warning or error, instead use a single call"
> + category["multiple messages"] = ari_gettext
> +}
> +FNR == 1 {
> + warning_fnr = -1
> +}
> +/(^|[^_[:alnum:]])(warning|error)[[:space:]]*\(/ {
> + if (FNR == warning_fnr + 1) {
> + fail("multiple messages")
> + } else {
> + warning_fnr = FNR
> + }
> +}
> +
> +# Commented out, but left inside sources, just in case.
> +# BEGIN { doc["inline"] = "\
> +# Do not use the inline attribute; \
> +# since the compiler generally ignores this, better algorithm selection \
> +# is needed to improve performance"
> +# category["inline"] = ari_code
> +# }
> +# /(^|[^_[:alnum:]])inline([^_[:alnum:]]|$)/ {
> +# fail("inline")
> +# }
> +
> +# This test is obsolete as this type
> +# has been deprecated and finally suppressed from GDB sources
> +#BEGIN { doc["obj_private"] = "\
> +#Replace obj_private with objfile_data"
> +# category["obj_private"] = ari_obsolete
> +#}
> +#/(^|[^_[:alnum:]])obj_private([^_[:alnum:]]|$)/ {
> +# fail("obj_private")
> +#}
> +
> +BEGIN { doc["abort"] = "\
> +Do not use abort, instead use internal_error; GDB should never abort"
> + category["abort"] = ari_regression
> + fix("abort", "gdb/utils.c", 3)
> +}
> +/(^|[^_[:alnum:]])abort[[:space:]]*\(/ {
> + fail("abort")
> +}
> +
> +BEGIN { doc["basename"] = "\
> +Do not use basename, instead use lbasename"
> + category["basename"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])basename[[:space:]]*\(/ {
> + fail("basename")
> +}
> +
> +BEGIN { doc["assert"] = "\
> +Do not use assert, instead use gdb_assert or internal_error; assert \
> +calls abort and GDB should never call abort"
> + category["assert"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])assert[[:space:]]*\(/ {
> + fail("assert")
> +}
> +
> +BEGIN { doc["TARGET_HAS_HARDWARE_WATCHPOINTS"] = "\
> +Replace TARGET_HAS_HARDWARE_WATCHPOINTS with nothing, not needed"
> + category["TARGET_HAS_HARDWARE_WATCHPOINTS"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])TARGET_HAS_HARDWARE_WATCHPOINTS([^_[:alnum:]]|$)/ {
> + fail("TARGET_HAS_HARDWARE_WATCHPOINTS")
> +}
> +
> +BEGIN { doc["ADD_SHARED_SYMBOL_FILES"] = "\
> +Replace ADD_SHARED_SYMBOL_FILES with nothing, not needed?"
> + category["ADD_SHARED_SYMBOL_FILES"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])ADD_SHARED_SYMBOL_FILES([^_[:alnum:]]|$)/ {
> + fail("ADD_SHARED_SYMBOL_FILES")
> +}
> +
> +BEGIN { doc["SOLIB_ADD"] = "\
> +Replace SOLIB_ADD with nothing, not needed?"
> + category["SOLIB_ADD"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])SOLIB_ADD([^_[:alnum:]]|$)/ {
> + fail("SOLIB_ADD")
> +}
> +
> +BEGIN { doc["SOLIB_CREATE_INFERIOR_HOOK"] = "\
> +Replace SOLIB_CREATE_INFERIOR_HOOK with nothing, not needed?"
> + category["SOLIB_CREATE_INFERIOR_HOOK"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])SOLIB_CREATE_INFERIOR_HOOK([^_[:alnum:]]|$)/ {
> + fail("SOLIB_CREATE_INFERIOR_HOOK")
> +}
> +
> +BEGIN { doc["SOLIB_LOADED_LIBRARY_PATHNAME"] = "\
> +Replace SOLIB_LOADED_LIBRARY_PATHNAME with nothing, not needed?"
> + category["SOLIB_LOADED_LIBRARY_PATHNAME"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])SOLIB_LOADED_LIBRARY_PATHNAME([^_[:alnum:]]|$)/ {
> + fail("SOLIB_LOADED_LIBRARY_PATHNAME")
> +}
> +
> +BEGIN { doc["REGISTER_U_ADDR"] = "\
> +Replace REGISTER_U_ADDR with nothing, not needed?"
> + category["REGISTER_U_ADDR"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])REGISTER_U_ADDR([^_[:alnum:]]|$)/ {
> + fail("REGISTER_U_ADDR")
> +}
> +
> +BEGIN { doc["PROCESS_LINENUMBER_HOOK"] = "\
> +Replace PROCESS_LINENUMBER_HOOK with nothing, not needed?"
> + category["PROCESS_LINENUMBER_HOOK"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])PROCESS_LINENUMBER_HOOK([^_[:alnum:]]|$)/ {
> + fail("PROCESS_LINENUMBER_HOOK")
> +}
> +
> +BEGIN { doc["PC_SOLIB"] = "\
> +Replace PC_SOLIB with nothing, not needed?"
> + category["PC_SOLIB"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])PC_SOLIB([^_[:alnum:]]|$)/ {
> + fail("PC_SOLIB")
> +}
> +
> +BEGIN { doc["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = "\
> +Replace IN_SOLIB_DYNSYM_RESOLVE_CODE with nothing, not needed?"
> + category["IN_SOLIB_DYNSYM_RESOLVE_CODE"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])IN_SOLIB_DYNSYM_RESOLVE_CODE([^_[:alnum:]]|$)/ {
> + fail("IN_SOLIB_DYNSYM_RESOLVE_CODE")
> +}
> +
> +BEGIN { doc["GCC_COMPILED_FLAG_SYMBOL"] = "\
> +Replace GCC_COMPILED_FLAG_SYMBOL with nothing, not needed?"
> + category["GCC_COMPILED_FLAG_SYMBOL"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])GCC_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
> + fail("GCC_COMPILED_FLAG_SYMBOL")
> +}
> +
> +BEGIN { doc["GCC2_COMPILED_FLAG_SYMBOL"] = "\
> +Replace GCC2_COMPILED_FLAG_SYMBOL with nothing, not needed?"
> + category["GCC2_COMPILED_FLAG_SYMBOL"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])GCC2_COMPILED_FLAG_SYMBOL([^_[:alnum:]]|$)/ {
> + fail("GCC2_COMPILED_FLAG_SYMBOL")
> +}
> +
> +BEGIN { doc["FUNCTION_EPILOGUE_SIZE"] = "\
> +Replace FUNCTION_EPILOGUE_SIZE with nothing, not needed?"
> + category["FUNCTION_EPILOGUE_SIZE"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])FUNCTION_EPILOGUE_SIZE([^_[:alnum:]]|$)/ {
> + fail("FUNCTION_EPILOGUE_SIZE")
> +}
> +
> +BEGIN { doc["HAVE_VFORK"] = "\
> +Do not use HAVE_VFORK, instead include \"gdb_vfork.h\" and call vfork() \
> +unconditionally"
> + category["HAVE_VFORK"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])HAVE_VFORK([^_[:alnum:]]|$)/ {
> + fail("HAVE_VFORK")
> +}
> +
> +BEGIN { doc["bcmp"] = "\
> +Do not use bcmp(), ISO C 90 implies memcmp()"
> + category["bcmp"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])bcmp[[:space:]]*\(/ {
> + fail("bcmp")
> +}
> +
> +BEGIN { doc["setlinebuf"] = "\
> +Do not use setlinebuf(), ISO C 90 implies setvbuf()"
> + category["setlinebuf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])setlinebuf[[:space:]]*\(/ {
> + fail("setlinebuf")
> +}
> +
> +BEGIN { doc["bcopy"] = "\
> +Do not use bcopy(), ISO C 90 implies memcpy() and memmove()"
> + category["bcopy"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])bcopy[[:space:]]*\(/ {
> + fail("bcopy")
> +}
> +
> +BEGIN { doc["get_frame_base"] = "\
> +Replace get_frame_base with get_frame_id, get_frame_base_address, \
> +get_frame_locals_address, or get_frame_args_address."
> + category["get_frame_base"] = ari_obsolete
> +}
> +/(^|[^_[:alnum:]])get_frame_base([^_[:alnum:]]|$)/ {
> + fail("get_frame_base")
> +}
> +
> +BEGIN { doc["floatformat_to_double"] = "\
> +Do not use floatformat_to_double() from libiberty, \
> +instead use floatformat_to_doublest()"
> + fix("floatformat_to_double", "gdb/doublest.c", 1)
> + category["floatformat_to_double"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])floatformat_to_double[[:space:]]*\(/ {
> + fail("floatformat_to_double")
> +}
> +
> +BEGIN { doc["floatformat_from_double"] = "\
> +Do not use floatformat_from_double() from libiberty, \
> +instead use floatformat_from_doublest()"
> + category["floatformat_from_double"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])floatformat_from_double[[:space:]]*\(/ {
> + fail("floatformat_from_double")
> +}
> +
> +BEGIN { doc["BIG_ENDIAN"] = "\
> +Do not use BIG_ENDIAN, instead use BFD_ENDIAN_BIG"
> + category["BIG_ENDIAN"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])BIG_ENDIAN([^_[:alnum:]]|$)/ {
> + fail("BIG_ENDIAN")
> +}
> +
> +BEGIN { doc["LITTLE_ENDIAN"] = "\
> +Do not use LITTLE_ENDIAN, instead use BFD_ENDIAN_LITTLE";
> + category["LITTLE_ENDIAN"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])LITTLE_ENDIAN([^_[:alnum:]]|$)/ {
> + fail("LITTLE_ENDIAN")
> +}
> +
> +BEGIN { doc["sec_ptr"] = "\
> +Instead of sec_ptr, use struct bfd_section";
> + category["sec_ptr"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])sec_ptr([^_[:alnum:]]|$)/ {
> + fail("sec_ptr")
> +}
> +
> +BEGIN { doc["frame_unwind_unsigned_register"] = "\
> +Replace frame_unwind_unsigned_register with frame_unwind_register_unsigned"
> + category["frame_unwind_unsigned_register"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])frame_unwind_unsigned_register([^_[:alnum:]]|$)/ {
> + fail("frame_unwind_unsigned_register")
> +}
> +
> +BEGIN { doc["frame_register_read"] = "\
> +Replace frame_register_read() with get_frame_register(), or \
> +possibly introduce a new method safe_get_frame_register()"
> + category["frame_register_read"] = ari_obsolete
> +}
> +/(^|[^_[:alnum:]])frame_register_read([^_[:alnum:]]|$)/ {
> + fail("frame_register_read")
> +}
> +
> +BEGIN { doc["read_register"] = "\
> +Replace read_register() with regcache_read() et.al."
> + category["read_register"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])read_register([^_[:alnum:]]|$)/ {
> + fail("read_register")
> +}
> +
> +BEGIN { doc["write_register"] = "\
> +Replace write_register() with regcache_write() et.al."
> + category["write_register"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])write_register([^_[:alnum:]]|$)/ {
> + fail("write_register")
> +}
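[Editorial note, not part of the patch: every rule above leans on the same word-boundary idiom, `(^|[^_[:alnum:]])NAME([^_[:alnum:]]|$)`, which fires only when NAME stands alone as an identifier. A minimal sketch, with an invented helper name:]

```shell
# Demonstrate the word-boundary idiom used by the ARI rules:
# "read_register" matches as a whole identifier, but not as a
# substring of another identifier such as "thread_read_registers".
check_read_register() {
  printf '%s\n' "$1" | awk '
/(^|[^_[:alnum:]])read_register([^_[:alnum:]]|$)/ { hit = 1 }
END { if (hit) print "FAIL"; else print "OK" }'
}

hit=$(check_read_register 'val = read_register (regnum);')
miss=$(check_read_register 'val = thread_read_registers;')
echo "$hit $miss"
```

The leading alternative `^` is needed because `[^_[:alnum:]]` requires one character before the name, which a line-initial identifier does not have.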
> +
> +function report(name) {
> + # Drop any trailing _P.
> + name = gensub(/(_P|_p)$/, "", 1, name)
> + # Convert to lower case
> + name = tolower(name)
> + # Split into category and bug
> + cat = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\1", 1, name)
> + bug = gensub(/^([[:alpha:]]+)_([_[:alnum:]]*)$/, "\\2", 1, name)
> + # Report it
> + name = cat " " bug
> + doc[name] = "Do not use " cat " " bug ", see declaration for details"
> + category[name] = cat
> + fail(name)
> +}
> +
> +/(^|[^_[:alnum:]])(DEPRECATED|deprecated|set_gdbarch_deprecated|LEGACY|legacy|set_gdbarch_legacy)_/ {
> + line = $0
> + # print "0 =", $0
> + while (1) {
> + name = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\2", 1, line)
> + line = gensub(/^(|.*[^_[:alnum:]])((DEPRECATED|deprecated|LEGACY|legacy)_[_[:alnum:]]*)(.*)$/, "\\1 \\4", 1, line)
> + # print "name =", name, "line =", line
> + if (name == line) break;
> + report(name)
> + }
> +}
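[Editorial note, not part of the patch: `report()` uses the gawk-only `gensub` to split a flagged identifier into a category (text before the first underscore) and a bug name (the rest). The same split in POSIX shell, on an invented example identifier:]

```shell
# Split a deprecated identifier the way report() does, but with
# POSIX parameter expansion instead of gawk gensub.
name=DEPRECATED_REGISTER_BYTE
lower=$(printf '%s' "$name" | tr 'A-Z' 'a-z')  # deprecated_register_byte
category=${lower%%_*}   # text before the first underscore
bug=${lower#*_}         # text after the first underscore
result="$category $bug"
echo "$result"
```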
> +
> +# Count the number of times each architecture method is set
> +/(^|[^_[:alnum:]])set_gdbarch_[_[:alnum:]]*([^_[:alnum:]]|$)/ {
> + name = gensub(/^.*set_gdbarch_([_[:alnum:]]*).*$/, "\\1", 1, $0)
> + doc["set " name] = "\
> +Call to set_gdbarch_" name
> + category["set " name] = ari_gdbarch
> + fail("set " name)
> +}
> +
> +# Count the number of times each tm/xm/nm macro is defined or undefined
> +/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+.*$/ \
> +&& !/^#[[:space:]]*(undef|define)[[:space:]]+[[:alnum:]_]+_H($|[[:space:]])/ \
> +&& FILENAME ~ /(^|\/)config\/(|[^\/]*\/)(tm-|xm-|nm-).*\.h$/ {
> + basename = gensub(/(^|.*\/)([^\/]*)$/, "\\2", 1, FILENAME)
> + type = gensub(/^(tm|xm|nm)-.*\.h$/, "\\1", 1, basename)
> + name = gensub(/^#[[:space:]]*(undef|define)[[:space:]]+([[:alnum:]_]+).*$/, "\\2", 1, $0)
> + if (type == basename) {
> + type = "macro"
> + }
> + doc[type " " name] = "\
> +Do not define macros such as " name " in a tm, nm or xm file, \
> +in fact do not provide a tm, nm or xm file"
> + category[type " " name] = ari_macro
> + fail(type " " name)
> +}
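[Editorial note, not part of the patch: the rule above derives the macro type from the header's basename prefix (`tm-`, `xm-`, or `nm-`). The same classification with POSIX parameter expansion, on a made-up path:]

```shell
# Classify a config header by its tm-/xm-/nm- prefix, as the
# awk rule does with gensub, using shell expansions instead.
FILENAME=gdb/config/i386/tm-i386.h   # invented example path
base=${FILENAME##*/}   # strip directories -> tm-i386.h
type=${base%%-*}       # prefix before "-" -> tm
echo "$type $base"
```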
> +
> +BEGIN { doc["deprecated_registers"] = "\
> +Replace deprecated_registers with nothing, they have reached \
> +end-of-life"
> + category["deprecated_registers"] = ari_eol
> +}
> +/(^|[^_[:alnum:]])deprecated_registers([^_[:alnum:]]|$)/ {
> + fail("deprecated_registers")
> +}
> +
> +BEGIN { doc["read_pc"] = "\
> +Replace READ_PC() with frame_pc_unwind; \
> +at present the inferior function call code still uses this"
> + category["read_pc"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])read_pc[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])set_gdbarch_read_pc[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])TARGET_READ_PC[[:space:]]*\(/ {
> + fail("read_pc")
> +}
> +
> +BEGIN { doc["write_pc"] = "\
> +Replace write_pc() with get_frame_base_address or get_frame_id; \
> +at present the inferior function call code still uses this when doing \
> +a DECR_PC_AFTER_BREAK"
> + category["write_pc"] = ari_deprecate
> +}
> +/(^|[^_[:alnum:]])write_pc[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])set_gdbarch_write_pc[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])TARGET_WRITE_PC[[:space:]]*\(/ {
> + fail("write_pc")
> +}
> +
> +BEGIN { doc["generic_target_write_pc"] = "\
> +Replace generic_target_write_pc with a per-architecture implementation, \
> +this relies on PC_REGNUM which is being eliminated"
> + category["generic_target_write_pc"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])generic_target_write_pc([^_[:alnum:]]|$)/ {
> + fail("generic_target_write_pc")
> +}
> +
> +BEGIN { doc["read_sp"] = "\
> +Replace read_sp() with frame_sp_unwind"
> + category["read_sp"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])read_sp[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])set_gdbarch_read_sp[[:space:]]*\(/ || \
> +/(^|[^_[:alnum:]])TARGET_READ_SP[[:space:]]*\(/ {
> + fail("read_sp")
> +}
> +
> +BEGIN { doc["register_cached"] = "\
> +Replace register_cached() with nothing, does not have a regcache parameter"
> + category["register_cached"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])register_cached[[:space:]]*\(/ {
> + fail("register_cached")
> +}
> +
> +BEGIN { doc["set_register_cached"] = "\
> +Replace set_register_cached() with nothing, does not have a regcache parameter"
> + category["set_register_cached"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])set_register_cached[[:space:]]*\(/ {
> + fail("set_register_cached")
> +}
> +
> +# Print functions: Use versions that either check for buffer overflow
> +# or safely allocate a fresh buffer.
> +
> +BEGIN { doc["sprintf"] = "\
> +Do not use sprintf, instead use xsnprintf or xstrprintf"
> + category["sprintf"] = ari_code
> +}
> +/(^|[^_[:alnum:]])sprintf[[:space:]]*\(/ {
> + fail("sprintf")
> +}
> +
> +BEGIN { doc["vsprintf"] = "\
> +Do not use vsprintf(), instead use xstrvprintf"
> + category["vsprintf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])vsprintf[[:space:]]*\(/ {
> + fail("vsprintf")
> +}
> +
> +BEGIN { doc["asprintf"] = "\
> +Do not use asprintf(), instead use xstrprintf()"
> + category["asprintf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])asprintf[[:space:]]*\(/ {
> + fail("asprintf")
> +}
> +
> +BEGIN { doc["vasprintf"] = "\
> +Do not use vasprintf(), instead use xstrvprintf"
> + fix("vasprintf", "gdb/utils.c", 1)
> + category["vasprintf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])vasprintf[[:space:]]*\(/ {
> + fail("vasprintf")
> +}
> +
> +BEGIN { doc["xasprintf"] = "\
> +Do not use xasprintf(), instead use xstrprintf"
> + fix("xasprintf", "gdb/defs.h", 1)
> + fix("xasprintf", "gdb/utils.c", 1)
> + category["xasprintf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])xasprintf[[:space:]]*\(/ {
> + fail("xasprintf")
> +}
> +
> +BEGIN { doc["xvasprintf"] = "\
> +Do not use xvasprintf(), instead use xstrvprintf"
> + fix("xvasprintf", "gdb/defs.h", 1)
> + fix("xvasprintf", "gdb/utils.c", 1)
> + category["xvasprintf"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])xvasprintf[[:space:]]*\(/ {
> + fail("xvasprintf")
> +}
> +
> +# More generic memory operations
> +
> +BEGIN { doc["bzero"] = "\
> +Do not use bzero(), instead use memset()"
> + category["bzero"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])bzero[[:space:]]*\(/ {
> + fail("bzero")
> +}
> +
> +BEGIN { doc["strdup"] = "\
> +Do not use strdup(), instead use xstrdup()";
> + category["strdup"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])strdup[[:space:]]*\(/ {
> + fail("strdup")
> +}
> +
> +BEGIN { doc["strsave"] = "\
> +Do not use strsave(), instead use xstrdup() et.al."
> + category["strsave"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])strsave[[:space:]]*\(/ {
> + fail("strsave")
> +}
> +
> +# String compare functions
> +
> +BEGIN { doc["strnicmp"] = "\
> +Do not use strnicmp(), instead use strncasecmp()"
> + category["strnicmp"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])strnicmp[[:space:]]*\(/ {
> + fail("strnicmp")
> +}
> +
> +# Boolean expressions and conditionals
> +
> +BEGIN { doc["boolean"] = "\
> +Do not use `boolean'\'', use `int'\'' instead"
> + category["boolean"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])boolean([^_[:alnum:]]|$)/ {
> + if (is_yacc_or_lex == 0) {
> + fail("boolean")
> + }
> +}
> +
> +BEGIN { doc["false"] = "\
> +Definitely do not use `false'\'' in boolean expressions"
> + category["false"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])false([^_[:alnum:]]|$)/ {
> + if (is_yacc_or_lex == 0) {
> + fail("false")
> + }
> +}
> +
> +BEGIN { doc["true"] = "\
> +Do not try to use `true'\'' in boolean expressions"
> + category["true"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])true([^_[:alnum:]]|$)/ {
> + if (is_yacc_or_lex == 0) {
> + fail("true")
> + }
> +}
> +
> +# Typedefs that are either redundant or can be reduced to `struct
> +# type *''.
> +# Must be placed before if assignment otherwise ARI exceptions
> +# are not handled correctly.
> +
> +BEGIN { doc["d_namelen"] = "\
> +Do not use dirent.d_namelen, instead use NAMELEN"
> + category["d_namelen"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])d_namelen([^_[:alnum:]]|$)/ {
> + fail("d_namelen")
> +}
> +
> +BEGIN { doc["strlen d_name"] = "\
> +Do not use strlen dirent.d_name, instead use NAMELEN"
> + category["strlen d_name"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])strlen[[:space:]]*\(.*[^_[:alnum:]]d_name([^_[:alnum:]]|$)/ {
> + fail("strlen d_name")
> +}
> +
> +BEGIN { doc["var_boolean"] = "\
> +Replace var_boolean with add_setshow_boolean_cmd"
> + category["var_boolean"] = ari_regression
> + fix("var_boolean", "gdb/command.h", 1)
> + # fix only uses the last directory level
> + fix("var_boolean", "cli/cli-decode.c", 2)
> +}
> +/(^|[^_[:alnum:]])var_boolean([^_[:alnum:]]|$)/ {
> + if ($0 !~ /(^|[^_[:alnum:]])case *var_boolean:/) {
> + fail("var_boolean")
> + }
> +}
> +
> +BEGIN { doc["generic_use_struct_convention"] = "\
> +Replace generic_use_struct_convention with nothing, \
> +EXTRACT_STRUCT_VALUE_ADDRESS is a predicate"
> + category["generic_use_struct_convention"] = ari_regression
> +}
> +/(^|[^_[:alnum:]])generic_use_struct_convention([^_[:alnum:]]|$)/ {
> + fail("generic_use_struct_convention")
> +}
> +
> +BEGIN { doc["if assignment"] = "\
> +An IF statement'\''s expression contains an assignment (the GNU coding \
> +standard discourages this)"
> + category["if assignment"] = ari_code
> +}
> +BEGIN { doc["if clause more than 50 lines"] = "\
> +An IF statement'\''s expression expands over 50 lines"
> + category["if clause more than 50 lines"] = ari_code
> +}
> +#
> +# Accumulate continuation lines
> +FNR == 1 {
> + in_if = 0
> +}
> +
> +/(^|[^_[:alnum:]])if / {
> + in_if = 1;
> + if_brace_level = 0;
> + if_cont_p = 0;
> + if_count = 0;
> + if_brace_end_pos = 0;
> + if_full_line = "";
> +}
> +(in_if) {
> + # We want everything up to closing brace of same level
> + if_count++;
> + if (if_count > 50) {
> + print "multiline if: " if_full_line $0
> + fail("if clause more than 50 lines")
> + if_brace_level = 0;
> + if_full_line = "";
> + } else {
> + if (if_count == 1) {
> + i = index($0,"if ");
> + } else {
> + i = 1;
> + }
> + for (i=i; i <= length($0); i++) {
> + char = substr($0,i,1);
> + if (char == "(") { if_brace_level++; }
> + if (char == ")") {
> + if_brace_level--;
> + if (!if_brace_level) {
> + if_brace_end_pos = i;
> + after_if = substr($0,i+1,length($0));
> + # Do not parse what is following
> + break;
> + }
> + }
> + }
> + if (if_brace_level == 0) {
> + $0 = substr($0,1,i);
> + in_if = 0;
> + } else {
> + if_full_line = if_full_line $0;
> + if_cont_p = 1;
> + next;
> + }
> + }
> +}
> +# if we arrive here, we need to concatenate, but we are at brace level 0
> +
> +(if_brace_end_pos) {
> + $0 = if_full_line substr($0,1,if_brace_end_pos);
> + if (if_count > 1) {
> + # print "IF: multi line " if_count " found at " FILENAME ":" FNR " \"" $0 "\""
> + }
> + if_cont_p = 0;
> + if_full_line = "";
> +}
> +/(^|[^_[:alnum:]])if .* = / {
> + # print "fail in if " $0
> + fail("if assignment")
> +}
> +(if_brace_end_pos) {
> + $0 = $0 after_if;
> + if_brace_end_pos = 0;
> + in_if = 0;
> +}
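[Editorial note, not part of the patch: the heart of the multi-line IF handling above is a parenthesis balance count that accumulates lines until the condition's opening `(` is closed. A stripped-down sketch on a two-line condition, without the 50-line limit or the assignment check:]

```shell
# Accumulate a multi-line if-condition by counting paren depth,
# printing the reassembled condition once depth returns to zero.
cond=$(printf 'if (a == (b)\n    && c)\nfoo ();\n' | awk '
/(^|[^_[:alnum:]])if / { in_if = 1; level = 0; full = "" }
in_if {
  for (i = 1; i <= length($0); i++) {
    ch = substr($0, i, 1)
    if (ch == "(") level++
    if (ch == ")") level--
  }
  full = full $0
  if (level == 0) { in_if = 0; print full }
}')
echo "$cond"
```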
> +
> +# Printout of all found bugs
> +
> +BEGIN {
> + if (print_doc) {
> + for (bug in doc) {
> + fail(bug)
> + }
> + exit
> + }
> +}' "$@"
> +
> Index: contrib/ari/gdb_find.sh
> ===================================================================
> RCS file: contrib/ari/gdb_find.sh
> diff -N contrib/ari/gdb_find.sh
> --- /dev/null 1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/gdb_find.sh 18 May 2012 22:31:42 -0000
> @@ -0,0 +1,41 @@
> +#!/bin/sh
> +
> +# GDB script to create list of files to check using gdb_ari.sh.
> +#
> +# Copyright (C) 2003-2012 Free Software Foundation, Inc.
> +#
> +# This file is part of GDB.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +# Make certain that the script is not running in an internationalized
> +# environment.
> +
> +LANG=C ; export LANG
> +LC_ALL=C ; export LC_ALL
> +
> +
> +# A find that prunes files that GDB users shouldn't be interested in.
> +# Use sort to order files alphabetically.
> +
> +find "$@" \
> + -name testsuite -prune -o \
> + -name gdbserver -prune -o \
> + -name gnulib -prune -o \
> + -name osf-share -prune -o \
> + -name '*-stub.c' -prune -o \
> + -name '*-exp.c' -prune -o \
> + -name ada-lex.c -prune -o \
> + -name cp-name-parser.c -prune -o \
> + -type f -name '*.[lyhc]' -print | sort
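[Editorial note, not part of the patch: the `-prune -o` chain above skips whole directories and generated files before `-print` ever runs. The same idiom in miniature, on a scratch tree with invented names:]

```shell
# Reproduce the prune idiom on a throwaway tree: the testsuite
# directory and the generated ada-lex.c are skipped; main.c survives.
d=$(mktemp -d)
mkdir -p "$d/testsuite"
touch "$d/main.c" "$d/ada-lex.c" "$d/testsuite/skip.c"
found=$(find "$d" \
  -name testsuite -prune -o \
  -name ada-lex.c -prune -o \
  -type f -name '*.[lyhc]' -print | sort)
echo "$found"
rm -rf "$d"
```

Note that `-prune` on a plain file is a harmless no-op that still short-circuits the `-o`, which is why it also works for excluding single files like `ada-lex.c`.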
> Index: contrib/ari/update-web-ari.sh
> ===================================================================
> RCS file: contrib/ari/update-web-ari.sh
> diff -N contrib/ari/update-web-ari.sh
> --- /dev/null 1 Jan 1970 00:00:00 -0000
> +++ contrib/ari/update-web-ari.sh 18 May 2012 22:31:43 -0000
> @@ -0,0 +1,947 @@
> +#!/bin/sh -x
> +
> +# GDB script to create GDB ARI web page.
> +#
> +# Copyright (C) 2001-2012 Free Software Foundation, Inc.
> +#
> +# This file is part of GDB.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program. If not, see <http://www.gnu.org/licenses/>.
> +
> +# TODO: setjmp.h, setjmp and longjmp.
> +
> +# Direct stderr into stdout but still hang onto stderr (/dev/fd/3)
> +exec 3>&2 2>&1
> +ECHO ()
> +{
> +# echo "$@" | tee /dev/fd/3 1>&2
> + echo "$@" 1>&2
> + echo "$@" 1>&3
> +}
> +
> +# Really mindless usage
> +if test $# -ne 4
> +then
> + echo "Usage: $0 <snapshot/sourcedir> <tmpdir> <destdir> <project>" 1>&2
> + exit 1
> +fi
> +snapshot=$1 ; shift
> +tmpdir=$1 ; shift
> +wwwdir=$1 ; shift
> +project=$1 ; shift
> +
> +# Try to create destination directory if it doesn't exist yet
> +if [ ! -d ${wwwdir} ]
> +then
> + mkdir -p ${wwwdir}
> +fi
> +
> +# Fail if destination directory doesn't exist or is not writable
> +if [ ! -w ${wwwdir} -o ! -d ${wwwdir} ]
> +then
> + echo ERROR: Cannot write to directory ${wwwdir} >&2
> + exit 2
> +fi
> +
> +if [ ! -r ${snapshot} ]
> +then
> + echo ERROR: Cannot read snapshot file 1>&2
> + exit 1
> +fi
> +
> +# FILE formats
> +# ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> +# ari.*.idx: <BUG>:<FILE>:<CATEGORY>
> +# ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> +# Where ``*'' is {source,warning,indent,doschk}
> +
> +unpack_source_p=true
> +delete_source_p=true
> +
> +check_warning_p=false # broken
> +check_indent_p=false # too slow, too many fail
> +check_source_p=true
> +check_doschk_p=true
> +check_werror_p=true
> +
> +update_doc_p=true
> +update_web_p=true
> +
> +if [ -z "$send_email" ]
> +then
> + send_email=false
> +fi
> +
> +if awk --version 2>&1 </dev/null | grep -i gnu > /dev/null
> +then
> + AWK=awk
> +else
> + AWK=gawk
> +fi
> +
> +
> +# Set up a few cleanups
> +if ${delete_source_p}
> +then
> + trap "cd /tmp; rm -rf ${tmpdir}; exit" 0 1 2 15
> +fi
> +
> +
> +# If the first parameter is a directory,
> +#we just use it as the extracted source
> +if [ -d ${snapshot} ]
> +then
> + module=${project}
> + srcdir=${snapshot}
> + aridir=${srcdir}/${module}/ari
> + unpack_source_p=false
> + delete_source_p=false
> + version_in=${srcdir}/${module}/version.in
> +else
> + # unpack the tar-ball
> + if ${unpack_source_p}
> + then
> + # Was it previously unpacked?
> + if ${delete_source_p} || test ! -d ${tmpdir}/${module}*
> + then
> + /bin/rm -rf "${tmpdir}"
> + /bin/mkdir -p ${tmpdir}
> + if [ ! -d ${tmpdir} ]
> + then
> + echo "Problem creating work directory"
> + exit 1
> + fi
> + cd ${tmpdir} || exit 1
> + echo `date`: Unpacking tar-ball ...
> + case ${snapshot} in
> + *.tar.bz2 ) bzcat ${snapshot} ;;
> + *.tar ) cat ${snapshot} ;;
> + * ) ECHO Bad file ${snapshot} ; exit 1 ;;
> + esac | tar xf -
> + fi
> + fi
> +
> + module=`basename ${snapshot}`
> + module=`basename ${module} .bz2`
> + module=`basename ${module} .tar`
> + srcdir=`echo ${tmpdir}/${module}*`
> + aridir=${HOME}/ss
> + version_in=${srcdir}/gdb/version.in
> +fi
> +
> +if [ ! -r ${version_in} ]
> +then
> + echo ERROR: missing version file 1>&2
> + exit 1
> +fi
> +version=`cat ${version_in}`
> +
> +
> +# THIS HAS SUFFERED BIT ROT
> +if ${check_warning_p} && test -d "${srcdir}"
> +then
> + echo `date`: Parsing compiler warnings 1>&2
> + cat ${root}/ari.compile | $AWK '
> +BEGIN {
> + FS=":";
> +}
> +/^[^:]*:[0-9]*: warning:/ {
> + file = $1;
> + #sub (/^.*\//, "", file);
> + warning[file] += 1;
> +}
> +/^[^:]*:[0-9]*: error:/ {
> + file = $1;
> + #sub (/^.*\//, "", file);
> + error[file] += 1;
> +}
> +END {
> + for (file in warning) {
> + print file ":warning:" warning[file]
> + }
> + for (file in error) {
> + print file ":error:" error[file]
> + }
> +}
> +' > ${root}/ari.warning.bug
> +fi
> +
> +# THIS HAS SUFFERED BIT ROT
> +if ${check_indent_p} && test -d "${srcdir}"
> +then
> + printf "Analyzing file indentation:" 1>&2
> + ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh ${project} | while read f
> + do
> + if /bin/sh ${aridir}/gdb_indent.sh < ${f} 2>/dev/null | cmp -s - ${f}
> + then
> + :
> + else
> + # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> + echo "${f}:0: info: indent: Indentation does not match GNU indent output"
> + fi
> + done ) > ${wwwdir}/ari.indent.bug
> + echo ""
> +fi
> +
> +if ${check_source_p} && test -d "${srcdir}"
> +then
> + bugf=${wwwdir}/ari.source.bug
> + oldf=${wwwdir}/ari.source.old
> + srcf=${wwwdir}/ari.source.lines
> + oldsrcf=${wwwdir}/ari.source.lines-old
> +
> + diff=${wwwdir}/ari.source.diff
> + diffin=${diff}-in
> + newf1=${bugf}1
> + oldf1=${oldf}1
> + oldpruned=${oldf1}-pruned
> + newpruned=${newf1}-pruned
> +
> + cp -f ${bugf} ${oldf}
> + cp -f ${srcf} ${oldsrcf}
> + rm -f ${srcf}
> + node=`uname -n`
> + echo "`date`: Using source lines ${srcf}" 1>&2
> + echo "`date`: Checking source code" 1>&2
> + ( cd "${srcdir}" && /bin/sh ${aridir}/gdb_find.sh "${project}" | \
> + xargs /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --src=${srcf}
> + ) > ${bugf}
> + # Remove things we are not interested in to signal by email
> + # gdbarch changes are not important here
> + # Also convert ` into ' to avoid command substitution in script below
> + sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${oldf} > ${oldf1}
> + sed -e "/.*: gdbarch:.*/d" -e "s:\`:':g" ${bugf} > ${newf1}
> + # Remove line number info so that code inclusion/deletion
> + # has no impact on the result
> + sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${oldf1} > ${oldpruned}
> + sed -e "s/\([^:]*\):\([^:]*\):\(.*\)/\1:0:\3/" ${newf1} > ${newpruned}
> + # Use diff without option to get normal diff output that
> + # is reparsed after
> + diff ${oldpruned} ${newpruned} > ${diffin}
> + # Only keep new warnings
> + sed -n -e "/^>.*/p" ${diffin} > ${diff}
> + sedscript=${wwwdir}/sedscript
> + script=${wwwdir}/script
> + sed -n -e "s|\(^[0-9,]*\)a\(.*\)|echo \1a\2 \n \
> + sed -n \'\2s:\\\\(.*\\\\):> \\\\1:p\' ${newf1}|p" \
> + -e "s|\(^[0-9,]*\)d\(.*\)|echo \1d\2\n \
> + sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1}|p" \
> + -e "s|\(^[0-9,]*\)c\(.*\)|echo \1c\2\n \
> + sed -n \'\1s:\\\\(.*\\\\):< \\\\1:p\' ${oldf1} \n \
> + sed -n \"\2s:\\\\(.*\\\\):> \\\\1:p\" ${newf1}|p" \
> + ${diffin} > ${sedscript}
> + ${SHELL} ${sedscript} > ${wwwdir}/message
> + sed -n \
> + -e "s;\(.*\);echo \\\"\1\\\";p" \
> + -e "s;.*< \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${oldsrcf};p" \
> + -e "s;.*> \([^:]*\):\([0-9]*\):.*;grep \"^\1:\2:\" ${srcf};p" \
> + ${wwwdir}/message > ${script}
> + ${SHELL} ${script} > ${wwwdir}/mail-message
> + if [ "x${branch}" != "x" ]; then
> + email_suffix="`date` in ${branch}"
> + else
> + email_suffix="`date`"
> + fi
> +
> + if [ "$send_email" = "true" ]; then
> + if [ "${node}" = "sourceware.org" ]; then
> + warning_email=gdb-patches@sourceware.org
> + else
> + # Use default email
> + warning_email=${USER}@${node}
> + fi
> +
> + # Check if ${diff} is not empty
> + if [ -s ${diff} ]; then
> + # Send an email $warning_email
> + mutt -s "New ARI warning ${email_suffix}" \
> + ${warning_email} < ${wwwdir}/mail-message
> + else
> + if [ -s ${wwwdir}/mail-message ]; then
> + # Send an email to $warning_email
> + mutt -s "ARI warning list change ${email_suffix}" \
> + ${warning_email} < ${wwwdir}/mail-message
> + fi
> + fi
> + fi
> +fi
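[Editorial note, not part of the patch: the "only keep new warnings" step above relies on plain `diff` marking added lines with `> `, which the `sed -n -e "/^>.*/p"` pass then keeps. A self-contained sketch of that step, with invented warning records and the prefix stripped for clarity:]

```shell
# Show how new ARI warnings are isolated: diff marks additions
# with "> ", and sed keeps just those lines.
old=$(mktemp); new=$(mktemp)
printf 'foo.c:0: bug a\n' > "$old"
printf 'foo.c:0: bug a\nbar.c:0: bug b\n' > "$new"
added=$(diff "$old" "$new" | sed -n -e 's/^> //p')
echo "$added"
rm -f "$old" "$new"
```

The line-number pruning done just before this (rewriting `file:NN:` to `file:0:`) exists so that unrelated insertions or deletions in a file do not make every warning below them look "new".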
> +
> +
> +
> +
> +if ${check_doschk_p} && test -d "${srcdir}"
> +then
> + echo "`date`: Checking for doschk" 1>&2
> + rm -f "${wwwdir}"/ari.doschk.*
> + fnchange_lst="${srcdir}"/gdb/config/djgpp/fnchange.lst
> + fnchange_awk="${wwwdir}"/ari.doschk.awk
> + doschk_in="${wwwdir}"/ari.doschk.in
> + doschk_out="${wwwdir}"/ari.doschk.out
> + doschk_bug="${wwwdir}"/ari.doschk.bug
> + doschk_char="${wwwdir}"/ari.doschk.char
> +
> + # Transform fnchange.lst into fnchange.awk. The program DJTAR
> + # does a textual substitution of each file name using the list.
> + # Generate an awk script that does the equivalent - matches an
> + # exact line and then outputs the replacement.
> +
> + sed -e 's;@[^@]*@[/]*\([^ ]*\) @[^@]*@[/]*\([^ ]*\);\$0 == "\1" { print "\2"\; next\; };' \
> + < "${fnchange_lst}" > "${fnchange_awk}"
> + echo '{ print }' >> "${fnchange_awk}"
> +
> + # Do the raw analysis - transform the list of files into the DJGPP
> + # equivalents putting it in the .in file
> + ( cd "${srcdir}" && find * \
> + -name '*.info-[0-9]*' -prune \
> + -o -name tcl -prune \
> + -o -name itcl -prune \
> + -o -name tk -prune \
> + -o -name libgui -prune \
> + -o -name tix -prune \
> + -o -name dejagnu -prune \
> + -o -name expect -prune \
> + -o -type f -print ) \
> + | $AWK -f ${fnchange_awk} > ${doschk_in}
> +
> + # Start with a clean slate
> + rm -f ${doschk_bug}
> +
> + # Check for any invalid characters.
> + grep '[\+\,\;\=\[\]\|\<\>\\\"\:\?\*]' < ${doschk_in} > ${doschk_char}
> + # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> + sed < ${doschk_char} >> ${doschk_bug} \
> + -e s'/$/:0: dos: DOSCHK: Invalid DOS character/'
> +
> + # Magic to map ari.doschk.out to ari.doschk.bug goes here
> + doschk < ${doschk_in} > ${doschk_out}
> + cat ${doschk_out} | $AWK >> ${doschk_bug} '
> +BEGIN {
> + state = 1;
> + invalid_dos = state++; bug[invalid_dos] = "invalid DOS file name"; category[invalid_dos] = "dos";
> + same_dos = state++; bug[same_dos] = "DOS 8.3"; category[same_dos] = "dos";
> + same_sysv = state++; bug[same_sysv] = "SysV";
> + long_sysv = state++; bug[long_sysv] = "long SysV";
> + internal = state++; bug[internal] = "internal doschk"; category[internal] = "internal";
> + state = 0;
> +}
> +/^$/ { state = 0; next; }
> +/^The .* not valid DOS/ { state = invalid_dos; next; }
> +/^The .* same DOS/ { state = same_dos; next; }
> +/^The .* same SysV/ { state = same_sysv; next; }
> +/^The .* too long for SysV/ { state = long_sysv; next; }
> +/^The .* / { state = internal; next; }
> +
> +NF == 0 { next }
> +
> +NF == 3 { name = $1 ; file = $3 }
> +NF == 1 { file = $1 }
> +NF > 3 && $2 == "-" { file = $1 ; name = gensub(/^.* - /, "", 1) }
> +
> +state == same_dos {
> + # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> + print file ":0: " category[state] ": " \
> + name " " bug[state] " " " dup: " \
> + " DOSCHK - the names " name " and " file " resolve to the same" \
> + " file on a " bug[state] \
> + " system.<br>For DOS, this can be fixed by modifying the file" \
> + " fnchange.lst."
> + next
> +}
> +state == invalid_dos {
> + # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
> + print file ":0: " category[state] ": " name ": DOSCHK - " name
> + next
> +}
> +state == internal {
> + # ari.*.bug: <FILE>:<LINE>: <SEVERITY>: <CATEGORY>: <DOC>
> + print file ":0: " category[state] ": " bug[state] ": DOSCHK - a " \
> + bug[state] " problem"
> +}
> +'
> +fi
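[Editorial note, not part of the patch: the generated `fnchange_awk` script above is just a chain of exact match-and-replace rules, one per `fnchange.lst` entry, ending in a pass-through `{ print }`. A hand-written one-rule instance of that generated shape, with an invented mapping:]

```shell
# Emulate one generated fnchange rule: exact-match rename,
# everything else passes through unchanged.
mapped=$(printf 'gdb/ChangeLog-2002\ngdb/main.c\n' | awk '
$0 == "gdb/ChangeLog-2002" { print "gdb/ChangeLog.022"; next }
{ print }')
echo "$mapped"
```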
> +
> +
> +
> +if ${check_werror_p} && test -d "${srcdir}"
> +then
> + echo "`date`: Checking Makefile.in for non- -Werror rules"
> + rm -f ${wwwdir}/ari.werror.*
> + cat "${srcdir}/${project}/Makefile.in" | $AWK > ${wwwdir}/ari.werror.bug '
> +BEGIN {
> + count = 0
> + cont_p = 0
> + full_line = ""
> +}
> +/^[-_[:alnum:]]+\.o:/ {
> + file = gensub(/.o:.*/, "", 1) ".c"
> +}
> +
> +/[^\\]\\$/ { gsub (/\\$/, ""); full_line = full_line $0; cont_p = 1; next; }
> +cont_p { $0 = full_line $0; cont_p = 0; full_line = ""; }
> +
> +/\$\(COMPILE\.pre\)/ {
> + print file " has line " $0
> + if (($0 !~ /\$\(.*ERROR_CFLAGS\)/) && ($0 !~ /\$\(INTERNAL_CFLAGS\)/)) {
> + # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> + print "'"${project}"'/" file ":0: info: Werror: The file is not being compiled with -Werror"
> + }
> +}
> +'
> +fi
> +
> +
> +# From the warnings, generate the doc and indexed bug files
> +if ${update_doc_p}
> +then
> + cd ${wwwdir}
> + rm -f ari.doc ari.idx ari.doc.bug
> + # Generate an extra file containing all the bugs that the ARI can detect.
> + /bin/sh ${aridir}/gdb_ari.sh -Werror -Wall --print-idx --print-doc >> ari.doc.bug
> + cat ari.*.bug | $AWK > ari.idx '
> +BEGIN {
> + FS=": *"
> +}
> +{
> + # ari.*.bug: <FILE>:<LINE>: <CATEGORY>: <BUG>: <DOC>
> + file = $1
> + line = $2
> + category = $3
> + bug = $4
> + if (! (bug in cat)) {
> + cat[bug] = category
> + # strip any trailing .... (supplement)
> + doc[bug] = gensub(/ \([^\)]*\)$/, "", 1, $5)
> + count[bug] = 0
> + }
> + if (file != "") {
> + count[bug] += 1
> + # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
> + print bug ":" file ":" category
> + }
> + # Also accumulate some categories as obsolete
> + if (category == "deprecated") {
> + # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
> + if (file != "") {
> + print category ":" file ":" "obsolete"
> + }
> + #count[category]++
> + #doc[category] = "Contains " category " code"
> + }
> +}
> +END {
> + i = 0;
> + for (bug in count) {
> + # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> + print bug ":" count[bug] ":" cat[bug] ":" doc[bug] >> "ari.doc"
> + }
> +}
> +'
> +fi
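[Editorial note, not part of the patch: the index generation above splits each `.bug` record with `FS=": *"` and reorders the fields into the `.idx` format `<BUG>:<FILE>:<CATEGORY>`. One invented sample record pushed through the same split:]

```shell
# One ari.*.bug record -> one ari.*.idx record, using the
# same FS=": *" split as the update script's awk program.
idx=$(printf 'gdb/foo.c:12: code: sprintf: Do not use sprintf\n' | awk '
BEGIN { FS = ": *" }
{ print $4 ":" $1 ":" $3 }')
echo "$idx"
```

The regex separator `": *"` is what lets the same split handle both the space-free `file:line` colon and the `": "` separators between the later fields.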
> +
> +
> +# print_toc BIAS MIN_COUNT CATEGORIES TITLE
> +
> +# Print a table of contents containing the bugs CATEGORIES. If the
> +# BUG count >= MIN_COUNT print it in the table-of-contents. If
> +# MIN_COUNT is non-negative, also include a link to the table. Adjust the
> +# printed BUG count by BIAS.
> +
> +all=
> +
> +print_toc ()
> +{
> + bias="$1" ; shift
> + min_count="$1" ; shift
> +
> + all=" $all $1 "
> + categories=""
> + for c in $1; do
> + categories="${categories} categories[\"${c}\"] = 1 ;"
> + done
> + shift
> +
> + title="$@" ; shift
> +
> + echo "<p>" >> ${newari}
> + echo "<a name=${title}>" | tr '[A-Z]' '[a-z]' >> ${newari}
> + echo "<h3>${title}</h3>" >> ${newari}
> + cat >> ${newari} # description
> +
> + cat >> ${newari} <<EOF
> +<p>
> +<table>
> +<tr><th align=left>BUG</th><th>Total</th><th align=left>Description</th></tr>
> +EOF
> + # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> + cat ${wwwdir}/ari.doc \
> + | sort -t: +1rn -2 +0d \
> + | $AWK >> ${newari} '
> +BEGIN {
> + FS=":"
> + '"$categories"'
> + MIN_COUNT = '${min_count}'
> + BIAS = '${bias}'
> + total = 0
> + nr = 0
> +}
> +{
> + # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> + bug = $1
> + count = $2
> + category = $3
> + doc = $4
> + if (count < MIN_COUNT) next
> + if (!(category in categories)) next
> + nr += 1
> + total += count
> + printf "<tr>"
> + printf "<th align=left valign=top><a name=\"%s\">", bug
> + printf "%s", gensub(/_/, " ", "g", bug)
> + printf "</a></th>"
> + printf "<td align=right valign=top>"
> + if (count > 0 && MIN_COUNT >= 0) {
> + printf "<a href=\"#,%s\">%d</a></td>", bug, count + BIAS
> + } else {
> + printf "%d", count + BIAS
> + }
> + printf "</td>"
> + printf "<td align=left valign=top>%s</td>", doc
> + printf "</tr>"
> + print ""
> +}
> +END {
> + print "<tr><th align=right valign=top>" nr "</th><th align=right valign=top>" total "</th><td></td></tr>"
> +}
> +'
> +cat >> ${newari} <<EOF
> +</table>
> +<p>
> +EOF
> +}
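[Editorial note, not part of the patch: `print_toc` sorts `ari.doc` with the obsolete key syntax `+1rn -2 +0d`. What appears to be the equivalent POSIX `-k` spelling, numeric-descending on field 2 then dictionary order on field 1, on invented records:]

```shell
# Obsolete "sort +1rn -2 +0d" rewritten with POSIX -k keys:
# field 2 reverse-numeric, ties broken by field 1 dictionary order.
sorted=$(printf 'strdup:3\nsprintf:9\nbzero:3\n' | sort -t: -k2,2rn -k1,1d)
echo "$sorted"
```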
> +
> +
> +print_table ()
> +{
> + categories=""
> + for c in $1; do
> + categories="${categories} categories[\"${c}\"] = 1 ;"
> + done
> + # Remember to prune the dir prefix from projects files
> + # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
> + cat ${wwwdir}/ari.idx | $AWK >> ${newari} '
> +function qsort (table,
> + middle, tmp, left, nr_left, right, nr_right, result) {
> + middle = ""
> + for (middle in table) { break; }
> + nr_left = 0;
> + nr_right = 0;
> + for (tmp in table) {
> + if (tolower(tmp) < tolower(middle)) {
> + nr_left++
> + left[tmp] = tmp
> + } else if (tolower(tmp) > tolower(middle)) {
> + nr_right++
> + right[tmp] = tmp
> + }
> + }
> + #print "qsort " nr_left " " middle " " nr_right > "/dev/stderr"
> + result = ""
> + if (nr_left > 0) {
> + result = qsort(left) SUBSEP
> + }
> + result = result middle
> + if (nr_right > 0) {
> + result = result SUBSEP qsort(right)
> + }
> + return result
> +}
> +function print_heading (where, bug_i) {
> + print ""
> + print "<tr border=1>"
> + print "<th align=left>File</th>"
> + print "<th align=left><em>Total</em></th>"
> + print "<th></th>"
> + for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
> + bug = i2bug[bug_i];
> + printf "<th>"
> + # The title names are offset by one. Otherwise, when the browser
> + # jumps to the name it leaves out half the relevant column.
> + #printf "<a name=\",%s\"> </a>", bug
> + printf "<a name=\",%s\"> </a>", i2bug[bug_i-1]
> + printf "<a href=\"#%s\">", bug
> + printf "%s", gensub(/_/, " ", "g", bug)
> + printf "</a>\n"
> + printf "</th>\n"
> + }
> + #print "<th></th>"
> + printf "<th><a name=\"%s,\"> </a></th>\n", i2bug[bug_i-1]
> + print "<th align=left><em>Total</em></th>"
> + print "<th align=left>File</th>"
> + print "</tr>"
> +}
> +function print_totals (where, bug_i) {
> + print "<th align=left><em>Totals</em></th>"
> + printf "<th align=right>"
> + printf "<em>%s</em>", total
> + printf ">"
> + printf "</th>\n"
> + print "<th></th>";
> + for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
> + bug = i2bug[bug_i];
> + printf "<th align=right>"
> + printf "<em>"
> + printf "<a href=\"#%s\">%d</a>", bug, bug_total[bug]
> + printf "</em>";
> + printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, where], bug
> + printf "<a href=\"#%s,%s\">v</a>", next_file[bug, where], bug
> + printf "<a name=\"%s,%s\"> </a>", where, bug
> + printf "</th>";
> + print ""
> + }
> + print "<th></th>"
> + printf "<th align=right>"
> + printf "<em>%s</em>", total
> + printf "<"
> + printf "</th>\n"
> + print "<th align=left><em>Totals</em></th>"
> + print "</tr>"
> +}
> +BEGIN {
> + FS = ":"
> + '"${categories}"'
> + nr_file = 0;
> + nr_bug = 0;
> +}
> +{
> + # ari.*.idx: <BUG>:<FILE>:<CATEGORY>
> + bug = $1
> + file = $2
> + category = $3
> + # Only count categories we are interested in
> + if (!(category in categories)) next
> + # Totals
> + db[bug, file] += 1
> + bug_total[bug] += 1
> + file_total[file] += 1
> + total += 1
> +}
> +END {
> +
> + # Sort the files and bugs creating indexed lists.
> + nr_bug = split(qsort(bug_total), i2bug, SUBSEP);
> + nr_file = split(qsort(file_total), i2file, SUBSEP);
> +
> + # Dummy entries for first/last
> + i2file[0] = 0
> + i2file[-1] = -1
> + i2bug[0] = 0
> + i2bug[-1] = -1
> +
> + # Construct a cycle of next/prev links. The file/bug "0" and "-1"
> + # are used to identify the start/end of the cycle. Consequently,
> + # prev(0) = -1 (prev of start is the end) and next(-1) = 0 (next
> + # of end is the start).
> +
> + # For all the bugs, create a cycle that goes to the prev / next file.
> + for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
> + bug = i2bug[bug_i]
> + prev = 0
> + prev_file[bug, 0] = -1
> + next_file[bug, -1] = 0
> + for (file_i = 1; file_i <= nr_file; file_i++) {
> + file = i2file[file_i]
> + if ((bug, file) in db) {
> + prev_file[bug, file] = prev
> + next_file[bug, prev] = file
> + prev = file
> + }
> + }
> + prev_file[bug, -1] = prev
> + next_file[bug, prev] = -1
> + }
> +
> + # For all the files, create a cycle that goes to the prev / next bug.
> + for (file_i = 1; file_i <= nr_file; file_i++) {
> + file = i2file[file_i]
> + prev = 0
> + prev_bug[file, 0] = -1
> + next_bug[file, -1] = 0
> + for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
> + bug = i2bug[bug_i]
> + if ((bug, file) in db) {
> + prev_bug[file, bug] = prev
> + next_bug[file, prev] = bug
> + prev = bug
> + }
> + }
> + prev_bug[file, -1] = prev
> + next_bug[file, prev] = -1
> + }
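[Editorial note: the two loops above build, for every bug and every file, a doubly linked cycle with the sentinels 0 (start) and -1 (end) joined to each other, so the "^"/"v" navigation arrows always have a target. A toy sketch with hypothetical item names, outside the patch, shows the same construction:]

```shell
# Sentinel cycle sketch: 0 marks the start, -1 the end, and the two
# are linked to each other so a forward walk wraps back to the start.
links=$(awk '
BEGIN {
    n = split("alpha beta gamma", item, " ")
    prev_of[0] = -1; next_of[-1] = 0    # close the cycle at the ends
    prev = 0
    for (i = 1; i <= n; i++) {
        prev_of[item[i]] = prev
        next_of[prev] = item[i]
        prev = item[i]
    }
    prev_of[-1] = prev; next_of[prev] = -1
    # Walk forward from the start sentinel until the end sentinel.
    out = ""; cur = next_of[0]
    while (cur != -1) { out = out cur " "; cur = next_of[cur] }
    print out "-> wraps to " next_of[-1]
}')
echo "$links"
```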
> +
> + print "<table border=1 cellspacing=0>"
> + print "<tr></tr>"
> + print_heading(0);
> + print "<tr></tr>"
> + print_totals(0);
> + print "<tr></tr>"
> +
> + for (file_i = 1; file_i <= nr_file; file_i++) {
> + file = i2file[file_i];
> + pfile = gensub(/^'${project}'\//, "", 1, file)
> + print ""
> + print "<tr>"
> + print "<th align=left><a name=\"" file ",\">" pfile "</a></th>"
> + printf "<th align=right>"
> + printf "%s", file_total[file]
> + printf "<a href=\"#%s,%s\">></a>", file, next_bug[file, 0]
> + printf "</th>\n"
> + print "<th></th>"
> + for (bug_i = 1; bug_i <= nr_bug; bug_i++) {
> + bug = i2bug[bug_i];
> + if ((bug, file) in db) {
> + printf "<td align=right>"
> + printf "<a href=\"#%s\">%d</a>", bug, db[bug, file]
> + printf "<a href=\"#%s,%s\">^</a>", prev_file[bug, file], bug
> + printf "<a href=\"#%s,%s\">v</a>", next_file[bug, file], bug
> + printf "<a name=\"%s,%s\"> </a>", file, bug
> + printf "</td>"
> + print ""
> + } else {
> + print "<td> </td>"
> + #print "<td></td>"
> + }
> + }
> + print "<th></th>"
> + printf "<th align=right>"
> + printf "%s", file_total[file]
> + printf "<a href=\"#%s,%s\"><</a>", file, prev_bug[file, -1]
> + printf "</th>\n"
> + print "<th align=left>" pfile "</th>"
> + print "</tr>"
> + }
> +
> + print "<tr></tr>"
> + print_totals(-1)
> + print "<tr></tr>"
> + print_heading(-1);
> + print "<tr></tr>"
> + print ""
> + print "</table>"
> + print ""
> +}
> +'
> +}
> +
> +
> +# Make the scripts available
> +cp ${aridir}/gdb_*.sh ${wwwdir}
> +
> +# Compute the ARI index - ratio of zero vs non-zero problems.
> +indexes=`awk '
> +BEGIN {
> + FS=":"
> +}
> +{
> + # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> + bug = $1; count = $2; category = $3; doc = $4
> +
> + if (bug ~ /^legacy_/) legacy++
> + if (bug ~ /^deprecated_/) deprecated++
> +
> + if (category !~ /^gdbarch$/) {
> + bugs += count
> + }
> + if (count == 0) {
> + oks++
> + }
> +}
> +END {
> + #print "tests/ok:", nr / ok
> + #print "bugs/tests:", bugs / nr
> + #print "bugs/ok:", bugs / ok
> + print bugs / ( oks + legacy + deprecated )
> +}
> +' ${wwwdir}/ari.doc`
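[Editorial note: the index computed above reduces to bugs / (oks + legacy + deprecated). A self-contained sketch with made-up sample records in the same `<BUG>:<COUNT>:<CATEGORY>:<DOC>` format makes the arithmetic concrete:]

```shell
# Feed three fabricated ari.doc records through the same ratio logic:
# 3 bugs, 2 zero-count entries, 1 legacy name, 1 deprecated name.
index=$(printf '%s\n' \
    'deprecated_foo:3:code:doc' \
    'legacy_bar:0:code:doc' \
    'baz:0:comment:doc' |
awk '
BEGIN { FS = ":" }
{
    bug = $1; count = $2; category = $3
    if (bug ~ /^legacy_/) legacy++
    if (bug ~ /^deprecated_/) deprecated++
    if (category !~ /^gdbarch$/) bugs += count    # gdbarch counts excluded
    if (count == 0) oks++
}
END { print bugs / (oks + legacy + deprecated) }')
echo "$index"
```

Here the result is 3 / (2 + 1 + 1) = 0.75.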
> +
> +# Merge, generating the ARI tables.
> +if ${update_web_p}
> +then
> + echo "Create the ARI table" 1>&2
> + oldari=${wwwdir}/old.html
> + ari=${wwwdir}/index.html
> + newari=${wwwdir}/new.html
> + rm -f ${newari} ${newari}.gz
> + cat <<EOF >> ${newari}
> +<html>
> +<head>
> +<title>A.R. Index for GDB version ${version}</title>
> +</head>
> +<body>
> +
> +<center><h2>A.R. Index for GDB version ${version}</h2></center>
> +
> +<!-- body, update above using ../index.sh -->
> +
> +<!-- Navigation. This page contains the following anchors.
> +"BUG": The definition of the bug.
> +"FILE,BUG": The row/column containing FILEs BUG count
> +"0,BUG", "-1,BUG": The top/bottom total for BUGs column.
> +"FILE,0", "FILE,-1": The left/right total for FILEs row.
> +",BUG": The top title for BUGs column.
> +"FILE,": The left title for FILEs row.
> +-->
> +
> +<center><h3>${indexes}</h3></center>
> +<center><h3>You cannot take this seriously!</h3></center>
> +
> +<center>
> +Also available:
> +<a href="../gdb/ari/">most recent branch</a>
> +|
> +<a href="../gdb/current/ari/">current</a>
> +|
> +<a href="../gdb/download/ari/">last release</a>
> +</center>
> +
> +<center>
> +Last updated: `date -u`
> +</center>
> +EOF
> +
> + print_toc 0 1 "internal regression" Critical <<EOF
> +Things previously eliminated but returned. This should always be empty.
> +EOF
> +
> + print_table "regression code comment obsolete gettext"
> +
> + print_toc 0 0 code Code <<EOF
> +Coding standard problems, portability problems, readability problems.
> +EOF
> +
> + print_toc 0 0 comment Comments <<EOF
> +Problems concerning comments in source files.
> +EOF
> +
> + print_toc 0 0 gettext GetText <<EOF
> +Gettext related problems.
> +EOF
> +
> + print_toc 0 -1 dos "DOS 8.3 File Names" <<EOF
> +File names with problems on 8.3 file systems.
> +EOF
> +
> + print_toc -2 -1 deprecated Deprecated <<EOF
> +Mechanisms that have been replaced with something better, simpler,
> +cleaner; or are no longer required by core-GDB. New code should not
> +use deprecated mechanisms. Existing code, when touched, should be
> +updated to use non-deprecated mechanisms. See obsolete and deprecate.
> +(The declaration and definition are hopefully excluded from the count,
> +so zero should indicate no remaining uses.)
> +EOF
> +
> + print_toc 0 0 obsolete Obsolete <<EOF
> +Mechanisms that have been replaced, but have not yet been marked as
> +such (using the deprecated_ prefix). See deprecate and deprecated.
> +EOF
> +
> + print_toc 0 -1 deprecate Deprecate <<EOF
> +Mechanisms that are a candidate for being made obsolete. Once core
> +GDB no longer depends on these mechanisms and/or there is a
> +replacement available, these mechanisms can be deprecated (adding the
> +deprecated prefix), obsoleted (put into category obsolete), or deleted.
> +See obsolete and deprecated.
> +EOF
> +
> + print_toc -2 -1 legacy Legacy <<EOF
> +Methods used to prop up targets that still depend on deprecated
> +mechanisms. (The method's declaration and definition are hopefully
> +excluded from the count.)
> +EOF
> +
> + print_toc -2 -1 gdbarch Gdbarch <<EOF
> +Count of calls to the gdbarch set methods. (Declaration and
> +definition hopefully excluded from count).
> +EOF
> +
> + print_toc 0 -1 macro Macro <<EOF
> +Breakdown of macro definitions (and #undef) in configuration files.
> +EOF
> +
> + print_toc 0 0 regression Fixed <<EOF
> +Problems that have been expunged from the source code.
> +EOF
> +
> + # Check for invalid categories
> + for a in $all; do
> + alls="$alls all[$a] = 1 ;"
> + done
> + cat ari.*.doc | $AWK >> ${newari} '
> +BEGIN {
> + FS = ":"
> + '"$alls"'
> +}
> +{
> + # ari.*.doc: <BUG>:<COUNT>:<CATEGORY>:<DOC>
> + bug = $1
> + count = $2
> + category = $3
> + doc = $4
> + if (!(category in all)) {
> + print "<b>" category "</b>: no documentation<br>"
> + }
> +}
> +'
> +
> + cat >> ${newari} <<EOF
> +<center>
> +Input files:
> +`( cd ${wwwdir} && ls ari.*.bug ari.idx ari.doc ) | while read f
> +do
> + echo "<a href=\"${f}\">${f}</a>"
> +done`
> +</center>
> +
> +<center>
> +Scripts:
> +`( cd ${wwwdir} && ls *.sh ) | while read f
> +do
> + echo "<a href=\"${f}\">${f}</a>"
> +done`
> +</center>
> +
> +<!-- /body, update below using ../index.sh -->
> +</body>
> +</html>
> +EOF
> +
> + for i in . .. ../..; do
> + x=${wwwdir}/${i}/index.sh
> + if test -x $x; then
> + $x ${newari}
> + break
> + fi
> + done
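[Editorial note: the loop above probes the target directory and up to two parents for an executable index.sh and runs the first one found. A sketch of the same lookup as a reusable function — run_first_hook and the directory layout are hypothetical names for illustration:]

```shell
# Try the page directory and up to two parents for an executable hook;
# run the first one found on the page, or fail if none exists.
run_first_hook() {
    dir=$1; page=$2
    for i in . .. ../..; do
        hook="$dir/$i/index.sh"
        if test -x "$hook"; then
            "$hook" "$page"
            return 0
        fi
    done
    return 1            # no hook anywhere up the tree
}
```

Because the loop stops at the first match, a hook in the page's own directory shadows any hook further up.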
> +
> + gzip -c -v -9 ${newari} > ${newari}.gz
> +
> + cp ${ari} ${oldari}
> + cp ${ari}.gz ${oldari}.gz
> + cp ${newari} ${ari}
> + cp ${newari}.gz ${ari}.gz
> +
> +fi # update_web_p
> +
> +# ls -l ${wwwdir}
> +
> +exit 0