All the code here is provided in C or Python source and is free software. Enjoy!
While the documentation on lcov is accurate enough to use, I found it difficult to get started with; it was hard to know what I needed to do. Here I show two examples. The first is a simple case, all done in one directory, while the second is more realistic for a project with multiple source directories. Hopefully with these examples you will have a better idea how to do your own coverage analysis.
Presented as a simple shell script.
# Example using lcov
# C code compile options: "-O0 -fprofile-arcs -ftest-coverage"
# The compile/link step creates some .gcno files.
# Compilation done in this directory.
# Do this in a directory you own, otherwise firefox
# will refuse to read the html: 'Access Denied'.
# We don't show the source file, it is not our topic here.
cc -O0 -fprofile-arcs -ftest-coverage basetest.c -o basetest
# Options to the test executable cause different
# paths to be followed in basetest.c (because we wrote the C
# example source to do that)
# and create/update *.gcda files to demonstrate that
# the .gcda files are an accumulation.
./basetest a
./basetest c
./basetest b
# Now the *.gcda files contain data from all three runs.
echo "========="
# These use the *.gcda data and the source .c files
# to produce the .info file (basetest.lcov) and then the html.
lcov --capture --directory . --output-file basetest.lcov
genhtml basetest.lcov --output-directory out
# Now let's look at the html results with a browser:
firefox ./out/index.html
The source tree for the project (on github: libdwarf-code) has two directories with the source we wish to analyze: src/bin/dwarfdump and src/lib/libdwarf. We do our build in a separate base directory from the source, as usual. The source is in $HOME/dwarf/libdwarf-code and the build is in /home/builds/lcov/bld (just to be specific).
cd /home/builds/lcov/bld
# Ensure no leftovers from a previous build, then:
CFLAGS="-O0 -fprofile-arcs -ftest-coverage" $HOME/dwarf/libdwarf-code/configure
make
# Two arbitrary runs accumulated into .gcda files.
# Important to execute the executable from the
# directory it is built in. Do not move it.
# The options to dwarfdump are just examples here (that work).
src/bin/dwarfdump/dwarfdump -i src/bin/dwarfdump/dwarfdump
src/bin/dwarfdump/dwarfdump -l src/bin/dwarfdump/dwarfdump
# Show the results from both build directories of interest.
lcov --capture --directory src/lib/libdwarf --directory src/bin/dwarfdump --output-file test.lcov
genhtml test.lcov --output-directory out
PGE.com (Pacific Gas and Electric) provides a way (from one's account on pge.com) to download one's electrical usage in 15 minute increments over a date-range you choose.
It's hard to find the download button in the PGE web pages. Go to the pge page with your Usage values. Find the small green button labeled Green Button, following the graph of your usage; it is the download button. You can choose either csv form or xml form.
You can import the csv into a spreadsheet or you can just read it as text. The csv form will show 'Estimated value' on 15 minute periods where pge has not yet recorded your actual usage. We have sometimes seen it take several days for pge to clear up the Estimated values (the temporary guesses pge made were absurd).
I think the xml form is neat, but it has no indication of 'Estimated' records and it is unreadable unless one uses code to digest the xml. Here I present python code that does that (it is a command-line application; there is no graphical interface).
An example of use:
python3 pgetreeread.py pge_electric_interval_data_2019-05-25_to_2019-06-25.xml
Here are the first few lines of output from a pg&e xml file. Dates are presented in ISO extended date form.
Reading pge usage data
Option: [-h] print usage per hour default: True
Option: [--day] print usage per day, turn off by hour
Option: [--detail] print usage pge 15 minute interval
Option: [--date=daychoice] where daychoice is yyyy-mm-dd
  as in -date=2019-06-23 which selects a single day to report
file pge_electric_interval_data_2019-05-25_to_2019-06-25.xml
File opened ok
Hourly kwh 0.000000
Hourly 2019-06-19 00 kwh 1.589000
Hourly 2019-06-19 01 kwh 1.610000
Hourly 2019-06-19 02 kwh 1.571000
The python3 source code is available and I have marked it as being in the public domain.
Download or read as pgetreeread.py.
"Code Testing Through Fault Injection" in ":login;" magazine (December 2014, usenix.org) by Peter Gutmann offered a simple example from an unnamed friend: instrument malloc() so that on call N it returns NULL. Try with N from 0 up to some higher number. On any system using a dynamically-loaded libc this is easy to do. Here we set it up for the current Linux libc. Run your chosen executable and see how it fares.
Here is the basic C source. Using a script (example below) link this into the executable you want to test.
/* fakemalloc.in */
/* The script below modifies this into a temporary fakemalloc.c */
#include <stddef.h> /* for size_t */
static unsigned count;
extern void * __libc_malloc(size_t len);
void *malloc(size_t len)
{
    /* Perhaps the test should be count >= FAILCOUNT ??? */
    if (count == FAILCOUNT) {
        return 0;
    }
    count++;
    return __libc_malloc(len);
}
Here we build the executable to be tested. The executable to test can be anything; naturally you have some application in mind. Normally you would modify the link step of your executable creation. Here we assume you use make to build the executable under test. Let's say the executable is named mycode.
# This is a version of a link line presumed to be in Makefile.
# Fairly typical use of make.
# It is essential that fakemalloc.o is linked in before
# the standard libraries, but its exact location on the link
# line is normally not critical.
# This line is new, for the test:
cc -g -c fakemalloc.c
# This line adds fakemalloc.o:
cc -g $(CFLAGS) mycode.o fakemalloc.o -o mycode $(OBJS) $(LDFLAGS) $(LIBS)
So with that as background, we move to the example test script.
#!/bin/bash
targ=./mycode
subj=test.o
ct=0
while [ $ct -lt 500 ]
do
    echo '===== START TEST'
    rm -f core
    # rm $targ so the test executable gets rebuilt.
    rm -f $targ
    sed -e "s/FAILCOUNT/$ct/" < fakemalloc.in > fakemalloc.c
    echo TEST $ct
    # This is just so you can see the sed worked.
    grep if fakemalloc.c
    # Next line presumably does the compile of fakemalloc.c
    # and the build of mycode.
    make
    $targ "place args here" > junk.x
    # To see the output on stdout:
    cat junk.x
    if [ -f core ]
    then
        echo "CORE FILE EXISTS, test $ct"
    fi
    rm -f core
    echo '===== END TEST'
    ct=`expr $ct + 1`
done
I hope you find this useful for your testing.
Some key writings on C for C programmers (from Henry Spencer, Dennis Ritchie, and others) are at http://www.lysator.liu.se/c/ C programmers will enjoy these. Take a look.
Occasionally folks are unaware of or misunderstand the rules on writing POSIX signal handlers in C and C++. So I've written a brief overview of POSIX signal handling pitfalls. Corrections and suggestions welcome.
When viewing the binary representation of a file I prefer to use a format I first saw in a utility provided by IBM in OS360 in 1964. So I wrote C code I call hxdump that prepares output in a similar format. Download the hxdump.c C source. To compile it cc -g hxdump.c -o hxdump should suffice. The source has always said there are no restrictions on its use.
Sometimes (rarely) one has a file one wants to patch directly, rather than use an editor. Years ago I wrote (and made available via ftp) a program I call binpatch. Download the binpatch.c C source. To compile it cc -g binpatch.c -o binpatch should suffice. The source has always said there are no restrictions on its use.
See the page describing a palm extractor in C. The Palm devices involved have been out of production for many years now and it is not likely you have such a device on hand! It is a way to get data out of a palm data file (palm provided no way to do this!).
A standards-based search is quite useful, but tsearch has its peculiarities, so perhaps an example is of interest. See the page describing a tsearch example in C. And see a worked-out example in tsearch.c.
The implemented tree searches are binary tree, binary tree with Eppinger delete, balanced tree, and red-black tree. The hash search uses the same interfaces. To avoid conflicts with your standard library, the function names are prefixed with dwarf_. The implemented functions (in every version) are dwarf_tsearch(), dwarf_tfind(), dwarf_tdelete(), dwarf_tdestroy(), dwarf_twalk(), and dwarf_tdump().
This work is licensed under a
Creative Commons Attribution 4.0 International License.