Table of Contents
Static analysis tools
https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
https://clang.llvm.org/docs/ThreadSanitizer.html
https://clang.llvm.org/docs/AddressSanitizer.html
Running embedded software in the host
From the Embedded Artistry newsletter:
We have had a number of discussions this month about the idea of running as much code off-target as possible. Some of this is prompted by the Designing Embedded Software for Change course, where we frequently discuss ideas around breaking dependencies on the underlying hardware/OS. In other cases, it is prompted by a common question about testing: why should I bother writing tests off-target when I can run a unit testing framework on my device?
Let's focus on the testing aspect first. At the most basic level, the benefits of running unit and integration tests off-target are:
1. You eliminate the lengthy flashing cycle from the equation.
2. Your build machine has much more processing power, so you can run more tests in a shorter amount of time.
3. Largely freed from your embedded resource constraints, you can pull in additional frameworks and use testing strategies that may not even be feasible on your embedded target.
Taking a recent client project as an example, we can perform an incremental build and complete unit-test cycle off-target in less than two seconds. The same incremental build with the tests running on the target hardware takes 35 seconds. This difference in execution time really matters: if the test feedback cycle is short, you will run the tests much more often. Longer test cycles also add up over the course of a workday - if you run the tests 100 times, a 2-second test cycle consumes ~3.3 minutes in total, while the 35-second version consumes ~58.3 minutes. And we cannot overlook the fact that longer cycles increase the probability of becoming distracted and losing even more time.
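The daily cost figures above can be checked with a quick shell calculation (the 100-runs-per-day count is the example's own assumption):

```shell
# Daily cost of the test cycle at 100 runs/day, for the 2 s (off-target)
# and 35 s (on-target) cycle times from the example above.
runs=100
for cycle in 2 35; do
    total=$((runs * cycle))
    # print total seconds and approximate minutes (one decimal place)
    printf "%2ss cycle: %4ss total (~%s min)\n" "$cycle" "$total" \
        "$(awk "BEGIN { printf \"%.1f\", $total / 60 }")"
done
```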
The natural next step from off-target testing is creating a simulator program. After all, you've already put in the requisite work. All that remains is to create simulator implementations for the missing pieces. You might talk to actual peripheral hardware over a USB debug adapter. You might also simulate them by playing back recorded data, providing a fixed value, or generating random data. You can draw your display in a Qt window instead of the display hardware. You might connect to another simulated system through a local TCP/IP port. These capabilities give you the power to iterate much faster and more comfortably than on the target hardware.
When you can run your code off-target, whether through tests or a simulator, you also gain access to more tools and capabilities. Take the free() example discussed above: if you're using macOS or Linux with the standard libraries they provide, you get that support for free using glibc's MALLOC_PERTURB_ or macOS's malloc debugging variables (or without any extra work if you're using an appropriate OS version). Even better, you can run your code through Valgrind and catch more errors than malloc scribbling alone will reveal. You can link against sanitizers, which help you expose, diagnose, and resolve problems quickly: ThreadSanitizer (to catch data races), UndefinedBehaviorSanitizer (to catch undefined behavior during program execution, which usually manifests as hard-to-debug issues when optimizations are enabled), AddressSanitizer (to catch memory errors like use-after-free, out-of-bounds accesses, and double-free), and LeakSanitizer (to catch memory leaks). You can build with compiler options, like full stack-smashing protection or GCC's _FORTIFY_SOURCE=3, that would impact memory usage or performance too drastically to be useful on the target.
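A minimal sketch of two of these host-side facilities, assuming glibc for MALLOC_PERTURB_ and standard clang/gcc sanitizer flags; `ls` stands in for your own test binary, and tests.c is a placeholder name:

```shell
# glibc only: MALLOC_PERTURB_ fills allocated and freed heap memory with
# a byte pattern, so reads of uninitialized or freed memory misbehave
# immediately -- no rebuild needed. Demonstrated here on `ls`; point it
# at your own test binary in practice.
MALLOC_PERTURB_=165 ls >/dev/null && echo "perturbed run ok"

# For an off-target test build, link the sanitizers directly
# (standard clang/gcc flags; tests.c is a placeholder):
#   cc -g -fsanitize=address,undefined -o tests tests.c   # ASan + UBSan
#   cc -g -fsanitize=thread -o tests tests.c              # TSan needs its own build
```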
We think these benefits are well worth the investment. This does not mean we ignore the target hardware - there should still be regular (automated) tests run on actual devices and internal “dogfooding” of the system. But our goal is to deliver higher quality software at a faster pace, and the tools and capabilities described above allow us to do that in a repeatable way.
Editors
Code complexity
Coverage
https://qiaomuf.wordpress.com/2011/05/26/use-gcov-and-lcov-to-know-your-test-coverage/#more-225
https://michael.stapelberg.de/Artikel/code_coverage_with_lcov
Code comparison
Code visualization
Design
http://www.sparxsystems.com/products/ea/purchase.html#Desktop
http://www.state-machine.com/qm/index.php
http://code.google.com/p/wavedrom
http://www.graphviz.org/Gallery.php
asciiflow.com
idroo.com (online education whiteboard)
Article with ascii data structure visualizations (Embedded in academia)
Diagramming
https://www.sequencediagram.org
https://www.websequencediagrams.com/
www.yworks.com/products/yed (yEd graph editor)
Documentation
Markdown
Issue tracking
Version control
SVN
- svn checkout https://subversion.assembla.com/svn/emlin1 <my dir>
- svn update
- svn commit
See the basic svn workflow
Git
Building initial repository:
- git init
- git add <file>
- git commit -m <comment>
- git log #lists all commits (-p to see diffs)
- git pull #updates the local repository (equiv. to git fetch plus git merge)
- git tag -l #shows all tagged releases
- git checkout <tagname>
- git log vX.Y.Z..master #shows list of changes between a tag and latest
- git log -p vX1.Y1.Z1..vX2.Y2.Z2 MAINTAINERS #list changes with diff on a given file
- git diff #list all changes in working directory
Working with branches:
- git branch <branchname>
- git checkout <branchname> #move to just created branch (create also with -b option)
- git branch #list of local branches (-a to show also remote branches)
- git push origin origin/<old_name>:refs/heads/<new_name> :<old_name> #changes name of remote branch
Working with a specific release:
- git checkout -f $VERSION #reset working copy to historical version
- git checkout master #back to master branch
- git checkout -f #undo all local changes, resetting working copy to latest version in repository
Workflow:
- make changes
- git status
- git add <file> #stage changes before committing them
- git commit (-s option to sign changes, -a if all modified files should be part of commit)
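The workflow above, sketched in a throwaway repository (the path, file name, and identity values are placeholders):

```shell
# Create a scratch repo and run through status -> add -> commit.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"   # placeholder identity
git config user.name "Dev"
echo "hello" > file.txt
git status --short            # ?? file.txt (untracked)
git add file.txt              # stage the change
git commit -q -m "add file.txt"
git status --short            # clean: no output
```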
Recovering old stash commits
- gitk --all $( git fsck --no-reflog | awk '/dangling commit/ {print $3}' )
- git stash apply $stash_hash
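A sketch of the recovery idea in a throwaway repo (file and identity names are placeholders): a dropped stash commit is only reachable by hash, which is what the `git fsck` pipeline above digs up.

```shell
# Scratch repo with one commit and one stashed edit.
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
echo base > notes.txt; git add notes.txt; git commit -q -m base
echo edit >> notes.txt
git stash -q                           # stash the edit
stash_hash=$(git rev-parse stash@{0})  # remember its commit hash
git stash drop -q                      # now reachable only by hash (dangling)
git stash apply "$stash_hash"          # the edit is back in the working tree
```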
gitk graphically shows the history of the current repository; it can also show changes between tags, e.g. gitk vX.Y.Z..master
https://wiki.ubuntu.com/Kernel/Action/GitTheSource
https://www.atlassian.com/git/tutorial/git-basics
http://nvie.com/posts/a-successful-git-branching-model/
http://www.gitguys.com/topics/creating-and-playing-with-branches/
http://marklodato.github.io/visual-git-guide/index-en.html
linux kernel
The stable tree tracks the torvalds (mainline) repository and adds the stable bug-fix patches on top of it.
Understanding git
make
http://www.cs.colby.edu/maxwell/courses/tutorials/maketutor/
http://aegis.sourceforge.net/auug97.pdf
http://make.paulandlesley.org/autodep.html
Get the value of any makefile variable by adding this rule to the makefile:
print-%: ; @echo $*=$($*)
e.g. make print-SOURCE_FILES
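The print-% rule in action on a throwaway makefile (SOURCE_FILES and the file names are made up for the demo):

```shell
# Write a minimal makefile containing the print-% rule, then query a variable.
dir=$(mktemp -d)
cat > "$dir/Makefile" <<'EOF'
SOURCE_FILES = main.c drivers.c
all: ; @echo build
print-%: ; @echo $*=$($*)
EOF
make --no-print-directory -C "$dir" print-SOURCE_FILES
# prints: SOURCE_FILES=main.c drivers.c
```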
binutils
objdump --no-show-raw-insn -dC <binary>
nm -a --demangle --print-size --size-sort -t d my_obj.o
Useful for finding which (weak) symbols are fat and should therefore be eliminated from every object file (e.g. template functions from the Boost library, which have much of their code in headers only).
nm --size-sort --print-size obj.elf > m_size.txt
readelf -S and readelf -s to view section & symbol info
https://interrupt.memfault.com/blog/best-firmware-size-tools
https://jvns.ca/blog/2014/09/06/how-to-read-an-executable/
CppCon 2018: Matt Godbolt “The Bits Between the Bits: How We Get to main()”