Developer Guide
- Table of Contents
- Contributing to cvc5
- Pull Requests
- Architecture
- Coding Style Guidelines
- Formatting with clang-format
- Clean up includes with include-what-you-use (iwyu)
- Profiling Code
- Communicating with the User
- Assertions
- Dealing with expressions (Nodes and TNodes)
- Testing
- Source Code Tree
- cvc5 Build System
- Exporting Library Symbols
- Creating Releases
- Working with Contexts
- Options in cvc5
- Documentation
Contributors to cvc5 are expected to create their own fork of the cvc5 repository. All changes that should be merged into `main` should be submitted and reviewed via pull requests. The only exception to this rule are very minor changes (e.g., typos in comments) by developers with write access, which may be pushed directly onto `main` after all GitHub Actions builds have succeeded.
- All changes to be merged should reside on a branch of your fork of cvc5.
- GitHub Actions workers on the main cvc5 repository are shared by everyone and should primarily be used for testing pull requests and `main`. You get separate GitHub Actions workers for testing branches on your fork, so you are highly encouraged to use them in order to not congest the queue of the main repository.
- If you made major changes, break them down into small chunks and submit a pull request for each chunk in order to make reviewing less painful.
- Do not commit work in progress. Always create pull requests for self-contained, finished chunks. New features that are too big for one pull request should be split into smaller (self-contained) pull requests.
- If your pull request implements a new feature or major change, it should also update the NEWS file.
- Create a pull request via GitHub.
- If you are a member of the cvc5 organization, select at least one reviewer and label your pull request. If you are an external contributor, a team member will assign reviewers and labels for you. Label pull requests with respect to their complexity (`simple`, `moderate`, `complex`) and priority (`normal`, `minor`, `major`).
  - The level of complexity indicates the complexity of the code changes (and, as a consequence, of the reviewing task). Based on the complexity, one or two approving code reviews are required:
    - `simple` and `moderate`: require one approving code review.
    - `complex`: requires two approving code reviews.
  - The level of priority indicates how urgently the changes of a pull request need to get into `main`:
    - `major`: urgent changes/fixes that take precedence over pull requests marked as `normal`.
    - `minor`: not urgent; `normal` pull requests take precedence over `minor`.
- You need approval for your pull request from one (`simple`, `moderate`) or two (`complex`) reviewers before you can merge it.
- Commits pushed to the pull request should only contain fixes and changes related to reviewer comments.
- We require that pull requests are up-to-date with `main` before they can be merged. You can either update your branch on the GitHub page of your pull request, or (while on the branch of your pull request) merge the current `main` using `git pull origin main` (this creates a merge commit that can be squashed when merging the pull request) and then `git push` to update the pull request.
- If reviewers make non-trivial code suggestions, you might want to acknowledge them by adding a `Co-authored-by: REVIEWER_NAME <REVIEWER EMAIL>` line to the description of the PR. This line will then be included in the description of the commit when it is merged back to `main`.
- Merging pull requests to `main`: this step can be done on the GitHub page of your pull request.
  - Merging is only possible after GitHub Actions has succeeded.
  - Authors of a pull request are expected to perform the merge if they have write access to the repository (reviewers should not merge pull requests of developers who can do that themselves).
  - Creating merge commits for pull requests is not allowed (we prefer a linear history) and is therefore disabled via the GitHub interface.
  - Squash commits are preferred over rebase commits. If the intermediate commits are not of general interest, do a squash commit.
  - In most cases, GitHub should automatically add the pull request number (`#<number>`) to the title of the commit. Do not remove the number, and add it if it is missing (useful for debugging/documentation purposes).
  - Use the pull request description as the commit message if the description is more detailed than the commit message. Do not use the automatically generated commit message for squashed commits that contains all commit titles (it is not descriptive at all).
  - External contributors: make sure that the signed-off line is part of the commit message. We require external contributors to squash all commits and sign off the commit. These PRs should usually be ready to merge without modifying any commit message.
  - GitHub automatically adds a `Co-authored-by: …` line to the commit message as soon as anyone other than the original author committed to the PR. This also includes branch updates from `main`. Only include the `Co-authored-by: …` lines if the co-author contributed to the PR via suggestions or commits, but not via updating the branch.
  - Make sure to include the `Co-authored-by: …` lines that the PR author added to the description of the PR to acknowledge contributions from reviewers (see the example below).
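Putting these conventions together, a squashed commit message might look like the following (title, PR number, names, and emails are hypothetical):

Fix rewriting of nested quantifiers. (#1234)

Detailed description, taken from the PR description.

Co-authored-by: Reviewer Name <reviewer@example.com>
Signed-off-by: External Contributor <contributor@example.com>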
- When clearing the queue of approved PRs:
  - Do not merge PRs labeled as `do not merge`, even if they are approved. Typically, this label is added when there are minor fixes that need to be applied by the author before the PR is ready to merge. The author is responsible for indicating that everything has been addressed by removing this label.
  - Merge in order of submission, oldest first.
  - PRs labeled as `major` take precedence over ones labeled as `normal` and `minor`.
There are some good ideas in internet posts like these:

- https://alexgaynor.net/2015/dec/29/shrinking-code-review/
- https://www.thedroidsonroids.com/blog/splitting-pull-request

It is the developer's responsibility to make a PR easy to review. To quote from the second article above:

> First things first. It's the author's responsibility to make the code review easy. The reviewers can't spend as much time as you did digging into the context of the problem. Therefore, you need to present the code changes in a friendly form. Delivering your PR in small portions is one of the ways to do it, but doesn't exhaust the list.
- First, develop the code using your own working copy. Don't worry much about code review size at this stage; just worry about getting a coherent chunk of code implemented and working.
- The key idea is to separate step 1 from the code review process. Once you have a chunk of code you are happy with, you'll need to split it into several code review chunks. Ideally, no chunk is more than 100 lines of code; 50 is even better. This also gives you a chance to focus on making sure your code is well-documented and well-tested as you check it in.
- First, check in any support code needed for your new code: new files, changes to makefiles, new options, etc. At this point, the files don't contain code (or maybe contain just stubs) and the options aren't used anywhere in the code.
- Second, check in .h files with interfaces and documentation. The .cpp files contain stubs; they still don't do anything.
- Now, for each method in the .h file, add an implementation. It's OK to implement more than one method in a single PR, but only if the number of lines is still 50-100. Now is also the time (important!) to add unit tests for each new method.
- Finally, once all of the methods are implemented, you can "turn on" the new code by adding calls to it in the existing code.

Note that by following this process, a single logical "chunk" of code from the point of view of your work might be broken up into many small pull requests. It takes more effort for the developer, of course, but the result is code that is better documented, better tested, and better reviewed.
Now, let's say you're refactoring something and it touches lots of code. How do you break this up? Here are some ideas.

- First, focus on the method or methods that need to be refactored, i.e., the code where the main functionality change occurs. It's OK to make a temporary, separate version of these methods so that they can be tested and reviewed before they are integrated with the main code.
- Make sure you update the unit tests and documentation for the refactored methods. At this point, you still have both the old and the refactored methods in the code base.
- Submit a PR that switches from the old code to the refactored code. It's OK if this touches tons of files, because conceptually it's just a search and replace, so this PR can be big; but it should not do anything besides switching over.
- Finally, remove the old code and tests.
- PRs labeled as `major` should be reviewed ASAP (within 2 days, ideally).
- PRs should not sit around unattended for months; start working on them incrementally and submit review chunks.
- PRs that are huge and cannot be split up easily can be reviewed file by file. GitHub supports this quite well already; you can mark files as viewed and such.
- Try to split up huge PRs if possible; it will speed up the reviewing process.
- Feel free to bug people regularly if you're waiting for reviews, but don't forget that all of us only have so much time to dedicate to reviews per day (this also relates to item 4: the smaller the PRs, the faster the turnaround).
- Review as thoroughly as possible and reasonable. It is not required that you understand every piece of code you review (semantically, not syntactically); it is great if you do, but if it takes you weeks to achieve that, it might not be worth it.
You can find notes on the architecture of cvc5 at https://github.com/cvc5/cvc5/wiki/doc/cvc4-architecture-notes.pdf
See details here: https://github.com/cvc5/cvc5/wiki/Code-Style-Guidelines
To set up the `git clang-format` plugin, follow these steps:

- Install Clang version >= 9.0.0.
- Download the `git-clang-format` script.
- Make it executable (`chmod +x git-clang-format`).
- Put it somewhere in your path.
- Check that `git clang-format -h` works (notice there's no dash after `git`).
Before committing, you can run:
git add --update # Stage your changes
git clang-format HEAD # Format the changed lines
If you want to format changes from multiple commits, e.g. the last ten commits, do:
git clang-format HEAD~10 # Format the changes in the last ten commits
For (neo)vim there is the vim-clang-format plugin for formatting files and blocks of code. For emacs there is the analogous clang-format.el.
Clang-format uses a configuration file that describes the formatting style. Make sure that you use the `.clang-format` file in the cvc5 repository (this should happen automatically with `git clang-format`).
iwyu is a tool based on llvm/clang that analyzes your code and recommends changes to the set of included files. In particular, it detects types that can be forward declared, allowing includes to be removed from header files. This can be one way to significantly reduce compilation time.
iwyu is a bit tricky to compile, as it is tied to a specific llvm/clang version. If you can, simply use a packaged version (e.g., Ubuntu, Arch, Homebrew). While iwyu can be integrated with cmake, this is hard to get right as well. It is usually much more reliable to use iwyu with the compilation database that cmake generates.
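A common way to obtain this database is the standard CMake option CMAKE_EXPORT_COMPILE_COMMANDS (plain CMake, not cvc5-specific), which writes compile_commands.json into the build directory:

cd build
cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=ON .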
This approach allows running iwyu on a single source file; iwyu will print suggestions for this source file and the header file of the same name. First, figure out the location of the C++ standard library shipped with clang, for example `/usr/lib/llvm-10/lib/clang/10.0.0/include` (this is probably not necessary on macOS), and select a source file to analyze, for example `src/expr/node.cpp`. Then use `iwyu_tool` as follows:

iwyu_tool -p <build-dir> <source-file> -- -Xiwyu --cxx17ns -I<stdlib-dir>

If you run it from the build directory (`build-dir=.`), you can use it as follows:

iwyu_tool -p . ../src/expr/node.cpp -- -Xiwyu --cxx17ns -I/usr/lib/llvm-10/lib/clang/10.0.0/include/
`iwyu_tool` and `iwyu` itself have some options. For example, `-Xiwyu --cxx17ns` instructs `iwyu` to use the new notation for nested namespaces when suggesting forward declarations. Other interesting options: `--check-also` allows checking for includes in other header files as well; `--keep` should ignore certain includes, though it seems to be broken.
Note that iwyu is not perfect: it may suggest removing headers that cannot be removed; it may suggest forward declaring types where this is not possible; and it may suggest replacing includes by others where it does not make sense. You should thus check every change manually!
Configure cvc5 with
./configure.sh debug --coverage
cd build
make
Before using cvc5, first reset the code coverage counters to zero with
make coverage-reset
After using cvc5, you can generate a coverage report for all cvc5 executions since the last `coverage-reset` with

make coverage

You will find the coverage report in `build/coverage`.
For coverage of the C++ tests only, configure cvc5 with

./configure.sh debug --coverage

To also cover the other language bindings, configure with

./configure.sh debug --coverage --auto-download --cocoa --all-bindings

Then build the tests, reset the counters, run the API tests, and check for uncovered API functions:

cd build
make build-tests
make coverage-reset
ctest -R unit/api/<lang>/ --output-on-failure # for <lang> = {cpp, c, java, python}
make coverage-json
../contrib/uncovered-api-functions.py
Nightly coverage reports are available on our website.
All output should go through Debug, Warning, Notice, Chat, and Trace.
Examples:
#include "util/output.h"
cvc5::Debug("arith") << "foo!"; // prints "foo!" if arith debugging is on
cvc5::Warning() << "Equivalence classes didn't get merged."; // prints unless user gave -q or in SMT-COMP mode
cvc5::Notice() << "Hi, so you don't get bored, here are some statistics: " << stats; // if user gives -v
cvc5::Chat() << "I feel like doing some theory propagation."; // if user gives -vv
cvc5::Trace("arith") << "<ralph>I'm now calling a function!</ralph>"; // if user gives -vvv
For non-debug/tracing builds, `cvc5::Debug` and `cvc5::Trace`, respectively, are no-ops and everything will be inlined away.
Use assertions for pre- and postconditions, invariants, and any other assumption you require to hold for a given piece of code. Use plenty, but in a meaningful way. Assertions are enabled by default in `debug` builds. You can explicitly turn them on with `--assertions` and off with `--no-assertions` at configure time.
cvc5 provides the following special macros for assertions:

- `Assert`: Use as you would use `assert` from `<cassert>`. Aborts if the asserted condition fails. When assertions are turned off, this is a no-op and everything is optimized away.

#include "util/assert.h"
cvc5::Assert(condition); // asserts that condition is true
cvc5::Assert(condition) << "error message on fail"; // asserts that condition is true, with a custom error message

- `AlwaysAssert`: Use for strong assertions that will always be checked, even when assertions are disabled. Use very sparingly, and only if the performance overhead is negligible. Aborts if the asserted condition fails (same as `Assert()`).

#include "util/assert.h"
cvc5::AlwaysAssert(condition);
cvc5::AlwaysAssert(condition) << "error message";

- `Unreachable`: Use to indicate that code is intended to be unreachable. This macro is always checked, even when assertions are disabled, and aborts when executed.

#include "util/assert.h"
cvc5::Unreachable(); // flags (supposedly) unreachable code; fails even under --no-assertions
cvc5::Unreachable() << "error message"; // custom error message

- `Unhandled`: Use to indicate that a particular case is unhandled, e.g., in a switch statement that was designed to handle all cases. This macro is always checked, even when assertions are disabled, and aborts when executed. Ideally, we don't want any occurrences of this macro; if used, then only in combination with a corresponding issue that addresses the unhandled case.

#include "util/assert.h"
cvc5::Unhandled(); // flags an unhandled case; fails even under --no-assertions
cvc5::Unhandled() << "error message"; // custom error message
- Use `Node` to store nodes in containers or member variables.
- Make functions take `TNode` arguments and return `Node`.
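As a minimal sketch of these two rules (the class and its methods are hypothetical; `Node` and `TNode` themselves are explained below):

#include <map>
#include "expr/node.h" // Node, TNode (namespace qualifiers elided)

class RewriteCache {
 public:
  // Take TNode arguments: cheap to pass, and the caller's Node keeps the node alive.
  // Return Node so that the result stays alive at the caller.
  Node lookup(TNode n) const
  {
    auto it = d_cache.find(n);
    return it == d_cache.end() ? Node() : it->second;
  }
  void insert(TNode from, TNode to) { d_cache.emplace(from, to); }

 private:
  std::map<Node, Node> d_cache; // members and containers store Node
};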
Expressions (or nodes) in cvc5 are generally represented by `Node` or `TNode` objects. Both reference a `NodeValue` that is actually stored in a `NodeManager`, but they differ in that `Node` maintains a reference count while `TNode` does not. In standard C++ terms, we conceptually have `Node = std::shared_ptr<NodeValue>` and `TNode = std::weak_ptr<NodeValue>`.
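The analogy also carries over to the lifetime hazards discussed below; in plain standard C++ (an illustration only, cvc5 does not actually use std::shared_ptr):

#include <cassert>
#include <memory>

int main()
{
  std::weak_ptr<int> t; // "TNode": observes, does not own
  {
    auto n = std::make_shared<int>(4); // "Node": owns a reference count
    t = n;
    assert(!t.expired()); // safe while n is alive
  }
  assert(t.expired()); // n is gone; t must no longer be used
}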
The `NodeValue` type is not to be used by anyone except `Node`, `TNode`, and `NodeManager`. While nodes are conceptually immutable (and `Node` and `TNode` enforce this), `NodeValue` is mutable and thus not suited to be used directly, even internally within the cvc5 library. Furthermore, `Node` and `TNode` both implement a minimum of sanity checking (at least in assertions-enabled builds) to ensure that they are used correctly: never referencing a `NodeValue` with zero reference count, for example.
Under the hood, one template type (`NodeTemplate<>`) implements both `Node` and `TNode` (`Node` and `TNode` are typedefs to instantiations of the `NodeTemplate<>` template with reference counting turned on and off, respectively). Thus, `Node` and `TNode` provide the same interface and can be exchanged easily without compiler troubles (if it is safe to do so, of course).
The `Node` type maintains reference counts and thus ensures proper lifetimes for all nodes. When a `Node` object is created pointing to a `NodeValue`, it increases the reference count on that `NodeValue`. When the `Node` is destructed (or points away), it decrements the reference count. As soon as the reference count is zero, the respective `NodeValue` is eligible for reclamation by the `NodeManager`: it will soon be deallocated and can no longer be used.
`TNode` does not maintain reference counts, which avoids some overhead that can be significant in certain scenarios. However, it is also prone to issues when the underlying `NodeValue` is deallocated while a `TNode` is still being used. Most importantly: if in doubt, use `Node`.
Using `Node` is always safe; using `TNode` instead can provide some performance benefits. Hence, the question actually is: when is it safe to use `TNode`? Whether a `TNode` can be used depends on some `Node` keeping the underlying `NodeValue` alive. As long as this is ensured, using `TNode` is safe. Some typical examples of safe and unsafe usage of `TNode` are listed below.
One main use case for `TNode` is function arguments. Most of the time, it is safe to pass a `Node` to a function that expects a `TNode` argument. As the `Node` `n` lives in a scope above the function `f`, it outlives the function scope of `f` and thus can be safely referred to by the `TNode` `tn`.
// Safe: the caller's Node n outlives the call to f.
Node f(TNode tn) { return tn.notNode(); }
void foo() {
  Node n = ...;
  Node not_n = f(n);
}
Note that this reasoning is not universal: if `f` can modify `n` (directly or indirectly), the `TNode` `tn` may not be safe to use. This can be the case if `n` lives in some container that is passed to `f` as well, or if `f` is a member function with access to the member variable `n`.
// Unsafe: f clears the container that keeps the node referenced by tn alive.
Node f(TNode tn, std::vector<Node>& v) { v.clear(); return tn.notNode(); }
void foo() {
  std::vector<Node> vn = ...;
  Node not_n = f(vn[0], vn);
}
// Unsafe: f overwrites the member variable n that tn refers to.
class A {
  Node f(TNode tn) { n = ...; return tn.notNode(); }
  void foo() {
    Node not_n = f(n);
  }
  Node n;
};
It may be tempting to reuse `tn`, as the variable "is already there", for example to do some preprocessing on the argument. However, `preprocess` most likely returns a `Node` that has no remaining reference once it is stored in a `TNode`. Instead, the result of `preprocess` should be stored in a new `Node` variable and used from there.

// Unsafe: the Node returned by preprocess dies immediately after being
// stored in the TNode tn, which is then left dangling.
Node f(TNode tn) {
  tn = preprocess(tn);
  return tn.notNode();
}
void foo() {
  Node n = ...;
  Node not_n = f(n);
}
As soon as we store nodes in data structures that have a long lifetime, it becomes difficult to argue with scopes. Thus, `TNode` should generally not be used to store nodes in data structures. In the following example, nodes stored in `v` may not be safe, as `n` goes out of scope as soon as `f` returns, and whether there is another `Node` pointing to the same node is not clear. We should thus use `std::vector<Node>` instead.
// Unsafe: v may end up holding TNodes whose nodes no longer exist.
class A {
  void f(Node n) { v.emplace_back(n); }
  std::vector<TNode> v;
};
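The safe variant stores `Node`, which keeps the reference count up for as long as the vector holds the element:

class A {
  void f(TNode tn) { v.emplace_back(tn); } // a TNode argument is fine here,
  std::vector<Node> v;                     // because the stored element is a Node
};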
`TNode` can be used in containers if another container (controlled by the same entity) ensures the existence of `Node` objects for all `TNode` instances. This is the case, for example, for certain additional caches and lookups: `all` holds references for all nodes passed to `A` via `add`. The additional containers `processed` and `simplified` only hold a subset of these, and thus `TNode` is sufficient. Note that we need `Node` again for the mapped type of `simplified`, as we store new nodes there.
class A {
  void add(TNode tn) { all.emplace_back(tn); }
  void process() {
    for (const auto& n : all) {
      if (processed.find(n) == processed.end()) { // only handle unprocessed nodes
        processed.insert(n);
        Node s = simplify(n);
        if (s != n) {
          simplified.emplace(n, s);
        }
      }
    }
  }
  std::vector<Node> all;
  std::set<TNode> processed;
  std::map<TNode, Node> simplified;
};
Several properties of nodes, in particular their relative ordering as per the standard comparison operators, depend on their id, which is dynamically chosen upon node construction. Hence, if a node is deallocated and then reconstructed, it is not necessarily the same. Similarly, `TNode` objects do not become "safe again" when a node is re-constructed. Note that in reality, a re-created node oftentimes does have the same underlying `NodeValue`, since zero-reference-count nodes are not immediately reclaimed (they are in the so-called zombie state, and they are reclaimed in bulk when there are enough zombies). But because they are immediately eligible for reclamation, you cannot rely on this.
This also implies that it is not sufficient that "a corresponding `Node` exists every time a `TNode` is used"; we need to ensure that "a corresponding `Node` exists continuously as long as a `TNode` is used".
A common practice is to take function arguments as `const&` to avoid (oftentimes expensive) copy operations. Albeit not trivially copyable (in the strict standard C++ sense of `std::is_trivially_copyable`), copying a `TNode` is almost certainly compiled down to a single register assignment when compiler optimization is enabled. Using a reference (which internally copies a pointer to the referenced node) can thus not be expected to be any faster, but may instead pose problems for caching: while accessing a function argument `TNode tn` only touches the current stack frame (where `tn` lives) and the `NodeValue` in the `NodeManager`, using a reference may require additionally accessing the place where the referenced `TNode` lives.
While move semantics (`&&`, `std::move`) can be greatly beneficial, their benefit depends on the possibility of making a move more efficient than a copy. Given that the special "null value" (`Node()`) is reference counted as well, every new `Node` requires increasing some reference count. Using `Node(Node&& n)` instead of `Node(const Node& n)` could merely change which reference counts are changed when, but not their overall number. We thus do not offer move semantics and thereby keep the interface (slightly) simpler.
In this section, we explain the three different types of tests we have in cvc5: regression tests, unit tests, and system tests. Information on how to run the different types of tests can be found in the INSTALL guide.

Regression tests consist of input files and expected outcomes, so they perform end-to-end testing of the cvc5 binary. All the regression tests are located in `test/regression/`. See the README in the corresponding directory for details on the format, the organization, and how to add new regressions.

Unit tests test a small portion (a unit) of the code base. We use CxxTest to implement the unit tests. To run the unit tests, CxxTest needs to be installed on the machine. All the unit tests are located in `test/unit/`. For instructions on how to run the unit tests, refer to the INSTALL guide.
There are three kinds of unit tests:

- White-box tests test the system from a white-box point of view: they have access to the private functionality of classes (that is, the usual C++ access restrictions do not apply), and they are written (typically by the author of the tested class) to test the internal functions and invariants of a class implementation. If the implementation details of a class change, one or more tests will likely change too. By convention, white tests are named `*_white.h`. For example, for the `Node` class, which is in the `cvc5::expr` namespace, the test is under the `expr/` directory in `node_white.h`.
- Black-box tests test the system from a black-box point of view: they have access only to the public functionality of classes (that is, the usual C++ access restrictions do apply), and they are written (typically not by the author of the tested class) to test the external interface and invariants of a class. If the implementation details of a class change, the tests do not change; if the external interface of a class changes, the tests should be updated. By convention, black tests are named `*_black.h`. For example, for the `Node` class, which is in the `cvc5::expr` namespace, the test is under the `expr/` directory in `node_black.h`.
- A subcategory of black tests are API tests; these test the system from a public point of view: they have access only to the public API of the cvc5 libraries, i.e., the exported symbols of the various libraries. The usual C++ access restrictions apply, as do the usual linking procedures against the library. Essentially, these public tests are black tests that test the public functionality of a public class. It is useful to separate them from the black tests in order to test that the exported symbols of a library are adequate (TODO: not currently enforced). API tests are in the `test/unit/api` folder.
-
To add a new test, create a test header file under test/unit
(in a the
subdirectory of the corresponding cvc5 module) and add it to the
CMakeLists.txt
file in the corresponding subdirectory. If your test is a
white-box test (it calls private functionality of the tested class directly and
was written knowing the implementation details), its name should include
White
, e.g., NodeWhite
, and it should be in a file like
test/unit/expr/node_white.h
, and you should add it using
cvc5_add_unit_test_white
in test/unit/expr/CMakeLists.txt
. If your test is
a black-box test (it calls only the public functionality of the tested class
and was written knowing the interface only), its name should include Black
,
e.g., NodeBlack
, and it should be in a file like
test/unit/expr/node_black.h
, and you should add it using
cvc5_add_unit_test_black
in test/unit/expr/CMakeLists.txt
. If your test is
an API test (it calls only the public functionality of the tested class, which
is exported in the public-facing interface of the library), its name should be
in the test/unit/api
folder, e.g., ExprPublic, and it should be in a file
like test/unit/api/solver_black.h
, and you should add it using
cvc5_add_unit_test_black
in test/unit/api/CMakeLists.txt
.
If you add your test using `cvc5_add_unit_test_black` or `cvc5_add_unit_test_white` to the `CMakeLists.txt` file in the same directory as the unit test and you're using the GNU C++ compiler, it will be compiled and linked correctly: that is, if it's a white-box test, the compiler will ignore access restrictions; if it's a black-box test, it can still access hidden symbols; and if it's a public test, it must only access the public-facing, exported symbols of the appropriate library.
Your test methods are named `test*`, e.g., `testNull()`. They return `void` and take no arguments.
There are numerous testing assertions you can make inside your testing code. For example:
TS_ASSERT(1 + 1 > 1);
TS_ASSERT_EQUALS(1 + 1, 2);
There's also a `TS_FAIL()` that fails unconditionally:
void testSomething()
{
TS_FAIL("I don't know how to test this!");
}
…and a `TS_WARN()`:
void testSomething()
{
TS_WARN("TODO: This test still needs to be written!");
}
In order to run, CxxTest takes a header file, generates a new cpp file from it, and then compiles the cpp file into a binary. The binaries for the unit tests live in the build directory. If a unit test has the path `cvc5/test/unit/foo/bar.h`, then the binary will have the path `<build dir>/test/unit/foo/bar` and the CxxTest cpp file will have the path `<build dir>/src/test/unit/foo/bar.cpp`.
If you have compiled cvc5 in debug mode, then the unit tests binaries can be run in gdb.
Tests are executed in the order that they appear in the header file. Moving a broken test to the top of the class definition (temporarily) makes it execute before the other tests. This is a useful trick for making debugging with gdb easier.
To recompile all of the tests, you can run `make check`. If you just run `make` after changing the source of cvc5 or a unit test, the test will not be recompiled.
The most common errors with unit tests are duplicating method names, improper copy-'n'-paste between tests, etc.
If you get strange errors, make sure you called your test class something unique (e.g., `ExprBlack` for black-box tests on `Expr`). If you get an error that looks like this:
expr/expr_black.cpp:20: error: expected initializer before ‘suite_ExprBlack’
expr/expr_black.cpp:23: error: ‘suite_ExprBlack’ was not declared in this scope
expr/expr_black.cpp: In member function ‘virtual void TestDescription_ExprBlack_testDefaultCtor::runTest()’:
expr/expr_black.cpp:28: error: ‘suite_ExprBlack’ was not declared in this scope
expr/expr_black.cpp: In member function ‘virtual void TestDescription_ExprBlack_testCtors::runTest()’:
expr/expr_black.cpp:34: error: ‘suite_ExprBlack’ was not declared in this scope
...etc...
…then you’re missing a closing semicolon at the end of your test class definition. (They’re easy to forget.)
Note: Commenting out unit tests may not work as expected.
Bugzilla bug #44 states:
> As our unit-test database is growing, the time to compile all the tests is getting increasingly unbearable, especially as every change in the source spawns recompilation of all tests. I know it might be impossible, but it would be wonderful if we could get around this somehow.
Fortunately, the answer is yes, this rebuilding can be avoided.
But it must be stressed that before any commit, you should always let the build tree rebuild, relink, and rerun all tests!
To rebuild a single unit test, use `make <unit test>`, e.g., `make map_util_black`, and then run it either by calling the binary of the test or by using `ctest`.
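For example (run from the build directory; ctest selects tests by regular expression with -R):

make map_util_black
ctest -R map_util_black --output-on-failure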
System, or API, testing is a form of regression testing on the cvc5 libraries rather than the driver. Rather than taking `.cvc` and `.smt2` instances as input, a system test is a source file that is built and linked against the cvc5 libraries and uses their API, bypassing the usual cvc5 driver. These tests are located under `test/system/`.
Every non-exceptional line of code should be covered by our tests. To generate code coverage information using gcov, configure cvc5 with `--coverage` (for more detailed instructions about coverage testing, see [how to generate code coverage reports](https://github.com/cvc5/cvc5/wiki/Developer-Guide#code-coverage-reports)), compile the project, then run `make coverage`. This generates an HTML report of the coverage information in `<build_dir>/coverage`. For more details on how to generate coverage reports and test for uncovered functions, see Profiling Code.
- `cmake` CMake configurations and finder modules for building cvc5.
- `contrib` Maintainer scripts and other things that aren't directly part of cvc5.
- `docs` Files to generate API documentation.
- `examples` Examples on how to use cvc5's API in different languages.
- `licenses` Additional licenses of cvc5's dependencies.
- `proofs/signatures` cvc5's LFSC signatures.
- `src` The cvc5 sources:
  - `api` Public API.
  - `base` Assertion/check macros and other base utilities.
  - `bindings` Language bindings.
  - `context` Context-dependent data structures.
  - `decision` Decision engine and decision strategies.
  - `expr` Expression and node manager module.
  - `include` Systemwide includes.
  - `lib` Replacement functions for some library functions not available on all platforms.
  - `main` Source for the main cvc5 binary.
  - `options` Options module.
  - `parser` Parsers for supported cvc5 input formats.
  - `preprocessing` Preprocessing passes.
  - `printer` Printers for supported cvc5 output formats.
  - `proof` Proof manager and related data structures.
  - `prop` Propositional engine and SAT solvers.
  - `smt` SMT engine, model, and commands.
  - `smt_util` SMT-specific utility classes.
  - `theory` Theory engine and all theory solvers.
  - `util` Utility classes.
cvc5's build system uses CMake and provides the following pre-defined build profiles:

- competition: maximally optimized, assertions and tracing disabled, muzzled.
- debug: unoptimized, debug symbols, assertions, and tracing enabled.
- production (default): optimized, assertions and tracing disabled.
- testing: optimized debug build.
The default build profile is production, which you will get if you just run `./configure.sh`. To choose a different build profile, use:

./configure.sh <profile>

You can customize a build with the options provided via `./configure.sh -h`. For example, to compile a debug build with profiling support enabled, use:

./configure.sh debug --profiling
You can configure multiple cvc5 builds to be built in separate build directories. By default, `./configure.sh` creates a directory `build`; with `./configure.sh --name=<build-directory>` you can create a build directory with name `<build-directory>`.
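For example, to maintain a debug build and a production build side by side (the directory names are arbitrary):

./configure.sh debug --name=build-debug
./configure.sh production --name=build-production
cd build-debug && make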
You can determine the build type of a cvc5 binary with the `--show-config` command-line option. For example, the following is the output of `--show-config` for a production build.
This is cvc5 version 1.0-prerelease [git master dde3aac0]
compiled with GCC version 10.2.0
on May 5 2021 14:07:39
Copyright (c) 2009-2020 by the authors and their institutional
affiliations listed at http://cvc4.cs.stanford.edu/authors
cvc5 is open-source and is covered by the BSD license (modified).
THIS SOFTWARE IS PROVIDED AS-IS, WITHOUT ANY WARRANTIES.
USE AT YOUR OWN RISK.
cvc5 incorporates code from ANTLR3 (http://www.antlr.org).
See licenses/antlr3-LICENSE for copyright and licensing information.
This version of cvc5 is linked against the following non-(L)GPL'ed
third party libraries.
CaDiCaL - Simplified Satisfiability Solver
See https://github.com/arminbiere/cadical for copyright information.
SymFPU - The Symbolic Floating Point Unit
See https://github.com/martin-cs/symfpu/tree/CVC4 for copyright information.
This version of cvc5 is linked against the following third party
libraries covered by the LGPLv3 license.
See licenses/lgpl-3.0.txt for more information.
GMP - Gnu Multi Precision Arithmetic Library
See http://gmplib.org for copyright information.
LibPoly polynomial library
See https://github.com/SRI-CSL/libpoly for copyright and
licensing information.
cvc5 is statically linked against these libraries. To recompile
this version of cvc5 with different versions of these libraries
follow the instructions on https://github.com/cvc5/cvc5/blob/main/INSTALL.rst
See the file COPYING (distributed with the source code, and with
all binaries) for the full cvc5 copyright, licensing, and (lack of)
warranty information.
version : 1.0-prerelease
scm : git [master dde3aac0]
library : 1.0.0
debug code : no
statistics : yes
tracing : no
dumping : yes
muzzled : no
assertions : no
coverage : no
profiling : no
asan : no
ubsan : no
tsan : no
competition : no
abc : no
cln : no
glpk : no
cadical : yes
cryptominisat : no
gmp : yes
kissat : no
poly : yes
editline : no
symfpu : yes
To see whether the binary is statically linked or not, use the `ldd` command:
$ ldd bin/cvc5
linux-vdso.so.1 (0x00007ffd4c5ae000)
libcvc5parser.so.1 => /home/cvc5/git/cvc5/build/src/parser/libcvc5parser.so.1 (0x00007f377ad28000)
libgmp.so.10 => /usr/lib/libgmp.so.10 (0x00007f377ac6c000)
libcvc5.so.1 => /home/cvc5/git/cvc5/build/src/libcvc5.so.1 (0x00007f3779309000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f377912c000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007f3778fe7000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f3778fcd000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f3778dfe000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f377afc4000)
The above output is from a dynamically linked cvc5 binary, which is dynamically linked against the cvc5 libraries `libcvc5parser.so.1` and `libcvc5.so.1`. This is the default behavior of the build system. If the cvc5 libraries are missing from the list, the binary was statically linked against the cvc5 libraries and dynamically linked against system libraries. This can be achieved by configuring with `./configure.sh --static --no-static-binary`.
$ ldd bin/cvc5
linux-vdso.so.1 (0x00007ffd4c5ae000)
libgmp.so.10 => /usr/lib/libgmp.so.10 (0x00007f377ac6c000)
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f377912c000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007f3778fe7000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f3778fcd000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f3778dfe000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f377afc4000)
If you want a fully static binary, which is also statically linked against system libraries, use `./configure.sh --static`.
$ ldd bin/cvc5
not a dynamic executable
Note that on some platforms compiling a fully static binary may be difficult or impossible. In these cases, use the `--no-static-binary` configure option.
When you create new source and header files, make sure to add them via `git add …` and add them to the corresponding `CMakeLists.txt` file. If the file you create is in a directory with a `CMakeLists.txt`, add it to this file; otherwise, in most cases you want to add it to `src/CMakeLists.txt`.
If supported by the compiler, libcvc5 hides all symbols by default (with `-fvisibility=hidden`). This permits more aggressive optimization of the library. Those symbols intended to be the public interface to cvc5, however, must be explicitly exported. Public symbols are marked with `CVC5_EXPORT`, e.g.,
class CVC5_EXPORT Sort
{
...
};
std::ostream& operator<<(std::ostream& out, const Sort& s) CVC5_EXPORT;
See http://gcc.gnu.org/wiki/Visibility for more information on visibility.
- Update copyright headers: run `contrib/update-copyright.pl`, which will update the directories `src/`, `examples/`, and `test/`.
- Update files which refer to the version:
  - Update the copyright year in `Configuration::copyright()`.
  - Update "cvc5 prerelease version 1.x" to "cvc5 release version 1.x" in INSTALL.md. Note that after the release it must be updated again to "cvc5 prerelease version 1.(x+1)".
  - Update NEWS.
- Update the library version in CMakeLists.txt (as sketched below):
  - Increment the value of `CVC5_SOVERSION` by 1 (for version 1.7 its value is 6). For more information on shared library compatibility, see:
  - Erase `CVC5_EXTRAVERSION` ("-prerelease" → "") in that file. After the release, that should be reverted and the `CVC5_MINOR` number should be bumped.
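In CMakeLists.txt, these two edits have roughly the following shape (a sketch; check the actual variable definitions in the file):

set(CVC5_SOVERSION 7) # was 6: incremented by 1 for the release
set(CVC5_EXTRAVERSION "") # was "-prerelease"; revert after the release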
- Update the AUTHORS and THANKS files.
- Check the list of publications on the website for completeness.
- Publish sources and binaries on the webpage. Note that the commands below should run on barrett1, which has all required libraries installed.
  - Run the nightly script to create the release binaries:
    ./nightly.sh -b "win64-opt x86_64-linux-static-opt" -j 16 -e no -f -r 0 dest pub
    These flags ensure that no email is generated, only the Windows and Linux builds are built, and only regress level 0 is run.
    - Note that you must use `-r 0` only if the nightlies passed and only TRIVIAL changes were made.
    - Also note that the nightly script will only work if you are authorized to use docker (necessary for the Windows build).
  - Go to https://github.com/cvc5/cvc5/releases and create a release tag.
    - The win64 and x86_64-linux binaries should be added as assets for the release.
    - Add the "changes since" section from NEWS to the description of the release.
  - Copy the x86_64-linux binary to `/afs/cs.stanford.edu/u/barrett/www-cvc4/downloads/builds/x86_64-linux-opt/cvc4-<version>-x86_64-linux-opt`.
  - Copy the win64 binary to `/afs/cs.stanford.edu/u/barrett/www-cvc4/downloads/builds/win/cvc4-<version>-win64-opt.exe`.
Contexts are a particularly complicated part of the machinery of cvc5, and much of cvc5’s work hinges on them.
The class "Context" is an abstraction for a rollback-able memory. One can
create a Context
:
Context c;
and some objects in it (use the CDO<>
wrapper to make a simple, rollback-able
object, using a pointer to the Context
as the constructor argument):
CDO<int> i(&c);
CDO<int> j(&c);
i = 4;
j = 3;
One then "pushes a scope" on the `Context`:

c.push();

Further edits to `i` and `j` are then rolled back with `pop()`:

i = 10000;
j = -5;
assert(i == 10000 && j == -5); // succeeds
c.pop();
assert(i == 4 && j == 3); // also succeeds
Types wrapped by `CDO<>` simply employ a copy-on-write policy if the current scope is greater than that in which the object was last updated. Rollbacks are eager (performed when `pop()` is called). `CDO<>` just uses the underlying type's copy constructor. Sometimes you want something more involved, in which case you write your own class that extends the class `ContextObj` and provides save and restore functionality. Make sure to read the invariants in `context.h` carefully!
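As a rough sketch, such a class has the following shape (the signatures follow the `CDO<>` implementation; the counter itself is a hypothetical example, and the invariants in `context.h` still apply):

#include "context/context.h"

class CDCounter : public ContextObj {
  int d_count;

  // Called before the first modification in the current scope:
  // stores a copy of this object in context memory.
  ContextObj* save(ContextMemoryManager* pCMM) override
  {
    return new (pCMM) CDCounter(*this);
  }

  // Called when the scope is popped: copies the saved data back.
  void restore(ContextObj* data) override
  {
    d_count = static_cast<CDCounter*>(data)->d_count;
  }

  CDCounter(const CDCounter& other) : ContextObj(other), d_count(other.d_count) {}

 public:
  CDCounter(Context* c, int v) : ContextObj(c), d_count(v) {}
  ~CDCounter() override { destroy(); } // see the gotchas below

  int get() const { return d_count; }
  void set(int v) { makeCurrent(); d_count = v; } // save before modifying
};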
Here are some important gotchas to keep in mind:

- Your `ContextObj` subclass must call `destroy()` from its destructor. This ensures that, if prematurely destructed (that is, before the `Context` goes away), your object is properly unregistered from the `Context`.
- Be careful if you allocate in an alternative memory manager (see below).
- FIXME: add more gotchas.
You need to be careful; even at a high scope level, all objects registered for backtracking with a `Context` are inserted at scope zero. This means your program will crash if you delete the associated memory too soon. Now re-read that sentence. If you create a `CDO<int>` (with `new`) at scope level 10 and delete it upon backtracking to scope level 9, the `CDO<int>` destructor properly unregisters itself from the `Context`. However, if you allocate it in a memory manager (such as the `ContextMemoryManager`) that doesn't guarantee that the destructor is called, your program will crash (perhaps much later), because the object is still registered at scope level 0 (and thus a bad pointer appears on a list). You may prefer to create a `CDO<int*>` rather than a `CDO<int>*`, which solves the issue (you can prematurely delete the `int*` without incident). But that may leak memory, so there are additional constructors in `ContextObj` and `CDO<>` (those starting with a `bool` argument) that specifically "register" the object at the current scope level. Such objects aren't registered at scope level 0, so the problem doesn't occur. However, you have to be careful.
But why allocate `ContextObj` instances with the `ContextMemoryManager` if it's so complicated? There's a big advantage: you don't have to track the lifetime of the `ContextObj`. The object simply ceases to exist when the `Context` is popped. Thus you can create an object in context memory, and if you stow pointers to it only in context memory as well, everything is cleaned up for you when the scope pops. Here's a list of gotchas:
- For data structures intended solely to be allocated in context memory, privately declare (but don't define) an `operator new(size_t)` and a destructor (as currently done in the Link class).
- Data structures that may or may not be allocated in context memory, and are designed to be used that way (especially if they contain `ContextObj` instances), should be heavily documented, especially the destructor, since it may or may not be called.
- There's also an issue for generic code: some class `Foo<T>` might be allocated in context memory, and that might normally be fine, but if `T` is a `ContextObj`, this requires certain care.
- Note that certain care is required for `ContextObj` instances inside of data structures. If the (enclosing) data structure can be allocated in the `ContextMemoryManager` ("CMM"), that means the (enclosed) `ContextObj` can be too, even if it was not designed to be allocated that way. In many instances, it may be required that data structures enclosing `ContextObj` instances be parameterized with a similar `bool allocatedInCMM` argument as the special constructor in this class (and pass it on to all enclosed `ContextObj` instances).
Options code in cvc5 is automatically generated from .toml files, in which the options are textually defined. The options are divided into several .toml files, for example according to the relevant theory. More details can be found in https://github.com/cvc5/cvc5/blob/main/src/options/README.
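An option definition in such a .toml file has roughly the following shape (an illustrative sketch; the option shown is hypothetical, and the README above documents the authoritative schema):

[[option]]
  name       = "myNewOption"
  category   = "regular"
  long       = "my-new-option"
  type       = "bool"
  default    = "false"
  help       = "enables my hypothetical new feature"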
The documentation at https://cvc5.github.io/docs/ is generated from the `docs` folder and covers the usage of cvc5 via one of its APIs or the command line. As this documentation is accumulated from APIs in different languages with different toolchains, you should always be aware of which toolchain is processing the particular piece of documentation you are writing. All documentation that lives in the `docs` folder is written in `*.rst` files in the reStructuredText format and directly processed by Sphinx. See https://www.sphinx-doc.org/en/master/ for general information.
The different APIs are made available via language-specific toolchains and can be linked via "domains", e.g., `:cpp:class:` for a C++ class.
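For example, an rst file can cross-reference the C++ API like this (assuming the class is cvc5::Solver):

See :cpp:class:`cvc5::Solver` for the entry point of the C++ API.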
We have a few custom Sphinx extensions:

- `examples` is used for the examples. For a particular example, it renders one tab per language the example is available in and provides syntax highlighting. It heavily relies on the `sphinx_tabs` extension.
- `include_build_file` allows including `rst` files from the build folder, i.e., documentation sources that have been automatically generated by the build system, e.g., for command-line options. This is necessary as Sphinx does not support multiple source folders.
The C++ API sources are parsed with doxygen, which is configured to only produce `xml` output. This `xml` output is then used by the Sphinx extension breathe. Typically, we have one `rst` file per API class (and associated helper classes) that uses `.. doxygenclass::` to render the class documentation.
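A class page then typically contains a directive of the following shape (breathe's doxygenclass directive; the exact options may differ from file to file):

.. doxygenclass:: cvc5::Solver
   :project: cvc5
   :members: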
When writing documentation in the C++ API, be aware that these comments are processed by doxygen, not by Sphinx!

Sphinx supports Python natively (via the autodoc extension). autodoc requires the python library to be built; it then generates the documentation from the docstrings exposed by the library.