This should give us a nice build-performance boost, both locally and
in Travis CI. I've used parallel builds routinely for months now, and
they're working fine. Build output caching is newer, but it also
seems to be working well and saves us tremendous time on downloads.
Without this setting, Gradle would run multiple "test" tasks from
multiple subprojects concurrently, but each "test" task would only run
a single test at a time. Some of our subprojects' test tasks take a
long time to complete, which could easily leave us sequentially
testing on just one or two CPU cores while other cores sit idle.
With this change, Gradle will use as many task slots as are
available (e.g., when run with "--parallel") to run multiple
simultaneous JUnit test classes within any given subproject. This
seems to be fine for us: I am unaware of any shared global state that
would cause conflicts between two test classes within any given
subproject.
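Concretely, the forking knob looks something like this in a Gradle build script (using availableProcessors() here is an illustrative assumption, not necessarily our exact setting):

```groovy
// Run multiple JUnit test classes concurrently within each "test" task,
// rather than one test class at a time per subproject.
tasks.withType(Test) {
  maxParallelForks = Runtime.runtime.availableProcessors()
}
```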
When JNI headers are generated for a given class, each nested class
ends up with its own additional header. But I don't want to try to parse nested
class details out of the Java source files just so we can determine
exactly which headers will be created. Instead, simply treat the
entire header destination directory as the output of this task.
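A minimal sketch of that output declaration; the task name and directory path below are hypothetical:

```groovy
// Declare the whole header destination directory as the task's output,
// since nested classes yield extra headers we do not enumerate.
task generateJniHeaders {
  outputs.dir "$buildDir/generated/jni-headers"  // hypothetical path
  doLast {
    // ... run javac -h (or javah) into that directory ...
  }
}
```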
This task has an input named "hello_hash.ml", and an output named
"hello_hash.jar". So calling this task "generateHelloHash" is too
vague. Now we call it "generateHelloHashJar" instead.
These manifest files are here for use by the Maven build, but Eclipse
is now using Gradle (via Buildship). So the manifests as seen by
Eclipse do not entirely make sense. I'm hesitant to change the
manifests directly, since presumably they were correct for Maven and
still are.
Perhaps some day we'll be using Gradle to generate manifests. Until
that day comes, we're better off leaving the manifests as they are and
just suppressing Eclipse's warnings instead.
This still doesn't actually work, but it's closer than it was before.
There's still some problem with improper mixing of 32-bit ("x86") and
64-bit ("x64") libraries.
Avoid allocating memory using strdup() and then releasing it using
operator delete. strdup()-allocated memory should be released using
free(), not delete. But in these specific cases, there really was
never any good reason to duplicate the C-style strings in the first
place. Instead, we can safely pass those NUL-terminated char pointers
directly in calls to JNI's NewStringUTF(). NewStringUTF() does not
take ownership of its argument, but rather clones that string data
internally before returning. So using strdup() and delete was just
unnecessary memory churn.
In cases where we need to format, concatenate, or construct new
strings, don't use sprintf() into variable-sized, stack-allocated
arrays. Variable-sized arrays are not portable, and in particular are
rejected by Microsoft's Visual Studio 2017 C++ compiler. Instead, do
our string formatting and manipulations using std::ostringstream
and/or std::string. We just need to be a bit careful about the
lifetimes and ownership responsibilities of allocated data. In
brief, (1) ostringstream::str() returns its contents by value, and that
returned string is a temporary that expires at the end of the enclosing
full expression unless it is bound to a named variable, independent of
the lifetime of the ostringstream instance; while (2) string::c_str()
returns a pointer to internal data that remains valid only as long as
the string on which it was called is alive and unmodified.
The default location of DroidBench in "/tmp/DroidBench" does not work
well on Windows. So let's disable these tests until someone has time to
make that path more portable.
This URL skips over a redirect that the previous URL went through.
This URL also avoids an annoying "Invalid cookie header" warning that
the previous URL produced.
We now download and verify checksums as a single task, rather than as
two separate tasks. This simplifies other task dependencies, since we
no longer have a checksum-verified "stamp" file separate from the
download itself. Unfortunately the combined task now has a
significant amount of repeated boilerplate. I'm hoping to refactor
that all out into a custom task class, but haven't yet figured out the
details:
<https://github.com/michel-kraemer/gradle-download-task/issues/108>.
We now also use ETags to be smarter about when a fresh download is or
is not actually needed. I think there are still opportunities for
improved caching here, but this is a step in the right direction.
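Roughly what the combined task looks like, assuming the "de.undercouch.download" plugin; the URL, destination, and expectedSha256 below are placeholders:

```groovy
task downloadTestData(type: de.undercouch.gradle.tasks.download.Download) {
  src 'https://example.com/testdata.jar'  // placeholder URL
  dest new File(buildDir, 'testdata.jar')
  onlyIfModified true  // skip the download when the server copy is unchanged
  useETag true         // compare ETags, not just Last-Modified timestamps
  doLast {
    // Verify the checksum in the same task, so downstream tasks depend on
    // just this one task rather than on a separate "stamp" file.
    def actual = java.security.MessageDigest.getInstance('SHA-256')
        .digest(dest.bytes).encodeHex().toString()
    if (actual != expectedSha256) {  // expectedSha256: placeholder constant
      throw new GradleException("checksum mismatch for $dest")
    }
  }
}
```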
I believe Travis CI jobs get two CPUs by default.
Doing parallel builds regularly is also a good way to help us discover
any build race conditions we may have. There's no guarantee that any
such races will be revealed, but even exposing them
nondeterministically is better than having no possibility of exposing
them at all.
We used to use these to find various Eclipse packages, but that was
always a dodgy affair since we never quite knew whether we had
matching versions of everything. Now that we are using the
"com.diffplug.gradle.p2.asmaven" plug-in, though, we have much better
control over getting exactly the Eclipse material we need. These two
Maven repositories no longer provide anything we use, and therefore
can be removed.
Previously this launcher's job was to run "processTestResources" and
any other Gradle tasks needed to create files that Eclipse was
expecting to be available. But we also want to use it to revert the
bad changes that Buildship applies to ".launch" configuration files.
This is a temporary hack to work around
<https://github.com/eclipse/buildship/issues/653>.
Unlike on Linux, embedding Linux-style "RPATH" search paths within our
shared libraries does not resolve things for us on macOS. Instead, we need to set
"DYLD_LIBRARY_PATH" appropriately in the environment. This
environment variable is the macOS analogue of "LD_LIBRARY_PATH" on
Linux.
Note that adding the required path to the "java.library.path" system
property will *not* work. This property only affects where the JVM
looks for shared objects that it is loading directly. This property
does not influence the search for other, transitively-required shared
objects.
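In Gradle terms, that environment tweak can be sketched like so (the library path is a placeholder):

```groovy
// macOS: the test JVM's transitively-required native libraries must be
// findable by the dynamic loader, so extend its search path.
tasks.withType(Test) {
  if (System.getProperty('os.name').contains('Mac')) {
    environment 'DYLD_LIBRARY_PATH', "$buildDir/native/libs"  // placeholder
  }
}
```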
Fixes #3.
This fixes the last of our Javadoc warnings without creating a
circular dependency between ":com.ibm.wala.cast:javadoc" and
":com.ibm.wala.cast.js:javadoc". Fixes #4, wherein more details about
this tricky dependency challenge can be found.
This lets the "DynamicCallGraphTest" tests pass. The tests in that
class expect to find some element of the classpath that includes the
substring "com.ibm.wala.core.testdata". They then treat that as a
collection of bytecode files to instrument.
Big thanks to Julian for showing me where this exclusion logic lives
in the Maven configuration. There's a "**/*AndroidLibs*.java"
exclusion pattern in the top-level "pom.xml".
It's telling me to remove "eclipse-deps:org.eclipse.core.runtime:+"
and "org.osgi:org.osgi.core:4.2.0" as unused dependencies in the
"com.ibm.wala.cast.java.ecj" subproject. However, these two
dependencies (jar files) are actually needed; compilation fails
without them.
This should give us a set of mutually-consistent jars rather than
picking up random, outdated pieces from Maven Central or wherever else
I could find them. We now also have a single, central place where we
set the Eclipse version that we're building against. Much, *much*
cleaner.