the declared target of the call site. This is needed to make sure
forName targets loaded with the Application loader get resolved to point
to the real method reference for forName.
This issue actually manifested itself in the Kawa Chess program, and so
I have added an assertion to make sure this resolution is done properly.
Fixes inconsistent behavior of the exclusions argument.
Depending on the androidLib argument, setting exclusions to null
is either fine or raises an exception.
This patch makes exclusions truly optional: passing null now works
in every case.
* Implementation of IMethod.isSynthetic and isWalaSynthetic
So far IMethod.isSynthetic referred to WALA-generated helper functions
and there was no equivalent to check whether an IMethod is synthetic in
terms of compiler-generated.
To make naming consistent this patch first renames the isSynthetic to
isWalaSynthetic to clearly indicate that a given IMethod was generated
by WALA. Then, we re-introduce isSynthetic, which from now on checks
whether an IMethod is synthetic/compiler-generated (referring to the
synthetic flag in bytecode).
* Implementation of IClass.isSynthetic
Complementary to IMethod.isSynthetic, this method checks whether
an IClass is compiler-generated.
* Updated JavaDoc
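As a standalone illustration of the bytecode synthetic flag that the
reintroduced isSynthetic consults (the demo class below is made up for
the sketch, and is not WALA code): javac desugars a lambda body into a
compiler-generated method carrying ACC_SYNTHETIC, which plain
java.lang.reflect can observe.

```java
import java.lang.reflect.Method;

public class SyntheticDemo {
    // javac desugars this lambda body into a compiler-generated method
    // (typically named lambda$static$0) flagged ACC_SYNTHETIC.
    static final Runnable NOP = () -> {};

    public static void main(String[] args) {
        for (Method m : SyntheticDemo.class.getDeclaredMethods()) {
            // main() is not synthetic; the lambda's helper method is.
            System.out.println(m.getName() + " synthetic=" + m.isSynthetic());
        }
    }
}
```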
* Fixes IllegalStateException
Reverts refactored code with try-with-resource back to potentially
leaking implementation. The refactored code threw an exception since
JarFileModule does not implement the AutoCloseable interface. Further,
removed the printStackTrace() call, as this is not an exceptional case
but intended control-flow in case DexFileModule creation fails.
* Downgrade JarFile leak diagnostic from error to warning
This is consistent with how we are treating potential JarFile leaks in
other WALA components. WALA issue #236 already notes that these
should be cleaned up eventually, although doing so will not be easy.
These specific test resources are already included in the "testArchives"
configuration of the "com.ibm.wala.core.tests" subproject, upon which
the "com.ibm.wala.dalvik.test" tests already depend. So there's no need
to copy these resources into the "com.ibm.wala.dalvik.test" test
resources area as well.
Previously I hadn't realized that Gradle's "java" plugin would generate
default "cleanTest" tasks for us. By defining my own "cleanTest" tasks
we were replacing the generated ones, but what we really wanted to do
was augment them with additional files to delete.
Every dependency task listed here is already a dependency of at least
one subproject's "processTestResources" task, and each
"processTestResources" task already depends on the corresponding
"afterEclipseBuildshipImport" task. So listing these tasks here too
is unnecessary.
All Kawa-related downloads now use our existing VerifiedDownload task
class. This gives us Gradle-integrated progress reporting,
incremental build support, build caching, correct dependencies, etc.
Using Kawa to compile Scheme into bytecode now also has proper
dependency management, incremental build support, and build caching.
Same goes for bundling these compiled bytecode files into jar archives
for later use in regression tests.
Also, when downloading kawa-chess, grab a specific commit hash rather
than whatever is the most recent master commit. If this project
changes in the future, we don't want our tests to break unexpectedly.
Perhaps we'd want to pick up any new kawa-chess commits; perhaps not.
Either way, that should be a conscious decision rather than something
that can happen behind our backs.
This specific task runs an external command, and we consider the task
successful if that command exits without error. We don't actually
examine the stdout or stderr of the command being run.
However, it is still useful to log the stdout and stderr to a file,
and to declare that file to be the output of the task. Otherwise, the
task has no declared outputs at all. A task with no outputs is
ineligible for caching and is always considered to be out-of-date.
squash! Declare a task's outputs, enabling incremental build and caching
The issue here is a planned change to how "publishing" blocks work.
Per
<https://docs.gradle.org/4.9/userguide/publishing_maven.html#publishing_maven:deferred_configuration>,
the right way to prepare for this change is to enable it and check for
unexpected changes in what gets published to a local repository. I
have done this, and find no unexpected changes.
So we are actually ready for Gradle 5.0; the warning is a false
positive for us. Leaving the future change enabled means we won't
keep seeing this warning. It also means that any further changes to
our use of "publishing" will be tested under that future change, which
is a good way to avoid surprises later.
Gradle won't pass an absolute path when building libcast. We need to
set install_name manually; otherwise `dyld` would not be able to find
libcast at runtime.
This is only needed on macOS, since on Linux `ld.so` will look up all
runtime search paths automatically.
Fixes #328, which requested better diagnostic messages in the case of
a missing C/C++ compiler toolchain. Gradle actually has perfectly
good, informative messages in that case. Unfortunately, we were
killing the build by dereferencing null before Gradle had a chance to
explain. Now we bail out of some setup work early to avoid the null
dereference, thereby letting Gradle explain things properly.
Under some circumstances, Gradle seems to decide that the destination
file being absent is the download task's expected outcome. It caches
this state, and refuses to retry the download in the future since it
thinks the task is up-to-date. We can correct this by telling Gradle
that the task should not be considered up-to-date if the file is
missing, as recommended by
<https://discuss.gradle.org/t/task-up-to-date-but-outputfile-not-created/17568/2>.
In particular, using the "all" package (which includes source) allows
IntelliJ IDEA to provide autocompletion and other nice features that
are unavailable when using the "bin" package.
Fixes #322
We add an option `createPhantomSuperclasses` to `ClassHierarchy`. When set, if a superclass is missing, we create a new `PhantomClass` in its place and allow the subclass to be added.
To use, you can create the `ClassHierarchy` with the new `ClassHierarchyFactory.makeWithPhantom` methods.
We already had some IntelliJ IDEA project metadata files in ".idea".
I've revisited and updated those now that I have more experience with
Gradle + IntelliJ IDEA + Git. I think this now represents a better
set of decisions regarding what does and does not belong in version
control.
This commit also extends "README-Gradle.md" with clear instructions on
how to bringup WALA as a top-level IntelliJ IDEA project. The
instructions are of a similar flavor to the Eclipse instructions that
were already present, though the details vary. Most notably, with
IntelliJ IDEA you should *open* WALA as an existing project,
not *import* it as a new Gradle project derived from "build.gradle".
This is exactly the reverse of what one should and shouldn't do for
WALA in Eclipse.
When IntelliJ IDEA imports WALA's Gradle configuration, it creates
what it calls a "module" for each sourceSet of each Gradle subproject.
In so doing, it automatically picks up the source and resource
directories used by each of these sourceSets. Unfortunately, IntelliJ
IDEA does not allow multiple modules to share a single root directory
as their source or resource directories, and that's exactly what was
going on with the "example-src" subdirectory under
"com.ibm.wala.cast.js.test.data".
This revised Gradle configuration still copies the necessary
"example-src" resources to the appropriate locations for use as test
resources. But IntelliJ IDEA no longer treats "example-src" as a root
directory for resources in the automatically-generated modules. So
now we get along nicer with IntelliJ IDEA while keeping everything
working with Gradle as well.
This task serves a similar role to the "afterEclipseBuildshipImport"
task used with Eclipse. It should only be necessary to run this
task once: in a freshly checked-out tree, just after opening it for
the first time in IntelliJ IDEA.
Ideally this extra setup task would be triggered automatically using
the "Tasks Activation" feature of IntelliJ IDEA's Gradle support.
Unfortunately, "Tasks Activation" settings are recorded in
".idea/workspace.xml", which is used for non-revision-tracked personal
settings.
If Gradle dependencies are set up correctly, then it should be
possible to build any subproject starting with a pristine tree.
These take too long to use for every commit, pull request, etc. But
running an extensive test like this periodically (e.g., weekly) seems
reasonable.
Boxing a primitive using the constructor ("new Integer(4)") always
creates a distinct new boxed instance. That's rarely what you need,
and in fact all of those constructors have been deprecated in Java 9.
Using the static "valueOf" method instead ("Integer.valueOf(4)") can
give better performance by reusing existing instances. You no longer
get a unique boxed object, but generally that's OK.
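A standalone sketch of the difference (the demo class is made up for
illustration; nothing here is WALA code):

```java
public class BoxingDemo {
    @SuppressWarnings("deprecation")
    static boolean constructorsShareInstances() {
        // Deprecated since Java 9: always allocates a fresh object,
        // so reference equality never holds.
        return new Integer(4) == new Integer(4);
    }

    static boolean valueOfSharesInstances() {
        // May reuse a cached instance; the cache is guaranteed for
        // values in the range -128..127.
        return Integer.valueOf(4) == Integer.valueOf(4);
    }

    public static void main(String[] args) {
        System.out.println(constructorsShareInstances()); // false
        System.out.println(valueOfSharesInstances());     // true
    }
}
```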
"javah" was deprecated in Java 9, and has been removed entirely in
Java 10. The right way to generate headers now is by using the "-h"
flag to "javac". When "javac" is run in this way, it still generates
bytecode too. So ideally, we'd run "javac" just once to generate
bytecode and headers simultaneously. Fortunately, the Gradle JNI
plugin at <https://github.com/wpilibsuite/gradle-jni> helps us do this
cleanly. Nice!
Previously we were compiling and linking "smoke_main", but not
actually running it as part of automated testing. I simply overlooked
this in the process of recreating the Maven build logic in Gradle.
Now we run "smoke_main" when testing, which turns out to be a pretty
good test of our management of both Java search paths and
linker / shared library search paths.
We now use "-rpath" on both Linux and macOS. This linker flag sets
the ELF RPATH on Linux, and the "@rpath" attribute on macOS, but
effectively it's doing the same thing, and that same thing is exactly
what we want. I think.
On Linux, we also now look for the JVM shared library in three
different places. The library has moved between Java 8 and 9, and
even on Java 9 it goes in a different place on Fedora 28 versus Ubuntu
16.04.
According to
<https://docs.gradle.org/current/userguide/native_software.html>,
"When you assemble dependents of a component, the component and all of
its dependents are compiled and linked, including any test suite
binaries." So it's intentional that the "assemble" task causes
creation of the smoke_main executable and xlator_test shared library.
Nothing TODO here; the current behavior is as designed.
The existing code worked fine under Java 8, but Java 9 fails to
resolve type constraints unless it has more explicit information about
at least one of the arguments to anyOf. Weird.
WALA itself does not use any JUnit 4.12 features. However, I am
working on a WALA-based project that does require JUnit 4.12. Mixing
jar versions is a path to madness. So if the WALA maintainers don't
mind, it would make my life easier if WALA itself required JUnit 4.12.
Otherwise, I need to maintain that 4.11/4.12 difference indefinitely
as a divergence between official WALA and my WALA variant.
I suppose an alternative could be to let the JUnit version be
specified externally somehow. I have not explored this option in
depth, but could look into it if simply moving to JUnit 4.12 is
undesirable.
Someone may have thought that we were ignoring these files, but we
aren't. From what I can tell, for these specific files, revision
tracking is intentional.
This follows an IntelliJ IDEA recommendation. Having the full
distribution allows IntelliJ IDEA to provide contextual help,
autocompletion, etc. for Gradle build scripts. The disadvantage, I
suppose, is that it imposes a larger download time on first use of
"gradlew".
New features that I like from this release: (1) better output grouping
when building in parallel, and (2) automatic test ordering to try
previously-failing tests first.
This gives the WALA maintainers the option of doing future 1.4.5+
releases from a pre-Gradle branch if these merged Gradle changes
turn out to be more disruptive than expected.
The Eclipse metadata files created in this way are not identical to
those that Buildship would create when importing into Eclipse. The
tests in com.ibm.wala.cast.java.test.JDTJava15IRTests and
com.ibm.wala.cast.java.test.JDTJavaIRTests seem to pass using either
the Gradle-generated or the Buildship-generated versions.
As an aside, if you generate these files using Gradle first and *then*
import using Buildship, you end up with metadata that is identical to
what you would have had if you'd only imported with
Buildship. (There's one irrelevant difference in an unimportant
"<comment>" element.) So Maven's tests should run fine under any
wacky mix of Gradle- and Buildship-generated Eclipse metadata files.
That being said, I recommend not mixing build systems. WALA can be
built using either Maven, Gradle, or Eclipse+Buildship, but you're
probably better off only using one of these in any given build tree.
A mixed tree *should* probably work, but I haven't tested it
thoroughly, and consider it better to avoid.
Incidentally, if there are other Maven-preparation steps that we'd
like Gradle to automate for us, that can now be done easily by
creating more "prepareMavenBuild" Gradle tasks in other subprojects
and adding the appropriate dependencies. For example, it would be
trivial to use this to automate downloading "/tmp/DroidBench",
installing the Android SDK, etc.
I don't know what changes are triggering this. Presumably it's
something to do with the temporary-file code, but I don't see why that
would happen. For now, let's just skip this test.
If multiple tests both write to "/tmp/cg.txt" (for example), then
these tests cannot safely run concurrently. That never used to be a
problem with Maven, since our Maven tests were all strictly sequential
anyway. But parallelized testing using Gradle requires that we do
better. Fortunately, Java has perfectly reasonable APIs for
generating uniquely-named temporary files and directories.
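The JDK APIs in question can be used as in this standalone sketch (the
class and file-name prefix are illustrative, not actual WALA test
code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempFileDemo {
    // Unlike a hard-coded "/tmp/cg.txt", each createTempFile call gets
    // a unique name, so concurrently running tests cannot clobber each
    // other's output files.
    static boolean uniqueNames() {
        try {
            Path first = Files.createTempFile("cg", ".txt");
            Path second = Files.createTempFile("cg", ".txt");
            boolean distinct = !first.equals(second);
            Files.deleteIfExists(first);
            Files.deleteIfExists(second);
            return distinct;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(uniqueNames()); // true
    }
}
```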
The performance improvement offered by the build cache is modest when
run by Travis CI. This is probably because Travis CI does not keep
the cache on any local machine. Instead, Travis CI uploads the build
cache across a network at the end of each run, then restores the cache
across the network at the start of the next run. So in many cases
we're simply trading network access to original downloads for network
access to the cache image.
Furthermore, it's probably a better test to perform Travis CI testing
jobs from something closer to a clean slate. We really want to know
whether everything builds and passes tests correctly starting from
nothing. We don't want to risk accidentally thinking something would
work simply because we have a cached copy of what it did when it
worked previously.
Previously we unpacked in one task, then installed two extra
components in two dependent tasks. However, installing extra
components modifies some files in place, effectively making those
files both inputs and outputs. That creates race conditions, and
probably interferes with task output caching. Better, then, to treat
the unpack and extra installations all as a single task whose output
is the complete Android SDK tree with all required components
installed.
This should give us a nice build-performance boost, both locally and
in Travis CI. I've used parallel builds routinely for months now, and
they're working fine. Build output caching is newer, but it also
seems to be working well and saves us tremendous time on downloads.
Without this setting, Gradle would run multiple "test" tasks from
multiple subprojects concurrently, but each "test" task would only run
a single test at a time. Some of our subprojects' test tasks take a
long time to complete, which could easily leave us sequentially
testing on just one or two CPU cores while other cores sit idle.
With this change, Gradle will use as many task slots as are
available (e.g., when run with "--parallel") to run multiple
simultaneous JUnit test classes within any given subproject. This
seems to be fine for us: I am unaware of any shared global state that
would cause conflicts between two test classes within any given
subproject.
When generating JNI headers for a given class, each nested class will
end up with its own additional header. But I don't want to try to parse nested
class details out of the Java source files just so we can determine
exactly which headers will be created. Instead, simply treat the
entire header destination directory as the output of this task.
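For illustration, consider a hypothetical source file like the one
below (the class and method names are made up). Compiling it with Java
9+'s `javac -h outdir` emits one header per class that declares native
methods, so a single source file can yield several headers whose names
depend on its nested-class structure:

```java
// Hypothetical example: `javac -h outdir Native.java` would emit
// "Native.h" for the outer class and, under the usual naming
// convention, "Native_Inner.h" for the nested one. Predicting those
// names up front would require parsing the nested-class structure,
// hence declaring the whole destination directory as the task output.
public class Native {
    public native long outerHash();      // declared in the outer header

    public static class Inner {
        public native long innerHash();  // declared in the nested-class header
    }

    public static void main(String[] args) {
        // Reflection shows the nested class alongside the outer one,
        // without ever binding the native methods.
        for (Class<?> nested : Native.class.getDeclaredClasses()) {
            System.out.println(nested.getName());
        }
    }
}
```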
This task has an input named "hello_hash.ml", and an output named
"hello_hash.jar". So calling this task "generateHelloHash" is too
vague. Now we call it "generateHelloHashJar" instead.
These manifest files are here for use by the Maven build, but Eclipse
is now using Gradle (via Buildship). So the manifests as seen by
Eclipse do not entirely make sense. I'm hesitant to change the
manifests directly, since presumably they were correct for Maven and
still are.
Perhaps some day we'll be using Gradle to generate manifests. Until
that day comes, we're better off leaving the manifests as they are and
just suppressing Eclipse's warnings instead.
This still doesn't actually work, but it's closer than it was before.
There's still some problem with improper mixing of 32-bit ("x86") and
64-bit ("x64") libraries.
Avoid allocating memory using strdup() and then releasing it using
operator delete. strdup()-allocated memory should be released using
free(), not delete. But in these specific cases, there really was
never any good reason to duplicate the C-style strings in the first
place. Instead, we can safely pass those NUL-terminated char pointers
directly in calls to JNI's NewStringUTF(). NewStringUTF() does not
take ownership of its argument, but rather clones that string data
internally before returning. So using strdup() and delete was just
unnecessary memory churn.
In cases where we need to format, concatenate, or construct new
strings, don't use sprintf() into variable-sized, stack-allocated
arrays. Variable-sized arrays are not portable, and in particular are
rejected by Microsoft's Visual Studio 2017 C++ compiler. Instead, do
our string formatting and manipulations using std::ostringstream
and/or std::string. We just need to be a bit careful about the
lifetimes and ownership responsibilities of allocated data. In
brief, (1) ostringstream::str() returns a temporary string instance
that expires at the end of the enclosing statement, independent of the
lifetime of the ostringstream instance; while (2) string::c_str()
returns a pointer to internal data that remains valid as long as the
string on which it was called is valid and unmodified.
The default location of DroidBench in "/tmp/DroidBench" does not work
well on Windows. So let's disable these tests until someone has time to
make that path more portable.
This URL skips over a redirect that the previous URL went through.
This URL also avoids an annoying "Invalid cookie header" warning that
the previous URL produced.
We now download and verify checksums as a single task, rather than as
two separate tasks. This simplifies other task dependencies, since we
no longer have a checksum-verified "stamp" file separate from the
download itself. Unfortunately the combined task now has a
significant amount of repeated boilerplate. I'm hoping to refactor
that all out into a custom task class, but haven't yet figured out the
details:
<https://github.com/michel-kraemer/gradle-download-task/issues/108>.
We now also use ETags to be smarter about when a fresh download is or
is not actually needed. I think there are still opportunities for
improved caching here, but this is a step in the right direction.
I believe Travis CI jobs get two CPUs by default.
Doing parallel builds regularly is also a good way to help us discover
any build race conditions we may have. There's no guarantee that any
such races will be revealed, but even exposing them
nondeterministically is better than having no possibility of exposing
them at all.
We used to use these to find various Eclipse packages, but that was
always a dodgy affair since we never quite knew whether we had
matching versions of everything. Now that we are using the
"com.diffplug.gradle.p2.asmaven" plug-in, though, we have much better
control over getting exactly the Eclipse material we need. These two
Maven repositories no longer provide anything we use, and therefore
can be removed.
Previously this launcher's job was to run "processTestResources" and
any other Gradle tasks needed to create files that Eclipse was
expecting to be available. But we also want to use it to revert the
bad changes that Buildship applies to ".launch" configuration files.
This is a temporary hack to work around
<https://github.com/eclipse/buildship/issues/653>.
Unlike Linux, macOS has no "RPATH" facility for embedding additional
search paths within shared libraries. Instead, we need to set
"DYLD_LIBRARY_PATH" appropriately in the environment. This
environment variable is the macOS analogue of "LD_LIBRARY_PATH" on
Linux.
Note that adding the required path to the "java.library.path" system
property will *not* work. This property only affects where the JVM
looks for shared objects that it is loading directly. This property
does not influence the search for other, transitively-required shared
objects.
Fixes #3.
This fixes the last of our Javadoc warnings without creating a
circular dependency between ":com.ibm.wala.cast:javadoc" and
":com.ibm.wala.cast.js:javadoc". Fixes #4, wherein more details about
this tricky dependency challenge can be found.
This lets the "DynamicCallGraphTest" tests pass. The tests in that
class expect to find some element of the classpath that includes the
substring "com.ibm.wala.core.testdata". They then treat that as a
collection of bytecode files to instrument.
Big thanks to Julian for showing me where this exclusion logic lives
in the Maven configuration. There's a "**/*AndroidLibs*.java"
exclusion pattern in the top-level "pom.xml".
It's telling me to remove "eclipse-deps:org.eclipse.core.runtime:+"
and "org.osgi:org.osgi.core:4.2.0" as unused dependencies in the
"com.ibm.wala.cast.java.ecj" subproject. However, these two
dependencies (jar files) are actually needed; compilation fails
without them.
This should give us a set of mutually-consistent jars rather than
picking up random, outdated pieces from Maven Central or wherever else
I could find them. We now also have a single, central place where we
set the Eclipse version that we're building against. Much, *much*
cleaner.
We're not going to attempt macOS Travis CI testing for Maven builds,
because I don't know whether that's even expected to work on the
official WALA master branch. Our main focus here is Gradle.
Note that Travis macOS images do not support JDK switching, so
explicitly selecting the JDK version now becomes part of the
Linux-only configuration.
Travis macOS images also do not support Android as a build "language".
So our Travis CI configuration for Gradle builds now declares that
this is a Java project rather than an Android one. That's OK, though,
because our Gradle scripts already handle downloading the Android SDK;
we don't need Travis CI to do that for us. When building using Maven,
though, we still call this an Android project because Maven builds do
still rely on Travis CI to provide the Android SDK.
squash! Enable macOS (a.k.a. OS X) Travis CI testing for Gradle builds
If future DroidBench changes include things we need, then we can
decide to move to those newer revisions. But we shouldn't allow
DroidBench to change out from under us implicitly whenever someone
commits something new to the DroidBench repository.
This lets us ditch pre-Java-8 support in the Gradle build. (The
official WALA master branch recently got rid of pre-Java-8 support in
its Maven build.)
That, in turn, lets two "com.ibm.wala.dalvik.test" tests pass that
previously were failing. We still have two other failing tests in
that subproject, but this is definitely progress!
Our Gradle build scripts manage the entire process of downloading and
locally installing the appropriate Android SDK. That includes
automatically accepting a license. Maybe some lawyer will throw a fit
about that some day. Until then, I'd rather have a build system that
does everything needed without imposing additional manual steps on
developers.
Previously Buildship removed its classpath from all of these
launchers. Now it's automatically putting that back in as soon as I
visit each launcher in Eclipse's "Run Configurations" dialog. Not
sure what's going on here, but it certainly seems more sane to me to
assume that the Buildship-computed classpath *is* needed for all of
these. I have an open question on the Gradle discussion forum to try
to understand what's going on here and how to fix it:
<https://discuss.gradle.org/t/launchers-lose-buildship-classpath-on-import-regain-it-later/25641>.
This should prepare test resources for all subprojects. A WALA
developer should run this once before running any tests inside
Eclipse. Initially I'd hoped to make this more narrowly focused, but
Eclipse just doesn't have the infrastructure to deal with fine-grained
dependencies. On the other hand, running "./gradlew
eclipsePrepareTestResources" automatically for each build seems like
overkill, and could end up being rather slow. So for now we require
that the developer run this once, by hand.
A cleaned tree is now much closer to a pristine tree that has just
been checked out and never built. The only extra created files that
are left behind are ".gradle", "buildSrc/.gradle", and
"buildSrc/build".
This gets rid of some Eclipse warnings that stem from Buildship being
confused about what it should treat as a source directory if Maven and
Gradle are both being used in the same tree.
Previously we were repeating the library path twice, but that's not
good for long-term maintenance. That being said, extracting this
information from the depths of the native software model seems *far*
more complex than it should be. I had hoped for something nicer in
response to
<https://discuss.gradle.org/t/compute-wl-rpath-flag-suitable-for-native-shared-library/25278>,
but so far there's nothing.
Previously Maven did this, but Gradle did not. So Gradle testing
would only succeed if we'd already done a Maven build first. Now
these tests pass in a fresh tree that's never seen a Maven build.
Some tests in other subprojects do depend on some of these extra jar
files. But they can declare those specific dependencies as needed.
Nothing seems to depend on the entire group of extra jars, so it's not
really useful to declare a task that is merely an alias for all of
them.
Many tests are excluded until
<https://github.com/liblit/WALA/issues/5> is fixed. But we can at
least have Travis-CI watching over our shoulder to ensure that
no *new* regressions sneak into the tree.
<https://github.com/liblit/WALA/issues/5> notes that several
subprojects' tests are currently broken under Gradle. I'd still like
to be able to run non-broken tests, though. So here I'm disabling the
failing tests. The intent is to treat these exclusions as a to-do
list. We can remove exclusions as we get the corresponding tests
working. No more exclusions means
<https://github.com/liblit/WALA/issues/5> is fixed.
It's rather slow, adding roughly five seconds to every "./gradlew"
invocation. And the advice it gives might not even be reaching a
fixed point. I like the idea of running the linter as part of CI
testing, but I now think it's overkill to impose on every developer
build.
I don't know whether Windows or macOS needs anything similar. If they
do, the details will differ, and should be handled by adding suitable
cases to these switch statements.
This partially reverts 72bc456b7. I'm starting to wonder whether the
linter might be driving us in cycles rather than reaching a fixed
point. We should keep our eyes on this.
I'm not actually sure why this archive is needed, except that it is
mentioned in "META-INF/MANIFEST.MF" and "build.properties". If we
eventually stop supporting Maven, then we may be able to discard the
"copyJarsIntoLib" task and the corresponding lines in
"META-INF/MANIFEST.MF" and "build.properties".
This consistently happens when I import WALA as an existing Gradle
project into Eclipse with Buildship. I don't really know what this
change means, or whether it's desirable. For now, I'm going to trust
Buildship and see what happens.
I think these were previously not being compiled at all. Now, with
Buildship generating Eclipse ".project" settings automatically, these
are being processed. In general we don't care much about questionable
code in test data, though.
These settings files currently are generated with an initial timestamp
comment line, which is not something we'd want to track in version
control. Fortunately, the contents of these files are entirely
mundane, so there should be no problem with having Buildship generate
them anew each time a developer imports WALA into Eclipse as an
existing Gradle project.
Apparently Buildship generates these when one uses Import -> Existing
Gradle Project:
<https://discuss.gradle.org/t/buildship-eclipse-plug-in-multiproject-builds/24030/5>.
We can use the Gradle "eclipse" plugin if customizations are
necessary, but my impression is that the intent is to treat ".project"
and ".classpath" as generated files, not sources to be tracked in
source control.
Unfortunately these tests are still not finding their resources
properly at test run time. I don't know why. It seems to have
something to do with how the tests instantiate and use class loaders.
I'm probably going to need expert help with this.
Dependencies are still not set properly here, so you need to have
built the shared library ("./gradlew xlator_testSharedLibrary") before
running the ":com.ibm.wala.cast.test:test" test task. But at least
the tests do now find and load that shared library properly.
I was confused about the differences among:
    srcDir 'foo'
    srcDirs ['foo']
    srcDirs = ['foo']
As it turns out, the first two append to the set of source
directories, while the last replaces this set entirely. I generally
want replacement, since WALA's current directory layout never matches
Gradle's assumed defaults.
The main requirement here is to arrange for the proper classpath
settings when tests are running so that they can find any associated
resources (i.e., other supporting files).
The tests are currently broken due to some sort of problem using class
loaders to find supporting resources. Until I figure this out, better
to have Travis-CI verify only the things we think work.
Specifically, we're not really in a position now to deal with
duplicated classes among our dependencies. Maybe we can try harder to
examine those in the future, but for now they are a distraction from
other issues that we can attack more readily.
Some of the linter's checks produce failures (errors) when Gradle
builds the Javadoc documentation. Fixing them isn't really a Gradle
issue, though, so I don't want to deal with them now.
Unfortunately the linter does not reach a fixpoint if you keep trying
to apply its suggestions. If you include "compile
'org.eclipse.core:org.eclipse.core.runtime:3.10.0.v20140318-2214'" in
the dependencies for "com.ibm.wala.ide.jdt", then the linter tells you
that this dependency is unused and can be removed. If you remove it,
then the linter tells you that it should be added. Sigh.
By default, each subproject's Javadoc task depends on the same
subproject's Java compilation task, and uses the same classpath.
Thus, any classes that some Java code uses will also be visible when
building the same Java code's documentation.
In this case, we need to see one of the "com.ibm.wala.core" classes in
order to build the "com.ibm.wala.util" documentation. However, we
cannot have Java compilation of "com.ibm.wala.util" depend on Java
compilation of "com.ibm.wala.core", because that would create a
dependency cycle. So we need to add this as a special dependency just
for the "com.ibm.wala.util" documentation task, and add the
appropriate classpath as well.
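A sketch of the arrangement described above, in com.ibm.wala.util's build script (the exact classpath expression is an assumption, not the actual script):

```groovy
// Only the Javadoc task, not Java compilation, gains the extra
// dependency, so no compile-time dependency cycle arises.
javadoc {
    dependsOn ':com.ibm.wala.core:compileJava'
    classpath += project(':com.ibm.wala.core').sourceSets.main.output
}
```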
I'm quite proud of myself for figuring out how to do this properly.
This should help identify cases where the Gradle build only works if
it runs before or after a Maven build. It will also help us recognize
any Maven regressions accidentally introduced by our Gradle work.
Eventually I'll want to swap that order, so that we know that Gradle
builds work even without any help from Maven build setup logic. For
now, though, I just want to test whether the Gradle build works at
all.
This gives the WALA maintainers the option of doing future 1.4.5+
releases from a pre-Gradle branch if these merged Gradle changes
turn out to be more disruptive than expected.
The Eclipse metadata files created in this way are not identical to
those that Buildship would create when importing into Eclipse. The
tests in com.ibm.wala.cast.java.test.JDTJava15IRTests and
com.ibm.wala.cast.java.test.JDTJavaIRTests seem to pass using either
the Gradle-generated or the Buildship-generated versions.
As an aside, if you generate these files using Gradle first and *then*
import using Buildship, you end up with metadata that is identical to
what you would have had if you'd only imported with
Buildship. (There's one irrelevant difference in an unimportant
"<comment>" element.) So Maven's tests should run fine under any
wacky mix of Gradle- and Buildship-generated Eclipse metadata files.
That being said, I recommend not mixing build systems. WALA can be
built using Maven, Gradle, or Eclipse+Buildship, but you're probably
better off using only one of these in any given build tree. A mixed
tree *should* work, but I haven't tested it thoroughly, and I consider
it better avoided.
Incidentally, if there are other Maven-preparation steps that we'd
like Gradle to automate for us, that can now be done easily by
creating more "prepareMavenBuild" Gradle tasks in other subprojects
and adding the appropriate dependencies. For example, it would be
trivial to use this to automate downloading "/tmp/DroidBench",
installing the Android SDK, etc.
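For instance, a hypothetical DroidBench-fetching step might look roughly like this (the task type and clone details are illustrative, not the actual build script):

```groovy
// Hypothetical: fetch DroidBench once, where Maven expects to find it.
task prepareMavenBuild(type: Exec) {
    outputs.dir '/tmp/DroidBench'
    commandLine 'git', 'clone', '--depth', '1',
            'https://github.com/secure-software-engineering/DroidBench.git',
            '/tmp/DroidBench'
}
```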
I don't know what changes are triggering this. Presumably it's
something to do with the temporary-file code, but I don't see why that
would happen. For now, let's just skip this test.
If multiple tests all write to "/tmp/cg.txt" (for example), then
those tests cannot safely run concurrently. That never used to be a
problem with Maven, since our Maven tests were all strictly sequential
anyway. But parallelized testing using Gradle requires that we do
better. Fortunately, Java has perfectly reasonable APIs for
generating uniquely-named temporary files and directories.
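A minimal Java sketch of the pattern (names are illustrative): each test asks the JDK for its own uniquely-named file instead of sharing "/tmp/cg.txt".

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempFileDemo {
    // One uniquely-named file per caller; no two tests can collide.
    public static Path uniqueCallGraphFile() throws IOException {
        Path file = Files.createTempFile("cg", ".txt");
        file.toFile().deleteOnExit(); // tidy up when the JVM exits
        return file;
    }

    public static void main(String[] args) throws IOException {
        Path a = uniqueCallGraphFile();
        Path b = uniqueCallGraphFile();
        System.out.println(a.equals(b)); // always false: the names are unique
    }
}
```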
The performance improvement offered by the build cache is modest when
run by Travis CI. This is probably because Travis CI does not keep
the cache on any local machine. Instead, Travis CI uploads the build
cache across a network at the end of each run, then restores the cache
across the network at the start of the next run. So in many cases
we're simply trading network access to original downloads for network
access to the cache image.
Furthermore, it's probably a better test to perform Travis CI testing
jobs from something closer to a clean slate. We really want to know
whether everything builds and passes tests correctly starting from
nothing. We don't want to risk accidentally thinking something would
work simply because we have a cached copy of what it did when it
worked previously.
Previously we unpacked in one task, then installed two extra
components in two dependent tasks. However, installing extra
components modifies some files in place, effectively making those
files both inputs and outputs. That creates race conditions, and
probably interferes with task output caching. Better, then, to treat
the unpack and extra installations all as a single task whose output
is the complete Android SDK tree with all required components
installed.
This should give us a nice build-performance boost, both locally and
in Travis CI. I've used parallel builds routinely for months now, and
they're working fine. Build output caching is newer, but it also
seems to be working well and saves us tremendous time on downloads.
Without this setting, Gradle would run multiple "test" tasks from
multiple subprojects concurrently, but each "test" task would only run
a single test at a time. Some of our subprojects' test tasks take a
long time to complete, which could easily leave us sequentially
testing on just one or two CPU cores while other cores sit idle.
With this change, Gradle will use as many task slots as are
available (e.g., when run with "--parallel") to run multiple
simultaneous JUnit test classes within any given subproject. This
seems to be fine for us: I am unaware of any shared global state that
would cause conflicts between two test classes within any given
subproject.
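In Gradle's Groovy DSL, the setting in question looks roughly like this (a sketch; the exact fork count is a policy choice):

```groovy
tasks.withType(Test) {
    // Run multiple JUnit test classes concurrently within each subproject.
    maxParallelForks = Runtime.runtime.availableProcessors()
}
```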
When JNI headers are generated for a given class, each nested class
ends up with its own additional header. But I don't want to try to parse nested
class details out of the Java source files just so we can determine
exactly which headers will be created. Instead, simply treat the
entire header destination directory as the output of this task.
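As a sketch (the task name, tool invocation, and class name are all placeholders):

```groovy
task generateJniHeaders(type: Exec) {
    def headerDir = file("$buildDir/jni-headers")
    // Declare the whole directory as output; nested-class headers included.
    outputs.dir headerDir
    commandLine 'javah', '-d', headerDir, 'com.example.NativeCode'
}
```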
This task has an input named "hello_hash.ml", and an output named
"hello_hash.jar". So calling this task "generateHelloHash" is too
vague. Now we call it "generateHelloHashJar" instead.
These manifest files are here for use by the Maven build, but Eclipse
is now using Gradle (via Buildship). So the manifests as seen by
Eclipse do not entirely make sense. I'm hesitant to change the
manifests directly, since presumably they were correct for Maven and
still are.
Perhaps some day we'll be using Gradle to generate manifests. Until
that day comes, we're better off leaving the manifests as they are and
just suppressing Eclipse's warnings instead.
This still doesn't actually work, but it's closer than it was before.
There's still some problem with improper mixing of 32-bit ("x86") and
64-bit ("x64") libraries.
Avoid allocating memory using strdup() and then releasing it using
operator delete. strdup()-allocated memory should be released using
free(), not delete. But in these specific cases, there really was
never any good reason to duplicate the C-style strings in the first
place. Instead, we can safely pass those NUL-terminated char pointers
directly in calls to JNI's NewStringUTF(). NewStringUTF() does not
take ownership of its argument, but rather clones that string data
internally before returning. So using strdup() and delete was just
unnecessary memory churn.
In cases where we need to format, concatenate, or construct new
strings, don't use sprintf() into variable-sized, stack-allocated
arrays. Variable-sized arrays are not portable, and in particular are
rejected by Microsoft's Visual Studio 2017 C++ compiler. Instead, do
our string formatting and manipulations using std::ostringstream
and/or std::string. We just need to be a bit careful about the
lifetimes and ownership responsibilities of allocated data. In
brief, (1) ostringstream::str() returns a temporary string instance
that expires at the end of the full expression, independent of the
lifetime of the ostringstream instance; while (2) string::c_str()
returns a pointer to internal data that remains valid as long as the
string on which it was called is valid and unmodified.
The default location of DroidBench in "/tmp/DroidBench" does not work
well on Windows. So let's disable these tests until someone has time to
make that path more portable.
This URL skips over a redirect that the previous URL went through.
This URL also avoids an annoying "Invalid cookie header" warning that
the previous URL produced.
We now download and verify checksums as a single task, rather than as
two separate tasks. This simplifies other task dependencies, since we
no longer have a checksum-verified "stamp" file separate from the
download itself. Unfortunately the combined task now has a
significant amount of repeated boilerplate. I'm hoping to refactor
that all out into a custom task class, but haven't yet figured out the
details:
<https://github.com/michel-kraemer/gradle-download-task/issues/108>.
We now also use ETags to be smarter about when a fresh download is or
is not actually needed. I think there are still opportunities for
improved caching here, but this is a step in the right direction.
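With the gradle-download-task plugin, the combined task looks roughly like this sketch (the URL, destination, and checksum value are placeholders for the real details):

```groovy
task downloadSdk(type: de.undercouch.gradle.tasks.download.Download) {
    src 'https://example.com/sdk-tools.zip'  // placeholder URL
    dest "$buildDir/sdk-tools.zip"
    onlyIfModified true
    useETag true  // re-download only when the server's ETag has changed
    doLast {
        // Checksum verification folded into the same task.
        def sha = java.security.MessageDigest.getInstance('SHA-256')
                .digest(dest.bytes).encodeHex().toString()
        assert sha == 'expected-sha256-goes-here'  // placeholder value
    }
}
```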
I believe Travis CI jobs get two CPUs by default.
Doing parallel builds regularly is also a good way to help us discover
any build race conditions we may have. There's no guarantee that any
such races will be revealed, but even exposing them
nondeterministically is better than having no possibility of exposing
them at all.
We used to use these to find various Eclipse packages, but that was
always a dodgy affair since we never quite knew whether we had
matching versions of everything. Now that we are using the
"com.diffplug.gradle.p2.asmaven" plug-in, though, we have much better
control over getting exactly the Eclipse material we need. These two
Maven repositories no longer provide anything we use, and therefore
can be removed.
Previously this launcher's job was to run "processTestResources" and
any other Gradle tasks needed to create files that Eclipse was
expecting to be available. But we also want to use it to revert the
bad changes that Buildship applies to ".launch" configuration files.
This is a temporary hack to work around
<https://github.com/eclipse/buildship/issues/653>.
Unlike Linux, macOS has no "RPATH" facility for embedding additional
search paths within shared libraries. Instead, we need to set
"DYLD_LIBRARY_PATH" appropriately in the environment. This
environment variable is the macOS analogue of "LD_LIBRARY_PATH" on
Linux.
Note that adding the required path to the "java.library.path" system
property will *not* work. This property only affects where the JVM
looks for shared objects that it is loading directly. This property
does not influence the search for other, transitively-required shared
objects.
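In the test configuration, that amounts to something like this sketch (the library path and the OS-detection helper are assumptions about the build layout):

```groovy
tasks.withType(Test) {
    if (org.gradle.internal.os.OperatingSystem.current().macOsX) {
        // java.library.path would only cover directly-loaded libraries;
        // transitively-required dylibs need the loader's own search path.
        environment 'DYLD_LIBRARY_PATH', "$buildDir/libs"  // placeholder path
    }
}
```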
Fixes #3.
This fixes the last of our Javadoc warnings without creating a
circular dependency between ":com.ibm.wala.cast:javadoc" and
":com.ibm.wala.cast.js:javadoc". Fixes #4, where more details about
this tricky dependency challenge can be found.
This lets the "DynamicCallGraphTest" tests pass. The tests in that
class expect to find some element of the classpath that includes the
substring "com.ibm.wala.core.testdata". They then treat that as a
collection of bytecode files to instrument.
Big thanks to Julian for showing me where this exclusion logic lives
in the Maven configuration. There's a "**/*AndroidLibs*.java"
exclusion pattern in the top-level "pom.xml".
It's telling me to remove "eclipse-deps:org.eclipse.core.runtime:+"
and "org.osgi:org.osgi.core:4.2.0" as unused dependencies in the
"com.ibm.wala.cast.java.ecj" subproject. However, these two
dependencies (jar files) are actually needed; compilation fails
without them.
This should give us a set of mutually-consistent jars rather than
picking up random, outdated pieces from Maven Central or wherever else
I could find them. We now also have a single, central place where we
set the Eclipse version that we're building against. Much, *much*
cleaner.
We're not going to attempt macOS Travis CI testing for Maven builds,
because I don't know whether that's even expected to work on the
official WALA master branch. Our main focus here is Gradle.
Note that Travis macOS images do not support JDK switching, so
explicitly selecting the JDK version now becomes part of the
Linux-only configuration.
Travis macOS images also do not support Android as a build "language".
So our Travis CI configuration for Gradle builds now declares that
this is a Java project rather than an Android one. That's OK, though,
because our Gradle scripts already handle downloading the Android SDK;
we don't need Travis CI to do that for us. When building using Maven,
though, we still call this an Android project because Maven builds do
still rely on Travis CI to provide the Android SDK.
If future DroidBench changes include things we need, then we can
decide to move to those newer revisions. But we shouldn't allow
DroidBench to change out from under us implicitly whenever someone
commits something new to the DroidBench repository.
This lets us ditch pre-Java-8 support in the Gradle build. (The
official WALA master branch recently dropped pre-Java-8 support in its
Maven build.)
That, in turn, lets two "com.ibm.wala.dalvik.test" tests pass that
previously were failing. We still have two other failing tests in
that subproject, but this is definitely progress!
Our Gradle build scripts manage the entire process of downloading and
locally installing the appropriate Android SDK. That includes
automatically accepting a license. Maybe some lawyer will throw a fit
about that some day. Until then, I'd rather have a build system that
does everything needed without imposing additional manual steps on
developers.
Previously Buildship removed its classpath from all of these
launchers. Now it's automatically putting that back in as soon as I
visit each launcher in Eclipse's "Run Configurations" dialog. Not
sure what's going on here, but it certainly seems more sane to me to
assume that the Buildship-computed classpath *is* needed for all of
these. I have an open question on the Gradle discussion forum to try
to understand what's going on here and how to fix it:
<https://discuss.gradle.org/t/launchers-lose-buildship-classpath-on-import-regain-it-later/25641>.
This should prepare test resources for all subprojects. A WALA
developer should run this once before running any tests inside
Eclipse. Initially I'd hoped to make this more narrowly focused, but
Eclipse just doesn't have the infrastructure to deal with fine-grained
dependencies. On the other hand, running "./gradlew
eclipsePrepareTestResources" automatically for each build seems like
overkill, and could end up being rather slow. So for now we require
that the developer run this once, by hand.
A cleaned tree is now much closer to a pristine tree that has just
been checked out and never built. The only extra created files that
are left behind are ".gradle", "buildSrc/.gradle", and
"buildSrc/build".
This gets rid of some Eclipse warnings that stem from Buildship being
confused about what it should treat as a source directory if Maven and
Gradle are both being used in the same tree.
Previously we were repeating the library path twice, but that's not
good for long-term maintenance. That being said, extracting this
information from the depths of the native software model seems *far*
more complex than it should be. I had hoped for something nicer in
response to
<https://discuss.gradle.org/t/compute-wl-rpath-flag-suitable-for-native-shared-library/25278>,
but so far there's nothing.
Previously Maven did this, but Gradle did not. So Gradle testing
would only succeed if we'd already done a Maven build first. Now
these tests pass in a fresh tree that's never seen a Maven build.
Some tests in other subprojects do depend on some of these extra jar
files. But they can declare those specific dependencies as needed.
Nothing seems to depend on the entire group of extra jars, so it's not
really useful to declare a task that is merely an alias for all of
them.
Many tests are excluded until
<https://github.com/liblit/WALA/issues/5> is fixed. But we can at
least have Travis-CI watching over our shoulder to ensure that
no *new* regressions sneak into the tree.
<https://github.com/liblit/WALA/issues/5> notes that several
subprojects' tests are currently broken under Gradle. I'd still like
to be able to run non-broken tests, though. So here I'm disabling the
failing tests. The intent is to treat these exclusions as a to-do
list. We can remove exclusions as we get the corresponding tests
working. No more exclusions means
<https://github.com/liblit/WALA/issues/5> is fixed.
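Concretely, the exclusions take a form like this (the pattern is a hypothetical example, not one of the actual entries):

```groovy
test {
    // To-do list: delete each exclusion once the corresponding test passes.
    exclude '**/SomeCurrentlyBrokenTest.class'
}
```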
It's rather slow, adding roughly five seconds to every "./gradlew"
invocation. And the advice it gives might not even be reaching a
fixed point. I like the idea of running the linter as part of CI
testing, but I now think it's overkill to impose on every developer
build.
I don't know whether Windows or macOS needs anything similar. If they
do, the details will differ, and should be handled by adding suitable
cases to these switch statements.
This partially reverts 72bc456b7. I'm starting to wonder whether the
linter might be driving us in cycles rather than reaching a fixed
point. We should keep our eyes on this.
I'm not actually sure why this archive is needed, except that it is
mentioned in "META-INF/MANIFEST.MF" and "build.properties". If we
eventually stop supporting Maven, then we may be able to discard the
"copyJarsIntoLib" task and the corresponding lines in
"META-INF/MANIFEST.MF" and "build.properties".
This consistently happens when I import WALA as an existing Gradle
project into Eclipse with Buildship. I don't really know what this
change means, or whether it's desirable. For now, I'm going to trust
Buildship and see what happens.
I think these were previously not being compiled at all. Now, with
Buildship generating Eclipse ".project" settings automatically, these
are being processed. In general we don't care much about questionable
code in test data, though.
These settings files currently are generated with an initial timestamp
comment line, which is not something we'd want to track in version
control. Fortunately, the contents of these files are entirely
mundane, so there should be no problem with having Buildship generate
them anew each time a developer imports WALA into Eclipse as an
existing Gradle project.
Apparently Buildship generates these when one uses Import -> Existing
Gradle Project:
<https://discuss.gradle.org/t/buildship-eclipse-plug-in-multiproject-builds/24030/5>.
We can use the Gradle "eclipse" plugin if customizations are
necessary, but my impression is that the intent is to treat ".project"
and ".classpath" as generated files, not sources to be tracked in
source control.
Unfortunately these tests are still not finding their resources
properly at test run time. I don't know why. It seems to have
something to do with how the tests instantiate and use class loaders.
I'm probably going to need expert help with this.
These are slow tests that we were already effectively turning into
no-ops when running on Travis CI. By skipping them using the proper
JUnit mechanism, these tests will show up as ignored or skipped in
test outcome reports. That's better than having them show up as
passing, when we really don't know whether they would have passed or
failed.
Using constructor references apparently pulls in something involving
nullness annotations. However, we don't actually build with a jar
file that defines those annotations, so this leads to Eclipse build
failures. I don't know the right way to add such a jar file to our
current configuration mishmash of Ant, Maven, and Eclipse. So the
easier thing to do is just disable annotation-based nullness analysis.
I doubt we were getting any benefit from such an analysis anyway,
given that WALA itself doesn't use those annotations at all.
Eclipse's automated code clean-up tool did most of the heavy lifting
here: it specifically has a clean-up option for converting functional
interfaces to lambdas. I merely had to revert the automated changes
for a single enumeration class for which it produced invalid results,
and for a few test inputs that apparently aren't set up to be compiled
with Java 8.
Previously FilterIterator was very permissive regarding the type
relationships between the original iterator, the filtered iterator,
and the predicate used to prune the former down to the latter. Now we
enforce those relationships more strictly, including proper use of
covariant ("<? extends T>") and contravariant ("<? super T>")
polymorphic type parameters where appropriate.
This lets us get rid of seven suppressed warnings about generic types
and/or unchecked conversions. It also moves us toward being able to
use modern Java features like lambdas and streams more easily.
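A stripped-down sketch of the variance involved (an illustrative reimplementation, not WALA's actual FilterIterator):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

// The wrapped iterator may produce any subtype of T (covariant), and the
// predicate may accept any supertype of T (contravariant).
class FilterIterator<T> implements Iterator<T> {
    private final Iterator<? extends T> inner;
    private final Predicate<? super T> keep;
    private T next;
    private boolean hasNext;

    FilterIterator(Iterator<? extends T> inner, Predicate<? super T> keep) {
        this.inner = inner;
        this.keep = keep;
        advance();
    }

    private void advance() {
        hasNext = false;
        while (inner.hasNext()) {
            T candidate = inner.next();
            if (keep.test(candidate)) {
                next = candidate;
                hasNext = true;
                return;
            }
        }
    }

    @Override public boolean hasNext() { return hasNext; }

    @Override public T next() {
        if (!hasNext) throw new NoSuchElementException();
        T result = next;
        advance();
        return result;
    }

    public static void main(String[] args) {
        // A Predicate<Object> works where Predicate<? super Integer> is expected.
        Predicate<Object> nonNull = o -> o != null;
        Iterator<Integer> it =
            new FilterIterator<>(Arrays.asList(1, null, 3).iterator(), nonNull);
        StringBuilder sb = new StringBuilder();
        while (it.hasNext()) sb.append(it.next()).append(' ');
        System.out.println(sb.toString().trim()); // prints "1 3"
    }
}
```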
Julian Dolby assures me that WALA is now supposed to be using Java 8
everywhere. This covers nearly all remaining places that I can find
where an earlier Java version was still being used. (The few
exceptions are places where switching to Java 8 causes test failures.
I'll address those separately, probably by reaching out to the WALA
maintainers for help.)
E-mail exchanged with Julian Dolby suggests that this is the right
thing to do, and that it should have been done back when we converted
other parts of the build configuration to Java 8.
These two modules refer to "AST.JLS8". If you have Java 9 installed,
then "AST.JLS8" is marked as deprecated, and we get a warning unless
we suppress or disable the deprecation warning wherever "AST.JLS8" is
used. However, if you don't have Java 9 installed, then "AST.JLS8" is
not deprecated, and trying to suppress deprecation warnings where
"AST.JLS8" is used instead produces warnings about unnecessary warning
suppression. Aagh! Turning off the deprecation warnings entirely for
these two modules seems like the only sane compromise.
Removing it fixes one Eclipse error diagnostic: "Default encoding (UTF-8)
for library '.' should be removed as the workspace does not specify an
explicit encoding."
This reapplies the fix from ecd1ff72fe,
which was reverted (apparently unintentionally) as part of a larger
group of changes in 8d65788aef.
Near as I can tell, the requests for deprecated versions here are
intentional. The non-deprecated version (AST.JLS9) is the latest and
greatest, but as far as I can tell we really do want the older version
here.
This is similar to 6caecce3e7, though in
that case JLS8 was the non-deprecated latest version and we were still
asking for JLS3.
- if [ ! -d $M2_HOME/bin ]; then curl https://archive.apache.org/dist/maven/maven-3/3.5.0/binaries/apache-maven-3.5.0-bin.tar.gz | tar zxf - -C $HOME; fi