the declared target of the call site. This is needed to make sure
forName targets loaded with the Application loader get resolved to point
to the real method reference for forName.
This issue actually manifested itself in the Kawa Chess program, and so
I have added an assertion to make sure this resolution is done properly.
Fixes inconsistent behavior of the exclusions argument.
Depending on the androidLib argument, setting exclusions to null
is either fine or raises an exception.
This patch makes exclusions truly optional: passing null is now
accepted in every case.
* Implementation of IMethod.isSynthetic and isWalaSynthetic
So far, IMethod.isSynthetic referred to WALA-generated helper
functions, and there was no equivalent to check whether an IMethod is
synthetic in the compiler-generated sense.
To make naming consistent, this patch first renames isSynthetic to
isWalaSynthetic to clearly indicate that a given IMethod was generated
by WALA. Then we re-introduce isSynthetic, which from now on checks
whether an IMethod is synthetic/compiler-generated (referring to the
synthetic flag in bytecode).
* Implementation of IClass.isSynthetic
Complementary to IMethod.isSynthetic, this method checks whether
an IClass is compiler-generated.
* updated JavaDoc
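As a sketch of what the bytecode-level synthetic flag means, consider a compiler-generated bridge method, which the JVM marks with ACC_SYNTHETIC. The class names below are illustrative only, not WALA APIs:

```java
import java.lang.reflect.Method;

public class SyntheticDemo {
  static class Base {
    Object get() { return null; }
  }

  static class Derived extends Base {
    // Covariant override: javac also emits a bridge method
    // "Object get()" in Derived, flagged ACC_SYNTHETIC.
    @Override
    public String get() { return "x"; }
  }

  // True if Derived declares a compiler-generated bridge for get().
  static boolean hasSyntheticBridge() {
    for (Method m : Derived.class.getDeclaredMethods()) {
      if (m.getReturnType() == Object.class && m.isSynthetic()) return true;
    }
    return false;
  }

  public static void main(String[] args) {
    System.out.println("bridge is synthetic: " + hasSyntheticBridge());
  }
}
```

The new IMethod.isSynthetic should report true for members like that bridge, while isWalaSynthetic stays reserved for WALA's own generated helpers.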
* Fixes IllegalStateException
Reverts refactored code with try-with-resources back to the
potentially leaking implementation. The refactored code threw an
exception since JarFileModule does not implement the AutoCloseable
interface. Further, removed the printStackTrace() call, as this is not
an exceptional case but intended control flow in case DexFileModule
creation fails.
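As background, try-with-resources only accepts resource types that implement AutoCloseable. A minimal sketch of that contract; the Resource class here is hypothetical, standing in for what JarFileModule would need to provide:

```java
public class TwrDemo {
  // A stand-in resource; try-with-resources only accepts types
  // implementing AutoCloseable, which JarFileModule does not.
  static class Resource implements AutoCloseable {
    boolean closed = false;

    @Override
    public void close() { closed = true; }
  }

  public static void main(String[] args) {
    Resource r = new Resource();
    try (Resource inner = r) {
      // use the resource here
    }
    // close() ran automatically when the try block exited
    System.out.println("closed: " + r.closed);
  }
}
```

Without that interface, the try-with-resources form cannot manage the resource, so the revert back to manual handling is the pragmatic fix for now.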
* Downgrade JarFile leak diagnostic from error to warning
This is consistent with how we are treating potential JarFile leaks in
other WALA components. WALA issue #236 already notes that these
should be cleaned up eventually, although doing so will not be easy.
These specific test resources are already included in the "testArchives"
configuration of the "com.ibm.wala.core.tests" subproject, upon which
the "com.ibm.wala.dalvik.test" tests already depend. So there's no need
to also copy these resources into the "com.ibm.wala.dalvik.test" test
resources area as well.
Previously I hadn't realized that Gradle's "java" plugin would generate
default "cleanTest" tasks for us. By defining my own "cleanTest" tasks
we were replacing the generated ones, but what we really wanted to do
was augment them with additional files to delete.
Every dependency task listed here is already a dependency of at least
one subproject's "processTestResources" task, and each
"processTestResources" task already depends on the corresponding
"afterEclipseBuildshipImport" task. So listing these tasks here too
is unnecessary.
All Kawa-related downloads now use our existing VerifiedDownload task
class. This gives us Gradle-integrated progress reporting,
incremental build support, build caching, correct dependencies, etc.
Using Kawa to compile Scheme into bytecode now also has proper
dependency management, incremental build support, and build caching.
Same goes for bundling these compiled bytecode files into jar archives
for later use in regression tests.
Also, when downloading kawa-chess, grab a specific commit hash rather
than whatever is the most recent master commit. If this project
changes in the future, we don't want our tests to break unexpectedly.
Perhaps we'd want to pick up any new kawa-chess commits; perhaps not.
Either way, that should be a conscious decision rather than something
that can happen behind our backs.
This specific task runs an external command, and we consider the task
successful if that command exits without error. We don't actually
examine the stdout or stderr of the command being run.
However, it is still useful to log the stdout and stderr to a file,
and to declare that file to be the output of the task. Otherwise, the
task has no declared outputs at all. A task with no outputs is
ineligible for caching and is always considered to be out-of-date.
squash! Declare a task's outputs, enabling incremental build and caching
The issue here is a planned change to how "publishing" blocks work.
Per
<https://docs.gradle.org/4.9/userguide/publishing_maven.html#publishing_maven:deferred_configuration>,
the right way to prepare for this change is to enable it and check for
unexpected changes in what gets published to a local repository. I
have done this, and find no unexpected changes.
So we are actually ready for Gradle 5.0; the warning is a false
positive for us. Leaving the future change enabled means we won't
keep seeing this warning. It also means that any further changes to
our use of "publishing" will be tested under that future change, which
is a good way to avoid surprises later.
Gradle won't pass an absolute path when building libcast. We need to
set install_name manually; otherwise `dyld` would not be able to find
libcast at runtime.
This is only needed on macOS, since the Linux dynamic linker searches
all runtime search paths automatically.
Fixes #328, which requested better diagnostic messages in the case of
a missing C/C++ compiler toolchain. Gradle actually has perfectly
good, informative messages in that case. Unfortunately, we were
killing the build by dereferencing null before Gradle had a chance to
explain. Now we bail out of some setup work early to avoid the null
dereference, thereby letting Gradle explain things properly.
Under some circumstances, Gradle seems to decide that the destination
file being absent is the download task's expected outcome. It caches
this state, and refuses to retry the download in the future since it
thinks the task is up-to-date. We can correct this by telling Gradle
that the task should not be considered up-to-date if the file is
missing, as recommended by
<https://discuss.gradle.org/t/task-up-to-date-but-outputfile-not-created/17568/2>.
In particular, using the "all" package (which includes source) allows
IntelliJ IDEA to provide autocompletion and other nice features that
are unavailable when using the "bin" package.
Fixes #322
We add an option `createPhantomSuperclasses` to `ClassHierarchy`. When it is set and a superclass is missing, we create a new `PhantomClass` in its place and allow the subclass to be added.
To use it, create the `ClassHierarchy` with the new `ClassHierarchyFactory.makeWithPhantom` methods.
We already had some IntelliJ IDEA project metadata files in ".idea".
I've revisited and updated those now that I have more experience with
Gradle + IntelliJ IDEA + Git. I think this now represents a better
set of decisions regarding what does and does not belong in version
control.
This commit also extends "README-Gradle.md" with clear instructions on
how to bring up WALA as a top-level IntelliJ IDEA project. The
instructions are of a similar flavor to the Eclipse instructions that
were already present, though the details vary. Most notably, with
IntelliJ IDEA you should *open* WALA as an existing project,
not *import* it as a new Gradle project derived from "build.gradle".
This is exactly the reverse of what one should and shouldn't do for
WALA in Eclipse.
When IntelliJ IDEA imports WALA's Gradle configuration, it creates
what it calls a "module" for each sourceSet of each Gradle subproject.
In so doing, it automatically picks up the source and resource
directories used by each of these sourceSets. Unfortunately, IntelliJ
IDEA does not allow multiple modules to share a single root directory
as their source or resource directories, and that's exactly what was
going on with the "example-src" subdirectory under
"com.ibm.wala.cast.js.test.data".
This revised Gradle configuration still copies the necessary
"example-src" resources to the appropriate locations for use as test
resources. But IntelliJ IDEA no longer treats "example-src" as a root
directory for resources in the automatically-generated modules. So
now we get along nicer with IntelliJ IDEA while keeping everything
working with Gradle as well.
This task serves a similar role to the "afterEclipseBuildshipImport"
task used with Eclipse. It should only be necessary to run this task
once: in a freshly checked-out tree, just after opening it for the
first time in IntelliJ IDEA.
Ideally this extra setup task would be triggered automatically using
the "Tasks Activation" feature of IntelliJ IDEA's Gradle support.
Unfortunately, "Tasks Activation" settings are recorded in
".idea/workspace.xml", which is used for non-revision-tracked personal
settings.
If Gradle dependencies are set up correctly, then it should be
possible to build any subproject starting with a pristine tree.
These take too long to use for every commit, pull request, etc. But
running an extensive test like this periodically (e.g., weekly) seems
reasonable.
Boxing a primitive using the constructor ("new Integer(4)") always
creates a distinct new boxed instance. That's rarely what you need,
and in fact all of those constructors have been deprecated in Java 9.
Using the static "valueOf" method instead ("Integer.valueOf(4)") can
give better performance by reusing existing instances. You no longer
get a unique boxed object, but generally that's OK.
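A small demonstration of the difference; the caching of small values by valueOf is guaranteed by the Integer.valueOf documentation for the range [-128, 127]:

```java
public class BoxingDemo {
  public static void main(String[] args) {
    Integer a = Integer.valueOf(4);
    Integer b = Integer.valueOf(4);
    @SuppressWarnings("deprecation")
    Integer c = new Integer(4); // deprecated since Java 9; always a fresh object

    System.out.println(a == b);      // true: small values come from a shared cache
    System.out.println(a == c);      // false: the constructor made a new instance
    System.out.println(a.equals(c)); // true: still numerically equal
  }
}
```

So code that only needs value equality loses nothing by switching to valueOf, while code relying on distinct boxed identities (rare, and a smell anyway) would need rethinking.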
"javah" was deprecated in Java 9, and has been removed entirely in
Java 10. The right way to generate headers now is by using the "-h"
flag to "javac". When "javac" is run in this way, it still generates
bytecode too. So ideally, we'd run "javac" just once to generate
bytecode and headers simultaneously. Fortunately, the Gradle JNI
plugin at <https://github.com/wpilibsuite/gradle-jni> helps us do this
cleanly. Nice!
Previously we were compiling and linking "smoke_main", but not
actually running it as part of automated testing. I simply overlooked
this in the process of recreating the Maven build logic in Gradle.
Now we run "smoke_main" when testing, which turns out to be a pretty
good test of our management of both Java search paths as well as
linker / shared library search paths.
We now use "-rpath" on both Linux and macOS. This linker flag sets
the ELF RPATH on Linux, and the "@rpath" attribute on macOS, but
effectively it's doing the same thing, and that same thing is exactly
what we want. I think.
On Linux, we also now look for the JVM shared library in three
different places. The library has moved between Java 8 and 9, and
even on Java 9 it goes in a different place on Fedora 28 versus Ubuntu
16.04.
According to
<https://docs.gradle.org/current/userguide/native_software.html>,
"When you assemble dependents of a component, the component and all of
its dependents are compiled and linked, including any test suite
binaries." So it's intentional that the "assemble" task causes
creation of the smoke_main executable and xlator_test shared library.
Nothing TODO here; the current behavior is as designed.
The existing code worked fine under Java 8, but Java 9 fails to
resolve type constraints unless it has more explicit information about
at least one of the arguments to anyOf. Weird.
WALA itself does not use any JUnit 4.12 features. However, I am
working on a WALA-based project that does require JUnit 4.12. Mixing
jar versions is a path to madness. So if the WALA maintainers don't
mind, it would make my life easier if WALA itself required JUnit 4.12.
Otherwise, I need to maintain that 4.11/4.12 difference indefinitely
as a divergence between official WALA and my WALA variant.
I suppose an alternative could be to let the JUnit version be
specified externally somehow. I have not explored this option in
depth, but could look into it if simply moving to JUnit 4.12 is
undesirable.
Someone may have thought that we were ignoring these files, but we
aren't. From what I can tell, for these specific files, revision
tracking is intentional.
This follows an IntelliJ IDEA recommendation. Having the full
distribution allows IntelliJ IDEA to provide contextual help,
autocompletion, etc. for Gradle build scripts. The disadvantage, I
suppose, is that it imposes a larger download time on first use of
"gradlew".
New features that I like from this release: (1) better output grouping
when building in parallel, and (2) automatic test ordering to try
previously-failing tests first.
This gives the WALA maintainers the option of doing future 1.4.5+
releases from a pre-Gradle branch if these merged Gradle changes
turn out to be more disruptive than expected.
The Eclipse metadata files created in this way are not identical to
those that Buildship would create when importing into Eclipse. The
tests in com.ibm.wala.cast.java.test.JDTJava15IRTests and
com.ibm.wala.cast.java.test.JDTJavaIRTests seem to pass using either
the Gradle-generated or the Buildship-generated versions.
As an aside, if you generate these files using Gradle first and *then*
import using Buildship, you end up with metadata that is identical to
what you would have had if you'd only imported with
Buildship. (There's one irrelevant difference in an unimportant
"<comment>" element.) So Maven's tests should run fine under any
wacky mix of Gradle- and Buildship-generated Eclipse metadata files.
That being said, I recommend not mixing build systems. WALA can be
built using either Maven, Gradle, or Eclipse+Buildship, but you're
probably better off only using one of these in any given build tree.
A mixed tree *should* probably work, but I haven't tested it
thoroughly, and consider it better to avoid.
Incidentally, if there are other Maven-preparation steps that we'd
like Gradle to automate for us, that can now be done easily by
creating more "prepareMavenBuild" Gradle tasks in other subprojects
and adding the appropriate dependencies. For example, it would be
trivial to use this to automate downloading "/tmp/DroidBench",
installing the Android SDK, etc.
I don't know what changes are triggering this. Presumably it's
something to do with the temporary-file code, but I don't see why that
would happen. For now, let's just skip this test.
If multiple tests both write to "/tmp/cg.txt" (for example), then
these tests cannot safely run concurrently. That never used to be a
problem with Maven, since our Maven tests were all strictly sequential
anyway. But parallelized testing using Gradle requires that we do
better. Fortunately, Java has perfectly reasonable APIs for
generating uniquely-named temporary files and directories.
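For instance, java.nio.file gives each test its own collision-free scratch file; the "cg" prefix below just echoes the /tmp/cg.txt example, and the class name is illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TempFileDemo {
  public static void main(String[] args) throws IOException {
    // Each call yields a uniquely-named file, so concurrently running
    // tests never clobber a shared, fixed path like /tmp/cg.txt.
    Path first = Files.createTempFile("cg", ".txt");
    Path second = Files.createTempFile("cg", ".txt");
    System.out.println("distinct: " + !first.equals(second));

    // Clean up so repeated test runs don't accumulate files.
    Files.delete(first);
    Files.delete(second);
  }
}
```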
The performance improvement offered by the build cache is modest when
run by Travis CI. This is probably because Travis CI does not keep
the cache on any local machine. Instead, Travis CI uploads the build
cache across a network at the end of each run, then restores the cache
across the network at the start of the next run. So in many cases
we're simply trading network access to original downloads for network
access to the cache image.
Furthermore, it's probably a better test to perform Travis CI testing
jobs from something closer to a clean slate. We really want to know
whether everything builds and passes tests correctly starting from
nothing. We don't want to risk accidentally thinking something would
work simply because we have a cached copy of what it did when it
worked previously.
Previously we unpacked in one task, then installed two extra
components in two dependent tasks. However, installing extra
components modifies some files in place, effectively making those
files both inputs and outputs. That creates race conditions, and
probably interferes with task output caching. Better, then, to treat
the unpack and extra installations all as a single task whose output
is the complete Android SDK tree with all required components
installed.
This should give us a nice build-performance boost, both locally and
in Travis CI. I've used parallel builds routinely for months now, and
they're working fine. Build output caching is newer, but it also
seems to be working well and saves us tremendous time on downloads.
Without this setting, Gradle would run multiple "test" tasks from
multiple subprojects concurrently, but each "test" task would only run
a single test at a time. Some of our subprojects' test tasks take a
long time to complete, which could easily leave us sequentially
testing on just one or two CPU cores while other cores sit idle.
With this change, Gradle will use as many task slots as are
available (e.g., when run with "--parallel") to run multiple
simultaneous JUnit test classes within any given subproject. This
seems to be fine for us: I am unaware of any shared global state that
would cause conflicts between two test classes within any given
subproject.
When generating JNI headers for a given class, each nested class will end up with
its own additional header. But I don't want to try to parse nested
class details out of the Java source files just so we can determine
exactly which headers will be created. Instead, simply treat the
entire header destination directory as the output of this task.
This task has an input named "hello_hash.ml", and an output named
"hello_hash.jar". So calling this task "generateHelloHash" is too
vague. Now we call it "generateHelloHashJar" instead.
These manifest files are here for use by the Maven build, but Eclipse
is now using Gradle (via Buildship). So the manifests as seen by
Eclipse do not entirely make sense. I'm hesitant to change the
manifests directly, since presumably they were correct for Maven and
still are.
Perhaps some day we'll be using Gradle to generate manifests. Until
that day comes, we're better off leaving the manifests as they are and
just suppressing Eclipse's warnings instead.
This still doesn't actually work, but it's closer than it was before.
There's still some problem with improper mixing of 32-bit ("x86") and
64-bit ("x64") libraries.
Avoid allocating memory using strdup() and then releasing it using
operator delete. strdup()-allocated memory should be released using
free(), not delete. But in these specific cases, there really was
never any good reason to duplicate the C-style strings in the first
place. Instead, we can safely pass those NUL-terminated char pointers
directly in calls to JNI's NewStringUTF(). NewStringUTF() does not
take ownership of its argument, but rather clones that string data
internally before returning. So using strdup() and delete was just
unnecessary memory churn.
In cases where we need to format, concatenate, or construct new
strings, don't use sprintf() into variable-sized, stack-allocated
arrays. Variable-sized arrays are not portable, and in particular are
rejected by Microsoft's Visual Studio 2017 C++ compiler. Instead, do
our string formatting and manipulations using std::ostringstream
and/or std::string. We just need to be a bit careful about the
lifetimes and ownership responsibilities of allocated data. In
brief, (1) ostringstream::str() returns a temporary string instance
that expires at the end of the enclosing statement, independent of the
lifetime of the ostringstream instance; while (2) string::c_str()
returns a pointer to internal data that remains valid as long as the
string on which it was called is valid and unmodified.
The default location of DroidBench in "/tmp/DroidBench" does not work
well on Windows. So let's disable these tests until someone has time to
make that path more portable.
This URL skips over a redirect that the previous URL went through.
This URL also avoids an annoying "Invalid cookie header" warning that
the previous URL produced.
We now download and verify checksums as a single task, rather than as
two separate tasks. This simplifies other task dependencies, since we
no longer have a checksum-verified "stamp" file separate from the
download itself. Unfortunately the combined task now has a
significant amount of repeated boilerplate. I'm hoping to refactor
that all out into a custom task class, but haven't yet figured out the
details:
<https://github.com/michel-kraemer/gradle-download-task/issues/108>.
We now also use ETags to be smarter about when a fresh download is or
is not actually needed. I think there are still opportunities for
improved caching here, but this is a step in the right direction.
I believe Travis CI jobs get two CPUs by default.
Doing parallel builds regularly is also a good way to help us discover
any build race conditions we may have. There's no guarantee that any
such races will be revealed, but even exposing them
nondeterministically is better than having no possibility of exposing
them at all.
We used to use these to find various Eclipse packages, but that was
always a dodgy affair since we never quite knew whether we had
matching versions of everything. Now that we are using the
"com.diffplug.gradle.p2.asmaven" plug-in, though, we have much better
control over getting exactly the Eclipse material we need. These two
Maven repositories no longer provide anything we use, and therefore
can be removed.
Previously this launcher's job was to run "processTestResources" and
any other Gradle tasks needed to create files that Eclipse was
expecting to be available. But we also want to use it to revert the
bad changes that Buildship applies to ".launch" configuration files.
This is a temporary hack to work around
<https://github.com/eclipse/buildship/issues/653>.
Unlike Linux, macOS has no "RPATH" facility for embedding additional
search paths within shared libraries. Instead, we need to set
"DYLD_LIBRARY_PATH" appropriately in the environment. This
environment variable is the macOS analogue of "LD_LIBRARY_PATH" on
Linux.
Note that adding the required path to the "java.library.path" system
property will *not* work. This property only affects where the JVM
looks for shared objects that it is loading directly. This property
does not influence the search for other, transitively-required shared
objects.
Fixes #3.
This fixes the last of our Javadoc warnings without creating a
circular dependency between ":com.ibm.wala.cast:javadoc" and
":com.ibm.wala.cast.js:javadoc". Fixes #4, wherein more details about
this tricky dependency challenge can be found.
This lets the "DynamicCallGraphTest" tests pass. The tests in that
class expect to find some element of the classpath that includes the
substring "com.ibm.wala.core.testdata". They then treat that as a
collection of bytecode files to instrument.
Big thanks to Julian for showing me where this exclusion logic lives
in the Maven configuration. There's a "**/*AndroidLibs*.java"
exclusion pattern in the top-level "pom.xml".
It's telling me to remove "eclipse-deps:org.eclipse.core.runtime:+"
and "org.osgi:org.osgi.core:4.2.0" as unused dependencies in the
"com.ibm.wala.cast.java.ecj" subproject. However, these two
dependencies (jar files) are actually needed; compilation fails
without them.
This should give us a set of mutually-consistent jars rather than
picking up random, outdated pieces from Maven Central or wherever else
I could find them. We now also have a single, central place where we
set the Eclipse version that we're building against. Much, *much*
cleaner.
We're not going to attempt macOS Travis CI testing for Maven builds,
because I don't know whether that's even expected to work on the
official WALA master branch. Our main focus here is Gradle.
Note that Travis macOS images do not support JDK switching, so
explicitly selecting the JDK version now becomes part of the
Linux-only configuration.
Travis macOS images also do not support Android as a build "language".
So our Travis CI configuration for Gradle builds now declares that
this is a Java project rather than an Android one. That's OK, though,
because our Gradle scripts already handle downloading the Android SDK;
we don't need Travis CI to do that for us. When building using Maven,
though, we still call this an Android project because Maven builds do
still rely on Travis CI to provide the Android SDK.
squash! Enable macOS (a.k.a. OS X) Travis CI testing for Gradle builds
If future DroidBench changes include things we need, then we can
decide to move to those newer revisions. But we shouldn't allow
DroidBench to change out from under us implicitly whenever someone
commits something new to the DroidBench repository.
This lets us ditch pre-Java-8 support in the Gradle build. (The
official WALA master branch recently got rid of pre-Java-8 support in
its Maven build.)
That, in turn, lets two "com.ibm.wala.dalvik.test" tests pass that
previously were failing. We still have two other failing tests in
that subproject, but this is definitely progress!
Our Gradle build scripts manage the entire process of downloading and
locally installing the appropriate Android SDK. That includes
automatically accepting a license. Maybe some lawyer will throw a fit
about that some day. Until then, I'd rather have a build system that
does everything needed without imposing additional manual steps on
developers.
Previously Buildship removed its classpath from all of these
launchers. Now it's automatically putting that back in as soon as I
visit each launcher in Eclipse's "Run Configurations" dialog. Not
sure what's going on here, but it certainly seems more sane to me to
assume that the Buildship-computed classpath *is* needed for all of
these. I have an open question on the Gradle discussion forum to try
to understand what's going on here and how to fix it:
<https://discuss.gradle.org/t/launchers-lose-buildship-classpath-on-import-regain-it-later/25641>.
This should prepare test resources for all subprojects. A WALA
developer should run this once before running any tests inside
Eclipse. Initially I'd hoped to make this more narrowly focused, but
Eclipse just doesn't have the infrastructure to deal with fine-grained
dependencies. On the other hand, running "./gradlew
eclipsePrepareTestResources" automatically for each build seems like
overkill, and could end up being rather slow. So for now we require
that the developer run this once, by hand.
A cleaned tree is now much closer to a pristine tree that has just
been checked out and never built. The only extra created files that
are left behind are ".gradle", "buildSrc/.gradle", and
"buildSrc/build".
This gets rid of some Eclipse warnings that stem from Buildship being
confused about what it should treat as a source directory if Maven and
Gradle are both being used in the same tree.
Previously we were repeating the library path twice, but that's not
good for long-term maintenance. That being said, extracting this
information from the depths of the native software model seems *far*
more complex than it should be. I had hoped for something nicer in
response to
<https://discuss.gradle.org/t/compute-wl-rpath-flag-suitable-for-native-shared-library/25278>,
but so far there's nothing.
Previously Maven did this, but Gradle did not. So Gradle testing
would only succeed if we'd already done a Maven build first. Now
these tests pass in a fresh tree that's never seen a Maven build.
Some tests in other subprojects do depend on some of these extra jar
files. But they can declare those specific dependencies as needed.
Nothing seems to depend on the entire group of extra jars, so it's not
really useful to declare a task that is merely an alias for all of
them.
Many tests are excluded until
<https://github.com/liblit/WALA/issues/5> is fixed. But we can at
least have Travis-CI watching over our shoulder to ensure that
no *new* regressions sneak into the tree.
<https://github.com/liblit/WALA/issues/5> notes that several
subprojects' tests are currently broken under Gradle. I'd still like
to be able to run non-broken tests, though. So here I'm disabling the
failing tests. The intent is to treat these exclusions as a to-do
list. We can remove exclusions as we get the corresponding tests
working. No more exclusions means
<https://github.com/liblit/WALA/issues/5> is fixed.
It's rather slow, adding roughly five seconds to every "./gradlew"
invocation. And the advice it gives might not even be reaching a
fixed point. I like the idea of running the linter as part of CI
testing, but I now think it's overkill to impose on every developer
build.
I don't know whether Windows or macOS needs anything similar. If they
do, the details will differ, and should be handled by adding suitable
cases to these switch statements.
This partially reverts 72bc456b7. I'm starting to wonder whether the
linter might be driving us in cycles rather than reaching a fixed
point. We should keep our eyes on this.
I'm not actually sure why this archive is needed, except that it is
mentioned in "META-INF/MANIFEST.MF" and "build.properties". If we
eventually stop supporting Maven, then we may be able to discard the
"copyJarsIntoLib" task and the corresponding lines in
"META-INF/MANIFEST.MF" and "build.properties".
This consistently happens when I import WALA as an existing Gradle
project into Eclipse with Buildship. I don't really know what this
change means, or whether it's desirable. For now, I'm going to trust
Buildship and see what happens.
I think these were previously not being compiled at all. Now, with
Buildship generating Eclipse ".project" settings automatically, these
are being processed. In general we don't care much about questionable
code in test data, though.
These settings files currently are generated with an initial timestamp
comment line, which is not something we'd want to track in version
control. Fortunately, the contents of these files are entirely
mundane, so there should be no problem with having Buildship generate
them anew each time a developer imports WALA into Eclipse as an
existing Gradle project.
Apparently Buildship generates these when one uses Import -> Existing
Gradle Project:
<https://discuss.gradle.org/t/buildship-eclipse-plug-in-multiproject-builds/24030/5>.
We can use the Gradle "eclipse" plugin if customizations are
necessary, but my impression is that the intent is to treat ".project"
and ".classpath" as generated files, not sources to be tracked in
source control.
Unfortunately these tests are still not finding their resources
properly at test run time. I don't know why. It seems to have
something to do with how the tests instantiate and use class loaders.
I'm probably going to need expert help with this.
Dependencies are still not set properly here, so you need to have
built the shared library ("./gradlew xlator_testSharedLibrary") before
running the ":com.ibm.wala.cast.test:test" test task. But at least
the tests do now find and load that shared library properly.
I was confused about the differences among:
srcDir 'foo'
srcDirs ['foo']
srcDirs = ['foo']
As it turns out, the first two append to the set of source
directories, while the last replaces this set entirely. I generally
want replacement, since WALA's current directory layout never matches
Gradle's assumed defaults.
The main requirement here is to arrange for the proper classpath
settings when tests are running so that they can find any associated
resources (i.e., other supporting files).
The tests are currently broken due to some sort of problem using class
loaders to find supporting resources. Until I figure this out, better
to have Travis-CI verify only the things we think work.
Specifically, we're not really in a position now to deal with
duplicated classes among our dependencies. Maybe we can try harder to
examine those in the future, but for now they are a distraction from
other issues that we can attack more readily.
Some of the linter's checks produce failures (errors) when Gradle
builds the Javadoc documentation. Fixing them isn't really a Gradle
issue, though, so I don't want to deal with them now.
Unfortunately the linter does not reach a fixpoint if you keep trying
to apply its suggestions. If you include "compile
'org.eclipse.core:org.eclipse.core.runtime:3.10.0.v20140318-2214'" in
the dependencies for "com.ibm.wala.ide.jdt", then the linter tells you
that this dependency is unused and can be removed. If you remove it,
then the linter tells you that it should be added. Sigh.
By default, each subproject's Javadoc task depends on the same
subproject's Java compilation task, and uses the same classpath.
Thus, any classes that some Java code uses will also be visible when
building the same Java code's documentation.
In this case, we need to see one of the "com.ibm.wala.core" classes in
order to build the "com.ibm.wala.util" documentation. However, we
cannot have Java compilation of "com.ibm.wala.util" depend on Java
compilation of "com.ibm.wala.core", because that would create a
dependency cycle. So we need to add this as a special dependency just
for the "com.ibm.wala.util" documentation task, and add the
appropriate classpath as well.
I'm quite proud of myself for figuring out how to do this properly.
This should help identify cases where the Gradle build only works if
it runs before or after a Maven build. It will also help us recognize
any Maven regressions accidentally introduced by our Gradle work.
Eventually I'll want to swap that order, so that we know that Gradle
builds work even without any help from Maven build setup logic. For
now, though, I just want to test whether the Gradle build works at
all.
This gives the WALA maintainers the option of doing future 1.4.5+
releases from a pre-Gradle branch if these merged Gradle changes
turn out to be more disruptive than expected.
The Eclipse metadata files created in this way are not identical to
those that Buildship would create when importing into Eclipse. The
tests in com.ibm.wala.cast.java.test.JDTJava15IRTests and
com.ibm.wala.cast.java.test.JDTJavaIRTests seem to pass using either
the Gradle-generated or the Buildship-generated versions.
As an aside, if you generate these files using Gradle first and *then*
import using Buildship, you end up with metadata that is identical to
what you would have had if you'd only imported with
Buildship. (There's one irrelevant difference in an unimportant
"<comment>" element.) So Maven's tests should run fine under any
wacky mix of Gradle- and Buildship-generated Eclipse metadata files.
That being said, I recommend not mixing build systems. WALA can be
built using either Maven, Gradle, or Eclipse+Buildship, but you're
probably better off only using one of these in any given build tree.
A mixed tree *should* probably work, but I haven't tested it
thoroughly, and I consider it better to avoid.
Incidentally, if there are other Maven-preparation steps that we'd
like Gradle to automate for us, that can now be done easily by
creating more "prepareMavenBuild" Gradle tasks in other subprojects
and adding the appropriate dependencies. For example, it would be
trivial to use this to automate downloading "/tmp/DroidBench",
installing the Android SDK, etc.
I don't know what changes are triggering this. Presumably it's
something to do with the temporary-file code, but I don't see why that
would happen. For now, let's just skip this test.
If multiple tests both write to "/tmp/cg.txt" (for example), then
these tests cannot safely run concurrently. That never used to be a
problem with Maven, since our Maven tests were all strictly sequential
anyway. But parallelized testing using Gradle requires that we do
better. Fortunately, Java has perfectly reasonable APIs for
generating uniquely-named temporary files and directories.
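Those APIs can be used roughly like this (a minimal sketch; the `cg` prefix is illustrative, not the actual test code):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ScratchFiles {
  // Each call returns a uniquely named file under the system temporary
  // directory, so concurrently running tests cannot collide on a shared
  // hard-coded path like "/tmp/cg.txt".
  static Path newCallGraphFile() {
    try {
      return Files.createTempFile("cg", ".txt");
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```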
The performance improvement offered by the build cache is modest when
run by Travis CI. This is probably because Travis CI does not keep
the cache on any local machine. Instead, Travis CI uploads the build
cache across a network at the end of each run, then restores the cache
across the network at the start of the next run. So in many cases
we're simply trading network access to original downloads for network
access to the cache image.
Furthermore, it's probably a better test to perform Travis CI testing
jobs from something closer to a clean slate. We really want to know
whether everything builds and passes tests correctly starting from
nothing. We don't want to risk accidentally thinking something would
work simply because we have a cached copy of what it did when it
worked previously.
Previously we unpacked in one task, then installed two extra
components in two dependent tasks. However, installing extra
components modifies some files in place, effectively making those
files both inputs and outputs. That creates race conditions, and
probably interferes with task output caching. Better, then, to treat
the unpack and extra installations all as a single task whose output
is the complete Android SDK tree with all required components
installed.
This should give us a nice build-performance boost, both locally and
in Travis CI. I've used parallel builds routinely for months now, and
they're working fine. Build output caching is newer, but it also
seems to be working well and saves us tremendous time on downloads.
Without this setting, Gradle would run multiple "test" tasks from
multiple subprojects concurrently, but each "test" task would only run
a single test at a time. Some of our subprojects' test tasks take a
long time to complete, which could easily leave us sequentially
testing on just one or two CPU cores while other cores sit idle.
With this change, Gradle will use as many task slots as are
available (e.g., when run with "--parallel") to run multiple
simultaneous JUnit test classes within any given subproject. This
seems to be fine for us: I am unaware of any shared global state that
would cause conflicts between two test classes within any given
subproject.
When generating JNI headers for a given class, each nested class will end up with
its own additional header. But I don't want to try to parse nested
class details out of the Java source files just so we can determine
exactly which headers will be created. Instead, simply treat the
entire header destination directory as the output of this task.
This task has an input named "hello_hash.ml", and an output named
"hello_hash.jar". So calling this task "generateHelloHash" is too
vague. Now we call it "generateHelloHashJar" instead.
These manifest files are here for use by the Maven build, but Eclipse
is now using Gradle (via Buildship). So the manifests as seen by
Eclipse do not entirely make sense. I'm hesitant to change the
manifests directly, since presumably they were correct for Maven and
still are.
Perhaps some day we'll be using Gradle to generate manifests. Until
that day comes, we're better off leaving the manifests as they are and
just suppressing Eclipse's warnings instead.
This still doesn't actually work, but it's closer than it was before.
There's still some problem with improper mixing of 32-bit ("x86") and
64-bit ("x64") libraries.
Avoid allocating memory using strdup() and then releasing it using
operator delete. strdup()-allocated memory should be released using
free(), not delete. But in these specific cases, there really was
never any good reason to duplicate the C-style strings in the first
place. Instead, we can safely pass those NUL-terminated char pointers
directly in calls to JNI's NewStringUTF(). NewStringUTF() does not
take ownership of its argument, but rather clones that string data
internally before returning. So using strdup() and delete was just
unnecessary memory churn.
In cases where we need to format, concatenate, or construct new
strings, don't use sprintf() into variable-sized, stack-allocated
arrays. Variable-sized arrays are not portable, and in particular are
rejected by Microsoft's Visual Studio 2017 C++ compiler. Instead, do
our string formatting and manipulations using std::ostringstream
and/or std::string. We just need to be a bit careful about the
lifetimes and ownership responsibilities of allocated data. In
brief, (1) ostringstream::str() returns a temporary string instance
that expires at the end of the enclosing statement, independent of the
lifetime of the ostringstream instance; while (2) string::c_str()
returns a pointer to internal data that remains valid as long as the
string on which it was called is valid and unmodified.
The default location of DroidBench in "/tmp/DroidBench" does not work
well on Windows. So let's disable these tests until someone has time to
make that path more portable.
This URL skips over a redirect that the previous URL went through.
This URL also avoids an annoying "Invalid cookie header" warning that
the previous URL produced.
We now download and verify checksums as a single task, rather than as
two separate tasks. This simplifies other task dependencies, since we
no longer have a checksum-verified "stamp" file separate from the
download itself. Unfortunately the combined task now has a
significant amount of repeated boilerplate. I'm hoping to refactor
that all out into a custom task class, but haven't yet figured out the
details:
<https://github.com/michel-kraemer/gradle-download-task/issues/108>.
We now also use ETags to be smarter about when a fresh download is or
is not actually needed. I think there are still opportunities for
improved caching here, but this is a step in the right direction.
I believe Travis CI jobs get two CPUs by default.
Doing parallel builds regularly is also a good way to help us discover
any build race conditions we may have. There's no guarantee that any
such races will be revealed, but even exposing them
nondeterministically is better than having no possibility of exposing
them at all.
We used to use these to find various Eclipse packages, but that was
always a dodgy affair since we never quite knew whether we had
matching versions of everything. Now that we are using the
"com.diffplug.gradle.p2.asmaven" plug-in, though, we have much better
control over getting exactly the Eclipse material we need. These two
Maven repositories no longer provide anything we use, and therefore
can be removed.
Previously this launcher's job was to run "processTestResources" and
any other Gradle tasks needed to create files that Eclipse was
expecting to be available. But we also want to use it to revert the
bad changes that Buildship applies to ".launch" configuration files.
This is a temporary hack to work around
<https://github.com/eclipse/buildship/issues/653>.
Unlike Linux, macOS has no "RPATH" facility for embedding additional
search paths within shared libraries. Instead, we need to set
"DYLD_LIBRARY_PATH" appropriately in the environment. This
environment variable is the macOS analogue of "LD_LIBRARY_PATH" on
Linux.
Note that adding the required path to the "java.library.path" system
property will *not* work. This property only affects where the JVM
looks for shared objects that it is loading directly. This property
does not influence the search for other, transitively-required shared
objects.
Fixes #3.
This fixes the last of our Javadoc warnings without creating a
circular dependency between ":com.ibm.wala.cast:javadoc" and
":com.ibm.wala.cast.js:javadoc". Fixes #4, wherein more details about
this tricky dependency challenge can be found.
This lets the "DynamicCallGraphTest" tests pass. The tests in that
class expect to find some element of the classpath that includes the
substring "com.ibm.wala.core.testdata". They then treat that as a
collection of bytecode files to instrument.
Big thanks to Julian for showing me where this exclusion logic lives
in the Maven configuration. There's a "**/*AndroidLibs*.java"
exclusion pattern in the top-level "pom.xml".
It's telling me to remove "eclipse-deps:org.eclipse.core.runtime:+"
and "org.osgi:org.osgi.core:4.2.0" as unused dependencies in the
"com.ibm.wala.cast.java.ecj" subproject. However, these two
dependencies (jar files) are actually needed; compilation fails
without them.
This should give us a set of mutually-consistent jars rather than
picking up random, outdated pieces from Maven Central or wherever else
I could find them. We now also have a single, central place where we
set the Eclipse version that we're building against. Much, *much*
cleaner.
We're not going to attempt macOS Travis CI testing for Maven builds,
because I don't know whether that's even expected to work on the
official WALA master branch. Our main focus here is Gradle.
Note that Travis macOS images do not support JDK switching, so
explicitly selecting the JDK version now becomes part of the
Linux-only configuration.
Travis macOS images also do not support Android as a build "language".
So our Travis CI configuration for Gradle builds now declares that
this is a Java project rather than an Android one. That's OK, though,
because our Gradle scripts already handle downloading the Android SDK;
we don't need Travis CI to do that for us. When building using Maven,
though, we still call this an Android project because Maven builds do
still rely on Travis CI to provide the Android SDK.
If future DroidBench changes include things we need, then we can
decide to move to those newer revisions. But we shouldn't allow
DroidBench to change out from under us implicitly whenever someone
commits something new to the DroidBench repository.
This lets us ditch pre-Java-8 support in the Gradle build. (The
official WALA master branch recently dropped pre-Java-8 support in its
Maven build.)
That, in turn, lets two "com.ibm.wala.dalvik.test" tests pass that
previously were failing. We still have two other failing tests in
that subproject, but this is definitely progress!
Our Gradle build scripts manage the entire process of downloading and
locally installing the appropriate Android SDK. That includes
automatically accepting a license. Maybe some lawyer will throw a fit
about that some day. Until then, I'd rather have a build system that
does everything needed without imposing additional manual steps on
developers.
Previously Buildship removed its classpath from all of these
launchers. Now it's automatically putting that back in as soon as I
visit each launcher in Eclipse's "Run Configurations" dialog. Not
sure what's going on here, but it certainly seems more sane to me to
assume that the Buildship-computed classpath *is* needed for all of
these. I have an open question on the Gradle discussion forum to try
to understand what's going on here and how to fix it:
<https://discuss.gradle.org/t/launchers-lose-buildship-classpath-on-import-regain-it-later/25641>.
This should prepare test resources for all subprojects. A WALA
developer should run this once before running any tests inside
Eclipse. Initially I'd hoped to make this more narrowly focused, but
Eclipse just doesn't have the infrastructure to deal with fine-grained
dependencies. On the other hand, running "./gradlew
eclipsePrepareTestResources" automatically for each build seems like
overkill, and could end up being rather slow. So for now we require
that the developer run this once, by hand.
A cleaned tree is now much closer to a pristine tree that has just
been checked out and never built. The only extra created files that
are left behind are ".gradle", "buildSrc/.gradle", and
"buildSrc/build".
This gets rid of some Eclipse warnings that stem from Buildship being
confused about what it should treat as a source directory if Maven and
Gradle are both being used in the same tree.
Previously we were repeating the library path twice, but that's not
good for long-term maintenance. That being said, extracting this
information from the depths of the native software model seems *far*
more complex than it should be. I had hoped for something nicer in
response to
<https://discuss.gradle.org/t/compute-wl-rpath-flag-suitable-for-native-shared-library/25278>,
but so far there's nothing.
Previously Maven did this, but Gradle did not. So Gradle testing
would only succeed if we'd already done a Maven build first. Now
these tests pass in a fresh tree that's never seen a Maven build.
Some tests in other subprojects do depend on some of these extra jar
files. But they can declare those specific dependencies as needed.
Nothing seems to depend on the entire group of extra jars, so it's not
really useful to declare a task that is merely an alias for all of
them.
Many tests are excluded until
<https://github.com/liblit/WALA/issues/5> is fixed. But we can at
least have Travis CI watching over our shoulder to ensure that
no *new* regressions sneak into the tree.
<https://github.com/liblit/WALA/issues/5> notes that several
subprojects' tests are currently broken under Gradle. I'd still like
to be able to run non-broken tests, though. So here I'm disabling the
failing tests. The intent is to treat these exclusions as a to-do
list. We can remove exclusions as we get the corresponding tests
working. No more exclusions means
<https://github.com/liblit/WALA/issues/5> is fixed.
It's rather slow, adding roughly five seconds to every "./gradlew"
invocation. And the advice it gives might not even be reaching a
fixed point. I like the idea of running the linter as part of CI
testing, but I now think it's overkill to impose on every developer
build.
I don't know whether Windows or macOS needs anything similar. If they
do, the details will differ, and should be handled by adding suitable
cases to these switch statements.
This partially reverts 72bc456b7. I'm starting to wonder whether the
linter might be driving us in cycles rather than reaching a fixed
point. We should keep our eyes on this.
I'm not actually sure why this archive is needed, except that it is
mentioned in "META-INF/MANIFEST.MF" and "build.properties". If we
eventually stop supporting Maven, then we may be able to discard the
"copyJarsIntoLib" task and the corresponding lines in
"META-INF/MANIFEST.MF" and "build.properties".
This consistently happens when I import WALA as an existing Gradle
project into Eclipse with Buildship. I don't really know what this
change means, or whether it's desirable. For now, I'm going to trust
Buildship and see what happens.
I think these were previously not being compiled at all. Now, with
Buildship generating Eclipse ".project" settings automatically, these
are being processed. In general we don't care much about questionable
code in test data, though.
These settings files currently are generated with an initial timestamp
comment line, which is not something we'd want to track in version
control. Fortunately, the contents of these files are entirely
mundane, so there should be no problem with having Buildship generate
them anew each time a developer imports WALA into Eclipse as an
existing Gradle project.
Apparently Buildship generates these when one uses Import -> Existing
Gradle Project:
<https://discuss.gradle.org/t/buildship-eclipse-plug-in-multiproject-builds/24030/5>.
We can use the Gradle "eclipse" plugin if customizations are
necessary, but my impression is that the intent is to treat ".project"
and ".classpath" as generated files, not sources to be tracked in
source control.
Unfortunately these tests are still not finding their resources
properly at test run time. I don't know why. It seems to have
something to do with how the tests instantiate and use class loaders.
I'm probably going to need expert help with this.
These are slow tests that we were already effectively turning into
no-ops when running on Travis CI. By skipping them using the proper
JUnit mechanism, these tests will show up as ignored or skipped in
test outcome reports. That's better than having them show up as
passing, when we really don't know whether they would have passed or
failed.
Using constructor references apparently pulls in something involving
nullness annotations. However, we don't actually build with a jar
file that defines those annotations, so this leads to Eclipse build
failures. I don't know the right way to add such a jar file to our
current configuration mishmash of Ant, Maven, and Eclipse. So the
easier thing to do is just disable annotation-based nullness analysis.
I doubt we were getting any benefit from such an analysis anyway,
given that WALA itself doesn't use those annotations at all.
Eclipse's automated code clean-up tool did most of the heavy lifting
here: it specifically has a clean-up option for converting functional
interfaces to lambdas. I merely had to revert the automated changes
for a single enumeration class for which it produced invalid results,
and for a few test inputs that apparently aren't set up to be compiled
with Java 8.
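For illustration, this is the kind of mechanical rewrite that clean-up option performs (hypothetical code, not WALA's):

```java
import java.util.function.Predicate;

public class LambdaCleanup {
  // Before clean-up: an anonymous class implementing a functional interface.
  static final Predicate<String> oldStyle =
      new Predicate<String>() {
        @Override
        public boolean test(String s) {
          return s.isEmpty();
        }
      };

  // After clean-up: the equivalent lambda.
  static final Predicate<String> newStyle = s -> s.isEmpty();
}
```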
Previously FilterIterator was very permissive regarding the type
relationships between the original iterator, the filtered iterator,
and the predicate used to prune the former down to the latter. Now we
enforce those relationships more strictly, including proper use of
covariant ("<? extends T>") and contravariant ("<? super T>")
polymorphic type parameters where appropriate.
This lets us get rid of seven suppressed warnings about generic types
and/or unchecked conversions. It also moves us toward being able to
use modern Java features like lambdas and streams more easily.
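The variance pattern described above looks roughly like this (a self-contained sketch; the real `FilterIterator` in `com.ibm.wala.util` may differ in detail):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.function.Predicate;

public class Filtered<T> implements Iterator<T> {
  private final Iterator<? extends T> source; // covariant: accept any subtype source
  private final Predicate<? super T> keep;    // contravariant: accept broader predicates
  private T next;
  private boolean hasNext;

  public Filtered(Iterator<? extends T> source, Predicate<? super T> keep) {
    this.source = source;
    this.keep = keep;
    advance();
  }

  // Scan forward to the next element satisfying the predicate, if any.
  private void advance() {
    hasNext = false;
    while (source.hasNext()) {
      T candidate = source.next();
      if (keep.test(candidate)) {
        next = candidate;
        hasNext = true;
        return;
      }
    }
  }

  @Override
  public boolean hasNext() {
    return hasNext;
  }

  @Override
  public T next() {
    if (!hasNext) {
      throw new NoSuchElementException();
    }
    T result = next;
    advance();
    return result;
  }
}
```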
Julian Dolby assures me that WALA is now supposed to be using Java 8
everywhere. This covers nearly all remaining places that I can find
where an earlier Java version was still being used. (The few
exceptions are places where switching to Java 8 causes test failures.
I'll address those separately, probably by reaching out to the WALA
maintainers for help.)
E-mail exchanged with Julian Dolby suggests that this is the right
thing to do, and that it should have been done back when we converted
other parts of the build configuration to Java 8.
These two modules refer to "AST.JLS8". If you have Java 9 installed,
then "AST.JLS8" is marked as deprecated, and we get a warning unless
we suppress or disable the deprecation warning wherever "AST.JLS8" is
used. However, if you don't have Java 9 installed, then "AST.JLS8" is
not deprecated, and trying to suppress deprecation warnings where
"AST.JLS8" is used instead produces warnings about unnecessary warning
suppression. Aagh! Turning off the deprecation warnings entirely for
these two modules seems like the only sane compromise.
Removing this fixes one Eclipse error diagnostic: "Default encoding (UTF-8)
for library '.' should be removed as the workspace does not specify an
explicit encoding."
This reapplies the fix from ecd1ff72fe,
which was reverted (apparently unintentionally) as part of a larger
group of changes in 8d65788aef.
Near as I can tell, the requests for deprecated versions here are
intentional. The non-deprecated version (AST.JLS9) is the latest and
greatest, but as far as I can tell we really do want the older version
here.
This is similar to 6caecce3e7, though in
that case JLS8 was the non-deprecated latest version and we were still
asking for JLS3.
currentSite should be ThreadLocal, because not having it thread-local
screws up a lot of the instrumentation-based dynamic call graphs we
generate for the shootout benchmarks.
I have also conditionally changed how the string for currentSite is
created, based on the output format that Julian came up with for the
dynamic call graph. This support is necessary because the Java
standard libraries are not instrumented; calls from them would
otherwise appear to come from nowhere in the log that WALA generates
for the dynamic call graph. This fix makes those calls originate from
a fake library BLOB node in the call graph.
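A minimal sketch of the per-thread pattern described here (names are illustrative, not WALA's actual instrumentation fields):

```java
public class CallSiteTracker {
  // One current-site slot per thread: with a plain static field, threads
  // running instrumented code concurrently would overwrite one another's
  // state and corrupt the dynamic call graph log.
  private static final ThreadLocal<String> currentSite =
      ThreadLocal.withInitial(() -> "<unknown>");

  static void enter(String site) {
    currentSite.set(site);
  }

  static String current() {
    return currentSite.get();
  }
}
```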
- adding multi-flow AnalysisOption
- enable/disable special handling of zero-length arrays
Both are required to enable precise analysis of some Dacapo benchmarks when using the Averroes-generated summaries.
This fixes two Eclipse Plug-in Development warnings of the form "Key
'...' is not found in localization properties file:
OSGI-INF/l10n/bundle.properties".
This fixes two Eclipse Plug-in Development warnings of the form "The
'javacProjectSettings' build entry should be set when there are
project specific compiler settings".
This removes three Eclipse Plug-in Development warnings of the form
"This plug-in does not export all of its packages".
I assume that omitting some exports is OK, because apparently nothing
else fails to build against these. If an omitted export were needed
elsewhere, something would fail to build.
This change affects both top-level subdirectory names as well as
Eclipse plug-in feature names. Perhaps it would have been possible to
change only the latter, but I don't like the idea of the two being
different.
These name changes fix three Eclipse plug-in warnings of the form:
Illegal value '...-feature' for attribute 'id'.
Legal token characters are "a-z", "A-Z", "0-9", "_". Tokens
must be separated by "."
I'll be the first to admit that I know nearly nothing about Eclipse
plug-in development. If changing these plug-in feature IDs has
broader implications that the automated regression tests won't detect,
then I probably overlooked them too. I would greatly appreciate
skeptical review of this change by someone who knows Eclipse plug-in
development well.
Note that personal Eclipse workspaces may need some manual adjustment
after this change. The three "...-feature" Eclipse projects should be
removed from the workspace, and the three corresponding "..._feature"
Eclipse projects should be added. If you do your git pull using
Eclipse's team features, perhaps it is smart enough to do this for
you? I don't know, but it wouldn't surprise me if fixing things
manually were still needed even in that case.
Specifically, this will run the "javadoc" goal of the
"maven-javadoc-plugin" and the "plugin-source" goal of the
"tycho-source-plugin" whenever Eclipse does a full build. It will not
run these on incremental builds, though. Maven plugins in general are
usually not designed with incremental execution in mind, so rerunning
them on every incremental build turns out to be too sluggish in
practice.
Previously, M2Eclipse would occasionally notice these two plugins,
realize it didn't know what to do with them, and produce Eclipse error
diagnostics that were difficult to resolve. With this change we are
telling Eclipse's Maven builder to just run the plugins in the natural
way even though M2Eclipse has no special handling built-in for them.
Fixes #198, much to my relief.
I would not bother to fix indentation by itself, but it makes sense to
fix this indentation now in advance of some larger changes coming to
this file soon.
In modern Java, we would not need to do this. Java 1.8 can infer the
generic type, which would allow us to write simply
"Collections.emptySet()" instead of
"Collections.<TypeReference>emptySet()". Unfortunately, Android is
behind the times, so this specific project targets Java 1.5. That
older Java definition does not do the inference we would want here, so
we have to be explicit instead.
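A sketch of the difference, using `String` in place of WALA's `TypeReference` so the example is self-contained:

```java
import java.util.Collections;
import java.util.Set;

public class EmptySetDemo {
  static int count(Set<String> strings) {
    return strings.size();
  }

  static int demo() {
    // Java 8 infers the element type from the argument position, so a
    // plain Collections.emptySet() would compile here.  Under a Java 1.5
    // target, the explicit type witness is required:
    return count(Collections.<String>emptySet());
  }
}
```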
The former returns Set<T> for any T, whereas the latter merely returns
a raw Set. Using the former instead of the latter fixes four Eclipse
warnings about using raw generics.
Julian added this in a recent commit, but has told me that he does not
think it is necessary, and that I am free to remove it if it is
causing trouble. Removing this does indeed fix one Eclipse error
diagnostic: "Default encoding (UTF-8) for library '.' should be
removed as the workspace does not specify an explicit encoding."
In Eclipse projects that currently have no definite or potential
resource leaks, treat any such diagnostics as errors in the future.
In `com.ibm.wala.core`, enable warnings about definite or potential
resource leaks. Previously these diagnostics were turned off entirely
in this project. So we actually end up with more warnings now than we
had before, but they are all warnings we should eventually look into.
These are all problems that Eclipse can detect, but that it detects no
instances of right now. Treating these as warnings instead of errors
should help prevent us from slipping backward in the future.
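For context, this is the kind of pattern those leak diagnostics flag, along with its standard fix (a generic sketch, not WALA code):

```java
import java.io.IOException;
import java.io.StringReader;

public class LeakDemo {
  // Eclipse reports a potential resource leak when a Closeable is opened
  // but not closed on every control-flow path; try-with-resources
  // guarantees closure regardless of how the block exits.
  static int firstChar(String s) {
    try (StringReader reader = new StringReader(s)) {
      return reader.read();
    } catch (IOException e) {
      return -1;
    }
  }
}
```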
This fixes an Eclipse Plug-In Development warning that reads "Source
folder 'dat/' does not have the output folder in corresponding output
entry 'output..'." It is not the default quick fix that Eclipse
suggests, and I am *definitely* not sure that this is the right way to
fix this problem. (I have nearly zero knowledge of Eclipse plug-in
development.)
That being said, this change does make the warning go away, and `mvn
clean install` without `-DskipTests` still succeeds.
In particular, per the corresponding Eclipse warning, "The
`javacProjectSettings` build entry should be set when there are
project specific compiler settings".
In particular, be quiet about being unable to resolve
"org.eclipse.jdt.launching.macosx", which was already marked as
`optional` anyway. I assume that I'm missing this because I'm not
developing on macOS, and that it would be present if I were. Can
someone using macOS confirm this?
Of course, doing nothing isn't always the right behavior. Sometimes a
previously-unhandled value is truly unexpected and one should fail by
throwing an exception. It may not always be clear whether an
exception or doing nothing is the right choice. For some `switch`
statements affected by this commit, I initially guessed that throwing
an exception was the right default behavior, but was proven wrong when
doing so caused WALA regression test failures. That's strong evidence
that the unmatched values were not really unexpected, but merely
should have been handled by doing nothing as before.
Previously each of these `switch` statements would implicitly do
nothing if an unanticipated `enum` value came along. My impression is
that each of these `switch` statements is supposed to be exhaustive,
such that an unexpected (unhandled) value should never appear. If one
does, we should recognize it and complain loudly.
Of course, sometimes the right behavior for previously-unhandled
values is to do nothing. It may not always be clear whether an
exception or doing nothing is the right choice. For this commit,
WALA's regression tests still pass even with the possibility of
throwing an exception for unexpected values. If we assume that the
test suite is thorough, that tells me that throwing an exception is
the right policy for each `switch` statement that I'm changing here.
This `switch` statement currently covers all possible values of the
`enum` it is testing. However, if a new value were introduced in the
future, the `switch` would have been silent about it instead of
printing a debug message as is done in all of the other cases. Better
to print *some* kind of debug in the default case too.
This `switch` statement currently covers all possible values of the
`enum` it is testing. However, if a new value were introduced in the
future, the `switch` would have allowed control-flow to fall through
by default instead of throwing an exception as is done in all of the
other cases. Better to throw *some* kind of exception in the default
case too.
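The shape of that change can be sketched as follows (the enum and the
message are illustrative stand-ins, not WALA's actual code):

```java
enum Opcode { ADD, SUB }

class ExhaustiveSwitchDemo {
  // Every known value is handled explicitly; the default case turns a
  // silently ignored future value into a loud failure, matching the
  // exception-throwing style of the other cases.
  static int apply(Opcode op, int a, int b) {
    switch (op) {
      case ADD:
        return a + b;
      case SUB:
        return a - b;
      default:
        throw new IllegalStateException("unhandled opcode: " + op);
    }
  }
}
```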
Eclipse was warning that these `switch` statements had no `default`
cases. Each did have some default behavior, but implemented outside
the `switch`. By moving the default behavior into a `default` case
within the `switch`, we eliminate a static warning with no change
whatsoever to the run-time behavior.
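A minimal sketch of that refactoring (the enum and strings here are made
up for illustration):

```java
enum Color { RED, GREEN, BLUE }

class DefaultCaseDemo {
  // Before: the fallback code lived after the switch, and Eclipse warned
  // about the missing default case. Moving that code into `default:`
  // changes nothing at run time but satisfies the static check.
  static String describe(Color c) {
    switch (c) {
      case RED:
        return "warm";
      default:
        return "cool"; // previously: unconditional code after the switch
    }
  }
}
```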
Eclipse was warning about the lack of a `default` case in the `switch`
statement. None of the current `enum` values was actually missing
from the `switch`, but if the enum values list were extended in the
future, that would be an easy mistake to make.
Replacing the `switch` with dynamic dispatch to a method lets us
statically enforce the requirement that every enum value must
implement the behavior in some way.
Instead of having two `switch` statements on the Dot format
enumeration, give each Dot enumeration value a way to identify its own
preferred file suffix and command-line format name. This removes some
warnings about `switch` statements without default cases. It also
creates strong static enforcement that any new Dot format *must* also
provide this information.
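The pattern looks roughly like this (DotOutputFormat is an illustrative
stand-in for WALA's actual Dot format enumeration): each constant carries
its own suffix and command-line name, so no switch is needed, and a new
constant that omits these methods fails to compile.

```java
enum DotOutputFormat {
  PDF {
    @Override String suffix() { return "pdf"; }
    @Override String commandLineName() { return "pdf"; }
  },
  SVG {
    @Override String suffix() { return "svg"; }
    @Override String commandLineName() { return "svg"; }
  };

  // Abstract methods force every enum constant to supply both values.
  abstract String suffix();
  abstract String commandLineName();
}
```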
Casting to `Foo<Bar>` results in an unchecked-cast warning due to Java
generics type erasure. However, sometimes we don't really need a
`Foo<Bar>`, but could simply use any `Foo<?>`. Casting to the latter
creates no warning.
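For example (a sketch, not WALA code):

```java
import java.util.List;

class WildcardCastDemo {
  // Casting to List<String> here would trigger an unchecked-cast warning,
  // because erasure makes the element type unverifiable at run time. When
  // we only need the size, casting to List<?> makes no erased-type claim
  // and produces no warning.
  static int sizeOf(Object o) {
    if (o instanceof List) {
      return ((List<?>) o).size();
    }
    return -1;
  }
}
```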
This is the first time I can ever remember explicitly casting to
`Object`. Such a cast might seem useless, and certainly it is
statically known to always succeed. However, it is here for good
reason. Without the cast, we end up making a `Pair<Object, ?>` but
what we really want is a `Pair<Object, Object>`.
Each of these required careful consideration of what the original
developer *intended* as distinguished from what the developer's code
actually *does*. I believe I got each one right, and WALA's
regression tests agree. A second opinion by a core WALA developer
would be welcome, though.
Most of these are harmless, and are best fixed simply by removing the
redundant check or assignment. The one in FlowType.compareBlocks,
however, revealed a real problem. This code checks for nullness of
`a` *after* having called a method on `a`. Assuming that `a` can
indeed be `null` here, the check must come first to avoid a
`NullPointerException`.
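The FlowType.compareBlocks bug in miniature (names here are illustrative;
the original shape dereferenced `a` and only then checked it for null):

```java
class NullCheckOrderDemo {
  // Fixed ordering: test for null before any dereference.
  static int lengthOrZero(String a) {
    if (a == null) {
      return 0;
    }
    return a.length(); // safe: only reached when a != null
  }
}
```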
In several places, I saw code of this form:
    if (thing == null)
      assert thing != null : ... ;
I honestly don't understand the purpose of that `if` statement. Why
not just have the `assert` statement there directly? I removed the
seemingly irrelevant `if` statements in these cases, but if this is
some intentional pattern, please explain it to me.
In a few places where nullness is statically known but non-obvious,
add assert statements to point out what's going on to help future
developers.
Upgrade future such warnings to errors to keep us moving in a cleaner
direction.
The single "No grammar constraints (DTD or XML Schema) referenced in
the document" warning arises in a generated file. I doubt that we can
change the generation process to include grammar information. Even if
we could, I don't mind omitting validation here assuming that we can
trust the generator tool to be correct. Validation is much more
important for human-authored XML; for tool-authored XML, we can live
without it.
We already have plenty of examples of Serializable classes with this
field, and the vast majority of those fields have generated IDs rather
than "1L". From this I infer that using proper serialVersionUID
fields is considered appropriate WALA coding style.
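A sketch of the preferred style (the class name and ID value here are
made up):

```java
import java.io.Serializable;

// An explicit, generated-style serialVersionUID rather than the
// placeholder 1L, matching the dominant WALA convention.
class CacheEntry implements Serializable {
  private static final long serialVersionUID = -4861241507864471524L;
  String key;
}
```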
Also report unused variables as errors in the future, not just
warnings. We've fixed all of these as of right now, so let's keep it
clean in the future too.
One such annotation was unnecessary because the thing it was
suppressing no longer happens. Any future unnecessary warning
suppressions of this kind will now be treated as errors.
The other annotations were unnecessary because the corresponding
warnings have been disabled entirely in the Eclipse projects'
configurations. There seems to be no way to tell Eclipse to treat
these as anything other than "info" diagnostics in the future, so
that's how they will remain.
In general, my approach was to try to eliminate each unused parameter
using Eclipse's "Change Method Signature" refactoring. That did not
always succeed: a parameter may be unused in some base class method,
but then be used in subclass's override of that method. In cases
where refactoring to eliminate a parameter failed, I instead annotated
the parameter with '@SuppressWarnings("unused")' to silence the
warning.
Note: this group of changes creates a significant risk of
incompatibility for third-party WALA code. Some removed parameters
change externally-visible APIs. Furthermore, these changes do not
necessarily lead to Java compilation errors. For example, suppose
third-party code subclasses a WALA class or interface, overrides a
method, but does not annotate that method as @Override. Removing a
parameter means that the third-party method no longer overrides. This
can quietly change code behavior without compile-time errors or
warnings. This is exactly why one should use @Override wherever
possible, but we cannot guarantee that third-party WALA users have
done that.
Unnecessary "throws" declarations tend to cascade. If foo() calls
bar() and bar() falsely declares that it might throw IOException, that
often leads a programmer to declare that foo() might throw IOException
as well. Fixing the bar() throws declaration then reveals that we can
fix the foo() throws declaration too. By the time we reach a fixed
point with cleaning these up, we have removed roughly 320 unnecessary
throws declarations.
In a few cases, this cleanup even lets us remove entire "try
... catch" statements where the only thing being caught was an
exception that we now statically know cannot be thrown. Nice!
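The cascade in miniature (method names are illustrative): once bar()
stops falsely declaring IOException, foo()'s declaration, and any
try/catch around the call, becomes removable too.

```java
class ThrowsCleanupDemo {
  static String bar() {   // was: throws IOException, though it never did
    return "ok";
  }

  static String foo() {   // was: throws IOException, or wrapped in try/catch
    return bar();
  }
}
```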
In Eclipse project configurations, upgrade any future such shenanigans
from warnings to errors. Now that we've fixed this, we don't want it
coming back again.
There is a potential drawback to this change. Conceivably some public
WALA API entry point might have declared that it could throw some
exception merely to reserve the *option* of throwing that exception in
third-party code that subclasses and overrides the API entry point in
question. I have no idea whether this is a significant concern in
practice, though.
These are all things one might consider fixing in real application
data. Java code used as test inputs, though, serves a different
purpose. Weird code is generally acceptable or even intentional.
Previously we had 5 such warnings. That's not very many, but it
suggests that the WALA developers consider this to be an acceptable
coding style. If that's so, then it's better to hide these warnings
rather than keep them around as a perpetual distraction.
Previously we had 227 such warnings. That large number suggests that
the WALA developers consider this to be an acceptable coding style.
If that's so, then it's better to hide these warnings rather than keep
them around as a perpetual distraction.
Previously we had 242 such warnings. That large number suggests that
the WALA developers consider this to be an acceptable coding style.
If that's so, then it's better to hide these warnings rather than keep
them around as a perpetual distraction.
I have *not* upgraded this problem to be treated as an error in the
future. Unfortunately Eclipse uses a single configuration setting for
both unnecessary semicolons and also for empty control-flow statements
like `while (p) ;`. I'm not convinced that it's worth rewriting all
instances of the latter into `while (p) { }`. So this is just going
to stay as a warning for now.
The fix is to add "static" where appropriate, of course. I've also
simplified calls to such methods to reflect the fact that they no
longer need a specific object to call the method on.
In projects that contain test inputs, I've left the non-static
declarations unchanged, and instead downgraded the warning to be
ignored. In all other projects, this warning has been upgraded to an
error.
In "com.ibm.wala.cast.test", the "cbuild.sh" script is gone entirely.
Instead, "pom.xml" directs Maven to run "make" directly. We still
have a "Makefile.configuration" file in this project, but this file is
now independent of where WALA is being built and of where Java is
installed.
In "com.ibm.wala.cast", a small "cbuild.sh" remains, to do some
special processing involving Visual Studio variables under Windows.
When not building under Windows, "cbuild.sh" now simply runs "make".
It may well be possible to hoist the special Windows stuff up into
this subproject's "pom.xml", or to change that "pom.xml" to run
"cbuild.sh" only when on Windows, and to run "make" directly
otherwise. I don't know "pom.xml" stuff very well, though. We still
have a "Makefile.configuration" file in this project, but this file is
now independent of where WALA is being built and of where Java is
installed.
The changes I've made in both "Makefile.configuration" files use GNU
make extensions. I assume that's safe because "Makefile.definitions"
already relies on GNU make.
These were not producing warnings in the Eclipse Oxygen GUI, and also
produced no warnings from Tycho when running Maven tests on my local
machine. However, they did result in errors under Travis-CI. I'm not
sure why this inconsistency exists, but hopefully we have now fixed
these raw-type uses in a way that makes everything happy.
In general, these diagnostics are now errors in projects for which all
such warnings have been fixed. There are three unfixed warnings in
two projects, so this diagnostic remains a warning (not an error) in
those projects.
There are also many places where raw-types-usage warnings have been
locally suppressed using @SuppressWarnings annotations. I haven't
systematically revisited those to see if any can be fixed properly.
So for those projects this diagnostic must also remain a warning (not
an error), since @SuppressWarnings does not work on things Eclipse is
configured to treat as errors.
Along the way, I also converted many "for (;;)" loops into modern
"for (:)" loops. I didn't systematically look for all opportunities
to do this, though. I merely made this change where I was already
converting raw Iterator uses into modern Iterator<...> uses.
Better use of generics also allowed many casts to become statically
redundant. I have removed all such redundant casts.
Only three raw-types warnings remain after this batch of fixes. All
three involve raw uses of CallGraphBuilder. I've tried to fix these
too, but it quickly snowballs into a cascade of changes that may or
may not eventually reach a statically-type-safe fixed point. I may
give these last few problem areas another go in the future. For now,
though, the hundreds of other fixes seem worth keeping even if there
are a few stragglers.
This commit may change some public APIs, but only by making weaker
type signatures stronger by replacing raw types with generic types.
For example, we may change something like "Set" into "Set<String>",
but we're not adding new arguments, changing any
underlying (post-generics-erasure) types, etc.
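A sketch of the kind of signature strengthening described above (the
class and method are illustrative): the erased type of names() is still
java.util.Set, so post-erasure compatibility is preserved.

```java
import java.util.HashSet;
import java.util.Set;

class RawToGenericDemo {
  // before: static Set names()
  static Set<String> names() {
    Set<String> s = new HashSet<>();
    s.add("wala");
    return s;
  }
}
```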
Apparently this code is built using Java 1.6 under Tycho. This leads
to complaints about the Java 1.7+ "<>" generic type inference feature,
if I try to use it. Weirdly, the Eclipse GUI does not complain about
this, so apparently the Eclipse GUI is using Java 1.7 or later. I do
not understand why Tycho and the Eclipse GUI are mismatched in this
way.
There are two such diagnostics: one for collection methods and one for
equals(). See
<https://www.eclipse.org/eclipse/news/4.7/jdt.php#unlikely-argument-types>
for more information about these two new diagnostics.
For each of these diagnostics, I've set the severity level to
"warning" in projects that have some instances of the suspicious code,
or to "error" in projects that have no instances of the suspicious
code.
In regular application code, these warnings should be taken seriously
and fixed. But for test code, it's better to keep things simple and
not add any methods that aren't strictly needed by the test.
Generally, overriding one means you should be overriding the other
too.
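The pairing rule in miniature: overriding equals() without hashCode()
lets "equal" objects disagree on their hash, silently breaking hashed
collections. (The class here is a made-up example.)

```java
import java.util.Objects;

class Point {
  final int x;
  final int y;

  Point(int x, int y) { this.x = x; this.y = y; }

  @Override
  public boolean equals(Object o) {
    return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
  }

  // Must be consistent with equals(): equal points hash identically.
  @Override
  public int hashCode() {
    return Objects.hash(x, y);
  }
}
```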
Also, configure Eclipse to treat any similar cases as errors, rather
than merely warnings.
In the `com.ibm.wala.util` project, configure Eclipse to treat any
future violations of this as errors, not merely warnings.
However, in `com.ibm.wala.cast.java.test.data`, configure Eclipse to
silently ignore missing @Override annotations. The JLex code in this
project is machine-generated, and we don't have a way to get the
generator to produce @Override annotations.
In general, test code may do all sorts of things that would be
considered poor style in production code. I assume that these
potentially-static methods are declared non-static by design.
These should mostly be things that we've already decided earlier that
we explicitly don't want to "fix" because they simply disagree with
the WALA project's coding style.
The additional diagnostics are ones that were previously being
ignored, but which we seem to have been ignoring by default rather
than as a conscious choice.
For diagnostics of which we currently have *zero* instances, treat
these as errors rather than merely warnings. The intent is to
permanently lock out future regressions of things we've completely
fixed. In the future, whenever we fix the last instance of a given
warning in a given Eclipse project, we should also promote that
diagnostic to an error to keep things clean into the future.
The Eclipse IDE shows no such diagnostics, so it would be nice to
treat them as errors if any appear in the future. However, the batch
Tycho-based build ("mvn clean install -DskipTests") does find and
report numerous such violations. This discrepancy is strange; Manu
and I currently don't know why it's happening. We suspect some
Maven-related weirdness may be creating slightly different
environments in the Eclipse GUI versus the command-line-driven build.
* Fix pom.xml to be like other projects
* Check for existence of nodejs download before re-downloading
* Fix Eclipse config files to be like other projects
This fixes one Eclipse "The JRE container on the classpath is not a
perfect match to the 'JavaSE-1.7' execution environment" warning in
the "Plug-in Development" category.
This fixes five Eclipse "Source folder '...' does not have the output
folder in corresponding output entry 'output..'" warnings in the
"Plug-in Development" category.
Previously some of these were accessing such fields through a subclass
of the declaring class. That creates an unnecessary extra inter-class
dependency lower in the type hierarchy than necessary.
Also, suppress this warning in an automated test input where the
indirect static accesses are explicitly intentional.
This test code is intentionally crafted to use instances to access
static methods. Eclipse's recommendation to access those methods
directly is, therefore, counterproductive.
These methods access only static members. They are methods of a final
class, which means no subclass can ever override these methods and use
dynamic dispatch to choose the implementation at run time. So
declaring these methods static is (statically) safe.
The method itself accesses only static members. The fact that it is
final means no subclass can ever override this method and use dynamic
dispatch to choose the implementation at run time. So declaring this
method static is (statically) safe.
If a method is private, there's no risk that a subclass elsewhere
might be overriding it and depending on dynamic dispatch to choose the
right implementation. So all of these private methods can safely be
declared static without risk of regression in either WALA code or
unseen third-party code.
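A minimal sketch of why this is safe (names are illustrative): a private
method is invisible to subclasses, so no dynamic dispatch can ever
select a different implementation, and if it touches only static state,
making it static is behavior-preserving.

```java
class StaticizeDemo {
  private static int counter = 0;

  // Was an instance method; accesses only static members, so it can be
  // declared static with no observable change.
  private static int bump() {
    return ++counter;
  }

  static int bumpTwice() {
    bump();
    return bump();
  }
}
```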
The "potentially" qualifier is here because these methods are visible
outside the WALA source tree. These methods may seem OK to be static
based on the code we have here, but we have no way of knowing whether
third-party code expected to be able to subclass and override. I'm
going to play it safe and assume that we want to allow that.
Note that we are still allowing Eclipse warnings about methods that
can *definitely* be declared static; a different configuration option
controls these. For private methods, final methods, and methods in
final classes, if the code seems static-safe based on what we have
here, then that's good enough: we don't need to worry about
third-party overrides.
* Fixed bug for source analysis
A do-while loop inside a case statement would throw an NPE. Now it doesn't.
* Added test case
The test will fail with an NPE if the fix is not applied.
This fixes one Eclipse "The JRE container on the classpath is not a
perfect match to the 'JavaSE-1.7' execution environment" warning in
the "Plug-in Development" category.
This fixes five Eclipse "Source folder '...' does not have the output
folder in corresponding output entry 'output..'" warnings in the
"Plug-in Development" category.
In certain cases, one may want to ignore inter-procedural control
dependence. Consider the following example:
    flag = getFlagVal();
    if (flag) {
      doStuff();
    }
If we are ignoring interprocedural control dependence, a forward slice
from the first statement will *not* include statements inside doStuff()
and its transitive callees.
This option is useful in scenarios where the effects of statements
inside control-dependent callees can be accounted for via some cheaper
effect analysis. E.g., if you only care about heap effects of control-
dependent callees, you can compute that using mod-ref analysis,
rather than sucking all the control-dependent callee statements into the
slice.
Also added some more detailed comments, a new unit test, and removed
some trailing whitespace.
In all of these cases, the code used to initialize the unused local
seems nontrivial to me. I think this can all be removed without
erasing any needed side effects, but the code might still become
useful to someone in the future. So I'm not really removing the code
entirely, but merely commenting it out.
Previously we were saving these return values into local variables
that were never checked or used in any way. So effectively we were
already discarding these results anyway, but in a manner that produced
Eclipse warnings about unused local variables.
In each of these cases, the constructor directly or indirectly has
side effects that we want to keep, even if the object itself is not
retained and used by the code that invokes `new`.
Using the `xvfb-run` script gives us automatic lifetime management for
the virtual X server: it will start before the `mvn` test script and
will terminate when that test script exits. The `xvfb-run` script
also sets `$DISPLAY` properly in the `mvn` test script, so we no
longer need to manage that setting ourselves.
This fixes 49 Eclipse code style warnings. I'm not sure why these
were overlooked in my previous sweep of missing-@Override warnings.
Ah well; got 'em this time around.
* Fix warnings about unset javacProjectSettings build entries
Specifically, these are all warnings of the form "The
'javacProjectSettings' build entry should be set when there are project
specific compiler settings".
* Add @Override annotations to all methods that do override
This fixes 287 Eclipse code style warnings.
* Cannot add @Override annotations here, so suppress warnings instead
We should be able to add these @Override annotations in the future,
once Eclipse Mars and earlier are no longer supported. For now,
though, they have to go away in order to be compatible with older
Eclipse releases.
* Fix warnings about unset javacProjectSettings build entries
Specifically, these are all warnings of the form "The
'javacProjectSettings' build entry should be set when there are project
specific compiler settings".
* Turn off Eclipse warnings about undocumented empty blocks
The presence of 65 such warnings in two packages suggests that the de
facto WALA coding style does not mandate documenting empty blocks.
Better to avoid printing warnings that will be routinely ignored, so
that other important warnings are more likely to be noticed.
* Turn off Eclipse warnings about synthetic accessor methods
Synthetic accessor methods allow access to otherwise inaccessible
members of enclosing types. The presence of 246 such warnings in
three packages suggests that the de facto WALA coding style does not
consider synthetic accessor methods to be problematic. Better to
avoid printing warnings that will be routinely ignored, so that other
important warnings are more likely to be noticed.
Specifically, these are all warnings of the form "The
'javacProjectSettings' build entry should be set when there are project
specific compiler settings".
* Add missing type parameters to overrides of ModRef.makeModVisitor
* Add missing type parameters to overrides of ModRef.makeRefVisitor
This also required slightly generalizing the generic type signature of
the ModRef.makeRefVisitor return type. Instead of returning
"RefVisitor<T, ExtendedHeapModel>", we now return "RefVisitor<T, ?
extends ExtendedHeapModel>". We need that extra flexibility because
at least one override of this method returns a RefVisitor whose second
generic type parameter is a subclass of ExtendedHeapModel.
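The wildcard generalization in miniature (Model and ExtendedModel are
illustrative stand-ins for ExtendedHeapModel and its subclass; List
stands in for RefVisitor):

```java
import java.util.ArrayList;
import java.util.List;

class Model {}

class ExtendedModel extends Model {}

class VisitorFactoryDemo {
  // Returning List<? extends Model> lets an override return a
  // List<ExtendedModel>, which a plain List<Model> return type
  // would reject.
  static List<? extends Model> make() {
    List<ExtendedModel> l = new ArrayList<>();
    l.add(new ExtendedModel());
    return l;
  }
}
```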
* Use IAnalysisCacheView instead of AnalysisCache. (#1)
It seems that some additional types need to be changed due to
d24519e974. This list may not be exhaustive,
however.
* Increase visibility of several methods and constructors.
Used for creating custom contexts and context selectors.
* Ignore the results directory.
* Some generic type fixes in ModRef.java.
Specifically, we're turning off Eclipse warnings about missing version
constraints on required bundles ("Require-Bundle"), exported
packages ("Export-Package"), and imported packages ("Import-Package").
We're not turning these off absolutely everywhere, though: only in
packages where one or more such warnings were actually being reported.
So if a given package was already providing all version constraints
for, say, package imports, then we've kept that warning on in that
package.
Honestly I don't entirely understand the practical implications of
these warnings. However, there were 355 of them across many WALA
subprojects. I take this as evidence that the WALA developers do not
consider these version constraints to be important. Therefore, we may
as well stop warning about something we have no intention of fixing.
That being said, if we *do* want to fix some or all of these, I
welcome any advice on what those fixes should look like. I am rather
ignorant about all things OSGi.
Often the easiest way to create a desired test scenario is to write
code that would make no sense in a complete, realistic application.
So we generally want to let test code do oddball things.
Along with these fixes, convert several for loops that used explicit
iterators into newer-style for-each loops that hide the iterators and
casts inside the syntactic sugar. Nice!
However, I have not systematically tried to modernize *all* for loops
that could instead be for-each loops. Someone could certainly do that
at some point. In this commit, I only converted loops that I had to
touch anyway because they were using raw "Iterator" types.
Often the easiest way to create a desired test scenario is to write
code that would make no sense in a complete, realistic application.
So we generally want to let test code do oddball things.
Removing an unused field sometimes means removing constructor code
that used to initialize that field. Removing that initialization code
sometimes leaves whole constructor arguments unused. Removing those
unused arguments can leave us with unused code to compute those
arguments in constructors' callers, and so on. This commit tries to
clean all of this up, working backward from the unused fields that an
earlier commit already removed. Hopefully I have avoided removing
upstream code that had other important side effects, but it wouldn't
hurt for a WALA expert to review this change carefully.
If the true block of an `if` statement is guaranteed to exit early,
such as by a `return` or `throw`, then any code appearing in a
corresponding `else` clause could just as well have appeared after the
`if` statement entirely. Eclipse can warn about this.
However, Manu prefers to let such code stay in the `else` clauses.
OK, sure: this is more a matter of personal taste than something truly
problematic. Per Manu's request, then, we're turning off that Eclipse
warning in the subprojects in which it currently arises.
Manu requested that we use this approach instead of adding
`@SuppressWarnings("unused")` at each affected catch block. That
seems reasonable to me, given the large number of such warnings and
the lack of likely harm from ignoring such caught exceptions.
Fixing these Javadoc comments would require adding packages to various
other packages' build paths. In some of the suppressed cases,
changing build paths in that manner would create circular build
dependencies. In other cases, it would simply add a Javadoc-motivated
dependency that does not exist for the real code, which seems
undesirable. For a few cases, the reference seems to be to types in
code we don't even have here, such as code from "android" or
"org.mozilla" packages.
These changes turn off Eclipse warnings for Javadoc comments with
missing tags, such as "@throw" or "@param".
We don't turn this warning off in all projects. Rather, we turn it
off only in projects that were producing at least one such warning.
In other words, if a project was already completely "clean" with
respect to this warning, then we leave this warning enabled for that
project.
Turning off these warnings is a partial declaration of Javadoc
bankruptcy. In an ideal world, we would enable and fix all of these
warnings. However, there are 327 of them. Apparently the WALA team's
implicit coding style says that omitting Javadoc tags is OK. If
there's no intent to systematically add these tags, then we may as
well turn off these warnings so that we can see other warnings that we
may want to fix.
These changes turn off Eclipse warnings for documentable items without
Javadoc comments. In some subprojects, we turn these off entirely.
In others, we leave these warnings on for public items but not for
items whose visibility is protected or below.
We don't turn this warning off in all projects. Rather, we turn it
off only in projects that were producing at least one such warning.
In other words, if a project was already completely "clean" with
respect to this warning, then we leave this warning enabled for that
project.
Turning off these warnings is a partial declaration of Javadoc
bankruptcy. In an ideal world, we would enable and fix all of these
warnings. However, there are 1,366 of them. Apparently the WALA
team's implicit coding style says that omitting Javadoc comments is
OK. If there's no intent to systematically add documentation, then we
may as well turn off these warnings so that we can see other warnings
that we may want to fix.
These arise, for example, when Javadoc documentation on a public class
includes a @link to a private field. I can see how this would be
problematic for closed-source Java code where private items are
invisible to outsiders. However, given that WALA is open source,
nothing is truly non-visible. If the WALA documentation authors
considered non-visible references useful when explaining things,
that's fine with me.
We don't turn this warning off in all projects. Rather, we turn it
off only in projects that were producing at least one such warning.
In other words, if a project was already completely "clean" with
respect to this warning, then we leave this warning enabled for that
project.
These changes turn off Eclipse warnings for Javadoc tags without
descriptions. In some subprojects, we turn these off entirely. In
others, we leave on missing-description checks for "@return" tags only.
We don't turn this warning off in all projects. Rather, we turn it
off only in projects that were producing at least one such warning.
In other words, if a project was already completely "clean" with
respect to this warning, then we leave this warning enabled for that
project.
Turning off these warnings is a partial declaration of Javadoc
bankruptcy. In an ideal world, we would enable and fix all of these
warnings. However, there are 576 of them. Apparently the WALA team's
implicit coding style says that omitting descriptions is OK. If
there's no intent to systematically add descriptions, then we may as
well turn off these warnings so that we can see other warnings that we
may want to fix.
As it turns out, I should have been using "? extends InstanceKey" rather
than "? super InstanceKey". But really, we can just use "?" here since
HeapGraph itself constrains its own type parameter appropriately.
Effectively these two checks could only be false if the instance being
tested were null. So we replace the instanceof checks with null
checks. Sometimes that, in turn, makes other surrounding code
simpler. In the case of ApplicationLoaderFilter.test, for example,
a whole conditional case ("o instanceof LocalPointerKey") becomes
statically impossible. That seems a bit strange to me, but that's
what the code was effectively doing.
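The simplification in miniature (an illustrative sketch, not the actual
ApplicationLoaderFilter code): when the expression's static type already
matches the tested type, `x instanceof T` can only be false for null, so
the check reduces to a null check.

```java
class InstanceofToNullDemo {
  static boolean isUsable(String s) {
    // before: return s instanceof String;  (true for everything but null)
    return s != null;
  }
}
```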
In the cases addressed here, the caught exception was being "handled"
by throwing some new exception. Instead of discarding the old
exception, pass it to the new exception's constructor to indicate the
original cause of the newly-created exception. This practice, called
"exception chaining", can often be useful in debugging.
Note: some of these methods are decidedly nontrivial. Perhaps they
should not actually be removed? If any should be kept around, please
identify them to me. I'll revise this change to retain those methods
and simply annotate them as needed to suppress Eclipse's warning.
Specifically, these warnings are always of the form "Statement
unnecessarily nested within else clause. The corresponding then clause
does not complete normally". I can see this being a matter of
personal taste, actually. If the WALA maintainers decide that they
prefer to keep this code as it was before, that's OK with me: I'll
replace this group of code changes with a group of settings changes
that tell Eclipse to quietly ignore this "problem".
Eclipse warns that the "if" statements' true blocks are dead, and
indeed the conditions being tested here can never be true. It's a
little subtle why that's so, though. Changing them to "assert"
statements removes two warnings about deprecated code, while still
helping human readers understand what invariants must hold here.
The assert will throw AssertionError if it finds a bad argument,
unless the code is running without assertion checking enabled. The
"if" will throw an IllegalArgumentException if it finds the same bad
argument, and that check is always enabled. Better to be consistent
in what exception is used to report a bad argument. Since the "if" is
always active, let's go with that and remove the redundant,
not-always-active assert.
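The resulting style looks like this sketch (method and message are illustrative):

```java
// Sketch of an always-active argument check. Unlike an assert, the "if"
// runs even when the JVM is started without -ea.
public class ArgCheckExample {
    public static int half(int n) {
        if (n % 2 != 0) {
            throw new IllegalArgumentException("argument must be even: " + n);
        }
        return n / 2;
    }
}
```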
We used to use junit.framework.Assert for test assertions, and that
class reports failures by throwing
junit.framework.AssertionFailedError. However, junit.framework.Assert
is obsoleted by org.junit.Assert, and we've already replaced the
former with the latter. org.junit.Assert reports failures by throwing
java.lang.AssertionError, so that's what we need to expect in our
testing infrastructure.
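The difference matters to any infrastructure that deliberately catches test failures. This sketch avoids a JUnit dependency by throwing java.lang.AssertionError directly, which is the class org.junit.Assert uses to report failures:

```java
// Sketch: failure-expecting test infrastructure must now catch
// java.lang.AssertionError, not junit.framework.AssertionFailedError.
public class ExpectedFailureExample {
    public static boolean failsAsExpected(Runnable test) {
        try {
            test.run();
            return false;                // test unexpectedly passed
        } catch (AssertionError e) {     // what org.junit.Assert throws
            return true;
        }
    }
}
```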
All of these involve conditionals that check some static, final debug
flag or debug level. The code will indeed be dead if WALA is built
with those debug facilities turned off. But we still want the code
present in case someone needs to turn some aspect of debugging on for
a while.
The supposedly deprecated function (CGNode.addTarget) really seems to
be intended for internal use, not deprecated per se. We are an
internal user here, so presumably it's OK for us to be using this API
entry point.
Near as I can tell, the request for a deprecated version here is
intentional. The non-deprecated version (AST.JLS8) is the latest and
greatest, but as far as I can tell we really do want the older version
here.
This should make it easier for newcomers to get WALA up and running
from sources. One no longer needs to manually find "dx.jar" using
instructions given at
<http://wala.sourceforge.net/wiki/index.php/UserGuide:Getting_Started#Building_the_code>
or
<https://groups.google.com/forum/#!msg/wala-sourceforge-net/cBYsfEvYVG0/Ua52dyQQU-YJ>.
Those instructions were already out-of-date, anyway.
Given that an official release of "dx.jar" is available in Maven, it
would be nice to have Maven itself manage that dependency.
Unfortunately, it seems that WALA in general does not use Maven for
dependency management. As far as I can tell, the Maven configuration
just calls out to Ant for doing builds, including having Ant download
other needed components. If WALA ever moved to using Maven more
fully, downloading "dx.jar" could easily come along with that.
This arises at some point during "mvn compile install". I'm not sure
exactly when, but it's definitely automated, and therefore not
appropriate to track in git.
This fixes the remaining 34 Eclipse "Resource '...' should be managed
by try-with-resource" warnings that were still left after the previous
commit.
Unlike the fixes in that previous commit, the changes here are *not*
plugging potential resource leaks. However, in many cases that is
simply because the code before the close() call cannot currently throw
exceptions. If exceptions became possible in the future, leaks could
result. Using try-with-resource preemptively avoids that.
Furthermore, in code that was already dealing with exceptions, the
try-with-resource style is usually considerably simpler.
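The simplification looks roughly like this sketch (reading from a string rather than a real file, to keep it self-contained):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Sketch of the try-with-resources style: the reader is closed
// automatically on both normal and exceptional exit, with no
// explicit finally block.
public class TwrExample {
    public static String firstLine(String text) throws IOException {
        try (BufferedReader r = new BufferedReader(new StringReader(text))) {
            return r.readLine();
        } // r.close() happens here, even if readLine() threw
    }
}
```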
This fixes 33 out of 37 Eclipse "Potential resource leak: '...' may
not be closed" warnings. It also fixes 3 out of 37 Eclipse "Resource
'...' should be managed by try-with-resource" warnings, although that
was not the main focus of this effort.
The remaining 4 warnings about potential resource leaks all involve a
leaked JarFile instance that is passed to a JarFileModule constructor
call. JarFileModule never attempts to close its underlying JarFile;
this code is written as though JarFile cleanup were the caller's
responsibility. However, the JarFile often cannot be closed by the
code that creates the JarFileModule either, since the JarFile needs to
remain open while the JarFileModule is in use, and some of these
JarFileModules stay around beyond the lifetime of the code that
created them. Truly fixing this would essentially require making
JarFileModule implement Closeable, which in turn would probably
require that Module implement Closeable, which in turn would require
changes to lots of code that deals with Module instances to arrange
for them to be properly closed. That's more invasive than I'm
prepared to take on right now.
Instead, rely on Java's ability to infer type parameters in many
contexts. This removes 665 Eclipse warnings.
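The change is the familiar diamond-operator cleanup, as in this sketch:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the redundant-type-argument cleanup: since Java 7 the
// compiler infers constructor type arguments from the declared type.
public class DiamondExample {
    // Before: return new HashMap<String, List<Integer>>();
    // After:  the diamond operator <> lets inference fill them in.
    public static Map<String, List<Integer>> makeIndex() {
        return new HashMap<>();
    }

    public static List<Integer> makeList() {
        return new ArrayList<>();
    }
}
```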
Note: a few of these changes are to files under "test" subdirectories.
Presumably those are files that serve as test inputs rather than being
part of WALA code proper. As far as I can tell, these changes do not
break any WALA tests. But if any of those tests were specifically
intended to exercise WALA on code with non-inferred generic type
parameters, then I really should be leaving those alone.
Since SSAPhiInstructions are never visited by NullPointerTransferFunctionProvider.TransferFunctionSSAVisitor,
we now respect phi instructions present at a given block by providing additional NodeTransferFunctions, improving precision.
Formerly, meets would lead to incorrect results due to incorrect initialization of initial data flow facts.
These are now properly initialized, interpreting
"State.BOTH" to mean that both "null" and "non-null" are possible
values for the given variable, and "State.UNKNOWN" to mean the absurd
assertion that no value is possible yet.
The initial fact at the entry block assumes all variables to be BOTH;
other blocks are initially assumed unreachable, and hence their
variables to be UNKNOWN.
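A minimal sketch of the lattice described above; the enum and meet operator are illustrative, not WALA's actual implementation:

```java
// Illustrative lattice: UNKNOWN is the "absurd" bottom element
// (block not yet known reachable), BOTH the top.
enum NullState {
    UNKNOWN,
    NULL,
    NOT_NULL,
    BOTH;

    // Meet of facts flowing into a block: UNKNOWN is the identity
    // element, so a not-yet-reached predecessor no longer corrupts
    // the result of the meet.
    static NullState meet(NullState a, NullState b) {
        if (a == UNKNOWN) return b;
        if (b == UNKNOWN) return a;
        return (a == b) ? a : BOTH;
    }
}
```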
Dalvik bytecode represents 'null' as 0 which may lead to problems
in phi instructions. This variant of TypeInference fixes these
problems by
- tracking whether an SSA value is constant zero and
- ignoring constant zeros in the transfer function of phi instructions
when meeting with non-primitive types
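A heavily simplified sketch of that phi transfer function; real WALA works on TypeAbstraction values, and here plain strings stand in for types:

```java
// Simplified sketch: meet the incoming phi operand types, skipping
// operands known to be the constant 0, which in Dalvik bytecode may
// simply encode 'null'.
public class DalvikPhiSketch {
    public static String meetPhi(String[] types, boolean[] isConstantZero) {
        String result = null;
        for (int i = 0; i < types.length; i++) {
            if (isConstantZero[i]) continue;      // ignore possible nulls
            if (result == null || result.equals(types[i])) {
                result = types[i];
            } else {
                result = "java.lang.Object";      // conservative join
            }
        }
        return result;
    }
}
```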
The setting should comply with the comment. Plus,
turning it on seems to lead to some unsoundness because
exception points-to sets become empty but should not be.
Access is provided via corresponding methods in FieldImpl, ShrikeCTMethod and ShrikeClass.
Since we do not currently have implementations of these methods for front-ends other than Shrike, these new methods are not yet made available in the corresponding interfaces.
Previously, the various Eclipse projects' Java configurations used
mixtures of 1.6, 1.7, and 1.8. Many were internally inconsistent,
such as requiring 1.7 in "MANIFEST.MF" but 1.6 in the Eclipse JDT
build preferences. The Travis-CI configuration tests against both 1.7
and 1.8, but does not test against 1.6.
Across all projects, the most common version was 1.7. So I'm going to
assume that 1.7 is the intended build target. This commit makes 1.7
the selected version nearly everywhere.
"com.ibm.wala.core.testdata" is the one exception. This specific
project uses a few features only found in 1.8, such as lambda
expressions. Previously, "com.ibm.wala.core.testdata" used 1.7 in
some aspects of its configuration but 1.8 in others. Now it
consistently targets 1.8. I wish this one project didn't need to be
inconsistent with the rest of WALA, but at least now it's consistent
with itself.
(Personally, I'd be happy to target 1.8 only. But my impression
across all of these configuration files is that the WALA developers
still want to be compatible with 1.7. If that is no longer a
requirement, let me know and I will adjust these changes accordingly
to target 1.8 only.)
This change eliminates 11 "There is no 'jre.compilation.profile' build
entry and the project has Java compliance preferences set" warnings
and 13 "The JRE container on the classpath is not a perfect match to
the 'JavaSE-1.7' execution environment" warnings. However, it also
adds 450 "Redundant specification of type arguments <...>" warnings
and 17 "Resource '...' should be managed by try-with-resource"
warnings. So this seems like a net step backward in my wish to reduce
WALA warnings. However, those new warnings concern Java 1.7 language
features that we were not previously using to good effect in projects
that targeted 1.6. If we all agree that we can now target 1.7
instead, then we can use these helpful features as the newly-added
warnings suggest. So I call that a step in the right direction.
Some source files here definitely use Hamcrest, so listing it as a
dependency seems reasonable. What I find confusing is the inconsistency
among my Eclipse installations. On some of my various machines, Eclipse
reports an error if this dependency is not listed. On others, Eclipse
finds the required jar and reports no error, even if this dependency is
not listed. I don't know why the latter works, or why the inconsistency
exists at all. Eclipse is a complex, subtle beast. What I can say is
that this change fixes the error for my Eclipses that were reporting an
error, and does not introduce any new errors for my Eclipses that were
already happy before this change.
When Maven generates these "*/target/antrun/build-main.xml" Ant build
scripts, it does not include any DTD or XML Schema declarations.
Eclipse's XML validator warns about the lack of grammar constraints.
The warning is sensible, but we are not in a position to do anything
about it. Better, therefore, to suppress these warnings so that we
can more-clearly see warnings we *can* address.
We actually know the full grammar for these files: it is documented at
<http://help.eclipse.org/kepler/index.jsp?topic=%2Forg.eclipse.platform.doc.isv%2Freference%2Fmisc%2Fplugin_manifest.html>.
We ought to be able to extract that DTD into a file and give each
"plugin.xml" a "<!DOCTYPE plugin SYSTEM ...>" declaration referencing
it. Unfortunately, that leads to a new warning: "External entity
resolution is not supported by PDE." So a stub declaration is the
best we can do. Fortunately, Eclipse's structured editor seems to
preserve these once we add them by hand.
Some of these might have proper DTDs or XML Schema definitions
floating around somewhere that we could use. Presumably many do not.
Rather than hand-craft such definitions myself, I'm just giving each a
minimal stub DOCTYPE declaration. That's enough to satisfy Eclipse's
XML validator, which otherwise complains that these files lack grammar
constraints.
I think the "target/p2artifacts.xml" and "target/p2content.xml" files
are generated by Maven. They are well-formed XML but Eclipse's XML
validator legitimately warns that they lack grammar constraints.
Since we're not maintaining the tool that creates these files, we are
not in a position to do anything about that. Therefore, we may as
well exclude these from validation entirely. That way we can
more-clearly recognize warnings that we *can* do something about.
In theory, these files could be checked against the DTD given at
<http://help.eclipse.org/neon/index.jsp?topic=%2Forg.eclipse.platform.doc.isv%2Freference%2Fmisc%2Ffeature_manifest.html>.
In practice, Eclipse's structured editor for this file type discards
any "<!DOCTYPE ...>" declarations one might manually add. So there's
no convenient way to tell Eclipse's XML validator what grammar to use.
Fortunately, that same structured editor makes it unlikely that anyone
will introduce invalid content. So I don't mind excluding these files
from XML validation entirely.
Eclipse's XML validator warns about missing grammar constraints in
several XML files that come from non-WALA projects. We are not in a
position to do anything about these problems.
Ant "build.xml" files don't have a standard DTD or XML Schema; the
contents are simply too flexible for that. But we can at least
give each a stub DOCTYPE declaration. That's enough to satisfy
Eclipse's XML validator, which otherwise complains that these files
lack grammar constraints.
As created by Tycho Surefire, these files are XML documents without
DTD or XML Schema declarations. The XML validator warns about this
omission. However, Surefire is not a WALA component. We are not in a
suitable position to change it to include XML schema or DTD
declarations in the XML files it generates. Better, then, to ignore
this benign problem so we can focus on warnings that we can act on
directly.
Most of the invalid HTML arose from bare "<" and ">" characters.
These should be escaped as "&lt;" and "&gt;" when not intended to
introduce HTML tags. When you have many such characters close
together, "{@literal ...}" is a nice, readable alternative that
automatically escapes its contents. If the text in question is
intended to be a code fragment, then "{@code ...}" is appropriate:
this is essentially equivalent to "<code>{@literal ...}</code>".
There were a few other HTML violations too, but none common enough to
be worth detailing here.
The contents of @author go straight into HTML, just like most other
Javadoc material. So if you want to have a "<foo@bar.com>" e-mail
address as part of the author information, the angle brackets must be
escaped. Here I've opted to do that using "{@code <foo@bar.com>}",
which has some additional styling effects that seem appropriate for
e-mail addresses. We could also have used "&lt;foo@bar.com&gt;" for
escaping without code styling.
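The options above can be seen side by side in this illustrative Javadoc (the class, name, and address are made up):

```java
/**
 * Sketch of the escaping options (illustrative class, not WALA code).
 *
 * <p>Bare angle brackets must be written as &lt; and &gt;, e.g. "a &lt; b".
 * A code fragment such as {@code List<String>} needs no manual escaping,
 * and {@literal a < b < c} escapes its contents without code styling.
 *
 * @author Jane Doe {@code <jane@example.com>}
 */
public class JavadocEscaping {
}
```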
It's unclear whether the original authors of these pages intended them
to be valid or invalid. Certainly there is merit in testing against
invalid HTML, since the vast majority of real-world HTML is indeed
invalid. I'm going to assume that any errors in this collection of
test inputs are intentional, and therefore not worth reporting when
running Eclipse HTML validation.
Plugin documentation includes plenty of invalid HTML. However, we
don't maintain these files, so we are not in a position to fix them.
Better, therefore, to suppress these warnings so that we can notice
and fix other problems over which we do have control.
It's unclear whether the original authors of these pages intended them
to be valid or invalid. Certainly there is merit in testing against
invalid HTML, since the vast majority of real-world HTML is indeed
invalid. I'm going to assume that any errors in this collection of
test inputs are intentional, and therefore not worth reporting when
running Eclipse HTML validation.
Eclipse validation warns about invalid HTML content in all
Maven-generated "target/site/dependency-convergence.html" files. The
warnings are legitimate: these HTML files are indeed invalid.
However, we don't maintain the tool that generates these files, so we
are not in a position to fix them. Better, therefore, to suppress
these warnings so that we can notice and fix other problems over which
we do have control.
If a client violates these restrictions, I prefer that their code fail
at compile time instead of run time. Changing a few key types from
`Class` to `Class<? extends AbstractAndroidModel>` gives us precisely
the static enforcement we need and lets us remove an
`AbstractAndroidModel.class.isAssignableFrom` run-time check.
However, this does change the public API of `AndroidEntryPointManager`
in two ways. The `getModelBehavior` and `setModelBehavior` methods now
respectively accept and return `Class<? extends AbstractAndroidModel>`
instead of `Class`. Is tightening up a public API in this manner
considered OK?
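This sketch shows the shape of the tightened API; AbstractAndroidModel and the manager body are stand-ins for the real WALA types:

```java
// Sketch of the bounded-wildcard signatures described above.
public class WildcardExample {
    static abstract class AbstractAndroidModel { }
    static class LoopAndroidModel extends AbstractAndroidModel { }

    static class Manager {
        private Class<? extends AbstractAndroidModel> modelBehavior =
                LoopAndroidModel.class;

        // Passing e.g. String.class here is now a compile-time error,
        // replacing the old isAssignableFrom run-time check.
        void setModelBehavior(Class<? extends AbstractAndroidModel> clazz) {
            this.modelBehavior = clazz;
        }

        Class<? extends AbstractAndroidModel> getModelBehavior() {
            return modelBehavior;
        }
    }
}
```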
This fixes one Eclipse "no valid properties files exist in the
localization directory specified" warning.
Ordinarily I would also change this project's configuration to treat
this warning as an error in the future. That's a good way to
discourage regressions. Unfortunately in this particular case I
cannot find a setting that has the desired effect, even after hunting
around in Eclipse PDE sources. It seemed that setting
"compilers.p.unknown-resource=0" in
"com.ibm.wala.util/.settings/org.eclipse.pde.prefs" should do the
trick, but it does not. I don't know why.
This resolves one Eclipse "'...' build entry is missing" warning.
Also update the project configuration to treat this warning as an
error. This should discourage commits that create new instances of
this sort of problem in the future.
This resolves one "'...' is not a source folder" Eclipse warning.
Also update the project configuration to treat this warning as an
error. This should discourage commits that create new instances of
this sort of problem in the future.
Other subdirectories' "build.properties" generally seem to include this
already, so who am I to argue?
This resolves one "An entry for META-INF/ is required in bin.includes"
Eclipse warning.
Also update the project configuration to treat this warning as an
error. This should discourage commits that create new instances of
this sort of problem in the future.
In general, the WALA code base is not really ready for nullness
checking. It would be nice if we got there some day, but I'm not
planning to take that on now or any time soon. Until then, it's not
useful to warn about missing @NonNullByDefault declarations on WALA
packages.
See also older commit 7b6811b.
A subclass of TabulationSolver can now override the methods
newNormalExplodedEdge(), newCallExplodedEdge(), and
newReturnExplodedEdge() to take some action whenever (logically)
some edge in the exploded supergraph is "discovered" during
tabulation.
These methods were constructing an IR based on some default
AnalysisOptions, which may not match the options used when constructing
the underlying CallGraph. This mismatch can lead to bad bugs.
Instead of these methods, analyses should get IR directly from the
CGNodes via CGNode.getIR().
Ideally we would fix the methods and not change the interface, but
that would require knowing the right AnalysisOptions, which itself
would necessitate an interface change.
This interface method was added in Eclipse Neon, so we need to implement
it here to have a non-abstract class. With this fix, WALA now builds
with no errors inside Eclipse Neon. (There are still a few thousand
warnings, but that's nothing new.)
Eclipse Mars Service Release 2 finds 45 potential null pointer accesses
across WALA's various Eclipse projects. Eclipse ignores these by
default, but any individual user may have changed their personal Eclipse
configuration to treat them as warnings or errors. Thus, some people
will find that the code builds while others find that it fails. Better
to explicitly use a known-good configuration.
In the long run someone should inspect these cases one-by-one and fix
them where appropriate. But that is probably better managed as part of a
larger effort to tidy up nulls in WALA. I'm not planning to take that on
now or any time soon, though, so this is a better setup for now.
The new project only depends on the minimal Eclipse plugins to make the
ECJ frontend work. Hence, it should be amenable to creating a Maven
Central jar.
Our script for pulling the Polyglot source from Google Code no longer
works. We may remove Polyglot support itself pretty soon, as it
only supports old Java versions.
PrunedCFG had been changed to always include an entry and exit node.
The logic for detecting an "empty" ExceptionPrunedCFG inside the PDG
construction code had not been updated appropriately.
Before, fixed-point systems containing BasicNullaryStatements
caused crashes when trying to output them, because the toString()
method of AbstractStatement assumes that there is a right-hand side,
but BasicNullaryStatements throw an UnsupportedOperationException
when asked for one.
Why shouldn't BasicNullaryStatements have a string representation?
This way, the rest of the outer loop (including the jump to the
beginning) is also reached in the control flow, which means that
the outer loop is modelled as a proper loop.
a) serializable added for use by Android services
b) test classes refactored to allow Android variants to use JUnit 3
2) shrike instrumentation now uses java.lang.instrument
a) refactoring
b) online variants of call graph tracing
The former will include the contents of the array, while the latter
only includes the object's identity.
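The commit text does not name the two calls; assuming it contrasts java.util.Arrays.toString with the Object.toString that arrays inherit, the difference looks like this:

```java
import java.util.Arrays;

// Sketch: Arrays.toString prints the elements, while an array's
// inherited Object.toString prints only a type tag and identity hash.
public class ArrayToStringExample {
    public static String contents(int[] a) {
        return Arrays.toString(a);   // e.g. "[1, 2, 3]"
    }

    public static String identity(int[] a) {
        return a.toString();         // e.g. "[I@1b6d3586"
    }
}
```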
This will allow WALA to be compiled using Google's error-prone compiler
(https://github.com/google/error-prone).
The code now works for me using e44 with Maven.
Dalvik tests refactored for the mobile version with the Android dev tools.
Eclipse metadata for the IDE tests fixed to make e44 work for me.
New Android entrypoint to fix a failure in new DroidBench tests.
com.ibm.wala.cast.js.rhino.test/harness-src/com/ibm/wala/cast/js/test/TestPrototypeCallGraphShapeRhino.java
com.ibm.wala.cast.js.test/harness-src/com/ibm/wala/cast/js/test/TestPrototypeCallGraphShape.java
com.ibm.wala.cast.js.test.data/examples-src/pages/prototype.html
work (not yet finished) on fixes to property accesses for JavaScript:
com.ibm.wala.cast/source/java/com/ibm/wala/cast/ipa/callgraph/AstSSAPropagationCallGraphBuilder.java
com.ibm.wala.cast.java/src/com/ibm/wala/cast/java/ipa/callgraph/AstJavaSSAPropagationCallGraphBuilder.java
com.ibm.wala.cast.js/source/com/ibm/wala/cast/js/ipa/callgraph/JSSSAPropagationCallGraphBuilder.java
currently unused tests to remind me to fix bugs:
com.ibm.wala.cast.js.test/harness-src/com/ibm/wala/cast/js/test/TestSimpleCallGraphShape.java
com.ibm.wala.cast.js.test.data/examples-src/tests/loops.js
com.ibm.wala.cast.js.test.data/examples-src/tests/primitive_strings.js
fixes to exception handler code generation in JavaScript:
com.ibm.wala.cast.js.rhino/source/com/ibm/wala/cast/js/translator/RhinoToAstTranslator.java
com.ibm.wala.cast.js.test.data/examples-src/tests/try.js
com.ibm.wala.cast.js.test/harness-src/com/ibm/wala/cast/js/test/TestSimpleCallGraphShape.java
fixes to make the system build on both juno and luna
com.ibm.wala.cast.js.test.data/pom.xml
pom.xml
targets/e42/e42.target
targets/e44/e44.target
targets/pom.xml
com.ibm.wala.core.tests/META-INF/MANIFEST.MF
com.ibm.wala.dalvik.test/META-INF/MANIFEST.MF
com.ibm.wala.ide.jdt.test/META-INF/MANIFEST.MF
com.ibm.wala.ide.jdt/source/com/ibm/wala/cast/java/translator/jdt/FakeExceptionTypeBinding.java
com.ibm.wala.ide.jdt/source/com/ibm/wala/ide/util/JavaEclipseProjectPath.java
com.ibm.wala.ide.jsdt.tests/META-INF/MANIFEST.MF
com.ibm.wala.ide.jsdt.tests/src/com/ibm/wala/ide/jsdt/tests/AbstractJSProjectScopeTest.java
com.ibm.wala.ide/src/com/ibm/wala/ide/util/EclipseProjectPath.java
com.ibm.wala.ide/src/com/ibm/wala/ide/util/ProgressMonitorDelegate.java
beginnings of "pointer analysis" on top of field-based analysis
com.ibm.wala.cast.js/source/com/ibm/wala/cast/js/callgraph/fieldbased/flowgraph/FlowGraph.java
com.ibm.wala.cast.js/source/com/ibm/wala/cast/js/callgraph/fieldbased/flowgraph/vertices/PropVertex.java
com.ibm.wala.cast.js/source/com/ibm/wala/cast/js/callgraph/fieldbased/flowgraph/vertices/RetVertex.java
com.ibm.wala.cast.js/source/com/ibm/wala/cast/js/callgraph/fieldbased/flowgraph/vertices/VarVertex.java
com.ibm.wala.cast.js/source/com/ibm/wala/cast/js/callgraph/fieldbased/flowgraph/vertices/VertexFactory.java
com.ibm.wala.core/src/com/ibm/wala/ipa/callgraph/propagation/PointerAnalysis.java
com.ibm.wala.core/src/com/ibm/wala/ipa/callgraph/propagation/cfa/ExceptionReturnValueKey.java
fixes for crashes in correlation tracking
com.ibm.wala.cast.js/source/com/ibm/wala/cast/js/ipa/callgraph/correlations/extraction/ClosureExtractor.java
fixes for Dalvik IR generation
com.ibm.wala.core/src/com/ibm/wala/cfg/BytecodeCFG.java
com.ibm.wala.core/src/com/ibm/wala/cfg/ShrikeCFG.java
com.ibm.wala.core/src/com/ibm/wala/ssa/SSACFG.java
com.ibm.wala.dalvik.test/source/com/ibm/wala/dalvik/drivers/APKCallGraphDriver.java
com.ibm.wala.dalvik.test/source/com/ibm/wala/dalvik/test/callGraph/JVMLDalvikComparison.java
com.ibm.wala.dalvik/src/com/ibm/wala/dalvik/classLoader/DexCFG.java
com.ibm.wala.dalvik/src/com/ibm/wala/dalvik/dex/instructions/UnaryOperation.java
com.ibm.wala.dalvik/src/com/ibm/wala/dalvik/ssa/AbstractIntRegisterMachine.java
com.ibm.wala.dalvik/src/com/ibm/wala/dalvik/ssa/DexSSABuilder.java
fixes to stack map generation when instrumenting for Java 7
com.ibm.wala.shrike/src/com/ibm/wala/shrike/cg/DynamicCallGraph.java
com.ibm.wala.shrike/src/com/ibm/wala/shrikeBT/ConstantInstruction.java
com.ibm.wala.shrike/src/com/ibm/wala/shrikeBT/analysis/Analyzer.java
com.ibm.wala.shrike/src/com/ibm/wala/shrikeBT/analysis/ClassHierarchy.java
com.ibm.wala.shrike/src/com/ibm/wala/shrikeBT/analysis/Verifier.java
com.ibm.wala.shrike/src/com/ibm/wala/shrikeBT/shrikeCT/ClassInstrumenter.java
com.ibm.wala.shrike/src/com/ibm/wala/shrikeCT/StackMapConstants.java
com.ibm.wala.shrike/src/com/ibm/wala/shrikeCT/StackMapTableReader.java
com.ibm.wala.shrike/src/com/ibm/wala/shrikeCT/StackMapTableWriter.java
analysis now understands and propagates MethodHandle objects
fixes to Shrike InvokeDynamic instruction
2) make instrumentor preserve the names of jar entries and classes as
they are input, rather than recomputing class names when writing the
output jar. This usually makes no difference, but can preserve broken
structures when the input jar file has mismatches between class names
and its entry names.
Changing SSAConditionalInstruction.isObjectComparison(): the previous definition returned true for comparisons of Primordial-scope objects, but false for Application-scope objects. The updated version returns true in both cases.
In TypeInference when merging a PointType with a ConeType it is safe to return the cone type if
the underlying types are the same. Previously, if an array cone type and array point type were
merged this would result in a java.lang.Object even when the two arrays had the same base type.
Checking for equality first may also save a few cycles for reference types since the
isSubclass check is no longer performed for identical types.
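A schematic sketch of the merge rule; plain strings stand in for the underlying IClass of a ConeType and a PointType, and the fallback is simplified:

```java
// Schematic sketch only: check equality of the underlying types first,
// so identical types (e.g. the same array type) keep the cone type and
// skip the more expensive subclass test entirely.
public class ConePointMergeSketch {
    public static String merge(String coneUnderlying, String pointUnderlying) {
        if (coneUnderlying.equals(pointUnderlying)) {
            return "cone(" + coneUnderlying + ")";
        }
        // Conservative fallback in this sketch.
        return "cone(java.lang.Object)";
    }
}
```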
Now there are four structural models:
* SequentialAndroidModel: No loops
* SingleStartAndroidModel: User Interaction on a single component
* LoopAndroidModel: Stuff goes into background and comes back
* LoopKillAndroidModel: Restart of components due to low memory
Code often sets the action of an Intent after its constructor. Until
now a call to such a setter caused the Intent to become "unbound"
(conservative).
This approach allows setting the target once for each Intent; only on
the second call does the Intent become unbound.
This new variant could be dangerous: setting the target in a branch of
execution may be invalid. This should be detected, but no guarantees!
Methods in question are:
* Intent.setAction
* Intent.setComponent
* Intent.setClass
* Intent.setClassName
Before, the Intent would only have been set by calling .attach (if
doBootSequence is enabled). Attach is only called when the model is
filled with instructions.
This new variant calls setIntent when creating the wrapper for the model
(getMethodAs). This is much better!
* Create IntentContext even if no info available.
This is necessary to also track the start of unknown targets.
* Invalidate the target of an Intent upon a call to Intent.setAction
or Intent.fillIn
Additionally tidied up the classes a bit.
The SystemServiceModel creates and returns a new Instance of the
requested Service (if known).
TODO: We should use a single "global" instance per service instead.
Building the CFG with an SSAGotoInstruction was buggy: often the wrong
jump-target was selected. This has been fixed.
Additionally, InducedCFG now automatically breaks the basic block at
the jump-target.
Jumping to Phi-Instructions however is still unsupported (as they are
not part of the cfg-instructions)
There was (and is) the possibility of an endless loop when looking up
the targets of Intents. Added an evil hack to at least prevent the most
obvious one.
The analysis has problems with the IntentSender class: an invalid
DefUse and an Init-Call.
The Intent-Starters, that use IntentSender have been made optional.
However disabling them does not help.
Currently the only solution to this problem is adding IntentSender to
the exclusions.txt
Setting the Instantiation-Behavior of Package:
Landroid/support/v4/view
To REUSE causes the following Endless-Recursion in JoDroid:
interproc: computing local killing defintions... done
Building utility edges Exception in thread "main" java.lang.StackOverflowError
at java.util.HashMap.put(HashMap.java:389)
at org.jgrapht.graph.AbstractBaseGraph.addEdge(Unknown Source)
at edu.kit.joana.util.graph.AbstractJoanaGraph.addEdge(AbstractJoanaGraph.java:50)
at edu.kit.joana.wala.core.DependenceGraph.addEdge(DependenceGraph.java:76)
at edu.kit.joana.wala.core.joana.JoanaConverter$UtilityEdgeWalker.discover(JoanaConverter.java:344)
at edu.kit.joana.wala.core.joana.JoanaConverter$UtilityEdgeWalker.discover(JoanaConverter.java:325)
at edu.kit.joana.wala.core.graphs.GraphWalker.dfs(GraphWalker.java:55)
at edu.kit.joana.wala.core.graphs.GraphWalker.dfs(GraphWalker.java:63)
at edu.kit.joana.wala.core.graphs.GraphWalker.dfs(GraphWalker.java:63)
at edu.kit.joana.wala.core.graphs.GraphWalker.dfs(GraphWalker.java:63)
...
As this package contains system-provided Android components, it should however be marked REUSE in order to be able to read back values when these types are used.
AndroidPreFlightChecks provides some checkups on the settings before
building the lifecycle.
The checks have to be invoked explicitly and issue warnings in the log.
Governed by AndroidEntryPointManager.setDoBootSequence, generate a
coarse Android environment before starting the lifecycle model.
Some action is taken to attach the Android-Context to Components - much
is still missing there though.
This context is mainly useful when starting Intents of an external App.
Whenever an Intent is encountered while building the CallGraph a
WALA-Context is generated for it. If a new Android-Component is started
and the Context is sufficient the start-function will call a new type of
model, the MicroModel.
The MicroModel resembles the lifecycle of a single Android component.
Whenever the start of an intent is encountered the start-function is
replaced by a call to UnknownTargetModel.
UnknownTargetModel will call a restricted android-model (i.e. one that calls only all
Activities). This restricted model is known as MiniModel.
It will also call ExternalModel which does nothing special.
Generate a synthetic AndroidModel in AndroidModelClass.
The model will contain Android EntryPoints in a sorted manner. However,
no special handling (loops) is inserted yet.
Intents are not processed at all - thus have to be marked insecure.
Parameters to Android's EntryPoint functions may be either marked
CREATE or REUSE. Added these markers and made them available through
AndroidEntryPointManager.
Using a SummarizedMethodWithNames instead of a normal one enables human
readable variable names in WALA-Synthetic methods. This should help
when debugging.
Have a toolkit that aids in building WALA-synthetic methods by helping
to avoid common mistakes and managing SSA-Variables.
See bundled package-info.java for more detail
Depending on the method used, generating an AnalysisScope failed for
Android apps, in particular depending on whether data came from a
jar resource or on the exclusions.txt.
Implemented the GoTo instruction, reachable through the
JavaInstructionFactory.
Caution:
- It has to be assured manually that a basic block starts at the target
- One may not jump to a Phi-Instruction
CAUTION: Now you have to make sure that the provided android lib actually contains all standard
java classes (e.g. java.lang.Object); WALA will complain and crash if this is not the case
2) make instrumentor preserve the names of jar entries and classes as
they are input, rather than recomputing class names when writing the
output jar. This usually makes no difference, but can preserve broken
structures when the input jar file has mismatches between class names
and its entry names.
Changing SSAConditionalInstruction.isObjectComparison(): previous definition returns true for comparisons of Primordial scope objects, but false for Application scope objects. The update version returns true in both cases
This means that running Maven on the Mac will require putting this back
manually. Also, running the JDT tests on the Mac may require adding
these plugins by hand if the run config is minimizing the plugins being
used.
greater than the max local. This causes the verifier to complain. This
can happen when transforming a class that originally contains local
names for stack slots that are never used. When the new code is
written, a correct max locals is calculated, and then these unused local
table entries trigger errors.
We put all the jar files (the testdata jar, JLex, etc.) in the root
directory now, and set up the build.properties so that these jars get
copied into the final plugin jar, making the tests work properly from
maven. We also still copy the jars into the bin/ directory, so the
Eclipse launchers still work.
These tests create FileModules for certain class / source files,
and hence assume those files are sitting in the filesystem. We
should come up with a better fix here.
Previously, I only included the source file in the zip file but just
realized the zip file is auto-generated from the
com.ibm.wala.cast.java.test.data/src folder.
This ensures that when we need to use stringClass, it can be constructed
directly from the cha field. Previously, it was only initialized after
the call to super(…) in the constructor. However, by then it is too late
because super(…) calls initialize() and solve() which need the
stringClass field.
This is a simple test case that illustrates the issue with type inference
on a String + some primitive.
The code is
package javaonepointfive;
public class TypeInferencePrimAndStringOp {
public static void main(String[] args) {
int a = 2;
String result = "a" + a;
}
}
- remove extraneous printing
- fixes for parse errors in JS and HTML
- fixes for handling parse errors in JS and HTML
- update comments
- Change BitVectorRepository to use LinkedLists
- improve javadoc
- fix for for-in contexts for NEVER case
- missing VectorKill println method
- Annotation support
- Properly fix path-with-spaces bug.
- fix bug involving paths with spaces
- add a simple driver for building a call graph via a scope file
- Properly return null as default constructor of an array.
- organize imports
- better handling of missing bytecodes
- javadoc
- test fix
- small Javadoc fix
- added date-property.js
- added InstanceKey.getCreationSites and its implementations; fixes for issues with keys representing dynamic properties (all properties are converted to strings)
- publicize method makeClasspath(). deprecate quoteStringIfNeeded()
- organize imports
- javadoc
- renamed classes to make relationship to mod-ref analysis clearer
- add support for lexical writes
- Code to compute transitive lexical accesses of methods.
- extract some generally useful code from ModRef
- Generate proper InstanceFieldKeys for property accesses with Numbers.
- rewrite to make hardest test appear last
- fix test to properly check reachability
- add an array test that doesn't quite work
- add method to get a PointerKey for a global
- compare FieldValueDispatch objects based on CGNode as well
- Handle duplicate field names between subclass and superclass.
In some cases, class files will have non-abstract methods with no
bytecodes (e.g., stubs for compilation purposes). While such a class
file is invalid, we want to enable clients to handle such an error.
With these changes, Shrike will throw an InvalidClassFileException for
such cases, and WALA's IR construction code will throw a
WalaRuntimeException.
2) fixes for issues with keys representing dynamic properties
i) all properties are converted to strings, in an approximation of JS
semantics
3) fix to handling of instance keys representing numbers in binary +;
now it understands that adding constant keys of type Number requires
adding a non-constant Number key to the lval
Note that this change actually breaks a couple of our unit tests. But,
it seems they were only passing by accident before anyway, and this change
at least leads to a more consistent handling of dynamic property accesses
with String vs. Number property names.
Removing this dependence allows our Getting Started instructions to still work.
It requires some minor code duplication, but otherwise we'd have to change a
bunch of documentation.
1: get the right position for methods from the JDT AST
2: more seriously, there was a nasty bug in how source positions got
mapped in unwound code: mappings were wrong for instructions that
were the result of duplicating code to replicate unwind handling in
exceptional and non-local exits.
Specifically, rewrite ProgressMaster to not depend on Eclipse, and move
to com.ibm.wala.util. Now, we can use timeout-based code in packages
without introducing an Eclipse dependency.
- basic compatibility with Java 7 (i.e., don't crash immediately)
- Added utility class for converting call graphs to JSON.
- add edgeExists CLI option to check if some edge exists in the call graph
Due to constant parameters, we can't assume that constraints don't need
to be generated when we've already seen a target at a call site (since
the previous constraints may have only passed certain constant parameter
values, rather than all parameter values). Add a check to handle these
cases correctly.
The issue was that it's possible for multiple invoke instructions with
different actual parameters to be associated with a single CallSiteReference.
In this case, the invariant parameters for each invoke instruction may differ.
should be ignored when setting up the arguments array (the former is the
invoked function, the latter is the receiver).
git-svn-id: https://wala.svn.sourceforge.net/svnroot/wala/trunk@4480 f5eafffb-2e1d-0410-98e4-8ec43c5233c4
1) Structural changes in the AstTranslator to allow retranslation and generation of custom IR. This is mostly moving state from the translator itself into the context.
2) Some refactoring to share some AST generation code across the Java and JavaScript front ends.
3) Switching to the latest Rhino, release 1.7R3; this is a pervasive change to the JavaScript Rhino translator, since it involves switching to the new AST interface in Rhino.
4) Common code to, as an option, translate Do-style loops by replicating the loop body. This allows the use of CAstNode.LOOP forms for such loops.
5) Some bug fixes to the mechanisms of the CAstRewriter to handle weird control flow cases.
6) An example of retranslation to specialize JavaScript methods based on how many arguments they receive at call sites.
git-svn-id: https://wala.svn.sourceforge.net/svnroot/wala/trunk@4427 f5eafffb-2e1d-0410-98e4-8ec43c5233c4
This fork uses the gradle setup of the WALA build system. Thus, for details, please
consult the instructions in the file [README-Gradle](README-Gradle.md). In general,
for rebuilding all WALA artifacts, use:
``` sh
./gradlew clean assemble
```
## Publishing Artifacts
### Configuration
For publishing artifacts, you need to configure the username and password required
for uploading to the remote artifacts repository, i.e., you need to add the following
properties to your `GRADLE_USER_HOME/gradle.properties` (default: `$HOME/.gradle/gradle.properties`)
file:
``` gradle
comLogicalhackingArtifactsUser=<USER>
comLogicalhackingArtifactsPassword=<PASSWORD>
```
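For context, such properties are typically consumed by a `publishing` repository block in `build.gradle`. The snippet below is a hypothetical sketch of that pattern; the repository URL is a placeholder, and the actual block in this fork may differ:

``` gradle
publishing {
    repositories {
        maven {
            // Placeholder URL; substitute the real artifacts repository.
            url 'https://artifacts.example.com/repository/releases'
            credentials {
                // Read from GRADLE_USER_HOME/gradle.properties as configured above.
                username project.findProperty('comLogicalhackingArtifactsUser')
                password project.findProperty('comLogicalhackingArtifactsPassword')
            }
        }
    }
}
```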
### Preparation
Before publishing new artifacts to the artifacts repository, update the version
identifier in the file `build.gradle` by removing the `-SNAPSHOT` postfix and
updating the `<WALAVERSION>` identifier:
``` gradle
version '<WALAVERSION>.[R|S].DASCA.<DASCAVERSION>'
```
Here, `<WALAVERSION>` and `<DASCAVERSION>` together form the version that should
be published. Next, commit your changes and tag the new version:
``` sh
git commit -m "Preparing release of version <VERSION>." build.gradle
git tag -s "<VERSION>" -m "Tagging version <VERSION>."
```
Finally, mark the development version by appending `-SNAPSHOT`:
``` gradle
version '<WALAVERSION>.[R|S].DASCA.<DASCAVERSION>-SNAPSHOT'
```
and commit your changes:
``` sh
git commit -m "Marked development version" build.gradle