ExtensionCrawler

A collection of utilities for downloading and analyzing browser extensions from the Chrome Web Store.

  • crawler: A crawler for extensions from the Chrome Web Store.
  • crx-tool: A tool for analyzing and extracting *.crx files (i.e., Chrome extensions). Calling crx-tool.py <extension>.crx will check the integrity of the extension (see the example after this list).
  • crx-extract: A simple tool for extracting *.crx files from the tar-based archive hierarchy.
  • crx-jsinventory: Build a JavaScript inventory of a *.crx file using a JavaScript decomposition analysis.
  • crx-jsstrings: A tool for extracting code blocks, comment blocks, and string literals from JavaScript.
  • create-db: A tool for updating a remote MariaDB from already existing extension archives.
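
For example, the integrity of a downloaded extension package can be checked with crx-tool as described above (the file name below is only a placeholder):

crx-tool.py extension.crx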

The utilities store the extensions in the following directory hierarchy:

   archive
   ├── conf
   │   └── forums.conf
   ├── data
   │   └── ...
   └── log
       └── ...

The crawler downloads the most recent version of each extension (i.e., the *.crx file as well as the overview page). In addition, the conf directory may contain one file, called forums.conf, that lists the IDs of extensions for which the forums and support pages should be downloaded as well. The data directory will contain the downloaded extensions.
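
For illustration, assuming forums.conf simply lists one Chrome Web Store extension ID per line (the exact format is an assumption, and the IDs below are placeholders), the file might look as follows:

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb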

The crawler and create-db scripts will access and update a MariaDB instance. They use the host, database, and credentials found in ~/.my.cnf. Since they make use of various JSON features, it is recommended to use at least version 10.2.8 of MariaDB.
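
For example, a minimal ~/.my.cnf could look like the following sketch (host name, user, password, and database name are placeholders):

[client]
# placeholder connection settings read by the crawler and create-db scripts
host     = db.example.com
user     = crawler
password = secret
database = extensions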

All utilities are written in Python 3.7. The required modules are listed in the file requirements.txt.

Installation

Clone the repository and use pip3 to install it as a package:

git clone git@logicalhacking.com:BrowserSecurity/ExtensionCrawler.git
pip3 install --user -e ExtensionCrawler

Team

Contributors

  • Mehmet Balande

License

This project is licensed under the GPL 3.0 (or any later version).

SPDX-License-Identifier: GPL-3.0-or-later

Master Repository

The master git repository for this project is hosted by the Software Assurance & Security Research Team at https://git.logicalhacking.com/BrowserSecurity/ExtensionCrawler.