ExtensionCrawler

A collection of utilities for downloading and analyzing browser extensions from the Chrome Web Store.

  • crawler: A crawler for extensions from the Chrome Web Store.
  • crx-tool: A tool for analyzing and extracting *.crx files (i.e., Chrome extensions). Calling crx-tool.py <extension>.crx will check the integrity of the extension (see the example after this list).
  • extract-crx: A simple tool for extracting *.crx files from the tar-based archive hierarchy.
  • create-db: A tool for creating/initializing the database files from already existing extension archives.
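
For example, an integrity check of a downloaded extension could look like this (a sketch: the file name is a placeholder):

    python3 crx-tool.py extension.crx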

The utilities store the extensions in the following directory hierarchy:

   archive
   ├── conf
   │   └── forums.conf
   ├── data
   │   └── ...
   └── log
       └── ...

The crawler downloads the most recent version of each extension (i.e., the *.crx file) as well as its overview page. In addition, the conf directory may contain a file called forums.conf that lists the ids of extensions for which the forums and support pages should be downloaded as well. The data directory will contain the downloaded extensions as well as SQLite files containing the extracted metadata. The SQLite files can easily be re-generated using the create-db tool.
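
For illustration, a forums.conf could simply list one extension id per line (the ids below are made-up placeholders; consult the crawler source for the exact format):

    abcdefghijklmnopabcdefghijklmnop
    ponmlkjihgfedcbaponmlkjihgfedcba

Since the metadata ends up in SQLite files, a quick way to inspect a generated database is to list its tables with Python's sqlite3 module (a minimal sketch; the file name under data is an assumption):

    import sqlite3

    # Hypothetical file name -- the actual SQLite file under archive/data may differ.
    con = sqlite3.connect("archive/data/extensions.sqlite")

    # Print the name of every table in the database.
    for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type='table'"):
        print(name)

    con.close()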

All utilities are written in Python 3.x. The required modules are listed in the file requirements.txt.

Installation

Clone the repository and use pip to install it as a package:

    git clone git@logicalhacking.com:BrowserSecurity/ExtensionCrawler.git
    pip install -e ExtensionCrawler
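
Alternatively, to install only the Python dependencies listed in requirements.txt:

    pip install -r ExtensionCrawler/requirements.txt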

Team

License

This project is licensed under the GPL 3.0 (or any later version).