SPONGE - Simple Prior Omics Network GEnerator

The SPONGE package generates human prior gene regulatory networks and protein-protein interaction networks for the involved transcription factors.

Table of Contents

  • General Information
  • Features
  • Setup
  • Usage
  • Project Status
  • Room for Improvement
  • Acknowledgements
  • Contact
  • License

General Information

This repository contains the SPONGE package, which allows the generation of human prior gene regulatory networks based mainly on data from the JASPAR database. It also uses NCBI to find the human homologs of vertebrate transcription factors, Ensembl to collect all the promoter regions in the human genome, UniProt for symbol matching, and STRING to retrieve protein-protein interactions between transcription factors. Because it accesses these databases on the fly, it requires internet access.

Prior gene regulatory networks are useful mainly as an input for tools that incorporate additional sources of information to refine them. The prior networks generated by SPONGE are designed to be compatible with PANDA and related NetZoo tools.
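For orientation, priors in the PANDA ecosystem are typically plain edge lists: one line per TF–gene (or TF–TF) pair with a weight. Below is a minimal sketch of parsing such a file; the column layout and the example TF/gene names are illustrative assumptions, not a guarantee of SPONGE's exact output format:

```python
import csv
import io

# Illustrative motif prior: tab-separated TF, target gene, edge weight
example_prior = (
    "TP53\tCDKN1A\t1\n"
    "TP53\tMDM2\t1\n"
    "STAT1\tIRF1\t1\n"
)

# Parse the edge list into (TF, gene, weight) tuples
edges = [
    (tf, gene, int(weight))
    for tf, gene, weight in csv.reader(io.StringIO(example_prior), delimiter="\t")
]

print(edges[0])  # ('TP53', 'CDKN1A', 1)
```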

The purpose of this project is to let people who do not have the expertise or inclination to perform a genome-wide motif search generate their own prior gene regulatory networks, while still being able to adjust the parameters that were used to generate the publicly available priors. It is also designed to make it easy to incorporate new information from database updates into the prior networks.

If you just want to use the prior networks generated by the stable version of SPONGE with the default settings, they are available on Zenodo.

Features

The features already available are:

  • Generation of prior gene regulatory network
  • Generation of prior protein-protein interaction network for transcription factors
  • Automatic download of required files during setup
  • Parallelised motif filtering
  • Command line interface
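To give an intuition for the parallelised motif filtering feature, the sketch below splits motif matches across worker threads and keeps only those that fall inside promoter regions. The coordinates, the interval test, and the use of threads are illustrative assumptions, not SPONGE's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy promoter intervals per chromosome (illustrative, not real coordinates)
PROMOTERS = {"chr1": [(1000, 2000), (5000, 6000)]}

def in_promoter(match):
    """Keep a motif match only if it lies fully inside a known promoter."""
    chrom, start, end = match
    return any(s <= start and end <= e for s, e in PROMOTERS.get(chrom, []))

matches = [("chr1", 1100, 1110), ("chr1", 3000, 3010), ("chr1", 5500, 5510)]

# Evaluate the filter for all matches in parallel worker threads
with ThreadPoolExecutor(max_workers=4) as pool:
    kept = [m for m, ok in zip(matches, pool.map(in_promoter, matches)) if ok]

print(kept)  # [('chr1', 1100, 1110), ('chr1', 5500, 5510)]
```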

Setup

The requirements are provided in a requirements.txt file.

Usage

SPONGE comes with a netzoopy-sponge command line script:

```bash
# Get information about the available options
$ netzoopy-sponge --help
# Run the pipeline
$ netzoopy-sponge
```

The script comes with many options, but the defaults are designed to be sensible, so users do not have to change any of them unless desired.

Within Python, the default workflow can be invoked as follows:

```python
# Import the class definition
from sponge.sponge import Sponge
# Run the default workflow
sponge_obj = Sponge(run_default=True)
```

Much like the command line script, the Sponge class exposes many options that give control over the process, and they can be changed from their defaults. For more information, run help(Sponge) after the import.

In case one needs more control over the individual steps, the workflow in Python would be as follows:

```python
# Import the class definition
from sponge.sponge import Sponge
# Create the SPONGE object
sponge_obj = Sponge()
# Select the vertebrate transcription factors from JASPAR
sponge_obj.select_tfs()
# Find human homologs for the TFs if possible
sponge_obj.find_human_homologs()
# Filter the matches of the JASPAR bigbed file to the ones in the
# promoters of human transcripts
sponge_obj.filter_matches()
# Aggregate the filtered matches on promoters to genes
sponge_obj.aggregate_matches()
# Write the final motif prior to a file
sponge_obj.write_motif_prior()
# Retrieve the protein-protein interactions between the transcription
# factors from the STRING database
sponge_obj.retrieve_ppi()
# Write the PPI prior to a file
sponge_obj.write_ppi_prior()
```
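As a rough illustration of what the aggregation step does conceptually, the toy sketch below collapses promoter-level match scores to one value per TF–gene pair. Taking the maximum score per pair and the data layout are assumptions for illustration only, not necessarily SPONGE's actual rule:

```python
from collections import defaultdict

# Promoter-level matches: (TF, promoter transcript, gene, score) -- toy data
promoter_matches = [
    ("TP53", "ENST0001", "CDKN1A", 8.2),
    ("TP53", "ENST0002", "CDKN1A", 9.7),  # second promoter, same gene
    ("STAT1", "ENST0003", "IRF1", 7.1),
]

# Collapse to gene level, keeping the best score per TF-gene pair (assumed rule)
gene_scores = defaultdict(float)
for tf, _promoter, gene, score in promoter_matches:
    gene_scores[tf, gene] = max(gene_scores[tf, gene], score)

print(dict(gene_scores))
```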

SPONGE will attempt to download the files it needs into a temporary directory (.sponge_temp by default). Paths can be provided if these files were downloaded in advance. The JASPAR bigbed file required for filtering is huge (> 100 GB), so the download might take some time. Make sure you're running SPONGE somewhere that has enough space!

As an alternative to the bigbed file download, SPONGE can download tracks for individual TFs on the fly and filter them individually. This way of processing is slower than the bigbed file when all TFs in the database are considered, but it becomes competitive when only a subset is used, and its storage footprint is much smaller. This option is enabled with on_the_fly_processing=True.

Project Status

The project is: in progress.

Room for Improvement

Room for improvement:

  • Try incorporating unipressed
  • Improve overlap computations

To do:

  • Support for more species

Acknowledgements

Many thanks to the members of the Kuijjer group at NCMM for their feedback and support.

This README is based on a template made by @flynerdpl.

Contact

Created by Ladislav Hovan (ladislav.hovan@ncmm.uio.no). Feel free to contact me!

License

This project is open source and available under the GNU General Public License v3.