
Introduction

This project implements an abstraction layer that gives objects read/write access to a variety of data stores through a simple, expressive interface. The abstraction works with NoSQL, SQL, and cloud data stores and leverages pandas.

Why Use Data-Transport?

Data-Transport is a simple framework that:

  • is easy to install and modify (open source)
  • provides access to multiple database technologies (pandas, SQLAlchemy)
  • enables notebook sharing without exposing database credentials
  • supports pre/post-processing specifications (pipelines)

Installation

Within a virtual environment, run the following:

pip install git+https://github.com/lnyemba/data-transport.git

Optional components can be installed by listing extras in square brackets:

pip install data-transport[nosql,cloud,warehouse,all]@git+https://github.com/lnyemba/data-transport.git
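
Once installed, data can be read and written through the transport module. The snippet below is a minimal, illustrative sketch: the factory calls (transport.get.reader / transport.get.writer), the providers constants, and the connection parameters shown here are assumptions; the bundled notebooks document the exact interface for each data store.

import transport
from transport import providers

# Assumed factory call and parameter names -- verify against the notebooks.
reader = transport.get.reader(
    provider=providers.POSTGRESQL,   # target data-store technology
    database="mydb",                 # hypothetical database name
    table="measurements"             # hypothetical table name
)
df = reader.read()                   # returns a pandas DataFrame

writer = transport.get.writer(
    provider=providers.MONGODB,      # same abstraction, different store
    db="mydb",                       # hypothetical database name
    collection="measurements"        # hypothetical collection name
)
writer.write(df)                     # write the DataFrame back out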

Additional features

- In addition to read/write, pre- and post-processing functions can be applied to the data (see the sketch below)
- A CLI to add entries to the registry and to run ETL jobs
- Scales and integrates into shared environments such as Apache Zeppelin, JupyterHub, SageMaker, ...
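
The exact hook names for pre/post processing are version-specific; conceptually, a post-processing step is just a function applied to the DataFrame a reader returns before it is handed to a writer. The sketch below illustrates the idea with plain pandas, reusing the assumed reader and writer objects from the earlier sketch.

# Conceptual illustration only -- the library's own pipeline specification
# may differ; reader/writer are the assumed objects from the earlier sketch.
def post_process(df):
    # e.g. drop incomplete rows and normalise column names
    df = df.dropna()
    df.columns = [name.lower() for name in df.columns]
    return df

df = post_process(reader.read())
writer.write(df)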

Learn More

Notebooks with sample code for reading from and writing to MongoDB, CouchDB, Netezza, PostgreSQL, Google BigQuery, Databricks, Microsoft SQL Server, MySQL, and more are available. Visit the data-transport homepage.