Introduction

This project provides an abstraction over a variety of data stores, exposing read/write operations through a simple and expressive interface. The abstraction works with NoSQL, SQL, and cloud data stores and leverages pandas.
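
As a rough illustration, a typical usage pattern might look like the sketch below. The factory names and parameters shown here are assumptions rather than the documented API; the project's notebooks are the authoritative reference.

    import transport

    # Hypothetical sketch: the factory calls and parameters below are assumptions;
    # see the project's notebooks for the actual API.
    reader = transport.get.reader(label='my-postgres')   # data store registered under a label
    df = reader.read()                                    # returns a pandas DataFrame

    writer = transport.get.writer(label='my-mongodb')
    writer.write(df)                                      # persist the DataFrame to another store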

Why Use Data-Transport?

Data-transport is a lightweight framework that:

  • is easy to install and modify (open source)
  • provides access to multiple database technologies (via pandas and SQLAlchemy)
  • enables notebook sharing without exposing database credentials
  • supports pre/post-processing specifications (pipelines)

Installation

Within your virtual environment, run the following:

pip install git+https://github.com/lnyemba/data-transport.git

Optional components can be installed by listing them in square brackets (one or more of nosql, cloud, warehouse, or all):

pip install data-transport[nosql,cloud,warehouse,all]@git+https://github.com/lnyemba/data-transport.git
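
For example, to install only the NoSQL support:

pip install data-transport[nosql]@git+https://github.com/lnyemba/data-transport.git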

Additional features

- In addition to read/write operations, user-defined functions can be applied for pre/post processing (see the sketch after this list)
- A CLI is provided to add data stores to the registry and to run ETL jobs
- Scales to and integrates with shared environments such as Apache Zeppelin, JupyterHub, SageMaker, ...
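
The snippet below is a minimal sketch of a pre-processing hook, assuming the write call accepts a list of callables; the 'pre' parameter name, the factory call, and the 'warehouse' label are assumptions, not the documented signature.

    import pandas as pd
    import transport

    # Hypothetical pre-processing hook; the 'pre' parameter name and the factory
    # call below are assumptions, not the library's documented interface.
    def anonymize(df):
        # Drop a sensitive column before the data leaves the notebook.
        return df.drop(columns=['ssn'], errors='ignore')

    df = pd.DataFrame({'name': ['alice', 'bob'], 'ssn': ['123', '456']})

    writer = transport.get.writer(label='warehouse')   # 'warehouse' is an assumed registered label
    writer.write(df, pre=[anonymize])                  # assumed hook parameter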

Learn More

Sample notebooks with code to read/write against MongoDB, CouchDB, Netezza, PostgreSQL, Google BigQuery, Databricks, Microsoft SQL Server, MySQL, ... are available. Visit the data-transport homepage.