
Introduction

This project provides an abstraction layer over a variety of data stores, exposing read/write operations through a simple and expressive interface. The abstraction works with NoSQL, SQL, and cloud data stores and leverages pandas data frames.
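
As a rough illustration, reads and writes revolve around pandas DataFrames. The snippet below is a minimal sketch that assumes the package is imported as transport and exposes a factory-style instance() call taking a provider name and connection parameters; the provider strings and keyword arguments shown here are placeholders, so consult the notebooks for the authoritative usage.

import transport          # assumed import name for data-transport
import pandas as pd

# Hypothetical configuration: provider name and connection arguments are placeholders.
reader = transport.factory.instance(provider='postgresql', database='analytics',
                                    table='users', context='read')
df = reader.read()        # in this sketch, read() returns a pandas DataFrame

writer = transport.factory.instance(provider='mongodb', db='analytics',
                                    collection='users', context='write')
writer.write(df)          # persists the DataFrame to the target store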

Why Use Data-Transport?

Data scientists who do not want to worry about the underlying database and simply need a consistent way to read, write, and move data are well served. In addition, the project provides a lightweight Extract-Transform-Load (ETL) API and a command-line (CLI) tool, and read/write operations can be extended with pre/post-processing pipeline functions. In short (a short migration sketch follows the list below):

  1. Familiarity with pandas data-frames
  2. Connectivity drivers are included
  3. Reading/Writing data from various sources
  4. Useful for data migrations or ETL
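
For example, a data migration can be as simple as reading from one store, transforming the resulting DataFrame with ordinary pandas operations, and writing it to another. The sketch below reuses the assumed factory-style API from the introduction; the provider names, keyword arguments, and column names are illustrative placeholders, not the confirmed interface.

import transport          # assumed import name for data-transport

# Hypothetical source and target stores: provider names and arguments are placeholders.
source = transport.factory.instance(provider='mysql', database='legacy',
                                    table='orders', context='read')
target = transport.factory.instance(provider='bigquery', dataset='warehouse',
                                    table='orders', context='write')

df = source.read()                       # extract into a pandas DataFrame
df = df[df['status'] == 'complete']      # transform with regular pandas operations
target.write(df)                         # load into the target store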

Installation

Within a virtual environment, run the following:

pip install git+https://github.com/lnyemba/data-transport.git
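
To confirm the installation, you can try importing the package from the same environment; the module name transport is assumed here.

python -c "import transport; print('data-transport is importable')"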

Learn More

Notebooks with sample code for reading from and writing to MongoDB, CouchDB, Netezza, PostgreSQL, Google BigQuery, Databricks, Microsoft SQL Server, MySQL ... are available. Visit the data-transport homepage.