What is Apache Hadoop (HDFS)?
The Hadoop Distributed File System (HDFS) is the open-source, distributed file system at the core of the Apache Hadoop framework, designed for high-bandwidth storage of very large datasets. It is scalable and portable, and can be accessed through a Java API, a command-line shell, and the WebHDFS REST interface. It is best suited for batch processing of large volumes of data in parallel.
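As a rough illustration of the programmatic access mentioned above, the sketch below builds request URLs for the WebHDFS REST interface. The host name is hypothetical, and the helper function is purely illustrative; in practice you would point it at your own NameNode (WebHDFS defaults to port 9870 on Hadoop 3.x).

```python
# Sketch: constructing WebHDFS v1 REST URLs for common HDFS operations.
# The host below is hypothetical; substitute your NameNode's address.
from urllib.parse import urlencode

def webhdfs_url(host: str, path: str, op: str, port: int = 9870, **params) -> str:
    """Build a WebHDFS v1 URL for the given HDFS path and operation."""
    query = urlencode({"op": op, **params})
    return f"http://{host}:{port}/webhdfs/v1{path}?{query}"

# Read a file (an HTTP GET against this URL streams the file contents):
read_url = webhdfs_url("namenode.example.com", "/data/events/part-0000.csv", "OPEN")

# List a directory:
list_url = webhdfs_url("namenode.example.com", "/data/events", "LISTSTATUS")
```

The same paths are equally reachable from the HDFS shell (for example `hdfs dfs -ls /data/events`), which is the other access route noted above.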
Extract Data From HDFS
Loome makes it simple to connect to Apache Hadoop and extract data for downstream systems such as an Integration Hub, Reporting Data Store, Data Lake or Enterprise Data Warehouse. Built-in features allow bulk selection of all source tables/files to be automatically synced on a regular schedule, minimising data load size by leveraging incremental load logic.
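Incremental loads of this kind are commonly implemented with a high-water mark: each sync only fetches rows modified since the last recorded watermark. The sketch below is a generic illustration of that pattern, not Loome's actual implementation; all names and fields are hypothetical.

```python
# Minimal sketch of high-water-mark incremental extraction.
# Illustrative only; field and function names are hypothetical.
from typing import Iterable

def extract_incremental(rows: Iterable[dict], watermark: int) -> tuple[list[dict], int]:
    """Return rows modified after `watermark`, plus the new watermark."""
    new_rows = [r for r in rows if r["modified_ts"] > watermark]
    # Advance the watermark to the latest change seen; keep it if nothing changed.
    new_watermark = max((r["modified_ts"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

source = [
    {"id": 1, "modified_ts": 100},
    {"id": 2, "modified_ts": 250},
]
first_rows, wm = extract_incremental(source, watermark=0)     # full initial load
second_rows, wm2 = extract_incremental(source, watermark=wm)  # nothing new to load
```

Persisting the watermark between scheduled runs is what keeps each subsequent load small.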
Natively Orchestrate HDFS Integration Tasks
Loome allows orchestration of data pipelines across data engineering, data science and high-performance computing workloads, with native integration of Apache Hadoop data pipeline tasks.
Loome provides a sophisticated workbench for configuration of job and task dependencies, scheduling, detailed logging, automated notifications and API access for dynamic task creation and execution.
Loome can execute tasks stored as scripts in a Git repository, entered via a web interface, or run as operations within a database. Loome includes support for native execution of SQL, Python, Spark, Hive, PowerShell/PowerShell Core and operating system commands.
Loome also simplifies deployment across multiple environments, including the approval of changes as they move between Development, Test and Production. It lets you scale your advanced pipelines onto on-demand clusters without changing a single line of code.