Hadoop vs IBM DB2: What are the differences?

Introduction

Hadoop and IBM DB2 are both popular technologies used in the field of data storage and processing, but they have several key differences that set them apart from each other.

  1. Architecture: Hadoop is an open-source framework that utilizes a distributed file system and a MapReduce processing model to handle big data. On the other hand, IBM DB2 is a proprietary, relational database management system (RDBMS) that follows a traditional centralized architecture.

  2. Data Processing: Hadoop is designed to handle unstructured and semi-structured data efficiently. It provides a scalable and fault-tolerant platform for processing large volumes of data in parallel. In contrast, IBM DB2 is optimized for structured data processing and offers various relational database management features, such as indexing, querying, and transaction handling.

  3. Scalability: Hadoop is highly scalable and can easily handle petabytes of data by adding more commodity hardware to the cluster. It provides a distributed computing environment, allowing data processing to be spread across multiple nodes. In comparison, IBM DB2's scalability is limited by the capacity of a single server instance, making it more suitable for smaller to medium-sized datasets.

  4. Data Storage: Hadoop uses a distributed file system called HDFS (Hadoop Distributed File System) for storing data across multiple machines in a cluster. This enables fault tolerance and high availability. In contrast, IBM DB2 stores data in a structured manner using tables, indexes, and other database objects within a single server instance.

  5. Processing Speed: Hadoop excels at processing large volumes of data by distributing the workload across a cluster of machines. It can leverage parallel processing and perform computations in a distributed manner, leading to faster processing times for big data tasks. IBM DB2, being a traditional RDBMS, is optimized for transaction processing and handling structured data efficiently.

  6. Cost: Hadoop is an open-source framework and allows organizations to utilize commodity hardware, resulting in a lower total cost of ownership. Conversely, IBM DB2 is a proprietary technology and typically involves licensing costs, making it comparatively more expensive.
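To make the distribution idea in the scalability and storage points concrete, here is a minimal Python sketch (all names and sizes hypothetical, not Hadoop APIs) of how an HDFS-like system might split a file into fixed-size blocks and replicate each block across nodes for fault tolerance:

```python
from itertools import cycle

BLOCK_SIZE = 4     # bytes per block (HDFS defaults to 128 MB; tiny here for illustration)
REPLICATION = 2    # copies of each block (HDFS defaults to 3)
NODES = ["node-a", "node-b", "node-c"]

def place_blocks(data: bytes):
    """Split data into blocks and assign each block to REPLICATION distinct nodes."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    placement = {}
    node_ring = cycle(range(len(NODES)))
    for idx, block in enumerate(blocks):
        # Round-robin the primary copy, then place replicas on the next nodes over,
        # so losing any single node still leaves a copy of every block.
        start = next(node_ring)
        replicas = [NODES[(start + r) % len(NODES)] for r in range(REPLICATION)]
        placement[idx] = {"data": block, "replicas": replicas}
    return placement

layout = place_blocks(b"hello distributed world!")
for idx, info in layout.items():
    print(idx, info["replicas"])
```

A real NameNode also tracks rack topology and re-replicates blocks when a node dies; this sketch only shows the core split-and-replicate idea.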

In summary, Hadoop is an open-source, distributed framework suited for processing big data with its scalability, fault tolerance, and parallel processing capabilities. In contrast, IBM DB2 is a proprietary relational database management system optimized for structured data processing and transaction handling.
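As a toy illustration of the MapReduce model mentioned above: the map phase emits (word, 1) pairs, a shuffle groups values by key, and the reduce phase sums per key. This runs in a single process; real Hadoop distributes each phase across the cluster, and the function names here are illustrative rather than Hadoop APIs:

```python
from collections import defaultdict

def map_phase(line: str):
    # Mapper: emit a (word, 1) pair for every word in the input line.
    for word in line.lower().split():
        yield word, 1

def shuffle(pairs):
    # Shuffle: group all values by key, as the framework does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reducer: sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data needs big tools", "hadoop processes big data"]
pairs = (pair for line in lines for pair in map_phase(line))
counts = reduce_phase(shuffle(pairs))
print(counts["big"])  # "big" appears three times across both lines
```

The equivalent work in DB2 would be a single `SELECT word, COUNT(*) ... GROUP BY word` over a table, which is exactly the structured-versus-distributed trade-off the comparison describes.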

Advice on IBM DB2 and Hadoop
Needs advice on Hadoop, MarkLogic, and Snowflake

For a property and casualty insurance company, we currently use MarkLogic and Hadoop for our raw data lake and are trying to figure out how Snowflake fits into the picture. Does anybody have good suggestions or best practices for when to use each platform and what data to store in MarkLogic versus Snowflake versus Hadoop — or are all three of these platforms redundant with one another?

Replies (1)
Ivo Dinis Rodrigues
Recommends

As I see it, you can use Snowflake as your data warehouse and MarkLogic as your data lake. You can land all your raw data in MarkLogic, curate it into a company data model, and then supply that to Snowflake. You could try to implement the data-warehouse functionality on MarkLogic itself, but it would just cost you a lot of time. If you are using the AWS version of Snowflake, you can use the MarkLogic Spark connector to access the data. As an extra, you can also use MarkLogic as an operational reporting system if you pair it with a reporting tool like Power BI, and with additional APIs you can provide data to other systems with MarkLogic as the source.

Needs advice on Hadoop, InfluxDB, and Kafka

I have a lot of data currently sitting in a MariaDB database — many tables that weigh 200 GB with indexes. Most of the large tables have a date column that is always filtered, plus 4-6 additional columns that are filtered and used for statistics. I'm trying to figure out the best tool for storing and analyzing large amounts of data, preferably self-hosted or a cheap solution. The current problem I'm running into is speed: even with pretty good indexes, loading a large dataset is slow.

Replies (1)
Recommends Druid

Druid could be an amazing solution for your use case. My understanding and assumption is that you are looking to export your data from MariaDB for an analytical workload. Druid can serve as a time-series database as well as a data warehouse, and it can be scaled horizontally as your data grows. It's pretty easy to set up in any environment (cloud, Kubernetes, or a self-hosted *nix system). Some important features that make it a good fit for your use case:

  1. It can do streaming ingestion (Kafka, Kinesis) as well as batch ingestion (files from local or cloud storage, or databases like MySQL and Postgres — in your case MariaDB, which uses the same drivers as MySQL).
  2. It is a columnar database, so queries read only the fields they need, which makes them faster automatically.
  3. Druid intelligently partitions data by time, so time-based queries are significantly faster than in traditional databases.
  4. You can scale up or down by just adding or removing servers, and Druid automatically rebalances; its fault-tolerant architecture routes around server failures.
  5. It gives you an amazing centralized UI to manage data sources, queries, and tasks.
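As a hedged sketch of what the time-filtered queries described above could look like against Druid's SQL endpoint (Druid exposes SQL over HTTP at `/druid/v2/sql`; the datasource name, host, and column names here are hypothetical), this builds the JSON payload a client would POST — the HTTP call itself is left out so the snippet stands alone:

```python
import json

# Druid Router's default port; adjust for your cluster (assumed local setup).
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

def build_sql_request(datasource: str, day: str) -> dict:
    """Build the JSON body for Druid's SQL endpoint: one day of stats filtered on __time."""
    query = (
        f'SELECT status, COUNT(*) AS events '
        f'FROM "{datasource}" '
        f"WHERE __time >= TIMESTAMP '{day} 00:00:00' "
        f"AND __time < TIMESTAMP '{day} 00:00:00' + INTERVAL '1' DAY "
        f'GROUP BY status'
    )
    return {"query": query, "resultFormat": "object"}

payload = build_sql_request("events", "2023-05-01")
print(json.dumps(payload, indent=2))
# A client would POST this payload to DRUID_SQL_URL with Content-Type: application/json.
```

The `__time` column is Druid's built-in timestamp; filtering on it is what lets Druid prune time partitions, which is the speedup point 3 above is describing.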

Pros of IBM DB2
  • Rock solid and very scalable (7)
  • BLU Analytics is amazingly fast (5)
  • Native XML support (2)
  • Secure by default (2)
  • Easy (2)
  • Best performance (1)

Pros of Hadoop
  • Great ecosystem (39)
  • One stack to rule them all (11)
  • Great load balancer (4)
  • Amazon aws (1)
  • Java syntax (1)



What is IBM DB2?

DB2 for Linux, UNIX, and Windows is optimized to deliver industry-leading performance across multiple workloads, while lowering administration, storage, development, and server costs.

What is Hadoop?

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.


What are some alternatives to IBM DB2 and Hadoop?
Oracle
Oracle Database is an RDBMS. An RDBMS that implements object-oriented features such as user-defined types, inheritance, and polymorphism is called an object-relational database management system (ORDBMS). Oracle Database has extended the relational model to an object-relational model, making it possible to store complex business models in a relational database.
MySQL
The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.
PostgreSQL
PostgreSQL is an advanced object-relational database management system that supports an extended subset of the SQL standard, including transactions, foreign keys, subqueries, triggers, user-defined types and functions.
MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
Microsoft SQL Server
Microsoft® SQL Server is a database management and analysis system for e-commerce, line-of-business, and data warehousing solutions.
See all alternatives