Big Data / Hadoop
As the internet boomed, data grew rapidly from megabytes to gigabytes, then to terabytes and petabytes. Industries faced the challenge of storing and processing this large amount of data: the variety of data they generate needs more and more machines, and a single machine cannot store and process it all. Apache Hadoop is a solution to this problem. Hadoop is designed to scale out from one server to hundreds of machines or more, each of which stores and processes data locally. Big Data / Hadoop is used in banking, insurance, finance, retail, healthcare, product manufacturing, and the food industry.
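The scale-out idea above can be sketched in a few lines. This is a minimal single-process illustration, not the real HDFS implementation: HDFS splits a large file into fixed-size blocks and replicates each block on several machines, so each node stores and processes its share of the data locally. The block size, node names, and round-robin placement here are simplified assumptions for illustration.

```python
# Sketch of HDFS-style storage: split a file into blocks, then
# replicate each block on several nodes. Illustrative only -- the real
# HDFS default block size is 128 MB and placement is rack-aware.

BLOCK_SIZE = 8    # tiny block size for the example
REPLICATION = 3   # HDFS's default replication factor

def split_into_blocks(data, block_size=BLOCK_SIZE):
    """Split raw bytes into fixed-size blocks, as HDFS does with large files."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_blocks(blocks, nodes, replication=REPLICATION):
    """Assign each block to `replication` distinct nodes (round-robin sketch)."""
    placement = {}
    for i, block in enumerate(blocks):
        chosen = [nodes[(i + r) % len(nodes)] for r in range(replication)]
        placement[i] = {"data": block, "nodes": chosen}
    return placement

nodes = ["node1", "node2", "node3", "node4"]
blocks = split_into_blocks(b"claims-and-remit-data-from-many-sources")
layout = place_blocks(blocks, nodes)
for idx, info in layout.items():
    print(idx, info["nodes"])
```

Losing any single node leaves at least two copies of every block, which is why commodity (and failure-prone) hardware is acceptable.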
- Volume of Data: industries generate online and offline data ranging from megabytes to petabytes.
- Variety of Data: structured, unstructured, and semi-structured data.
- Velocity of Data: data is generated continuously over the internet.
Benefits of Big Data / Hadoop:
- Low Cost: Hadoop runs on commodity hardware.
- Scalability: nodes can be added or removed according to need.
- Processing: batch processing and real-time data ingestion using Hadoop ecosystem tools.
- Quality Dashboards: graphical reports can be created with any dashboard tool on top of Hadoop.
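The batch-processing benefit above rests on Hadoop's MapReduce model. The sketch below is a minimal single-process illustration of that model, not Hadoop's Java API: a mapper emits (key, value) pairs, the framework shuffles them by key, and a reducer aggregates each key's values. The classic word-count example is used; all function names are illustrative.

```python
# Single-process sketch of the MapReduce batch model that Hadoop
# distributes across a cluster: map -> shuffle by key -> reduce.

from collections import defaultdict

def map_phase(lines):
    # Mapper: emit (word, 1) for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as Hadoop does
    # between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reducer: sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data needs big clusters", "hadoop stores big data"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])   # 3
print(counts["data"])  # 2
```

Because map and reduce operate on independent keys, Hadoop can run many mappers and reducers in parallel, one per data block, on the nodes that already hold the data.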
Use Cases of Big Data / Hadoop:
Challenges in the Healthcare Industry:
- The healthcare industry needs to store the large and growing volume of healthcare data generated over the last several years. A health IT company instituted a policy of saving several years of historical claims and remittance data, but the data sits in an on-premises database, and the company faces challenges in processing such a large volume of claims quickly.
- Solution: Hadoop speeds up collection of payments through operational efficiencies. Impact: low cost, analytic flexibility, and simple deployment and administration.
- Business Benefits: archiving several years of claims and remittance data, which requires complex processing to get it into an understandable format, and logging the terabytes of transactional data generated daily, storing them in Hadoop for analytical purposes.
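The "complex processing to get it into an understandable format" mentioned above is the kind of batch job Hadoop runs over archived claims. The sketch below is a hypothetical, single-process illustration of that step: the record layout, field names, and departments are invented for the example and are not taken from any real claims standard.

```python
# Hypothetical claims-cleaning sketch: raw pipe-delimited records with
# inconsistent spacing and formatting are normalized, then aggregated
# per department -- the shape of work a Hadoop batch job would do at scale.

raw_claims = [
    "C001|  cardiology |1,250.00",
    "C002|RADIOLOGY|300.50",
    "C003|cardiology| 99.99 ",
]

def normalize(record):
    """Parse one raw claim record into a clean dictionary."""
    claim_id, department, amount = (field.strip() for field in record.split("|"))
    return {
        "claim_id": claim_id,
        "department": department.lower(),
        "amount": float(amount.replace(",", "")),
    }

def total_by_department(records):
    """Aggregate claim amounts per department, as a reducer would."""
    totals = {}
    for rec in map(normalize, records):
        totals[rec["department"]] = totals.get(rec["department"], 0.0) + rec["amount"]
    return totals

print(total_by_department(raw_claims))
```

In a real deployment the raw records would live in HDFS and the normalize/aggregate steps would run as distributed map and reduce tasks over terabytes of such data.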