This tutorial shows how to install and configure the Cloudera's Distribution of Hadoop (CDH 5) QuickStart image, version 5.7.0 or 5.8, on an Ubuntu 16.04 host using Docker. The choice of host here is Ubuntu 16.04, so before starting the installation take the requirements into consideration and then follow this guide. Also keep in mind that the host you use to run your CDH cluster must meet the same resource requirements as a normal multi-node deployment. This complements ⏯ Getting started with BigData on Cloudera, which ran on a virtual machine.

A little background first. CDH is 100% open source and is the only Hadoop solution to offer batch processing, interactive SQL, and interactive search as well as enterprise-grade continuous availability. Cloudera Manager makes it easy to manage Cloudera deployments of any scale in production: you can quickly deploy, configure, and monitor your cluster through an intuitive UI, complete with rolling upgrades, backup and disaster recovery, and customizable alerts. Cloudera Director (6.3.1 at the time of writing) aims to unlock the full potential of Hadoop in the cloud by enabling an enterprise-grade, self-service experience for deploying, managing, and scaling CDH and Cloudera Enterprise cloud environments, while ensuring auditability. Docker and Cloudera have also made sure that all data and code are protected in transit, at rest, and at runtime; the integrated solution is based on Cloudera Navigator Encrypt running on Docker.

To run a single-node cluster locally we have four choices: VirtualBox, VMware, KVM, or Docker. In this tutorial we use the Docker image. Docker is an open-source project based on Linux containers; virtual machines allow the exact same thing, but while Docker has a more simplified structure compared to them, the real area where it causes disruption is resource efficiency.

Cloudera offers a Docker image, similar to the Cloudera sandbox, that you download and install on your computer. The Cloudera QuickStart Docker image is a single-node deployment of Cloudera Enterprise, which comprises CDH (Cloudera's 100% open-source distribution of the Hadoop platform) and Cloudera Manager. You can use this environment to learn Hadoop and try new ideas.

The first step is to install Docker on the host. If older Docker packages (docker, docker-engine, or docker.io) are installed, uninstall them first; the contents of /var/lib/docker, including images, containers, volumes, and networks, are preserved:

$ sudo apt-get remove docker docker-engine docker.io

You always need the stable repository, even if you want to install builds from the edge or test repositories as well. For example, on Ubuntu 14.04 you can install Docker with a single command; on 16.04, update the apt package index, list the available versions in the repository, then select and install a version of Docker CE. You can install a specific version by its fully qualified package name, for example docker-ce=5:18.09.0~3-0~ubuntu-xenial. Finally, verify that Docker CE is installed correctly by running the hello-world image: that last command downloads a test image and runs it in a container, and when the container runs it prints an informational message and exits.
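As a rough sketch of those steps, assuming Ubuntu 16.04 (xenial) and Docker's official stable apt repository (swap the release codename, for example trusty for 14.04, and the version string to match your system):

$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository \
      "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"
$ sudo apt-get update
$ apt-cache madison docker-ce          # list the Docker CE versions available in the repo
$ sudo apt-get install docker-ce=5:18.09.0~3-0~ubuntu-xenial   # or plain docker-ce for the latest
$ sudo docker run hello-world          # downloads a test image, runs it, prints a message, exits

If the hello-world container prints its welcome message, Docker is ready for the Cloudera image.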
The next step is to import the Cloudera QuickStart image. Before importing the image, make sure Docker is running, and type the command into the terminal from your home directory (/home/your_name). You can import the image by pulling it from Docker Hub:

$ docker pull cloudera/quickstart:latest

(Depending on the release, you may only find beta versions listed on Docker Hub.) You can also download the image from the Cloudera website. Either way, this will take a while to complete because it is a fairly large file, so you can go and have a cup of tea or whatever you like; mind you, it is going to take some time before the image is ready. The hash of the image is printed in the terminal when you import it, or you can look up the hashes of all imported images later.

To run a container using the image, you must know the name or hash of the image. If you followed the import instructions above, the name is cloudera/quickstart:latest. Once you know the name or hash, you can run it:

$ docker run -m 4G --memory-reservation 2G --memory-swap 8G --hostname=quickstart.cloudera --privileged=true -t -i -v $(pwd):/zaid --publish-all=true -p 8888 -p 8088 cloudera/quickstart /usr/bin/docker-quickstart

Basically, this command tells Docker to run the image with 4 GB as the maximum amount of memory the container can use, 2 GB as a soft limit (smaller than 4 GB) that is activated when Docker detects contention or low memory on the host machine, and 8 GB as the amount of memory this container is allowed to swap to disk.

The memory settings matter because this container is big: even a machine with 16 GB of RAM can feel tight, and the minimum you should give the container is 4 GB. The laptop used for this guide only has 8 GB, so we allocate 4 GB to Docker while it is running. Hadoop needed more space but was not telling me of its needs (more space: why is communication so hard?). The solution was to go into the Docker preferences, increase the memory amount, and then check that everything is working fine. Watching the running container shows a live stream of statistics; in our case the memory usage of the Docker image stayed between 1.77 GB and 4 GB.

The required flags and other options are described below:

- --hostname=quickstart.cloudera is required: the pseudo-distributed configuration assumes this hostname.
- --privileged=true is required for HBase, the MySQL-backed Hive metastore, Hue, Oozie, Sentry, and Cloudera Manager.
- -t is required and starts a terminal emulator, and -i makes the session interactive; you need them if you want to use the terminal, either immediately or to connect to the terminal later.
- -v shares a volume with the container: anything we put in the host directory (here $(pwd), which we have to change to the directory of our files) shows up inside the container under /zaid.
- -p 8888 is recommended: it maps the Hue port in the guest to another port on the host. Optionally map any other ports the same way, for example 7180 for Cloudera Manager or 80 for a guided tutorial.
- /usr/bin/docker-quickstart starts all CDH services and then runs a Bash shell; omit it if you want to start services manually.

Once services are started, a Bash shell takes over. Once you have created the Docker container, called "cloudera" here, do not remove it unless you intend to delete all of your work and start cleanly.

To prove that Hue and YARN are working as expected, put localhost:32768 in your browser for Hue and localhost:32769 for YARN. You might also want to set up port forwarding so that the port you would normally connect to on a real cluster is mapped to the corresponding port on the host. Warning: when you are mapping ports like this, services are not aware of the mapping and might provide links or other references to specific ports that are no longer available on your client, which can leave you out of touch with the rest of the cluster. You can check the ports, the statistics, and the content of the cluster using the commands sketched below.
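Here is a minimal sketch of those checks using standard Docker and Hadoop commands. The container name cloudera and the host port numbers are assumptions: --publish-all=true lets Docker pick ephemeral host ports, and unless you pass --name cloudera to docker run, Docker assigns a random name, so look both up with docker ps first.

$ docker ps                      # running containers, their names/IDs, and the PORTS mappings
$ docker images                  # all imported images with their names and hashes
$ docker port cloudera 8888      # host port mapped to Hue, e.g. 0.0.0.0:32768
$ docker port cloudera 8088      # host port mapped to the YARN ResourceManager, e.g. 0.0.0.0:32769
$ docker stats cloudera          # live stream of CPU and memory statistics for the container

And inside the container's Bash shell:

ls /zaid                         # the host directory shared through -v $(pwd):/zaid
hadoop fs -ls /                  # list the root of HDFS to confirm the cluster is answering

The docker port output is what tells you which localhost port to use in the browser; the 32768 and 32769 values above are just what Docker happened to pick on this machine.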
Cloudera Manager is not started by default. To see the options for starting Cloudera Manager, run the startup command given in the Cloudera QuickStart documentation; once it is up, logging into Cloudera Manager happens through the port you mapped for it (7180 in the guest). See the Cloudera documentation and the Cloudera website for other information, including the license agreement associated with the Docker image.

As an aside on Cloudera itself: December 2019 spelled the end of a bad year for Cloudera, during which it released shocking results for its first quarter of fiscal '20, ended 30 June, after which the company's stock fell 32 per cent. The announcement of former Hortonworks and Docker man Rob Bearden seemed to steady the ship, but analysts were quick to point out that the clock was ticking in terms of its technology vision.

There are also community and vendor resources worth knowing about: a "Cloudera Manager 5.16.2 all in one Dockerfile" gist for creating a Docker container with all CDH components, davideicardi's cloudera-docker.md notes on running Cloudera with Docker for development and test, and other tutorials based on lighter Docker containers. At the other end of the scale, a Cisco reference document describes the architecture and deployment procedures for Cloudera 6.2.0 with Cloudera Data Science Workbench 1.5.0 on a 30-node Cisco UCS C240 M5 and Cisco UCS S3260 cluster based on Cisco UCS Integrated Infrastructure for Big Data and Analytics.

The next tutorial, ⏯ 10: Docker Tutorial: BigData services & folders on Cloudera quickstart, will drill into the QuickStart services, CLIs, config files, and so on, to get a good overview. Thanks for reading.
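P.S. For later sessions, a few container lifecycle commands are worth keeping at hand. This is a sketch using standard Docker commands; the name cloudera is an assumption (pass --name cloudera to docker run, or substitute the ID shown by docker ps -a):

$ docker ps -a                   # list all containers, including stopped ones
$ docker stop cloudera           # stop the container; everything inside it is kept
$ docker start cloudera          # start it again; the original /usr/bin/docker-quickstart command re-runs
$ docker attach cloudera         # reattach your terminal to the container's Bash shell
$ docker exec -it cloudera bash  # or open an extra shell alongside the existing one

This is also why the note above says not to remove the container: docker rm would discard all the work stored inside it, while stop and start do not.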