Revision 10 as of 2021-07-23 15:24:03

Apache Spark

Spark 2.4.8 is a MapReduce-like cluster computing framework designed for low-latency iterative jobs and interactive use from an interpreter. It provides clean, language-integrated APIs in Scala and Java, with a rich array of parallel operators. Spark can run on top of the Apache Mesos cluster manager, Hadoop YARN, Amazon EC2, or without an independent resource manager (“standalone mode”).

Spark 3.1.2: Apache Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Java, Scala, Python, and R, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools, including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for incremental computation and stream processing.

Spark components


  • Driver program
  • Cluster manager
  • Worker Node
  • Resilient distributed dataset (RDD)

RDD

A Resilient Distributed Dataset (RDD) is the basic abstraction in Spark. It represents an immutable, partitioned collection of elements that can be operated on in parallel.

Install

cd ~/tmp
curl -O https://apache.claz.org/spark/spark-3.1.2/spark-3.1.2-bin-hadoop3.2.tgz
tar tvzf spark-3.1.2-bin-hadoop3.2.tgz   # list the archive contents first
tar xvzf spark-3.1.2-bin-hadoop3.2.tgz   # then extract
vi ~/.bashrc
# add these two lines to ~/.bashrc:
export SPARK_HOME=/home/vitor/tmp/spark-3.1.2-bin-hadoop3.2
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
. ~/.bashrc
cd ~/tmp/spark-3.1.2-bin-hadoop3.2/conf
vi spark-env.sh
# add this line to spark-env.sh:
SPARK_MASTER_HOST=127.0.0.1
cd ~/tmp
start-master.sh
# stop it with:
# stop-master.sh
# the master log is written to:
# /home/vitor/tmp/spark-3.1.2-bin-hadoop3.2/logs/spark-vitor-org.apache.spark.deploy.master.Master-1-debian.out
# 21/07/23 12:02:49 INFO Master: Starting Spark master at spark://127.0.0.1:7077
# 21/07/23 12:02:49 INFO Master: Running Spark version 3.1.2
# 21/07/23 12:02:49 WARN Utils: Service 'MasterUI' could not bind on port 8080. Attempting port 8081.
# 21/07/23 12:02:49 INFO Utils: Successfully started service 'MasterUI' on port 8081.
# 21/07/23 12:02:49 INFO MasterWebUI: Bound MasterWebUI to 0.0.0.0, and started at http://10.0.2.15:8081
# master web UI: http://127.0.0.1:8081
# start a worker
start-worker.sh spark://127.0.0.1:7077
# refresh http://127.0.0.1:8081 and check for:
# Alive Workers: 1


test_spark1.py

  • cd ~/tmp/pyspark-test/
  • . virtenv/bin/activate
  • python3 test_spark1.py

from pyspark.sql import SparkSession

# connect to the standalone master started above
master_url = "spark://127.0.0.1:7077"
spark = SparkSession.builder.master(master_url).getOrCreate()
print("spark session created")
spark.stop()