PySpark Basic Operations (based on pyspark-algorithms)

1. Basic Operations

from pyspark.sql import SparkSession
from pyspark.sql import Row
from pyspark.sql.types import *  # provides StructType, StructField and the data type classes
import pyspark.sql.functions as sql_func

1.1 Creating a Spark connection

1.1.1 SparkSession

spark = SparkSession.builder.appName("session_name").getOrCreate()
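
The builder can also carry configuration before getOrCreate(); a minimal sketch (the master URL and memory value here are illustrative assumptions, not from the source):

spark = SparkSession.builder\
    .appName("session_name")\
    .master("local[*]")\
    .config("spark.executor.memory", "500M")\
    .getOrCreate()  # returns the existing session if one is already running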

1.1.2 SparkConf

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName('WordCount')
conf.set('spark.executor.memory', '500M')
conf.set('spark.cores.max', 4)
sc = SparkContext(conf=conf)  # the conf must be passed to a SparkContext before use

## Reading data then works the same way as with a SparkSession, e.g.:
rdd = sc.textFile(fil_name)
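
Since the conf above names the application 'WordCount', here is a minimal word-count sketch against the sc built above, assuming fil_name points to a plain-text file:

counts = sc.textFile(fil_name)\
    .flatMap(lambda line: line.split(' '))\
    .map(lambda word: (word, 1))\
    .reduceByKey(lambda x, y: x + y)  # sum the 1s per word
print(counts.collect())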

1.2 Loading data

1.2.1 Loading JSON

df = spark.read.json(json_path)
df.show()
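
Spark infers the schema from the JSON keys; a quick way to check what it inferred:

df.printSchema()  # column names and types as a tree
print(df.dtypes)  # the same information as (name, type) pairs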

1.2.2 Loading text files

results = spark.sparkContext.textFile(fil_name)
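
textFile returns an RDD with one element per line; a small sketch for peeking at it without pulling the whole file to the driver:

print(results.take(3))   # first three lines only
print(results.count())   # total number of lines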

1.2.3 Loading CSV

Define the schema first, then load the data.

schema = StructType([
    StructField('name', StringType()),
    StructField('city', StringType()),
    StructField('age', DoubleType())
])

# inferSchema is unnecessary (and ignored) when an explicit schema is supplied
df = spark.read.schema(schema).format('csv')\
    .option('header', 'false')\
    .load(fil_name)

df.show()
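
If the CSV file has a header row, the explicit schema can be skipped and inferred instead; a sketch assuming such a file (note that header flips to 'true'):

df2 = spark.read.format('csv')\
    .option('header', 'true')\
    .option('inferSchema', 'true')\
    .load(fil_name)
df2.printSchema()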

1.3 Common operations

1.3.1 Data with named columns (e.g. JSON)

## select
df.select('name').show()
df.select(df['name'], df['age'] + 1).show()

## where
df.filter(df['age'] > 23).show()
## groupBy
df.groupBy('age').count().show()


# register df as a temporary view named 'people'
df.createOrReplaceTempView('people') 
sql_df = spark.sql('select * from people')
sql_df.show()

# Register the df as a global temporary view
df.createGlobalTempView('people')
spark.sql('select * from global_temp.people').show()
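
The two kinds of view differ in scope: a temp view belongs to the current session, while a global temp view lives in the global_temp database and is visible to other sessions of the same application. A minimal sketch:

other = spark.newSession()  # a second session in the same application
other.sql('select * from global_temp.people').show()  # the global view is visible here
# other.sql('select * from people') would fail: the temp view is session-scoped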

spark.stop()

1.3.2 RDD operations

import statistics as stat  # needed for mean/median/stdev below

def compute_stats(num_dt):
    # compute average, median and standard deviation of a sequence of numbers
    avg = stat.mean(num_dt)
    median = stat.median(num_dt)
    std = stat.stdev(num_dt)
    return avg, median, std

def create_pair(record):
    # parse a 'url,frequency' CSV record into a (url, frequency) pair
    tokens = record.split(',')
    url_address = tokens[0]
    frequency = int(tokens[1])
    return (url_address, frequency)
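
A quick local sanity check of the two helpers, using made-up values purely for illustration:

print(create_pair('http://example.com,3'))   # ('http://example.com', 3)
print(compute_stats([2, 3, 5, 7]))           # (4.25, 4.0, ~2.22)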

1.3.2.1 Simple operations

# where / filter
resf = results.filter(lambda record: len(record) > 5)
# map each record to a (url, frequency) pair
resf = resf.map(create_pair)
# group by key, then compute per-key statistics via compute_stats
resf = resf.groupByKey().mapValues(compute_stats)

resf.collect()
spark.stop()

# A per-key running sum works the same way:
# reduceByKey(lambda x, y: x + y)
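
For a plain sum, reduceByKey combines values on each partition before the shuffle, so it usually moves less data than groupByKey; a sketch using the same (url, frequency) pairs (assuming the session has not been stopped yet):

totals = results.filter(lambda record: len(record) > 5)\
    .map(create_pair)\
    .reduceByKey(lambda x, y: x + y)  # sum frequencies per URL
print(totals.collect())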

1.3.2.2 Sorting

records = spark.sparkContext.textFile(fil_name)
print("flatten >> pair each number with a count >> sort by key")
sorted_cnt = records.flatMap(lambda rec: rec.split(' '))\
    .map(lambda n: (int(n), 1)).sortByKey()
output = sorted_cnt.collect()  # collect once and reuse, instead of collecting twice
print(output)
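
sortByKey also accepts an ascending flag, and for a small top-N result takeOrdered avoids collecting the full sorted RDD; a short sketch on the same pairs:

desc = sorted_cnt.sortByKey(ascending=False).take(5)     # five largest keys, descending
top3 = sorted_cnt.takeOrdered(3, key=lambda kv: -kv[0])  # three largest keys
print(desc, top3)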

1.3.3 CSV data with an explicit schema

  • Method chaining
average_method1 = df.groupBy('city').agg(sql_func.avg('age').alias('average'))
average_method1.show()
  • spark.sql
    Requires registering the DataFrame as a view first.
df.createOrReplaceTempView('df_tbl')
average_method2 = spark.sql("select city, avg(age) avg_age from df_tbl group by city")
average_method2.show()
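
agg accepts several aggregate columns at once, so other per-city statistics can be added to method 1 in the same pass; a sketch:

stats = df.groupBy('city').agg(
    sql_func.avg('age').alias('avg_age'),
    sql_func.min('age').alias('min_age'),
    sql_func.max('age').alias('max_age'),
    sql_func.count('age').alias('n'))
stats.show()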
spark.stop()

Reference: https://github.com/mahmoudparsian/pyspark-algorithms
