Common PySpark DataFrame operators for data analysis

createDataFrame: create a DataFrame
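
All examples below assume an existing SparkSession named spark. A minimal setup sketch (the app name here is arbitrary):

from pyspark.sql import SparkSession

# A local SparkSession is enough for these toy examples
spark = SparkSession.builder.appName("df-operators").getOrCreate()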

df = spark.createDataFrame([
    (144.5, 185, 33, 'M', 'China'),
    (167.2, 165, 45, 'M', 'China'),
    (124.1, 170, 17, 'F', 'Japan'),
    (144.5, 185, 33, 'M', 'Pakistan'),
    (156.5, 180, 54, 'F', None),
    (124.1, 170, 23, 'F', 'Pakistan'),
    (129.2, 175, 62, 'M', 'Russia'),
    ], ['weight', 'height', 'age', 'gender', 'country'])

show

df.show()
By default, show() truncates any value longer than 20 characters. To print full values, disable truncation:
df.show(truncate=False)
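The first argument of show controls how many rows are printed (the default is 20):

df.show(5)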
 

filter: filter rows

(1) Filter on a single condition

df.filter(df['age'] == 33)
# or
df.filter('age = 33')
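A third equivalent spelling uses F.col (import shown here for completeness):

import pyspark.sql.functions as F
df.filter(F.col('age') == 33)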

(2) Filter on multiple conditions

# 'or'
df.filter((df['age'] == 33) | (df['gender'] == 'M'))
# 'and'
df.filter((df['age'] == 33) & (df['gender'] == 'M'))
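Negation uses the ~ operator, again with the condition parenthesized (a small addition beyond the original examples):

# 'not'
df.filter(~(df['age'] == 33))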

Null-value filtering

  1. Filter for records where a given column is not null
df.filter("country is not null")
# or
df.filter(df["country"].isNotNull())
# or
df[df["country"].isNotNull()]

Note: the empty string "" is not null, so it will not be filtered out.
  2. Filter for records where a given column is null

df.filter("country is null")
# or
df.filter(df["country"].isNull())

groupBy: group rows

  1. Count the rows in each group
df.groupBy(df["age"]).count().show()
+---+-----+
|age|count|
+---+-----+
| 33|    2|
| 45|    1|
| 17|    1|
| 54|    1|
| 23|    1|
| 62|    1|
+---+-----+
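groupBy also pairs with agg for other aggregations; a brief sketch using built-in functions (column names taken from the sample df above):

import pyspark.sql.functions as F

# Average weight and maximum age per gender
df.groupBy("gender").agg(
    F.avg("weight").alias("avg_weight"),
    F.max("age").alias("max_age"),
).show()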

Renaming columns

  1. alias (assumes import pyspark.sql.functions as F, shown in the next section)
df.select(F.col("country").alias("state"))
  2. withColumnRenamed
df.withColumnRenamed("country", "state")

explode: turn one column into multiple rows

import pyspark.sql.functions as F
from pyspark.sql.types import *
df = spark.createDataFrame([
    ('u1', 'i1', 'r001,r002,r003'),
    ('u2', 'i2', 'r002,r003'),
    ('u3', 'i3', 'r001')
    ], ['user_id', 'item_id', 'recall_id'])

First, build a new column recall_id_lst from the recall_id column:

# A UDF that splits the comma-separated string into an array of strings
df = df\
    .withColumn("recall_id_lst", F.udf(lambda x: x.split(','), returnType=ArrayType(StringType()))(F.col("recall_id")))
# Result
+-------+-------+--------------+------------------+
|user_id|item_id|     recall_id|     recall_id_lst|
+-------+-------+--------------+------------------+
|     u1|     i1|r001,r002,r003|[r001, r002, r003]|
|     u2|     i2|     r002,r003|      [r002, r003]|
|     u3|     i3|          r001|            [r001]|
+-------+-------+--------------+------------------+
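As an aside, the same array column can be built without a UDF via the built-in F.split (equivalent here, since recall_id is comma-separated):

df = df.withColumn("recall_id_lst", F.split(F.col("recall_id"), ","))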

Then explode recall_id_lst into multiple rows:


df.select("user_id", "item_id", F.explode(F.col("recall_id_lst")).alias("recall_id_plat"))
# Result
+-------+-------+--------------+
|user_id|item_id|recall_id_plat|
+-------+-------+--------------+
|     u1|     i1|          r001|
|     u1|     i1|          r002|
|     u1|     i1|          r003|
|     u2|     i2|          r002|
|     u2|     i2|          r003|
|     u3|     i3|          r001|
+-------+-------+--------------+
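Note that explode drops rows whose array is null or empty; newer Spark versions provide F.explode_outer to keep such rows with a null value instead.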

Deduplication

Deduplicate on a subset of columns:

df.dropDuplicates(['weight', 'height'])
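Called with no arguments, dropDuplicates deduplicates on all columns, which is equivalent to distinct:

df.dropDuplicates()
# or
df.distinct()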

when

df.withColumn("age_range", F.when(df.age > 60, "old")
    .when((df.age > 18) & (df.age <= 60),"mid")
    .otherwise("young"))
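Since when branches are evaluated in order and the first match wins, the (df.age <= 60) part of the second condition is redundant here: ages above 60 were already caught by the first branch.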

union: concatenate DataFrames

df.union(df)
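union matches columns by position, not by name; when two DataFrames may differ in column order, unionByName (available in Spark 2.3+) is the safer choice:

df.unionByName(df)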

Saving data

# path is a placeholder for the output directory
df.write.mode("overwrite")\
    .save(path, header=True, format='csv')
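The same write can also be spelled with the csv writer and an explicit header option (path is still a placeholder; Spark writes it as a directory of part files):

df.write.mode("overwrite").option("header", True).csv(path)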

I'll keep adding commonly used operators to this post as I go~

References:

1.http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html#pyspark.sql.functions
