I think the following code should produce the desired result. It requires Spark 2.2+, which introduced the `contains` function.
from pyspark.sql.functions import col, when

# Create the sample DataFrame
df = spark.createDataFrame([("west street WC", None),
                            ("WC 87650", None),
                            ("BOULVEVARD WC", None),
                            (None, None),
                            (None, "landinf dr WC"),
                            (None, "FOX VALLEY WC 76543")],
                           ["Terminal_Region", "Terminal_footprint"])
df.show()  # print the initial df

# If both columns are null -> "NotMapped"; otherwise "EOR" when either
# column contains "WC", else "WOR".
df.withColumn("REGION",
              when(col("Terminal_Region").isNull() & col("Terminal_footprint").isNull(), "NotMapped")
              .otherwise(when(col("Terminal_Region").contains("WC") |
                              col("Terminal_footprint").contains("WC"), "EOR")
                         .otherwise("WOR"))).show()
Output:
#initial dataframe
+---------------+-------------------+
|Terminal_Region| Terminal_footprint|
+---------------+-------------------+
| west street WC| null|
| WC 87650| null|
| BOULVEVARD WC| null|
| null| null|
| null| landinf dr WC|
| null|FOX VALLEY WC 76543|
+---------------+-------------------+
# df with the logic applied
+---------------+-------------------+---------+
|Terminal_Region| Terminal_footprint| REGION|
+---------------+-------------------+---------+
| west street WC| null| EOR|
| WC 87650| null| EOR|
| BOULVEVARD WC| null| EOR|
| null| null|NotMapped|
| null| landinf dr WC| EOR|
| null|FOX VALLEY WC 76543| EOR|
+---------------+-------------------+---------+