In Apache Spark I have a data frame with a single column that contains a string (it is a date), but the leading zero is missing for the month and the day
0 votes
/ April 19, 2020
import org.apache.spark.sql.functions.regexp_replace
val df = spark.createDataFrame(Seq(
  (1, "9/11/2020"),
  (2, "10/11/2020"),
  (3, "1/1/2020"),  
  (4, "12/7/2020"))).toDF("Id", "x4")
val newDf = df
            .withColumn("x4New", regexp_replace(df("x4"), "(?:(\\d{2}))/(?:(\\d{1}))/(?:(\\d{4}))", "$1/0$2/$3"))
val newDf1 = newDf
            .withColumn("x4New1", regexp_replace(df("x4"), "(?:(\\d{1}))/(?:(\\d{1}))/(?:(\\d{4}))", "0$1/0$2/$3"))
            .withColumn("x4New2", regexp_replace(df("x4"), "(?:(\\d{1}))/(?:(\\d{2}))/(?:(\\d{4}))", "0$1/$2/$3"))

newDf1.show

Current output:

+---+----------+----------+-----------+-----------+
| Id|        x4|     x4New|     x4New1|     x4New2|
+---+----------+----------+-----------+-----------+
|  1| 9/11/2020| 9/11/2020|  9/11/2020| 09/11/2020|
|  2|10/11/2020|10/11/2020| 10/11/2020|100/11/2020|
|  3|  1/1/2020|  1/1/2020| 01/01/2020|   1/1/2020|
|  4| 12/7/2020|12/07/2020|102/07/2020|  12/7/2020|
+---+----------+----------+-----------+-----------+

Required output: add a leading zero before a single-digit day or month. I do not want to use a UDF for performance reasons.

+---+----------+----------+
| Id|        x4|      date|
+---+----------+----------+
|  1| 9/11/2020|09/11/2020|
|  2|10/11/2020|10/11/2020|
|  3|  1/1/2020|01/01/2020|
|  4| 12/7/2020|12/07/2020|
+---+----------+----------+

Answers [ 2 ]

2 votes
/ April 19, 2020

Use the built-in functions from_unixtime with unix_timestamp, (or) date_format with to_timestamp, (or) to_date.

Example (in Spark 2.4):

import org.apache.spark.sql.functions._

//sample data
val df = spark.createDataFrame(Seq((1, "9/11/2020"),(2, "10/11/2020"),(3, "1/1/2020"),  (4, "12/7/2020"))).toDF("Id", "x4")

//using from_unixtime
df.withColumn("date",from_unixtime(unix_timestamp(col("x4"),"MM/dd/yyyy"),"MM/dd/yyyy")).show()

//using date_format
df.withColumn("date",date_format(to_timestamp(col("x4"),"MM/dd/yyyy"),"MM/dd/yyyy")).show()
df.withColumn("date",date_format(to_date(col("x4"),"MM/dd/yyyy"),"MM/dd/yyyy")).show()
//+---+----------+----------+
//| Id|        x4|      date|
//+---+----------+----------+
//|  1| 9/11/2020|09/11/2020|
//|  2|10/11/2020|10/11/2020|
//|  3|  1/1/2020|01/01/2020|
//|  4| 12/7/2020|12/07/2020|
//+---+----------+----------+
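
A side note, not part of the original answer: if a proper DateType column is acceptable rather than a re-formatted string, to_date alone is enough. On Spark 3.x the stricter datetime parser can reject single-digit fields under "MM/dd/yyyy"; the pattern "M/d/yyyy" accepts both one- and two-digit months and days.

//sketch: parse straight to DateType (show renders dates as yyyy-MM-dd)
df.withColumn("date", to_date(col("x4"), "M/d/yyyy")).show()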
0 votes
/ April 20, 2020

Found a workaround; see if there is a better solution that uses a single data frame and no UDF.

import org.apache.spark.sql.functions.{regexp_replace, to_date}
val df = spark.createDataFrame(Seq(
  (1, "9/11/2020"),
  (2, "10/11/2020"),
  (3, "1/1/2020"),  
  (4, "12/7/2020"))).toDF("Id", "x4")

//two-digit month, single-digit day: pad the day (12/7 -> 12/07)
val newDf = df.withColumn("x4New", regexp_replace(df("x4"), "(?:(\\b\\d{2}))/(?:(\\d))/(?:(\\d{4})\\b)", "$1/0$2/$3"))
//single-digit month and single-digit day: pad the month (1/1 -> 01/1)
val newDf1 = newDf.withColumn("x4New1", regexp_replace(newDf("x4New"), "(?:(\\b\\d{1}))/(?:(\\d))/(?:(\\d{4})\\b)", "0$1/$2/$3"))
//single-digit month, two-digit day: pad the month (9/11 -> 09/11)
val newDf2 = newDf1.withColumn("x4New2", regexp_replace(newDf1("x4New1"), "(?:(\\b\\d{1}))/(?:(\\d{2}))/(?:(\\d{4})\\b)", "0$1/$2/$3"))
//pad any remaining single-digit day, then parse the string into a DateType
val newDf3 = newDf2.withColumn("date", to_date(regexp_replace(newDf2("x4New2"), "(?:(\\b\\d{2}))/(?:(\\d{1}))/(?:(\\d{4})\\b)", "$1/0$2/$3"),"MM/dd/yyyy"))
val formatedDataDf = newDf3
                    .drop("x4New")
                    .drop("x4New1")
                    .drop("x4New2")

formatedDataDf.printSchema
formatedDataDf.show

The output looks as follows:

root
 |-- Id: integer (nullable = false)
 |-- x4: string (nullable = true)
 |-- date: date (nullable = true)

+---+----------+----------+
| Id|        x4|      date|
+---+----------+----------+
|  1| 9/11/2020|2020-09-11|
|  2|10/11/2020|2020-10-11|
|  3|  1/1/2020|2020-01-01|
|  4| 12/7/2020|2020-12-07|
+---+----------+----------+
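
For comparison, a sketch (not from the original answers) that does the same padding in a single regexp_replace: a word boundary keeps the pattern from matching the second digit of an already two-digit field, so one pass pads both the month and the day.

import org.apache.spark.sql.functions.{regexp_replace, to_date}

//pad any single digit that is immediately followed by "/";
//"\\b" prevents a match inside a two-digit field such as "10" or "12"
val paddedDf = df.withColumn("date",
  to_date(regexp_replace(df("x4"), "\\b(\\d)/", "0$1/"), "MM/dd/yyyy"))
paddedDf.show()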