Combine user information into a single row in a DataFrame using Scala - PullRequest
0 votes
/ April 25, 2020
dfFilter.show()

+------------+-----------+-----------+------------+--------+
|CONTR       |COD        | DATE      |TYPCOD      | Amount |
+------------+-----------+-----------+------------+--------+
|0004        |4433       |2006-11-04 |RMA         | 150.0  |
|0004        |4433       |2012-05-14 |FCB         | 300.0  |
|0004        |1122       |2011-10-17 |RMA         | 100.0  |
|0004        |1122       |2015-12-05 |FCB         | 500.0  |
+------------+-----------+-----------+------------+--------+

import org.apache.spark.sql.functions.{col, when}

// Conditional columns: each one is set only when TYPCOD matches, and is null otherwise
val addColumn = dfFilter
  .withColumn("RMA_AMOUNT", when(col("TYPCOD") === "RMA", col("Amount")))
  .withColumn("DATE_RMA",   when(col("TYPCOD") === "RMA", col("DATE")))
  .withColumn("FCB_AMOUNT", when(col("TYPCOD") === "FCB", col("Amount")))
  .withColumn("DATE_FCB",   when(col("TYPCOD") === "FCB", col("DATE")))
addColumn.show()

+--------+-----------+-----------+------------+--------+------------+-----------+-----------+-----------+
|CONTR   |COD        | DATE      |TYPCOD      | Amount | RMA_AMOUNT |DATE_RMA   |FCB_AMOUNT |DATE_FCB   |
+--------+-----------+-----------+------------+--------+------------+-----------+-----------+-----------+
|0004    |4433       |2006-11-04 |RMA         | 150.0  |150.0       |2006-11-04 |null       |null       |
|0004    |4433       |2012-05-14 |FCB         | 300.0  |null        |null       |300.0      |2012-05-14 |
|0004    |1122       |2011-10-17 |RMA         | 100.0  |100.0       |2011-10-17 |null       |null       |
|0004    |1122       |2015-12-05 |FCB         | 500.0  |null        |null       |500.0      |2015-12-05 |
+--------+-----------+-----------+------------+--------+------------+-----------+-----------+-----------+
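The conditional columns above behave like `Option`s: each `when(...)` produces the value only when `TYPCOD` matches, and null otherwise. A minimal sketch of the same idea in plain Scala, without Spark (the `Row` case class here is illustrative, not part of the question's code):

```scala
// Hypothetical plain-Scala model of one input row
case class Row(contr: String, cod: String, date: String, typcod: String, amount: Double)

val rows = Seq(
  Row("0004", "4433", "2006-11-04", "RMA", 150.0),
  Row("0004", "4433", "2012-05-14", "FCB", 300.0)
)

// when(col("TYPCOD") === "RMA", col("Amount")):
// Some(amount) when the type matches, None (i.e. null) otherwise
val rmaAmounts = rows.map(r => if (r.typcod == "RMA") Some(r.amount) else None)
// rmaAmounts == Seq(Some(150.0), None)
```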

The CONTR and COD values repeat, but the same client has different dates and amounts. I want to group those rows so that the DataFrame keeps only two rows. I added the columns derived from the TYPCOD and DATE fields so that later I can collapse the data down to two rows without losing any information.

Is this possible? Expected output:

+------------+-------------+------------+-----------+-----------+-----------+
|CONTR       |COD          | RMA_AMOUNT |DATE_RMA   |FCB_AMOUNT |DATE_FCB   |
+------------+-------------+------------+-----------+-----------+-----------+
|0004        |4433         |150.0       |2006-11-04 |300.0      |2012-05-14 |
|0004        |1122         |100.0       |2011-10-17 |500.0      |2015-12-05 |
+------------+-------------+------------+-----------+-----------+-----------+

Answers [2]

0 votes
/ April 25, 2020

Use groupBy, then the first(col, ignoreNulls = true) function for this case.

val df = Seq(
  ("0004","4433","2006-11-04","RMA","150.0","150.0","2006-11-04",null.asInstanceOf[String],null.asInstanceOf[String]),
  ("0004","4433","2012-05-14","FCB","300.0",null.asInstanceOf[String],null.asInstanceOf[String],"300.0","2012-05-14"),
  ("0004","1122","2011-10-17","RMA","100.0","100.0","2011-10-17",null.asInstanceOf[String],null.asInstanceOf[String]),
  ("0004","1122","2015-12-05","FCB","500.0",null.asInstanceOf[String],null.asInstanceOf[String],"500.0","2015-12-05")
).toDF("CONTR","COD","DATE","TYPCOD","Amount","RMA_AMOUNT","DATE_RMA","FCB_AMOUNT","DATE_FCB")

//+-----+----+----------+------+------+----------+----------+----------+----------+
//|CONTR| COD|      DATE|TYPCOD|Amount|RMA_AMOUNT|  DATE_RMA|FCB_AMOUNT|  DATE_FCB|
//+-----+----+----------+------+------+----------+----------+----------+----------+
//| 0004|4433|2006-11-04|   RMA| 150.0|     150.0|2006-11-04|      null|      null|
//| 0004|4433|2012-05-14|   FCB| 300.0|      null|      null|     300.0|2012-05-14|
//| 0004|1122|2011-10-17|   RMA| 100.0|     100.0|2011-10-17|      null|      null|
//| 0004|1122|2015-12-05|   FCB| 500.0|      null|      null|     500.0|2015-12-05|
//+-----+----+----------+------+------+----------+----------+----------+----------+

df.groupBy("CONTR","COD")
  .agg(
    first(col("RMA_AMOUNT"), true).alias("RMA_AMOUNT"),
    first(col("DATE_RMA"), true).alias("DATE_RMA"),
    first(col("FCB_AMOUNT"), true).alias("FCB_AMOUNT"),
    first(col("DATE_FCB"), true).alias("DATE_FCB"))
  .show()
//+-----+----+----------+----------+----------+----------+
//|CONTR| COD|RMA_AMOUNT|  DATE_RMA|FCB_AMOUNT|  DATE_FCB|
//+-----+----+----------+----------+----------+----------+
//| 0004|4433|     150.0|2006-11-04|     300.0|2012-05-14|
//| 0004|1122|     100.0|2011-10-17|     500.0|2015-12-05|
//+-----+----+----------+----------+----------+----------+

// in case you also want to keep the TYPCOD and DATE values
df.groupBy("CONTR","COD")
  .agg(
    concat_ws(",", collect_list(col("TYPCOD"))).alias("TYPECOD"),
    concat_ws(",", collect_list(col("DATE"))).alias("DATE"),
    first(col("RMA_AMOUNT"), true).alias("RMA_AMOUNT"),
    first(col("DATE_RMA"), true).alias("DATE_RMA"),
    first(col("FCB_AMOUNT"), true).alias("FCB_AMOUNT"),
    first(col("DATE_FCB"), true).alias("DATE_FCB"))
  .show(false)
//+-----+----+-------+---------------------+----------+----------+----------+----------+
//|CONTR|COD |TYPECOD|DATE                 |RMA_AMOUNT|DATE_RMA  |FCB_AMOUNT|DATE_FCB  |
//+-----+----+-------+---------------------+----------+----------+----------+----------+
//|0004 |4433|RMA,FCB|2006-11-04,2012-05-14|150.0     |2006-11-04|300.0     |2012-05-14|
//|0004 |1122|RMA,FCB|2011-10-17,2015-12-05|100.0     |2011-10-17|500.0     |2015-12-05|
//+-----+----+-------+---------------------+----------+----------+----------+----------+
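The effect of first(col, ignoreNulls = true) inside each group can be sketched with plain Scala collections: per key, it picks the first non-null value of the column. A small model, without Spark (the tuples and names below are illustrative):

```scala
// Each tuple: (CONTR, COD, RMA_AMOUNT as an Option) - a simplified model of the grouped rows
val rows = Seq(
  ("0004", "4433", Option.empty[Double]), // null RMA_AMOUNT (an FCB row)
  ("0004", "4433", Some(150.0)),          // non-null RMA_AMOUNT (the RMA row)
  ("0004", "1122", Some(100.0)),
  ("0004", "1122", Option.empty[Double])
)

// groupBy("CONTR","COD") + first(col, ignoreNulls = true):
// for each key, keep the first defined value in encounter order
val firstNonNull: Map[(String, String), Option[Double]] =
  rows.groupBy(r => (r._1, r._2))
      .map { case (key, grp) => key -> grp.flatMap(_._3).headOption }
// firstNonNull(("0004","4433")) == Some(150.0)
```

Note that in Spark, `first` is only deterministic if the order of rows within each group is stable; here the sequence order plays that role.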
0 votes
/ April 25, 2020

Yes, this is possible. Please check the code below.

scala> val df = Seq(("0004",4433,"2006-11-04","RMA",150.0),("0004",4433,"2012-05-14","FCB",300.0),("0004",1122,"2011-10-17","RMA",100.0),("0004",1122,"2015-12-05","FCB",500.0)).toDF("contr","cod","date","typcod","amount")
df: org.apache.spark.sql.DataFrame = [contr: string, cod: int ... 3 more fields]

scala> val rma = df.filter($"typcod" === "RMA").select($"contr",$"cod",$"date".as("rma_date"),$"typcod",$"amount".as("rma_amount"))
rma: org.apache.spark.sql.DataFrame = [contr: string, cod: int ... 3 more fields]

scala> rma.show(false)
+-----+----+----------+------+----------+
|contr|cod |rma_date  |typcod|rma_amount|
+-----+----+----------+------+----------+
|0004 |4433|2006-11-04|RMA   |150.0     |
|0004 |1122|2011-10-17|RMA   |100.0     |
+-----+----+----------+------+----------+


scala> val fcb = df.filter($"typcod" === "FCB").select($"contr",$"cod",$"date".as("fcb_date"),$"typcod",$"amount".as("fcb_amount")).drop("contr")
fcb: org.apache.spark.sql.DataFrame = [cod: int, fcb_date: string ... 2 more fields]

scala> fcb.show(false)
+----+----------+------+----------+
|cod |fcb_date  |typcod|fcb_amount|
+----+----------+------+----------+
|4433|2012-05-14|FCB   |300.0     |
|1122|2015-12-05|FCB   |500.0     |
+----+----------+------+----------+


scala> rma.join(fcb,Seq("cod"),"inner").select("contr","cod","rma_amount","rma_date","fcb_amount","fcb_date").show(false)
+-----+----+----------+----------+----------+----------+
|contr|cod |rma_amount|rma_date  |fcb_amount|fcb_date  |
+-----+----+----------+----------+----------+----------+
|0004 |4433|150.0     |2006-11-04|300.0     |2012-05-14|
|0004 |1122|100.0     |2011-10-17|500.0     |2015-12-05|
+-----+----+----------+----------+----------+----------+


scala>
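This split-and-join strategy can also be modeled with plain Scala collections: filter each TYPCOD into its own map keyed by cod, then inner-join on the shared key. A sketch without Spark (the `Rec` case class and variable names are illustrative; it assumes one row per type per cod, as in the question's data):

```scala
// Hypothetical plain-Scala model of the input rows
case class Rec(contr: String, cod: Int, date: String, typcod: String, amount: Double)

val df = Seq(
  Rec("0004", 4433, "2006-11-04", "RMA", 150.0),
  Rec("0004", 4433, "2012-05-14", "FCB", 300.0),
  Rec("0004", 1122, "2011-10-17", "RMA", 100.0),
  Rec("0004", 1122, "2015-12-05", "FCB", 500.0)
)

// filter($"typcod" === "RMA") / "FCB", keyed by cod
val rma = df.filter(_.typcod == "RMA").map(r => r.cod -> r).toMap
val fcb = df.filter(_.typcod == "FCB").map(r => r.cod -> r).toMap

// inner join on cod: (contr, cod, rma_amount, rma_date, fcb_amount, fcb_date)
val joined = rma.keySet.intersect(fcb.keySet).toSeq.sorted.map { cod =>
  val r = rma(cod)
  val f = fcb(cod)
  (r.contr, cod, r.amount, r.date, f.amount, f.date)
}
// joined.head == ("0004", 1122, 100.0, "2011-10-17", 500.0, "2015-12-05")
```

If a cod could carry several RMA or FCB rows, the join would multiply them; the groupBy/first approach from the first answer avoids that by collapsing each group first.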
