I have a union of two identical subqueries. However, judging from the query's explain output, Spark SQL seems to execute the same subquery twice. Is this expected?
In [20]: session.sql('(select count(city_code) as c from location group by country_code having c < 10) union (select count(city_code) as c from location group by country_code having c < 10)').explain(True)
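(As background for reading the plans below: since `UNION` without `ALL` deduplicates rows, a union of a query with itself is semantically just the distinct rows of that one query. A minimal plain-Python sketch, using hypothetical result rows of the subquery, assuming the subquery selects only the count column `c`:)

```python
# Hypothetical rows returned by the subquery: one count per country,
# but the same count value can occur for several countries.
q = [3, 7, 7, 9]

# (Q) UNION (Q): concatenate the two inputs, then deduplicate.
union_qq = sorted(set(q + q))

# SELECT DISTINCT c FROM Q: deduplicate a single copy of Q.
distinct_q = sorted(set(q))

# Both produce the same result set.
assert union_qq == distinct_q
```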
== Parsed Logical Plan ==
'Distinct
+- 'Union
   :- 'Filter ('c < 10)
   :  +- 'Aggregate ['country_code], ['count('city_code) AS c#228]
   :     +- 'UnresolvedRelation `location`
   +- 'Filter ('c < 10)
      +- 'Aggregate ['country_code], ['count('city_code) AS c#229]
         +- 'UnresolvedRelation `location`
== Analyzed Logical Plan ==
c: bigint
Distinct
+- Union
   :- Filter (c#228L < cast(10 as bigint))
   :  +- Aggregate [country_code#231], [count(city_code#234) AS c#228L]
   :     +- SubqueryAlias location
   :        +- Relation[uid#230L,country_code#231,country_category#232,region_code#233,city_code#234,time#235L] parquet
   +- Filter (c#229L < cast(10 as bigint))
      +- Aggregate [country_code#237], [count(city_code#240) AS c#229L]
         +- SubqueryAlias location
            +- Relation[country_code#237,city_code#240] parquet
== Optimized Logical Plan ==
Aggregate [c#228L], [c#228L]
+- Union
   :- Filter (c#228L < 10)
   :  +- Aggregate [country_code#231], [count(city_code#234) AS c#228L]
   :     +- Project [country_code#231, city_code#234]
   :        +- Relation[country_code#231,city_code#234] parquet
   +- Filter (c#229L < 10)
      +- Aggregate [country_code#237], [count(city_code#240) AS c#229L]
         +- Project [country_code#237, city_code#240]
            +- Relation[country_code#237,city_code#240] parquet
== Physical Plan ==
*HashAggregate(keys=[c#228L], functions=[], output=[c#228L])
+- Exchange hashpartitioning(c#228L, 200)
   +- *HashAggregate(keys=[c#228L], functions=[], output=[c#228L])
      +- Union
         :- *Filter (c#228L < 10)
         :  +- *HashAggregate(keys=[country_code#231], functions=[count(city_code#234)], output=[c#228L])
         :     +- Exchange hashpartitioning(country_code#231, 200)
         :        +- *HashAggregate(keys=[country_code#231], functions=[partial_count(city_code#234)], output=[country_code#231, count#255L])
         :           +- *FileScan parquet default.location[country_code#231,city_code#234] Batched: true, Format: Parquet, Location: InMemoryFileIndex[.../location], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<country_code:string,city_code:string>
         +- *Filter (c#229L < 10)
            +- *HashAggregate(keys=[country_code#237], functions=[count(city_code#240)], output=[c#229L])
               +- ReusedExchange [country_code#237, count#257L], Exchange hashpartitioning(country_code#231, 200)