Use case: read a CSV file from S3 and create a DataFrame
Code used:
import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# boto3 client; note these credentials are not picked up by Spark's Hadoop S3 connector
s3 = boto3.client('s3', aws_access_key_id='xxxxxxxxxxxxx', aws_secret_access_key='xxx')

cust_Address_SOURCE_PATH = "s3://log-bucket-poc-varun/"

read_s3_address_cust_df = (
    spark.read.format("com.databricks.spark.csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load(cust_Address_SOURCE_PATH)
)
read_s3_address_cust_df.show()
Error: An error occurred while calling o763.load.
: java.io.IOException: No FileSystem for scheme: s3
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary$1.apply(DataSource.scala:547)
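For context, "No FileSystem for scheme: s3" means Hadoop has no filesystem implementation registered for the s3:// scheme; the boto3 client in the snippet does not help here, because Spark reads through Hadoop, not boto3. Below is a minimal sketch of the commonly suggested setup: switch to the s3a:// scheme, make the hadoop-aws connector available, and pass credentials through Spark's fs.s3a.* configuration. The hadoop-aws version shown is an assumption and must match the Hadoop build bundled with your Spark; the credentials and bucket path are the same placeholders as in the question.

from pyspark.sql import SparkSession

# Sketch only: pick the hadoop-aws version that matches your Hadoop distribution
spark = (
    SparkSession.builder
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.3.4")  # assumed version
    .config("spark.hadoop.fs.s3a.access.key", "xxxxxxxxxxxxx")  # placeholder from the question
    .config("spark.hadoop.fs.s3a.secret.key", "xxx")            # placeholder from the question
    .getOrCreate()
)

# s3a:// is the scheme registered by the hadoop-aws connector
cust_Address_SOURCE_PATH = "s3a://log-bucket-poc-varun/"

read_s3_address_cust_df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(cust_Address_SOURCE_PATH)
)
read_s3_address_cust_df.show()

The design point is that S3 credentials have to reach the Hadoop layer (via fs.s3a.access.key / fs.s3a.secret.key or an instance profile); creating a boto3 client only affects boto3 calls, not spark.read.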