The first problem has been solved; please scroll down to EDIT 2.
I am trying to access a web service deployed via Azure Machine Learning Studio, using the batch execution sample code for Python at the bottom of the following page:
https://studio.azureml.net/apihelp/workspaces/306bc1f050ba4cdba0dbc6cc561c6ab0/webservices/e4e3d2d32ec347ae9a829b200f7d31cd/endpoints/61670382104542bc9533a920830b263c/jobs
I have already fixed one issue along the lines of this question (replaced BlobService with BlockBlobService, etc.):
https://studio.azureml.net/apihelp/workspaces/306bc1f050ba4cdba0dbc6cc561c6ab0/webservices/e4e3d2d32ec347ae9a829b200f7d31cd/endpoints/61670382104542bc9533a920830b263c/jobs
I have also entered the API key, container name, URL, account_key and account_name as instructed.
However, the code snippet now seems to be even more outdated than it was then, because I am currently getting a different error:
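For clarity, this is the kind of substitution I mean; a minimal sketch with placeholder credentials rather than my real values:

import azure.storage.blob as asb

# Old sample line that no longer works with the current SDK:
# blob_service = BlobService(account_name="<account>", account_key="<key>")
# Replacement I use now (placeholders instead of my actual account name/key):
blob_service = asb.BlockBlobService(account_name="<account>", account_key="<key>")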
File "C:/Users/Alex/Desktop/scripts/BatchExecution.py", line 80, in uploadFileToBlob
blob_service = asb.BlockBlobService(account_name=storage_account_name, account_key=storage_account_key)
File "C:\Users\Alex\Anaconda3\lib\site-packages\azure\storage\blob\blockblobservice.py", line 145, in __init__
File "C:\Users\Alex\Anaconda3\lib\site-packages\azure\storage\blob\baseblobservice.py", line 205, in __init__
TypeError: get_service_parameters() got an unexpected keyword argument 'token_credential'
I also noticed that when installing the Azure SDK for Python via pip, I get the following warnings at the end of the process (the installation itself completes successfully, though):
azure-storage-queue 1.3.0 has requirement azure-storage-common<1.4.0,>=1.3.0, but you'll have azure-storage-common 1.1.0 which is incompatible.
azure-storage-file 1.3.0 has requirement azure-storage-common<1.4.0,>=1.3.0, but you'll have azure-storage-common 1.1.0 which is incompatible.
azure-storage-blob 1.3.0 has requirement azure-storage-common<1.4.0,>=1.3.0, but you'll have azure-storage-common 1.1.0 which is incompatible.
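To see which versions actually ended up installed despite these warnings, I can run a quick check like the following sketch (assuming setuptools' pkg_resources is available):

import pkg_resources

# Print the installed version of each azure-storage package, or note that it is missing
for pkg in ("azure-storage-blob", "azure-storage-common", "azure-storage-queue", "azure-storage-file"):
    try:
        print(pkg, pkg_resources.get_distribution(pkg).version)
    except pkg_resources.DistributionNotFound:
        print(pkg, "is not installed")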
I cannot find anything about any of this in the latest Python SDK documentation (the word 'token_credential' does not even appear in it):
https://media.readthedocs.org/pdf/azure-storage/latest/azure-storage.pdf
Does anyone know what is going wrong during installation, or why the TypeError about token_credential is raised at runtime?
Or does anyone know how I can install the required versions of azure-storage-common and azure-storage-blob?
EDIT: Here is my code (not reproducible as posted, because I changed the keys before publishing):
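One thing I considered, sketched below rather than a confirmed fix, is forcing pip to install the range that azure-storage-blob 1.3.0 declares, driving pip from Python:

import subprocess
import sys

# Reinstall azure-storage-common in the range required by azure-storage-blob 1.3.0
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "azure-storage-common>=1.3.0,<1.4.0",
    "azure-storage-blob==1.3.0",
])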
# How this works:
#
# 1. Assume the input is present in a local file (if the web service accepts input)
# 2. Upload the file to an Azure blob - you'd need an Azure storage account
# 3. Call BES to process the data in the blob.
# 4. The results get written to another Azure blob.
# 5. Download the output blob to a local file
#
# Note: You may need to download/install the Azure SDK for Python.
# See: http://azure.microsoft.com/en-us/documentation/articles/python-how-to-install/
import urllib.request  # on Python 3+, urllib.request and urllib.error replace urllib2
import urllib.error
import json
import time
import azure.storage.blob as asb  # the current SDK provides BlockBlobService instead of the old BlobService
def printHttpError(httpError):
    print("The request failed with status code: " + str(httpError.code))
    # Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
    print(httpError.info())
    print(json.loads(httpError.read()))
    return
def saveBlobToFile(blobUrl, resultsLabel):
    output_file = "myresults.csv"  # Replace this with the location you would like to use for your output file
    print("Reading the result from " + blobUrl)
    try:
        response = urllib.request.urlopen(blobUrl)
    except urllib.error.HTTPError as error:
        printHttpError(error)
        return
    with open(output_file, "wb+") as f:  # write in binary mode, since response.read() returns bytes on Python 3
        f.write(response.read())
    print(resultsLabel + " have been written to the file " + output_file)
    return
def processResults(result):
    first = True
    results = result["Results"]
    for outputName in results:
        result_blob_location = results[outputName]
        sas_token = result_blob_location["SasBlobToken"]
        base_url = result_blob_location["BaseLocation"]
        relative_url = result_blob_location["RelativeLocation"]
        print("The results for " + outputName + " are available at the following Azure Storage location:")
        print("BaseLocation: " + base_url)
        print("RelativeLocation: " + relative_url)
        print("SasBlobToken: " + sas_token)
        if (first):
            first = False
            url3 = base_url + relative_url + sas_token
            saveBlobToFile(url3, "The results for " + outputName)
    return
def uploadFileToBlob(input_file, input_blob_name, storage_container_name, storage_account_name, storage_account_key):
    blob_service = asb.BlockBlobService(account_name=storage_account_name, account_key=storage_account_key)
    print("Uploading the input to blob storage...")
    data_to_upload = open(input_file, "r").read()
    # BlockBlobService no longer exposes put_blob; create_blob_from_text uploads a block blob from a string
    blob_service.create_blob_from_text(storage_container_name, input_blob_name, data_to_upload)
def invokeBatchExecutionService():
    storage_account_name = "storage1"  # Replace this with your Azure Storage Account name
    storage_account_key = "kOveEtQMoP5zbUGfFR47"  # Replace this with your Azure Storage Key
    storage_container_name = "input"  # Replace this with your Azure Storage Container name
    connection_string = "DefaultEndpointsProtocol=https;AccountName=" + storage_account_name + ";AccountKey=" + storage_account_key  # "DefaultEndpointsProtocol=https;AccountName=mayatostorage1;AccountKey=aOYA2P5VQPR3ZQCl+aWhcGhDRJhsR225teGGBKtfXWwb2fNEo0CrhlwGWdfbYiBTTXPHYoKZyMaKuEAU8A/Fzw==;EndpointSuffix=core.windows.net"
    api_key = "5wUaln7n99rt9k+enRLG2OrhSsr9VLeoCfh0q3mfYo27hfTCh32f10PsRjJtuA=="  # Replace this with the API key for the web service
    url = "https://ussouthcentral.services.azureml.net/workspaces/306bc1f050/services/61670382104542bc9533a920830b263c/jobs"  # "https://ussouthcentral.services.azureml.net/workspaces/306bc1f050ba4cdba0dbc6cc561c6ab0/services/61670382104542bc9533a920830b263c/jobs/job_id/start?api-version=2.0"
    uploadFileToBlob(r"C:\Users\Alex\Desktop\16_da.csv",  # Replace this with the location of your input file
                     "input1datablob.csv",  # Replace this with the name you would like to use for your Azure blob; this needs to have the same extension as the input file
                     storage_container_name, storage_account_name, storage_account_key)
    payload = {
        "Inputs": {
            "input1": { "ConnectionString": connection_string, "RelativeLocation": "/" + storage_container_name + "/input1datablob.csv" },
        },
        "Outputs": {
            "output1": { "ConnectionString": connection_string, "RelativeLocation": "/" + storage_container_name + "/output1results.csv" },
        },
        "GlobalParameters": {
        }
    }
    body = str.encode(json.dumps(payload))
    headers = { "Content-Type": "application/json", "Authorization": ("Bearer " + api_key) }
    print("Submitting the job...")
    # submit the job
    req = urllib.request.Request(url + "?api-version=2.0", body, headers)
    try:
        response = urllib.request.urlopen(req)
    except urllib.error.HTTPError as error:
        printHttpError(error)
        return
    result = response.read().decode("utf-8")  # decode the bytes response so the job id can be used in strings
    job_id = result[1:-1]  # remove the enclosing double-quotes
    print("Job ID: " + job_id)
    # start the job
    print("Starting the job...")
    req = urllib.request.Request(url + "/" + job_id + "/start?api-version=2.0", b"", headers)  # the request body must be bytes on Python 3
    try:
        response = urllib.request.urlopen(req)
    except urllib.error.HTTPError as error:
        printHttpError(error)
        return
    url2 = url + "/" + job_id + "?api-version=2.0"
    while True:
        print("Checking the job status...")
        req = urllib.request.Request(url2, headers={ "Authorization": ("Bearer " + api_key) })
        try:
            response = urllib.request.urlopen(req)
        except urllib.error.HTTPError as error:
            printHttpError(error)
            return
        result = json.loads(response.read())
        status = result["StatusCode"]
        if (status == 0 or status == "NotStarted"):
            print("Job " + job_id + " not yet started...")
        elif (status == 1 or status == "Running"):
            print("Job " + job_id + " running...")
        elif (status == 2 or status == "Failed"):
            print("Job " + job_id + " failed!")
            print("Error details: " + result["Details"])
            break
        elif (status == 3 or status == "Cancelled"):
            print("Job " + job_id + " cancelled!")
            break
        elif (status == 4 or status == "Finished"):
            print("Job " + job_id + " finished!")
            processResults(result)
            break
        time.sleep(1)  # wait one second
    return

invokeBatchExecutionService()
EDIT 2: The problem above has been solved thanks to John, and the CSV is now uploaded to blob storage.
However, there is now an HTTPError when the job is submitted (line 130 of my BatchExecution.py):
raise HTTPError(req.full_url, code, msg, hdrs, fp)
HTTPError: Bad Request
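In case it helps with diagnosing the Bad Request, here is a sketch of how I could dump the full error body (reusing the url, body and headers variables from the script above); the service usually returns a JSON error description there:

import urllib.request
import urllib.error

req = urllib.request.Request(url + "?api-version=2.0", body, headers)
try:
    response = urllib.request.urlopen(req)
except urllib.error.HTTPError as error:
    # The body of the 400 response normally contains a JSON message explaining what was rejected
    print("Status code:", error.code)
    print("Headers:", error.info())
    print("Body:", error.read().decode("utf-8", errors="replace"))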