I have about 5,000 Classic Google Sites pages that I need a Google Apps Script in Google Sheets to go through one by one, extracting data and entering it into a Google Sheet row by row.
I wrote an Apps Script that works from a sheet called "Pages", which contains the exact URL of each page, row by row, for the script to run through during extraction.
Fetching each page would return its HTML contents, and I would then use regex to extract the data I want, which is the values to the right of each of the following labels...
- Job name
- Domain owner
- Urgency/Impact
- ISOC instructions
The script would then write that data under the proper columns in the Google Sheet.
This worked except for one big problem: the HTML is not consistent. IDs and tags were also not used, so reliably doing this through SitesApp.getPageByUrl is really not possible.
Here is the code I came up with for that attempt.
function startCollection() {
  var masterList = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Pages");
  var startRow = 1;
  var lastRow = masterList.getLastRow();
  for (var i = startRow; i <= lastRow; i++) {
    var target = masterList.getRange("A" + i).getValue();
    sniff(target);
  }
}

function sniff(target) {
  var pageURL = target;
  var pageContent = SitesApp.getPageByUrl(pageURL).getHtmlContent();
  Logger.log("Scraping: " + target);

  // Extract the job name; exec() returns null when there is no match,
  // so test the result before indexing into it
  var JobNameRegExp = /(Job name:<\/b><\/td>)(.*?)(<\/td>)/m;
  var JobNameValue = JobNameRegExp.exec(pageContent);
  var JobMatch = JobNameValue ? JobNameValue[2] : "NOT FOUND: " + pageURL;

  // Extract domain owner
  var DomainRegExp = /(Domain owner:<\/b><\/td>)(.*?)(<\/span>)/m;
  var DomainValue = DomainRegExp.exec(pageContent);
  var DomainMatch = DomainValue ? DomainValue[2] : "N/A";

  // Extract urgency & impact
  var UrgRegExp = /(Urgency\/Impact:<\/b><\/td>)(.*?)(<\/td>)/m;
  var UrgValue = UrgRegExp.exec(pageContent);
  var UrgMatch = UrgValue ? UrgValue[2] : "N/A";

  // Extract ISOC instructions
  var ISOCRegExp = /(ISOC instructions:<\/b><\/td>)(.*?)(<\/td>)/m;
  var ISOCValue = ISOCRegExp.exec(pageContent);
  var ISOCMatch = ISOCValue ? ISOCValue[2] : "N/A";

  // Add record to sheet; keys must match the header row of the "Jobs" sheet
  var row_data = {
    Job_Name: JobMatch,
    Domain_Owner: DomainMatch,
    Urgency_Impact: UrgMatch,
    ISOC_Instructions: ISOCMatch,
  };
  insertRowInTracker(row_data);
}

function insertRowInTracker(rowData) {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Jobs");
  var rowValues = [];
  // Read the header row and order the values to match it
  var columnHeaders = sheet.getDataRange().offset(0, 0, 1).getValues()[0];
  Logger.log("Writing to the sheet: " + sheet.getName());
  Logger.log("Writing row data: " + JSON.stringify(rowData));
  columnHeaders.forEach(function (header) {
    rowValues.push(rowData[header]);
  });
  sheet.appendRow(rowValues);
}
So for my next idea, I have thought about using UrlFetchApp.fetch. The one problem is that the pages on that Classic Google Site sit behind a domain that is not shared with the public. SitesApp.getPageByUrl has the script ask for authorization and works, but UrlFetchApp.fetch does not, so when it calls the page directly it just gets the Google login page back.
I might be able to work around this and make the pages public, but I am still working on that.
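One workaround I have considered, though I have not confirmed that classic Sites pages accept it, is sending the script's own OAuth token along with the fetch. A minimal sketch of that idea (fetchPageAuthorized is just an illustrative helper name):

function fetchPageAuthorized(pageURL) {
  // Untested assumption: the page honors a bearer token from the
  // authorizing user instead of redirecting to the login page.
  var response = UrlFetchApp.fetch(pageURL, {
    headers: { Authorization: "Bearer " + ScriptApp.getOAuthToken() },
    muteHttpExceptions: true // inspect non-200 responses instead of throwing
  });
  if (response.getResponseCode() !== 200) {
    Logger.log("Fetch failed (" + response.getResponseCode() + "): " + pageURL);
    return null;
  }
  return response.getContentText();
}

If the token is not honored, the response will still be the login page, so checking the response code and content is essential before parsing.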
I am running out of ideas fast on this one and hoping there is another way I have not thought of or seen. What I would really like is to not mess with the HTML content at all: have Apps Script under the Google Sheet look only at the actual data presented on the page, match a label's text, and capture the value to the right of it.
For example, have it go down the list of URLs on the sheet called "Pages" and do the following for each page (see the sketch after these steps):
Find the following values:
- Find the text "Job name:", capture the text to the right of it.
- Find the text "Domain owner:", capture the text to the right of it.
- Find the text "Urgency/Impact:", capture the text to the right of it.
- Find the text "ISOC instructions:", capture the text to the right of it.
Write those values to a new row in the sheet called "Jobs", as seen below.
Then move on to the next URL in the sheet called "Pages" and repeat until all rows in "Pages" have been completed.
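Here is a rough sketch of what I mean, assuming the four labels always appear in the page text and that stripping the tags out of getHtmlContent() leaves each label adjacent to its value (extractLabeledValues is just an illustrative name):

function extractLabeledValues(pageURL) {
  var html = SitesApp.getPageByUrl(pageURL).getHtmlContent();
  // Reduce the HTML to plain text: drop tags, decode a couple of
  // common entities, collapse whitespace. Crude, but enough to search
  // the rendered text rather than the markup.
  var text = html
    .replace(/<[^>]+>/g, " ")
    .replace(/&nbsp;/g, " ")
    .replace(/&amp;/g, "&")
    .replace(/\s+/g, " ");
  var labels = ["Job name:", "Domain owner:", "Urgency/Impact:", "ISOC instructions:"];
  var values = {};
  labels.forEach(function (label) {
    var start = text.indexOf(label); // first occurrence only
    if (start === -1) {
      values[label] = "N/A";
      return;
    }
    start += label.length;
    // The value runs from the end of the label to the next label
    // (or the end of the text, whichever comes first).
    var end = text.length;
    labels.forEach(function (other) {
      var pos = text.indexOf(other, start);
      if (pos !== -1 && pos < end) end = pos;
    });
    values[label] = text.slice(start, end).trim();
  });
  return values;
}

From there, the four values could be fed into insertRowInTracker the same way row_data is built above.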
Example of the data I want to capture
I have created an exact copy of one of the pages for testing, and it is public:
https://sites.google.com/site/2020dump/test
Example of the test page
The table that contains all the data I am after renders like this:
Domain owner: IT.FinanceHRCore
Urgency/Impact: Medium (3 - urgency, 3 - impact)
ISOC instructions: None
Are there any examples of how I could accomplish this? I am not sure how, from an Apps Script point of view, to avoid looking at the HTML and look only at the actual data displayed on the page, for example searching for the text "Job name:" and then grabbing the text to the right of it.
The goal at the end of the day is to get the data from each page into one big Google Sheet so that we can kill off the Classic Google Site.