You can find the constituents of the S&P 500 here:
https://en.wikipedia.org/wiki/List_of_S%26P_500_companies
library(quantmod)

# download each ticker into its own object inside a dedicated environment
e <- new.env()
symbols <- c("MMM", "ABT", "ABBV", "ABMD", "ACN",
             "ATVI", "ADBE", "AMD", "AAP", "AES", "AMG", "AFL", "A", "APD",
             "AKAM", "ALK", "ALB", "ARE", "ALXN", "ALGN", "ALLE", "AGN", "ADS",
             "LNT", "ALL", "GOOGL")
getSymbols(symbols, env = e)

# merge everything in the environment into one wide xts object
pframe <- do.call(merge, as.list(e))
head(pframe)
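If you only need one series per ticker, here is a minimal sketch (reusing the environment e from above) that keeps just the adjusted close of each downloaded symbol via quantmod's Ad() helper instead of all the OHLCV columns:

# keep only the adjusted close of each symbol before merging
adj <- do.call(merge, lapply(as.list(e), Ad))
head(adj)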
Also try this:
library(quantmod)
Nasdaq100_Symbols <- c('GE','PG','MSFT','AAPL','PFE','AMD','DELL')
# put all stocks in one list object
stocks <- lapply(Nasdaq100_Symbols, getSymbols, auto.assign = FALSE)
# following is not needed but if you want to use the list for other purposes
# it is a good practice to name all the different list objects.
# names(stocks) <- Nasdaq100_Symbols
# merge all stocks into 1 xts object
nasdaq100 <- Reduce(merge, stocks)
# fill NA's with 0
nasdaq100 <- na.fill(nasdaq100, 0)
outcomeSymbol <- "GE.Volume" # <-- used GE as that data is available in the downloaded data set
# merge the outcome (next day's GE volume) onto the data
nasdaq100 <- merge(nasdaq100, lm1 = lag(nasdaq100[, outcomeSymbol], -1))
# turn into data.frame
nasdaq100_df <- data.frame(date = index(nasdaq100), coredata(nasdaq100))
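As a quick follow-up, a sketch assuming the lagged column picked up the name lm1 from the named merge argument (check names(nasdaq100_df) first): the -1 lag leaves an NA in the last row, which you will probably want to drop before modelling.

# the -1 lag leaves an NA in the final row; drop incomplete rows before modelling
nasdaq100_df <- nasdaq100_df[complete.cases(nasdaq100_df), ]
tail(nasdaq100_df)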
Finally, try this to get the tickers.
library(rvest)
url <- "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"
SP500 <- url %>%
  read_html() %>%
  html_nodes(xpath = '//*[@id="mw-content-text"]/div/table[1]') %>%
  html_table()
SP500 <- SP500[[1]]
SP500
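To tie the two ideas together, here is a sketch that feeds the scraped tickers into getSymbols. It assumes the Wikipedia table has a Symbol column and that Yahoo Finance expects "-" where Wikipedia uses "." (e.g. BRK.B becomes BRK-B):

library(quantmod)
# assumes the scraped table has a "Symbol" column
tickers <- SP500$Symbol
# Yahoo Finance tickers use "-" where Wikipedia uses "." (e.g. BRK.B -> BRK-B)
tickers <- gsub(".", "-", tickers, fixed = TRUE)
e2 <- new.env()
getSymbols(head(tickers, 10), env = e2)  # start with a few; drop head() to fetch all ~500
prices <- do.call(merge, as.list(e2))
head(prices)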
Alternatively, see the links below for more ideas on how to do this.
https://www.r-bloggers.com/download-sp-500-stock-data-from-googlequandl-with-r-command-line-script/
https://www.business-science.io/investments/2016/10/23/SP500_Analysis.html
https://www.business-science.io/investments/2016/11/30/Russell2000_Analysis.html