It is hard to reproduce your error, since I do not have all of your input data, but I tried to recreate a set of custom words to remove, and everything worked for me.
There are, however, better ways to do what you are trying to do, and I will list them here.
First, the conversion worked for me. But a better approach is: first create a tokens object with the word list removed, then create the dfm, and then convert it to the stm format.
library("quanteda", warn.conflicts = FALSE)
## Package version: 2.0.2
## Parallel computing: 2 of 8 threads used.
## See https://quanteda.io for tutorials and examples.
# set up data
data(crude, package = "tm")
Qcorpus <- corpus(crude)
# simulate words to remove, not supplied
# (readLines() on a textConnection stands in for reading your word-list file)
myRMlist <- readLines(textConnection(c("and", "or", "but", "of")))
# conversion works
stm_input_stm <- Qcorpus %>%
  tokens(remove_numbers = TRUE, remove_punct = TRUE, remove_symbols = TRUE) %>%
  tokens_remove(pattern = c(myRMlist, stopwords("en"))) %>%
  dfm() %>%
  convert(to = "stm")
However, there is no need to convert at all, since stm::stm() can take a dfm directly:
# stm can take a dfm directly
stm_input_dfm <- Qcorpus %>%
  tokens(remove_numbers = TRUE, remove_punct = TRUE, remove_symbols = TRUE) %>%
  tokens_remove(pattern = c(myRMlist, stopwords("en"))) %>%
  dfm()
library("stm")
## stm v1.3.5 successfully loaded. See ?stm for help.
## Papers, resources, and other materials at structuraltopicmodel.com
stm(stm_input_dfm, K = 5)
## Beginning Spectral Initialization
## Calculating the gram matrix...
## Finding anchor words...
## .....
## Recovering initialization...
## .........
## Initialization complete.
## ....................
## Completed E-Step (0 seconds).
## Completed M-Step.
## Completing Iteration 1 (approx. per word bound = -6.022)
## ....................
## Completed E-Step (0 seconds).
## Completed M-Step.
## Completing Iteration 2 (approx. per word bound = -5.480, relative change = 9.000e-02)
## ....................
## Completed E-Step (0 seconds).
## Completed M-Step.
## Completing Iteration 3 (approx. per word bound = -5.386, relative change = 1.708e-02)
## ....................
## Completed E-Step (0 seconds).
## Completed M-Step.
## Completing Iteration 4 (approx. per word bound = -5.370, relative change = 2.987e-03)
## ....................
## Completed E-Step (0 seconds).
## Completed M-Step.
## Completing Iteration 5 (approx. per word bound = -5.367, relative change = 6.841e-04)
## Topic 1: said, mln, oil, last, billion
## Topic 2: oil, dlrs, said, crude, price
## Topic 3: oil, said, power, ship, crude
## Topic 4: oil, opec, said, prices, market
## Topic 5: oil, said, one, futures, mln
## ....................
## Completed E-Step (0 seconds).
## Completed M-Step.
## Completing Iteration 6 (approx. per word bound = -5.366, relative change = 1.601e-04)
## ....................
## Completed E-Step (0 seconds).
## Completed M-Step.
## Completing Iteration 7 (approx. per word bound = -5.366, relative change = 5.444e-05)
## ....................
## Completed E-Step (0 seconds).
## Completed M-Step.
## Completing Iteration 8 (approx. per word bound = -5.365, relative change = 1.856e-05)
## ....................
## Completed E-Step (0 seconds).
## Completed M-Step.
## Model Converged
## A topic model with 5 topics, 20 documents and a 971 word dictionary.
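Once the model has converged, you can inspect it with stm's own helpers rather than reading the console log. A minimal sketch (the object name `fit` is illustrative; it assumes you saved the result of the `stm()` call above):

```r
library("stm")

# save the fitted model instead of printing it
fit <- stm(stm_input_dfm, K = 5, verbose = FALSE)

# top words per topic under several labeling criteria
# (highest probability, FREX, lift, score)
labelTopics(fit, n = 5)

# expected topic proportions across the corpus
plot(fit, type = "summary")
```

`labelTopics()` is usually more informative than the in-progress topic summaries printed during estimation, because it also reports FREX words, which balance frequency against exclusivity.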