I'm probably missing something obvious here. I build a list of keywords from generators, and I can't remove duplicates from it the usual way, using set:
import spacy
import textacy
nlp = spacy.load("en_core_web_lg")
text = ('''The Western compulsion to hedonism has made us lust for money just to show that we have it. Possessions do not make a man—man makes possessions. The Western compulsion to hedonism has made us lust for money just to show that we have it. Possessions do not make a man—man makes possessions.''')
doc = nlp(text)
keywords = (
    list(textacy.extract.ngrams(doc, 1, filter_stops=True, filter_punct=True, filter_nums=False))
    + list(textacy.extract.ngrams(doc, 2, filter_stops=True, filter_punct=True, filter_nums=False))
)
print(list(set(keywords)))
but the result still contains duplicates:
[man, lust, makes possessions, man, Possessions, makes possessions, man makes, hedonism, man, money, compulsion, Western compulsion, man, possessions, man makes, compulsion, Possessions, Western compulsion, possessions, Western, makes, makes, lust, hedonism, Western, money]
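A likely cause: `textacy.extract.ngrams` yields spaCy `Span` objects, and two spans with the same text but different positions in the document are distinct objects, so `set()` keeps both. Below is a minimal sketch of that behavior using a stand-in `Span` class (not spaCy's, to keep the example self-contained) that is hashed by position rather than text, followed by deduplication on the `.text` attribute:

```python
class Span:
    """Stand-in for spacy.tokens.Span: hashed by position, not by text."""
    def __init__(self, text, start, end):
        self.text, self.start, self.end = text, start, end

    def __hash__(self):
        return hash((self.start, self.end))

    def __eq__(self, other):
        return (self.start, self.end) == (other.start, other.end)

    def __repr__(self):
        return self.text

# Two "man" spans at different offsets, like the repeated sentence produces:
keywords = [Span("man", 0, 1), Span("man", 10, 11), Span("lust", 3, 4)]

# set() on the spans themselves does not deduplicate by text:
print(len(set(keywords)))  # 3 — each (start, end) pair is distinct

# Deduplicating on the text instead, preserving order via dict.fromkeys:
unique = list(dict.fromkeys(span.text for span in keywords))
print(unique)  # ['man', 'lust']
```

Applied to the original code, `list(dict.fromkeys(kw.text for kw in keywords))` (or `set(kw.text for kw in keywords)` if order doesn't matter) should collapse the repeated n-grams to unique strings.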