Stanford NER Tagger and NLTK - not working [OSError: Java command failed]
0 votes / 31 May 2018

I am trying to run the Stanford NER Tagger with NLTK from a Jupyter notebook. I keep getting

OSError: Java command failed

I have already tried the hack at https://gist.github.com/alvations/e1df0ba227e542955a8a and the Stanford Parser and NLTK thread.
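As I understand it, the hack there amounts to pointing NLTK at the Stanford jar and the model directory through environment variables instead of absolute constructor paths; roughly like this (the paths below are illustrative, not the exact ones from the gist):

import os
from nltk.tag.stanford import StanfordNERTagger

# Illustrative locations of the extracted Stanford NER distribution.
os.environ['CLASSPATH'] = '/home/root/stanford-ner-2015-04-20/stanford-ner.jar'
os.environ['STANFORD_MODELS'] = '/home/root/stanford-ner-2015-04-20/classifiers'

# With the environment variables set, the model can be referenced by name alone.
st = StanfordNERTagger('english.all.3class.distsim.crf.ser.gz', encoding='utf-8')
print(st.tag('Google is headquartered in Mountain View'.split()))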

I am using

NLTK==3.3
Ubuntu==16.04LTS 

Here is my Python code:

from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.tag.stanford import StanfordNERTagger

Sample_text = "Google, headquartered in Mountain View, unveiled the new Android phone"

sentences = sent_tokenize(Sample_text)
tokenized_sentences = [word_tokenize(sentence) for sentence in sentences]

PATH_TO_GZ = '/home/root/english.all.3class.caseless.distsim.crf.ser.gz'

PATH_TO_JAR = '/home/root/stanford-ner.jar'

sn_3class = StanfordNERTagger(PATH_TO_GZ,
                              path_to_jar=PATH_TO_JAR,
                              encoding='utf-8')

annotations = [sn_3class.tag(sent) for sent in tokenized_sentences]

I obtained these files with the following commands:

wget http://nlp.stanford.edu/software/stanford-ner-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-postagger-full-2015-04-20.zip
wget http://nlp.stanford.edu/software/stanford-parser-full-2015-04-20.zip
# Extract the zip file.
unzip stanford-ner-2015-04-20.zip 
unzip stanford-parser-full-2015-04-20.zip 
unzip stanford-postagger-full-2015-04-20.zip

I get the following error:

CRFClassifier invoked on Thu May 31 15:56:19 IST 2018 with arguments:
   -loadClassifier /home/root/english.all.3class.caseless.distsim.crf.ser.gz -textFile /tmp/tmpMDEpL3 -outputFormat slashTags -tokenizerFactory edu.stanford.nlp.process.WhitespaceTokenizer -tokenizerOptions "tokenizeNLs=false" -encoding utf-8
tokenizerFactory=edu.stanford.nlp.process.WhitespaceTokenizer
Unknown property: |tokenizerFactory|
tokenizerOptions="tokenizeNLs=false"
Unknown property: |tokenizerOptions|
loadClassifier=/home/root/english.all.3class.caseless.distsim.crf.ser.gz
encoding=utf-8
Unknown property: |encoding|
textFile=/tmp/tmpMDEpL3
outputFormat=slashTags
Loading classifier from /home/root/english.all.3class.caseless.distsim.crf.ser.gz ... Error deserializing /home/root/english.all.3class.caseless.distsim.crf.ser.gz
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
    at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1380)
    at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1331)
    at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:2315)
Caused by: java.lang.ClassCastException: java.util.ArrayList cannot be cast to [Ledu.stanford.nlp.util.Index;
    at edu.stanford.nlp.ie.crf.CRFClassifier.loadClassifier(CRFClassifier.java:2164)
    at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1249)
    at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifier(AbstractSequenceClassifier.java:1366)
    at edu.stanford.nlp.ie.AbstractSequenceClassifier.loadClassifierNoExceptions(AbstractSequenceClassifier.java:1377)
    ... 2 more

---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
<ipython-input-15-5621d0f8177d> in <module>()
----> 1 ne_annot_sent_3c = [sn_3class.tag(sent) for sent in tokenized_sentences]

/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag(self, tokens)
     79     def tag(self, tokens):
     80         # This function should return list of tuple rather than list of list
---> 81         return sum(self.tag_sents([tokens]), [])
     82 
     83     def tag_sents(self, sentences):

/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/tag/stanford.pyc in tag_sents(self, sentences)
    102         # Run the tagger and get the output
    103         stanpos_output, _stderr = java(cmd, classpath=self._stanford_jar,
--> 104                                        stdout=PIPE, stderr=PIPE)
    105         stanpos_output = stanpos_output.decode(encoding)
    106 

/home/root1/.virtualenv/demos/local/lib/python2.7/site-packages/nltk/__init__.pyc in java(cmd, classpath, stdin, stdout, stderr, blocking)
    134     if p.returncode != 0:
    135         print(_decode_stdoutdata(stderr))
--> 136         raise OSError('Java command failed : ' + str(cmd))
    137 
    138     return (stdout, stderr)

OSError: Java command failed : [u'/usr/bin/java', '-mx1000m', '-cp', '/home/root/stanford-ner.jar', 'edu.stanford.nlp.ie.crf.CRFClassifier', '-loadClassifier', '/home/root/english.all.3class.caseless.distsim.crf.ser.gz', '-textFile', '/tmp/tmpMDEpL3', '-outputFormat', 'slashTags', '-tokenizerFactory', 'edu.stanford.nlp.process.WhitespaceTokenizer', '-tokenizerOptions', '"tokenizeNLs=false"', '-encoding', 'utf-8']

1 Answer

0 votes / 31 May 2018

Download Stanford Named Entity Recognizer version 3.9.1: see the "Download" section of the Stanford NLP website.

Unzip it and move the two files "stanford-ner.jar" and "english.all.3class.distsim.crf.ser.gz" into your working folder.
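The ClassCastException you see while deserializing the .ser.gz usually means the serialized model and stanford-ner.jar come from different releases, so take both files from the same package. The same wget/unzip pattern from your question works for the newer release; the archive name below is my assumption for the 3.9.1 (2018-02-27) package, so verify it against the download page:

wget https://nlp.stanford.edu/software/stanford-ner-2018-02-27.zip
unzip stanford-ner-2018-02-27.zip
# Copy the jar and a model from the same release next to your notebook
# (the classifiers/ layout may differ slightly across releases).
cp stanford-ner-2018-02-27/stanford-ner.jar .
cp stanford-ner-2018-02-27/classifiers/english.all.3class.distsim.crf.ser.gz .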

Open a Jupyter notebook or IPython session in that folder and run the following Python code:

import nltk
from nltk.tag.stanford import StanfordNERTagger

sentence = u"Twenty miles east of Reno, Nev., " \
    "where packs of wild mustangs roam free through " \
    "the parched landscape, Tesla Gigafactory 1 " \
    "sprawls near Interstate 80."

jar = './stanford-ner.jar'

model = './english.all.3class.distsim.crf.ser.gz'

ner_tagger = StanfordNERTagger(model, jar, encoding='utf8')

words = nltk.word_tokenize(sentence)

# Run NER tagger on words
print(ner_tagger.tag(words))

I tested this with NLTK==3.3 on Ubuntu 16.04 LTS.
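If you want to confirm your environment matches before running it, a quick check (this assumes java is on PATH, which NLTK needs in order to launch the tagger):

import subprocess
import nltk

print(nltk.__version__)  # expecting 3.3
# `java -version` writes to stderr, so redirect it to capture the output.
print(subprocess.check_output(['java', '-version'], stderr=subprocess.STDOUT).decode())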
