My docker info:
Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 4
Server Version: 1.13.1
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc docker-runc
Default Runtime: docker-runc
Init Binary: docker-init
containerd version: (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: N/A (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 seccomp
  WARNING: You're not using the default seccomp profile
  Profile: /etc/docker/seccomp.json
Kernel Version: 3.10.0-693.2.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 8
Total Memory: 15.51 GiB
Name: app1
ID: S4S6:YFCJ:76EF:ZYLX:ADOV:VDB3:TTDD:PVGR:Q6GQ:EFWR:CWVK:KEUN
Docker Root Dir: /ssd/docker
Debug Mode (client): false
Debug Mode (server): false
Username: hukangze
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Registry Mirrors:
 https://noiy1xmc.mirror.aliyuncs.com
Live Restore Enabled: false
Registries: docker.io (secure)
My docker images:
REPOSITORY                TAG       IMAGE ID        CREATED        SIZE
docker.io/elasticsearch   7.8.0     121454ddad72    3 weeks ago    810 MB
My run command:
docker run -d --name=elasticsearch -e "discovery.type=single-node" \
-v /ssd/elk/docker/elasticsearch/logs:/usr/share/elasticsearch/logs \
-v /ssd/elk/docker/elasticsearch/data:/usr/share/elasticsearch/data \
--ulimit nofile=65535:65535 --ulimit nproc=4096:4096 \
-e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
-e "discovery.type=single-node" \
-e "http.port=9200" \
-e "node.name=node-1" \
-e "bootstrap.memory_lock=true" --ulimit memlock=-1:-1 \
elasticsearch:7.8.0
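(For reference, whether bootstrap.memory_lock and the ulimits actually took effect can be checked from inside the running container; a quick sketch, using the container name from the command above:)

docker exec elasticsearch sh -c 'ulimit -l; ulimit -n; ulimit -u'

ulimit -l should report "unlimited" (from --ulimit memlock=-1:-1), and the other two should match the nofile/nproc values passed to docker run.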
After the container started, within about half a minute it printed a large batch of logs like the ones below, and then printed them again roughly every half minute. It looks as if some threads were waiting for other threads to release a lock, which left a large number of threads blocked. In the end the JVM died during GC.
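(The same kind of dump can also be requested on demand by sending SIGQUIT to the JVM; a sketch, assuming the image's init process (tini, PID 1) forwards the signal to the java child, which it is designed to do:)

docker exec elasticsearch kill -QUIT 1
docker logs --tail 300 elasticsearch

Here is the dump from the container logs: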
Full thread dump OpenJDK 64-Bit Server VM (14.0.1+7 mixed mode, sharing):
Threads class SMR info:
_java_thread_list=0x00007fbfd8004eb0, length=49, elements={
0x00007fc068232000, 0x00007fc068234000, 0x00007fc068239800, 0x00007fc06823b000,
0x00007fc06823d800, 0x00007fc06823f800, 0x00007fc068241800, 0x00007fc068290000,
0x00007fc068294800, 0x00007fc0689c4000, 0x00007fc069a36000, 0x00007fc069a3b800,
0x00007fc06a49d000, 0x00007fc06a519000, 0x00007fc06a523000, 0x00007fc06a578800,
0x00007fc06a5da800, 0x00007fc06afca000, 0x00007fc06b094800, 0x00007fbfd4002000,
0x00007fbf98007800, 0x00007fbf9c001800, 0x00007fbf90006800, 0x00007fbf98012000,
0x00007fbfa40c3800, 0x00007fbfa40c5000, 0x00007fbfa40c7000, 0x00007fbfa40c9000,
0x00007fbfa40cb000, 0x00007fbf84001800, 0x00007fbf7c002000, 0x00007fc06b0de000,
0x00007fc06b0e0800, 0x00007fc06802a800, 0x00007fbf88044800, 0x00007fbf88057000,
0x00007fbf88058800, 0x00007fbf8805a800, 0x00007fbf8806d800, 0x00007fbf8806f800,
0x00007fbfa417a800, 0x00007fbfa41ec800, 0x00007fbfa42a4000, 0x00007fbf9c083800,
0x00007fbf9c085000, 0x00007fbf4c005000, 0x00007fbf4c007800, 0x00007fbfd4003800,
0x00007fbf8c00e000
}
"Reference Handler" #2 daemon prio=10 os_prio=0 cpu=4.39ms elapsed=64.60s tid=0x00007fc068232000 nid=0xbd waiting on condition [0x00007fc037efc000]
java.lang.Thread.State: RUNNABLE
at java.lang.ref.Reference.waitForReferencePendingList(java.base@14.0.1/Native Method)
at java.lang.ref.Reference.processPendingReferences(java.base@14.0.1/Reference.java:241)
at java.lang.ref.Reference$ReferenceHandler.run(java.base@14.0.1/Reference.java:213)
"Finalizer" #3 daemon prio=8 os_prio=0 cpu=7.15ms elapsed=64.60s tid=0x00007fc068234000 nid=0xbe in Object.wait() [0x00007fc037dfb000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(java.base@14.0.1/Native Method)
- waiting on <0x00000000e0100af8> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@14.0.1/ReferenceQueue.java:155)
- locked <0x00000000e0100af8> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(java.base@14.0.1/ReferenceQueue.java:176)
at java.lang.ref.Finalizer$FinalizerThread.run(java.base@14.0.1/Finalizer.java:170)
"VM Thread" os_prio=0 cpu=75.75ms elapsed=64.61s tid=0x00007fc06822f000 nid=0xbc runnable
"GC Thread#0" os_prio=0 cpu=388.35ms elapsed=64.91s tid=0x00007fc068052000 nid=0xb7 runnable
"GC Thread#1" os_prio=0 cpu=76.73ms elapsed=64.14s tid=0x00007fc030001000 nid=0xc8 runnable
"GC Thread#2" os_prio=0 cpu=73.41ms elapsed=64.14s tid=0x00007fc030002800 nid=0xc9 runnable
"GC Thread#3" os_prio=0 cpu=97.05ms elapsed=64.14s tid=0x00007fc030003800 nid=0xca runnable
"GC Thread#4" os_prio=0 cpu=85.62ms elapsed=64.14s tid=0x00007fc030005000 nid=0xcb runnable
"GC Thread#5" os_prio=0 cpu=84.97ms elapsed=64.14s tid=0x00007fc030006800 nid=0xcc runnable
"GC Thread#6" os_prio=0 cpu=80.30ms elapsed=64.14s tid=0x00007fc030008000 nid=0xcd runnable
"GC Thread#7" os_prio=0 cpu=77.89ms elapsed=64.14s tid=0x00007fc030009800 nid=0xce runnable
"G1 Main Marker" os_prio=0 cpu=3.11ms elapsed=64.91s tid=0x00007fc068058800 nid=0xb8 runnable
"G1 Conc#0" os_prio=0 cpu=81.85ms elapsed=64.91s tid=0x00007fc06805a000 nid=0xb9 runnable
"G1 Conc#1" os_prio=0 cpu=81.98ms elapsed=61.00s tid=0x00007fc040001000 nid=0xd7 runnable
"G1 Refine#0" os_prio=0 cpu=13.29ms elapsed=64.62s tid=0x00007fc068203800 nid=0xba runnable
"G1 Refine#1" os_prio=0 cpu=3.55ms elapsed=58.23s tid=0x00007fc038001000 nid=0xda runnable
"G1 Refine#2" os_prio=0 cpu=1.52ms elapsed=58.22s tid=0x00007fbfc8001000 nid=0xdb runnable
"G1 Refine#3" os_prio=0 cpu=1.54ms elapsed=58.22s tid=0x00007fbfcc001000 nid=0xdc runnable
"G1 Refine#4" os_prio=0 cpu=0.05ms elapsed=58.22s tid=0x00007fbfc0001000 nid=0xdd runnable
"G1 Young RemSet Sampling" os_prio=0 cpu=21.33ms elapsed=64.62s tid=0x00007fc068205800 nid=0xbb runnable
"VM Periodic Task Thread" os_prio=0 cpu=32.23ms elapsed=64.58s tid=0x00007fc068292000 nid=0xc5 waiting on condition
JNI global refs: 34, weak refs: 44
Heap
garbage-first heap total 524288K, used 89029K [0x00000000e0000000, 0x0000000100000000)
region size 1024K, 36 young (36864K), 25 survivors (25600K)
Metaspace used 86344K, capacity 93346K, committed 93516K, reserved 1128448K
class space used 11792K, capacity 14683K, committed 14720K, reserved 1048576K
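Note that these heap numbers are consistent with ES_JAVA_OPTS: the total of 524288K is exactly the 512 MiB from -Xms512m/-Xmx512m, and only about 87 MiB (89029K) is in use, so the heap itself does not look exhausted at the moment of the dump. GC activity over time could be sampled with jstat from the bundled JDK (a sketch, assuming the JDK under /usr/share/elasticsearch/jdk ships jps and jstat):

docker exec elasticsearch bash -c '
  jdk=/usr/share/elasticsearch/jdk/bin
  pid=$("$jdk"/jps | awk "/Elasticsearch/ {print \$1}")   # PID of the Elasticsearch JVM
  "$jdk"/jstat -gcutil "$pid" 1000 10                     # 10 samples, one per second
'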
But when I run these same steps with docker on other machines, this problem does not occur. Why?
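(So far the only candidates I can think of to compare between the hosts are the kernel, docker, and cgroup/sysctl settings; a minimal sketch of what to capture on each machine:)

uname -r
docker version --format 'server={{.Server.Version}}'
docker info 2>/dev/null | grep -Ei 'storage driver|cgroup|kernel|runtime'
sysctl vm.max_map_count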