I tried to stress-test my Cassandra setup. For some reason the cassandra-stress tool would not run on Windows, so as a workaround I made a POST request that drives a 100,000-iteration loop against C*; the backend is in Java.
My Cassandra table schema:
CREATE TABLE Stress2.demomodel (
    acctID TEXT,
    transactionDate DATE,
    card TEXT,
    amountCreditedFrom TEXT,
    transactionAmount DOUBLE,
    balance DOUBLE,
    currentTime TIMESTAMP,
    PRIMARY KEY ((acctID, transactionDate), currentTime)
) WITH CLUSTERING ORDER BY (currentTime DESC);
This is my looping code:
@Override
public DemoModel save(DemoModel demoModel) {
    int i = 0;
    try {
        // Save the same entity 100,000 times, pausing 2 ms between writes.
        for (i = 0; i < 100000; i++) {
            Thread.sleep(2L);
            demoRepository.save(demoModel);
        }
    } catch (Exception e) {
        // Report how far the loop got before failing.
        throw new RuntimeException("worked till " + i);
    }
    return null;
}
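demoRepository itself is not shown here; assume it is a stock Spring Data Cassandra repository along the lines of this sketch (the interface name and ID type are an assumption, not the real code):

import org.springframework.data.cassandra.core.mapping.MapId;
import org.springframework.data.cassandra.repository.CassandraRepository;

// Assumed shape of the repository behind demoRepository: a plain Spring Data
// Cassandra repository with no custom queries. MapId is used as the ID type
// because DemoModel declares its key with individual @PrimaryKeyColumn fields.
public interface DemoRepository extends CassandraRepository<DemoModel, MapId> {
}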
Before running the loop, these were the stats reported for the table:
Keyspace : stress2
Read Count: 0
Read Latency: NaN ms
Write Count: 2
Write Latency: 0.1565 ms
Pending Flushes: 0
Table: demomodel
SSTable count: 0
Space used (live): 0 bytes
Space used (total): 0 bytes
Space used by snapshots (total): 0 bytes
Off heap memory used (total): 342 bytes
SSTable Compression Ratio: -1.0
Number of partitions (estimate): 1
Memtable cell count: 2
Memtable data size: 266 bytes
Memtable off heap memory used: 342 bytes
Memtable switch count: 0
Local read count: 0
Local read latency: NaN ms
Local write count: 2
Local write latency: 0.152 ms
Pending flushes: 0
Percent repaired: NaN
Bytes repaired: 0.000KiB
Bytes unrepaired: 0.000KiB
Bytes pending repair: 0.000KiB
Bloom filter false positives: 0
Bloom filter false ratio: 0.00000
Bloom filter space used: 0 bytes
Bloom filter off heap memory used: 0 bytes
Index summary off heap memory used: 0 bytes
Compression metadata off heap memory used: 0 bytes
Compacted partition minimum bytes: 0
Compacted partition maximum bytes: 0
Compacted partition mean bytes: 0
Average live cells per slice (last five minutes): 2.0
Maximum live cells per slice (last five minutes): 2
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
Dropped Mutations: 0 bytes
Failed Replication Count: null
After the loop finished, these were the stats; as you can see, the memtable off-heap memory has grown to about 13 MiB:
Keyspace : stress2
Read Count: 0
Read Latency: NaN ms
Write Count: 100002
Write Latency: 0.045491610167796646 ms
Pending Flushes: 0
Table: demomodel
SSTable count: 0
Space used (live): 0 bytes
Space used (total): 0 bytes
Space used by snapshots (total): 0 bytes
Off heap memory used (total): 13.26 MiB
SSTable Compression Ratio: -1.0
Number of partitions (estimate): 1
Memtable cell count: 100002
Memtable data size: 399 bytes
Memtable off heap memory used: 13.26 MiB
Memtable switch count: 0
Local read count: 0
Local read latency: NaN ms
Local write count: 100002
Local write latency: 0.045 ms
Pending flushes: 0
Percent repaired: NaN
Bytes repaired: 0.000KiB
Bytes unrepaired: 0.000KiB
Bytes pending repair: 0.000KiB
Bloom filter false positives: 0
Bloom filter false ratio: 0.00000
Bloom filter space used: 0 bytes
Bloom filter off heap memory used: 0 bytes
Index summary off heap memory used: 0 bytes
Compression metadata off heap memory used: 0 bytes
Compacted partition minimum bytes: 0
Compacted partition maximum bytes: 0
Compacted partition mean bytes: 0
Average live cells per slice (last five minutes): 2.0
Maximum live cells per slice (last five minutes): 2
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
Dropped Mutations: 0 bytes
Failed Replication Count: null
Then I flushed the table, expecting a fair amount of data in the compacted partition, but strangely there was very little. Here are the stats:
Keyspace : stress2
Read Count: 0
Read Latency: NaN ms
Write Count: 100002
Write Latency: 0.045491610167796646 ms
Pending Flushes: 0
Table: demomodel
SSTable count: 1
Space used (live): 5.29 KiB
Space used (total): 5.29 KiB
Space used by snapshots (total): 0 bytes
Off heap memory used (total): 8 bytes
SSTable Compression Ratio: 0.5753424657534246
Number of partitions (estimate): 1
Memtable cell count: 0
Memtable data size: 0 bytes
Memtable off heap memory used: 0 bytes
Memtable switch count: 1
Local read count: 0
Local read latency: NaN ms
Local write count: 100002
Local write latency: 0.045 ms
Pending flushes: 0
Percent repaired: 0.0
Bytes repaired: 0.000KiB
Bytes unrepaired: 0.214KiB
Bytes pending repair: 0.000KiB
Bloom filter false positives: 0
Bloom filter false ratio: 0.00000
Bloom filter space used: 16 bytes
Bloom filter off heap memory used: 8 bytes
Index summary off heap memory used: 0 bytes
Compression metadata off heap memory used: 0 bytes
Compacted partition minimum bytes: 216
Compacted partition maximum bytes: 258
Compacted partition mean bytes: 258
Average live cells per slice (last five minutes): 2.0
Maximum live cells per slice (last five minutes): 2
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
Dropped Mutations: 0 bytes
Failed Replication Count: null
Now, when I query the table, there are only 3 rows, even though the local write count is above 100,000.
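For reference, the same row count can also be checked from the Java side through the repository (this snippet is purely illustrative and was not part of the test itself):

// Illustrative only: counts the rows visible in the table via the repository.
long rows = demoRepository.count();
System.out.println("rows in demomodel: " + rows); // comes back as 3 here, despite 100002 local writes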
Here is my data model for this schema:
package com.example.connectCass.Model;

import lombok.Data;
import org.springframework.data.cassandra.core.cql.PrimaryKeyType;
import org.springframework.data.cassandra.core.mapping.Column;
import org.springframework.data.cassandra.core.mapping.PrimaryKeyColumn;
import org.springframework.data.cassandra.core.mapping.Table;
import com.datastax.driver.core.LocalDate;
import java.util.Date;

@Data
@Table
public class DemoModel {

    // Composite partition key: (acctID, transactionDate)
    @PrimaryKeyColumn(name = "acctID", type = PrimaryKeyType.PARTITIONED)
    private String acctID;

    @PrimaryKeyColumn(name = "transactionDate", type = PrimaryKeyType.PARTITIONED)
    private LocalDate transactionDate;

    // Clustering column
    @PrimaryKeyColumn(name = "currentTime", type = PrimaryKeyType.CLUSTERED)
    private Date currentTime;

    @Column
    private String card;

    @Column
    private String amountCreditedFrom;

    @Column
    private double transactionAmount;

    @Column
    private double balance;

    // No-arg setters: stamp the entity with "today" / "now" at the moment they are called.
    public void setTransactionDate() {
        this.transactionDate = LocalDate.fromMillisSinceEpoch(new Date().getTime());
    }

    public void setCurrentTime() {
        this.currentTime = new Date();
    }
}
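The entity handed to save(...) comes from the POST request body; built by hand it would look roughly like this (all values below are made up, and demoService is just a hypothetical handle on the service containing the save(...) loop above):

// Hypothetical setup, only to show how a DemoModel instance is populated:
// Lombok's @Data generates the regular setters, while the two no-arg setters
// above stamp the entity with today's date and the current timestamp.
DemoModel demoModel = new DemoModel();
demoModel.setAcctID("ACC123");             // partition key, part 1
demoModel.setTransactionDate();            // partition key, part 2 (today)
demoModel.setCurrentTime();                // clustering column ("now")
demoModel.setCard("4111111111111111");
demoModel.setAmountCreditedFrom("ACC999");
demoModel.setTransactionAmount(250.0);
demoModel.setBalance(10000.0);

demoService.save(demoModel);               // triggers the 100,000-save loop shown earlier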
Can someone tell me where I am going wrong? Thanks.