Logging requests with an EssentialFilter in Play Framework

I followed the example shown here: https://www.playframework.com/documentation/2.7.x/ScalaHttpFilters#More-powerful-filters

I am trying to add logging for all HTTP calls, but I cannot get the message for the incoming request to be logged.

Here is my filter:

import java.util.UUID

import akka.NotUsed
import akka.stream.Materializer
import akka.stream.scaladsl.Flow
import akka.util.ByteString
import org.apache.logging.log4j.LogManager
import play.api.mvc.{EssentialAction, EssentialFilter, RequestHeader}

import scala.concurrent.ExecutionContext
import scala.concurrent.duration.Duration

class LoggingFilter(implicit val mat: Materializer, ec: ExecutionContext) extends EssentialFilter {
  private val log = LogManager.getLogger

  private val CorrelationIdHeader = "X-EW-CorrelationId"

  def apply(nextFilter: EssentialAction): EssentialAction = (reqHeader: RequestHeader) => {
    val startTime     = System.nanoTime()
    val correlationId = UUID.randomUUID().toString

    log.debug("request header received")

    val headerWithId = reqHeader.withHeaders(reqHeader.headers.add(CorrelationIdHeader -> correlationId))
    val accumulator  = nextFilter(headerWithId)
    val loggingFlow: Flow[ByteString, ByteString, NotUsed] = Flow[ByteString]
      .wireTap { bodyBytes =>
        // this is not logged!!!
        log.info(s"HTTP ${reqHeader.method} ${reqHeader.uri} Id: $correlationId Body: ${bodyBytes.utf8String}")
      }
      .log("myflow")

    accumulator.through(loggingFlow).map { result =>
      val endTime      = System.nanoTime()
      val responseTime = Duration.fromNanos(endTime - startTime)
      val status       = result.header.status
      val msg          = s"HTTP Id: $correlationId StatusCode: $status ResponseTime: ${responseTime.toMillis}ms"

      if (status >= 200 && status < 400) log.info(msg)
      else log.error(msg)

      result.withHeaders("Request-Time" -> responseTime.toMillis.toString)
    }
  }
}

This is the test I use to check the logs:

import org.scalatestplus.play.PlaySpec
import org.scalatestplus.play.components.OneAppPerSuiteWithComponents
import play.api.mvc.Result
import play.api.test.{FakeRequest, Helpers}
import play.api.{BuiltInComponents, BuiltInComponentsFromContext}
import play.filters.HttpFiltersComponents

import scala.concurrent.Future

class LoggingSpec extends PlaySpec with OneAppPerSuiteWithComponents {

  override def components: BuiltInComponents = new BuiltInComponentsFromContext(context) with HttpFiltersComponents {
    import play.api.mvc.Results
    import play.api.routing.Router
    import play.api.routing.sird._

    lazy val router: Router = Router.from({
      case GET(p"/") =>
        defaultActionBuilder {
          Results.Ok("test")
        }
    })

    lazy val loggingFilter                            = new LoggingFilter()
    override lazy val httpFilters: Seq[LoggingFilter] = Seq(loggingFilter)
  }

  "The OneAppPerSuiteWithComponents trait" must {
    "provide an Application" in {
      import play.api.test.Helpers.{route, GET}
      val Some(result: Future[Result]) = route(app, FakeRequest(GET, "/"))
      Helpers.contentAsString(result) must be("test")
    }
  }
}
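One detail that might matter (this is my assumption, I have not checked it against the Play sources): FakeRequest(GET, "/") is built without any body, so as far as I can tell no ByteString chunks should ever travel through the body stream that the filter taps. A minimal standalone check of that assumption (the object name is just for the sketch):

import play.api.mvc.AnyContentAsEmpty
import play.api.test.FakeRequest
import play.api.test.Helpers.GET

object FakeRequestBodyCheck extends App {
  // The same request the test sends: it carries no body at all
  val req: FakeRequest[AnyContentAsEmpty.type] = FakeRequest(GET, "/")
  println(req.body) // AnyContentAsEmpty
}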

And these are the logs:

2019-10-17 19:48:32.208 [DEBUG] HttpConfiguration - Generated dev mode secret 7db720124da7d3c8853c455100670e31 for app at file:/home/david/work/eventworld/eventworld-platform/gateway/target/scala-2.12/classes/application.conf
2019-10-17 19:48:32.420 [INFO ] Slf4jLogger - Slf4jLogger started
2019-10-17 19:48:32.428 [DEBUG] EventStream - logger log1-Slf4jLogger started
2019-10-17 19:48:32.429 [DEBUG] EventStream - Default Loggers started
2019-10-17 19:48:32.435 [INFO ] Remoting - Starting remoting
2019-10-17 19:48:32.522 [INFO ] Remoting - Remoting started; listening on addresses :[akka.tcp://application@127.0.1.1:2552]
2019-10-17 19:48:32.523 [INFO ] Remoting - Remoting now listens on addresses: [akka.tcp://application@127.0.1.1:2552]
2019-10-17 19:48:32.535 [INFO ] Cluster(akka://application) - Cluster Node [akka.tcp://application@127.0.1.1:2552] - Starting up, Akka version [2.5.25] ...
2019-10-17 19:48:32.558 [INFO ] Cluster(akka://application) - Cluster Node [akka.tcp://application@127.0.1.1:2552] - Registered cluster JMX MBean [akka:type=Cluster]
2019-10-17 19:48:32.559 [INFO ] Cluster(akka://application) - Cluster Node [akka.tcp://application@127.0.1.1:2552] - Started up successfully
2019-10-17 19:48:32.574 [INFO ] Cluster(akka://application) - Cluster Node [akka.tcp://application@127.0.1.1:2552] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/cluster-usage.html#joining-to-seed-nodes
2019-10-17 19:48:32.686 [DEBUG] ActorSystemProvider - Starting application default Akka system: application


2019-10-17 19:48:32.872 [DEBUG] Materializer - [myflow] Downstream finished.
2019-10-17 19:48:32.875 [DEBUG] CoordinatedShutdown - Performing phase [before-service-unbind] with [0] tasks
2019-10-17 19:48:32.875 [DEBUG] CoordinatedShutdown - Performing phase [service-unbind] with [0] tasks
2019-10-17 19:48:32.875 [DEBUG] CoordinatedShutdown - Performing phase [service-requests-done] with [0] tasks
2019-10-17 19:48:32.876 [DEBUG] CoordinatedShutdown - Performing phase [service-stop] with [1] tasks: [application-lifecycle-stophook]
2019-10-17 19:48:32.881 [DEBUG] Files$DefaultTemporaryFileCreator - deletePath: deleting = /tmp/playtemp2067256977816634840

2019-10-17 19:48:32.808 [DEBUG] LoggingFilter - request header received
2019-10-17 19:48:32.863 [INFO ] LoggingFilter - HTTP Id: 5239632b-9be7-4473-a641-6879072fc01c StatusCode: 200 ResponseTime: 55ms

2019-10-17 19:48:32.885 [DEBUG] CoordinatedShutdown - Performing phase [before-cluster-shutdown] with [0] tasks
2019-10-17 19:48:32.885 [DEBUG] CoordinatedShutdown - Performing phase [cluster-sharding-shutdown-region] with [0] tasks
2019-10-17 19:48:32.885 [DEBUG] CoordinatedShutdown - Performing phase [cluster-leave] with [1] tasks: [leave]
2019-10-17 19:48:32.889 [DEBUG] CoordinatedShutdown - Performing phase [cluster-exiting] with [1] tasks: [wait-exiting]
2019-10-17 19:48:32.890 [DEBUG] CoordinatedShutdown - Performing phase [cluster-exiting-done] with [1] tasks: [exiting-completed]
2019-10-17 19:48:32.890 [INFO ] Cluster(akka://application) - Cluster Node [akka.tcp://application@127.0.1.1:2552] - Exiting completed
2019-10-17 19:48:32.891 [INFO ] Cluster(akka://application) - Cluster Node [akka.tcp://application@127.0.1.1:2552] - Shutting down...
2019-10-17 19:48:32.892 [INFO ] Cluster(akka://application) - Cluster Node [akka.tcp://application@127.0.1.1:2552] - Successfully shut down
2019-10-17 19:48:32.892 [DEBUG] CoordinatedShutdown - Performing phase [cluster-shutdown] with [1] tasks: [wait-shutdown]
2019-10-17 19:48:32.896 [DEBUG] CoordinatedShutdown - Performing phase [before-actor-system-terminate] with [0] tasks
2019-10-17 19:48:32.896 [DEBUG] CoordinatedShutdown - Performing phase [actor-system-terminate] with [1] tasks: [terminate-system]
2019-10-17 19:48:32.898 [INFO ] RemoteActorRefProvider$RemotingTerminator - Shutting down remote daemon.
2019-10-17 19:48:32.899 [INFO ] RemoteActorRefProvider$RemotingTerminator - Remote daemon shut down; proceeding with flushing remote transports.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jboss.netty.util.internal.ByteBufferUtil (file:/home/david/.ivy2/cache/io.netty/netty/bundles/netty-3.10.6.Final.jar) to method java.nio.DirectByteBuffer.cleaner()
WARNING: Please consider reporting this to the maintainers of org.jboss.netty.util.internal.ByteBufferUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2019-10-17 19:48:32.912 [INFO ] Remoting - Remoting shut down
2019-10-17 19:48:32.912 [INFO ] RemoteActorRefProvider$RemotingTerminator - Remoting shut down.
2019-10-17 19:48:32.913 [DEBUG] EventStream - shutting down: StandardOutLogger

As you can see in the logs, the response is logged, but the request is not.

But there is something strange: the line 2019-10-17 19:48:32.872 [DEBUG] Materializer - [myflow] Downstream finished. appears early in the logs, which most likely points at the problem, but I do not understand why it happens or what is going on here.
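My reading of wireTap (sketched below outside of Play, and possibly wrong) is that its callback only runs for elements that actually flow through the stream. If the body of a plain GET never emits a single ByteString chunk, the tap would never fire, and the only thing the materializer would report for "myflow" is the stream finishing, which seems to match the log line above. A standalone sketch of that behaviour (names like WireTapSketch are mine, not from the project):

import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.stream.{ActorMaterializer, Materializer}
import akka.util.ByteString

import scala.concurrent.Await
import scala.concurrent.duration._

object WireTapSketch extends App {
  implicit val system: ActorSystem        = ActorSystem("wiretap-sketch")
  implicit val materializer: Materializer = ActorMaterializer()

  // wireTap invokes its callback once per element passing through the stream
  val tap = Flow[ByteString].wireTap(bytes => println(s"tapped: ${bytes.utf8String}"))

  // A body with one chunk: the tap fires
  Await.ready(Source.single(ByteString("some body")).via(tap).runWith(Sink.ignore), 3.seconds)

  // An empty body: no elements are emitted, so the tap never fires and
  // only the stream's completion is observed
  Await.ready(Source.empty[ByteString].via(tap).runWith(Sink.ignore), 3.seconds)

  system.terminate()
}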

Why is the request not being logged?
