Easy fraud detection in Nigerian bank with TouK Nussknacker

Problem

We have a large MySQL database with payment transactions from multiple banks, aggregated for Nigeria's biggest operator. Each day the customers create about 300 thousand transactions. There are banking transactions and top-ups. One of our fraud detection processes for top-ups looks like this:

  1. Find top-up (called Airtime in our domain) transactions
  2. with an amount above 3000
  3. that ended with success
  4. If there was more than one such transaction in the last 24 hours, then we mark the customer as a fraud suspect
  5. If the customer was marked as a fraud suspect in the previous 24 hours (i.e. 24-48 hours ago), then we should block them

Our previous solution used cron to run a big SQL query once a day and create a report from the transactions it found. We wanted to make this fraud detection process real-time and learn about suspicious situations as fast as possible. That is why we decided to use TouK Nussknacker.

TouK Nussknacker

Nussknacker logo

Nussknacker is a GUI for Apache Flink, developed by TouK to help its clients manage various event stream processes. Nussknacker was first deployed in 2016 at one of the Polish mobile network operators. The GUI helps business analysts, marketers and other trained users create and manage processes with minimal support from developers.

Architecture

The architecture of our solution was based on the demo available in the Nussknacker GitHub project. It contains:

  • UI – allows us to create, test and deploy Flink jobs and also read their metrics and logs
  • job manager – dispatches Flink jobs to task managers
  • task manager – runs Flink jobs
  • zookeeper – used for discovery of task managers and for keeping state information
  • logstash – harvests the logs from Flink tasks, parses them and sends them to elasticsearch
  • elasticsearch – keeps logs in an index for further analysis
  • kibana – allows searching the harvested logs
  • influxdb – keeps job metrics
  • grafana – allows searching and viewing job metrics
  • mysql – the source of transactions

Prepare components

Nussknacker requires some initial effort to create a model of the problem and components which know how to convert real data into the model, aggregate it and send it to an external destination.

The model

If we want to create a Nussknacker process, we have to create a model first. The model is a description of the data and contains all the fields that might be used in processing or displayed in logs.

In our case we have created a Transaction case class in Scala:

case class Transaction(
            id: Long,
            amount: BigDecimal,
            customerMsisdn: String,
            date: LocalDateTime,
            externalCustomerId: String,
            uid: String,
            paymentMethodId: String,
            statusId: Long,
            status: String,
            failureType: String,
            failureDetails: String,
            drawDownBillerId: Long,
            paymentProvider: String,
            paymentProviderId: Long,
            product: String) extends WithFields {

  val timestamp: Long = date.toInstant(ZoneOffset.UTC).toEpochMilli

  // ...
}

It is important to create a timestamp field representing when the event occurred. Thanks to that, aggregates are calculated in a valid and deterministic way, which is very important for historical or delayed data.

Build input

The data is kept in the MySQL database, therefore Flink needs to know how to fetch it, convert it to the model and emit it for processing.

We have created a JdbcTransactionSource class:

case class JdbcTransactionSource(limit: Long, intervalInSeconds: Long, jdbcConfig: JdbcConfig)
  extends RichSourceFunction[Transaction]
    with Checkpointed[java.lang.Long]
{
    // ...
}

The amount of data to fetch is limited by the source, because we do not want to read all the database records on the first start or after a long period of inactivity. What is more, a time interval is set between data fetches. The source stores checkpoints in HDFS (or another configured storage), with metadata saved in zookeeper, so it knows which records have already been processed.

The whole job of the source class can be summarized by these lines:

val rs = statement.executeQuery()
while (rs.next()) {
  // map the database row to our model and emit it with its event timestamp
  val transaction = fromResultSetRow(rs)
  sourceContext.collectWithTimestamp(transaction, transaction.timestamp)
  // remember the last processed id so the next fetch starts after it
  lastTransactionId = transaction.id
}
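
The Checkpointed interface requires snapshot and restore methods for the source state. A minimal sketch of how they could look, assuming lastTransactionId is the only state we need to persist (the real implementation may track more):

// sketch only: persist the id of the last processed transaction on checkpoint
override def snapshotState(checkpointId: Long, checkpointTimestamp: Long): java.lang.Long =
  lastTransactionId

// sketch only: resume fetching after the restored id
override def restoreState(state: java.lang.Long): Unit =
  lastTransactionId = state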

Build sink

A sink is a component which knows where to send data after processing. In our case we just want to log a message with the transaction. Generated logs will be sent to elasticsearch by logstash.

Sink code is quite simple:

class LoggingSink(messagePrefix: String, loggingLevel: String) extends FlinkSink with LazyLogging with Serializable {

  override def toFlinkFunction: SinkFunction[Any] = new SinkFunction[Any] with Serializable {
    override def invoke(in: Any): Unit = {
      val message = s"$messagePrefix - $in"
      loggingLevel match {
        case "error" => logger.error(message)
        case "warn" => logger.warn(message)
        case _ => logger.info(message)
      }
    }
  }

  override def testDataOutput: Option[(Any) => String] = Some(in => s"[$loggingLevel] $messagePrefix $in")
}
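
The LoggingSinkFactory registered later in the FraudDetectorConfigCreator simply exposes these parameters to the UI. A minimal sketch, assuming the parameter names and a straightforward SinkFactory implementation (the real factory may differ):

class LoggingSinkFactory extends SinkFactory {

  // sketch only: build the sink from parameters entered in the Nussknacker UI
  @MethodToInvoke
  def create(@ParamName("messagePrefix") messagePrefix: String,
             @ParamName("loggingLevel") loggingLevel: String): FlinkSink =
    new LoggingSink(messagePrefix, loggingLevel)
}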


Create aggregate function

One of the more difficult tasks is to create our custom aggregate function. In our case it is a counter function, which counts occurrences of transactions that meet our conditions. Each aggregate is created for a certain key. In our case the key is customerMsisdn, since it points directly at a single customer. For each customer we want to know how many matching transactions were made in a time window.

We have created an EventsCounter class which parses the input parameters from the process definition, keys the stream by the configured key and counts incoming events with a CounterFunction:

class EventsCounter extends CustomStreamTransformer {

  @MethodToInvoke(returnType = classOf[EventCount])
  def execute(@ParamName("key") key: LazyInterpreter[String],
              @ParamName("length") length: String) = {
    FlinkCustomStreamTransformation((start: DataStream[InterpretationResult]) => {
      val lengthInMillis = Duration(length).toMillis
      start.keyBy(key.syncInterpretationFunction)
        .transform("eventsCounter", new CounterFunction(lengthInMillis))
    })
  }
}


  • @MethodToInvoke tells Nussknacker that the annotated method should be invoked when the process reaches the component
  • @ParamName tells Nussknacker that the component is configurable via parameters provided in the UI
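
The length parameter is parsed with scala.concurrent.duration.Duration, so window lengths can be entered in the UI as plain strings. For example:

import scala.concurrent.duration.Duration

// "24 hours" becomes the window length in milliseconds
Duration("24 hours").toMillis  // 86400000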

CounterFunction counts occurrences in the time window and emits them as EventCount objects, which also know when the first occurrence of the event in this time window happened:

class CounterFunction(lengthInMillis: Long) extends TimestampedEvictableState[Long] {

  override def stateDescriptor =
    new ValueStateDescriptor[MultiMap[Long, Long]]("state", classOf[MultiMap[Long, Long]])

  override def processElement(element: StreamRecord[InterpretationResult]): Unit = {
    // drop state entries older than the window and schedule eviction for the current key
    setEvictionTimeForCurrentKey(element.getTimestamp + lengthInMillis)
    state.update(filterState(element.getTimestamp, lengthInMillis))

    // record the current occurrence under its timestamp
    val ir = element.getValue
    val eventCount = stateValue.add(element.getTimestamp, 1)
    state.update(eventCount)

    // emit the total count in the window together with the oldest timestamp seen
    val eventsCount = eventCount.map.values.flatten.sum
    val smallestTimestamp = eventCount.map.keys.min
    output.collect(new StreamRecord[ValueWithContext[Any]](
      ValueWithContext(EventCount(count = eventsCount, smallestTimestamp = smallestTimestamp), ir.finalContext), element.getTimestamp)
    )
  }
}

case class EventCount(count: Long, smallestTimestamp: Long)

Build Nussknacker process

We have created the components needed for our domain problem. We packaged them in a jar library together with a class describing the process building blocks:

class FraudDetectorConfigCreator extends ProcessConfigCreator {
  val transactionCategory = "TransactionStream"

  override def sourceFactories(config: Config): Map[String, WithCategories[SourceFactory[_]]] = {
    import net.ceedubs.ficus.Ficus._
    import net.ceedubs.ficus.readers.ArbitraryTypeReader._
    Map(
      "Transactions" -> WithCategories(new TransactionSourceFactory(config.as[JdbcConfig]("jdbc")), transactionCategory)
    )
  }

  override def customStreamTransformers(config: Config): Map[String, WithCategories[CustomStreamTransformer]] = Map(
    "eventsCounter" -> WithCategories(new EventsCounter, transactionCategory)
  )

  override def sinkFactories(config: Config): Map[String, WithCategories[SinkFactory]] = Map(
    "LoggingFactory" -> WithCategories(new LoggingSinkFactory, transactionCategory)
  )

  // other factories that we do not need in our problem
}

Components are made available via factories, and each factory is registered in one or more categories (always TransactionStream in our case).

Configuration

We provided some configuration for the Nussknacker UI:

fraudDemo {
  timeout: 10s
  checkpointInterval: 20s
  restartInterval: "10s"
  processConfigCreatorClass: "pl.touk.zephyr.frauddetector.FraudDetectorConfigCreator"
  jdbc {
    driver: "com.mysql.jdbc.Driver"
    url: "jdbc:mysql://mysql-zephyr:3306/backofficedev"
    userName: "backofficedev"
    password: "..."
  }     
  defaultValues {
    values {
    }
  }

}

This configuration will be forwarded to Flink along with our job.
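
The jdbc section is bound by Ficus (config.as[JdbcConfig]("jdbc") in the config creator) to a plain case class that is passed to the source factory. A minimal sketch, assuming the field names simply mirror the configuration keys:

// sketch only: configuration holder read from the "jdbc" section
case class JdbcConfig(
  driver: String,
  url: String,
  userName: String,
  password: String)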

Process definition

After Nussknacker starts, the UI is available at http://[host]:8080/ and we can start defining processes. We created a new process with the category TransactionStream. Using our custom components as well as generic ones (e.g. filter or split), we designed the fraud detection process.

process

We have applied filters to select only the relevant transactions for this process. In the filters we use Spring SpEL expressions like #input.amount > 3000. Since we defined the model, Nussknacker can suggest fields and validate the expressions. We want to log such transactions at level INFO and continue processing (this is why we used the split component); illustrative filter expressions are shown below.
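
For illustration, the three conditions from the problem statement could be expressed with SpEL filters roughly like these (the exact field values are assumptions based on our model):

#input.product == 'Airtime'
#input.amount > 3000
#input.status == 'SUCCESS'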

Next we count transactions in a 24-hour time window and, for each customer, find those that occurred more than once. We log them again (this time at level WARN) and continue fraud detection.

The last step is to count suspicious transactions that occurred in the last 48 hours and filter out those that occurred only in the last 24 hours. Finally, we log such transactions at level ERROR; a sketch of the expressions involved is shown below.
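
To give a flavour of how the counter output is used, the filter expressions after the two eventsCounter nodes could look roughly like this (we assume the output variables are named #suspect for the 24-hour window and #fraudSuspect for the 48-hour window; the actual names are configured on the nodes):

#suspect.count > 1
#fraudSuspect.smallestTimestamp < #input.timestamp - 24 * 60 * 60 * 1000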

After designing and saving, the process is ready for testing.

Testing

The JdbcTransactionSource provides a generateTestData method, which connects to the database, fetches the requested number of transactions and saves them to a file. In Nussknacker this can be done by clicking the generate button in the Test section. Next, we can execute the process locally with data from the file by clicking the from file button. The number of transactions processed in each step is shown on the nodes:

process_after_test

We can also click on a node to check which transactions reached it.

Deploy and results

After testing the process, it can be deployed – sent to the job manager and started. To deploy the process we just need to click the Deploy button.

In the Metrics tab we can open the dashboard and watch metrics from the components.

metrics

In the Search tab we can query the harvested logs and see how many frauds were found.

search

What next?

We have created just one process that allows us to find fraudulent transactions in our system. Since we have already created the model, data source and sink, and set up the architecture, we are ready to quickly design and deploy new processes – e.g. detecting a large number of small transactions from the same customer (as the next fraud detection process) or finding failed transactions with specific details (to spot recurring problems of our customers).
