Subtle feature of CONNECT BY

Hierarchical queries (CONNECT BY … START WITH …) are used to retrieve data from tree-like structures, for example when you want to find all people who are directly or indirectly subordinate to a certain person in a company hierarchy. However, if you combine them with joins and additional conditions in the WHERE clause, the effect of the query is not obvious.

A good description of hierarchical queries can be found in the Oracle documentation:

http://download.oracle.com/docs/cd/B10501_01/server.920/a96540/queries4a.htm#2053937

There are two important points in this article:

* If the WHERE predicate contains a join, Oracle applies the join predicates before doing the CONNECT BY processing.
* If the WHERE clause does not contain a join, Oracle applies all predicates other than the CONNECT BY predicates after doing the CONNECT BY processing, without affecting the other rows of the hierarchy.

What does this mean? How does it work? To explain, I will describe the error I came across before I understood how hierarchical queries really work. The query was meant to return all direct and indirect superiors of a given customer (some customers work in companies).

SELECT c.*
  FROM customer_all c
CONNECT BY PRIOR c.customer_id_high = c.customer_id
 START WITH c.customer_id = :cust_id

This worked fine: starting from the given customer, each CONNECT BY step moves to the row whose customer_id equals the previous row's customer_id_high, i.e. one level up the hierarchy. However, we had to change the query to also return the billing address of each customer. Each customer can have many addresses, but only one of them is the billing one; such an address has the CCBILL flag set to 'X'. Thus, the query was changed into this one:

SELECT level, c.*, cc.*
  FROM customer_all c,
       ccontact_all cc
 WHERE c.customer_id = cc.customer_id
   AND cc.ccbill = 'X'
CONNECT BY PRIOR c.customer_id_high = c.customer_id
START WITH c.customer_id = :cust_id

That was a mistake. The condition c.customer_id = cc.customer_id was applied before CONNECT BY, whereas cc.ccbill = 'X' was applied after CONNECT BY. Consider a customer (let's suppose his customer_id = 123456) who had two addresses, while his superior had one address and his superior's superior also had one address. The query was executed in the following way (I omit the stages irrelevant to this example):

Stage 1: Two records are found for customer 123456: one for his billing address and one for the other address. Both have LEVEL=1.

Stage 2: The CONNECT BY clause is applied to both records with LEVEL=1 from the previous stage. Thus, the superior of 123456 is added twice to the result set, giving two records with LEVEL=2.

Stage 3: The CONNECT BY clause is applied to both records with LEVEL=2 from the previous stage. Thus, the superior of the superior of 123456 is added twice to the result set, giving two records with LEVEL=3.

Stage 4: The non-billing address record of customer 123456 is removed from the result set, but the superiors' records that were added because of it are not removed, as their addresses are billing ones (they do not have any non-billing address).

As a result, the query returned 5 records, although it was expected to return 3. The records of the superior and the superior's superior were duplicated because of the second address of 123456: the record with the second address was removed, but the branch it created was preserved.
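Before stage 4, the intermediate result set looks roughly like this (a sketch; 222222 and 333333 are hypothetical customer_ids standing in for the superior and the superior's superior):

LEVEL  CUSTOMER_ID  ADDRESS  AFTER STAGE 4
1      123456       billing  kept
1      123456       other    removed (ccbill <> 'X')
2      222222       billing  kept
2      222222       billing  kept (duplicate)
3      333333       billing  kept
3      333333       billing  kept (duplicate)

Five of the six rows survive the final filter, which matches the five records the query actually returned. The solution was to check in the CONNECT BY clause whether the address in the PRIOR record is the billing one, so that the hierarchy is only expanded from billing-address rows: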

SELECT level, c.*, cc.*
  FROM customer_all c, ccontact_all cc
 WHERE c.customer_id = cc.customer_id
   AND cc.ccbill = 'X'
CONNECT BY PRIOR c.customer_id_high = c.customer_id
       AND PRIOR cc.ccbill = 'X'
 START WITH c.customer_id = :cust_id

There is a clearer solution available: join with the addresses table after executing the hierarchical query:

SELECT *
  FROM (SELECT c.*
          FROM customer_all c
        CONNECT BY PRIOR c.customer_id_high = c.customer_id
         START WITH c.customer_id = :cust_id) c,
       ccontact_all cc
 WHERE c.customer_id = cc.customer_id
   AND cc.ccbill = 'X'

There is another interesting issue. When I changed the join to ANSI style (JOIN … ON instead of a condition in the WHERE clause), the execution time increased a few hundred thousand times. It seems that Oracle 9, on which I tested this query, does not cope well with the ANSI join syntax.
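For the record, the ANSI-style variant looked roughly like this (a sketch reconstructed from the queries above, not the exact production statement; semantically it is equivalent to the corrected Oracle-style join):

SELECT level, c.*, cc.*
  FROM customer_all c
  JOIN ccontact_all cc ON c.customer_id = cc.customer_id
 WHERE cc.ccbill = 'X'
CONNECT BY PRIOR c.customer_id_high = c.customer_id
       AND PRIOR cc.ccbill = 'X'
 START WITH c.customer_id = :cust_id

This was not a surprise. Once we had to rapidly convert many views in a system: they used ANSI-style joins, and when they were referenced from queries that used Oracle-style joins, their performance dropped dramatically. We had to convert the views to Oracle-style joins.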

You May Also Like

Recently at storm-users

I've been reading through the storm-users Google Group recently. The idea was heavily inspired by Adam Kawa's post "Football zero, Apache Pig hero". Since I've encountered a lot of insightful and very interesting information, I've decided to describe some of it in this post.

  • nimbus will work in HA mode - There's a pull request open for it already... but some recent work (distributing topology files via BitTorrent) will greatly simplify the implementation. Once the BitTorrent work is done, we'll look at reworking the HA pull request. (storm's pull request)

  • pig on storm - Pig on Trident would be a cool and welcome project. Join and groupBy have very clear semantics there, as those concepts exist directly in Trident. The extensions needed to Pig are the concept of incremental, persistent state across batches (mirroring those concepts in Trident). You can read a complete proposal.

  • implementing topologies in pure python with petrel looks like this:

import storm  # provided by Petrel at runtime (emulates Storm's multilang storm.py module)

class Bolt(storm.BasicBolt):
    def initialize(self, conf, context):
        '''Executed only once, when the bolt starts.'''
        storm.log('initializing bolt')

    def process(self, tup):
        '''Executed every time a new tuple arrives.'''
        msg = tup.values[0]
        storm.log('Got tuple %s' % msg)

if __name__ == "__main__":
    Bolt().run()
  • Fliptop is happy with storm - see their presentation here

  • topology metrics in 0.9.0: The new metrics feature allows you to collect arbitrary custom metrics over fixed windows. Those metrics are exported to a metrics stream that you can consume by implementing IMetricsConsumer and configuring it with Config.java#L473. Use TopologyContext#registerMetric to register new metrics.

  • storm vs flume - some users' point of view: I use Storm and Flume and find that they are better at different things - it really depends on your use case as to which one is better suited. First and foremost, they were originally designed to do different things: Flume is a reliable service for collecting, aggregating, and moving large amounts of data from source to destination (e.g. log data from many web servers to HDFS). Storm is more for real-time computation (e.g. streaming analytics) where you analyse data in flight and don't necessarily land it anywhere. Having said that, Storm is also fault-tolerant and can write to external data stores (e.g. HBase), and you can do real-time computation in Flume (using interceptors).

That's all for this day - however, I'll keep on reading through storm-users, so watch this space for more info on storm development.


Private fields and methods are not private in Groovy

I used to code in Java before I met Groovy. Like most of you, I was attracted to Groovy by its many enhancements. So it was quite a surprise to discover that member visibility in Groovy is handled differently than in Java!

Consider this example:

class Person {
    private String name
    public String surname

    private Person() {}

    private String signature() { "${name?.substring(0, 1)}. $surname" }

    public String toString() { "I am $name $surname" }
}

How is this class interpreted in Java?

  1. Person has a private constructor that cannot be accessed from outside the class
  2. The field "name" is private and cannot be accessed from outside the class
  3. The method signature() is private and cannot be accessed from outside the class

Let's see how Groovy interprets Person:

public static void main(String[] args) {
    def person = new Person() // constructor is private - compilation error in Java
    println(person.toString())

    person.@name = 'Mike' // access the name field directly - compilation error in Java
    println(person.toString())

    person.name = 'John' // there is a setter generated by Groovy
    println(person.toString())

    person.@surname = 'Foo' // access the surname field directly
    println(person.toString())

    person.surname = 'Bar' // use the auto-generated setter
    println(person.toString())

    println(person.signature()) // call the private method - compilation error in Java
}

I was really astonished by its output:

I am null null
I am Mike null
I am John null
I am John Foo
I am John Bar
J. Bar

As you can see, Groovy does not respect visibility modifiers at all; it treats them as if they did not exist. The code compiles and executes fine, contrary to Java, where this code contains several errors, pointed out in the comments.

I've searched a bit on this topic, and it seems this behaviour has been known since version 1.1; there is a bug report on it: http://jira.codehaus.org/browse/GROOVY-1875. It has not been resolved even in the Groovy 2 release. As Tim Yates mentioned in this StackOverflow question: "It's not clear if it is a bug or by design". Groovy treats visibility keywords as a hint for the programmer.

I need to keep this lesson in mind the next time I want to make a field or method private!