Weird Oracle

“It’s not a bug, it’s a feature.” PL/SQL, like other procedural extensions to SQL, can execute dynamic statements via EXECUTE IMMEDIATE. But not everyone knows that it works differently for SQL statements and for PL/SQL blocks. The difference lies in parameter passing.

Consider a simple example in which we need to add a new row to a table using a dynamic statement:

BEGIN
  p_date := to_char(SYSDATE);
  EXECUTE IMMEDIATE 'INSERT INTO test (created, modified, id, value)
      VALUES ('''||p_date||''', '''||p_date||''', '||p_id||', '||p_value||')';
END;
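
For context, the snippets in this post assume roughly the following table and variables. This is a minimal sketch; the column and variable types below are my guesses, as the article does not give them:

-- Assumed table behind the examples (types are guesses)
CREATE TABLE test (
  created  VARCHAR2(30),
  modified VARCHAR2(30),
  id       NUMBER,
  value    NUMBER
);
-- p_date (VARCHAR2), p_id (NUMBER) and p_value (NUMBER) are assumed to be
-- parameters or local variables of an enclosing procedure.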

It works, but it has a serious flaw: a new statement is parsed for every set of parameter values and for every call. We should instead use placeholders in the statement and pass the values through the USING clause. To my great surprise, even experienced Oracle programmers may have problems doing it right:

BEGIN
  p_date := to_char(SYSDATE);
  EXECUTE IMMEDIATE 'INSERT INTO test (created, modified, id, value)
      VALUES (:p_date, :p_date, :p_id, :p_value)'
    USING p_date, p_id, p_value;
END;

Looks good? But it does not work: at runtime the insert fails with ORA-01008 ("not all variables bound"). According to the documentation, when executing a SQL statement Oracle does not look at placeholder names at all, only at the number and order of placeholder occurrences – every occurrence needs exactly one argument in the USING list. The correct way to do it is:

BEGIN
  p_date := to_char(SYSDATE);
  EXECUTE IMMEDIATE 'INSERT INTO test (created, modified, id, value)
      VALUES (:x, :x, :x, :x)'
    USING p_date, p_date, p_id, p_value;
END;

Notice the repeated p_date in the USING clause. Repeating the placeholder name is intentional as well – I think it helps signal that one needs to be cautious when modifying this piece of code. Now, to make things even more confusing, assume that we add a procedure which inserts that row, but we still need to call it dynamically. This time Oracle behaves differently: in a PL/SQL block it does look at placeholder names and expects exactly one value per distinct placeholder name:

BEGIN
  p_date := to_char(SYSDATE);
  EXECUTE IMMEDIATE 'BEGIN insert_into_test (:p_date, :p_date, :p_id, :p_value); END;'
    USING p_date, p_id, p_value;
END;
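
For reference, here is a minimal sketch of what such a procedure could look like – the signature is an assumption, inferred only from the way the block above calls it:

-- Hypothetical procedure assumed by the dynamic block above;
-- parameter names and types are guesses, not from the original article.
CREATE OR REPLACE PROCEDURE insert_into_test (
  p_created  IN VARCHAR2,
  p_modified IN VARCHAR2,
  p_id       IN NUMBER,
  p_value    IN NUMBER
) AS
BEGIN
  INSERT INTO test (created, modified, id, value)
  VALUES (p_created, p_modified, p_id, p_value);
END insert_into_test;
/

Because :p_date appears twice in the dynamic block but is a single name, Oracle binds both occurrences to the same USING argument, so only three values are needed.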

Now the total weirdness: the USING clause has no way of naming the placeholder each argument belongs to – here, too, only the order counts (each distinct name, in the order of its first occurrence, takes the next value from the list). Reading such a piece of code and trying to decipher which parameter gets which value may be painful:

BEGIN
  p_date := to_char(SYSDATE);
  EXECUTE IMMEDIATE 'BEGIN some_proc (:p_date, :p_user, :p_date, :p_id, :p_value, :p_user); END;'
    USING ...???...;
END;

Now imagine that the dynamic block consists of several calls with some common arguments, and that the block itself is generated programmatically… I bet one will quickly resort to unique placeholder names (like :p1, :p2, :p3, …) and pass each value as many times as it appears – a sketch of that workaround follows – or give up on parameter passing entirely and fall back on string concatenation.
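
A minimal sketch of the unique-placeholder workaround, reusing the insert_into_test procedure assumed earlier (again, the names are assumptions, not from the original):

BEGIN
  p_date := to_char(SYSDATE);
  -- every placeholder name is unique, so binding is effectively positional
  -- and the repeated value (p_date) simply has to be passed twice
  EXECUTE IMMEDIATE 'BEGIN insert_into_test (:p1, :p2, :p3, :p4); END;'
    USING p_date, p_date, p_id, p_value;
END;

And if you are still reading this – a short riddle: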

EXECUTE IMMEDIATE 'call some_proc(:a, :a, :b, :c);' USING ...;

How many values should be passed here?
